The document presents a novel technique called particle swarm optimization-based Lambda-Tau (PSOBLT) for analyzing the stochastic behavior of complex repairable industrial systems using uncertain data. PSOBLT combines Lambda-Tau methodology and particle swarm optimization to model system interactions using Petri nets and optimize the membership functions of reliability indices like failure rate and repair time. The technique reduces uncertainty in behavior analysis results compared to existing methods. The document demonstrates PSOBLT on a paper mill feeding unit to analyze system performance and help managers improve profit through maintenance strategies.
A robust algorithm based on a failure sensitive matrix for fault diagnosis of... — IJMER
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum for scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, Assessment, and many more.
This document discusses an unsupervised feature selection method using swarm intelligence and consensus clustering to improve automatic fault detection and diagnosis in HVAC systems. The proposed method selects important features from original HVAC sensor measurements based on relative entropy between low and high frequency features. When applied to fault data from ASHRAE Project 1312-RP, the selected features achieved the least redundancy compared to other selection methods. Two time-series classification algorithms (NARX-TDNN and HMM) using the selected features achieved high weighted average sensitivity and specificity (over 96% and 86% respectively) for fault detection and diagnosis. The unsupervised feature selection method can potentially improve fault detection performance when applied to other model-based systems.
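As a rough illustration of the scoring idea above — ranking sensor channels by the relative entropy between their low- and high-frequency components — here is a minimal sketch. The moving-average frequency split, the histogram binning, and the `feature_score` helper are illustrative assumptions, not the paper's actual algorithm:

```python
import math
from typing import List

def moving_average(x: List[float], w: int = 5) -> List[float]:
    # Crude low-pass filter: centred w-point moving average.
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - w // 2): i + w // 2 + 1]
        out.append(sum(seg) / len(seg))
    return out

def histogram(x: List[float], bins: int = 10, lo: float = 0.0, hi: float = 1.0) -> List[float]:
    # Normalised histogram with a tiny floor so no bin is exactly zero.
    counts = [1e-9] * bins
    for v in x:
        idx = min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
        counts[idx] += 1.0
    total = sum(counts)
    return [c / total for c in counts]

def kl_divergence(p: List[float], q: List[float]) -> float:
    # Relative entropy D(p || q) between two discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def feature_score(signal: List[float]) -> float:
    # Score one sensor channel by the relative entropy between the
    # distributions of its low- and high-frequency components.
    low = moving_average(signal)
    high = [s - l for s, l in zip(signal, low)]
    lo, hi = min(low + high), max(low + high)
    return kl_divergence(histogram(low, lo=lo, hi=hi),
                         histogram(high, lo=lo, hi=hi))

# Channels would then be ranked by score, highest first.
print(round(feature_score([math.sin(i / 6) for i in range(120)]), 3))
```

A smooth signal scores high because its low-frequency distribution differs sharply from its (near-zero) high-frequency residual; a pure-noise channel scores closer to zero.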
Maintenance Policy and Its Impact on the Performability Evaluation of EFT Sys... — IJCSEA Journal
In Electronic Funds Transfer (EFT) systems, faults can severely degrade performance, so modelling the performance of an EFT system without considering dependability aspects can produce inaccurate results. This paper presents a stochastic model for evaluating the performance of the processing and storage infrastructures of an EFT system. It also presents a model for evaluating the effects of the proposed preventive maintenance policy and of different service level agreements (SLAs) on the dependability of the EFT system infrastructure. The paper then combines both models (dependability and performance) to evaluate the impact of dependability issues on the performance of the EFT system. Finally, case studies on EFT system infrastructures demonstrate the applicability of the adopted approach; their results are presented, stressing aspects of dependability and performance that are important for EFT system planning.
International Journal of Engineering Research and Applications (IJERA) is an open-access online peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology and Science, Power Electronics, Electronics and Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing and Low-Power VLSI Design, etc.
Our Industrial Modeling Service (IMS) involves several important (but rarely implemented) methods to significantly improve and advance your existing models and data. Since good decision-making requires good models and data, IMS is ideally suited to support this continuous-improvement endeavour. IMS is specifically designed either to co-exist with your existing design, planning, scheduling, and similar applications, or to let those same models and data be migrated seamlessly into our Industrial Modeling and Programming Language (IMPL) to create new value-added applications. The following techniques form the basis of our IMS offering.
The document compares four accident analysis models - Events and Causal Factors (ECF), Human Factors Analysis and Classification System (HFACS), System-Theoretic Accident Model and Processes (STAMP), and Rasmussen's AcciMaps - in their analysis of a medication dosing error case study involving a computerized physician order entry (CPOE) system. It finds that while the models identify common causes, such as human-computer interaction issues, AcciMaps and STAMP provide the deepest analysis by examining contributing factors across multiple levels of the sociotechnical system, but that the reliability of AcciMap analysis needs improvement for healthcare applications.
Synthetic Data Generation for Statistical Testing — Lionel Briand
1) The document describes an approach for automatically generating synthetic test data that is both logically valid and statistically representative of real data for testing data-centric systems.
2) The approach takes as input a data schema, statistical characteristics of the data elements, and data validity constraints. It then generates an initial valid data sample before improving representativeness through "corrective constraints".
3) An evaluation on generating test data for a tax management system found the approach could produce samples of up to 1000 instances in under 10 hours, and that the generated data was both valid and statistically representative, outperforming the state-of-the-art.
Prioritizing Test Cases for Regression Testing: A Model-Based Approach — IJTET Journal
The document summarizes a model-based approach to prioritizing regression test cases. It involves generating test cases from UML models, prioritizing them based on the number of states and transitions covered, and clustering them by severity using a dendrogram approach. This helps decrease the time and cost of regression testing by focusing testing efforts on the most important and affected areas first. The proposed approach constructs models from requirements, identifies states, prioritizes flows, generates test cases, and prioritizes the test cases based on severity to improve regression testing efficiency.
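The coverage-based prioritization step described above can be sketched in a few lines, assuming test cases are already annotated with the model states and transitions they exercise. The `TestCase` structure and the example data here are hypothetical, not from the paper:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class TestCase:
    name: str
    states: Set[str] = field(default_factory=set)       # model states exercised
    transitions: Set[str] = field(default_factory=set)  # model transitions exercised

def prioritize(tests: List[TestCase]) -> List[TestCase]:
    # Rank by total model coverage (states + transitions), highest first.
    return sorted(tests, key=lambda t: len(t.states) + len(t.transitions), reverse=True)

tests = [
    TestCase("t1", {"Idle", "Auth"}, {"login"}),
    TestCase("t2", {"Idle", "Auth", "Cart", "Pay"}, {"login", "add", "checkout"}),
    TestCase("t3", {"Idle"}, set()),
]
print([t.name for t in prioritize(tests)])  # → ['t2', 't1', 't3']
```

The severity-based clustering mentioned in the summary would then be applied on top of this coverage ordering.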
The document discusses methods for real-time diagnostics of technological processes and field equipment. It proposes a combined method using moving PCA for early detection of abnormal situations, process decomposition for fault localization, and fuzzy production rules for identification. For detection, moving PCA constructs new models over time to accommodate process changes. Identification compares the current situation vector to vectors of possible abnormal situations in diagnostic models. The method was tested on diagnosing a high-pressure polyethylene polymerization process.
Genetic fuzzy process metric measurement system for an operating system — ijcseit
The operating system (OS) is the most essential software of a computer system; without it, the computer is useless. It is the frontier for assessing relevant computer resources, and its performance greatly affects how well users achieve their objectives across the system. The related literature has tried various methods and techniques to measure the process-metric performance of operating systems, but none has incorporated genetic algorithms and fuzzy logic, which is a novel approach. Extending the work of Michalis, this research focuses on measuring the process-metric performance of an operating system using a set of operating-system criteria, fusing fuzzy logic to handle imprecision and a genetic algorithm for process optimization.
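A small sketch of the fuzzy-logic ingredient: triangular membership functions grading an OS metric such as CPU utilisation into linguistic classes. The breakpoints and the `tri`/`grade` helpers are illustrative assumptions, not taken from the paper:

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    # Triangular fuzzy membership: 0 outside [a, c], peak of 1 at b.
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets over CPU utilisation in percent; breakpoints are illustrative.
LOW, MEDIUM, HIGH = (0, 0, 50), (25, 50, 75), (50, 100, 100)

def grade(util: float) -> dict:
    # Map a crisp utilisation reading to membership grades per class.
    return {"low": tri(util, *LOW),
            "medium": tri(util, *MEDIUM),
            "high": tri(util, *HIGH)}

print(grade(60))  # → {'low': 0.0, 'medium': 0.6, 'high': 0.2}
```

A genetic algorithm would then tune breakpoints like these against observed process metrics, which is the combination the abstract describes.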
Bio-Inspired Modelling of Software Verification by Modified Moran Processes — IJCSEA Journal
This paper presents a new approach for the control and prediction of verification activities for large safety-relevant software systems. The model is applied at a macroscopic system level and is based on so-called Moran processes, which originate from mathematical biology and allow for the description of phenomena such as genetic drift. Besides the theoretical foundations of this novel approach, its application to a real-world example from the medical engineering domain is discussed.
This document discusses the theory of software testing. It covers several key topics:
1) It identifies five common problems in software testing, such as the limitations of testing teams and issues with manual testing.
2) It describes different testing processes like verification, validation, white-box testing and black-box testing.
3) It outlines three main phases of software testing - preliminary testing, testing, and user acceptance testing - to evaluate a new software system and identify any issues.
Test Case Optimization and Redundancy Reduction Using GA and Neural Networks — IJECEIAES
More than 50% of software development effort in a typical project is spent in the testing phase, and test case design and execution consume a lot of time, so automated generation of test cases is highly desirable. This paper presents a novel methodology for testing object-oriented software based on UML state chart diagrams: a function minimization technique is applied to generate test cases automatically from the diagrams. Software testing forms an integral part of the software development life cycle. Since the objective of testing is to ensure the conformity of an application to its specification, a test “oracle” is needed to determine whether a given test case exposes a fault; an automated oracle supporting human testers can reduce the actual cost of the testing process and the related maintenance costs. The paper uses UML state chart diagrams and tables for test case generation, a genetic algorithm to generate the test cases, and an artificial neural network as an optimization tool to reduce redundancy in the generated test cases. The neural network is trained by the backpropagation algorithm on a set of test cases applied to the original version of the system.
Software plays a critical role in businesses, governments, and societies, and improving the performance and quality of software is an important goal of software engineering. Mining software data has recently emerged as a promising means to meet this goal, due to two main trends: the increasing abundance of such data and its demonstrated usefulness in solving numerous real-world problems. Poor performance costs the software industry millions annually in the form of lost revenue, hardware costs, damaged customer relations, and decreased productivity. Performance analysis and evaluation through data-mining techniques can yield performance-improvement suggestions for software developers.
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an online and print open-access journal that provides rapid (monthly) publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published within 20 days after acceptance, and the peer review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
Black-box testing methods for software components — Astrid Yolanda
This document discusses black-box testing methods for software components. It begins by defining black-box testing as testing that ignores internal mechanisms and focuses on inputs and outputs. It notes that for black-box testing of components, specifications, interfaces, customizations, and source code availability must be considered. Various black-box testing techniques are then described, including random testing, partition testing, boundary value testing, decision tables testing, and mutation testing. The document provides details on how each technique is applied to test software components.
Extracting the Minimized Test Suite for Revised Simulink/Stateflow Model — ijaia
Test case generation techniques are successfully employed to generate test cases from a formal model. A problem is that as the model evolves, test suites tend to grow in size, making it too costly to execute entire suites. This paper proposes a practical approach to reducing the size of test suites for a modified Simulink/Stateflow (SL/SF) model, a formalism widely used for modeling software behavior in industries such as automobile manufacturing; the model describing a system is frequently modified until it is fixed. The proposed technique extracts a minimum-sized test suite, in terms of test coverage, by taking into account both the modified and the affected portions of the revised SL/SF model. Two real models for ECUs deployed in a commercial car are used in an empirical study.
The document summarizes the key steps and considerations in conducting a feasibility study for a proposed system. It discusses the three main feasibility factors - economic, technical, and behavioral. It outlines the 8 steps in a feasibility study: forming a project team, preparing flowcharts, enumerating candidate systems, describing system characteristics, evaluating performance and costs, weighting systems, selecting the best system, and reporting findings. The economic, technical, and behavioral aspects of each candidate system are evaluated before a recommendation is made.
This document discusses the importance of test data documentation. It defines test data as samples of valid and invalid data used for testing. Documenting test data has advantages like reusing data for regression testing and aiding user acceptance testing. Test design techniques like boundary value analysis and equivalence partitioning help identify test data by partitioning inputs. The document emphasizes generating comprehensive test data through templates and linking it to test scripts to ensure test coverage.
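The two design techniques mentioned above can be sketched directly: for an integer input range, boundary value analysis tests at and around each boundary, while equivalence partitioning picks one representative per partition. The helper names and the sample "age" field are illustrative:

```python
def boundary_values(lo: int, hi: int) -> list:
    # Boundary value analysis for an integer range [lo, hi]:
    # test just outside, at, and just inside each boundary.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_classes(lo: int, hi: int) -> dict:
    # One representative per partition: below range (invalid),
    # inside range (valid), above range (invalid).
    return {"invalid_low": lo - 10, "valid": (lo + hi) // 2, "invalid_high": hi + 10}

# Example: an "age" field accepting 18..65.
print(boundary_values(18, 65))      # → [17, 18, 19, 64, 65, 66]
print(equivalence_classes(18, 65))  # → {'invalid_low': 8, 'valid': 41, 'invalid_high': 75}
```

Data generated this way can be stored in the templates the document recommends and linked to test scripts for traceability.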
This document summarizes a research article that proposes using continuous hidden Markov models (CHMMs) with a change point detection algorithm for online adaptive bearings condition assessment. The approach aims to (1) estimate the initial number of CHMM states and parameters from historical data and (2) update the state space and parameters during monitoring to adapt to changes. Compared to existing techniques, the proposed approach improves HMM training, detects unknown states earlier, and better represents degradation processes with unknown conditions by changing the CHMM structure.
This document presents a new approach to measuring generic attributes (GAs) as part of process appraisals. It defines two GAs - Usefulness and Cost Effectiveness. Usefulness measures how well process outputs meet user needs. Cost Effectiveness measures whether the benefits of process outputs are worth the resources invested. The approach improves on prior GA definitions by focusing measurements on key process outputs, distinguishing between producers and users of outputs, and using objective evidence. It provides a practical method for incorporating GAs into process appraisals to evaluate the real-world performance and value of processes.
by Andrew Rowland
Management of aging electronic systems is a problem faced by many industries. Management of these systems requires some understanding of their reliability performance. In the United States commercial nuclear industry several approaches are being taken in an attempt to understand the reliability performance of plant systems. This article describes one approach being used. The method is non-parametric and requires no specialized data analysis software.
The document describes a scenario where a systems analyst has been hired to design a new ICT system for Dar Es Salaam High School after several smaller schools merged. The current systems need to be analyzed and a new system designed that can produce hundreds of reports quickly and find individual records efficiently. As part of the design process, the analyst will need to include key items and factors that influence their choice. Technical and program documentation will also need to be created to support the new system design.
The document discusses developing cell layouts in a job shop using group technology. It aims to determine the minimum machine capacity required to form inter-cellular layouts using group technology, improve machine performance measures and efficiency by eliminating unnecessary machines and implementing repetitive lots. Previous research identified formation methods but did not specify minimum capacity or performance improvement. The work to be done includes selecting machines and parts to develop a matrix, creating a new algorithm using existing notation, and designing a computational model of the new cell layout.
A model for run-time software architecture adaptation — ijseajournal
Since the global demand for software systems is increasing and environments and systems are constantly changing, the adaptability of software systems is of significant importance. Because the architecture of a software system is a high-level view of the system and makes modifiability possible at an overall level, adapting software systems by changing the architecture configuration can be considered an effective approach. In this study, the architecture configuration is modified through xADL, a software architecture description language with high flexibility. Software architecture reconfiguration is performed based on the existing rules of a rule-based system, which are written with respect to three strategies: load balancing, fixed bandwidth, and fixed latency. The proposed model is simulated on samples of a client-server system, a video-conferencing system, and a students' grading system. The proposed model can be used with all types of architecture, including client-server architecture and service-oriented architecture.
A simple numerical procedure for estimating nonlinear uncertainty propagation — ISA Interchange
This document presents a numerical method for estimating nonlinear uncertainty propagation. The method approximates the nonlinear function with piecewise linear segments. It then calculates the probability density function of the dependent variable based on the transformations of the linear segments. For functions of a normally distributed independent variable, the mean and confidence intervals of the dependent variable can be calculated using only the error function. A simple example of applying this method to a parabolic function is presented to demonstrate the technique.
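A minimal sketch of the idea, assuming the segmentation-plus-error-function approach described above: split the input range into short segments, treat the function as locally linear on each, and weight each segment by its Gaussian probability mass computed via `erf`. The segment count and the parabolic example setup are illustrative, not the paper's exact procedure:

```python
import math

def phi_cdf(x: float, mu: float, sigma: float) -> float:
    # Normal CDF via the error function (the only special function needed).
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def propagate(f, mu: float, sigma: float, n_seg: int = 2000, span: float = 6.0):
    # Piecewise-linear propagation of X ~ N(mu, sigma) through f:
    # cover +/- span*sigma, treat f as linear on each short segment,
    # and weight the segment's midpoint value by its Gaussian mass.
    lo, hi = mu - span * sigma, mu + span * sigma
    xs = [lo + (hi - lo) * k / n_seg for k in range(n_seg + 1)]
    mean = second = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        w = phi_cdf(x1, mu, sigma) - phi_cdf(x0, mu, sigma)  # segment mass
        ym = f(0.5 * (x0 + x1))                              # midpoint value
        mean += w * ym
        second += w * ym * ym
    return mean, second - mean * mean  # mean and variance of Y = f(X)

# Parabolic example in the spirit of the paper's demonstration:
# f(x) = x^2 with X ~ N(0, 1); exactly, E[X^2] = 1 and Var[X^2] = 2.
m, v = propagate(lambda x: x * x, 0.0, 1.0)
print(round(m, 3), round(v, 3))  # → 1.0 2.0
```

Confidence intervals for monotone segments follow the same pattern, inverting the accumulated segment masses instead of summing moments.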
On an LAS-integrated soft PLC system based on WorldFIP fieldbus — ISA Interchange
When the scale of field control is large, discrete control based on traditional WorldFIP field intelligent nodes suffers from reduced communication efficiency and inadequate real-time performance. A soft PLC system based on the WorldFIP fieldbus was therefore designed and implemented. The Link Activity Scheduler (LAS) was integrated into the system, field intelligent I/O modules acted as networked basic nodes, and discrete control logic was implemented with the LAS-integrated soft PLC system. The proposed system was composed of a configuration and supervisory sub-system and running sub-systems. The configuration and supervisory sub-system was implemented on a personal computer or an industrial personal computer; the running sub-systems were designed and implemented on embedded hardware and software. Communication and scheduling in the running sub-system were implemented in one embedded sub-module; discrete control and system self-diagnosis were implemented in another. The structure of the proposed system is presented and the design methodology for the sub-systems is expounded. Experiments were carried out to evaluate the performance of the proposed system in both discrete and process control, by investigating the effect on control performance of the network data transmission delay induced by the soft PLC in the WorldFIP network and of CPU workload. The experimental observations indicate that the proposed system is practically applicable.
Bi-objective redundancy allocation problem for a system with mixed repairable... — ISA Interchange
Traditionally, in the redundancy allocation problem (RAP), two general classes of optimization problems are considered: reliability optimization and availability optimization. Contrary to reliability optimization, fewer researchers have studied availability optimization to find the optimal combination of component types and redundancy levels for each subsystem so as to maximize (or minimize) the objectives. In each problem it is usually assumed that either all of the components are repairable or all are non-repairable; in real-world situations, however, systems usually consist of both. In this paper a new Mixed Integer Nonlinear Programming (MINLP) model is presented to analyze the availability optimization of a system with a given structure, using repairable and non-repairable components simultaneously. To solve the introduced MINLP, an efficient Genetic Algorithm (GA) is also developed, and a numerical example demonstrates its efficiency. Experimental results show that the proposed GA outperforms one of the most recommended algorithms in the literature.
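For intuition, steady-state availability composes across redundancy levels roughly as follows. This toy sketch shows the kind of objective such a GA would evaluate for each candidate allocation; the component figures and the series-of-parallel structure are invented for illustration, not the paper's MINLP model:

```python
from typing import List, Tuple

def comp_availability(mtbf: float, mttr: float) -> float:
    # Steady-state availability of a repairable component.
    return mtbf / (mtbf + mttr)

def subsystem_availability(a: float, n: int) -> float:
    # n identical components in parallel: the subsystem fails only if all fail.
    return 1.0 - (1.0 - a) ** n

def system_availability(subsystems: List[Tuple[float, int]]) -> float:
    # Series arrangement of parallel subsystems: product of availabilities.
    out = 1.0
    for a, n in subsystems:
        out *= subsystem_availability(a, n)
    return out

# Hypothetical system: duplicated repairable pumps (MTBF 900 h, MTTR 100 h)
# in series with a tripled component of fixed availability 0.95.
pumps = comp_availability(900.0, 100.0)  # 0.9
print(round(system_availability([(pumps, 2), (0.95, 3)]), 6))  # → 0.989876
```

A GA for the RAP would search over the component types and the redundancy counts `n`, trading this availability objective against cost.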
The document discusses methods for real-time diagnostics of technological processes and field equipment. It proposes a combined method using moving PCA for early detection of abnormal situations, process decomposition for fault localization, and fuzzy production rules for identification. For detection, moving PCA constructs new models over time to accommodate process changes. Identification compares the current situation vector to vectors of possible abnormal situations in diagnostic models. The method was tested on diagnosing a high-pressure polyethylene polymerization process.
Genetic fuzzy process metric measurement system for an operating systemijcseit
Operating system (Os) is the most essential software of the computer system,deprived ofit, the computer
system is totally useless. It is the frontier for assessing relevant computer resources. It performance greatly
enhances user overall objective across the system. Related literatures have try in different methods and
techniques to measure the process matric performance of the operating system but none has incorporated
the use of genetic algorithm and fuzzy logic in their varied techniques which indeed is a novel approach.
Extending the work of Michalis, this research focuses on measuring the process matrix performance of an
operating system utilizing set of operating system criteria’s while fusing fuzzy logic to handle
impreciseness and genetic for process optimization.
BIO-INSPIRED MODELLING OF SOFTWARE VERIFICATION BY MODIFIED MORAN PROCESSESIJCSEA Journal
A new approach for the control and prediction of verification activities for large safety-relevant software
systems will be presented in this paper. The model is applied on a macroscopic system level and based on
so-called Moran processes, which originate from mathematical biology and allow for the description
ofphenomena as, for instance, genetic drift. Beside the theoretical foundations of this novel approach, its
application on a real-world example from the medical engineering domain will be discussed.
This document discusses the theory of software testing. It covers several key topics:
1) It identifies five common problems in software testing like limitations of testing teams and issues with manual testing.
2) It describes different testing processes like verification, validation, white-box testing and black-box testing.
3) It outlines three main phases of software testing - preliminary testing, testing, and user acceptance testing - to evaluate a new software system and identify any issues.
Bio-Inspired Modelling of Software Verification by Modified Moran ProcessesIJCSEA Journal
A new approach for the control and prediction of verification activities for large safety-relevant software systems will be presented in this paper. The model is applied on a macroscopic system level and based on so-called Moran processes, which originate from mathematical biology and allow for the description of phenomena as, for instance, genetic drift. Beside the theoretical foundations of this novel approach, its application on a real-world example from the medical engineering domain will be discussed.
Test Case Optimization and Redundancy Reduction Using GA and Neural Networks IJECEIAES
More than 50% of the effort in a typical software development project is spent in the testing phase. Test case design and execution consume a lot of time, so automated generation of test cases is highly desirable. Here a novel testing methodology is presented for testing object-oriented software based on UML state chart diagrams. In this approach, a function minimization technique is applied to generate test cases automatically from UML state chart diagrams. Software testing forms an integral part of the software development life cycle. Since the objective of testing is to ensure the conformity of an application to its specification, a test “oracle” is needed to determine whether a given test case exposes a fault or not. An automated oracle to support the activities of human testers can reduce the actual cost of the testing process and the related maintenance costs. In this paper, a new concept is presented that uses a UML state chart diagram and tables for test case generation, with an artificial neural network as an optimization tool for reducing redundancy in the test cases generated by the genetic algorithm. The neural network is trained by the backpropagation algorithm on a set of test cases applied to the original version of the system.
Software plays a critical role in businesses, governments, and societies. Improving the
performance and quality of software is an important goal of software engineering. Mining
software data has recently emerged as a promising means to meet this goal, due to two main
trends: the increasing abundance of such data and its demonstrated helpfulness in solving
numerous real-world problems. Poor performance costs the software industry millions of
dollars annually in the form of lost revenue, hardware costs, damaged customer relations and
decreased productivity. Performance analysis and evaluation through data mining techniques
can yield performance-improvement suggestions for software developers.
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers with 12 issues per year. It is an online as well as print-version open-access journal that provides rapid (monthly) publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by a rapid process within 20 days after acceptance, and the peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Black-box testing methods for software components Astrid yolanda
This document discusses black-box testing methods for software components. It begins by defining black-box testing as testing that ignores internal mechanisms and focuses on inputs and outputs. It notes that for black-box testing of components, specifications, interfaces, customizations, and source code availability must be considered. Various black-box testing techniques are then described, including random testing, partition testing, boundary value testing, decision tables testing, and mutation testing. The document provides details on how each technique is applied to test software components.
EXTRACTING THE MINIMIZED TEST SUITE FOR REVISED SIMULINK/STATEFLOW MODEL ijaia
Test case generation techniques are successfully employed to generate test cases from a formal model. A problem is that as the model evolves, test suites tend to grow in size, making it too costly to execute entire test suites. This paper proposes a practical approach to reduce the size of test suites for a modified Simulink/Stateflow (SL/SF) model, which is popularly used for modeling software behavior in many industries such as automobile manufacturing. The model describing a system is frequently modified until it is fixed. The proposed technique extracts a minimized test suite, in terms of test coverage, by taking into account both the modified and the affected portions of the revised SL/SF model. Two real models for ECUs deployed in a commercial car are used for an empirical study.
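The core of such coverage-based test-suite reduction can be sketched as a greedy set cover. The sketch below is a minimal illustration, not the paper's actual algorithm (which additionally weights test cases by the modified and affected portions of the revised SL/SF model); the test-case and coverage-target names are hypothetical:

```python
# Greedy coverage-based test-suite minimization: keep picking the test case
# that covers the most still-uncovered targets until coverage is preserved.

def minimize_suite(suite):
    """suite: dict mapping test-case name -> set of coverage targets it hits.
    Returns a reduced list of test cases preserving total coverage."""
    remaining = set().union(*suite.values())   # all coverage targets
    selected = []
    while remaining:
        # test case covering the most still-uncovered targets
        best = max(suite, key=lambda t: len(suite[t] & remaining))
        gained = suite[best] & remaining
        if not gained:
            break                              # leftover targets are uncoverable
        selected.append(best)
        remaining -= gained
    return selected

# Hypothetical coverage relation between test cases and model elements
suite = {
    "tc1": {"s1", "s2", "s3"},
    "tc2": {"s2", "s3"},
    "tc3": {"s4"},
    "tc4": {"s1", "s4"},
}
reduced = minimize_suite(suite)
```

Greedy set cover is a common baseline for this problem; coverage of the whole suite is preserved while redundant test cases are dropped.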
The document summarizes the key steps and considerations in conducting a feasibility study for a proposed system. It discusses the three main feasibility factors - economic, technical, and behavioral. It outlines the 8 steps in a feasibility study: forming a project team, preparing flowcharts, enumerating candidate systems, describing system characteristics, evaluating performance and costs, weighting systems, selecting the best system, and reporting findings. The economic, technical, and behavioral aspects of each candidate system are evaluated before a recommendation is made.
This document discusses the importance of test data documentation. It defines test data as samples of valid and invalid data used for testing. Documenting test data has advantages like reusing data for regression testing and aiding user acceptance testing. Test design techniques like boundary value analysis and equivalence partitioning help identify test data by partitioning inputs. The document emphasizes generating comprehensive test data through templates and linking it to test scripts to ensure test coverage.
This document summarizes a research article that proposes using continuous hidden Markov models (CHMMs) with a change point detection algorithm for online adaptive bearings condition assessment. The approach aims to (1) estimate the initial number of CHMM states and parameters from historical data and (2) update the state space and parameters during monitoring to adapt to changes. Compared to existing techniques, the proposed approach improves HMM training, detects unknown states earlier, and better represents degradation processes with unknown conditions by changing the CHMM structure.
This document presents a new approach to measuring generic attributes (GAs) as part of process appraisals. It defines two GAs - Usefulness and Cost Effectiveness. Usefulness measures how well process outputs meet user needs. Cost Effectiveness measures whether the benefits of process outputs are worth the resources invested. The approach improves on prior GA definitions by focusing measurements on key process outputs, distinguishing between producers and users of outputs, and using objective evidence. It provides a practical method for incorporating GAs into process appraisals to evaluate the real-world performance and value of processes.
by Andrew Rowland
Management of aging electronic systems is a problem faced by many industries, and managing these systems requires some understanding of their reliability performance. In the United States commercial nuclear industry, several approaches are being taken in an attempt to understand the reliability performance of plant systems. This article describes one approach being used. The method is non-parametric and requires no specialized data analysis software.
The document describes a scenario where a systems analyst has been hired to design a new ICT system for Dar Es Salaam High School after several smaller schools merged. The current systems need to be analyzed and a new system designed that can produce hundreds of reports quickly and find individual records efficiently. As part of the design process, the analyst will need to include key items and factors that influence their choice. Technical and program documentation will also need to be created to support the new system design.
The document discusses developing cell layouts in a job shop using group technology. It aims to determine the minimum machine capacity required to form inter-cellular layouts using group technology, improve machine performance measures and efficiency by eliminating unnecessary machines and implementing repetitive lots. Previous research identified formation methods but did not specify minimum capacity or performance improvement. The work to be done includes selecting machines and parts to develop a matrix, creating a new algorithm using existing notation, and designing a computational model of the new cell layout.
A model for run time software architecture adaptation ijseajournal
Since the global demand for software systems is increasing and environments are constantly
changing, the adaptability of software systems is of significant importance. Because the architecture of
a software system is a high-level view of the system and makes modifiability possible at an overall level,
changing the architecture configuration can be considered an effective approach to adapting
software systems. In this study, the architecture configuration is modified through the xADL
language, a highly flexible software architecture description language. Software
architecture reconfiguration is done based on the existing rules of a rule-based system, which are written with
respect to three strategies: load balancing, fixed bandwidth and fixed latency. The proposed model is
simulated on samples of a client-server system, a video conferencing system and a students'
grading system. The proposed model can be used with all types of architecture, including Client-Server
Architecture, Service-Oriented Architecture, etc.
A simple numerical procedure for estimating nonlinear uncertainty propagation ISA Interchange
This document presents a numerical method for estimating nonlinear uncertainty propagation. The method approximates the nonlinear function with piecewise linear segments. It then calculates the probability density function of the dependent variable based on the transformations of the linear segments. For functions of a normally distributed independent variable, the mean and confidence intervals of the dependent variable can be calculated using only the error function. A simple example of applying this method to a parabolic function is presented to demonstrate the technique.
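The idea of the method can be sketched as follows: approximate f with piecewise-linear segments, then accumulate the normal probability mass each segment maps below a given output value, using only the error function. This is a minimal illustration under the assumption that f is monotone on each small segment; the segment count and the parabolic example are assumptions, not the paper's exact procedure:

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2), via the error function only."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def propagate(f, mu, sigma, n_seg=400, width=6.0):
    """Approximate f by piecewise-linear segments over mu +/- width*sigma and
    return the (approximate) CDF of Y = f(X) for X ~ N(mu, sigma^2).
    Assumes f is monotone on each small segment."""
    xs = [mu - width * sigma + 2.0 * width * sigma * i / n_seg
          for i in range(n_seg + 1)]
    segs = []
    for a, b in zip(xs, xs[1:]):
        fa, fb = f(a), f(b)
        segs.append((a, b, min(fa, fb), max(fa, fb)))
    def cdf(y):
        p = 0.0
        for a, b, lo, hi in segs:
            mass = norm_cdf(b, mu, sigma) - norm_cdf(a, mu, sigma)
            if hi <= y:
                p += mass                        # segment maps entirely below y
            elif lo < y:
                p += mass * (y - lo) / (hi - lo) # linear share inside segment
        return p
    return cdf

# Parabolic example: Y = X^2 with X ~ N(0, 1), i.e. Y is chi-square(1)
cdf = propagate(lambda x: x * x, 0.0, 1.0)
```

For the parabola this reproduces the chi-square(1) distribution to within the segment-width error, e.g. P(Y <= 1) = P(|X| <= 1) ≈ 0.683.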
On an LAS-integrated soft PLC system based on WorldFIP fieldbus ISA Interchange
Communication efficiency is lowered and real-time performance is not good enough in discrete control based on traditional WorldFIP field intelligent nodes in case that the scale of control in field is large. A soft PLC system based on WorldFIP fieldbus was designed and implemented. Link Activity Scheduler (LAS) was integrated into the system and field intelligent I/O modules acted as networked basic nodes. Discrete control logic was implemented with the LAS-integrated soft PLC system. The proposed system was composed of configuration and supervisory sub-systems and running sub-systems. The configuration and supervisory sub-system was implemented with a personal computer or an industrial personal computer; running subsystems were designed and implemented based on embedded hardware and software systems. Communication and schedule in the running subsystem was implemented with an embedded sub-module; discrete control and system self-diagnosis were implemented with another embedded sub-module. Structure of the proposed system was presented. Methodology for the design of the sub-systems was expounded. Experiments were carried out to evaluate the performance of the proposed system both in discrete and process control by investigating the effect of network data transmission delay induced by the soft PLC in WorldFIP network and CPU workload on resulting control performances. The experimental observations indicated that the proposed system is practically applicable.
Bi-objective redundancy allocation problem for a system with mixed repairable... ISA Interchange
Traditionally, in the redundancy allocation problem (RAP), two general classes of optimization problems are considered: reliability optimization and availability optimization. Contrary to reliability optimization, fewer researchers have studied availability optimization to find the optimal combination of component types and redundancy levels for each subsystem in a system to maximize (or minimize) the objectives. In each problem it is assumed that either all the components are repairable or they are non-repairable. However, in real-world situations, systems usually consist of both repairable and non-repairable components. In this paper a new Mixed Integer Nonlinear Programming (MINLP) model is presented to analyze the availability optimization of a system with a given structure, using both repairable and non-repairable components simultaneously. To find the solution of the introduced MINLP, an efficient Genetic Algorithm (GA) is also developed. Furthermore, to show the efficiency of the proposed GA, a numerical example is presented. Experimental results demonstrate that the proposed GA performs better than one of the most recommended algorithms in the literature.
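A minimal sketch of the GA side of such a formulation, restricted for brevity to repairable components in a series-parallel system under a budget constraint (the availabilities, costs and GA settings are hypothetical, and the paper's actual MINLP also mixes in non-repairable components):

```python
import random

random.seed(1)

# Hypothetical 3-subsystem series system. Each component type has a
# steady-state availability a = mu / (lambda + mu) and a unit cost.
avail = [0.90, 0.85, 0.95]
cost = [4.0, 3.0, 5.0]
BUDGET = 40.0
MAX_RED = 5   # maximum redundancy level per subsystem

def system_availability(n):
    """Availability of a series system of parallel subsystems; infeasible
    (over-budget) designs are penalized to zero."""
    if sum(c * k for c, k in zip(cost, n)) > BUDGET:
        return 0.0
    prod = 1.0
    for a, k in zip(avail, n):
        prod *= 1.0 - (1.0 - a) ** k   # subsystem with k redundant components
    return prod

def ga(pop_size=30, gens=60):
    pop = [[random.randint(1, MAX_RED) for _ in cost] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=system_availability, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, len(cost))
            child = p1[:cut] + p2[cut:]       # one-point crossover
            if random.random() < 0.2:         # mutation
                child[random.randrange(len(cost))] = random.randint(1, MAX_RED)
            children.append(child)
        pop = elite + children
    return max(pop, key=system_availability)

best = ga()
```

The chromosome is simply the vector of redundancy levels; penalizing infeasible designs to zero availability is one common way of handling the budget constraint.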
Robust PID controller design for non-minimum phase time delay systems ISA Interchange
1. A robust PID controller design method is presented for non-minimum phase systems with uncertain time delays.
2. The gain-phase margin tester method is used to determine a parameter region in the controller parameter plane that guarantees at least a specified gain and phase margin, providing robustness against instability induced by time delays.
3. PID controller parameters are selected from this region to achieve a compromise between robustness and tracking performance even in the presence of uncertain time delays.
This document describes a novel auto-tuning method for cascade control systems. The method uses a simple relay feedback test to simultaneously identify both the inner and outer loop process model parameters. This allows established PID tuning rules to then be applied to tune both loops. The method is simpler than existing approaches and can be directly integrated into commercial auto-tuning systems. It is illustrated through examples to be effective and robust.
A fuzzy logic method is developed for gain scheduling of PID controllers that improves upon existing fuzzy-PID schemes. The method uses a single fuzzy input variable related to the derivative of the PID manipulated variable, eliminating redundant rules. Online tuning improves PID performance while retaining original PID parameters. The fuzzy and PID manipulated variables are related through a differential equation, allowing online replacement and tuning with two parameters. The method is demonstrated on a temperature control process, improving PID performance to the level of model predictive control with only a few tuning tests.
Plant-Wide Control: Eco-Efficiency and Control Loop Configuration ISA Interchange
Since the eco-efficiency of all industrial processes/plants has become increasingly important, engineers need to find a way to integrate the control loop configuration and the measurements of eco-efficiency. A new measure of eco-efficiency, the exergy eco-efficiency factor, for control loop configuration, is proposed in this paper. The exergy eco-efficiency factor is based on the thermodynamic concept of exergy which can be used to analyse a process in terms of its efficiency associated with the control configuration. The combination of control pairing configuration techniques (such as the relative gain array, RGA and Niederlinski index, NI) and the proposed exergy eco-efficiency factor will guide the process designer to reach the optimal control design with low operational cost (i.e., energy consumption). The exergy eco-efficiency factor is implemented in the process simulation case study and the reliability of the proposed method is demonstrated by dynamic simulation results.
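The control-pairing tools mentioned above (RGA and NI) are standard and straightforward to compute from a steady-state gain matrix; a minimal sketch with a hypothetical 2x2 plant:

```python
import numpy as np

def rga(G):
    """Relative Gain Array: element-wise product of G and (G^{-1})^T.
    Rows and columns each sum to 1; pairings with relative gains near 1
    are preferred."""
    G = np.asarray(G, dtype=float)
    return G * np.linalg.inv(G).T

def niederlinski(G):
    """Niederlinski index: det(G) divided by the product of diagonal gains.
    A negative NI means the diagonal pairing is structurally unstable."""
    G = np.asarray(G, dtype=float)
    return np.linalg.det(G) / np.prod(np.diag(G))

# Hypothetical 2x2 steady-state gain matrix of a process
G = [[2.0, 0.5],
     [1.0, 1.5]]
L = rga(G)
NI = niederlinski(G)
```

For this example the diagonal relative gains are 1.2 and the NI is positive, so the diagonal pairing would be acceptable by both screening tests.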
Adaptive backstepping sliding mode control of flexible ball screw drives with... ISA Interchange
This paper presents a method to model and design servo controllers for flexible ball screw drives with dynamic variations. A mathematical model describing the structural flexibility of the ball screw drive containing time-varying uncertainties and disturbances with unknown bounds is proposed. A mode-compensating adaptive backstepping sliding mode controller is designed to suppress the vibration. The time-varying uncertainties and disturbances represented in finite-term Fourier series can be estimated by updating the Fourier coefficients through function approximation technique. Adaptive laws are obtained from Lyapunov approach to guarantee the convergence and stability of the closed loop system. The simulation results indicate that the tracking accuracy is improved considerably with the proposed scheme when the time-varying parametric uncertainties and disturbances exist.
Robust PID tuning strategy for uncertain plants based on the Kharitonov theorem ISA Interchange
This document proposes a robust PID tuning strategy for uncertain plants based on the Kharitonov theorem.
1) The Kharitonov theorem is used to define four "vertex polynomials" that characterize the stability of an uncertain plant family.
2) A systematic graphical method is developed to design a PID controller that stabilizes all four vertex polynomials simultaneously. This results in a "Kharitonov region" of stabilizing controller parameters.
3) Additional gain and phase margin specifications are imposed to ensure the robust stability and performance of the closed loop system for the uncertain plant family.
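The four Kharitonov vertex polynomials follow a fixed lower/upper-bound pattern on the interval coefficients; by Kharitonov's theorem, the whole interval family is Hurwitz stable exactly when these four polynomials are. A minimal construction sketch with hypothetical coefficient bounds:

```python
def kharitonov(lower, upper):
    """Return the four Kharitonov (vertex) polynomials of an interval
    polynomial a0 + a1*s + ... + an*s^n, as coefficient lists (a0 first).
    lower[i] and upper[i] bound coefficient a_i."""
    patterns = [   # lower (0) / upper (1) bound choice, repeating with period 4
        (0, 0, 1, 1),  # K1: min, min, max, max, ...
        (1, 1, 0, 0),  # K2: max, max, min, min, ...
        (0, 1, 1, 0),  # K3: min, max, max, min, ...
        (1, 0, 0, 1),  # K4: max, min, min, max, ...
    ]
    polys = []
    for pat in patterns:
        coeffs = [upper[i] if pat[i % 4] else lower[i]
                  for i in range(len(lower))]
        polys.append(coeffs)
    return polys

# Hypothetical interval plant denominator: per-coefficient [lower, upper] bounds
lower = [1.0, 2.0, 3.0, 1.0]
upper = [2.0, 3.0, 4.0, 2.0]
K1, K2, K3, K4 = kharitonov(lower, upper)
```

A robust design procedure like the one summarized above then only needs to stabilize these four fixed polynomials instead of the infinite interval family.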
Estimation of region of attraction for polynomial nonlinear systems a numeric... ISA Interchange
This document introduces a numerical method to estimate the region of attraction (ROA) for polynomial nonlinear systems using sum-of-squares programming. The method computes a local Lyapunov function and an invariant set around a locally asymptotically stable equilibrium point. This invariant set provides an estimation of the ROA for the equilibrium point. The paper then proposes an algorithm to select a "shape factor" based on the linearized dynamic model of the system, which is used to enlarge the estimation of the ROA by solving a sum-of-squares optimization problem in each iteration. Numerical examples are provided to demonstrate the efficiency of the proposed method.
A modified narmax model based self-tuner with fault tolerance for unknown non... ISA Interchange
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one obtains a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input–output direct transmission term, which also has measurement and system noises and inaccessible system states. Besides, an effective state-space self-tuner with a fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection.
Design of a self tuning regulator for temperature control of a polymerization... ISA Interchange
The temperature control of a polymerization reactor described by Chylla and Haase, a control engineering benchmark problem, is used to illustrate the potential of adaptive control design by employing a self-tuning regulator concept. In the benchmark scenario, the operation of the reactor must be guaranteed under various disturbing influences, e.g., changing ambient temperatures or impurity of the monomer. The conventional cascade control provides a robust operation, but often lacks in control performance concerning the required strict temperature tolerances. The self-tuning control concept presented in this contribution solves the problem. This design calculates a trajectory for the cooling jacket temperature in order to follow a predefined trajectory of the reactor temperature. The reaction heat and the heat transfer coefficient in the energy balance are estimated online by using an unscented Kalman filter (UKF). Two simple physically motivated relations are employed, which allow the non-delayed estimation of both quantities. Simulation results under model uncertainties show the effectiveness of the self-tuning control concept.
Cascade control of superheated steam temperature with neuro PID controller ISA Interchange
In this paper, an improved cascade control methodology for superheated processes is developed, in which the primary PID controller is implemented by neural networks trained by minimizing error entropy criterion. The entropy of the tracking error can be estimated recursively by utilizing receding horizon window technique. The measurable disturbances in superheated processes are input to the neuro-PID controller besides the sequences of tracking error in outer loop control system, hence, feedback control is combined with feedforward control in the proposed neuro-PID controller. The convergent condition of the neural networks is analyzed. The implementation procedures of the proposed cascade control approach are summarized. Compared with the neuro-PID controller using minimizing squared error criterion, the proposed neuro-PID controller using minimizing error entropy criterion may decrease fluctuations of the superheated steam temperature. A simulation example shows the advantages of the proposed method.
Modified Smith predictor design for periodic disturbance rejection ISA Interchange
The modified Smith predictor control scheme aims to improve rejection of periodic disturbances while maintaining the superior setpoint response of traditional Smith predictor control. It does this by adding an additional feedback loop containing controllers Gc2 and Gc3 that provide a stabilizing effect. The stability of the overall closed-loop system is analyzed, showing it can stabilize both stable and unstable processes with time delay. Simulation and experimental results demonstrate the effectiveness of the approach.
An opto isolator based linearization technique of a typical thyristor driven ... ISA Interchange
A thyristor driven pump is operated by varying the DC input signal in the firing circuit of the thyristor drive. This operation suffers from difficulties due to the nonlinear relation between the thyristor output and the DC input. In the present paper, an opto-isolator based linearization technique for a typical thyristor driven pump has been proposed. The design, fabrication and the necessary circuit diagram, along with theoretical explanations of the resultant output, have been described. The operation of the linearized thyristor driven pump has been studied experimentally, and the experimental data before and after linearization are reported. The characteristic graphs are found to have very good linearity.
This document provides summaries of several appendices related to PID tuning and control. Appendix A offers a short cut tuning method that can identify process dynamics and tune a controller in about five dead times. It reduces open loop test time by over 80% for processes with large time constants. Appendix B provides a PID checklist to help utilize full PID capabilities and ensure parameters are correctly set. Appendix C derives equations to understand the effects of dynamics and tuning on performance. It provides a guide to change plant dynamics and tuning to achieve objectives.
This document summarizes a research article that analyzes the performance of a sugar mill feeding system using Markov process modeling. The feeding system has four subsystems: a cutting system, crushing system, bagasse carrying system, and heat generating system. The researchers model the system's states using a time-homogeneous Markov process to determine the reliability function and steady-state availability. They then use genetic algorithm optimization to determine optimal system design parameters that maximize availability. The methodology section outlines the assumptions made and describes how Markov modeling and genetic algorithms are applied to analyze the system and optimize its performance.
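The Markov part of such an analysis reduces to solving pi Q = 0 with the probabilities summing to one; steady-state availability is then the total probability of the up states. A minimal sketch for a single repairable subsystem with hypothetical failure and repair rates (not the sugar mill's actual four-subsystem model):

```python
import numpy as np

def steady_state(Q):
    """Steady-state probabilities of a CTMC with generator matrix Q
    (rows sum to zero): solve pi @ Q = 0 subject to sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # append the normalization equation
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical repairable subsystem: state 0 = up, state 1 = down,
# failure rate lam, repair rate mu -> availability mu / (lam + mu)
lam, mu = 0.01, 0.5
Q = np.array([[-lam, lam],
              [mu, -mu]])
pi = steady_state(Q)
```

The same solver applies unchanged to a larger state space built from the four subsystems; only the generator matrix grows.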
Comparative Study on the Prediction of Remaining Useful Life of an Aircraft E... IRJET Journal
The document discusses comparative studies on predicting the remaining useful life (RUL) of aircraft engines using machine learning models. It first introduces the importance of RUL prediction for aviation safety and describes the CMAPSS dataset used containing sensor data from aircraft engines. Various RUL prediction techniques are discussed including physics-based, data-driven, and hybrid approaches. Popular machine learning algorithms for RUL prediction are evaluated on the dataset including linear regression, support vector regression, random forest, decision trees and neural networks. Random forest is found to have the highest accuracy for RUL prediction based on the evaluation metrics. The study aims to determine the best machine learning model for accurate RUL prediction and scheduling maintenance activities for aircraft engines.
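Data-driven RUL prediction, in its simplest form, fits a degradation trend to a health indicator and extrapolates it to a failure threshold. The sketch below uses synthetic data and ordinary least squares rather than the CMAPSS dataset or the models evaluated in the study; all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic health indicator: degrades linearly with noise; failure occurs
# when it crosses a threshold (a toy stand-in for a CMAPSS sensor trend).
cycles = np.arange(100)
health = 1.0 - 0.004 * cycles + rng.normal(0, 0.01, size=cycles.size)
THRESHOLD = 0.2

def predict_rul(t, h, threshold):
    """Fit h = a + b*t and extrapolate to the failure threshold."""
    b, a = np.polyfit(t, h, 1)        # slope, intercept
    t_fail = (threshold - a) / b      # time when the fitted line hits threshold
    return t_fail - t[-1]             # cycles remaining after last observation

rul = predict_rul(cycles, health, THRESHOLD)
```

Here the true remaining life is 101 cycles (the noise-free indicator reaches 0.2 at cycle 200); the regression-based methods in the study generalize this idea to multivariate sensor data.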
Assessing Software Reliability Using SPC – An Order Statistics Approach IJCSEA Journal
There are many software reliability models that are based on the times of occurrences of errors in the debugging of software. It is shown that it is possible to do asymptotic likelihood inference for software reliability models based on order statistics or Non-Homogeneous Poisson Processes (NHPP), with asymptotic confidence levels for interval estimates of parameters. In particular, interval estimates from these models are obtained for the conditional failure rate of the software, given the data from the debugging process. The data can be grouped or ungrouped. For someone making a decision about when to market software, the conditional failure rate is an important parameter. Order statistics are used in a wide variety of practical situations. Their use in characterization problems, detection of outliers, linear estimation, study of system reliability, life-testing, survival analysis, data compression and many other fields can be seen from the many books on the subject. Statistical Process Control (SPC) can monitor the forecasting of software failure and thereby contribute significantly to the improvement of software reliability. Control charts are widely used for software process control in the software industry. In this paper we propose a control mechanism based on order statistics of the cumulative quantity between observations of time-domain
failure data, using the mean value function of a Half Logistic Distribution (HLD) based NHPP.
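For concreteness, the mean value function of the HLD-based NHPP takes the form m(t) = a(1 - e^(-bt)) / (1 + e^(-bt)), and probability-based control limits can be placed by inverting it. A minimal sketch with hypothetical parameter estimates (the limit probabilities mirror the usual 3-sigma convention and are an assumption, not necessarily the paper's exact chart construction):

```python
import math

def m(t, a, b):
    """Mean value function of the half-logistic-distribution-based NHPP:
    m(t) = a * (1 - exp(-b t)) / (1 + exp(-b t))."""
    e = math.exp(-b * t)
    return a * (1.0 - e) / (1.0 + e)

def m_inv(y, a, b):
    """Inverse of m(t): the time by which y failures are expected."""
    r = y / a
    return -math.log((1.0 - r) / (1.0 + r)) / b

# Hypothetical parameter estimates from the debugging data:
# a = expected total faults, b = detection rate
a, b = 50.0, 0.01

# 3-sigma-style probability limits commonly used on such control charts
UCL = m_inv(0.99865 * a, a, b)   # upper control limit (on the time scale)
CL  = m_inv(0.5 * a, a, b)       # centre line
LCL = m_inv(0.00135 * a, a, b)   # lower control limit
```

Observed inter-failure quantities falling outside the (LCL, UCL) band would then signal an out-of-control debugging process.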
A Real-Time Information System For Multivariate Statistical Process Control Angie Miller
This document describes the design and implementation of a real-time multivariate process control system that uses principal component analysis models to monitor a manufacturing process in real-time. The system analyzes process data, detects errors, and presents contributing factors through a graphical user interface for operators and engineers. It is intended to help identify improvement opportunities by better utilizing available process data and information within temporal bounds important for process control.
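Such PCA-based monitoring typically tracks two statistics per observation: Hotelling's T² in the retained-component subspace and the squared prediction error (SPE, or Q) in the residual space. A minimal sketch on synthetic data (the variables, correlation structure and retained-component count are assumptions, not the actual manufacturing process):

```python
import numpy as np

rng = np.random.default_rng(42)

# Training data: normal operation of a hypothetical 5-variable process,
# with variables 0 and 1 strongly correlated.
X = rng.normal(size=(500, 5))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]

mean, std = X.mean(0), X.std(0)
Xs = (X - mean) / std

# PCA via SVD; retain k principal components
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2
P = Vt[:k].T                        # loadings
var = (S[:k] ** 2) / (len(Xs) - 1)  # component variances

def monitor(x):
    """Return (T2, SPE) statistics for a new observation x."""
    xs = (x - mean) / std
    t = xs @ P                      # scores in the PC subspace
    T2 = np.sum(t ** 2 / var)       # Hotelling's T^2
    residual = xs - t @ P.T         # part not explained by the model
    SPE = residual @ residual       # squared prediction error (Q statistic)
    return T2, SPE

T2_ok, SPE_ok = monitor(X[0])
# A fault that breaks the learned correlation between variables 0 and 1:
T2_bad, SPE_bad = monitor(np.array([4.0, -4.0, 0.0, 0.0, 0.0]))
```

Breaking the correlation inflates the SPE statistic, which is exactly the kind of event such a system would flag and decompose into variable contributions for the operator.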
Parameter Estimation of Software Reliability Growth Models Using Simulated An... Editor IJCATR
The parameter estimation of the Goel-Okumoto model is performed using simulated annealing. The Goel-Okumoto
model is based on the exponential model and is a simple non-homogeneous Poisson process (NHPP) model. Simulated
annealing is a heuristic optimisation technique that provides a way to escape local optima. The data set is fitted
using the simulated annealing technique. SA is a stochastic algorithm with better performance than the Genetic
Algorithm (GA); it depends on the specification of the neighbourhood structure of a state space and the parameter
settings of its cooling schedule.
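A minimal sketch of this kind of estimation: fit the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to cumulative failure counts by minimizing the squared error with simulated annealing. The data, neighbourhood steps and cooling schedule below are hypothetical, not the paper's settings:

```python
import math
import random

random.seed(7)

# Synthetic cumulative failure data from a Goel-Okumoto mean value function
# m(t) = a * (1 - exp(-b t)) with "true" a = 100, b = 0.05 (hypothetical).
times = list(range(1, 41))
observed = [100.0 * (1.0 - math.exp(-0.05 * t)) for t in times]

def sse(params):
    """Sum of squared errors between observed counts and the model."""
    a, b = params
    return sum((y - a * (1.0 - math.exp(-b * t))) ** 2
               for t, y in zip(times, observed))

def anneal(start, temp=100.0, cooling=0.995, steps=20000):
    current, f_cur = start, sse(start)
    best, f_best = current, f_cur
    for _ in range(steps):
        a = current[0] + random.gauss(0.0, 1.0)     # neighbour state
        b = current[1] + random.gauss(0.0, 0.005)
        if a > 0 and b > 0:
            f_new = sse((a, b))
            # always accept improvements; accept worse states with
            # Boltzmann probability exp(-delta / temp) to escape local optima
            if f_new < f_cur or random.random() < math.exp((f_cur - f_new) / temp):
                current, f_cur = (a, b), f_new
                if f_new < f_best:
                    best, f_best = (a, b), f_new
        temp *= cooling
    return best

a_hat, b_hat = anneal((50.0, 0.01))
```

The acceptance of occasional uphill moves at high temperature is what distinguishes SA from plain hill climbing and lets it leave local optima of the error surface.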
Software Cost Estimation Using Clustering and Ranking Scheme Editor IJMTER
Software cost estimation is an important task in the software design and development process.
Planning and budgeting tasks are carried out with reference to the estimated software cost values. A variety of
software properties are used in the cost estimation process, including hardware, product, technology and
methodology factors. The quality of software cost estimation is measured with reference to its accuracy.
Software cost estimation is carried out using three types of techniques: regression-based
models, analogy-based models and machine learning models. Each type comprises a set of techniques for the
software cost estimation process. Eleven cost estimation techniques under these three categories are
used in the system. The Attribute-Relation File Format (ARFF) is used to maintain the software product
property values, and the ARFF file is the main input for the system.
The proposed system is designed to perform clustering and ranking of software cost
estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation
mechanism. The system improves the accuracy of the clustering and ranking process and produces
efficient ranking results for software cost estimation methods.
Smart E-Logistics for SCM Spend Analysis IRJET Journal
This document discusses applying predictive analytics and machine learning techniques like LSTM models to supply chain management problems. It focuses on spend analysis and extracting fields from invoices and proofs of delivery using optical character recognition. The key points are:
1. LSTM models are applied to time series spend analysis data and shown to provide more accurate predictions than ARIMA models.
2. A technique is proposed to extract fields from printed and handwritten documents using models trained on Form Recognizer and then cleaning the extracted data.
3. The technique aims to reconcile invoices and proofs of delivery by comparing extracted data fields and calculating a match confidence score.
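The reconciliation step in point 3 can be sketched as a field-by-field comparison. The field names, documents and use of `difflib` string similarity below are assumptions for illustration; the paper's actual scoring is not specified here:

```python
from difflib import SequenceMatcher

# Sketch of invoice / proof-of-delivery reconciliation: compare the extracted
# fields pairwise and average the string similarities into one match
# confidence score. The field names are illustrative assumptions.
def match_confidence(invoice, pod, fields=("po_number", "quantity", "consignee")):
    ratios = [
        SequenceMatcher(None, str(invoice.get(f, "")), str(pod.get(f, ""))).ratio()
        for f in fields
    ]
    return sum(ratios) / len(ratios)

invoice = {"po_number": "PO-4711", "quantity": "120", "consignee": "Acme Ltd"}
pod = {"po_number": "PO-4711", "quantity": "120", "consignee": "Acme Ltd."}
confidence = match_confidence(invoice, pod)  # high, but below 1.0
```

Averaging per-field similarities tolerates small OCR differences (here a trailing period in the consignee name) while still penalizing genuine mismatches.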
Guidelines to Understanding Design of Experiment and Reliability Prediction - ijsrd.com
This paper focuses on how to plan experiments effectively and how to analyse data correctly; practical and correct methods for analysing data from life testing are also provided. It gives an extensive overview of reliability issues, definitions and prediction methods currently used in industry, and defines the different methods and the correlations between them so that reliability statements from different manufacturers, who may use different prediction methods and failure-rate databases, can be compared easily. The paper finds, however, that such comparison is very difficult and risky unless the conditions behind the reliability statements are scrutinized and analysed in detail.
Mc calley pserc_final_report_s35_special_protection_schemes_dec_2010_nm_nsrc - Neil McNeill
This document provides a summary of a report on system protection schemes (SPS). It discusses SPS standards, practices, and advancements. It also examines relationships between SPS and other industries like process control and nuclear. The report proposes frameworks to identify risks to SPS from both a process and system view. It contributes methods to assess SPS operational complexity and incorporate this into transmission planning studies. The frameworks and models developed in this report can be applied to real utility systems to evaluate SPS reliability and impacts on the power grid.
Software aging prediction – a new approach - IJECEIAES
To meet users' requirements, which are very diverse in recent days, computing infrastructure has become complex; a cloud-based system is one example. Such systems suffer from resource exhaustion in the long run, which leads to performance degradation. This phenomenon is called software aging. Software aging needs to be predicted in advance so that pre-emptive rejuvenation, the technique that refreshes the system and brings it back to a healthy state, can be triggered to improve service availability. In this work, a new k-nearest neighbor (k-NN) based approach has been used to identify the virtual machine's status, and a prediction of resource exhaustion time has been made. The proposed prediction model uses static thresholding and adaptive thresholding methods. The performance of the algorithms is compared, and it is found that k-NN performs comparatively better for classification, with an accuracy of 97.6%, whereas its counterparts achieved 96.0% (naïve Bayes) and 92.8% (decision tree). A comparison of the proposed work with previous similar works is also discussed.
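The classification half of this approach can be sketched with a minimal k-NN over resource metrics. The (memory-used %, swap-used %) features, the training samples and the labels are invented for illustration, not taken from the paper:

```python
import math
from collections import Counter

# Minimal k-NN sketch for labelling a virtual machine's status from resource
# metrics; features are hypothetical (memory_used_%, swap_used_%) pairs.
def knn_predict(train, query, k=3):
    # Pick the k samples closest to the query and take a majority vote.
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((35, 5), "healthy"), ((40, 8), "healthy"), ((45, 10), "healthy"),
         ((85, 60), "aging"), ((90, 70), "aging"), ((95, 80), "aging")]
status = knn_predict(train, (88, 65))  # lands among the "aging" samples
```

A VM flagged "aging" would then be the candidate for the resource exhaustion time prediction and eventual rejuvenation described above.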
TRANSFORMING SOFTWARE REQUIREMENTS INTO TEST CASES VIA MODEL TRANSFORMATION - ijseajournal
Executable test cases originate at the onset of testing as abstract requirements that represent system
behavior. Their manual development is time-consuming, susceptible to errors, and expensive. Translating
system requirements into behavioral models and then transforming them into a scripting language has the
potential to automate their conversion into executable tests. Ideally, an effective testing process should
start as early as possible, refine the use cases with ample details, and facilitate the creation of test cases.
We propose a methodology that enables automation in converting functional requirements into executable
test cases via model transformation. The proposed testing process starts with capturing system behavior in
the form of visual use cases, using a domain-specific language, defining transformation rules, and
ultimately transforming the use cases into executable tests.
Specification Based or Black Box Techniques - RakhesLeoPutra
This document defines and describes several specification-based black-box testing techniques:
1) Equivalence partitioning divides conditions into groups that should be handled equivalently, and tests one condition from each group.
2) Decision tables aid in systematically selecting test cases to test combinations of inputs and states.
3) State transition testing models systems with different outputs depending on prior states using state diagrams.
4) Use case testing exercises end-to-end transactions by deriving tests from descriptions of how actors use the system.
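Equivalence partitioning (item 1 above) can be made concrete with a small example. The `ticket_price` function and its partitions are hypothetical, chosen only to illustrate the technique:

```python
# Equivalence partitioning sketch: the input domain of a hypothetical
# ticket_price function splits into four classes (invalid, child, adult,
# senior); one representative value per class suffices as a test case.
def ticket_price(age):
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 13:
        return 5    # child partition: 0-12
    if age < 65:
        return 10   # adult partition: 13-64
    return 7        # senior partition: 65+

# One representative input per equivalence class.
representatives = {-1: ValueError, 8: 5, 30: 10, 70: 7}
```

Testing one representative from each class exercises every distinct behavior without enumerating every possible age.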
ABSTRACT In order to reduce the number and time of layups and to increase the... - khasnaelvinurlita
This document discusses predictive maintenance of mechatronic systems to reduce downtime. It proposes performing predictive repairs on groups of components that have reached or exceeded their operating hours limit during a single stoppage. This approach allows repairs to be done in parallel, reducing system downtime and increasing production output. A mathematical model is presented showing the system states and transitions between states when components fail and the system is repaired.
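The grouping idea can be sketched as follows: components past their operating-hours limit are collected into one stoppage, so downtime is the longest single repair rather than the sum of all repairs. The component data below are invented for illustration:

```python
# Sketch of grouping due repairs into one stoppage. Repairing the due
# components in parallel bounds downtime by the longest repair instead of
# the total of all repairs done one after another.
def plan_stoppage(components):
    due = [c for c in components if c["hours"] >= c["limit"]]
    serial_downtime = sum(c["repair_h"] for c in due)
    parallel_downtime = max((c["repair_h"] for c in due), default=0)
    return due, serial_downtime, parallel_downtime

# Hypothetical components with accumulated hours, limits and repair times.
components = [
    {"name": "pump", "hours": 5200, "limit": 5000, "repair_h": 4},
    {"name": "belt", "hours": 4900, "limit": 4800, "repair_h": 2},
    {"name": "motor", "hours": 3000, "limit": 6000, "repair_h": 8},
]
due, serial_h, parallel_h = plan_stoppage(components)
```

Here the pump and belt are due, and parallel repair cuts downtime from 6 hours to 4, which is the production-output gain the document argues for.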
Specification based or black box techniques (andika m) - Andika Mardanu
This document discusses specification-based or black box testing techniques, specifically equivalence partitioning, boundary value analysis, decision tables, state transition testing, and use case testing. It provides definitions and explanations of each technique, including that equivalence partitioning divides test conditions into groups that should be handled equivalently by the system, decision tables deal with combinations of inputs and conditions, state transition testing models systems that can be in different states, and use case testing identifies test cases that exercise full system transactions.
With the emergence of virtualization and cloud computing technologies, several services are housed on virtualization platforms. Virtualization is the technology that many cloud service providers rely on for efficient management and coordination of the resource pool. As essential services are also housed on cloud platforms, it is necessary to ensure continuous availability by implementing all necessary measures. Windows Active Directory is one such service, developed by Microsoft for Windows domain networks. It is included in Windows Server operating systems as a set of processes and services for authentication and authorization of users and computers in a Windows domain network, and it is required to run continuously without downtime. As a result, there are chances of accumulation of errors or garbage leading to software aging, which in turn may lead to system failure and associated consequences. In this work, the software aging patterns of the Windows Active Directory service are studied. Aging of Active Directory needs to be predicted properly so that rejuvenation can be triggered to ensure continuous service delivery. To predict this time accurately, a model that uses a time series forecasting technique is built.
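A minimal version of such a forecast can be sketched with a least-squares trend: fit a line to observed free memory and solve for the time it reaches zero. A plain linear trend here stands in for the (unspecified) time-series model in the text, and the samples are hypothetical:

```python
# Least-squares trend sketch for predicting resource exhaustion time.
def predict_exhaustion_time(times, free_mem):
    n = len(times)
    mt = sum(times) / n
    mf = sum(free_mem) / n
    # Ordinary least-squares slope and intercept of free memory over time.
    slope = (sum((t - mt) * (f - mf) for t, f in zip(times, free_mem))
             / sum((t - mt) ** 2 for t in times))
    intercept = mf - slope * mt
    return -intercept / slope  # time at which the fitted line hits zero

# Hypothetical hourly samples of free memory (MB) on a domain controller.
eta = predict_exhaustion_time([0, 1, 2, 3], [100.0, 90.0, 80.0, 70.0])
```

The predicted exhaustion time is then the trigger point for scheduling rejuvenation before service availability degrades.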
The adoption of cloud environments for various applications has led to security and privacy concerns over users' data. Protecting user data and privacy on such platforms is an area of concern.
Many cryptographic strategies have been presented to provide secure sharing of resources on cloud platforms. These methods try to achieve a secure authentication strategy realizing features such as self-blindable access tickets, group signatures, anonymous access tickets, minimal disclosure of tickets and revocation, but each varies in how these features are realized. Each feature requires a different cryptographic mechanism, which induces computational complexity and hinders the deployment of these models in practical applications. Most of these techniques are designed for a particular application environment and adopt public key cryptography, which incurs high cost due to computational complexity.
To address these issues, this work presents secure and efficient privacy preserving of mining data on a public cloud platform by adopting a party- and key-based authentication strategy. The proposed SCPPDM (Secure Cloud Privacy Preserving Data Mining) is deployed on the Microsoft Azure cloud platform. Experiments are conducted to evaluate computational complexity, and the outcomes show that the proposed model achieves significant performance in terms of computation overhead and cost.
The document describes an automated process for bug triage that uses text classification and data reduction techniques. It proposes using Naive Bayes classifiers to predict the appropriate developers to assign bugs to by applying stopword removal, stemming, keyword selection, and instance selection on bug reports. This reduces the data size and improves quality. It predicts developers based on their history and profiles while tracking bug status. The goal is to more efficiently handle software bugs compared to traditional manual triage processes.
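The core of the triage step can be sketched with a bag-of-words Naive Bayes classifier with stopword removal. The stopword list, the bug reports and the developer names below are invented for illustration; stemming and instance selection from the text are omitted to keep the sketch short:

```python
import math
import re
from collections import Counter, defaultdict

# Naive Bayes bug-triage sketch with stopword removal; all data invented.
STOPWORDS = {"the", "a", "an", "is", "in", "on", "when", "to", "of"}

def tokenize(text):
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

def train(reports):
    # Count token occurrences per developer and documents per developer.
    counts, totals, docs, vocab = defaultdict(Counter), Counter(), Counter(), set()
    for text, dev in reports:
        for tok in tokenize(text):
            counts[dev][tok] += 1
            totals[dev] += 1
            vocab.add(tok)
        docs[dev] += 1
    return counts, totals, docs, vocab

def predict(model, text):
    counts, totals, docs, vocab = model
    n_docs = sum(docs.values())
    def score(dev):
        s = math.log(docs[dev] / n_docs)  # prior from assignment history
        for tok in tokenize(text):        # Laplace-smoothed likelihoods
            s += math.log((counts[dev][tok] + 1) / (totals[dev] + len(vocab)))
        return s
    return max(docs, key=score)

reports = [("ui button render broken", "alice"),
           ("ui layout render slow", "alice"),
           ("database query timeout", "bob"),
           ("database index corrupt", "bob")]
model = train(reports)
assignee = predict(model, "render issue in ui")
```

The developer prior plays the role of the "history and profiles" mentioned above: developers who have handled similar reports accumulate higher likelihoods for those terms.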
A KPI-based process monitoring and fault detection framework for large-scale ... - ISA Interchange
Large-scale processes, consisting of multiple interconnected sub-processes, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each sub-process, an instrument variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods.
In the present paper, the applicability and capability of A.I. techniques for effort estimation prediction have been investigated. Neuro-fuzzy models are found to be very robust, characterized by fast computation and capable of handling distorted data; given the non-linearity of the data, they are an efficient quantitative tool for predicting effort estimation. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment.
The initial parameters of the OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership function are optimally determined using a hybrid learning algorithm. The analysis shows that the effort estimation prediction model developed using the OHLANFIS technique performs well compared with the standard ANFIS model.
Similar to Stochastic behavior analysis of complex repairable industrial systems (20)
An optimal general type-2 fuzzy controller for Urban Traffic Network - ISA Interchange
This document presents an optimal general type-2 fuzzy controller (OGT2FC) for controlling traffic signal scheduling and phase succession to minimize wait times and average queue length. The OGT2FC uses a combination of general type-2 fuzzy logic sets and the Modified Backtracking Search Algorithm (MBSA) to optimize the membership function parameters. Simulation results show the OGT2FC performs better than conventional type-1 fuzzy controllers in regulating urban traffic flow.
Embedded intelligent adaptive PI controller for an electromechanical system - ISA Interchange
In this study, an intelligent adaptive controller approach using the interval type-2 fuzzy neural network (IT2FNN) is presented. The proposed controller consists of a lower level proportional-integral (PI) controller, which is the main controller, and an upper level IT2FNN that tunes the parameters of the PI controller on-line. The proposed adaptive PI controller based on IT2FNN (API-IT2FNN) is implemented practically using the Arduino DUE kit for controlling the speed of a nonlinear DC motor-generator system. The parameters of the IT2FNN are tuned on-line using the back-propagation algorithm. The Lyapunov theorem is used to derive the stability and convergence of the IT2FNN. The obtained experimental results, which are compared with other controllers, demonstrate that the proposed API-IT2FNN is able to improve the system response over a wide range of system uncertainties.
State of charge estimation of lithium-ion batteries using fractional order sl... - ISA Interchange
This paper presents a state of charge (SOC) estimation method based on fractional order sliding mode observer (SMO) for lithium-ion batteries. A fractional order RC equivalent circuit model (FORCECM) is firstly constructed to describe the charging and discharging dynamic characteristics of the battery. Then, based on the differential equations of the FORCECM, fractional order SMOs for SOC, polarization voltage and terminal voltage estimation are designed. After that, convergence of the proposed observers is analyzed by Lyapunov’s stability theory method. The framework of the designed observer system is simple and easy to implement. The SMOs can overcome the uncertainties of parameters, modeling and measurement errors, and present good robustness. Simulation results show that the presented estimation method is effective, and the designed observers have good performance.
Fractional order PID for tracking control of a parallel robotic manipulator t... - ISA Interchange
This paper presents the tracking control for a robotic manipulator of delta type employing fractional order PID controllers with a computed torque control strategy, contrasted with an integer order PID controller with the same strategy. The mechanical structure, kinematics and dynamic models of the delta robot are described. A SOLIDWORKS/MSC-ADAMS/MATLAB co-simulation model of the delta robot is built and employed for the stages of identification, design, and validation of control strategies. Identification of the dynamic model of the robot is performed using the least squares algorithm. A linearized model of the robotic system is obtained employing the computed torque control strategy, resulting in a decoupled double integrating system. From the linearized model of the delta robot, fractional order PID and integer order PID controllers are designed, analyzing the dynamical behavior for many evaluation trajectories. Controller robustness is evaluated against external disturbances employing performance indexes for the joint and spatial error, applied torque in the joints and trajectory tracking. Results show that fractional order PID with the computed torque control strategy has robust performance and active disturbance rejection when applied to parallel robotic manipulators on tracking tasks.
Fuzzy logic for plant-wide control of biological wastewater treatment process... - ISA Interchange
The application of control strategies is increasingly used in wastewater treatment plants with the aim of improving effluent quality and reducing operating costs. Due to concerns about the progressive growth of greenhouse gas emissions (GHG), these are also currently being evaluated in wastewater treatment plants. The present article proposes a fuzzy controller for plant-wide control of the biological wastewater treatment process. Its design is based on 14 inputs and 6 outputs in order to reduce GHG emissions, nutrient concentration in the effluent and operational costs. The article explains and shows the effect of each one of the inputs and outputs of the fuzzy controller, as well as the relationship between them. Benchmark Simulation Model no 2 Gas is used for testing the proposed control strategy. The simulation results show that the fuzzy controller is able to reduce GHG emissions while improving, at the same time, the common criteria of effluent quality and operational costs.
Design and implementation of a control structure for quality products in a cr... - ISA Interchange
In recent years, interest in petrochemical processes has been increasing, especially in the refinement area. However, the high variability in the dynamic characteristics of the atmospheric distillation column poses a challenge to obtaining quality products. To improve distillate quality in spite of changes in the input crude oil composition, this paper details a new design of a control strategy for a conventional crude oil distillation plant, defined using formal interaction analysis tools. The process dynamics and its control are simulated in the Aspen HYSYS dynamic environment under real operating conditions. The simulation results are compared against a typical control strategy commonly used in crude oil atmospheric distillation columns.
Model based PI power system stabilizer design for damping low frequency oscil... - ISA Interchange
This paper explores a two-level control strategy that blends a local controller with a centralized controller for low frequency oscillations in a power system. The proposed control scheme stabilizes local modes using a local controller and minimizes the effect of sub-system interconnection on performance through centralized control. For designing the local controllers in the form of a proportional-integral power system stabilizer (PI-PSS), a simple and straightforward frequency-domain direct synthesis method is considered, based on a suitable reference model derived from the desired requirements. Several examples, both on one machine infinite bus and multi-machine systems taken from the literature, illustrate the efficacy of the proposed PI-PSS. The effective damping of the systems is found to increase remarkably, which is reflected in the time responses; even unstable operation has been stabilized with improved damping after applying the proposed controller. The proposed controllers give remarkable improvement in damping the oscillations in all the illustrations considered here; for example, the damping factor is increased from 0.0217 to 0.666 in Example 1. The simulation results obtained by the proposed control strategy compare favorably with some controllers prevalent in the literature.
A comparison of a novel robust decentralized control strategy and MPC for ind... - ISA Interchange
This document summarizes a research article that compares a novel decentralized control strategy based on override control to a model predictive controller (MPC) for controlling an industrial high purity methanol distillation column. Both controllers were able to maintain tight product purity and high recovery specifications under disturbances. The MPC provided tighter control of product purity but used more energy, while the proposed override control provided tighter recovery control and had lower costs. An economic analysis showed the optimal choice depends on factors like energy costs.
Fault detection of feed water treatment process using PCA-WD with parameter o... - ISA Interchange
This research article proposes a new fault detection algorithm called PCA-WD that combines wavelet denoising (WD) with principal component analysis (PCA) to improve fault detection performance for feed water treatment processes (FWTP). The algorithm is applied to operational data from a FWTP sustaining two 1000 MW coal-fired power plants. Parameter selection for the PCA-WD algorithm is formulated as an optimization problem solved using particle swarm optimization to determine optimal parameters automatically rather than relying on individual experience. Results show that WD effectively reduces noise in PCA statistics, improving fault detection. The optimized PCA-WD algorithm outperforms classical PCA and a related method in detecting various faults in the FWTP data.
Model-based adaptive sliding mode control of the subcritical boiler-turbine s... - ISA Interchange
As higher requirements are imposed on load regulation and efficiency enhancement, the control performance of boiler-turbine systems has become much more important. In this paper, a novel robust control approach is proposed to improve the coordinated control performance for subcritical boiler-turbine units. To capture the key features of the boiler-turbine system, a nonlinear control-oriented model is established and validated with the historical operation data of a 300 MW unit. To achieve system linearization and decoupling, an adaptive feedback linearization strategy is proposed, which can asymptotically eliminate the linearization error caused by the model uncertainties. Based on the linearized boiler-turbine system, a second-order sliding mode controller is designed with the super-twisting algorithm. Moreover, the closed-loop system is proved robustly stable with respect to uncertainties and disturbances. Simulation results are presented to illustrate the effectiveness of the proposed control scheme, which achieves excellent tracking performance, strong robustness and chattering reduction.
A Proportional Integral Estimator-Based Clock Synchronization Protocol for Wi... - ISA Interchange
Clock synchronization is an issue of vital importance in applications of wireless sensor networks (WSNs). This paper proposes a proportional integral estimator-based protocol (EBP) to achieve clock synchronization for wireless sensor networks. As each local clock skew gradually drifts, synchronization accuracy will decline over time. Compared with existing consensus-based approaches, the proposed synchronization protocol improves synchronization accuracy under time-varying clock skews. Moreover, by restricting the synchronization error of clock skew to a relatively small quantity, it can reduce the frequency of periodic re-synchronization. Finally, a pseudo-synchronous implementation for skew compensation is introduced, since a truly synchronous protocol is unrealistic in practice. Numerical simulations illustrate the performance of the proposed protocol.
An artificial intelligence based improved classification of two-phase flow patte... - ISA Interchange
Flow pattern recognition is necessary to select design equations for finding operating details of the process and to perform computational simulations. Visual image processing can be used to automate the interpretation of patterns in two-phase flow. In this paper, an attempt has been made to improve the classification accuracy of the flow pattern of gas/liquid two-phase flow using fuzzy logic and Support Vector Machine (SVM) with Principal Component Analysis (PCA). Videos of six different types of flow patterns, namely annular flow, bubble flow, churn flow, plug flow, slug flow and stratified flow, are recorded for a period and converted to 2D images for processing. The textural and shape features extracted using image processing are applied as inputs to various classification schemes, namely fuzzy logic, SVM and SVM with PCA, in order to identify the type of flow pattern. The results obtained are compared, and it is observed that SVM with features reduced using PCA gives better classification accuracy and is computationally less intensive than the two other existing schemes. These results cover industrial application needs including oil and gas and any other gas-liquid two-phase flows.
New Method for Tuning PID Controllers Using a Symmetric Send-On-Delta Samplin... - ISA Interchange
In this paper we present a new method for tuning PI controllers with a symmetric send-on-delta (SSOD) sampling strategy. First we analyze the conditions that produce oscillations in event-based systems under SSOD sampling, using the Describing Function as the tool to address the problem. Once the conditions for oscillation are established, a new robustness-to-oscillation performance measure is introduced which ties in with the concept of phase margin, one of the most traditional measures of relative stability in closed-loop control systems; the application of the proposed robustness measure is therefore easy and intuitive. The method is tested by both simulations and experiments. Additionally, a Java application has been developed to aid in the design according to the results presented in the paper.
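The SSOD sampling strategy itself can be sketched briefly: a new sample is transmitted only when the signal has moved at least delta away from the last sent level, and the sent level moves in fixed delta steps. The signal values and delta below are illustrative assumptions:

```python
# Sketch of symmetric send-on-delta (SSOD) sampling over a discrete signal.
def ssod_sample(signal, delta):
    last = 0.0                 # last transmitted quantisation level
    events = [(0, last)]       # (sample index, transmitted level)
    for i, v in enumerate(signal[1:], start=1):
        # Step the sent level by +/- delta until within one band of v.
        while abs(v - last) >= delta:
            last += delta if v > last else -delta
            events.append((i, last))
    return events

events = ssod_sample([0.0, 0.2, 0.5, 1.1, 1.0], delta=0.5)
```

Small movements inside a delta band produce no traffic, which is the event-based saving the paper's tuning method is designed around.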
Load estimator-based hybrid controller design for two-interleaved boost conve... - ISA Interchange
This paper is devoted to the development of a hybrid controller for a two-interleaved boost converter dedicated to renewable energy and automotive applications. The control requirements, summarized as fast transient response and low input current ripple, are formulated as a problem of fast stabilization of a predefined optimal limit cycle, and solved using hybrid automaton formalism. In addition, a real-time estimation of the load is developed using an algebraic approach for online adjustment of the hybrid controller. Mathematical proofs are provided with simulations to illustrate the effectiveness and the robustness of the proposed controller despite different disturbances. Furthermore, a fuel cell system supplying a resistive load through a two-interleaved boost converter is also highlighted.
Effects of Wireless Packet Loss in Industrial Process Control Systems - ISA Interchange
Timely and reliable sensing and actuation control are essential in networked control. This depends on not only the precision/quality of the sensors and actuators used but also on how well the communications links between the field instruments and the controller have been designed. Wireless networking offers simple deployment, reconfigurability, scalability, and reduced operational expenditure, and is easier to upgrade than wired solutions. However, the adoption of wireless networking has been slow in industrial process control due to the stochastic and less than 100% reliable nature of wireless communications and lack of a model to evaluate the effects of such communications imperfections on the overall control performance. In this paper, we study how control performance is affected by wireless link quality, which in turn is adversely affected by severe propagation loss in harsh industrial environments, co-channel interference, and unintended interference from other devices. We select the Tennessee Eastman Challenge Model (TE) for our study. A decentralized process control system, first proposed by N. Ricker, is adopted that employs 41 sensors and 12 actuators to manage the production process in the TE plant. We consider the scenario where wireless links are used to periodically transmit essential sensor measurement data, such as pressure, temperature and chemical composition to the controller as well as control commands to manipulate the actuators according to predetermined setpoints. We consider two models for packet loss in the wireless links, namely, an independent and identically distributed (IID) packet loss model and the two-state Gilbert-Elliot (GE) channel model. While the former is a random loss model, the latter can model bursty losses. With each channel model, the performance of the simulated decentralized controller using wireless links is compared with the one using wired links providing instant and 100% reliable communications. 
The sensitivity of the controller to the burstiness of packet loss is also characterized in different process stages. The performance results indicate that wireless links with redundant bandwidth reservation can meet the requirements of the TE process model under normal operational conditions. When disturbances are introduced in the TE plant model, wireless packet loss during transitions between process stages needs further protection in severely impaired links. Techniques such as re-transmission scheduling, multi-path routing and enhanced physical layer design are discussed, and the latest industrial wireless protocols are compared.
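The two-state Gilbert-Elliot channel mentioned above can be sketched as a small Markov chain with per-state loss probabilities. The transition and loss probabilities below are illustrative assumptions, not the values used in the study:

```python
import random

# Two-state Gilbert-Elliot loss sketch: a good (G) and a bad (B) channel
# state with different loss probabilities and Markov transitions.
def gilbert_elliot(n, p_gb, p_bg, loss_g, loss_b, seed=0):
    rng = random.Random(seed)
    state, losses = "G", []
    for _ in range(n):
        p_loss = loss_g if state == "G" else loss_b
        losses.append(rng.random() < p_loss)
        if state == "G" and rng.random() < p_gb:
            state = "B"
        elif state == "B" and rng.random() < p_bg:
            state = "G"
    return losses

# A sticky bad state (small p_bg) yields bursty losses; IID loss is the
# degenerate case where both states behave identically.
losses = gilbert_elliot(10_000, p_gb=0.01, p_bg=0.1, loss_g=0.001, loss_b=0.3)
```

Feeding such loss traces into the simulated sensor and actuator links is what lets the study compare random (IID) against bursty (GE) loss patterns.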
Fault Detection in the Distillation Column Process - ISA Interchange
Chemical plants are complex large-scale systems that require robust fault detection schemes to ensure high product quality, reliability and safety under different operating conditions. The present paper is concerned with a feasibility study of the application of black-box modeling and the Kullback-Leibler divergence (KLD) to fault detection in a distillation column process. A Nonlinear Auto-Regressive Moving Average with eXogenous input (NARMAX) polynomial model is first developed to estimate the nonlinear behavior of the plant, and the KLD is then applied to detect abnormal modes. The proposed FD method is implemented and validated experimentally using realistic faults on a distillation plant of laboratory scale. The experimental results clearly demonstrate that the proposed method is effective and gives early alarms to operators.
Neural Network-Based Actuator Fault Diagnosis for a Non-Linear Multi-Tank System - ISA Interchange
The paper is devoted to the problem of robust actuator fault diagnosis of dynamic non-linear systems. In the proposed method, it is assumed that the diagnosed system can be modelled by a recurrent neural network, which can be transformed into linear parameter varying form. Such a system description allows developing a design scheme for the robust unknown input observer within the H∞ framework for a class of non-linear systems. The proposed approach is designed in such a way that a prescribed disturbance attenuation level is achieved with respect to the actuator fault estimation error, while guaranteeing the convergence of the observer. The application of the robust unknown input observer enables actuator fault estimation, which allows applying the developed approach to fault tolerant control tasks.
An adaptive PID like controller using mix locally recurrent neural network fo... - ISA Interchange
Being a complex, non-linear and coupled system, the robotic manipulator cannot be effectively controlled using a classical proportional integral derivative (PID) controller. To enhance the effectiveness of the conventional PID controller for nonlinear and uncertain systems, its gains should be conservatively tuned and should adapt to process parameter variations. In this work, a mix locally recurrent neural network (MLRNN) architecture is investigated to mimic a conventional PID controller; it consists of at most three hidden nodes which act as proportional, integral and derivative nodes. The gains of the mix locally recurrent neural network based PID (MLRNNPID) controller scheme are initialized with a newly developed cuckoo search algorithm (CSA) based optimization method rather than being set randomly. A sequential learning based least squares algorithm is then investigated for on-line adaptation of the gains of the MLRNNPID controller. The performance of the proposed controller scheme is tested against plant parameter uncertainties and external disturbances for both links of a two link robotic manipulator with variable payload (TL-RMWVP). The stability of the proposed controller is analyzed using Lyapunov stability criteria. A performance comparison is carried out among the MLRNNPID controller, a CSA optimized NNPID (OPTNNPID) controller and a CSA optimized conventional PID (OPTPID) controller in order to establish the effectiveness of the MLRNNPID controller.
A method to remove chattering alarms using median filters - ISA Interchange
Chattering alarms are the most common nuisance alarms and can reduce the usability of, and cause a confidence crisis in, alarm systems for industrial plants. This paper addresses chattering alarm reduction using median filters. Two rules are formulated to design the window size of median filters. If the alarm probability is estimated using process data, one rule is based on the probability of alarms satisfying requirements on the false alarm rate or missed alarm rate. If only historical alarm data are available, the other rule is based on the percentage reduction of chattering alarms using the alarm duration distribution. Experimental results for industrial cases confirm that the proposed method is effective.
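The median-filter idea can be sketched directly on a binary alarm sequence: isolated spikes shorter than half the window are suppressed, while sustained alarms pass through unchanged. The alarm sequence and window size below are illustrative; the paper's window-design rules are not reproduced here:

```python
# Median-filter sketch for chattering alarm reduction over a 0/1 alarm signal.
def median_filter(signal, window):
    half = window // 2
    # Pad edges with the nearest value so the output length matches the input.
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sorted(padded[i:i + window])[half] for i in range(len(signal))]

raw      = [0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0]   # one spike, one real alarm
filtered = median_filter(raw, window=3)
```

With a window of 3, the single-sample spike is removed while the three-sample alarm survives, which is exactly the chattering-reduction behavior the window-size rules are designed to tune.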
Design of a new PID controller using predictive functional control optimizati… (ISA Interchange)
An improved proportional integral derivative (PID) controller based on predictive functional control (PFC) is proposed and tested on the chamber pressure in an industrial coke furnace. The proposed design is motivated by the fact that PID controllers for industrial processes with time delay may not achieve the desired control performance because of unavoidable model/plant mismatches, while model predictive control (MPC) is suitable for such situations. In this paper, PID control and the PFC algorithm are combined to form a new PID controller that has the basic characteristics of the PFC algorithm and, at the same time, the simple structure of a traditional PID controller. The proposed controller was tested in terms of set-point tracking and disturbance rejection, and the obtained results showed that the proposed controller had better overall performance compared with traditional PID controllers.
The Container Port Performance Index for the year 2023 (SPATPortToamasina)
A comparable assessment of performance based on ships' time in port
The objective of the CPPI is to identify areas for improvement that can ultimately benefit all stakeholders, from shipping lines to national governments to consumers. It is designed to serve as a reference point for the key actors of the global economy, including port authorities and operators, national governments, supranational organizations, development agencies, various maritime interests, and other public and private actors in trade, logistics and supply-chain services.
The development of the CPPI rests on the total time spent by container ships in ports, in the manner explained in the following sections of the report, and as in previous iterations of the CPPI. This fourth iteration uses data for the full 2023 calendar year. It continues the change introduced last year of including only ports that had a minimum of 24 valid port calls within the 12-month study period. The number of ports included in the 2023 CPPI is 405.
As in previous editions of the CPPI, producing the ranking relies on two different methodological approaches: an administrative, or technical, approach, a pragmatic methodology reflecting expert knowledge and judgment; and a statistical approach, using factor analysis (FA), or more precisely matrix factorization. The use of these two approaches aims to ensure that the container port performance ranking reflects actual port performance as faithfully as possible, while being statistically robust.
Enhancing Adoption of AI in Agri-food: Introduction (Cor Verdouw)
Introduction to the Panel on: Pathways and Challenges: AI-Driven Technology in Agri-Food, AI4Food, University of Guelph
“Enhancing Adoption of AI in Agri-food: a Path Forward”, 18 June 2024
Enabling Digital Sustainability (Jutta Eckstein)
This is a New Zealand wide meetup event with meetup groups from Auckland, Wellington and Christchurch attending and open to anyone with an interest in digital sustainability or agile. All welcome. Joke, this is how it started. Jutta is now also available in Germany, i.e. hosted by Berlin/Brandenburg
According to the World Economic Forum, digital technologies can help reduce global carbon emissions by up to 15%. However, digitalization also comes with some challenges. Thus, if we want to make a positive impact by increasing sustainability, we need to address challenges like the digital divide, energy consumption of IT, or the rise of electronic waste. In this talk, I want to explore how Agile can help to leverage Digital Sustainability.
Unlocking WhatsApp Marketing with HubSpot: Integrating Messaging into Your Ma… (Niswey)
50 million companies worldwide leverage WhatsApp as a key marketing channel. You may have considered adding it to your marketing mix, or probably already driving impressive conversions with WhatsApp.
But wait. What happens when you fully integrate your WhatsApp campaigns with HubSpot?
That's exactly what we explored in this session.
We take a look at everything that you need to know in order to deploy effective WhatsApp marketing strategies, and integrate it with your buyer journey in HubSpot. From technical requirements to innovative campaign strategies, to advanced campaign reporting - we discuss all that and more, to leverage WhatsApp for maximum impact. Check out more details about the event here https://events.hubspot.com/events/details/hubspot-new-delhi-presents-unlocking-whatsapp-marketing-with-hubspot-integrating-messaging-into-your-marketing-strategy/
Revolutionizing Surface Protection: Xlcoatings Nano Based Solutions (Excel coatings)
Excel Coatings is transforming surface protection with its cutting-edge, eco-friendly nano-based coatings. This presentation delves into their innovative product lineup, including Excel CoolCoat for roof cooling, Excel NanoSeal for cement surfaces, Excel StayCool for UV-filtering glass, Excel StayClean for solar panels, Excel CoolTile for heat-reflective tiles, and Excel InsulX for film insulation.
Stochastic behavior analysis of complex repairable industrial systems utilizing uncertain data
Harish Garg*, S.P. Sharma
Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee 247667, Uttarakhand, India
Article info
Article history:
Received 13 November 2011
Received in revised form
26 June 2012
Accepted 26 June 2012
Available online 15 July 2012
Keywords:
Paper mill
Particle swarm optimization
Fuzzy logic
Lambda-Tau methodology
Abstract
The purpose of this paper is to present a novel technique for analyzing the behavior of an industrial system stochastically by utilizing vague, imprecise, and uncertain data. In the present study two important tools, namely the Lambda-Tau methodology and particle swarm optimization, are used in combination to present a novel technique named particle swarm optimization based Lambda-Tau (PSOBLT) for analyzing the behavior of a complex repairable system stochastically up to a desired degree of accuracy. Expressions of reliability indices such as failure rate, repair time, mean time between failures (MTBF), expected number of failures (ENOF), reliability and availability of the system are obtained by using the Lambda-Tau methodology, and particle swarm optimization is used to construct their membership functions. The interaction among the working units of the system is modeled with the help of Petri nets. The feeding unit of a paper mill situated in a northern part of India, producing approximately 200 tons of paper per day, has been considered to demonstrate the proposed approach. Sensitivity analysis of the system's behavior has also been done. The behavior analysis results computed by the PSOBLT technique have a reduced region of prediction in comparison with the existing technique's region, i.e. the uncertainties involved in the analysis are reduced. Thus, it may be a more useful analysis tool to assess the current system conditions and the involved uncertainties.
© 2012 ISA. Published by Elsevier Ltd. All rights reserved.
1. Introduction
Industrial systems are generally repairable and consist of several subsystems. Each subsystem is composed of various complex components, and the probability of system survival depends directly on each of its constituent components. Industrial systems are expected to be operational and available for the maximum possible time so as to maximize the overall production and hence profit. However, failure is an unavoidable phenomenon in mechanical systems/process plants/components. These failures may be the result of human error, poor maintenance, or inadequate testing and inspection. Therefore, the systems and components undergo several failure-repair cycles, including logistic delays while performing repairs, which lead to the degradation of the systems' overall performance. Analyzing the behavior of these systems helps to assess their overall performance and to carry out design modifications so that timely action may be initiated to achieve the desired industrial goals.
But the complexity of industrial systems and the non-linearity of their behavior are such that explicit functional models of the system behavior are not readily available. Due to these obstacles, researchers have given attention to systems' behavior analysis [1–6]. Most of the above-cited works depended on available historical records, gathered from various sources, and utilized traditional analysis techniques such as the Markovian approach, fault tree analysis (FTA), reliability block diagrams (RBD), Petri nets (PN), etc. to model the systems' behavior. They analyzed or optimized systems' behavior in terms of one specific reliability index, such as reliability, availability or maintainability, at a time. For example, in [1,3,4] the behavior/performance of industrial systems was analyzed utilizing the Markovian approach. Gupta et al. [5] used a numerical method for the behavior analysis of a dairy plant. Aksu et al. [2] proposed a methodology based on FTA and the Markovian approach for the reliability and availability assessment of a pod propulsion system. Yuzgec [7] optimized the feeding profile of an industrial-scale fed-batch baker's yeast fermentation process using four different differential evolution algorithms. Wu et al. [8] proposed an improved particle swarm optimization algorithm for solving reliability problems. Additionally, some other types of reliability problems have been developed by researchers, such as process control reliability [9], distribution system reliability [10], reliability of dynamic systems [11] and so on.
All of them have used historical data which are either out of date or collected under different operating and environmental conditions. Thus, the data used were vague, imprecise, and uncertain; i.e. historical records can only represent the past behavior but
Contents lists available at SciVerse ScienceDirect
ISA Transactions
journal homepage: www.elsevier.com/locate/isatrans
0019-0578/$ - see front matter © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.isatra.2012.06.012
* Corresponding author. Fax: +91 9897599923.
E-mail address: harishg58iitr@gmail.com (H. Garg).
ISA Transactions 51 (2012) 752–762
may be unable to predict the future behavior of the equipment. Unfortunately, using historical databases and rough (approximate) estimates, the estimated (crisp) failure and repair rates have some uncertainties. Thus, current crisp failure and repair rates are not sufficient to account for the involved uncertainties.
Another prominent shortcoming of the existing methodologies is that traditional analytical techniques need large amounts of data, which are difficult to obtain because of various practical constraints, such as the rarity of component events, human errors, and economic considerations, for the estimation of the failure/repair characteristics of the system. In such circumstances, it is usually not easy to analyze the behavior and performance of these systems up to a desired degree of accuracy by utilizing the available resources, data, and information. Furthermore, even if the analysis has been done using some suitable technique listed above, any reliability index alone is inadequate to give a deeper idea about such systems' behavior, because many factors exist which together influence the systems' performance and consequently their behavior. Thus, to analyze the system's behavior more closely, other reliability criteria should be included in the traditional analysis and the involved uncertainties must be quantified. The inclusion of various reliability indices as criteria helps the management to understand the effect of increasing/decreasing the failure and repair rates of a particular component or subsystem upon the overall performance of the system, and the quantification of uncertainties provides results closer to those of the real situational environment.
Knezevic and Odoom [12] highlighted these ideas and analyzed the behavior of a general repairable system by introducing the concept of the fuzzy Lambda-Tau technique coupled with PN, in terms of various reliability indices, utilizing quantified data. In their approach, PN is used to model the system while fuzzy set theory is used to quantify the uncertain, vague, and imprecise data. They used fuzzy triangular numbers to quantify the uncertainties involved in the failure/repair data because they are easy to prepare, evaluate, and interpret for engineering data. In their analysis several reliability indices are used, such as the failure rate, repair time, mean time between failures (MTBF), expected number of failures (ENOF), and the availability and reliability of the system, which give a sounder idea about the system's behavior. Komal et al. [13] used this approach for the behavior analysis of the press unit of a paper mill, using FTA instead of PN, while the authors in [14–17] have analyzed the behavior of some complex repairable industrial systems by using PN and the fuzzy approach.
It has been observed from these studies that when this approach is applied to a system whose structure becomes more complex, or whose number of components increases, the computed reliability indices in the form of fuzzy membership functions have a wide spread, due to the various arithmetic operations involved in the calculations, and thus cannot give a precise idea about the behavior of the system [18]. To reduce the uncertainty level in the analysis, the spread of each reliability index must be reduced up to a desired degree of accuracy so that plant personnel may use these indices to analyze the system's behavior more closely and take sounder decisions to improve the performance of the plant. Mon and Cheng [19] suggested a way to optimize the spread of the fuzzy membership function of a non-repairable system using the available software package GINO. Also, a variety of methods and algorithms for optimization exist in the literature and have been applied in various technological fields during the last three decades [20–22]. Particle swarm optimization (PSO) is one such widely used algorithm and hence can be used to optimize the spread of the fuzzy membership functions to reduce the uncertainties up to a desired degree of accuracy.
Thus, the main objective of this paper is to quantify the uncertainties with the help of fuzzy numbers and to develop a technique to analyze the system's behavior more closely and to make the decisions more realistic and generic for further application. In this paper, a technique named particle swarm optimization-based Lambda-Tau (PSOBLT) has been developed for analyzing the behavior of complex repairable industrial systems. It is observed from the study that, using uncertain and limited data for a complex repairable industrial system, the stochastic behavior can be analyzed up to a desired degree of accuracy. Plant personnel may use the results and can give guidelines to improve the system's performance by adopting suitable maintenance strategies. An example of the feeding unit in a paper mill is taken into account to demonstrate the proposed technique. Results obtained from the PSOBLT technique are compared with the results of the existing Lambda-Tau and genetic algorithms-based Lambda-Tau (GABLT) techniques. The obtained results will help the management in reallocating the resources to achieve the targeted goal of higher profit.
2. Petri net theory
Petri nets (PN), developed by Carl Petri [23], are a useful tool for analyzing and modelling the dynamic behaviour of complex systems with concurrent discrete events [24]. Mathematically, a Petri net is a 5-tuple PN = (P, T, F, W, M0), where P = {p1, p2, ..., pm} is a finite set of places, T = {t1, t2, ..., tn} is a finite set of transitions, F ⊆ (P × T) ∪ (T × P) is a set of arcs, W: F → {1, 2, 3, ...} is a weight associated with the arcs in F, M0: P → {0, 1, 2, ...} is the initial marking, P ∩ T = ∅ and P ∪ T ≠ ∅.
The PN in its simplest form is a directed bipartite graph, where the two types of disjoint nodes are known as places (drawn as circles) and transitions (drawn as boxes or bars). To build a Petri net model, the events and their conditions and consequences in a system are first defined and then represented by transitions and places in the Petri net model. In modeling [24], using the concept of conditions and events, places represent conditions and transitions represent events. A transition has a certain number of input places and output places representing the preconditions and post-conditions of an event. The places are connected to the transitions by input and output arcs. A directed arc (F) from a transition to a place is called an input arc, and one from a place to a transition is called an output arc, with respect to the place; the converse holds with respect to the transition.
Similar to a fault tree model, a PN also graphically represents the cause-and-effect relationships and interactions among the working units of the system to be modeled [25]. Obtaining minimal cut sets in a fault tree model is a tedious process due to the large number of gates and basic events. Contrary to fault trees, Petri nets can derive the minimal cut and path sets simultaneously and more efficiently [12,25]. A PN has a static part as well as a dynamic part. The static part consists of places, transitions, and arrows, while the dynamic part is related to the marking of the graph by tokens, which are present or absent and evolve dynamically on the firing of valid transitions. In this study, only the static part of the PN is used to model the quantitative behavior of the system, i.e. the tokens are omitted and it is assumed that transitions are not timed, i.e. they are immediate transitions.
For more details, refer to [26]. Fig. 1(a) and (b) illustrate the
equivalent PN models, corresponding to the logical basic AND and
OR gates.
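As an illustrative aside (not from the paper; the class and field names are hypothetical), the 5-tuple definition above and the Fig. 1 gate models can be sketched in code:

```python
from dataclasses import dataclass, field

# Sketch of the 5-tuple PN = (P, T, F, W, M0) defined above.
@dataclass
class PetriNet:
    places: set                                    # P, finite set of places
    transitions: set                               # T, finite set of transitions
    arcs: set                                      # F ⊆ (P×T) ∪ (T×P), directed arcs
    weights: dict = field(default_factory=dict)    # W: F -> {1, 2, 3, ...}
    marking: dict = field(default_factory=dict)    # M0: P -> {0, 1, 2, ...}

    def is_valid(self):
        # P and T must be disjoint, and P ∪ T must be non-empty
        return self.places.isdisjoint(self.transitions) and bool(self.places | self.transitions)

# Fig. 1 analogue: an AND gate as a single transition with two input places
pn_and = PetriNet(
    places={"p1", "p2", "p3"},
    transitions={"t_and"},
    arcs={("p1", "t_and"), ("p2", "t_and"), ("t_and", "p3")},
)
print(pn_and.is_valid())  # True
```

Only this static structure is needed here, since the study omits tokens and timed transitions.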
3. Basic notation on fuzzy approach
This section presents only those basic concepts of fuzzy set theory that are helpful for analyzing system behavior.
3.1. Crisp versus fuzzy sets
Crisp (classical) sets contain objects that satisfy precise properties of membership. Only two possibilities exist: an element belongs to, or does not belong to, a set. This binary issue of membership can be represented mathematically by the indicator function

    X_A(x) = { 1 if x ∈ A; 0 if x ∉ A }    (1)
On the other hand, fuzzy sets contain objects that satisfy imprecise properties of membership, i.e. membership of an object in a fuzzy set can be partial [27]. Contrary to classical sets, fuzzy sets accommodate various degrees of membership on the real continuous interval [0, 1], where '0' corresponds to no membership and '1' corresponds to full membership. Mathematically, a fuzzy set Ã is defined by its membership function μ_Ã(x), which satisfies

    μ_Ã(x) ∈ [0, 1]    (2)

where μ_Ã(x) is the degree of membership of element x in the fuzzy set Ã.
3.2. Extension principle
The extension principle was developed by [27,28] and later elaborated by [29] to enable the extension of the domain of a function to fuzzy sets. It plays a fundamental role in translating set-based concepts to their fuzzy set counterparts. A principle for fuzzifying crisp functions (or possibly crisp relations) is called an extension principle [30].

A crisp function f: X → Y, defined on two universes of discourse X and Y, is fuzzified when it is extended to act on fuzzy sets F̃(X) and F̃(Y). The corresponding fuzzified function has the form f: F̃(X) → F̃(Y).
3.3. α-cuts

The α-cut of a fuzzy set Ã, denoted by A_α, is a crisp set consisting of the elements of Ã having a degree of membership of at least α, and is mathematically defined as

    A_α = {x ∈ X : μ_Ã(x) ≥ α}    (3)

where α is a parameter in the range 0 ≤ α ≤ 1 and X is the universe of discourse. The concept of the α-cut offers a method for resolving any fuzzy set in terms of constituent crisp sets.
3.4. Membership functions
The membership function is one of the most important concepts in fuzzy set theory. Membership functions are used to represent various fuzzy sets. Many membership functions, such as normal, triangular and trapezoidal, can be used to represent fuzzy numbers. However, triangular membership functions (TMF) are widely used for calculating and interpreting reliability data because of their simplicity and understandability [31,32]. The decision to select triangular fuzzy numbers (TFNs) lies in their ease of representing the membership function effectively and of incorporating the judgment distribution of multiple experts. This is not true for more complex membership functions, such as the trapezoidal one. For instance, imprecise or incomplete information such as a low/high failure rate, i.e. about 4 or between 5 and 7, is well represented by a TMF. In the present paper the triangular membership function is used as it not only conveys the behavior of system parameters but also reflects the dispersion of the data adequately. The dispersion takes care of the inherent variation in human performance and the vagueness in system performance due to age and adverse operating conditions. Thus it becomes intuitive for engineers to arrive at the right decisions.
A triangular fuzzy number (TFN) is defined by the ordered triplet Ã = (a, b, c) representing, respectively, the lower value, the modal value, and the upper value of a triangular fuzzy membership function. Its membership function μ_Ã: R → [0, 1] is defined as

    μ_Ã(x) = (x − a)/(b − a)   if a ≤ x ≤ b
             1                 if x = b
             (c − x)/(c − b)   if b ≤ x ≤ c    (4)

The α-cut of the fuzzy number (a, b, c) is defined below and shown graphically in Fig. 2:

    A_α = [a^(α), c^(α)]    (5)

The interval of confidence defined by the α-cuts can be written as

    A_α = [(b − a)α + a, −(c − b)α + c]    (6)
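As an illustrative aside (not from the paper), Eqs. (4)–(6) translate directly into code:

```python
# Sketch of Eqs. (4)-(6): membership value and alpha-cut of a TFN (a, b, c).
def tfn_membership(x, a, b, c):
    """Triangular membership function mu_A(x) from Eq. (4)."""
    if a <= x <= b and b > a:
        return (x - a) / (b - a)
    if x == b:
        return 1.0
    if b <= x <= c and c > b:
        return (c - x) / (c - b)
    return 0.0

def tfn_alpha_cut(alpha, a, b, c):
    """Interval of confidence [(b - a)alpha + a, -(c - b)alpha + c] from Eq. (6)."""
    return ((b - a) * alpha + a, -(c - b) * alpha + c)

# Example with the TFN (2, 4, 6)
print(tfn_membership(3, 2, 4, 6))   # 0.5
print(tfn_alpha_cut(0.5, 2, 4, 6))  # (3.0, 5.0)
```

At α = 1 the cut collapses to the modal value b, and at α = 0 it recovers the full support [a, c].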
The basic arithmetic operations, i.e. addition, subtraction, multiplication and division, on two fuzzy sets Ã and B̃ are shown in Table 1 for the intervals A_α = [A1^(α), A3^(α)], B_α = [B1^(α), B3^(α)], α ∈ [0, 1].

It is clear that the multiplication and division of two TFNs are not again TFNs with linear sides, but new fuzzy numbers with parabolic sides.
Fig. 1. Petri net model of logical AND and OR operations.
Fig. 2. Triangular fuzzy number of fuzzy set Ã.
Table 1
Basic operations on fuzzy numbers.

Operation       Crisp   Fuzzy
Addition        A + B   Ã + B̃ = [A1^(α) + B1^(α), A3^(α) + B3^(α)]
Subtraction     A − B   Ã − B̃ = [A1^(α) − B3^(α), A3^(α) − B1^(α)]
Multiplication  A · B   Ã · B̃ = [A1^(α) · B1^(α), A3^(α) · B3^(α)]
Division        A ÷ B   Ã ÷ B̃ = [A1^(α) ÷ B3^(α), A3^(α) ÷ B1^(α)], if 0 ∉ [B1^(α), B3^(α)]
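The Table 1 operations amount to interval arithmetic on α-cuts. A minimal sketch (illustrative; the multiplication rule as tabulated is valid for the positive intervals that arise from failure/repair data):

```python
# Sketch of Table 1: interval arithmetic on alpha-cuts [A1, A3] and [B1, B3].
def interval_add(A, B):
    return (A[0] + B[0], A[1] + B[1])

def interval_sub(A, B):
    return (A[0] - B[1], A[1] - B[0])

def interval_mul(A, B):
    # as in Table 1; valid for the positive intervals of failure/repair data
    return (A[0] * B[0], A[1] * B[1])

def interval_div(A, B):
    if B[0] <= 0 <= B[1]:
        raise ZeroDivisionError("0 must not lie in the divisor interval")
    return (A[0] / B[1], A[1] / B[0])

A, B = (2.0, 4.0), (1.0, 2.0)
print(interval_sub(A, B))  # (0.0, 3.0)
print(interval_div(A, B))  # (1.0, 4.0)
```

Note how subtraction and division pair the lower end of one operand with the upper end of the other, which is exactly what makes the spread grow under repeated fuzzy arithmetic.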
4. Methodology for behavior analysis
The motive of the study is to analyze the behavior of the
system by utilizing quantified vague, imprecise and uncertain
information/data.
4.1. Lambda-Tau methodology
The Lambda-Tau methodology is a traditional method in which a fault tree is used to model the system. The constant failure rate model is adopted in this method, and the basic expressions used to evaluate the system's failure rate (λ) and repair time (τ) associated with the logical AND and OR gates are summarized in Table 2 [12,33]. Knezevic and Odoom [12] extended this idea by coupling it with PN and fuzzy set theory and analysed the various reliability parameters (indices) in the form of fuzzy membership functions for a repairable system. Their approach is based on qualitative modeling using PN and quantitative analysis using the Lambda-Tau method of solution, with basic events represented by fuzzy numbers with triangular membership functions.

The disadvantage of this methodology is that, as the number of components increases or the system structure becomes more complex, the results in the form of fuzzy membership functions have a wide spread, due to the various fuzzy arithmetic operations used in the calculations [18]. So, to analyze the stochastic behavior of a complex industrial system up to a desired degree of accuracy, an effective and advanced technique is needed. For this, the PSOBLT technique is introduced in this paper and is described herein.
4.2. PSOBLT technique
In the PSOBLT technique, two important tools, namely the Lambda-Tau methodology and PSO, are used in combination. This technique utilizes ordinary arithmetic and optimization techniques instead of fuzzy arithmetic for the computation of the system's fuzzy reliability indices.

The main assumptions used in this technique are given below:

- component failure and repair rates are statistically independent, constant, very small and obey the exponential distribution function;
- λ ≪ τ and their product is small;
- after repair, the repaired component is considered as good as new;
- the standby units are of the same nature and capacity as the active units;
- the system structure is precisely known.
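Under the constant-rate exponential assumption above, the component-level indices follow standard textbook formulas. The sketch below is only an illustration of those standard results with hypothetical data (the paper's own index expressions come from Tables 2 and 3):

```python
import math

# Standard constant failure/repair rate results (assumed illustration):
# R(t) = exp(-lambda t); for a repairable unit with repair rate mu = 1/tau,
# A(t) = mu/(lambda+mu) + lambda/(lambda+mu) * exp(-(lambda+mu) t).
def reliability(lam, t):
    """Reliability of an exponentially distributed lifetime."""
    return math.exp(-lam * t)

def availability(lam, tau, t):
    """Point availability of a repairable unit starting in the working state."""
    mu = 1.0 / tau
    return mu / (lam + mu) + (lam / (lam + mu)) * math.exp(-(lam + mu) * t)

lam, tau = 1e-3, 2.0   # hypothetical failure rate (per h) and repair time (h)
print(round(reliability(lam, 100.0), 4))      # 0.9048
print(round(availability(lam, tau, 1e6), 4))  # steady state, approx. 0.998
```

The λ ≪ τ-product assumption keeps the steady-state unavailability λ/(λ + μ) small, as in the example.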
The strategy followed in this approach is shown by the flow chart in Fig. 3, and details are given hereafter.

The first step in this technique is the information extraction phase. In this phase, information in the form of failure rates (λ's) and repair times (τ's) of each component of the system is extracted from the available historical data/logbooks etc., which are imprecise in nature due to the reasons already stated above.

In the next step, the obtained crisp data are converted into fuzzy numbers to account for the uncertainties in the analysis, as this allows experts' opinions, linguistic variables, operating conditions, uncertainty and imprecision in the reliability information. The triangular fuzzy number (TFN) is used for this purpose because it is easy for presentation, evaluation and interpretation of engineering data [31,32]. Thus, more specifically, the extracted crisp failure rates and repair times are converted into triangular fuzzy numbers having a known spread (support) suggested by a decision maker (DM)/design maintenance expert/system reliability analyst. Input data for the failure rate λi and repair time τi of the ith component of a system, in the form of TFNs with an equal spread of ±15% in both directions (left and right of the middle value), are shown in Fig. 4.
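This fuzzification step can be sketched as follows (an illustrative aside; the ±15% spread is the one quoted above, and the data are hypothetical):

```python
# Sketch of the fuzzification step (Step 2 of Fig. 3): a crisp failure rate
# or repair time is spread into a TFN (a, b, c) with +/-15% support.
def crisp_to_tfn(value, spread=0.15):
    """Return the TFN (lower, modal, upper) around a crisp value."""
    return (value * (1 - spread), value, value * (1 + spread))

lam_i = 2.0e-3   # hypothetical crisp failure rate of component i (per h)
print(tuple(round(v, 6) for v in crisp_to_tfn(lam_i)))  # (0.0017, 0.002, 0.0023)
```

The spread is the analyst's choice; any other support suggested by the DM drops in via the `spread` argument.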
Table 2
Basic expressions of the Lambda-Tau methodology.

Gate    Expression
λ_AND   [∏_{j=1}^{n} λ_j] · Σ_{i=1}^{n} [∏_{j=1, j≠i}^{n} τ_j]
τ_AND   [∏_{i=1}^{n} τ_i] / Σ_{j=1}^{n} [∏_{i=1, i≠j}^{n} τ_i]
λ_OR    Σ_{i=1}^{n} λ_i
τ_OR    [Σ_{i=1}^{n} λ_i τ_i] / [Σ_{i=1}^{n} λ_i]
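As an illustrative sketch (helper names assumed, data hypothetical), the Table 2 gate expressions translate directly into code:

```python
import math

# Sketch of Table 2: failure rate and repair time for AND and OR gates.
def lambda_and(lams, taus):
    n = len(lams)
    return math.prod(lams) * sum(
        math.prod(taus[j] for j in range(n) if j != i) for i in range(n))

def tau_and(taus):
    n = len(taus)
    return math.prod(taus) / sum(
        math.prod(taus[i] for i in range(n) if i != j) for j in range(n))

def lambda_or(lams):
    return sum(lams)

def tau_or(lams, taus):
    return sum(l * t for l, t in zip(lams, taus)) / sum(lams)

lams, taus = [1e-3, 2e-3], [2.0, 4.0]   # hypothetical component data
print(round(lambda_or(lams), 6))   # 0.003
print(round(tau_or(lams, taus), 4))  # 3.3333, a lambda-weighted mean
```

For an OR gate the repair time is the failure-rate-weighted mean of the component repair times, which is visible directly in `tau_or`.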
[Fig. 3 shows the four-step flow chart of the PSOBLT technique: Step 1, information extraction, in the form of failure rate and repair time parameters, from historical records, system reliability analysts and reliability databases (crisp input); Step 2, fuzzification using triangular fuzzy numbers (fuzzy data); Step 3, obtaining the reliability indices using PN models and constructing the membership functions of the fuzzy reliability indices using PSO (fuzzy output); Step 4, defuzzification by the COG method for system behavior analysis (defuzzified output).]

Fig. 3. Flow chart of the PSOBLT technique.
In the next step of the technique, the system is modeled with the help of Petri nets by finding its minimal cut sets. Based on these cut sets, expressions for the various reliability indices of interest, such as the system's failure rate, repair time, MTBF, ENOF, availability and reliability, are obtained using the Lambda-Tau methodology, i.e. by using the results of Tables 2 and 3 [12,33].
The expressions of the obtained reliability indices are highly complex and non-linear in nature and contain a high level of uncertainty. In order to take more appropriate decisions for improving the performance of the system, it is necessary that the spread of each reliability index be reduced up to a desired degree of accuracy. For this, the membership function of each reliability index is constructed by formulating a non-linear programming problem at each cut level α, utilizing the quantified fuzzy λ's and τ's. In this optimization problem, the expressions of the reliability indices are obtained using ordinary arithmetic instead of the fuzzy arithmetic operations.
Then, the upper boundary values of reliability indices are
computed at cut level a by solving the optimization problems
Maximize : ~Fðl1,l2, . . . ,ln,t1,t2, . . . ,tmÞ or
~Fðt=l1,l2, . . . ,ln,t1,t2, . . . ,tmÞ
Subject to : mli
ðxÞZa,
mtj
ðxÞZa,
0rar1,
i ¼ 1; 2, . . . ,n, j ¼ 1; 2, . . . ,m: ð7Þ
The obtained maximum value of F is denoted by Fmax.
The lower boundary values of the reliability indices are computed at cut level α by solving the optimization problem (8):

Minimize:  F̃(λ1, λ2, …, λn, τ1, τ2, …, τm)  or  F̃(t; λ1, λ2, …, λn, τ1, τ2, …, τm)
Subject to: μ_λi(x) ≥ α,
            μ_τj(x) ≥ α,
            0 ≤ α ≤ 1,
            i = 1, 2, …, n;  j = 1, 2, …, m.          (8)
The obtained minimum value of F is denoted by Fmin.
The membership function value of F̃ at both Fmax and Fmin is α, that is:

μ_F̃(Fmax) = μ_F̃(Fmin) = α

where F̃(λ1, λ2, …, λn, τ1, τ2, …, τm) and F̃(t; λ1, λ2, …, λn, τ1, τ2, …, τm) are the time-independent and time-dependent fuzzy reliability indices, respectively.
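To make the α-cut construction concrete, the sketch below (a hypothetical two-component illustration, not data from the paper) computes the bounds Fmin and Fmax of problems (7) and (8) for an expression that is monotone increasing in every argument; for such expressions the optima over the α-cut boxes of triangular fuzzy inputs occur at the interval endpoints, so no search is needed:

```python
def alpha_cut(tfn, a):
    """Alpha-cut interval [lo, hi] of a triangular fuzzy number (l, m, r)."""
    l, m, r = tfn
    return (l + a * (m - l), r - a * (r - m))

def bounds(expr, fuzzy_lams, fuzzy_taus, a):
    """F_min and F_max over the alpha-cut box, valid when expr is
    monotone increasing in every argument (endpoints suffice)."""
    lam_cuts = [alpha_cut(f, a) for f in fuzzy_lams]
    tau_cuts = [alpha_cut(f, a) for f in fuzzy_taus]
    f_min = expr([c[0] for c in lam_cuts], [c[0] for c in tau_cuts])
    f_max = expr([c[1] for c in lam_cuts], [c[1] for c in tau_cuts])
    return f_min, f_max

# Hypothetical series system: lambda_s = lambda_1 + lambda_2 (taus unused here).
lams = [(0.008, 0.010, 0.012), (0.017, 0.020, 0.023)]  # triangular fuzzy numbers
expr = lambda ls, ts: ls[0] + ls[1]

crisp = bounds(expr, lams, [], 1.0)    # alpha = 1 collapses to the crisp value 0.030
support = bounds(expr, lams, [], 0.0)  # alpha = 0 spans the full support [0.025, 0.035]
```

For a non-monotone index the endpooint shortcut no longer applies, which is exactly why the paper solves (7) and (8) with PSO.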
Since the problem is non-linear, effective techniques and tools are needed for its global solution. Among the existing variety of methods and algorithms, evolutionary algorithms (EAs) are widely used to determine the global optimum of non-linear optimization problems without pre-assumptions such as continuity and differentiability. PSO is a member of the EA family; it is basically a random search technique and has been applied effectively to many different problems, such as system reliability/availability optimization [8,22,34,35]. In light of this applicability, this paper uses PSO as a tool to solve the optimization problems (7) and (8) in the process of determining, with optimized spread, the fuzzy membership function of each reliability index. The PSO algorithm is described below.
4.3. Particle swarm optimization
Particle Swarm Optimization (PSO), first introduced by Kennedy and Eberhart [34], is a stochastic global optimization technique inspired by the social behavior of bird flocking and fish schooling, which it simulates to configure its heuristic learning mechanism. The algorithm works by initializing a flock of birds randomly over the search space, where every bird is called a "particle". These "particles" fly with a certain velocity and find the global best position after some iterations. At each iteration, each particle adjusts its velocity vector based on its momentum and the influence of both its own best position (pbest) and the best position among its neighbors (gbest), and then computes a new position that the
Fig. 4. Input Triangular Fuzzy Numbers for the ith component of the system.
Table 3
Some reliability parameters.

Parameter      Expression
Failure rate   MTTFs = 1/λs
Repair time    MTTRs = 1/μs = τs
MTBF           MTBFs = MTTFs + MTTRs
ENOF           Ws(0,t) = λsμst/(λs + μs) + [λs²/(λs + μs)²][1 − e^(−(λs+μs)t)]
Reliability    Rs = e^(−λs t)
Availability   As = μs/(λs + μs) + [λs/(λs + μs)] e^(−(λs+μs)t)
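Evaluated at the crisp system values reported later in Table 5 (λs = 0.0275 failures/h, τs ≈ 3.1818 h, mission time t = 10 h), the Table 3 expressions can be checked directly; a minimal sketch:

```python
import math

def reliability_params(lam, tau, t):
    """Evaluate the Table 3 expressions for constant failure/repair rates."""
    mu = 1.0 / tau                      # repair rate, so MTTR = tau
    mttf = 1.0 / lam
    mtbf = mttf + tau                   # MTBF = MTTF + MTTR
    s = lam + mu
    # ENOF: W(0,t) = lam*mu*t/(lam+mu) + lam^2/(lam+mu)^2 * (1 - e^{-(lam+mu)t})
    enof = lam * mu * t / s + (lam / s) ** 2 * (1.0 - math.exp(-s * t))
    rel = math.exp(-lam * t)            # R(t) = e^{-lam t}
    avail = mu / s + (lam / s) * math.exp(-s * t)
    return {"MTBF": mtbf, "ENOF": enof, "Reliability": rel, "Availability": avail}

p = reliability_params(lam=0.0275, tau=3.1818181, t=10.0)
# Reproduces the crisp column of Table 5:
# MTBF ≈ 39.5455, ENOF ≈ 0.2591, Reliability ≈ 0.7596, Availability ≈ 0.9222
```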
"particle" is to fly to. Suppose the search space has dimension D and the total number of particles is n. The position of the ith particle is the vector x_i = [x_i1, x_i2, …, x_iD], the best position of the ith particle is pbest_i = [pbest_i1, pbest_i2, …, pbest_iD], the best position of the whole swarm is gbest = [gbest_1, gbest_2, …, gbest_D], and the velocity of the ith particle is v_i = [v_i1, v_i2, …, v_iD]. The position and velocity of each particle are then updated by the following relations:

v_i(t+1) = w·v_i(t) + c1·r1·(pbest_i(t) − x_i(t)) + c2·r2·(gbest(t) − x_i(t))    (9)

x_i(t+1) = x_i(t) + v_i(t+1)    (10)

where c1 and c2 are constants, r1 and r2 are random variables uniformly distributed between 0 and 1, and w is the inertia weight, which controls the effect of the previous velocity vector on the new one.
The pseudo code of the algorithm is described in Algorithm 1.
Algorithm 1. Pseudo code of Particle swarm optimization (PSO).
1: Objective function: f(x), x = (x1, x2, …, xK);
2: For each particle:
     Initialize particle position and velocity
3: Do:
4:   For each particle:
     (a) Calculate the fitness value.
     (b) If the fitness value is better than the best fitness value (pbest) in history,
     (c) set the current value as the new pbest.
5:   End for
6:   For each particle:
     (a) Find, in the particle's neighborhood, the particle with the best fitness.
     (b) Calculate the particle velocity according to the velocity equation (9).
     (c) Update the particle position according to the position equation (10).
     (d) Apply the position constriction.
7:   End for
8: While the maximum number of iterations or the minimum error criterion is not attained
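A minimal, runnable sketch of Algorithm 1 with a global-best neighborhood, the velocity and position updates of Eqs. (9) and (10), and the position constriction of step 6(d); the parameter values and the sphere test function are illustrative stand-ins, not the paper's reliability objective:

```python
import random

def pso(f, box, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over the box [(lo, hi), ...] with global-best PSO."""
    rng = random.Random(seed)
    dim = len(box)
    x = [[rng.uniform(lo, hi) for lo, hi in box] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update, Eq. (9)
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # position update, Eq. (10), clamped to the search box (step 6(d))
                x[i][d] = min(max(x[i][d] + v[i][d], box[d][0]), box[d][1])
            fx = f(x[i])
            if fx < pbest_f[i]:          # update pbest, then gbest if improved
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f

# Minimizing the 2-D sphere function drives the best fitness toward 0.
best, best_f = pso(lambda p: sum(t * t for t in p), [(-5.0, 5.0)] * 2)
```

With a fixed seed the run is reproducible; in PSOBLT the objective would be the reliability-index expression of problem (7) or (8), maximization being handled by negating f.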
5. Illustrative example
The above-mentioned technique, PSOBLT, for analyzing the behavior of a complex repairable system is illustrated through the behavior of the feeding system of a paper mill situated in the northern part of India. A brief description of the system is given below.
5.1. System description
For the production of paper, the raw material (softwood, hardwood, bamboo, etc.) is chopped into small pieces of approximately uniform size and transported by compressed air for temporary storage. Conveyors in the feeding system carry the chips from the store to the digesters whenever required. The chips are cooked in the digester using white liquor (NaOH + Na2S) with steam at a pressure of 8.5 kg/cm² (around 180 °C). The cooked chips are referred to as 'pulp'. The pulp is transported to the storage tanks and stirred continuously, and is then further processed through the fiberlizer and refiner. The pulp is then filtered and washed in stages with water to remove knots and chemicals, and the final washed pulp is stored in a surge tank. The next stages of processing are bleaching and screening. For the production of white paper, the pulp is bleached by passing chlorine gas through the pulp stored in the tank; for the production of brown pulp, used for packaging purposes, the pulp is screened directly. The white pulp so obtained is passed through screeners to separate odd and oversized particles, and then through cleaners, which separate heavy material from the pulp. The pulp is then fed to the head box of the paper machine, which comprises three sections, viz. forming, press and dryer. In the forming section, the suction box (having six pumps) de-waters the pulp by vacuum action. The paper sheets produced by the rolling presses are sent to the press and dryer sections to reduce the moisture content by means of heat and vapour transfer and to smooth out any irregularities. Finally, the rolled, dried sheets of paper (in the form of rolls) are sent for packaging.
[Fig. 5(a) sketches the feeding system: from the store of wood chips, the blower (A) pushes the chips through a pipe filled with compressed air, while the chain (B), belt (C) and bucket (D) conveyors and the feeder (E) carry the chips up to the digester. Fig. 5(b) gives the corresponding PN model, with top event FSF.]
Fig. 5. (a) Systematic diagram and (b) PN model of feeding system.
Table 4
Failure rate and repair time data for feeding system.

Component                  Failure rate λi (failures/h)   Repair time τi (h)
A: Blower (i=1)            2 × 10⁻³                       10
B: Chain conveyor (i=2)    3 × 10⁻²                       10
C: Belt conveyor (i=3)     4 × 10⁻²                       5.0
D: Bucket conveyor (i=4)   5 × 10⁻²                       3.5
E: Feeder (i=5)            2 × 10⁻²                       5.0
5.2. Feeding system

The feeding system [15,36,37] is the first functioning part of a paper mill and plays a dominant role in the production of paper. Its function is to feed the chipped wood from the chipping house, where the wood is chipped and stored, to the digester for preparing the pulp. The system comprises the following subsystems:

Blower (A): used for pushing the wooden chips through the pipe by compressed air; its failure causes complete failure of the feeding system.
[Fig. 6 plots degree of membership against each reliability index, comparing, panel by panel, the membership functions obtained by Lambda-Tau, GABLT and PSOBLT.]
Fig. 6. Fuzzy reliability indices plots for feeding system. (a) failure rate; (b) repair time; (c) MTBF; (d) ENOF; (e) reliability; (f) availability.
Conveyor subsystem: consists of three operating units in series, namely the chain conveyor (B), belt conveyor (C) and bucket conveyor (D), for lifting the chips up to the height of the digester. When any of these units fails, the standby unit (E) is switched on; it feeds the digester slowly, causing delay in the digesting process and hence loss of further production.

Feeder (E): acts as a standby unit for the conveyor subsystem, carrying the chips by compressed air from the store to the digester at a lower capacity. This unit works either when there is extra demand for chips or when there is a sudden failure in the conveyor subsystem.
The systematic diagram and the interactions among the working components of the system are modeled using PN and are shown in Fig. 5(a) and (b) respectively, where FSF denotes the feeding system top failure event. In the information extraction phase, the data related to the failure rates (λi's) and repair times (τi's) of the main components of the feeding system are collected from the present/historical records of the paper mill. The collected data are integrated with the expertise of maintenance personnel and are given in Table 4 [15,36,37].
6. Computational results
6.1. Parameter setting
The optimization method has been implemented in Matlab (MathWorks) and the program has been run on a T6400 @ 2 GHz Intel Core(TM) 2 Duo processor with 2 GB of Random Access Memory (RAM). In order to eliminate stochastic discrepancy, 25 independent runs have been made, involving 25 different initial trial solutions, with a swarm size of 20 × (number of variables). The acceleration coefficients c1 and c2 are both taken as 1.5, while the inertia weight w is defined as w = (w1 − w2)·((itermax − iter)/itermax) + w2, where w1 = 0.9 and w2 = 0.4 are the initial and final values of the inertia weight respectively, itermax is the maximum generation number (= 150) and iter is the current generation number. The termination criterion is either a maximum number of generations or a relative error of the order of 10⁻⁶, whichever is reached first.
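The linearly decreasing inertia weight described above can be sketched as:

```python
def inertia_weight(iteration, iter_max=150, w1=0.9, w2=0.4):
    """Linearly decreasing inertia weight: w = (w1 - w2)*((iter_max - iter)/iter_max) + w2."""
    return (w1 - w2) * ((iter_max - iteration) / iter_max) + w2

# Decays from w1 at the first generation to w2 at the last:
print(inertia_weight(0))    # 0.9
print(inertia_weight(75))   # 0.65
print(inertia_weight(150))  # 0.4
```

Early generations (large w) favor exploration of the search space; later generations (small w) favor exploitation around the best positions found.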
6.2. Result and discussion
The data given in Table 4, as collected from historical records and the opinions of field experts, are imprecise and vague, so they are represented as triangular fuzzy numbers with ±15% spread, as suggested by system expertise. Based on the PN model, the minimal cut sets obtained by the matrix method are {A}, {B,E}, {C,E} and {D,E}. Using these minimal cut sets, expressions for the system's failure rate (λs) and repair time (τs) are obtained using the results given in Table 2, as follows:

λs = λ1 + λ2λ5(τ2 + τ5) + λ3λ5(τ3 + τ5) + λ4λ5(τ4 + τ5)

τs = [λ1τ1 + λ2λ5τ2τ5 + λ3λ5τ3τ5 + λ4λ5τ4τ5] / λs
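Substituting the crisp Table 4 values into these minimal-cut-set expressions reproduces the crisp system values reported in Table 5; a quick check:

```python
# Crisp Table 4 data, indexed A=1, B=2, C=3, D=4, E=5 (failures/h and h).
lam = {1: 2e-3, 2: 3e-2, 3: 4e-2, 4: 5e-2, 5: 2e-2}
tau = {1: 10.0, 2: 10.0, 3: 5.0, 4: 3.5, 5: 5.0}

# System failure rate from the minimal cut sets {A}, {B,E}, {C,E}, {D,E}.
lam_s = (lam[1]
         + lam[2] * lam[5] * (tau[2] + tau[5])
         + lam[3] * lam[5] * (tau[3] + tau[5])
         + lam[4] * lam[5] * (tau[4] + tau[5]))

# System repair time.
tau_s = (lam[1] * tau[1]
         + lam[2] * lam[5] * tau[2] * tau[5]
         + lam[3] * lam[5] * tau[3] * tau[5]
         + lam[4] * lam[5] * tau[4] * tau[5]) / lam_s

print(round(lam_s, 4), round(tau_s, 4))  # 0.0275 3.1818 (crisp column of Table 5)
```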
Using these expressions for λs and τs, the basic steps of the PSOBLT technique have been followed, at various membership grades, to compute the fuzzy reliability indices for a mission time t = 10 h with left and right spreads. The computed results of the PSOBLT technique are depicted graphically in Fig. 6 for ±15% spreads, alongside the Lambda-Tau and GABLT results. The figure shows that the PSOBLT results have a reduced region and smaller spread than the existing results, because PSO yields a solution close to the optimum. This gives the decision maker a smaller, more sensitive region in which to make sounder and more effective decisions in less time.
6.2.1. Analysis with different spreads
For defuzzification, the center of gravity (COG) method [38] is used because it has the advantage of taking the whole membership function into account in the transformation. The crisp and defuzzified values of the reliability indices at ±15%, ±25% and ±60% spreads are computed, compared with the Lambda-Tau and GABLT results, and shown in Table 5. It shows that when the uncertainty level, in the form of spread, increases from ±15% to ±25% and further to ±60%, the variation in the defuzzified values of almost all the reliability indices is much smaller than that shown by the Lambda-Tau and GABLT results. From Table 5 it is evident that the defuzzified values change with the spread. For example, the failure rate
Table 5
Crisp and defuzzified values for feeding system.

Reliability index   Crisp        Technique     ±15%         ±25%         ±60%
Failure rate        0.0275000    Lambda-Tau    0.0296364    0.0337164    0.0809759
                                 GABLT         0.0270031    0.0276512    0.0316168
                                 PSOBLT        0.0275748    0.0276272    0.0270348
Repair time         3.1818181    Lambda-Tau    4.8672386    9.3490486    89.1941971
                                 GABLT         3.1890921    3.1787921    3.5311456
                                 PSOBLT        3.1881448    3.1980623    3.2053274
MTBF                39.5454545   Lambda-Tau    44.6454958   55.8693987   101.5435504
                                 GABLT         42.6605303   55.5885769   58.8732983
                                 PSOBLT        39.4447996   39.7245899   42.29707718
ENOF                0.2591351    Lambda-Tau    0.3072978    0.5114377    70.8981951
                                 GABLT         0.2536513    0.2581375    0.2915676
                                 PSOBLT        0.2596373    0.2590009    0.2582328
Availability        0.9221779    Lambda-Tau    0.8713598    0.7824887    0.5861624
                                 GABLT         0.9243923    0.9232263    0.9082149
                                 PSOBLT        0.9220529    0.9226205    0.9258657
Reliability         0.7595721    Lambda-Tau    0.7486219    0.7296919    0.6209493
                                 GABLT         0.7607065    0.7554107    0.7477982
                                 PSOBLT        0.7585523    0.7589268    0.7673566
of the system increases by 7.768%, 1.806% and 0.272% for fuzzy Lambda-Tau, GABLT and PSOBLT respectively when the spread changes from ±15% to ±25%, and further by 13.766%, 2.401% and 0.190% when the spread changes from ±25% to ±60%. Based on the results shown in Fig. 6, the changes in the defuzzified values from the crisp results have been computed for all the techniques and are given in Table 6; the variation for the PSOBLT technique is smaller than for the existing Lambda-Tau and GABLT techniques. Owing to its reduced region of prediction, the values obtained through the PSOBLT technique may be beneficial to the system expert/analyst for future courses of action, i.e. maintenance will now be based on the defuzzified values rather than the crisp values.
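The COG defuzzification used here can be sketched numerically; for a triangular membership function the centroid reduces to the mean of its three vertices, which the discrete approximation below confirms (the example triangle is illustrative):

```python
def cog(xs, mus):
    """Discrete center of gravity: sum(x*mu) / sum(mu) over sampled membership."""
    return sum(x * m for x, m in zip(xs, mus)) / sum(mus)

def tri_mu(x, l, m, r):
    """Triangular membership function with support [l, r] and peak at m."""
    if x <= l or x >= r:
        return 0.0
    return (x - l) / (m - l) if x <= m else (r - x) / (r - m)

# Sample the membership function of the triangle (1, 2, 4) on a fine grid.
n = 10_000
l, m, r = 1.0, 2.0, 4.0
xs = [l + (r - l) * i / n for i in range(n + 1)]
mus = [tri_mu(x, l, m, r) for x in xs]

crisp_value = cog(xs, mus)  # centroid of triangle (1, 2, 4) is (1 + 2 + 4)/3 ≈ 2.3333
```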
6.2.2. Behavior analysis
To analyze the impact of changes in the values of the reliability indices on the system's behavior, behavioral plots have been drawn for different combinations of reliability indices; they are shown in Fig. 7. Throughout the combinations, the ranges of repair time and ENOF are fixed, and these two indices are varied along the x- and y-axes
[Fig. 7 shows, for each technique, 3-D surfaces of MTBF against repair time (x-axis) and ENOF (y-axis), with reliability = 0.64, availability = 0.74 and failure rates of 0.017, 0.029 and 0.041.]
Fig. 7. Plots for feeding unit behavior analysis: (a) Lambda-Tau; (b) GABLT; (c) PSOBLT.
Table 6
Change in defuzzified values of reliability indices.

Change in value (%)   Failure rate   Repair time   MTBF        ENOF        Availability   Reliability
from crisp to:
Lambda-Tau            7.768727       52.970359     12.896656   18.585942   5.510661       1.441627
GABLT                 1.806909       0.228611      7.877203    2.116193    0.240127       0.149347
PSOBLT                0.272000       0.198839      0.254529    0.193798    0.013554       0.134259
respectively, in the range computed by their membership functions (Fig. 6(b) and (d)) at cut level α = 0. The effects on MTBF of different combinations of the remaining parameters (reliability, failure rate and availability) are computed and shown along the z-axis. For instance, in the first three plots, the reliability and availability are fixed at 0.64 and 0.74 respectively, while the failure rate changes from 0.017 to 0.029 and further to 0.041. The corresponding effects on MTBF for all the techniques are shown graphically in Fig. 7.
It may be observed that, for this combination, the prediction range of MTBF is reduced by almost 69.8064% and 94.0707% from fuzzy Lambda-Tau when the GABLT and PSOBLT techniques are applied respectively, and by 80.3622% from GABLT when the PSOBLT technique is applied. The computed ranges of MTBF for all the combinations and all the techniques are tabulated in Table 7. The plots show that, as the failure rate of the system increases, for the prescribed ranges and values of the other indices, the MTBF of the system decreases exponentially, as shown in Table 7. This implies that system analysts who use the PSOBLT results have a narrower range of prediction, which leads to sounder decisions. Thus, based on the behavioral plots and the corresponding table, the system manager can analyze the critical behavior of the system and plan suitable maintenance.
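The quoted range reductions follow directly from the first row of Table 7 (reliability 0.64, failure rate 0.017, availability 0.74) by comparing prediction-range widths; a sketch of the computation:

```python
# MTBF (min, max) for combination 1 of Table 7.
ranges = {
    "Lambda-Tau": (41.90051, 307.8967),
    "GABLT":      (80.56864, 160.8823),
    "PSOBLT":     (97.92452, 113.6963),
}
width = {k: hi - lo for k, (lo, hi) in ranges.items()}

def reduction(frm, to):
    """Percentage reduction in prediction-range width from one technique to another."""
    return 100.0 * (1.0 - width[to] / width[frm])

# Reductions ≈ 69.8% (GABLT vs Lambda-Tau), ≈ 94.1% (PSOBLT vs Lambda-Tau)
# and ≈ 80.4% (PSOBLT vs GABLT), matching the values quoted in the text.
lt_to_gablt = reduction("Lambda-Tau", "GABLT")
lt_to_psoblt = reduction("Lambda-Tau", "PSOBLT")
gablt_to_psoblt = reduction("GABLT", "PSOBLT")
```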
7. Conclusion
This paper presents a novel technique, named PSOBLT, for determining the membership functions of the reliability indices of a complex repairable industrial system with less uncertainty. The major advantage of the proposed technique is that it gives a compressed search space for each computed reliability index by utilizing the available information and uncertain data. The technique has been demonstrated through an example of the feeding unit of a paper mill. It optimizes the spread of the reliability indices, indicating a higher sensitivity zone, and thus may help reliability engineers/experts make sounder decisions. The analysis also shows that PSOBLT performs consistently well in comparison with other existing techniques. If system analysts use the PSOBLT results, they may predict the system behavior with more confidence. This will facilitate management in reallocating resources, making maintenance decisions, achieving long-run availability of the system, and enhancing the overall productivity of the paper industry.

In a nutshell, the important managerial implications drawn using the discussed technique are to:

- model and predict the behavior of industrial systems in a more consistent manner;
- analyze the behavior of the system in a higher sensitivity zone;
- analyze the failure behavior of industrial systems in a more realistic manner, as they often make use of imprecise data;
- determine reliability indices such as MTBF and MTTR, which are important for planning the maintenance needs of the systems; and
- plan suitable maintenance strategies to improve system performance and to reduce operation and maintenance costs.
References
[1] Kumar S, Kumar D, Mehta NP. Behavioural analysis of shell gasification and
carbon recovery process in a urea fertilizer plant. Microelectronics Reliability
1996;36(5):671–3.
[2] Aksu S, Aksu S, Turan O. Reliability and availability of pod propulsion system.
Journal of Quality and Reliability International 2006;22:41–58.
[3] Arora N, Kumar D. Availability analysis of steam and power generation
systems in the thermal power plant. Microelectronics Reliability 1997;
37(5):795–9.
[4] Arora N, Kumar D. System analysis and maintenance management for the
coal handling system in a paper plant. International Journal of Management
and Systems 2000;16(2):137–56.
[5] Gupta P, Lal AK, Sharma RK, Singh J. Numerical analysis of reliability and
availability of the serial processes in butter-oil processing plant. International
Journal of Quality and Reliability Management 2005;22(3):303–16.
[6] Sharma RK, Kumar S. Performance modeling in critical engineering systems
using RAM analysis. Reliability Engineering and System Safety 2008;93(6):
913–9.
[7] Yuzgec U. Performance comparison of differential evolution techniques on
optimization of feeding profile for an industrial scale bakers yeast fermenta-
tion process. ISA Transactions 2010;49(1):167–76.
[8] Wu P, Gao L, Zou D, Li S. An improved particle swarm optimization algorithm
for reliability problems. ISA Transactions 2011;50:71–81.
[9] Estevez–Reyes L. Process control reliability; the key to investing in your
infrastructure. ISA Transactions 2000;39(1):115–21.
[10] Hosseini M, Shayanfar HA, Fotuhi–Firuzabad M. Reliability improvement of
distribution systems using SSVR. ISA Transactions 2009;48(1):98–106.
[11] Fales R. Uncertainty modeling and predicting the probability of stability and
performance in the manufacture of dynamic systems. ISA Transactions
2010;49:528–34.
[12] Knezevic J, Odoom ER. Reliability modeling of repairable systems using Petri
nets and Fuzzy Lambda-Tau Methodology. Reliability Engineering and
System Safety 2001;73(1):1–17.
[13] Komal, Sharma SP, Kumar D. RAM analysis of the press unit in a paper plant
using genetic algorithm and lambda-tau methodology. In: Proceeding of 13th
online international conference WSC-2008. Applications of soft computing
(Springer book series), vol. 58; 2009. p. 127–37.
[14] Sharma SP, Garg H. Behavioral analysis of a urea decomposition system in a
fertilizer plant. International Journal of Industrial and System Engineering
2011;8(3):271–97.
[15] Komal, Sharma SP, Kumar D. Reliability analysis of the feeding system in a
paper industry using Lambda-Tau technique. In: 3rd international conference
on reliability and safety engineering (INCRESE), IIT Kharagpur, India; 2007.
p. 531–7.
[16] Garg H, Sharma SP. Behavior analysis of synthesis unit in fertilizer plant.
International Journal of Quality and Reliability Management 2012;29(2):
217–32.
[17] Garg H, Sharma SP. Behavior and system performance optimization for an
industrial system by using particle swarm optimization. In: 2011 IEEE
international conference on intelligent computing and intelligent systems
(ICIS 2011), Guangzhou, China; 2011. p. 237–41.
[18] Chen SM. Fuzzy system reliability analysis using fuzzy number arithmetic
operations. Fuzzy Sets and Systems 1994;64(1):31–8.
[19] Mon DL, Cheng CH. Fuzzy system reliability analysis for components with
different membership functions. Fuzzy Sets and Systems 1994;61(1):145–57.
Table 7
Change in MTBF for various combinations of reliability indices for feeding unit.

S.no.  [Reliability, failure rate, availability]        Lambda-Tau   GABLT      PSOBLT
1.     [0.64, 0.017, 0.74]                       Min:   41.90051     80.56864   97.92452
                                                 Max:   307.8967     160.8823   113.6963
2.     [0.64, 0.029, 0.74]                       Min:   24.70506     48.10601   58.53055
                                                 Max:   194.2765     96.71446   68.24198
3.     [0.64, 0.041, 0.74]                       Min:   17.57524     34.64588   42.19646
                                                 Max:   147.1657     70.10828   49.39509
4.     [0.75, 0.017, 0.86]                       Min:   26.97297     51.71077   62.83435
                                                 Max:   194.9376     103.0899   72.88147
5.     [0.75, 0.029, 0.86]                       Min:   15.88857     30.78496   37.44052
                                                 Max:   121.6966     61.72657   43.58110
6.     [0.75, 0.041, 0.86]                       Min:   11.29261     22.10841   26.91137
                                                 Max:   91.32841     44.57591   31.43216
7.     [0.86, 0.017, 0.98]                       Min:   14.07028     26.67554   32.38300
                                                 Max:   95.35781     52.85365   37.41915
8.     [0.86, 0.029, 0.98]                       Min:   8.259071     15.70478   19.06979
                                                 Max:   56.95981     31.16811   22.05786
9.     [0.86, 0.041, 0.98]                       Min:   5.849546     11.15592   13.54968
                                                 Max:   41.03869     22.17654   15.68854
[20] Tillman FA, Hwang CL, Kuo W. Optimization of systems reliability. New York: Marcel Dekker; 1980.
[21] Komal, Sharma SP, Kumar D. RAM analysis of repairable industrial systems utilizing uncertain data. Applied Soft Computing 2010;10:1208–21.
[22] Garg H, Sharma SP. Multi-objective optimization of crystallization unit in a
fertilizer plant using particle swarm optimization. International Journal of
Applied Science and Engineering 2011;9(4):261–76.
[23] Petri CA. Communication with automata. PhD thesis, University of Bonn; Technical Report (English translation) RADC-TR-65-377, Griffiss (NY): Rome Air Development Center; 1962.
[24] Murata T. Petri nets: properties, analysis and applications. In: Proceedings of the IEEE, vol. 77; 1989. p. 541–80.
[25] Liu T, Chiou S. The application of Petri nets to failure analysis. Reliability
Engineering and System Safety 1997;57:129–42.
[26] Peterson JL. Petri net theory and the modeling of systems. Englewood Cliffs,
NJ: Prentice-Hall; 1981.
[27] Zadeh LA. Fuzzy sets. Information and Control 1965;8:338–53.
[28] Zadeh LA. The concept of a linguistic variable and its application to
approximate reasoning: part—1. Information Science 1975;8:199–251.
[29] Yager RR. A characterization of the extension principle. Fuzzy Sets and
Systems 1986;18:205–17.
[30] Zimmermann HJ. Fuzzy set theory and its applications. Kluwer Academic
Publishers; 2001.
[31] Pedrycz W. Why triangular membership functions? Fuzzy Sets and Systems
1994;64(1):21–30.
[32] Bai X, Asgarpoor S. Fuzzy based approaches to substation reliability evalua-
tion. Electric Power Systems Research 2004;69:197–204.
[33] Ebeling C. An introduction to reliability and maintainability engineering.
New York: Tata McGraw-Hill Company Ltd.; 2001.
[34] Kennedy J, Eberhart RC. Particle swarm optimization. In: IEEE international
conference on neural networks, vol. IV, Piscataway, NJ, Seoul, Korea; 1995.
p. 1942–8.
[35] Fang G, Kwok NM, Ha Q. Automatic fuzzy membership function tuning using
the particle swarm optimisation. In: IEEE Pacific–Asia workshop on compu-
tational intelligence and industrial application; 2008. p. 324–8.
[36] Kumar D. Analysis and optimization of systems availability in sugar, paper
and fertilizer industries. PhD thesis, University of Roorkee, India; 1991.
[37] Komal. Reliability analysis using fuzziness of real-time based industrial
processes. PhD thesis, Department of Mathematics, Indian Institute of
Technology Roorkee, India; 2010.
[38] Ross TJ. Fuzzy logic with engineering applications. 2nd ed. New York, NY: Wiley; 2004.