Within the class of non-homogeneous Poisson process (NHPP) models, the Power Law Process (PLP) model has received considerable attention in the repairable-systems literature because of the simplicity of its mathematical computations and the attractive physical interpretation of its parameters. In this article, we investigate a new estimation approach, the regression estimation procedure, for the parametric PLP model. The regression approach, which estimates the unknown parameters of the PLP model through the mean time between failures (TBF) function, is evaluated against the maximum likelihood estimation (MLE) approach. The results from the regression and MLE approaches are compared using three error evaluation criteria in terms of parameter estimation and its precision. The numerical application shows the effectiveness of the regression estimation approach in enhancing the predictive accuracy of the TBF measure.
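To make the two estimation routes concrete, here is a minimal sketch of both, with hypothetical failure times: the standard closed-form MLE for a failure-truncated PLP, and an ordinary least-squares regression of the log cumulative failure count on log time. The regression shown is a generic log-linear fit, not the authors' exact TBF-function procedure.

```python
import numpy as np

def plp_mle(t):
    """Closed-form MLE for the PLP with failure-truncated data.
    t: sorted failure times; intensity lambda(t) = (beta/eta)*(t/eta)**(beta-1)."""
    t = np.asarray(t, dtype=float)
    n, T = len(t), t[-1]
    beta = n / np.sum(np.log(T / t[:-1]))   # the last term ln(T/T) = 0 is dropped
    eta = T / n ** (1.0 / beta)
    return beta, eta

def plp_regression(t):
    """Least-squares fit of log N(t) = beta*log t - beta*log eta,
    using the empirical cumulative failure count N(t_i) = i."""
    t = np.asarray(t, dtype=float)
    i = np.arange(1, len(t) + 1)
    slope, intercept = np.polyfit(np.log(t), np.log(i), 1)
    beta = slope
    eta = np.exp(-intercept / beta)
    return beta, eta

# hypothetical failure-time data (hours)
times = [33, 76, 145, 347, 555, 811, 1212, 1499, 2030, 2500]
print("MLE:       beta, eta =", plp_mle(times))
print("Regression: beta, eta =", plp_regression(times))
```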
Software Process Control on Ungrouped Data: Log-Power Model (Waqas Tariq)
This document presents a statistical process control method using the Log-Power model for ungrouped software failure data. It describes the Log-Power model and maximum likelihood estimation approach. It then applies the model and control chart method to two software failure time datasets to estimate parameters and monitor the failure process. The results show the estimated parameters, calculated control limits, and failure distributions plotted on control charts to detect any changes in the failure process.
Software reliability models (SRMs) are very important for estimating and predicting software reliability in the testing/debugging phase. The contributions of this paper are as follows. First, a historical review of the Gompertz SRM is given. Then, based on several software failure datasets, the parameters of the Gompertz software reliability model are estimated using two estimation methods: the traditional maximum likelihood method and the least squares method. The estimation methods are evaluated using the MSE and R-squared criteria. The results show that least squares estimation is an attractive method in terms of predictive performance and can be used when the maximum likelihood method fails to give good prediction results.
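As a rough illustration of the least-squares route, the sketch below fits the Gompertz mean value curve m(t) = a·b^(c^t) to hypothetical cumulative fault counts with SciPy's curve_fit and scores the fit with the MSE and R-squared criteria mentioned above; the data and starting values are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    # cumulative number of detected faults by time t
    return a * b ** (c ** t)

# hypothetical cumulative fault counts per week of testing
t = np.arange(1, 11)
y = np.array([5, 14, 28, 46, 61, 72, 80, 85, 88, 90])

# least-squares estimation; bounds keep 0 < b, c < 1 as the model requires
(a, b, c), _ = curve_fit(gompertz, t, y, p0=(100, 0.01, 0.7),
                         bounds=([0, 1e-9, 1e-9], [1e4, 1, 1]))
pred = gompertz(t, a, b, c)
mse = np.mean((y - pred) ** 2)
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"a={a:.1f} b={b:.3g} c={c:.3f}  MSE={mse:.2f}  R^2={r2:.4f}")
```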
Building a new CTL model checker using Web Services (infopapers)
Florin Stoica, Laura Stoica, Building a new CTL model checker using Web Services, Proceedings of the 21st International Conference on Software, Telecommunications and Computer Networks (SoftCOM 2013), Split-Primosten, Croatia, 18-20 September 2013, pp. 285-289.
DOI: 10.1109/SoftCOM.2013.6671858, http://dx.doi.org/10.1109/SoftCOM.2013.6671858
Implementing an ATL Model Checker tool using Relational Algebra concepts (infopapers)
This document describes the implementation of an ATL (Alternating-Time Temporal Logic) model checker using relational algebra concepts. The model checker is implemented in a client-server paradigm, with the client allowing interactive construction of ATL models as directed multi-graphs. The server embeds an ATL model checking algorithm using ANTLR and relational algebra expressions translated to SQL queries. A key contribution is the implementation of the Pre(A,Θ) function, which computes the set of states agents A can enforce the system into states in Θ in one move, using relational algebra expressions and SQL. The tool was developed to improve applicability of ATL model checking for general-purpose software design.
Feature selection in high-dimensional datasets is considered a complex and time-consuming problem. To enhance classification accuracy and reduce execution time, Parallel Evolutionary Algorithms (PEAs) can be used. In this paper, we review the most recent works that apply PEAs to feature selection in large datasets. We have classified the algorithms in these papers into four main classes: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Scatter Search (SS), and Ant Colony Optimization (ACO). Accuracy is adopted as the measure for comparing the efficiency of these PEAs. Notably, Parallel Genetic Algorithms (PGAs) are the most suitable algorithms for feature selection in large datasets, since they achieve the highest accuracy. On the other hand, we found that parallel ACO is time-consuming and less accurate compared with the other PEAs.
Threshold benchmarking for feature ranking techniques (journalBEEI)
In prediction modeling, the choice of features from the original feature set is crucial for accuracy and model interpretability. Feature ranking techniques rank the features by their importance, but there is no consensus on the number of features at which to cut off. It therefore becomes important to identify a threshold value or range for removing the redundant features. In this work, an empirical study is conducted to identify a threshold benchmark for feature ranking algorithms. Experiments are conducted on the Apache Click dataset with six popular ranker techniques and six machine learning techniques, to deduce a relationship between the total number of input features (N) and the threshold range. The area-under-the-curve analysis shows that ≃ 33-50% of the features are necessary and sufficient to yield a reasonable performance measure, with a variance of 2%, in defect prediction models. Further, we find that log2(N) as the ranker threshold value represents the lower limit of the range.
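A minimal sketch of how the reported benchmarks might be applied in practice, assuming only the findings quoted above (log2(N) as the lower limit and roughly 33-50% of N as the working range); the function name and example value are ours.

```python
import math

def ranker_thresholds(n_features):
    """Candidate cut-offs for a feature-ranking technique:
    log2(N) as the lower limit and 33-50% of N as the working range,
    per the empirical findings summarized above."""
    lower = math.ceil(math.log2(n_features))
    return lower, math.ceil(0.33 * n_features), math.ceil(0.50 * n_features)

# e.g. with N = 64 input features:
# keep roughly 22-32 top-ranked features, and never fewer than 6
print(ranker_thresholds(64))   # (6, 22, 32)
```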
A REVIEW ON OPTIMIZATION OF LEAST SQUARES SUPPORT VECTOR MACHINE FOR TIME SER... (ijaia)
Support Vector Machines have emerged as an active research area in the machine learning community and are extensively used in various fields, including prediction and pattern recognition. The Least Squares Support Vector Machine (LSSVM), a variant of the Support Vector Machine, offers a better solution strategy. In order to exploit the LSSVM's capability in data mining tasks such as prediction, its hyperparameters need to be optimized. This paper presents a review of techniques used to optimize these parameters, organized into two main classes: Evolutionary Computation and Cross Validation.
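For the cross-validation class of tuning techniques, the sketch below implements a bare-bones LS-SVM regressor (the standard KKT linear system with an RBF kernel) and selects the regularization parameter gamma and kernel width sigma^2 on a held-out fold. This is an illustration of the idea under our own simplifications, not any specific method from the review.

```python
import numpy as np
from itertools import product

def rbf(X1, X2, sigma2):
    d = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * sigma2))

def lssvm_fit(X, y, gamma, sigma2):
    """LS-SVM regression: solve the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma2) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                     # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, sigma2, Xte):
    return rbf(Xte, Xtr, sigma2) @ alpha + b

# crude single-split cross validation over a small (gamma, sigma^2) grid
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, (80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
Xtr, ytr, Xval, yval = X[:60], y[:60], X[60:], y[60:]

best, best_mse = None, np.inf
for gamma, sigma2 in product([0.1, 1, 10, 100], [0.1, 0.5, 1.0, 2.0]):
    b, alpha = lssvm_fit(Xtr, ytr, gamma, sigma2)
    mse = np.mean((lssvm_predict(Xtr, b, alpha, sigma2, Xval) - yval) ** 2)
    if mse < best_mse:
        best, best_mse = (gamma, sigma2), mse
print("best (gamma, sigma^2):", best, "val MSE:", round(best_mse, 4))
```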
Iaetsd protecting privacy preserving for cost effective adaptive actions (Iaetsd)
This document proposes algorithms to solve the difficult optimization problem of selecting cost-effective adaptive actions to prevent service level agreement (SLA) violations in cloud computing. It formalizes the problem and presents three algorithm approaches: 1) Branch-and-bound searches the solution space systematically to find optimal solutions; 2) Local search heuristically improves initial random solutions through small changes; 3) Genetic algorithms mimic biological evolution through selection, crossover and mutation of fit solutions to converge on good solutions. The algorithms are evaluated within the PREVENT framework for predicting and preventing SLA violations.
A hybrid fuzzy ANN approach for software effort estimation (ijfcstjournal)
This document presents a study that develops a software effort estimation model using an Adaptive Neuro Fuzzy Inference System (ANFIS). The study evaluates the proposed ANFIS model using COCOMO81 datasets and compares its performance to an Artificial Neural Network (ANN) model and the intermediate COCOMO model. The results show that the ANFIS model provides better estimates than the ANN and COCOMO models, with lower values for metrics like the Root Mean Square Error and Magnitude of Relative Error.
Cost Estimation Predictive Modeling: Regression versus Neural Network (mustafa sarac)
This document compares regression and neural network models for cost estimation predictive modeling. It summarizes previous research applying regression and neural networks to cost estimation problems in various fields. The document then describes a study that systematically examines the performance of regression versus neural networks for cost estimation under different data conditions, including varying data set sizes, imperfections from noise, and sampling bias. Both simulated and real-world data sets are used to compare the accuracy, variability, and usability of regression and neural network models for cost estimation.
Presented in this short document is a description of our three separate techniques to analyze the data by checking, clustering and componentizing it before it is used by other IMPL routines, especially in on-line/real-time decision-making applications. We also have other data consistency and analysis techniques, described in other IMPL documents, relating to the application of data reconciliation and regression with diagnostics; these require an explicit model (model-based), whereas the techniques below do not, i.e., they are data-based techniques.
Scalability in Model Checking through Relational Databases (CSCJournals)
This document discusses an ATL model checking tool that uses relational databases to improve scalability. The tool allows designers to interactively build models as directed graphs and verify properties expressed as ATL formulas. The key contributions are implementing the Pre function using relational algebra expressions translated to SQL, and generating an ATL model checker from a grammar specification using ANTLR. Relational databases improve performance by efficiently representing large models for verification.
Adaptive response surface by kriging using pilot points for structural reliability analysis (IOSR Journals)
Structural reliability analysis aims to compute the probability of failure by taking system uncertainties into account. However, this approach may require very time-consuming computation and becomes impracticable for complex structures, especially when complex computer analysis and simulation codes such as the finite element method are involved. Approximation methods are widely used to build simplified approximations, or metamodels, providing a surrogate for the original codes. The most popular surrogate model is the response surface methodology, which typically employs a second-order polynomial approximation fitted by least-squares regression. Several authors have used response surface methods in reliability analysis. Another approximation method, based on the kriging approach, has been successfully applied in the field of deterministic optimization, but few studies have treated the use of kriging approximation in reliability analysis and reliability-based design optimization. In this paper, the kriging approximation is used as an alternative to the traditional response surface method to approximate the performance function in reliability analysis. The main objective of this work is to develop an efficient global approximation with controlled computational cost and accurate prediction. A pilot point method is proposed for the kriging approximation in order to increase its prior predictivity, the pilot points being good candidates for numerical simulation. In other words, the predictive quality of the initial kriging approximation is improved by adding adaptive information, called "pilot points", in areas where the kriging variance is maximum. This methodology allows for efficient modeling of highly non-linear responses while reducing the number of simulations compared to the Latin Hypercube approach. Numerical examples show the efficiency and the interest of the proposed method.
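A schematic version of the pilot-point idea, assuming scikit-learn's Gaussian process as the kriging model and a cheap stand-in for the expensive performance function: at each step the candidate with maximum kriging variance is added to the design, as the paper proposes.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def performance(x):                    # stand-in for the expensive code (e.g. an FE model)
    return x[:, 0] ** 3 + x[:, 1] + 1.5

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (8, 2))         # initial design of experiments
y = performance(X)
candidates = rng.uniform(-3, 3, (500, 2))

for _ in range(10):                    # add pilot points where the kriging variance peaks
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    pilot = candidates[np.argmax(std)] # best candidate for a new simulation run
    X = np.vstack([X, pilot])
    y = np.append(y, performance(pilot[None, :]))

print("final design size:", len(X))
```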
An enhanced pairwise search approach for generating test data (Alexander Decker)
This document summarizes a research paper that proposes an enhanced pairwise search approach for generating test data. The approach aims to generate an optimal number of test cases and reduce execution time. It works by first generating pairs of parameters and values, then combining pairs to construct test cases that maximize covered pairs. An evaluation of the approach on 5 systems found it generated the smallest test suites in 3 of 5 cases compared to other methods. The authors conclude the approach is efficient in reducing test suites and execution time to meet industry needs.
Online learning in estimation of distribution algorithms for dynamic environments (André Gonçalves)
This document proposes a new estimation of distribution algorithm called EDAOGMM that uses an online Gaussian mixture model to optimize problems in dynamic environments. EDAOGMM adapts its internal model through online learning as the environment changes. It was tested on benchmark dynamic optimization problems and outperformed other state-of-the-art algorithms, especially in high-frequency changing environments. Future work includes improving EDAOGMM's ability to avoid premature convergence and further experimental testing.
IMC Based Fractional Order Controller for Three Interacting Tank Process (TELKOMNIKA JOURNAL)
This document presents a study on using Internal Model Control (IMC) with fractional-order filters for controlling the liquid level in a three interacting tank process. The third-order nonlinear model of the three tank system is first reduced to a fractional-order first-order plus dead time (FO-FOPDT) model to simplify the controller design. IMC controllers with different integer and fractional-order filters are then designed and optimized using a genetic algorithm. Simulations show that the fractional-order modeled system and fractional-order filters achieve better control performance than conventional integer-order approaches.
This document summarizes a paper that introduces a mathematical approach to quantify errors resulting from reduced order modeling (ROM) techniques. ROM aims to reduce the dimensionality of input and output data for computationally intensive simulations like uncertainty quantification. The paper presents a method to calculate probabilistic error bounds for ROM that account for discarded model components. Numerical experiments on a pin cell reactor physics model demonstrate the ability to determine error bounds and validate them against actual errors with high probability. The error bounds approach could enable ROM techniques to self-adapt the level of reduction needed to ensure errors remain below thresholds for reliability.
HYBRID GENETIC ALGORITHM FOR BI-CRITERIA MULTIPROCESSOR TASK SCHEDULING WITH ... (aciijournal)
The present work considers the minimization of a bi-criteria function comprising the weighted sum of makespan and total completion time for a multiprocessor task scheduling problem. Genetic algorithms are the most appealing choice for many NP-hard problems, including multiprocessor task scheduling. The performance of a genetic algorithm depends on the quality of its initial solution, as a good initial solution yields better results. Different list-scheduling-heuristic-based hybrid genetic algorithms (HGAs) have been proposed and developed for the problem. Computational analysis based on a defined performance index has been conducted on standard task scheduling problems to evaluate the performance of the proposed HGAs. The analysis shows that ETF-GA is quite efficient and the best among the heuristic-based hybrid genetic algorithms in terms of solution quality, especially for large and complex problems.
Assessing Software Reliability Using SPC – An Order Statistics Approach (IJCSEA Journal)
There are many software reliability models that are based on the times of occurrence of errors in the debugging of software. It is shown that it is possible to do asymptotic likelihood inference for software reliability models based on order statistics or Non-Homogeneous Poisson Processes (NHPP), with asymptotic confidence levels for interval estimates of parameters. In particular, interval estimates from these models are obtained for the conditional failure rate of the software, given the data from the debugging process. The data can be grouped or ungrouped. For someone deciding when to market software, the conditional failure rate is an important parameter. Order statistics are used in a wide variety of practical situations; their use in characterization problems, detection of outliers, linear estimation, study of system reliability, life testing, survival analysis, data compression and many other fields is covered in many books. Statistical Process Control (SPC) can monitor the forecasting of software failures and thereby contribute significantly to the improvement of software reliability. Control charts are widely used for software process control in the software industry. In this paper we propose a control mechanism based on order statistics of the cumulative quantity between observations of time-domain failure data, using the mean value function of a Half Logistic Distribution (HLD) based NHPP.
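As a sketch of how such control limits can be computed (our illustration, not the paper's exact procedure), assume the HLD-based NHPP mean value function m(t) = a(1 - e^(-bt))/(1 + e^(-bt)) and the conventional 0.00135/0.5/0.99865 control probabilities; inverting m gives the control-limit times.

```python
import numpy as np

def m(t, a, b):
    """Mean value function of the HLD-based NHPP."""
    e = np.exp(-b * t)
    return a * (1 - e) / (1 + e)

def control_limit_time(p, a, b):
    """Invert m(t) = p*a: the time by which a fraction p of the
    expected failures has occurred (LCL/CL/UCL probabilities)."""
    # (1-e)/(1+e) = p  =>  e^{-bt} = (1-p)/(1+p)
    return -np.log((1 - p) / (1 + p)) / b

a, b = 81.0, 0.05          # hypothetical MLE estimates from the failure data
for label, p in [("LCL", 0.00135), ("CL", 0.5), ("UCL", 0.99865)]:
    print(label, round(control_limit_time(p, a, b), 2))
```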
A Genetic Algorithm on Optimization Test Functions (IJMERJOURNAL)
ABSTRACT: Genetic Algorithms (GAs) have become increasingly useful over the years for solving combinatorial problems. Though they are generally accepted to be good performers among metaheuristic algorithms, most works have concentrated on applications of GAs rather than their theoretical justification. In this paper, we examine and justify the suitability of Genetic Algorithms for solving complex, multi-variable and multi-modal optimization problems. To achieve this, a simple Genetic Algorithm was used to solve four standard complicated optimization test functions, namely the Rosenbrock, Schwefel, Rastrigin and Shubert functions. These functions are benchmarks for testing the quality of an optimization procedure in reaching a global optimum. We show that the method converges quickly to the global optima and that the optimal values found for the Rosenbrock, Rastrigin, Schwefel and Shubert functions are zero (0), zero (0), -418.9829 and -14.5080, respectively.
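For readers who want to reproduce the flavor of such experiments, here is a compact GA sketch (tournament selection, uniform crossover, Gaussian mutation) minimizing the Rastrigin benchmark; the operators, rates and population size are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

rng = np.random.default_rng(42)
pop = rng.uniform(-5.12, 5.12, (60, 2))          # population of 2-D candidates

for gen in range(200):
    fit = rastrigin(pop)
    # tournament selection: pick the better of two random individuals
    idx = rng.integers(0, len(pop), (len(pop), 2))
    parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # uniform crossover with a randomly permuted mate, then Gaussian mutation
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, rng.permutation(parents))
    children += rng.normal(0, 0.1, pop.shape) * (rng.random(pop.shape) < 0.2)
    pop = np.clip(children, -5.12, 5.12)

best = pop[np.argmin(rastrigin(pop))]
print("best point:", best, "f =", rastrigin(best))
```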
This document proposes and evaluates a new metaheuristic optimization algorithm called Current Search (CS) and applies it to optimize PID controller parameters for DC motor speed control. The CS is inspired by electric current flow and aims to balance exploration and exploitation. It outperforms genetic algorithm, particle swarm optimization, and adaptive tabu search on benchmark optimization problems, finding better solutions faster. When applied to optimize a PID controller for DC motor speed control, the CS successfully controlled motor speed.
This document summarizes a knowledge engineering approach using analytic hierarchy process (AHP) to resolve conflicts between experts in risk-related decision making. It proposes using a modified version of AHP to increase transparency in the analysis procedure. This allows identification of major causes of inter-expert discrepancy, which are differences in unstated assumptions and subjective weightings of risk factors. The document demonstrates how AHP can systematically decompose complex decision problems, evaluate alternatives based on multiple criteria, and aggregate results to provide an overall evaluation that incorporates differing expert opinions in a consistent manner.
SCA: a sine cosine algorithm for solving optimization problems (laxmanLaxman03209)
The document proposes a new population-based optimization algorithm called the Sine Cosine Algorithm (SCA) for solving optimization problems. SCA creates multiple random initial solutions and uses sine and cosine functions to fluctuate the solutions outward or toward the best solution, emphasizing exploration and exploitation. The performance of SCA is evaluated on test functions, qualitative metrics, and by optimizing the cross-section of an aircraft wing, showing it can effectively explore, avoid local optima, converge to the global optimum, and solve real problems with constraints.
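The core of SCA is a single position-update rule. The sketch below implements the published rule, X ← X + r1·sin(r2)·|r3·P − X| or its cosine variant chosen by r4, with r1 decreasing linearly over the run, and tests it on the sphere function; the population size and iteration budget are arbitrary.

```python
import numpy as np

def sca(f, dim, n=30, T=500, lb=-10, ub=10, a=2.0, seed=0):
    """Minimal Sine Cosine Algorithm sketch:
    X <- X + r1*sin(r2)*|r3*P - X|, or the cosine variant chosen by r4."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    P = X[np.argmin([f(x) for x in X])].copy()     # best solution found so far
    for t in range(T):
        r1 = a - t * a / T                         # shrinks: exploration -> exploitation
        r2 = rng.uniform(0, 2 * np.pi, (n, dim))
        r3 = rng.uniform(0, 2, (n, dim))
        r4 = rng.random((n, dim))
        step = np.abs(r3 * P - X)
        X = X + np.where(r4 < 0.5, r1 * np.sin(r2) * step, r1 * np.cos(r2) * step)
        X = np.clip(X, lb, ub)
        i = np.argmin([f(x) for x in X])
        if f(X[i]) < f(P):
            P = X[i].copy()
    return P, f(P)

print(sca(lambda x: np.sum(x ** 2), dim=5))        # sphere-function smoke test
```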
Asset Management Criteria - Gerencia de Activos (Benito Juarez)
What is Asset Management?
Who is the Asset Manager?
Tasks of Asset Management
Limitations of Asset Management
Tools of Asset Management
Criteria
Introduction to Reliability and Maintenance Management - Paper (Accendo Reliability)
This document provides an introduction to reliability and maintainability management. It defines key terms like reliability, maintainability, availability, and discusses how setting goals and specifications can help manage these concepts. It also covers feedback mechanisms for assessing how a product is performing compared to its goals, like discovering potential failure modes through tools like FMEA and HALT testing. The document introduces concepts like reliability apportionment and feedback loops to evaluate performance versus goals as part of an effective reliability program. It includes brief biographies of the authors to establish their expertise in reliability engineering.
1. The document contains 20 problems related to reliability engineering and design for reliability. The problems cover topics like calculating service life, tolerance analysis, interference fits, system reliability, and reliability of components.
2. Solutions to the problems are provided in the solutions manual. The problems get progressively more complex and involve reliability concepts like Weibull distribution, fatigue life, mean time between failures, and fault tree analysis.
3. The document is from a textbook on advanced engineering design with a focus on designing for reliability. It provides real-world examples and problems to help students learn reliability engineering techniques.
This document provides teaching notes and solutions to discussion and review questions for a chapter supplement on reliability. It covers key topics like quantifying reliability, the role of redundancy, and availability. Quantitative methods for determining reliability include probabilities and distributions. Students sometimes struggle with the exponential distribution, but the normal distribution is important for later concepts. The solutions provide step-by-step working of reliability calculations and examples involving probabilities, redundancy, and distributions.
Many organizations struggle with their asset management strategy; the new ISO 55000 standard will help by supporting the development of an organization-level policy and providing guidance on successfully implementing the changes in your organization.
STRATEGIC MANAGEMENT, GERENCIA ESTRATEGICA, By LIC. SALVADOR ALFARO GOMEZ, Ja... (LEWI)
Strategic management is the art, science and craft of formulating, implementing and evaluating cross-functional decisions that will enable an organization to achieve its long-term objectives.
It is the process of specifying the organization's mission, vision and objectives, developing policies and plans, often in terms of projects and programs, which are designed to achieve these objectives, and then allocating resources to implement the policies and plans, projects and programs.
Strategic management seeks to coordinate and integrate the activities of the various functional areas of a business in order to achieve long-term organizational objectives.
A balanced scorecard is often used to evaluate the overall performance of the business and its progress towards objectives.
Strategic management is the highest level of managerial activity. Strategies are typically planned, crafted or guided by the Chief Executive Officer, approved or authorized by the Board of directors, and then implemented under the supervision of the organization's top management team or senior executives.
Strategic management provides overall direction to the enterprise and is closely related to the field of Organization Studies. In the field of business administration it is useful to talk about "strategic alignment" between the organization and its environment or "strategic consistency".
There is strategic consistency when the actions of an organization are consistent with the expectations of management, and these in turn are consistent with the market and the context.
This document discusses Weibull analysis, which is commonly used in reliability engineering. The Weibull distribution can take on many shapes depending on the value of the β parameter. Weibull analysis is useful for mechanical reliability due to its versatility. The document defines the Weibull probability density function and describes how it is used to derive reliability metrics like failure rate and mean time to failure. Examples are provided to demonstrate how Weibull analysis can be used to determine failure percentages and mean time to failure for products.
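A worked example of the metrics the document derives: reliability R(t) = exp(-(t/η)^β) and MTTF = η·Γ(1 + 1/β), evaluated for hypothetical shape and characteristic-life values.

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta); the hazard is h(t) = (beta/eta)*(t/eta)**(beta-1)."""
    return math.exp(-((t / eta) ** beta))

def weibull_mttf(beta, eta):
    """MTTF = eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1 + 1 / beta)

beta, eta = 2.5, 1000.0      # hypothetical shape and characteristic life (hours)
print("R(500 h)            =", round(weibull_reliability(500, beta, eta), 4))
print("failures by 500 h   =", round(1 - weibull_reliability(500, beta, eta), 4))
print("MTTF                =", round(weibull_mttf(beta, eta), 1), "hours")
```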
This document describes several management models, such as Total Quality, Kaizen, Just-in-Time, Human Scale Development, Empowerment, Reengineering, Benchmarking and Outsourcing. Each model is briefly explained with its principles and mode of operation.
This document presents general information about an international diploma course in maintenance management to be held in Lima, Peru from 28 to 30 January 2011. The course focuses on the management of maintenance systems and consists of 16 hours divided into 8 sessions covering topics such as maintenance planning, maintenance execution, outsourcing, predictive maintenance and maintenance management. The general objective is to identify the key processes of modern maintenance management.
Reliability engineering deals with studying, evaluating, and managing the reliability of systems and components. It aims to ensure equipment operates reliably under all conditions as required. As systems become more complex with many interconnected parts, the overall reliability decreases unless the reliability of individual components is improved. Reliability engineering applies principles of probability and statistics to analyze failure rates and availability of systems. It is important for fields such as aviation, defense, and healthcare where failure could result in dangerous situations.
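The multiplicative effect on overall reliability is easy to quantify for a series system, where the system survives only if every component does; a two-line illustration (the component count and reliability are made up):

```python
from math import prod

# In a series system the component reliabilities multiply, so overall
# reliability drops as parts are added unless each part improves:
component_r = [0.99] * 50      # 50 components, each 99% reliable
print(prod(component_r))       # ~0.605: the system fails roughly 40% of the time
```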
Dimitri Kececioglu, Reliability Engineering Handbook Vol. 2 (Rahman Hakim)
All of the material inside is unlicensed; kindly use it for educational purposes only and please do not commercialize it.
In the spirit of 'ilman nafi'an (beneficial knowledge), hopefully this file is beneficial for you.
Thank you.
Achieving high product reliability has become increasingly vital for manufacturers in order to meet customer expectations amid the threat of strong global competition. Poor reliability can doom a product and jeopardize the reputation of a brand or company. Inadequate reliability also presents financial risks from warranty claims, product recalls, and potential litigation. When developing new products, it is imperative that manufacturers develop reliability specifications and use methods to predict and verify that those specifications will be met. This 4-hour course provides an overview of quantitative methods for predicting product reliability from data gathered from physical testing or from field data.
This document presents an introduction to strategic analysis and strategic management. It explains key concepts such as strategic thinking, the strategic management process, and analysis tools such as PEST, the SWOT matrix, and the blue ocean strategy. The objective is to understand the importance of strategic direction and how to apply it in practice to generate competitive advantages.
This is a three-part lecture series covering the basics and fundamentals of reliability engineering. Part 1 begins with an introduction to the definition of reliability and other reliability characteristics and measurements. Part 2 follows with reliability calculations, estimation of failure rates, and the implications of failure rates for system maintenance and replacements. Part 3 then covers the most important and practical failure-time distributions, how to obtain their parameters, and how to interpret them. Hands-on computations of the failure rates and estimation of the failure-time distribution parameters are conducted using standard Microsoft Excel.
Part 2. Reliability Calculations
1. Use of failure data
2. Density functions
3. Reliability function
4. Hazard and failure rates
This is a three-part lecture series covering the basics and fundamentals of reliability engineering. Part 1 begins with an introduction to the definition of reliability and other reliability characteristics and measurements. Part 2 follows with reliability calculations, estimation of failure rates, and the implications of failure rates for system maintenance and replacements. Part 3 then covers the most important and practical failure-time distributions, how to obtain their parameters, and how to interpret them. Hands-on computations of the failure rates and estimation of the failure-time distribution parameters are conducted using standard Microsoft Excel.
Part 1. Reliability Definitions
1. Reliability - a time-dependent characteristic
2. Failure rate
3. Mean time to failure
4. Availability
5. Mean residual life
This document presents a summary of 6 volumes on industrial maintenance. The volumes cover topics such as systematic maintenance, shutdowns and major overhauls, and predictive maintenance. The document also describes the types of companies that employ systematic maintenance and how this type of maintenance can be contracted.
DETECTION OF RELIABLE SOFTWARE USING SPRT ON TIME DOMAIN DATA (IJCSEA Journal)
In classical hypothesis testing, volumes of data must be collected before conclusions can be drawn, which may take considerable time. Sequential analysis, a branch of statistics, can instead be adopted to decide very quickly whether the developed software is reliable or unreliable. The procedure adopted for this is the Sequential Probability Ratio Test (SPRT). In the present paper we evaluate the performance of the SPRT on time-domain data using the Weibull model and analyze the results by applying it to 5 datasets. The parameters are estimated using maximum likelihood estimation.
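A minimal sketch of the Wald decision boundaries the SPRT relies on; the log-likelihood-ratio update itself depends on the Weibull model and the observed data, so only the thresholding logic is shown here, with illustrative α = β = 0.05 risks and an assumed mapping of H0 to "reliable".

```python
import math

def sprt_bounds(alpha, beta):
    """Wald's accept/continue/reject thresholds on the log-likelihood ratio:
    accept H0 below log(beta/(1-alpha)), accept H1 above log((1-beta)/alpha)."""
    return math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)

def sprt_decision(log_lr, alpha=0.05, beta=0.05):
    lo, hi = sprt_bounds(alpha, beta)
    if log_lr <= lo:
        return "accept H0 (software reliable)"
    if log_lr >= hi:
        return "accept H1 (software unreliable)"
    return "continue testing"

# after each observed failure, update log_lr with the Weibull-model
# log-likelihood ratio and re-check the decision:
print(sprt_bounds(0.05, 0.05))    # (-2.944..., 2.944...)
print(sprt_decision(-3.1))
```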
Real-time PMU Data Recovery Application Based on Singular Value Decomposition (Power System Operation)
Phasor measurement units (PMUs) allow for the enhancement of power system monitoring and control applications and they will prove even more crucial in the future, as the grid becomes more decentralized and subject to higher uncertainty. Tools that improve PMU data quality and facilitate data analytics workflows are thus needed. In this work, we leverage a previously described algorithm to develop a python application for PMU data recovery. Because of its intrinsic nature, PMU data can be dimensionally reduced using singular value decomposition (SVD). Moreover, the high spatio-temporal correlation can be leveraged to estimate the value of measurements that are missing due to drop-outs. These observations are at the base of the data recovery application described in this work. Extensive testing is performed to study the performance under different data drop-out scenarios, and the results show very high recovery accuracy. Additionally, the application is designed to take advantage of a high performance PMU data platform called PredictiveGrid™, developed by PingThings.
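A toy version of the underlying idea (our sketch, not the PredictiveGrid application itself): fill the drop-outs, project the measurement matrix onto a low rank via SVD, and iterate, exploiting the spatio-temporal correlation across channels. The synthetic data below stands in for real PMU streams.

```python
import numpy as np

def svd_impute(M, mask, rank=3, iters=50):
    """Recover missing samples by iterative low-rank SVD reconstruction.
    M: measurements (time x channels); mask: True where data is present."""
    X = M.copy()
    X[~mask] = M[mask].mean()                     # crude initial fill
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[~mask] = low_rank[~mask]                # only overwrite the drop-outs
    return X

# synthetic, spatio-temporally correlated "PMU" data with ~20% drop-outs
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 300)[:, None]
M = np.sin(2 * np.pi * 0.6 * t) @ np.ones((1, 8)) + 0.01 * rng.normal(size=(300, 8))
mask = rng.random(M.shape) > 0.2
rec = svd_impute(M, mask, rank=1)
print("recovery RMSE:", np.sqrt(np.mean((rec[~mask] - M[~mask]) ** 2)))
```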
A study of the Behavior of Floating-Point Errors (ijpla)
The dangers of programs performing floating-point computations are well known, due to numerical reliability issues resulting from rounding errors arising during the computations. In general, these round-off errors are neglected because they are small; however, they can accumulate and propagate, leading to faulty execution and failures. Typically, in critical embedded-systems scenarios, such faults may cause dramatic damage (e.g., the failures of the Ariane 5 launch and the Patriot rocket mission). The ufp (unit in the first place) and ulp (unit in the last place) functions are used to estimate the maximum value of round-off errors. In this paper, the idea consists of studying the behavior of round-off errors, checking their numerical stability using a set of constraints, and ensuring that the computed round-off errors do not grow when solving constraints on the ufp and ulp values.
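Python exposes ulp directly (math.ulp, Python 3.9+) and ufp is a one-liner, so the quantities the paper bounds are easy to inspect; a quick look at the classic 0.1 + 0.2 example:

```python
import math

def ufp(x):
    """Unit in the first place: the weight of the most significant bit of x."""
    return 0.0 if x == 0 else 2.0 ** math.floor(math.log2(abs(x)))

x = 0.1 + 0.2                         # classic rounding-error example
print(x)                              # 0.30000000000000004
print(math.ulp(x))                    # unit in the last place of the result
print(ufp(x))                         # 0.25: weight of the leading bit
print(abs(x - 0.3) <= math.ulp(0.3))  # True: the error stays within one ulp here
```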
A Comparative study of Locality Preserving Projection & Principal Component Analysis (RAHUL WAGAJ)
The document compares the dimensionality reduction techniques of locality preserving projection (LPP) and principal component analysis (PCA) when used with logistic regression for classification. Five public datasets were used to evaluate the techniques. LPP was found to outperform PCA across all datasets and performance metrics by better preserving the local data structure, which is more important for classification than the global structure preserved by PCA. LPP achieved higher accuracy, sensitivity, specificity, precision, F-score, and area under the ROC curve than PCA for all datasets. The results indicate LPP is an effective dimensionality reduction method for classification tasks when local structure is significant.
This document compares the performance of two neural network architectures, multi-layer perceptron (MLP) and radial basis function (RBF) networks, on a face recognition system. It trains MLP networks using different variants of the backpropagation algorithm and compares the results to RBF networks. The document finds that RBF networks provide better generalization performance compared to backpropagation algorithms and have faster training times, making them more suitable for face recognition.
This document describes a new recursive Monte Carlo simulation algorithm called the Sampled Path Set Algorithm (SPSA) for modeling complex k-out-of-n reliability systems. The SPSA uses a graph representation of a reliability block diagram and recursively searches the graph to determine system response based on the system state vector at each simulation iteration, allowing modeling of systems with general component failure and repair distributions and large numbers of components. Existing methods for analyzing such systems using tie/cut sets have limitations as the number of sets grows non-linearly with increased system complexity. The SPSA provides a more efficient alternative with linear growth in processing and memory requirements.
Neural Network-Based Actuator Fault Diagnosis for a Non-Linear Multi-Tank SystemISA Interchange
The paper is devoted to the problem of robust actuator fault diagnosis for dynamic non-linear systems. In the proposed method, it is assumed that the diagnosed system can be modelled by a recurrent neural network, which can be transformed into a linear parameter varying form. Such a system description allows developing a design scheme for a robust unknown input observer within the H-infinity framework for a class of non-linear systems. The proposed approach is designed so that a prescribed disturbance attenuation level is achieved with respect to the actuator fault estimation error while guaranteeing convergence of the observer. The application of the robust unknown input observer enables actuator fault estimation, which allows applying the developed approach to fault tolerant control tasks.
An SPRT Procedure for an Ungrouped Data using MMLE ApproachIOSR Journals
This document describes a sequential probability ratio test (SPRT) procedure for analyzing ungrouped software failure data using a modified maximum likelihood estimation (MMLE) approach. The SPRT procedure can help quickly detect unreliable software by making decisions with fewer observed failures than traditional hypothesis testing methods. Parameters are estimated using MMLE, which approximates functions in the maximum likelihood equation with linear functions to simplify calculations compared to other estimation methods. The document provides details on how to apply the SPRT procedure and MMLE parameter estimation to a software reliability growth model to analyze software failure data sequentially and detect unreliable software components earlier.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
ENSEMBLE REGRESSION MODELS FOR SOFTWARE DEVELOPMENT EFFORT ESTIMATION: A COMP...ijseajournal
As demand for computer software continually increases, software scope and complexity are higher than ever, and the software industry is in real need of accurate estimates for projects under development. Software development effort estimation is one of the main processes in software project management; overestimation and underestimation alike can cause losses to the software industry. This study determines which technique has better effort-prediction accuracy and proposes combined techniques that could provide better estimates. Eight different ensemble models for effort estimation were compared with each other based on predictive accuracy, using the Mean Absolute Residual (MAR) criterion and statistical tests. The results indicate that the proposed ensemble models deliver high efficiency in contrast to their counterparts and produce the best responses for software project effort estimation; they should therefore help project managers working to develop quality software.
Artificial Neural Network and Multi-Response Optimization in Reliability Meas...inventionjournals
A neural network is an important tool for reliability analysis, including estimation of reliability or utility functions that are too complicated to be expressed analytically for large or complex systems. It has been demonstrated that the neural network yields a significant improvement in parameter-estimation accuracy over the traditional chi-square test. Many parameters of a neural network must be determined when training on a dataset, since different algorithm-parameter setups affect estimation performance in either accuracy or computational efficiency. In this paper, neural network training is used to estimate the utility function for the parallel-series redundancy allocation problem, and a weighted principal-component-based multi-response optimization method is applied to find the optimal setting of the neural network parameters, so that training error and computing time are minimized simultaneously.
Verification of confliction and unreachability in rule based expert systems w...ijaia
It is important to find optimal solutions for structural errors in rule-based expert systems. Solutions for
discovering such errors by using model checking techniques have already been proposed, but these
solutions have problems such as state-space explosion. In this paper, to overcome these problems, we
model the rule-based systems as finite state transition systems, express confliction and
unreachability as Computation Tree Logic (CTL) formulas, and then use the technique of model
checking to detect confliction and unreachability in rule-based systems with the model checker UPPAAL.
This document summarizes a study on estimating reliability and maintainability (RAM) parameters of a component under imperfect maintenance conditions. The study formulates the problem as a nonlinear optimization model with the likelihood function as the objective to be maximized. A genetic algorithm is then used to solve the model and determine RAM parameter values, optimal maintenance schedules, and component replacement times. The approach provides a way to estimate RAM parameters while accounting for the non-homogeneity of failure processes due to imperfect repair and maintenance.
A Review on Parameter Estimation Techniques of Software Reliability Growth Mo...Editor IJCATR
Software reliability is considered a quantifiable metric, defined as the probability that software operates
without failure for a specified period of time in a specific environment. Various software reliability growth models have been proposed
to predict the reliability of software. These models help vendors predict the behaviour of the software before shipment. The
reliability is predicted by estimating the parameters of the software reliability growth models. However, the model parameters generally
stand in nonlinear relationships, which creates many problems in finding the optimal parameters using traditional techniques like Maximum
Likelihood and Least Squares Estimation. Various stochastic search algorithms have been introduced which have made the task of
parameter estimation more reliable and computationally easier. Parameter estimation of NHPP-based reliability models, using MLE
and using an evolutionary search algorithm called Particle Swarm Optimization, is explored in the paper.
UNDERSTANDING LEAST ABSOLUTE VALUE IN REGRESSION-BASED DATA MININGIJDKP
This article advances our understanding of regression-based data mining by comparing the utility of Least
Absolute Value (LAV) and Least Squares (LS) regression methods. Using demographic variables from
U.S. state-wide data, we fit regression models to dependent variables of varying distributions
using both LS and LAV. Forecasts generated from the resulting equations are used to compare the
performance of the regression methods under different dependent-variable distribution conditions. Initial
findings indicate that LAV procedures forecast better in data mining applications when the dependent variable
is non-normal. Our results differ from those found in prior research using simulated data.
EXPLAINING THE PERFORMANCE OF COLLABORATIVE FILTERING METHODS WITH OPTIMAL DA...IJCI JOURNAL
The performance of a Collaborative Filtering (CF) method depends on the properties of a User-Item Rating Matrix (URM), and these properties, or Rating Data Characteristics (RDC), are constantly changing. Recent studies explained much of the variation in the performance of CF methods due to changes in the URM using six or more RDC. Here, we found that a significant proportion of the variation in the performance of different CF techniques can be attributed to just two RDC: the number of ratings per user, or Information per User (IpU), and the number of ratings per item, or Information per Item (IpI). Moreover, the performance of CF algorithms is quadratic in IpU (or IpI) for a square URM. The findings of this study are based on seven well-established CF methods and three popular public recommender datasets: 1M MovieLens, 25M MovieLens, and the Yahoo! Music Rating dataset.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. First, a doubly fed induction generator model was constructed. A control law was then formulated to govern the flow of energy between the stator of the DFIG and the network using three types of controllers: proportional integral (PI), sliding mode controller (SMC), and second-order sliding mode controller (SOSMC). Their results in terms of power-reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine-parameter variations are compared. MATLAB/Simulink was used to conduct the simulations for the study. The simulations show very satisfying results and demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA in new pavement can minimize the carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared with natural aggregate (NA) pavement, however, RCA pavement has received fewer comprehensive studies and sustainability assessments.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
ACEP Magazine 4th edition launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long Short-Term Memory (LSTM) algorithm. We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Testing the performance of the power law process model considering the use of regression estimation approach
International Journal of Software Engineering & Applications (IJSEA), Vol.5, No.5, September 2014
TESTING THE PERFORMANCE OF THE POWER
LAW PROCESS MODEL CONSIDERING THE USE
OF REGRESSION ESTIMATION APPROACH
Lutfiah Ismail Al turk
Statistics Department, King Abdulaziz University, Kingdom of Saudi Arabia
ABSTRACT
Within the class of non-homogeneous Poisson process (NHPP) models, and as a result of the simplicity of
the mathematical computations of the Power Law Process (PLP) model and the attractive physical
explanation of its parameters, this model has received considerable attention in the repairable-systems literature.
In this article, we investigate a new estimation approach, the regression estimation
procedure, and its effect on the performance of the parametric PLP model. The regression approach for estimating the
unknown parameters of the PLP model through the mean time between failures (TBF) function is evaluated
against the maximum likelihood estimation (MLE) approach. The results from the regression and MLE
approaches are compared based on three error evaluation criteria in terms of parameter estimation and its
precision. The numerical application shows the effectiveness of the regression estimation approach at
enhancing the predictive accuracy of the TBF measure.
KEYWORDS:
The Power Law Process Model, Regression Estimation Approach, Maximum Likelihood Estimation,
Time-Interval between Failures, Mean Time between Failures.
1. INTRODUCTION
The Power Law Process (PLP) model is a popular infinite-failure NHPP model used to describe the
reliability of repairable systems based on the analysis of observed failure data; the model originated in
the hardware reliability area, and there is a large literature on it from a classical-statistics point of view.
Duane [1] was the first to propose the PLP model. Analyzing failure data collected during the
development of several failing systems, he formulated a mathematical relationship for predicting and
observing reliability enhancement as a function of cumulative failure time. He conducted several
hardware applications in which the rate of failure occurrence followed a power law in operating time,
and in analyzing the failure data he noticed that the cumulative MTBF versus cumulative operating time
followed a straight line on a log-log plot. The model was later studied comprehensively by Crow [2],
who formulated it as a non-homogeneous Poisson process (NHPP) with a power intensity law.
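Duane's log-log observation is easy to visualize. The following minimal R sketch (R being the language the paper itself uses for its analysis) plots cumulative MTBF against cumulative operating time on log-log axes; the failure record here is purely illustrative, not one of the paper's data sets. Approximate linearity of the points suggests a power law trend.

    # Minimal Duane plot sketch: cumulative MTBF versus cumulative operating
    # time on log-log axes. The failure times below are illustrative only.
    s <- c(320, 810, 1500, 2400, 3600, 5200, 7400, 10300)  # cumulative failure times
    cum_mtbf <- s / seq_along(s)                           # cumulative MTBF after each failure
    plot(log(s), log(cum_mtbf),
         xlab = "log cumulative operating time",
         ylab = "log cumulative MTBF",
         main = "Duane plot")
    abline(lm(log(cum_mtbf) ~ log(s)))                     # near-linearity suggests a power law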
DOI: 10.5121/ijsea.2014.5503
Although some problems may arise, the PLP model can be applied in software reliability, and it has
been used in many successful applications, particularly in the defense industry. To model the
failure rate of repairable systems, the PLP model was adopted by the United States Army Materiel
Systems Analysis Activity and called the Crow-AMSAA model. In standard applications it is
assumed that the recurrence rate is the same for all systems that are observed. Bain [3] analyzed
independent, equivalent multi-systems by employing the PLP model. Much theoretical work
describing the PLP model has been performed (for example, Lee, L. and Lee, K. [4], and Engelhardt and
Bain [5]).
The PLP model has been widely used for reliability growth, for repairable systems, and in software
reliability models (see Crow [6], Ascher and Feingold [7], and Kyparisis and Singpurwalla [8]).
Littlewood [9] modified this model to overcome a problem associated with it, the problem of an
infinite failure rate at time zero. Rigdon [10] showed that the claim that a linear
Duane plot implies a PLP is wrong, and that its converse is false as well.
Calabria et al. [11] studied modified maximum likelihood (ML) estimators for the PLP model, the
failure intensity, and the expected number of failures in a given time interval. Classical
inference on the PLP model, such as point estimation, confidence intervals, tests of hypothesis for the
parameters, and estimates of the intensity function, was reviewed by Rigdon and Basu [12]. Yu et
al. [13] considered the maximum likelihood estimates and the confidence intervals for the
unknown parameters with left-truncated data. They also discussed hypothesis testing, the
goodness-of-fit test, and prediction limits for future failure times with failure-truncated data.
Bayesian inference on the PLP model has also been studied during the past two decades. One of the
early papers on Bayesian NHPP models, by Higgins and Tsokos [14], takes the PLP as the underlying
model. Bayesian point and interval estimates were obtained by Guida et al. [15], and by Kyparisis
and Singpurwalla [8]. Calabria et al. [16] considered lower bounds for a test of equality of
trends in several independent power law processes. Huang and Bier [17] presented a natural
conjugate prior for the PLP model. Using the Bayesian method, Tian et al. [18] developed
estimation and prediction methods for the PLP model in the left-truncated case. They also
obtained Bayesian point and credible-interval estimates for the parameters of interest.
The extended case of inverting the Fisher information matrix to quantify the precision of the
parameter estimates for the PLP model is considered by Van Dyck and Verdonck [19], where the
recurrence rate between the different systems may vary with known scaling factors. They showed
that the standard error of the parameter estimates can be quantified using an analytical method.
In this article, three test methods are applied through a numerical application to evaluate the
effectiveness of the regression estimation approach based on the μTBF function. For the four
real failure data sets considered in this article, we show that the regression estimation
approach works better than the maximum likelihood approach: the regression approach gives lower
error on all of the measurement criteria used. The rest of this article is organized as follows. In Section 2,
the mathematical formulas of the PLP model's characteristics are presented. Section 3 briefly discusses
the estimation of the unknown parameters for the PLP model, considering only the case of time intervals
between failures. Section 4 briefly presents the selected evaluation criteria. Section 5 is an application that
compares the performance of the regression estimation procedure and the maximum likelihood
estimation method on four real data sets. Section 6 gives the conclusions of the data analysis.
2. THE POWER LAW PROCESS (PLP) MODEL
One particular and most commonly used NHPP model in the literature is the PLP, also
called the Duane, Crow-AMSAA, or Weibull process model. This model was first proposed by
Duane [1] and later developed by Crow [2] as an NHPP model for hardware reliability. The model
can also be applied to the prediction of software reliability, and it represents a
functional relationship between two quantities, where one quantity varies as a power of the other;
its parameters, however, cannot be related directly to the physical characteristics of the failure
system. The model describes the failure history of repairable systems by supposing an NHPP for
the counting process {N(t), t ≥ 0}. Some measures of reliability of the PLP model are the mean
value function, i.e. the expected cumulative number of failures in [0, t), and the conditional
reliability function; their mathematical formulas are, respectively,

μ(t) = (t/θ)^β,    (1)

and

R(t | s) = exp{−[μ(s + t) − μ(s)]} = exp{−[((s + t)/θ)^β − (s/θ)^β]}.    (2)
The error-detection rate per error has the following form:

d(t) = λ(t)/μ(t) = β/t,    (3)

while the corresponding expected number of remaining errors in a future interval (t, t + s] is given by

μ(t + s) − μ(t) = ((t + s)/θ)^β − (t/θ)^β.    (4)
The failure intensity function depends only on the cumulative failure time and not on the previous
pattern of failure times; it is obtained as

λ(t) = dμ(t)/dt = (β/θ)(t/θ)^(β−1),    (5)

and the mean time between failures (μTBF) is found as the inverse of the intensity function:

μTBF(t) = 1/λ(t) = (θ/β)(t/θ)^(1−β),    (6)
where θ > 0 is the scale parameter, β > 0 is the shape parameter, and 0 < s_1 < s_2 < ⋯ denote the
failure times. For the PLP model, if β > 1 the failure intensity increases with time and the times
between failures tend to become shorter, indicating that the software reliability is deteriorating.
If β < 1 the failure intensity decreases with time and the times between failures tend to become
longer, so the system being modelled shows rapid reliability improvement. When β = 1 the mean time
between failures is equal to a constant value, the system remains stable over time, and the PLP model
reduces to the homogeneous Poisson process (HPP) [see Figure 1].
Figure 1. Intensity function plot of the PLP model: intensity versus cumulative failure time (CFT) for β > 1, β = 1, and β < 1.
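For concreteness, Equations (1), (5), and (6) translate directly into code. The paper's own R implementation is not reproduced here; the sketch below is a minimal stand-in under the parameterization above, with illustrative function names.

    # PLP measures under mu(t) = (t/theta)^beta; theta = scale, beta = shape.
    mu     <- function(t, theta, beta) (t / theta)^beta                         # Eq. (1)
    lambda <- function(t, theta, beta) (beta / theta) * (t / theta)^(beta - 1)  # Eq. (5)
    mu_tbf <- function(t, theta, beta) 1 / lambda(t, theta, beta)               # Eq. (6)

    # beta < 1 gives a decreasing intensity (reliability growth), so muTBF grows:
    mu_tbf(c(100, 1000, 10000), theta = 500, beta = 0.8)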
3 Estimation of the PLP Model’s Parameters for the Case of Time Data
Statistical inference procedures can be used easily and applied to the PLP model. For non-homogeneous
Poisson process (NHPP) models, there are two statistical categories, namely:
i. Time-interval between failures.
ii. Number of failures observed in a specified interval.
In this article, the estimation methods are applied to the first category, where the successive
times to failure are treated as random variables.
3.1 The MLE approach of the PLP model
The parameters of the PLP model are easy to estimate. Considering Equations (1) and (5), the
corresponding cumulative mean value and intensity functions at the observed failure times are,
respectively,

μ(s_i) = (s_i/θ)^β  and  λ(s_i) = (β/θ)(s_i/θ)^(β−1).    (7)
Derived from the NHPP, and supposing that the cumulative failure time data s_i (i = 1, 2, …, n)
are observed during the testing phase, the likelihood function of the PLP model is defined as

L(β, θ) = [∏_{i=1}^{n} λ(s_i)] exp{−μ(s_n)} = (β/θ^β)^n [∏_{i=1}^{n} s_i^(β−1)] exp{−(s_n/θ)^β}.    (8)
The natural logarithm of the joint density is

ln L(β, θ) = n ln β − nβ ln θ + (β − 1) Σ_{i=1}^{n} ln s_i − (s_n/θ)^β.    (9)
We proceed to estimate θ and β. Taking the first partial derivative with respect to θ and setting
the equation equal to zero yields

∂ ln L/∂θ = −(nβ/θ) + (β/θ)(s_n/θ)^β = 0.    (10)

Solving for θ, we have

θ̂ = s_n / n^(1/β).    (11)
Also taking the first partial derivative with respect to β and setting the equation equal to zero yields

∂ ln L/∂β = n/β − n ln θ + Σ_{i=1}^{n} ln s_i − (s_n/θ)^β ln(s_n/θ) = 0,    (12)

which, on substituting θ̂ from Equation (11), gives

β̂ = n / (n ln s_n − Σ_{i=1}^{n} ln s_i).    (13)
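The closed-form estimators (11) and (13) require only the failure times themselves. Below is a minimal R sketch, assuming a failure-truncated record of cumulative failure times s_1 < … < s_n; it is an illustration, not the author's own code.

    # Closed-form PLP MLEs for failure-truncated data (Eqs. (11) and (13)).
    plp_mle <- function(s) {
      n     <- length(s)                          # number of observed failures
      beta  <- n / (n * log(s[n]) - sum(log(s)))  # Eq. (13)
      theta <- s[n] / n^(1 / beta)                # Eq. (11)
      list(beta = beta, theta = theta)
    }

For a cumulative failure time vector s, plp_mle(s) returns the shape and scale estimates.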
3.2 The regression estimation approach of the PLP model
The μTBF is the basic measure of the system's reliability, and for the regression procedure we use the
μTBF to estimate the unknown parameters. Taking the natural logarithm of both sides of the
μTBF function in Equation (6), we get

ln[μTBF(s_i)] = (β ln θ − ln β) + (1 − β) ln s_i.    (14)

Using the method of least squares for this linear regression model (Ryan [20]), we can
derive the regression estimators of β and θ as follows:
β̂_REG = 1 − [ n Σ_{i=1}^{n} (ln s_i)(ln μTBF_i) − (Σ_{i=1}^{n} ln s_i)(Σ_{i=1}^{n} ln μTBF_i) ] / [ n Σ_{i=1}^{n} (ln s_i)^2 − (Σ_{i=1}^{n} ln s_i)^2 ],    (15)

and

θ̂_REG = exp{ [ (1/n) Σ_{i=1}^{n} ln μTBF_i − (1 − β̂_REG)(1/n) Σ_{i=1}^{n} ln s_i + ln β̂_REG ] / β̂_REG },    (16)

where 0 < s_1 < s_2 < ⋯ < s_n denote the cumulative failure times of occurrence and μTBF_i denotes the observed mean time between failures at s_i.
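Because Equation (14) is linear in ln s, the estimators (15) and (16) amount to an ordinary least-squares fit followed by inversion of the intercept. A minimal R sketch follows, under the assumption (made here for illustration; the paper does not show its code) that the observed times between failures play the role of μTBF(s_i).

    # Regression estimates of the PLP parameters from Eqs. (14)-(16).
    # Assumes strictly increasing cumulative failure times s.
    plp_reg <- function(s) {
      tbf   <- diff(c(0, s))                # observed TBFs from cumulative failure times
      fit   <- lm(log(tbf) ~ log(s))        # OLS fit of Eq. (14)
      a     <- unname(coef(fit)[1])         # intercept = beta*log(theta) - log(beta)
      beta  <- 1 - unname(coef(fit)[2])     # slope = 1 - beta, Eq. (15)
      theta <- exp((a + log(beta)) / beta)  # invert the intercept for theta, Eq. (16)
      list(beta = beta, theta = theta)
    }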
4 Error Measurement Criteria
After the estimation of the model parameters, several tests exist to evaluate the performance of
software reliability models quantitatively. The literature (Sharma et al. [21], Schneidewind [22]) refers
to several common evaluation criteria for software reliability models. In this section, three of them
are considered in order to evaluate the accuracy of the estimates from the PLP model; they are
described below. For all three, smaller values indicate better model performance.
4.1 The Bias
The bias is the average difference between the estimated curve and the actual data, computed as

Bias = (1/n) Σ_{i=1}^{n} [μTBF_pred(s_i) − μTBF_obs(s_i)].    (17)
4.2 The mean squared error (MSE)
The MSE is the most widely used criterion for evaluating prediction. It describes the
deviation between the predicted values and the actual values, and is defined as

MSE = Σ_{i=1}^{n} [μTBF_pred(s_i) − μTBF_obs(s_i)]^2 / (n − p).    (18)
4.3 The mean error of prediction (MEOP)
The MEOP is the summation of the absolute values of the deviations between the real data and the
predicted values:

MEOP = Σ_{i=1}^{n} |μTBF_obs(s_i) − μTBF_pred(s_i)| / (n − p + 1),    (19)
where n represents the number of measurements used for estimating the model parameters, and p
the number of parameters.
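A small helper can return all three criteria at once. The sketch below follows the reconstruction of Equations (17)-(19) above, with denominators n, n − p, and n − p + 1 and p = 2 parameters for the PLP; these denominator conventions are the common ones in the software reliability literature and are an assumption here.

    # Bias, MSE, and MEOP of predicted versus observed muTBF values.
    plp_criteria <- function(obs, pred, p = 2) {
      n <- length(obs)
      c(Bias = sum(pred - obs) / n,                  # Eq. (17)
        MSE  = sum((pred - obs)^2) / (n - p),        # Eq. (18)
        MEOP = sum(abs(obs - pred)) / (n - p + 1))   # Eq. (19)
    }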
5 Empirical Application
In our analysis, four real failure data sets are analyzed using the PLP model; to make the
analysis of the failure data easy to implement, R code was created. As a first step, and for the case
of TBF data, the estimates of the unknown parameters are found using the MLE and the regression
approaches. Both an objective method and a graphical method based on the cumulative mean time
between failures are used to study the effectiveness of the regression estimation procedure, and
three evaluation criteria are used to evaluate the prediction results, as sketched below.
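Reusing the illustrative helpers sketched in the earlier sections (plp_mle, plp_reg, mu_tbf, and plp_criteria), the comparison can be wired together roughly as follows; this mirrors the structure of the analysis, not the paper's actual R code.

    # End-to-end sketch: fit both approaches and tabulate the three criteria.
    compare_plp <- function(s) {
      tbf  <- diff(c(0, s))                       # observed TBFs
      fits <- list(MLE = plp_mle(s), REG = plp_reg(s))
      do.call(rbind, lapply(fits, function(f)
        plp_criteria(obs = tbf, pred = mu_tbf(s, f$theta, f$beta))))
    }

Applied to each of the four data sets, this yields one row of criteria per estimation approach, analogous to Table 1.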
5.1 Failure datasets
To evaluate the performance of the regression estimation approach against the traditional MLE
approach, four real software failure data sets are analyzed. The first data set is presented
by Musa [23]. The second data set is taken from Jeske [24]. The Apollo 8 software test data were first
reported in Jelinski and Moranda [25]. Tohma's software failure data are reported and studied by
Tohma [26]. Figure 2 shows the time between failures versus the failure number for all the
selected failure data sets.
Figure 2. The time between failures (TBF) versus the failure number for Musa’s Data (1979), Jeske’s Data (2000), Apollo 8 Data (1972), and Tohma’s Data (1989).
5.2 Discussion
To test the performance of the PLP model on the four selected failure data sets, the estimates of
the unknown parameters θ and β are first obtained using the MLE and the regression approaches;
for the regression, the reliability metric μTBF is considered. The actual and the predicted cumulative
μTBF against cumulative failure time are represented graphically in Figure 3. From this
figure we can see that, under the μTBF regression approach, the PLP model estimates the μTBF
quite well for the four selected failure data sets: the predicted μTBF curve follows the actual μTBF
curve very closely, which suggests that the four selected failure data sets follow the PLP model
very closely. This is not the case with the MLE approach, which gives poor prediction results.
Table 1 reports the MSE, Bias, and MEOP; the lower values of these measures obtained with the
regression estimation approach indicate the effectiveness of this procedure. All the selected
performance measures agree that the regression approach gives enhanced prediction results.
Figure 3. Cumulative μTBF versus cumulative failure time (CFT): (a) Musa’s Data (1979), (b) Jeske’s Data (2000), (c) Apollo 8 Data (1972), (d) Tohma’s Data (1989). Each panel shows the observed cumulative μTBF together with the μTBF predicted by the regression and MLE approaches.
Table 1. The MSE, Bias, and MEOP of the two estimation approaches

Dataset               Criterion   MLE approach    Regression approach
Musa's Data (1979)    MSE         483927928       1739304
                      Bias        13067.4700      348.8833
                      MEOP        13420.6500      358.3126
Jeske's Data (2000)   MSE         1330498442      1263012
                      Bias        25966.6200      219.7441
                      MEOP        28130.5000      238.0561
Apollo 8 Data (1972)  MSE         6402.7620       21.2888
                      Bias        67.8297         0.8530
                      MEOP        70.7788         0.8901
Tohma's Data (1989)   MSE         2505.0440       5.4390
                      Bias        37.5658         0.6771
                      MEOP        39.3546         0.7093
6 CONCLUSION
The PLP model is one of the earliest proposed and most commonly used reliability models; it is simple
and flexible and has performed very well in many reliability-prediction applications. It can be
used to model deteriorating systems as well as improving systems. In this article, the
performance of the PLP model has been evaluated on several repairable-system data sets using the MLE
method, obtained from the likelihood function, and the regression estimation
approach based on the μTBF function of cumulative failure time data. According to the selected
error measurement criteria, the use of the regression estimation approach resulted in much-enhanced
predictive capability and produced a good evaluation of the μTBF, whereas the MLE approach
gave poor results on all three evaluation criteria. This suggests that, in software
reliability modeling, the regression approach should be the first one considered for software
reliability prediction.
REFERENCES
[1] Duane, J. T. (1964). Learning Curve Approach to Reliability Monitoring, IEEE Trans. Aerospace
Electron System, 2, 563-566.
[2] Crow, L. H. (1974). Reliability for Complex Systems, Reliability and Biometry. Society for Industrial
and Applied Mathematics (SIAM), pp. 379-410.
[3] Bain, L. J. (1978). Statistical Analysis of Reliability and Life Testing Models, Decker, New York.
[4] Lee, L. and Lee, K. (1978). Some Results on Inference for the Weibull Process, Technometrics, 20,
41-45.
[5] Engelhardt, M. and Bain, L. J. (1986). On the Mean Time Between Failures for Repairable Systems.
IEEE Transactions on Reliability, R-35, 419-422.
[6] Crow, L. H. (1982). Confidence Interval Procedures for the Weibull Process with Applications to
Reliability Growth. Technometrics, 24, 67-72.
[7] Ascher, H. and Feingold, H. (1984). Repairable Systems Reliability, Decker, New York.
[8] Kyparisis, J. and Singpurwalla, N. D. (1985). Bayesian Inference for the Weibull Process with
Applications to Assessing Software Reliability Growth and predicting Software Failures. Computer
Science and Statistics: Proceedings of the Sixteenth Symposium on the Interface (ed. L. Billard), 57-
64, Elsevier Science Publishers B. V. (North Holland), Amsterdam.
[9] Littlewood, B. (1984). Rationale for a Modified Duane Model. IEEE Trans. Reliability, Vol. R-33,
No. 2, pp. 157-159.
[10] Rigdon, S. E. (2002). Properties of the Duane Plot for Repairable Systems. Quality and Reliability
Engineering International, 18, 1-4.
[11] Calabria, R., Guida, M. and Pulcini, G. (1988). Some Modified Maximum Likelihood Estimators for
the Weibull Process. Reliability Engineering and System Safety, 23, 51-58.
[12] Rigdon, S. E. and Basu, A. P. (1990). Estimating the Intensity Function of a Power Law Process at the
Current Time: Time Truncated Case. Communications in Statistics Simulation and Computation, 19,
1079-1104.
[13] Yu, J. W., Tian, G. L. and Tang, M. L. (2008). Statistical inference and prediction for the Weibull
process with incomplete observations. Computational Statistics and Data Analysis 52, 1587-1603.
[14] Higgins, J. J. and Tsokos, C. P. (1981). A Quasi-Bayes Estimate of the Failure Intensity of a
Reliability Growth Model, IEEE Trans. on Reliability.
[15] Guida, M. , Calabria, R. and Pulcini, G. (1989). Bayes Inference for a Non-homogeneous Poisson
Process with Power Law Intensity. IEEE Transactions on Reliability. R-38, 603-609.
[16] Calabria, R., Guida, M. and Pulcini, G. (1992). Power Bounds for a Test of Equality of Trends in k
Independent Power Law Processes. Comm. Statist. Theory Methods, 21(11): 3275-3290.
[17] Huang, Y. and Bier, V. (1998). A Natural Conjugate Prior for the Non-Homogeneous Poisson Process
with a Power Law Intensity Function, Commun. Statist., 27(2): 525-551.
[18] Tian, G.-L., Tang, M.-L. and Yu, J.-W. (2011). Bayesian Estimation and Prediction for the
Power Law Process with Left-Truncated Data. Journal of Data Science, 445-470.
[19] Van Dyck and Verdonck, T. (2014). Precision of Power-Law NHPP Estimates for Multiple Systems with
Known Failure Rate Scaling. Reliability Engineering & System Safety, 126, 143-152.
[20] Ryan, S. E. and Porth, L. S. (2007). A Tutorial on the Piecewise Regression Approach Applied to
Bedload Transport Data. Gen. Tech. Rep. RMRS-GTR-189, Fort Collins, CO: U.S. Department of
Agriculture, Forest Service, Rocky Mountain Research Station.
[21] Sharma, K., Garg, R., Nagpal, C. K. and Garg, R. K. (2010). “Selection of Optimal Software
Reliability Growth Models Using a Distance Based Approach”, IEEE Transactions on Reliability,
Vol. 59, No. 2, pp. 266-277.
[22] Schneidewind, N. F. (1993). “Software Reliability Model with Optimal Selection of Failure
Data”, IEEE Transactions on Software Engineering, Vol. 19, No. 11, pp. 1095-1105.
[23] Musa, J. D. (1979). Software Reliability Data. Report Available from Data Analysis Center for
software, Rome Air Development Center, Rome, New York, USA.
[24] Jeske, D. R. and Qureshi, M. (2000). Estimating the Failure Rate of Evolving Software, Proceedings
of the 11th International Symposium on Software Reliability Engineering (San Jose), pp. 52-61.
[25] Jelinski, Z. and Moranda, P. B. (1972). Software Reliability Research. Statistical Computer
Performance Evaluation. Academic Press, New York, pp. 465-484.
[26] Tohma, Y., Jacoby, R., Murata, Y. and Yamamoto, M. (1989). “Hyper-Geometric Distribution Model
to Estimate the Number of Residual Software Faults,” Proc. COMPSAC-89, Orlando, pp. 610-617.
Authors
Lutfiah Ismail Al turk is presently Assistant Professor of Mathematical Statistics in the Statistics Department
at the Faculty of Sciences, King AbdulAziz University, Saudi Arabia. She obtained her B.Sc.
degree in Statistics and Computer Science from the Faculty of Sciences, King AbdulAziz University, in 1993,
and her M.Sc. degree in Mathematical Statistics from the Statistics Department, Faculty of Sciences, King
AbdulAziz University, in 1999. She received her Ph.D. in Mathematical Statistics from the University of Surrey,
UK, in 2007.