BIO-INSPIRED MODELLING OF SOFTWARE VERIFICATION BY MODIFIED MORAN PROCESSES (IJCSEA Journal)
A new approach for the control and prediction of verification activities for large safety-relevant software systems will be presented in this paper. The model is applied at a macroscopic system level and is based on so-called Moran processes, which originate from mathematical biology and allow for the description of phenomena such as genetic drift. Besides the theoretical foundations of this novel approach, its application to a real-world example from the medical engineering domain will be discussed.
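The Moran process underlying this model can be illustrated with a minimal simulation. This is a generic two-type Moran process, not the authors' modified variant; the population size, fitness values, and mutant count are assumptions for illustration only:

```python
import random

def moran_step(population, fitness):
    """One Moran step: choose a reproducer with probability proportional
    to fitness, then replace a uniformly chosen individual by its copy."""
    weights = [fitness[ind] for ind in population]
    reproducer = random.choices(population, weights=weights, k=1)[0]
    population[random.randrange(len(population))] = reproducer

def simulate(n=50, mutants=5, seed=1):
    """Run until one type takes over the whole population; return that type."""
    random.seed(seed)
    fitness = {"A": 1.1, "B": 1.0}   # assumed slight advantage for type A
    population = ["A"] * mutants + ["B"] * (n - mutants)
    while len(set(population)) > 1:
        moran_step(population, fitness)
    return population[0]

# Fixation is stochastic: even the fitter type A often goes extinct,
# which is exactly the genetic-drift behaviour the abstract refers to
outcomes = [simulate(seed=s) for s in range(20)]
print(outcomes.count("A"), "of 20 runs ended with type A fixed")
```

The process always absorbs at fixation or extinction, which is what makes it attractive for modelling the eventual "fixation" of verified behaviour.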
Software Testing Outline: Performances and Measurements (ijtsrd)
The process of executing a program or system with the goal of finding bugs is called software testing. It is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Testing is an essential part of software development and is typically carried out at every stage of the software development cycle. Classically, more than fifty-two percent of the development period is spent on testing. Metrics are gaining significance and acceptance in commercial sectors as organizations grow, mature, and endeavour to improve enterprise values. This study discusses software testing methods as well as measurements. Indu Maurya "Software Testing Outline: Performances and Measurements" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-2, February 2021, URL: https://www.ijtsrd.com/papers/ijtsrd38550.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/38550/software-testing-outline-performances-and-measurements/indu-maurya
Review Paper on Recovery of Data during Software Fault (AM Publications)
In this paper we discuss different types of techniques that are used for recovery of data during software faults. The major objective of this paper is to specify the recovery techniques that are used during software and hardware faults.
Minimal Testcase Generation for Object-Oriented Software with State Charts (ijseajournal)
Today statecharts are a de facto standard in industry for modeling system behavior. Test data generation is one of the key issues in software testing. This paper proposes a reduction approach to test data generation for state-based software testing. First, a state transition graph is derived from the statechart diagram. Then all the required information is extracted from the statechart diagram and test cases are generated. Lastly, the set of test cases is minimized by calculating the node coverage for each test case; it is also determined which test cases are covered by other test cases. The advantage of our test generation technique is that it optimizes test coverage while minimizing time and cost. The present test data generation scheme generates test cases which satisfy the transition path coverage, path coverage and action coverage criteria. A case study on a Railway Ticket Vending Machine (RTVM) is presented to illustrate our approach.
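The node-coverage minimization step can be sketched as a greedy reduction. The state names and test paths for a hypothetical ticket-machine statechart are illustrative; the paper's exact procedure may differ:

```python
def minimize_test_cases(test_cases):
    """Greedily keep test cases in decreasing order of node coverage,
    dropping any whose nodes are already covered by the kept ones."""
    kept, covered = [], set()
    for name, nodes in sorted(test_cases.items(),
                              key=lambda kv: len(kv[1]), reverse=True):
        if not set(nodes) <= covered:        # contributes at least one new node
            kept.append(name)
            covered |= set(nodes)
    return kept

# Hypothetical test paths (visited states) through a ticket-machine statechart
tests = {
    "t1": ["Idle", "Selecting", "Paying", "Dispensing"],
    "t2": ["Idle", "Selecting"],             # covered entirely by t1
    "t3": ["Idle", "Selecting", "Cancelled"],
}
print(minimize_test_cases(tests))  # ['t1', 't3']
```

Here t2 is dropped because its node coverage is subsumed by t1, which is the "covered by other test cases" determination the abstract describes.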
With the rise of software systems ranging from personal assistants to the nation's critical facilities, software defects become more pressing concerns, as they can cost millions of dollars as well as impact human lives. Yet, at the breakneck pace of rapid software development settings (like the DevOps paradigm), today's Quality Assurance (QA) practices are still time-consuming. Continuous Analytics for Software Quality (i.e., defect prediction models) can help development teams prioritize their QA resources and chart better quality improvement plans to avoid the past pitfalls that lead to future software defects. Because specialists are needed to design and configure a large number of experimental components (e.g., data quality, data preprocessing, classification techniques, interpretation techniques), a set of practical guidelines for developing accurate and interpretable defect models has not yet been well developed.
The ultimate goal of my research is to (1) provide practical guidelines on how to develop accurate and interpretable defect models for non-specialists; (2) develop an intelligible defect model that offers suggestions on how to improve both software quality and processes; and (3) integrate defect models into the real-world practice of rapid development cycles like CI/CD settings. My research project is expected to provide significant benefits, including the reduction of software defects and operating costs, while accelerating development productivity for building software systems in many of Australia's critical domains such as Smart Cities and e-Health.
A metrics suite for variable categorization to support program invariants (IJCSEA Journal)
Invariants are generally implicit. Explicitly stating program invariants helps programmers to identify program properties that must be preserved while modifying the code. Existing dynamic techniques detect invariants over both relevant and irrelevant/unused variables, and thereby both relevant and irrelevant invariants in the program. The presence of irrelevant variables and irrelevant invariants affects the speed and efficiency of these techniques. Also, displaying properties about irrelevant variables and irrelevant invariants distracts the user from concentrating on the properties of relevant variables. To overcome these deficiencies, only relevant variables are considered and irrelevant variables are ignored. Further, relevant variables are categorized as design variables and non-design variables; for this purpose a metrics suite is proposed. These metrics are validated against Weyuker's principles and applied to the RFV and JLex open source software. Similarly, relevant invariants are categorized as design invariants, non-design invariants and hybrid invariants; for this purpose a set of rules is proposed. This entire process enormously improves the speed and efficiency of dynamic invariant detection techniques.
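The relevance filtering described above sits on top of classic dynamic invariant detection, which can be sketched as testing candidate properties against observed variable traces. The traces and candidate invariants below are illustrative; the paper's metrics suite and categorization rules are not reproduced here:

```python
def surviving_invariants(traces, candidates):
    """Keep only candidate invariants that hold on every observed state."""
    return [name for name, pred in candidates.items()
            if all(pred(t) for t in traces)]

# Observed program states (hypothetical): values of i and n inside a loop
traces = [{"i": 0, "n": 10}, {"i": 5, "n": 10}, {"i": 9, "n": 10}]

candidates = {
    "i >= 0":  lambda t: t["i"] >= 0,
    "i < n":   lambda t: t["i"] < t["n"],
    "n == 10": lambda t: t["n"] == 10,
    "i == 0":  lambda t: t["i"] == 0,   # falsified by the later states
}
print(surviving_invariants(traces, candidates))
```

Restricting `traces` to relevant variables before this check is what shrinks the candidate space and yields the speed-up the abstract claims.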
Test Case Optimization and Redundancy Reduction Using GA and Neural Networks (IJECEIAES)
More than 50% of software development effort is spent in the testing phase of a typical software development project. Test case design as well as execution consume a lot of time, so automated generation of test cases is highly desirable. Here a novel testing methodology is presented to test object-oriented software based on UML state chart diagrams. In this approach, a function minimization technique is applied to generate test cases automatically from UML state chart diagrams. Software testing forms an integral part of the software development life cycle. Since the objective of testing is to ensure the conformity of an application to its specification, a test "oracle" is needed to determine whether a given test case exposes a fault or not. An automated oracle to support the activities of human testers can reduce the actual cost of the testing process and the related maintenance costs. In this paper, a new concept is presented that uses a UML state chart diagram and tables for test case generation, with an artificial neural network as an optimization tool for reducing the redundancy in the test cases generated using the genetic algorithm. A neural network is trained by the backpropagation algorithm on a set of test cases applied to the original version of the system.
Configuration Navigation Analysis Model for Regression Test Case Prioritization (ijsrd.com)
Regression testing has been receiving increasing attention. Numerous regression testing strategies have been proposed; most of them take into account various metrics such as cost as well as the ability to find faults quickly, thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed, which considers all stakeholders and various testing aspects while prioritizing regression test cases.
In today's increasingly digitalised world, software defects are enormously expensive. In 2018, the Consortium for IT Software Quality reported that software defects cost the global economy $2.84 trillion and affected more than 4 billion people. Software defects cost Australian businesses an estimated A$29 billion per year. Thus, failure to eliminate defects in safety-critical systems could result in serious injury to people, threats to life, death, and disasters. Traditionally, software quality assurance activities like testing and code review are widely adopted to discover software defects in a software product. However, ultra-large-scale systems, such as Google's, can consist of more than two billion lines of code, so exhaustively reviewing and testing every single line of code isn't feasible with limited time and resources. This project aims to create technologies that enable software engineers to produce the highest quality software systems at the lowest operational costs. To achieve this, the project will build an end-to-end explainable AI platform to (1) understand the nature of critical defects; (2) predict and locate defects; (3) explain and visualise the characteristics of defects; (4) suggest potential patches to automatically fix defects; and (5) integrate the platform as a GitHub bot plugin.
Verification of the protection services in antivirus systems by using nusmv m... (ijfcstjournal)
In this paper, a model of the protection services in an antivirus system is proposed. The antivirus system's behavior is separated into preventive and control behaviors. We extract the properties that are expected from the model of the antivirus system from its control behavior, in the form of CTL and LTL temporal logic formulas. To implement the behavior models of the antivirus system, the ArgoUML tool and the NuSMV model checker are employed. The results show that the approach can detect fairness, reachability and deadlock freedom, and that some properties of the proposed model can be verified using the NuSMV model checker.
With the rise of the Mining Software Repositories (MSR) field, defect datasets extracted from software repositories play a foundational role in many empirical studies related to software quality. At the core of defect data preparation is the identification of post-release defects. Prior studies leverage many heuristics (e.g., keywords and issue IDs) to identify post-release defects. However, such a heuristic approach is based on several assumptions, which pose common threats to the validity of many studies. In this paper, we set out to investigate the nature of the difference between defect datasets generated by the heuristic approach and by the realistic approach, which leverages the earliest affected release that is realistically estimated by a software development team for a given defect. In addition, we investigate the impact of defect identification approaches on the predictive accuracy and the ranking of defective modules produced by defect models. Through a case study of defect datasets of 32 releases, we find that the heuristic approach has a large impact on both defect count datasets and binary defect datasets. Surprisingly, we find that the heuristic approach has a minimal impact on defect count models, suggesting that future work should not be too concerned about defect count models constructed using heuristic defect datasets. On the other hand, using defect datasets generated by the realistic approach leads to an improvement in the predictive accuracy of defect classification models.
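The keyword/issue-ID heuristics mentioned above can be sketched as a commit-message scan. The pattern and the sample messages are illustrative, not the exact heuristics the studies used:

```python
import re

# Hypothetical heuristic: fix-related keywords or an issue ID like "#1234"
FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect)\b|#\d+", re.IGNORECASE)

def is_defect_fixing(commit_message):
    """Heuristically flag a commit as defect-fixing by keyword / issue ID."""
    return bool(FIX_PATTERN.search(commit_message))

commits = [
    "Fixes #1234: null pointer in parser",   # keyword and issue ID
    "Refactor logging module",               # no match
    "bug in boundary check corrected",       # keyword only
]
print([is_defect_fixing(m) for m in commits])  # [True, False, True]
```

The assumptions the paper questions are visible even in this sketch: a refactoring commit that mentions "fix" would be mislabeled, and a genuine fix with a terse message would be missed.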
Comparative Performance Analysis of Machine Learning Techniques for Software ... (csandit)
Machine learning techniques can be used to analyse data from different perspectives and enable developers to retrieve useful information. Machine learning techniques are proven to be useful in terms of software bug prediction. In this paper, a comparative performance analysis of different machine learning techniques is explored for software bug prediction on publicly available data sets. Results showed that most of the machine learning methods performed well on software bug datasets.
Practical Guidelines to Improve Defect Prediction Model – A Review (inventionjournals)
Defect prediction models are used to pinpoint risky software modules and understand past pitfalls that lead to defective modules. The predictions and insights derived from defect prediction models may not be accurate and reliable if researchers do not consider the impact of the experimental components (e.g., datasets, metrics, and classifiers) of defect prediction modeling. Therefore, a lack of awareness and practical guidelines from previous research can lead to invalid predictions and unreliable insights. Through case studies of systems that span both proprietary and open-source domains, we find that (1) noise in defect datasets; (2) parameter settings of classification techniques; and (3) model validation techniques have a large impact on the predictions and insights of defect prediction models, suggesting that researchers should carefully select experimental components in order to produce more accurate and reliable defect prediction models.
Machine learning approaches are good at solving problems for which little information is available. In most cases, software domain problems can be characterized as a learning process that depends on various circumstances and changes accordingly. A predictive model is constructed using machine learning approaches to classify software modules into defective and non-defective ones. Machine learning techniques help developers to retrieve useful information after the classification and enable them to analyse data from different perspectives. Machine learning techniques are proven to be useful in terms of software bug prediction. This study used publicly available data sets of software modules and provides a comparative performance analysis of different machine learning techniques for software bug prediction. Results showed that most of the machine learning methods performed well on software bug datasets.
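A comparison of this kind can be sketched at toy scale; the synthetic module metrics and the two simple classifiers below (a majority-class baseline and a hand-rolled k-nearest-neighbour) stand in for the public datasets and techniques actually benchmarked:

```python
import random

def knn_predict(train, x, k=3):
    """k-nearest-neighbour vote over (features, label) training pairs."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(train, key=lambda fx: dist(fx[0], x))[:k]
    votes = sum(label for _, label in nearest)
    return int(votes * 2 > k)           # majority of the k neighbours

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

random.seed(0)
# Synthetic module metrics: (lines-of-code score, complexity score);
# a module is labelled defective when both are high (loc + cx > 1.2)
data = [((loc, cx), int(loc + cx > 1.2))
        for loc, cx in ((random.random(), random.random())
                        for _ in range(200))]
train, holdout = data[:150], data[50:]

baseline = lambda x: 0                  # always predict "non-defective"
knn = lambda x: knn_predict(train, x)
print("baseline accuracy:", accuracy(baseline, holdout))
print("3-NN accuracy:    ", accuracy(knn, holdout))
```

Comparing each technique against a trivial baseline on a common held-out split is the core of the performance analysis the abstract describes.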
A Review on Software Fault Detection and Prevention Mechanism in Software Dev... (iosrjce)
A Review on Parameter Estimation Techniques of Software Reliability Growth Mo... (Editor IJCATR)
Software reliability is considered a quantifiable metric, defined as the probability of software operating without failure for a specified period of time in a specific environment. Various software reliability growth models have been proposed to predict the reliability of software. These models help vendors to predict the behaviour of the software before shipment. The reliability is predicted by estimating the parameters of the software reliability growth models. However, the model parameters generally stand in nonlinear relationships, which creates many problems in finding the optimal parameters using traditional techniques like Maximum Likelihood and Least Squares Estimation. Various stochastic search algorithms have been introduced which have made the task of parameter estimation more reliable and computationally easier. Parameter estimation of NHPP-based reliability models, using MLE and using an evolutionary search algorithm called Particle Swarm Optimization, is explored in the paper.
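Such an evolutionary search can be sketched for the classic Goel-Okumoto NHPP model, whose mean value function is m(t) = a(1 - exp(-bt)). The synthetic failure data, PSO coefficients, and search bounds below are illustrative assumptions:

```python
import math
import random

def mvf(a, b, t):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def sse(params, data):
    """Sum of squared errors between the model and observed failure counts."""
    a, b = params
    return sum((mvf(a, b, t) - n) ** 2 for t, n in data)

def pso(data, swarm=30, iters=200, seed=42):
    """Minimal particle swarm searching for (a, b) that minimise the SSE."""
    random.seed(seed)
    pos = [[random.uniform(1, 200), random.uniform(0.001, 1)]
           for _ in range(swarm)]
    vel = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: sse(p, data))[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i][0] = min(max(pos[i][0], 0.1), 1000.0)  # keep a in bounds
            pos[i][1] = min(max(pos[i][1], 1e-4), 5.0)    # keep b in bounds
            if sse(pos[i], data) < sse(pbest[i], data):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=lambda p: sse(p, data))[:]
    return gbest

# Hypothetical cumulative failure counts generated from a = 100, b = 0.05
data = [(t, mvf(100, 0.05, t)) for t in range(1, 20)]
a, b = pso(data)
print(f"estimated a = {a:.1f}, b = {b:.3f}")
```

Because the SSE surface is nonlinear in (a, b), a population-based search like this avoids the derivative and starting-point problems that the abstract attributes to MLE and least squares.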
A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ... (ijccmsjournal)
Monte-Carlo simulation is widely used for distributed embedded systems in our present era. In this research work, we have put an emphasis on the reliability assessment of a distributed embedded system through Monte-Carlo simulation. We have performed this assessment on random data representing input voltages ranging from 0 volts to 12 volts; a number of trials have been executed on these data to check the average-case behavior of a distributed real-time embedded system. From the experimental results, a saturation point has been reached in the time behavior, which shows the average-case behavior of the concerned distributed embedded system.
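The described assessment can be sketched as follows; only the 0-12 V random inputs and the averaging over trials come from the abstract, while the system's response function is a hypothetical stand-in:

```python
import random
import statistics

def system_response(voltage):
    """Hypothetical stand-in for the embedded system's measured behaviour:
    response time (ms) grows with the input voltage, plus sensor noise."""
    return 2.0 + 0.5 * voltage + random.gauss(0.0, 0.1)

def monte_carlo_average(trials, seed=7):
    """Average the response over `trials` random inputs in the 0-12 V range."""
    random.seed(seed)
    samples = [system_response(random.uniform(0.0, 12.0))
               for _ in range(trials)]
    return statistics.mean(samples)

# The estimate settles (saturates) as the number of trials grows,
# mirroring the saturation point reported in the abstract
for n in (10, 100, 10_000):
    print(f"{n:>6} trials -> average response {monte_carlo_average(n):.3f} ms")
```

As the trial count increases, the running average converges toward the true mean response over the 0-12 V input range, which is what "average-case behavior" measures here.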
Project risk management is integral to business survival. This research paper focuses on determining project risk factors using a genetic algorithm and fuzzy logic, based on the demerits of conventional approaches. The genetic algorithm helps optimise the parameter data items while the fuzzy logic handles imprecision. The Unified Modelling Language was utilized for modelling the software system, depicting clearly the interaction between the various components and the dynamic aspects of the system. This paper demonstrates the practical application of metric-based soft computing techniques in the health sector in determining patient satisfaction.
Assessing Software Reliability Using SPC – An Order Statistics Approach (IJCSEA Journal)
There are many software reliability models that are based on the times of occurrence of errors in the debugging of software. It is shown that it is possible to do asymptotic likelihood inference for software reliability models based on order statistics or Non-Homogeneous Poisson Processes (NHPP), with asymptotic confidence levels for interval estimates of parameters. In particular, interval estimates from these models are obtained for the conditional failure rate of the software, given the data from the debugging process. The data can be grouped or ungrouped. For someone making a decision about when to market software, the conditional failure rate is an important parameter. Order statistics are used in a wide variety of practical situations; their use in characterization problems, detection of outliers, linear estimation, the study of system reliability, life-testing, survival analysis, data compression and many other fields can be seen from the many books on the subject. Statistical Process Control (SPC) can monitor the forecasting of software failures and thereby contribute significantly to the improvement of software reliability. Control charts are widely used for software process control in the software industry. In this paper we propose a control mechanism based on order statistics of the cumulative quantity between observations of time-domain failure data, using the mean value function of the Half Logistic Distribution (HLD) based on NHPP.
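A simplified version of such a control mechanism can be sketched directly from the half-logistic CDF, F(t) = (1 - exp(-t/theta)) / (1 + exp(-t/theta)), inverting it at the conventional 0.00135 and 0.99865 probabilities to obtain control limits. This skips the full NHPP mean-value construction of the paper; theta and the observations are illustrative:

```python
import math

def hld_cdf(t, theta):
    """Half-logistic CDF: F(t) = (1 - exp(-t/theta)) / (1 + exp(-t/theta))."""
    x = math.exp(-t / theta)
    return (1.0 - x) / (1.0 + x)

def hld_quantile(p, theta):
    """Inverse CDF, from solving F(t) = p: t = theta * ln((1 + p) / (1 - p))."""
    return theta * math.log((1.0 + p) / (1.0 - p))

theta = 20.0                              # illustrative scale parameter
lcl = hld_quantile(0.00135, theta)        # lower control limit
ucl = hld_quantile(0.99865, theta)        # upper control limit

# Flag inter-failure observations falling outside the control limits
for t in (1.5, 25.0, 160.0):              # hypothetical failure-time data
    status = "in control" if lcl <= t <= ucl else "out of control"
    print(f"t = {t:6.1f} -> {status}")
```

Points outside the limits signal an assignable cause in the failure process; plotting successive observations against these limits gives the control chart the abstract describes.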
Test Case Optimization and Redundancy Reduction Using GA and Neural Networks IJECEIAES
More than 50% of software development effort is spent in testing phase in a typical software development project. Test case design as well as execution consume a lot of time. Hence, automated generation of test cases is highly required. Here a novel testing methodology is being presented to test objectoriented software based on UML state chart diagrams. In this approach, function minimization technique is being applied and generate test cases automatically from UML state chart diagrams. Software testing forms an integral part of the software development life cycle. Since the objective of testing is to ensure the conformity of an application to its specification, a test “oracle” is needed to determine whether a given test case exposes a fault or not. An automated oracle to support the activities of human testers can reduce the actual cost of the testing process and the related maintenance costs. In this paper, a new concept is being presented using an UML state chart diagram and tables for the test case generation, artificial neural network as an optimization tool for reducing the redundancy in the test case generated using the genetic algorithm. A neural network is trained by the backpropagation algorithm on a set of test cases applied to the original version of the system.
Configuration Navigation Analysis Model for Regression Test Case Prioritizationijsrd.com
Regression testing has been receiving increasing attention nowadays. Numerous regression testing strategies have been proposed. Most of them take into account various metrics like cost as well as the ability to find faults quickly thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed which tries to consider all stakeholders and various testing aspects while prioritizing regression test cases.
In today's increasingly digitalised world, software defects are enormously expensive. In 2018, the Consortium for IT Software Quality reported that software defects cost the global economy $2.84 trillion dollars and affected more than 4 billion people. The average annual cost of software defects on Australian businesses is A$29 billion per year. Thus, failure to eliminate defects in safety-critical systems could result in serious injury to people, threats to life, death, and disasters. Traditionally, software quality assurance activities like testing and code review are widely adopted to discover software defects in a software product. However, ultra-large-scale systems, such as, Google, can consist of more than two billion lines of code, so exhaustively reviewing and testing every single line of code isn't feasible with limited time and resources. This project aims to create technologies that enable software engineers to produce the highest quality software systems with the lowest operational costs. To achieve this, this project will invent an end-to-end explainable AI platform to (1) understand the nature of critical defects; (2) predict and locate defects; (3) explain and visualise the characteristics of defects; (4) suggest potential patches to automatically fix defects; (5) integrate such platform as a GitHub bot plugin.
Verification of the protection services in antivirus systems by using nusmv m...ijfcstjournal
In this paper, a model of protection services in the antivirus system is proposed. The antivirus system
behavior separate in to preventive and control behaviors. We extract the properties which are expected
from the model of antivirus system approach from control behavior in the form of CTL and LTL temporal
logic formulas. To implement the behavior models of antivirus system approach, the ArgoUML tool and the
NuSMV model checker are employed. The results show that the antivirus system approach can detects
fairness, reachability, deadlock free and verify some properties of the proposed model verified by using
NuSMV model checker.
With the rise of the Mining Software Repositories (MSR) field, defect datasets extracted from software repositories play a foundational role in many empirical studies related to software quality. At the core of defect data preparation is the identification of post-release defects. Prior studies leverage many heuristics (e.g., keywords and issue IDs) to identify post-release defects. However, such the heuristic approach is based on several assumptions, which pose common threats to the validity of many studies. In this paper, we set out to investigate the nature of the difference of defect datasets generated by the heuristic approach and the realistic approach that leverages the earliest affected release that is realistically estimated by a software development team for a given defect. In addition, we investigate the impact of defect identification approaches on the predictive accuracy and the ranking of defective modules that are produced by defect models. Through a case study of defect datasets of 32 releases, we find that that the heuristic approach has a large impact on both defect count datasets and binary defect datasets. Surprisingly, we find that the heuristic approach has a minimal impact on defect count models, suggesting that future work should not be too concerned about defect count models that are constructed using heuristic defect datasets. On the other hand, using defect datasets generated by the realistic approach lead to an improvement in the predictive accuracy of defect classification models.
Comparative Performance Analysis of Machine Learning Techniques for Software ...csandit
Machine learning techniques can be used to analyse data from different perspectives and enable
developers to retrieve useful information. Machine learning techniques are proven to be useful
in terms of software bug prediction. In this paper, a comparative performance analysis of
different machine learning techniques is explored for software bug prediction on publicly
available data sets. The results showed that most of the machine learning methods performed
well on software bug datasets.
Practical Guidelines to Improve Defect Prediction Model – A Reviewinventionjournals
Defect prediction models are used to pinpoint risky software modules and understand past pitfalls that lead to defective modules. The predictions and insights derived from defect prediction models may not be accurate and reliable if researchers do not consider the impact of the experimental components (e.g., datasets, metrics, and classifiers) of defect prediction modeling. A lack of awareness and practical guidelines from previous research can therefore lead to invalid predictions and unreliable insights. Through case studies of systems that span both proprietary and open-source domains, we find that (1) noise in defect datasets, (2) parameter settings of classification techniques, and (3) model validation techniques have a large impact on the predictions and insights of defect prediction models, suggesting that researchers should carefully select experimental components in order to produce more accurate and reliable defect prediction models.
Machine learning approaches are good at solving problems for which little information is available. In most
cases, software domain problems can be characterized as a learning process that depends on various
circumstances and changes accordingly. A predictive model is constructed using machine learning
approaches to classify software modules into defective and non-defective ones. Machine learning techniques
help developers retrieve useful information after classification and enable them to analyse data from
different perspectives. Machine learning techniques have proven useful for software bug prediction. This
study uses publicly available data sets of software modules and provides a comparative performance
analysis of different machine learning techniques for software bug prediction. The results showed that most
of the machine learning methods performed well on software bug datasets.
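As a minimal illustration of such a comparison, the sketch below trains two very simple classifiers (nearest centroid and 1-nearest-neighbour) on toy module metrics. The metrics, data, and classifier choices are illustrative assumptions, not the techniques or datasets evaluated in the study:

```python
import math
from collections import defaultdict

def train_centroid(X, y):
    # Nearest-centroid: represent each class by the mean of its feature vectors.
    groups = defaultdict(list)
    for x, label in zip(X, y):
        groups[label].append(x)
    return {label: tuple(sum(col) / len(rows) for col in zip(*rows))
            for label, rows in groups.items()}

def predict_centroid(model, x):
    # Predict the class whose centroid is closest to x.
    return min(model, key=lambda label: math.dist(model[label], x))

def predict_1nn(X, y, x):
    # 1-nearest-neighbour: label of the closest training point.
    return y[min(range(len(X)), key=lambda i: math.dist(X[i], x))]

# Toy module metrics (hypothetical: lines of code, cyclomatic complexity); 1 = defective
X = [(100, 3), (120, 4), (900, 25), (850, 22), (150, 5), (950, 30)]
y = [0, 0, 1, 1, 0, 1]
test_X, test_y = [(110, 4), (880, 24)], [0, 1]

model = train_centroid(X, y)
acc_centroid = sum(predict_centroid(model, x) == t
                   for x, t in zip(test_X, test_y)) / len(test_y)
acc_1nn = sum(predict_1nn(X, y, x) == t
              for x, t in zip(test_X, test_y)) / len(test_y)
```

A real comparative study would of course use established learners, cross-validation, and public defect datasets; the point here is only the shape of the evaluation loop.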
A Review on Software Fault Detection and Prevention Mechanism in Software Dev...iosrjce
A Review on Parameter Estimation Techniques of Software Reliability Growth Mo...Editor IJCATR
Software reliability is considered a quantifiable metric, defined as the probability of software operating
without failure for a specified period of time in a specific environment. Various software reliability growth models have been proposed
to predict the reliability of software. These models help vendors predict the behaviour of the software before shipment. The
reliability is predicted by estimating the parameters of the software reliability growth models. But the model parameters generally
stand in nonlinear relationships, which creates many problems in finding the optimal parameters using traditional techniques like Maximum
Likelihood and Least Squares Estimation. Various stochastic search algorithms have been introduced which have made the task of
parameter estimation more reliable and computationally easier. Parameter estimation of NHPP-based reliability models, using MLE
and using an evolutionary search algorithm called Particle Swarm Optimization, is explored in this paper.
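A minimal sketch of this idea follows, assuming the Goel-Okumoto mean value function m(t) = a(1 − e^(−bt)) as the NHPP model and a bare-bones particle swarm optimizer minimizing the grouped-data negative log-likelihood. The failure data, search bounds, and swarm settings are illustrative assumptions, not those of the paper:

```python
import math
import random

def m(t, a, b):
    # Goel-Okumoto mean value function: expected cumulative failures by time t.
    return a * (1.0 - math.exp(-b * t))

def neg_log_likelihood(params, times, cum_failures):
    # Grouped-data NHPP likelihood: failures in (t_{i-1}, t_i] are Poisson
    # with mean m(t_i) - m(t_{i-1}).
    a, b = params
    if a <= 0.0 or b <= 0.0:
        return float("inf")
    nll, prev_t, prev_n = 0.0, 0.0, 0
    for t, n in zip(times, cum_failures):
        mu = m(t, a, b) - m(prev_t, a, b)
        k = n - prev_n
        if mu <= 0.0:
            return float("inf")
        nll += mu - k * math.log(mu) + math.lgamma(k + 1)
        prev_t, prev_n = t, n
    return nll

def pso(f, bounds, n_particles=30, iters=300, seed=7):
    # Minimal particle swarm optimizer: inertia + cognitive + social terms.
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Illustrative failure data consistent with a = 100, b = 0.3 (hypothetical).
times = list(range(1, 11))
cum_failures = [round(m(t, 100.0, 0.3)) for t in times]
f = lambda p: neg_log_likelihood(p, times, cum_failures)
(a_hat, b_hat), _ = pso(f, bounds=[(1.0, 500.0), (0.01, 2.0)])
```

Because the toy data were generated from a = 100, b = 0.3, the swarm's best position should land near those values, which is precisely the "computationally easier" parameter estimation the abstract describes.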
A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ...ijccmsjournal
Monte-Carlo simulation is widely used for distributed embedded systems today. In this
research work, we put an emphasis on the reliability assessment of a distributed embedded system
through Monte-Carlo simulation. We performed this assessment on random data representing input
voltages ranging from 0 V to 12 V; a number of trials were executed on those data to
check the average-case behavior of a distributed real-time embedded system. From the experimental results, a saturation point is reached in the time behavior, which shows the average-case behavior of the concerned distributed embedded system.
Project risk management is integral to business survival. This research paper focuses on determining project risk factors using a genetic algorithm and fuzzy logic, motivated by the demerits of conventional approaches. The genetic algorithm helps optimise the parameter data items, while fuzzy logic handles imprecision. The Unified Modelling Language was utilized for modelling the software system, depicting clearly the interaction between the various components and the dynamic aspects of the system. This paper demonstrates the practical application of metric-based soft computing techniques in the health sector for determining patient satisfaction.
Assessing Software Reliability Using SPC – An Order Statistics Approach IJCSEA Journal
There are many software reliability models that are based on the times of occurrences of errors in the debugging of software. It is shown that it is possible to do asymptotic likelihood inference for software reliability models based on order statistics or Non-Homogeneous Poisson Processes (NHPP), with asymptotic confidence levels for interval estimates of parameters. In particular, interval estimates from these models are obtained for the conditional failure rate of the software, given the data from the debugging process. The data can be grouped or ungrouped. For someone making a decision about when to market software, the conditional failure rate is an important parameter. Order statistics are used in a wide variety of practical situations. Their use in characterization problems, detection of outliers, linear estimation, study of system reliability, life-testing, survival analysis, data compression and many other fields can be seen in many books. Statistical Process Control (SPC) can monitor the forecasting of software failures and thereby contribute significantly to the improvement of software reliability. Control charts are widely used for software process control in the software industry. In this paper we propose a control mechanism based on order statistics of the cumulative quantity between observations of time-domain
failure data, using the mean value function of the Half-Logistic Distribution (HLD) based on an NHPP.
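As a sketch of how such control limits might be derived, the code below assumes a half-logistic-style mean value function m(t) = a(1 − e^(−bt))/(1 + e^(−bt)) and inverts it at the conventional SPC probabilities 0.00135, 0.5, and 0.99865. Both the functional form and the parameter values are assumptions for illustration, not necessarily the exact model of the paper:

```python
import math

def hld_mvf(t, a, b):
    # Assumed HLD-based NHPP mean value function:
    # m(t) = a * (1 - e^{-bt}) / (1 + e^{-bt}).
    e = math.exp(-b * t)
    return a * (1.0 - e) / (1.0 + e)

def control_limit(p, b):
    # Invert m(t)/a = p for t:  e^{-bt} = (1-p)/(1+p)
    # => t = -(1/b) * ln((1-p)/(1+p)).
    return -math.log((1.0 - p) / (1.0 + p)) / b

a, b = 100.0, 0.05                 # illustrative parameter values
lcl = control_limit(0.00135, b)    # lower control limit (3-sigma analogue)
cl  = control_limit(0.5, b)        # centre line
ucl = control_limit(0.99865, b)    # upper control limit
# Observations falling outside (lcl, ucl) would signal an out-of-control
# failure process on the control chart.
```

The probabilities 0.00135 and 0.99865 are the standard two-sided 3-sigma tail probabilities used on Shewhart-style control charts, translated here into times at which given fractions of the expected failures have occurred.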
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...IJCSES Journal
With the sharp rise in software dependability requirements and failure costs, high quality has been in great demand. However, guaranteeing high quality in software systems, which have grown in size and complexity under the constraints imposed on their development, has become an increasingly difficult, time- and resource-consuming activity. Consequently, it becomes imperative to deliver software that has no serious faults. In
this context, object-oriented (OO) products, being the de facto standard of software development, can with their unique features contain faults that are hard to find, or make it hard to pinpoint the impacts of changes. The earlier faults are identified, found and fixed, the lower the costs and the higher the quality. To assess product quality, software metrics are used. Many OO metrics have been proposed and developed. Furthermore,
many empirical studies have validated the relationship between metrics and class fault proneness (FP). The challenge is to determine which metrics are related to class FP and what activities are performed. Therefore, this study brings together the state-of-the-art in FP prediction that utilizes CK and size metrics. We conducted a systematic literature review over relevant published empirical validation articles. The results obtained are
analysed and presented. They indicate that 29 relevant empirical studies exist, and that measures such as complexity, coupling and size are strongly related to FP.
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...IJNSA Journal
This study examines the traditional approach to software development within the United Kingdom Government and the accreditation process. Initially we look at the Waterfall methodology that has been used for several years. We discuss the pros and cons of Waterfall before moving on to the Agile Scrum methodology. Agile has been adopted by the majority of Government digital departments, including the Government Digital Services. Agile, despite its ability to achieve high rates of productivity organized in short, flexible iterations, has faced security professionals' disbelief when working within the U.K. Government. One of the major issues is that we develop in Agile but the accreditation process is conducted using Waterfall, resulting in delays to go-live dates. Taking a brief look into the accreditation process that is used within Government for I.T. systems and applications, we focus on giving the accreditor the assurance they need when developing new applications and systems. A framework has been produced by utilising the Open Web Application Security Project's (OWASP) Application Security Verification Standard (ASVS). This framework will allow security and Agile to work side by side and produce secure code.
Formal method techniques provide a suitable platform for the development of software systems.
Formal methods and formal verification are necessary to prove the correctness and improve the performance of
software systems at various levels of design and implementation. Security is an important
issue in computer systems. Since antivirus applications play a very important role in computer system
security, verifying these applications is essential and necessary. In this paper, we present four new
approaches for antivirus system behavior, and a behavioral model of the protection services in the antivirus
system is proposed. We divide the behavioral model into preventive behavior and control behavior and
then formalize these behaviors. Finally, using some definitions, we explain how these behaviors are
mapped onto each other using our new approaches.
IoT Device Intelligence & Real Time Anomaly DetectionBraja Krishna Das
-- Real Time Anomaly Detection
-- IoT Device Intelligence
-- Univariate and Multivariate Anomaly Detection
-- Unsupervised Learning Classification from Anomaly Detection
Software testing effort estimation with cobb douglas function a practical app...eSAT Publishing House
Software testing effort estimation with cobb douglas function- a practical ap...eSAT Journals
Abstract: Effort estimation is one of the critical challenges in the Software Testing Life Cycle (STLC). It is the basis for the project's effort estimation, planning, scheduling and budget planning. This paper illustrates a model with the objective of depicting the accuracy and bias variation of an organization's estimates of software testing effort through the Cobb-Douglas function (CDF). The data variables selected for building the model were believed to be vital and to have a significant impact on the accuracy of estimates. Data were gathered for completed projects in the organization covering about 13 releases. All variables in this model were statistically significant at the p < 0.05 level. The Cobb-Douglas function was selected and used for software testing effort estimation. The results achieved with the CDF were compared with the estimates provided by the area expert, and the model's estimation figures are more accurate than the expert judgment. The CDF is thus one of the appropriate techniques for estimating software testing effort; the model's accuracy is 93.42%.
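A Cobb-Douglas model becomes linear after taking logarithms, so its parameters can be fitted by ordinary least squares. The one-predictor sketch below (effort as a function of a single size measure) illustrates the mechanics; the paper's multi-variable organizational model would be fitted analogously, and the data and parameter values here are illustrative assumptions:

```python
import math

def fit_cobb_douglas(sizes, efforts):
    # Log-linearise E = A * S**alpha  ->  ln E = ln A + alpha * ln S,
    # then fit slope (alpha) and intercept (ln A) by least squares.
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    A = math.exp(my - alpha * mx)
    return A, alpha

# Illustrative data generated from E = 5 * S**0.8 (hypothetical effort units).
sizes = [10, 20, 40, 80, 160]
efforts = [5 * s ** 0.8 for s in sizes]
A, alpha = fit_cobb_douglas(sizes, efforts)
# Fitted (A, alpha) recover the generating parameters (5, 0.8) exactly,
# since the toy data are noise-free.
```

In practice the regression would be run on historical release data with noise, and the fitted elasticities then used to predict testing effort for new releases.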
Formal Verification of Distributed Checkpointing Using Event-Bijcsit
The development of complex systems makes correct software development a challenging task. Due to faulty
specifications, software may contain errors. Traditional testing methods are not sufficient to verify the
correctness of such complex systems. In order to capture correct system requirements and reason
rigorously about the problems, formal methods are required. Formal methods are mathematical techniques
that provide precise specifications of problems together with their solutions and proofs of correctness. In this paper,
we formally verify the checkpointing process in a distributed database system using Event-B.
Event-B is an event-driven formal method which is used to develop formal models of distributed database
systems. In a distributed database system, the database is stored at different sites that are connected
through the network. A checkpoint is a recovery point which contains the state information about
the site. In order to recover a distributed transaction, a global checkpoint number (GCPN) is
required. The global checkpoint number decides which transactions will be included for recovery purposes. All
transactions whose timestamps are less than the global checkpoint number are marked as before-checkpoint
transactions (BCPT) and are considered for recovery. Transactions whose timestamps are
greater than the GCPN are marked as after-checkpoint transactions (ACPT) and become part of the next global
checkpoint.
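The BCPT/ACPT partition described above is a simple timestamp comparison against the GCPN; the sketch below shows it directly (the record layout and example values are illustrative, not from the Event-B model):

```python
def classify_transactions(transactions, gcpn):
    """Split transactions into before-checkpoint (BCPT) and after-checkpoint
    (ACPT) sets by comparing each timestamp with the global checkpoint
    number (GCPN)."""
    bcpt = [t for t in transactions if t["timestamp"] < gcpn]  # recovered now
    acpt = [t for t in transactions if t["timestamp"] > gcpn]  # next checkpoint
    return bcpt, acpt

txns = [{"id": "T1", "timestamp": 5},
        {"id": "T2", "timestamp": 12},
        {"id": "T3", "timestamp": 9}]
bcpt, acpt = classify_transactions(txns, gcpn=10)
# T1 and T3 precede the checkpoint and are considered for recovery;
# T2 is deferred to the next global checkpoint.
```

The Event-B development proves that this classification, expressed as events and invariants, is preserved by every step of the distributed protocol; the code here only mirrors the final classification rule.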
Proposed Algorithm for Surveillance ApplicationsEditor IJCATR
Technological systems are vulnerable to faults. In many fault situations, system operation has to be stopped to avoid
damage to machinery and humans. As a consequence, the detection and handling of faults play an increasing role in modern
technology, where many highly automated components interact in such a complex way that a fault in a single component may cause
the malfunction of the whole system. This work introduces the main ideas of fault diagnosis and fault-tolerant control in light
of various research works done in this area. It presents the Arduino technology on both the hardware and software sides. The purpose of this
paper is to propose a diagnostic algorithm based on this technology. A case study is proposed for this setting. Moreover, we explain
and discuss the results of our algorithm.
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...ijistjournal
System identification from experimental data plays a vital role in model-based controller design. Derivation of a process model from first principles is often difficult due to its complexity. The first stage in the development of any control and monitoring system is the identification and modeling of the system. Each model is developed within the context of a specific control problem; thus, the need for a general system identification framework is warranted. The proposed framework should be able to adapt to and emphasize different properties based on the control objective and the nature of the behavior of the system. System identification has therefore been a valuable tool for identifying the model of a system from input and output data for the design of the controller. The present work is concerned with the identification of transfer function models using statistical model identification, the process reaction curve method, the ARX model and a genetic algorithm, and with modeling using neural networks and fuzzy logic for interacting and non-interacting tank processes. The identification techniques and models used are prone to parameter changes and disturbances. The proposed methods are used for identifying the mathematical model and the intelligent model of interacting and non-interacting processes from real-time experimental data.
International Journal of Computer Science, Engineering and Applications (IJCSEA) Vol.5, No.3, June 2015
DOI : 10.5121/ijcsea.2015.5301
BIO-INSPIRED MODELLING OF SOFTWARE
VERIFICATION BY MODIFIED MORAN PROCESSES
Sven Söhnlein
Method Park Engineering GmbH, Wetterkreuz 19a, Erlangen, Germany
ABSTRACT
A new approach for the control and prediction of verification activities for large safety-relevant software
systems is presented in this paper. The model is applied at a macroscopic system level and is based on
so-called Moran processes, which originate from mathematical biology and allow for the description
of phenomena such as, for instance, genetic drift. Besides the theoretical foundations of this novel approach, its
application to a real-world example from the medical engineering domain is discussed.
KEYWORDS
Modelling, Simulation, Dependability, Reliability, Software Engineering
1. INTRODUCTION
The development of safety-relevant software systems usually underlies very strict regulations
prescribed by corresponding standards, like the IEC 62304 for medical device software [1], for
example. In order to provide the necessary control and prediction instruments for the required
verification activities of such applications, the use of software reliability models seems
reasonable. Here, a huge spectrum of different theoretical approaches is available in the literature
(see [2, 3, 4] for an overview). But the problems in the practical implementation of such models
in a real-world software lifecycle process are manifold:
First of all, the usually very strict (and non-verifiable) model assumptions [2] are not flexible
enough to also cover continuous integration paradigms [5] or post-development phases,
where patches or add-ons are integrated [6]. Moreover, these assumptions are usually not implied
by the relevant standards and regulations, but are frequently model-intrinsic [2]. In addition to
that, implications that come from typical management necessities in those areas are
predominantly ignored [7].
With regard to these determining factors, we propose a practical model that applies at a
macroscopic level of large systems and takes into account regulative prescriptions regarding the
lifecycle process, software architecture, as well as planning and management demands. The
introduced model is inspired by mathematical concepts that were originally applied to describe
biological processes in finite populations.
1.1. Paper Structure
The paper is organized as follows: In section 2, the relevant regulative and organisational factors
are determined, which will be used in the following to derive the theoretical basis for the model.
The approach itself is introduced in section 3. Section 4 illustrates the application of the
model to a real-world system from the medical engineering domain, followed by a conclusion in
section 5.
2. DETERMINING REGULATIVE AND ORGANISATIONAL FACTORS
In order to derive an adequate context-specific model, one has to analyse the implications that
come from the corresponding standards in the particular application domain. In case of medical
device software, the IEC 62304 [1] represents the relevant norm (where it should be stated that
similar standards exist for other safety-relevant applications, like the ISO 26262 [8] for the
automotive domain, for instance).
In the following, the key aspects carved out from the regulative and organisational prescriptions
will be highlighted and referenced in subsequent sections as a basis for the provided model.
Regulative Factors:
R1. Software Lifecycle Process: The development underlies a strict plan-driven software
lifecycle process (like the V-Model [9] or the Waterfall-Model [9]). This implies in particular that
every requirement has to be verified by at least one corresponding test case or by
another adequate verification technique [1].
R2. Software Architecture: The subdivision of the software system into interacting components
and units must be described and documented. With regard to this modularization, software units
represent the smallest atomic parts in the software architecture [1], whereas components in turn
consist of a finite number of units [1].
R3. Quality Management System: The IEC 62304 [1] prescribes a quality management system
(as defined by the ISO 13485 [10], for instance). Thus, it is required to define quality goals and
verify to which extent they are fulfilled.
Organisational Factors:
O1. Verification and Correction Phases: The typical management procedure [11] for the
verification process in the considered domain consists of a temporally subdivided organization of
verification and correction phases, each of which comprises a certain subset of the overall number of
planned test cases.
O2. Impact Analysis: In advance of every correction, an impact analysis [12] is performed in
order to reveal the number of units that will be “touched” in the subsequent correction phase.
O3. Statistical Process Control: Statistical process control [13] is performed with the intent to
derive measures (considering the progress of verification and correction activities) from past
projects with regard to the current or upcoming one.
Delimitation of Consideration:
Further, the scope of consideration will be delimited as follows:
D1. Classification of Software Units: Software units represent the smallest parts of consideration
and will be classified as ‘correct’ XOR ‘faulty’ (with no further distinction regarding the involved
code parts).
D2. Correction of Faults: Faulty software units which are corrected during a verification and
correction phase change their classification status from ‘faulty’ to ‘correct’.
D3. Insertion of Faults: The correction process is not perfect, i.e. it also has the potential to inject
new faults into the system, which is represented by a change of the classification status of a
software unit from ‘correct’ to ‘faulty’.
Taking all these aspects into account, the following relation between the relevant elements of the
verification and correction process can be established (see figure 1), where r_a (with a = 1, …, p)
denotes a requirement, t_b (with b = 1, …, q) a test case, u_c (with c = 1, …, n) a software unit
and k_d (with d = 1, …, m) a component:
Each requirement is verified by at least one test case (with regard to assumption R1),
where test cases “spot” the ‘faulty’ (or ‘correct’) units within certain components of the system
(with regard to assumptions R2 and D1). The software units to be “touched” in the subsequent
correction phase are revealed by the performed impact analysis (with regard to assumption O2).
These software units might thereby change their classification status from ‘faulty’ to ‘correct’
(which is the more probable case) but possibly also from ‘correct’ to ‘faulty’ (with regard to
assumptions D2 and D3).
Figure 1. Relation between verification and correction elements
3. MODELLING SOFTWARE VERIFICATION VIA MORAN PROCESSES
Moran processes are stochastic models that originate from mathematical biology and are used to
describe, for instance, mutations in finite populations (see [14, 15] for an introduction). In the
basic model, a finite population of size n ∈ ℕ consists of two alleles (let’s say ‘green’ and ‘red’),
which are competing for dominance. In each time step, a random individual is chosen for
reproduction and another one is chosen for death, thus ensuring a constant population size. The
“fitness” of the alleles hereby determines how likely they are to be chosen for reproduction and
therefore affects the time to fixation (i.e. the time for taking over the whole population).
In order to map this biological model to the considered software context, the discussed aspects
from section 2 are addressed as follows: The whole software system (which can be interpreted as
the DNA [16]) consists of components (DNA segments) that consist of a finite population of
units (genes [16]). Those units (genes) can be classified into two categories (alleles [16]) marked
‘correct’ (green) XOR ‘faulty’ (red). A single unit can shift its classification (allele) from ‘correct’
to ‘faulty’ or from ‘faulty’ to ‘correct’ in one time step (which represents the mutation process
[16]). This means that the whole verification and correction process can be considered as the
genetic drift [16] in the software system, where the goodness of the process is affected by the
fitness of the alleles. Table 1 shows an overview of the corresponding elements from both worlds.
Table 1. Mapping of technical and biological elements

Software World                                          | Biological World
Software System                                         | DNA
Component                                               | DNA Segment
Unit                                                    | Gene
Classification of a unit {‘correct’, ‘faulty’}          | Allele {‘green’, ‘red’}
Correction of a fault / Insertion of a fault            | Mutation
Verification and correction process                     | Genetic drift
Goodness of the verification and correction activities  | Fitness
In accordance with this mapping, the verification and correction process can be described by an
irreducible ergodic discrete-time Markov chain (DTMC [15]) X(t) with t ∈ ℕ, where X(t) denotes
the family of random variables (indexed by the discrete time t). The process underlies a finite
state space S, where

|S| = n + 1

and n ∈ ℕ represents the number of software units in the system. Every state i ∈ S (with i =
0, …, n) is hereby associated with a software system consisting of i correct (verified) software
units (and n − i faulty units).
With regard to assumption O1 (see section 2), it is assumed that in verification and correction
phase P_v (with v ∈ ℕ and v = 1, …, V), where V represents the overall number of verification
and correction phases, we reach a certain state i. Then, the required impact analysis (see
assumption O2) will reveal the number of software units that are “touched” in the next verification
and correction phase P_{v+1} and therefore implies the expected number of time steps for the
Moran process in the subsequent phase (see figure 2 for an illustration).
Figure 2. Verification and correction phases P_v in the Moran process
Therefore, the DTMC for the Moran process described above can be defined by the |S| × |S|
transition matrix P, where the entries of P are specified as

P_{i,i+1} = (φ_v · i)/(φ_v · i + n − i) · (n − i)/n   for 0 < i < n   (1)

P_{i,i−1} = (n − i)/(φ_v · i + n − i) · i/n           for 0 < i < n   (2)

P_{i,i} = 1 − P_{i,i+1} − P_{i,i−1}                    for 0 < i < n   (3)

P_{i,i} = 1                                            for i = 0 ∨ i = n   (4)

and all other entries of P are zero, which results in a tridiagonal matrix. Here, φ_v represents the
mentioned phase-specific “fitness” of the verification and correction activities and can be derived
by statistical process control techniques (see assumption O3 in section 2). A coarse
approximation of φ_v might be estimated by the fraction of successfully corrected components in
relation to the inserted faults. Note that in contrast to the original Moran process model [14], the
fitness is not fixed here, but changes in accordance with the phase of the whole verification and
correction process, which is reasonable with regard to the different preconditions in each phase.
In general, φ_v can be categorized as follows:

φ_v > 1: This is the usual and expected case, where (significantly) more faults are
detected and corrected than injected.

φ_v = 1: In this case, we have a “neutral” drift (and an unsystematic verification and
correction process).

φ_v < 1: This is the unusual and unexpected case, where more new faults are injected in
the system than detected and corrected.
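The transition probabilities (1) – (4) can be sketched directly in code. The following Python snippet is an illustrative sketch and not part of the original model implementation; the function and variable names are our own. It builds the tridiagonal matrix for a given number of units n and a phase-specific fitness φ_v:

```python
def moran_transition_matrix(n, phi):
    """Transition matrix of the modified Moran process, equations (1)-(4).

    State i = number of 'correct' software units (0 <= i <= n);
    phi = phase-specific fitness of the verification activities.
    """
    P = [[0.0] * (n + 1) for _ in range(n + 1)]
    P[0][0] = P[n][n] = 1.0                      # equation (4): boundary states
    for i in range(1, n):
        # equation (1): a 'faulty' unit is corrected (i -> i + 1)
        up = phi * i / (phi * i + n - i) * (n - i) / n
        # equation (2): a new fault is injected (i -> i - 1)
        down = (n - i) / (phi * i + n - i) * i / n
        P[i][i + 1], P[i][i - 1] = up, down
        P[i][i] = 1.0 - up - down                # equation (3)
    return P
```

For φ_v > 1 the upward probability dominates the downward one in every interior state, matching the expected case of a systematic correction process.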
The defined Moran process is initialized by the start vector v^(0) with entries

v_j^(0) = 1 for j = 1   (5)

v_j^(0) = 0 for j ≠ 1   (6)

which means that at least one correct unit is available at the beginning. Further, we denote by τ_v
(with τ_v ∈ ℕ and τ_v = 0, …, n) the number of software units to be touched in a certain phase P_v (see
the presumed impact analysis O2). Then, if the first verification and correction phase P_1
reveals that the number of software units to be touched in this phase is τ_1, then the state vector of
the Moran process at this stage is computed by

v^(1) = v^(0) · P^(τ_1)   (7)

More generally, if in phase P_v we reach a certain state i (which is associated with a software
system of i already verified software units), then the state vector for phase P_{v+1} is computed by

v^(v+1) = v^(v) · P^(τ_{v+1})   (8)

with

v_j^(v) = 1 for j = i   (9)

v_j^(v) = 0 for j ≠ i   (10)

Moreover, by

σ = max_{j=0,…,n} v_j^(v)   (11)

the most probable state ĵ^(v) at P_v can be determined with

v_{ĵ}^(v) = σ   (12)

In order to estimate if predefined reliability targets (see assumption R3) are met (in terms of the
minimum number of units that have to be correct after a certain verification and correction
phase), we denote by Pr_v(G_v) the probability that in phase P_v we have at least G_v correct
units, which can be computed by

Pr_v(G_v) = Σ_{j=G_v}^{n} v_j^(v)   (13)
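Equations (5) – (13) translate into a small simulation: propagate the start vector through τ_v matrix multiplications per phase, then read off the most probable state and the probability of meeting the target. The sketch below uses hypothetical helper names and toy phase parameters of our own choosing (it does not reproduce the paper's case study):

```python
def moran_matrix(n, phi):
    # tridiagonal transition matrix of equations (1)-(4)
    P = [[0.0] * (n + 1) for _ in range(n + 1)]
    P[0][0] = P[n][n] = 1.0
    for i in range(1, n):
        up = phi * i / (phi * i + n - i) * (n - i) / n
        down = (n - i) / (phi * i + n - i) * i / n
        P[i][i + 1], P[i][i - 1], P[i][i] = up, down, 1.0 - up - down
    return P

def run_phase(v, P, tau):
    """Equations (7)/(8): propagate the state vector tau time steps."""
    for _ in range(tau):
        v = [sum(v[i] * P[i][j] for i in range(len(v))) for j in range(len(v))]
    return v

def most_probable_state(v):
    """Equations (11)/(12): index of the maximal state probability."""
    return max(range(len(v)), key=lambda j: v[j])

def prob_at_least(v, g):
    """Equation (13): probability of at least g correct units."""
    return sum(v[g:])

# illustrative toy run: n = 20 units, three phases of (tau_v, phi_v, G_v)
n = 20
v = [0.0] * (n + 1)
v[1] = 1.0                                   # start vector, equations (5)/(6)
for tau, phi, goal in [(8, 3.0, 6), (6, 2.5, 10), (6, 2.0, 14)]:
    v = run_phase(v, moran_matrix(n, phi), tau)
    print(most_probable_state(v), round(prob_at_least(v, goal), 3))
```

Since every row of the transition matrix sums to one, the state vector remains a probability distribution after each phase, which is what makes the target probabilities in equation (13) meaningful.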
4. EXAMPLE APPLICATION
In this section, the application of the model on a real-world system (from the medical engineering
domain) will be discussed. The model was applied in order to assess the progress of the
verification and correction activities, with a specific focus on the reachability of the predefined
reliability targets. The system consisted of an overall number of n = 363 software units. The
verification and correction process was subdivided into V = 5 phases. Table 2 shows the number
of software units τ_v that were touched in each phase (estimated by the corresponding impact
analysis), the verification fitness φ_v for each phase (estimated by the application of the previously
mentioned statistical process control techniques), the predefined reliability target G_v for each
phase (as an outcome of the project and risk management activities) as well as the computed
measures according to equations (1) – (13) from section 3.
Table 2. Computed measures for the example system

v | τ_v | τ_v/n · 100% | Στ_v | Στ_v/n · 100% | φ_v   | G_v    | G_v/n · 100% | ĵ^(v) | Pr_v(G_v)
1 | 114 | 31.40        | 114  | 31.40         | 37.96 | 72.60  | 20.00        | 65    | 0.14
2 | 95  | 26.17        | 209  | 57.57         | 18.79 | 163.35 | 45.00        | 168   | 0.69
3 | 74  | 20.39        | 283  | 77.96         | 13.67 | 235.95 | 65.00        | 234   | 0.40
4 | 53  | 14.60        | 336  | 92.56         | 11.32 | 290.40 | 80.00        | 293   | 0.87
5 | 27  | 7.44         | 363  | 100.00        | 9.41  | 326.70 | 90.00        | 337   | 0.99
If we look at the predefined reliability target G_v and the computed most probable state ĵ^(v) for
each phase, we can see how the predefinition differs from the prediction according to the varying
fitness and number of touched units in each phase. While for phases P_2, P_3 and P_4,
the most probable states are pretty close to the predefined targets, the discrepancies in phases
P_1 and P_5 are comparatively high. And apart from P_1 and P_3, the predefined targets
underestimated the most probable states in this case, which is also illustrated in figure 3. But this
might be a little bit misleading with regard to the computed probabilities Pr_v(G_v) of reaching
the predefined goals. Here, only P_4 and P_5 establish a substantial confidence in the
reachability of the predefined quality goals, which is also shown in figure 4.
Figure 3. Comparison of predefined and predicted measures

Figure 4. Evolving probability of meeting the predefined targets

The computed measures illustrate how the introduced model can be utilized to adjust predefined
targets for the verification and correction phases.
5. CONCLUSIONS
In this paper, a novel approach for the support of correction processes of large safety-relevant
software systems was introduced. Beside the derivation of the theoretical foundations of this
model, its application on a real-world example was also shown. Thereby it could be
demonstrated how this technique can serve as an instrument for the planning and control of the
verification activities in such an environment.
REFERENCES
[1] International Electrotechnical Commission: Medical device software - Software life-cycle processes,
IEC 62304:2006 (2006).
[2] M. R. Lyu (Editor), Handbook of Software Reliability Engineering, IEEE Computer Society Press,
McGraw-Hill, 1996.
[3] J. D. Musa, Software Reliability Engineering, McGraw-Hill, 1999.
[4] D. P. Siewiorek and R. S. Swarz, The Theory and Practice of Reliable System Design,Digital Press,
1982.
[5] P. M. Duvall et al., Continuous Integration: Improving Software Quality and Reducing Risk,
Addison-Wesley, 2007.
[6] S. S. Yau and J. S. Collofello, “Design Stability Measures for Software Maintenance”, IEEE
Transactions on Software Engineering, Vol. 11 (9), pp. 849-856, 1985.
[7] S. R. Rakitin, Software Verification and Validation for Practitioners and Managers, 2nd ed., Artech
House, Inc., 2001.
[8] ISO 26262-1:2011(en) Road vehicles - Functional safety, International Standardization Organization
(2011).
[9] I. Sommerville, Software Engineering, 9th ed., Pearson, 2012.
[10] ISO 13485:2003 Medical devices - Quality management systems - Requirements for regulatory
purposes (2003).
[11] M. Pol et al., Software Testing: A Guide to the TMap Approach, Addison-Wesley Professional, 2001.
[12] K. Fisler et al., “Verification and change-impact analysis of access-control policies”, Proceedings of
the 27th International Conference on Software Engineering, ACM, 2005.
[13] J. S. Oakland, Statistical process control, Routledge, 2008.
[14] P. A. P. Moran, Random processes in genetics, Mathematical Proceedings of the Cambridge
Philosophical Society, Vol. 54. (1), Cambridge University Press, 1958.
[15] M. A. Nowak, Evolutionary dynamics, Harvard University Press, 2006.
[16] K. S. Trivedi, Probability and Statistics with Reliability, Queuing and Computer Science Applications,
PHI Learning Pvt. Limited, 2011.
AUTHOR
Dr. Sven Söhnlein received a PhD in Engineering and an MSc in Computer Science from the University of
Erlangen-Nürnberg (Germany). Until 2014 he was a Senior Researcher at the University of Erlangen-
Nürnberg and is currently working for the company Method Park Engineering GmbH.