Software reliability is a quantifiable metric, defined as the probability that a software system operates without failure for a specified period of time in a specified environment. Various software reliability growth models (SRGMs) have been proposed to predict the reliability of software; these models help vendors anticipate the behaviour of a product before shipment. Reliability is predicted by estimating the parameters of the growth models, but the model parameters are generally related nonlinearly, which makes finding optimal values difficult with traditional techniques such as Maximum Likelihood Estimation and Least Squares Estimation. Various stochastic search algorithms have been introduced that make parameter estimation more reliable and computationally easier. This paper explores parameter estimation of NHPP-based reliability models using MLE and an evolutionary search algorithm called Particle Swarm Optimization.
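As a rough illustration of the idea, the sketch below fits the Goel-Okumoto NHPP mean value function m(t) = a(1 - e^(-bt)) with a from-scratch PSO in Python; the failure data, swarm size, and coefficients are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

# Cumulative failure counts observed at test times t (illustrative data, not from the paper)
t = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
y = np.array([5, 9, 13, 16, 18, 20, 21, 22, 23, 23], dtype=float)

def mean_value(params, t):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b t))."""
    a, b = params
    return a * (1.0 - np.exp(-b * t))

def sse(params):
    """Fitness: sum of squared errors between m(t) and observed cumulative failures."""
    return np.sum((mean_value(params, t) - y) ** 2)

def pso(fitness, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))   # particle positions (a, b)
    v = np.zeros_like(x)                                       # particle velocities
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, fitness(gbest)

best, err = pso(sse, bounds=[(1.0, 100.0), (0.01, 2.0)])
print("estimated a, b:", best, "SSE:", err)
```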
Software testing is an important activity of the software development process and is the most effort-consuming phase of software development. One would like to minimize the effort and maximize the number of faults detected, and automated test case generation helps reduce cost and time. Test case generation may therefore be treated as an optimization problem. In this paper we use a genetic algorithm to optimize test cases generated by applying condition coverage on the source code. The test data generated automatically with the genetic algorithm is optimized and outperforms test cases generated by random testing.
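A toy sketch of this kind of search in Python: the function under test, its branch conditions, and the GA settings below are all hypothetical stand-ins, with fitness defined as the fraction of conditions a candidate test suite exercises.

```python
import random

def function_under_test(x, y):
    """Toy program whose branch conditions we want test inputs to cover."""
    covered = set()
    if x > 10:
        covered.add("x_gt_10")
    else:
        covered.add("x_le_10")
    if y % 2 == 0:
        covered.add("y_even")
    else:
        covered.add("y_odd")
    return covered

ALL_CONDITIONS = {"x_gt_10", "x_le_10", "y_even", "y_odd"}

def fitness(suite):
    """Condition coverage achieved by a candidate test suite (a list of (x, y) inputs)."""
    covered = set()
    for x, y in suite:
        covered |= function_under_test(x, y)
    return len(covered) / len(ALL_CONDITIONS)

def mutate(suite):
    suite = list(suite)
    suite[random.randrange(len(suite))] = (random.randint(-50, 50), random.randint(-50, 50))
    return suite

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def genetic_search(pop_size=20, suite_size=3, generations=50):
    pop = [[(random.randint(-50, 50), random.randint(-50, 50)) for _ in range(suite_size)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 1.0:          # full condition coverage reached
            break
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

print(genetic_search())
```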
Machine learning approaches are good at solving problems for which little information is available. In most cases, software-domain problems can be characterized as a learning process that depends on the prevailing circumstances and changes accordingly. A predictive model is constructed using machine learning approaches to classify software modules into defective and non-defective ones. Machine learning techniques help developers retrieve useful information after classification and enable them to analyse data from different perspectives; they have proven useful for software bug prediction. This study uses publicly available datasets of software modules and provides a comparative performance analysis of different machine learning techniques for software bug prediction. Results showed that most of the machine learning methods performed well on the software bug datasets.
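A minimal sketch of what such a comparison pipeline might look like in Python with scikit-learn; the feature matrix here is synthetic and the classifier list is an assumption, since the study's actual datasets and methods are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for a module-level metrics dataset (defective vs. non-defective modules)
X, y = make_classification(n_samples=500, n_features=20, weights=[0.8, 0.2], random_state=1)

classifiers = {
    "NaiveBayes": GaussianNB(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(random_state=1),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=1),
}

# 5-fold cross-validated F1 score for each classifier
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(f"{name:20s} mean F1 = {scores.mean():.3f}")
```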
Determination of Software Release Instant of Three-Tier Client Server Softwar... - Waqas Tariq
The quality of any software system depends mainly on how much testing takes place, what kind of testing methodologies are used, how complex the software is, the effort put in by software developers, and the type of testing environment, all subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, leading to more reliable software, but the testing cost also increases. Conversely, if the testing time is too short, the software cost could be reduced, provided the customers accept the risk of buying unreliable software; this, however, increases the cost during the operational phase, since fixing an error in operation is more expensive than fixing it during testing. It is therefore important to decide, based on cost and reliability assessment, when to stop testing and release the software to customers. In this paper we present a mechanism for deciding when to stop the testing process and release the software to the end user, by developing a software cost model with a risk factor. Based on the proposed method, we specifically address how to decide when to stop testing and release software built on a three-tier client-server architecture, which helps software developers ensure on-time delivery of a product that achieves a predefined level of reliability while minimizing cost. A numerical example illustrates the experimental results, which show significant improvements over conventional statistical models based on NHPP.
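One common way to formalize this trade-off, not necessarily the exact model of the paper, is an NHPP cost model that balances the cost of fixing faults in testing, fixing faults after release, and testing time itself. The sketch below assumes a Goel-Okumoto mean value function and illustrative cost coefficients.

```python
import numpy as np

# Illustrative coefficients (not from the paper): cost per fault fixed during testing,
# cost per fault fixed after release, and cost per unit of testing time.
c_test, c_field, c_time = 1.0, 10.0, 0.5
a, b = 100.0, 0.15          # assumed Goel-Okumoto parameters, m(t) = a * (1 - exp(-b t))

def expected_cost(T):
    m_T = a * (1.0 - np.exp(-b * T))                   # faults removed by release time T
    return c_test * m_T + c_field * (a - m_T) + c_time * T

# Scan candidate release times and pick the cheapest one
times = np.linspace(1, 100, 1000)
costs = np.array([expected_cost(T) for T in times])
T_opt = times[np.argmin(costs)]
print(f"optimal release time ~ {T_opt:.1f}, expected cost ~ {costs.min():.1f}")
```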
Reliability is concerned with reducing faults and their impact, and the earlier faults are detected the better. This presentation therefore discusses automated techniques that use machine learning to detect faults as early as possible.
Software testing defect prediction model: a practical approach - eSAT Journals
Abstract: Software defect prediction aims to reduce software testing effort by guiding testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict software defects in order to save time, improve quality and testing, and plan resources better to meet timelines. Applying a statistical software testing defect prediction model in a real-life setting is extremely difficult because it requires many data variables and metrics, as well as historical defect data, to predict the next releases or new projects of a similar type. This paper explains our statistical model and how it accurately predicts the defects for upcoming software releases or projects. We used 20 past release data points of a software project and 5 parameters, and built a model by applying descriptive statistics, correlation, and multiple linear regression with 95% confidence intervals (CI). For the chosen multiple linear regression model, the R-squared value was 0.91 and the standard error was 5.90%. The software testing defect prediction model is now being used to predict defects in various testing projects and operational releases. We observed 90.76% precision between actual and predicted defects.
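A minimal sketch of this kind of regression fit in Python with scikit-learn; the 20 release data points and five predictors below are synthetic placeholders, not the paper's variables or coefficients.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)

# 20 past releases, 5 process/product parameters each (synthetic stand-in data)
X = rng.normal(size=(20, 5))
true_coef = np.array([12.0, -3.0, 5.0, 0.0, 8.0])
y = X @ true_coef + 50 + rng.normal(scale=4.0, size=20)     # observed defect counts

model = LinearRegression().fit(X, y)
pred = model.predict(X)
print("R-squared:", round(r2_score(y, pred), 3))
print("coefficients:", np.round(model.coef_, 2), "intercept:", round(model.intercept_, 2))

# Predict defects for an upcoming release given its parameter values
next_release = rng.normal(size=(1, 5))
print("predicted defects for next release:", round(model.predict(next_release)[0], 1))
```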
With the rise of software systems ranging from personal assistants to national infrastructure, software defects have become a critical concern, as they can cost millions of dollars and affect human lives. Yet, at the breakneck pace of rapid software development settings (such as the DevOps paradigm), today's Quality Assurance (QA) practices are still time-consuming. Continuous analytics for software quality (i.e., defect prediction models) can help development teams prioritize their QA resources and chart better quality improvement plans that avoid the past pitfalls that lead to future software defects. Because specialists are needed to design and configure a large number of choices (e.g., data quality, data preprocessing, classification techniques, interpretation techniques), a set of practical guidelines for developing accurate and interpretable defect models has not yet been well developed.
The ultimate goal of my research is to (1) provide practical guidelines on how to develop accurate and interpretable defect models for non-specialists; (2) develop an intelligible defect model that offers suggestions on how to improve both software quality and processes; and (3) integrate defect models into real-world rapid development cycles such as CI/CD settings. The project is expected to provide significant benefits, including reduced software defects and operating costs, while accelerating development productivity for building software systems in many of Australia's critical domains such as Smart Cities and e-Health.
In today's increasingly digitalised world, software defects are enormously expensive. In 2018, the Consortium for IT Software Quality reported that software defects cost the global economy $2.84 trillion and affected more than 4 billion people. The average cost of software defects to Australian businesses is A$29 billion per year. Failure to eliminate defects in safety-critical systems can therefore result in serious injury, threats to life, death, and disasters. Traditionally, software quality assurance activities such as testing and code review are widely adopted to discover software defects in a software product. However, ultra-large-scale systems such as Google's can consist of more than two billion lines of code, so exhaustively reviewing and testing every single line of code is not feasible with limited time and resources. This project aims to create technologies that enable software engineers to produce the highest-quality software systems at the lowest operational cost. To achieve this, the project will build an end-to-end explainable AI platform to (1) understand the nature of critical defects; (2) predict and locate defects; (3) explain and visualise the characteristics of defects; (4) suggest potential patches to automatically fix defects; and (5) integrate the platform as a GitHub bot plugin.
Comparative Performance Analysis of Machine Learning Techniques for Software ... - csandit
Machine learning techniques can be used to analyse data from different perspectives and enable developers to retrieve useful information. They have proven useful for software bug prediction. In this paper, a comparative performance analysis of different machine learning techniques for software bug prediction is carried out on publicly available datasets. Results showed that most of the machine learning methods performed well on the software bug datasets.
Practical Guidelines to Improve Defect Prediction Model - A Review - inventionjournals
Defect prediction models are used to pinpoint risky software modules and understand past pitfalls that lead to defective modules. The predictions and insights derived from defect prediction models may not be accurate and reliable if researchers do not consider the impact of the experimental components (e.g., datasets, metrics, and classifiers) of defect prediction modeling. A lack of awareness and of practical guidelines from previous research can therefore lead to invalid predictions and unreliable insights. Through case studies of systems spanning both proprietary and open-source domains, we find that (1) noise in defect datasets, (2) parameter settings of classification techniques, and (3) model validation techniques have a large impact on the predictions and insights of defect prediction models, suggesting that researchers should carefully select experimental components in order to produce more accurate and reliable defect prediction models.
Software analytics focuses on analyzing and modeling a rich source of software data using well-established data analytics techniques in order to glean actionable insights for improving development practices, productivity, and software quality. However, if care is not taken when analyzing and modeling software data, the predictions and insights that are derived from analytical models may be inaccurate and unreliable. The goal of this hands-on tutorial is to guide participants on how to (1) analyze software data using statistical techniques like correlation analysis, hypothesis testing, effect size analysis, and multiple comparisons, (2) develop accurate, reliable, and reproducible analytical models, (3) interpret the models to uncover relationships and insights, and (4) discuss pitfalls associated with analytical techniques including hands-on examples with real software data. R will be the primary programming language. Code samples will be available in a public GitHub repository. Participants will do exercises via either RStudio or Jupyter Notebook through Binder.
Evaluating SRGMs for Automotive Software Project - RAKESH RANA
Evaluation of standard reliability growth models in the context of automotive software systems. Presented at PROFES, the 14th International Conference on Product Focused Software Development and Process Improvement, Paphos, Cyprus, 12-14 June 2013.
Optimal Selection of Software Reliability Growth Model - A Study - IJEEE
People use software, and sometimes software fails, so researchers try to quantify software reliability and understand how and why it fails. For this purpose, many software reliability models have been developed to estimate the defects remaining in the software when it is delivered to the customer. Although many such models now exist, the main issue remains largely unsolved: how to calculate software reliability efficiently. No single model can be used in every circumstance, because no single model completely represents all features. This paper describes the circumstances and criteria under which a particular model can be selected.
Defect prediction models help software quality assurance teams to effectively allocate their limited resources to the most defect-prone software modules. Model validation techniques, such as k-fold cross-validation, use historical data to estimate how well a model will perform in the future. However, little is known about how accurate the performance estimates of these model validation techniques tend to be. In this paper, we set out to investigate the bias and variance of model validation techniques in the domain of defect prediction. A preliminary analysis of 101 publicly available defect prediction datasets suggests that 77% of them are highly susceptible to producing unstable results; selecting an appropriate model validation technique is therefore a critical experimental design choice. Based on an analysis of 256 studies in the defect prediction literature, we select the 12 most commonly adopted model validation techniques for evaluation. Through a case study of data from 18 systems that span both open-source and proprietary domains, we derive the following practical guidelines for future defect prediction studies: (1) single-holdout validation techniques should be avoided; and (2) researchers should use the out-of-sample bootstrap validation technique instead of holdout or the commonly used cross-validation techniques.
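A sketch of the out-of-sample bootstrap idea in Python: train on a bootstrap sample drawn with replacement and evaluate on the rows never drawn, repeated many times. The dataset, classifier, and repetition count are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=300, n_features=15, weights=[0.8, 0.2], random_state=7)
rng = np.random.default_rng(7)
n = len(y)
aucs = []

for _ in range(100):                                   # 100 bootstrap repetitions
    train_idx = rng.integers(0, n, size=n)             # sample n rows with replacement
    test_idx = np.setdiff1d(np.arange(n), train_idx)   # rows never drawn = out-of-sample set
    clf = RandomForestClassifier(n_estimators=100, random_state=7)
    clf.fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], prob))

print(f"out-of-sample bootstrap AUC: mean={np.mean(aucs):.3f}, sd={np.std(aucs):.3f}")
```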
Software Quality Assurance (SQA) teams play a critical role in the software development process to ensure the absence of software defects. It is not feasible to perform exhaustive SQA tasks (i.e., software testing and code review) on a large software product given the limited SQA resources that are available, so prioritization is an essential step in all SQA efforts. Defect prediction models are used to prioritize risky software modules and to understand the impact of software metrics on the defect-proneness of software modules. The predictions and insights derived from defect prediction models can help software teams allocate their limited SQA resources to the modules that are most likely to be defective and avoid common pitfalls associated with the defective modules of the past. However, these predictions and insights may be inaccurate and unreliable if practitioners do not control for the impact of experimental components (e.g., datasets, metrics, and classifiers) on defect prediction models, which could lead to erroneous decision-making in practice. In this thesis, we investigate the impact that three often overlooked experimental components (i.e., issue report mislabelling, parameter optimization of classification techniques, and model validation techniques) have on defect prediction models. Through case studies of systems that span both proprietary and open-source domains, we demonstrate that (1) issue report mislabelling does not impact the precision of defect prediction models, suggesting that researchers can rely on the predictions of models trained on noisy defect datasets; (2) automated parameter optimization of classification techniques substantially improves the performance and stability of defect prediction models and changes their interpretation, suggesting that researchers should no longer shy away from applying parameter optimization to their models; and (3) the out-of-sample bootstrap validation technique produces a good balance between bias and variance of performance estimates, suggesting that the single-holdout and cross-validation families that are commonly used nowadays should be avoided.
Regression testing concentrates on finding defects after a major code change has occurred. Specifically, it exposes software regressions, i.e., old bugs that have reappeared. It is an expensive testing process that has been estimated to account for almost half of the cost of software maintenance. To improve the regression testing process, test case prioritization techniques organize the execution order of test cases; this also improves the rate of fault detection when test suites cannot run to completion.
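As a rough illustration of one common prioritization strategy (not necessarily the technique studied here), the Python sketch below orders hypothetical tests greedily by the additional coverage each one contributes.

```python
# Each test maps to the set of code branches it covers; names and sets are hypothetical.
coverage = {
    "test_login":    {1, 2, 3},
    "test_checkout": {3, 4, 5, 6},
    "test_search":   {2, 7},
    "test_profile":  {1, 8},
}

def prioritize(coverage):
    """Greedy 'additional' strategy: always pick the test adding the most uncovered branches."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(prioritize(coverage))   # e.g. ['test_checkout', 'test_login', ...]
```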
A survey of predicting software reliability using machine learning methods - IAESIJAI
In light of technical and technological progress, software has become indispensable in every aspect of human life, including the medical sector and industrial control, so it is imperative that software always works flawlessly. The information technology sector has expanded rapidly in recent years; software companies can no longer rely on cost advantages alone to stay competitive in the market, and programmers must deliver reliable, high-quality software. To support estimating and predicting software reliability with machine learning and deep learning, this survey gives a brief overview of the important scientific contributions on software reliability and of the highly efficient methods and techniques that researchers have found for predicting it.
A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ... - ijccmsjournal
Monte-Carlo simulation is widely used for distributed embedded systems today. In this work, we focus on reliability assessment of a distributed embedded system through Monte-Carlo simulation. The assessment uses random data representing input voltages ranging from 0 V to 12 V; a number of trials are executed on these data to check the average-case behaviour of a distributed real-time embedded system. The experimental results show a saturation point in the time behaviour, which represents the average-case behaviour of the distributed embedded system under study.
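A toy version of such a Monte-Carlo assessment in Python: the 0 V to 12 V input range matches the abstract, while the per-trial response function and the trial count are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_response(voltage):
    """Hypothetical per-trial behaviour of the embedded system for a given input voltage."""
    return 1.0 if 2.0 <= voltage <= 10.0 else 0.0   # 1 = correct operation, 0 = failure

running_means = []
total = 0.0
n_trials = 20000
for i in range(1, n_trials + 1):
    v = rng.uniform(0.0, 12.0)          # random input voltage between 0 V and 12 V
    total += system_response(v)
    running_means.append(total / i)

# The running mean flattens out (saturates) as trials accumulate: the average-case behaviour
print("estimated average-case success rate:", round(running_means[-1], 4))
```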
A Review on Software Fault Detection and Prevention Mechanism in Software Dev... - iosrjce
Over the past many years, numerous software defect prediction models have been developed to address various issues in software project development. Software reliability is a significant aspect of software quality that evaluates and predicts quality based on defect prediction. Many software companies are trying to improve software quality while reducing the cost of development. The Rayleigh model is one of the significant models for analysing software defects from the generated data, and Analysis of Means (ANOM) is a statistical technique that provides quality assurance depending on the situation. In this paper, improved software defect prediction models (ISDPM) are used to predict defects occurring in five phases: analysis, planning, design, testing, and maintenance. To improve the performance of the proposed methodology, order statistics are adopted for better prediction. Experiments are conducted on two synthetic projects to analyse the defects.
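A small sketch of a Rayleigh defect-arrival curve across development phases in Python; the total defect count and the peak-time parameter are illustrative, not the paper's fitted values.

```python
import numpy as np

K = 200.0    # assumed total defects injected over the project
t_m = 3.0    # assumed phase index at which defect discovery peaks

def rayleigh_cdf(t):
    """Cumulative defects found by time t under the Rayleigh model."""
    return K * (1.0 - np.exp(-(t ** 2) / (2.0 * t_m ** 2)))

phases = ["analysis", "planning", "design", "testing", "maintenance"]
for i, phase in enumerate(phases, start=1):
    expected = rayleigh_cdf(i) - rayleigh_cdf(i - 1)   # defects expected within this phase
    print(f"{phase:12s} expected defects ~ {expected:5.1f}")
```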
SRGM Analyzers Tool of SDLC for Software Improving Quality - IJERA Editor
Software Reliability Growth Models (SRGMs) have been developed to estimate software reliability measures such as the software failure rate, the number of remaining faults, and software reliability. In this paper, a software analyzer tool is proposed for deriving several software reliability growth models based on the Enhanced Non-homogeneous Poisson Process (ENHPP) in the presence of imperfect debugging and error generation. The proposed models are initially formulated for the case where there is no differentiation between the failure observation and fault removal testing processes, and are then extended to the case where there is a clear differentiation between the two. Many SRGMs have been developed to describe software failures as a random process and can be used to measure development status during testing. With an SRGM, software consultants can easily measure (or evaluate) software reliability (or quality) and plot software reliability growth charts.
Software Defect Prediction Using Radial Basis and Probabilistic Neural Networks - Editor IJCATR
Defects in the modules of software systems are a major problem in software development. A variety of data mining techniques are used to predict software defects, such as regression, association rules, clustering, and classification. This paper is concerned with classification-based software defect prediction and investigates the effectiveness of a radial basis function neural network and a probabilistic neural network in terms of prediction accuracy and defect detection. The conclusion drawn from this work is that the neural networks used here provide an acceptable level of accuracy but poor defect prediction ability. Probabilistic neural networks perform consistently better with respect to the two performance measures across all datasets. It may be advisable to use a range of software defect prediction models that complement each other rather than relying on a single technique.
Text Mining in Digital Libraries using OKAPI BM25 Model - Editor IJCATR
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not rank retrieved results by relevance. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving relevance ranking in digital libraries; Okapi BM25 was selected because it is a probability-based relevance ranking algorithm. A case study was conducted and the model design was based on information retrieval processes. The performance of the Boolean, vector space, and Okapi BM25 models was compared for data retrieval, and ranked relevant documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean and vector space models. The paper therefore proposes using Okapi BM25 to weight terms according to their relative frequencies in a document, so as to improve the performance of text mining in digital libraries.
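A compact sketch of the Okapi BM25 score in Python, using the common default parameters k1 = 1.5 and b = 0.75 and a made-up three-document corpus; this is a generic BM25 implementation, not the paper's system.

```python
import math
from collections import Counter

docs = [
    "digital library information retrieval".split(),
    "library catalogue search and ranking".split(),
    "relevance ranking of retrieved documents".split(),
]
avgdl = sum(len(d) for d in docs) / len(docs)      # average document length
N = len(docs)
df = Counter(term for d in docs for term in set(d))  # document frequency of each term

def bm25(query, doc, k1=1.5, b=0.75):
    """Okapi BM25 score of one document for a whitespace-tokenized query."""
    tf = Counter(doc)
    score = 0.0
    for term in query:
        if term not in tf:
            continue
        idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1.0)
        norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
        score += idf * norm
    return score

query = "library ranking".split()
for d in docs:
    print(" ".join(d), "->", round(bm25(query, d), 3))
```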
Green Computing, eco trends, climate change, e-waste and eco-friendly - Editor IJCATR
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in Nigeria - Editor IJCATR
Computers today are an integral part of individuals' lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life, and technological obsolescence. Individuals are concerned about the hazardous materials present in computers, even if the importance they attach to various attributes differs, and a more environment-friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria, highlight a series of measures and the advantages they bring for our country, and propose a series of action steps for further development in these areas. It is possible for Nigeria to obtain an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal, as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce... - Editor IJCATR
Vehicular ad hoc networks (VANETs) are a promising area of research that enables interconnection among moving vehicles and between mobile units (vehicles) and road side units (RSUs). In VANETs, mobile vehicles can be organized into groups to support interconnection links, and the group arrangement, in terms of size and geographical extent, has a serious influence on communication quality. VANETs are a subclass of mobile ad hoc networks with more complex mobility patterns; because of mobility, the topology changes very frequently, which raises a number of technical challenges including network stability. There is therefore a need for a group configuration that leads to a more stable, realistic network. This paper investigates various simulation scenarios in which clusters generated with the k-means algorithm are varied in number to find the most stable configuration in a realistic road scenario.
Optimum Location of DG Units Considering Operation Conditions - Editor IJCATR
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi... - Editor IJCATR
Early detection of diabetes mellitus (DM) can prevent or inhibit complications. Several laboratory tests must be done to detect DM, and the results of these tests are then converted into training data. The training data used in this study were generated from the UCI Pima database, with 6 attributes used to classify diabetes as positive or negative. Among the various classification methods in common use, three were compared in this study on one identical case: fuzzy KNN, the C4.5 algorithm, and the Naïve Bayes Classifier (NBC). The objective was to create software to classify DM using the tested methods and to compare the three methods in terms of accuracy, precision, and recall. The results showed that the best method was fuzzy KNN, with average and maximum accuracy of 96% and 98%, respectively. NBC came second, with average and maximum accuracy of 87.5% and 90%, respectively, and the C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
Web Scraping for Estimating new Record from Source Site - Editor IJCATR
Research in the field of competitive intelligence and research in the field of web scraping have a mutually beneficial relationship. In today's information age, websites serve as a main data source. This research focuses on how to get data from websites and how to reduce the intensity of downloading. One problem is that the source websites are autonomous, so their content structure is liable to change at any time; another is that the Snort intrusion detection system installed on the server detects crawler bots. The researchers therefore propose the Mining Data Records (MDR) method together with exponential smoothing, so that the crawler adapts to changes in content structure and fetches automatically following the pattern of news occurrences. In tests with a threshold of 0.3 for MDR and a similarity threshold of 0.65 for STM, recall and precision give an average f-measure of 92.6%. The exponential smoothing estimation with α = 0.5 yields an MAE of 18.2 duplicate data records and slows the fixed download/fetch schedule from 21.8 to 3.6 data records in the average time between news occurrences.
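A minimal sketch of simple exponential smoothing with α = 0.5 in Python, used here to forecast the next count of new records; the series of daily record counts below is made up for illustration.

```python
ALPHA = 0.5
observed = [18, 25, 22, 30, 19, 24, 27, 21]   # daily counts of new records (made-up data)

def exp_smoothing_forecasts(series, alpha=ALPHA):
    """One-step-ahead forecasts: f[0] = x[0], f[t+1] = alpha * x[t] + (1 - alpha) * f[t]."""
    f = [series[0]]
    for x in series[:-1]:
        f.append(alpha * x + (1 - alpha) * f[-1])
    return f

forecasts = exp_smoothing_forecasts(observed)
mae = sum(abs(x - f) for x, f in zip(observed, forecasts)) / len(observed)
next_estimate = ALPHA * observed[-1] + (1 - ALPHA) * forecasts[-1]
print(f"MAE over history: {mae:.1f}, forecast for next fetch: {next_estimate:.1f}")
```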
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S... - Editor IJCATR
Most of the existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes using a single ontology. Ontology-based semantic similarity techniques, namely structure-based measures (the path length measure, Wu and Palmer's measure, and Leacock and Chodorow's measure), information-content-based measures (Resnik's measure, Lin's measure), and a biomedical domain ontology measure (Al-Mubaid and Nguyen's SemDist), were evaluated against human experts' ratings and compared on sets of concepts from the ICD-10 "V1.0" terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in a single ontology and demonstrate that, compared with the existing techniques, SemDist gives the best overall correlation with experts' ratings.
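As a rough illustration of the path-based family of measures named above, the Python sketch below computes the simple path-length distance and the Wu and Palmer score on a tiny invented is-a hierarchy (not ICD-10 or UMLS data).

```python
# A tiny invented is-a hierarchy: child -> parent (the root has no parent)
parent = {
    "disorder": None,
    "heart_disorder": "disorder",
    "lung_disorder": "disorder",
    "arrhythmia": "heart_disorder",
    "asthma": "lung_disorder",
}

def ancestors(c):
    """Chain from the concept up to the root, concept first."""
    chain = [c]
    while parent[c] is not None:
        c = parent[c]
        chain.append(c)
    return chain

def depth(c):
    return len(ancestors(c))            # the root has depth 1

def lcs(c1, c2):
    """Least common subsumer: the deepest ancestor shared by both concepts."""
    anc2 = set(ancestors(c2))
    return next(a for a in ancestors(c1) if a in anc2)

def path_length(c1, c2):
    l = lcs(c1, c2)
    return (depth(c1) - depth(l)) + (depth(c2) - depth(l))

def wu_palmer(c1, c2):
    l = lcs(c1, c2)
    return 2 * depth(l) / (depth(c1) + depth(c2))

print(path_length("arrhythmia", "asthma"))          # 4 edges, via the root
print(round(wu_palmer("arrhythmia", "asthma"), 2))   # 0.33
```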
Semantic Similarity Measures between Terms in the Biomedical Domain within f... - Editor IJCATR
Techniques and tests are the tools used to define how to measure the goodness of an ontology or its resources. Measuring the similarity between biomedical classes/concepts is an important task for biomedical information extraction and knowledge discovery, and most semantic similarity techniques can be adapted for use in the biomedical domain (UMLS). Many experiments have been conducted to check the applicability of these measures. In this paper, we measure semantic similarity between two terms within a single ontology or across multiple ontologies in ICD-10 "V1.0" as the primary source, and compare the results to human experts' scores using the correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift - Editor IJCATR
Adding an aggregate storage module is an effective way to improve the storage access performance of small files in OpenStack Swift. Because Swift incurs heavy disk operations when querying metadata, its transfer performance for large numbers of small files is low. In this paper, we propose an aggregated storage strategy (ASS) and implement it in Swift. ASS comprises two parts: merge storage and index storage. In the first stage, ASS arranges the write request queue in chronological order and stores objects in volumes; these volumes are the large files that are actually stored in Swift. In the second stage, the object-to-volume mapping information is stored in a key-value store. The experimental results show that ASS can effectively improve Swift's small-file transfer performance.
Integrated System for Vehicle Clearance and Registration - Editor IJCATR
Efficient management and control of a government's cash resources relies on government banking arrangements. Nigeria, like many low-income countries, employed fragmented systems for handling government receipts and payments. In 2016, Nigeria implemented a unified structure, as recommended by the IMF, in which all government funds are collected in one account, which would reduce borrowing costs, extend credit, and improve the government's fiscal policy, among other benefits. This situation motivated us to design and implement an integrated system for vehicle clearance and registration. The system complies with the new Treasury Single Account policy and enables proper interaction and collaboration among the five agencies (NCS, FRSC, SBIR, VIO, and NPF) responsible for vehicle administration and related activities in Nigeria. Since the system is web based, the Object Oriented Hypermedia Design Methodology (OOHDM) was used, along with tools such as PHP, JavaScript, CSS, HTML, AJAX, and other web development technologies. The result is a web-based system that provides complete information about a vehicle, from the exact date of importation to registration and licence renewal. Vehicle owner information, customs duty information, plate number registration details, and so on can also be retrieved efficiently from the system by any of the agencies without contacting another agency. In addition, the number plate will no longer be the only means of vehicle identification, as is presently the case in Nigeria, because the unified system automatically generates and assigns a Unique Vehicle Identification Pin Number (UVIPN) to the vehicle on payment of duty, and the UVIPN is linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu... - Editor IJCATR
The Supermarket Management System deals with the automation of the buying and selling of goods and services, covering both the sale and purchase of items. The project is developed with the objective of making the system reliable, easier to use, fast, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*Editor IJCATR
Energy is a key component in the Wireless Sensor Network (WSN)[1]. The system will not be able to run according to its function without the availability of adequate power units. One of the characteristics of wireless sensor network is Limitation energy[2]. A lot of research has been done to develop strategies to overcome this problem. One of them is clustering technique. The popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH)[3]. In LEACH, clustering techniques are used to determine Cluster Head (CH), which will then be assigned to forward packets to Base Station (BS). In this research, we propose other clustering techniques, which utilize the Social Network Analysis approach theory of Betweeness Centrality (BC) which will then be implemented in the Setup phase. While in the Steady-State phase, one of the heuristic searching algorithms, Modified Bi-Directional A* (MBDA *) is implemented. The experiment was performed deploy 100 nodes statically in the 100x100 area, with one Base Station at coordinates (50,50). To find out the reliability of the system, the experiment to do in 5000 rounds. The performance of the designed routing protocol strategy will be tested based on network lifetime, throughput, and residual energy. The results show that BC-MBDA * is better than LEACH. This is influenced by the ways of working LEACH in determining the CH that is dynamic, which is always changing in every data transmission process. This will result in the use of energy, because they always doing any computation to determine CH in every transmission process. In contrast to BC-MBDA *, CH is statically determined, so it can decrease energy usage.
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers has thrown up new challenges for legacy; existing networks; and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Network (SDN) which decouples the control plane from the data plane through network vitalizations aims to address these challenges. This paper has explored the SDN architecture and its implementation with the OpenFlow protocol. It has also assessed some of its benefits over traditional network architectures, security concerns and how it can be addressed in future research and related works in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling on "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depending on the system administrator who manually reads every incoming report [3]. Read manually can lead to errors in handling complaints [4] if the data flow is huge and grows rapidly, it needs at least three days to prepare a confirmation and it sensitive to inconsistencies [3]. In this study, the authors propose a model that can measure the identities of the Query (Incoming) with Document (Archive). The authors employed Class-Based Indexing term weighting scheme, and Cosine Similarities to analyse document similarities. CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values used in classification as feature for K-Nearest Neighbour (K-NN) classifier. The optimum result evaluation is pre-processing employ 75% of training data ratio and 25% of test data with CoSimTFIDF feature. It deliver a high accuracy 84%. The k = 5 value obtain high accuracy 84.12%
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul Image is more difficult compared with that of Latin. It could be recognized from the structural arrangement. Hangul is arranged from two dimensions while Latin is only from the left to the right. The current research creates a system to convert Hangul image into Latin text in order to use it as a learning material on reading Hangul. In general, image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through connected component-labeling method, and thinning with Zhang Suen to decrease some pattern information. The second is receiving the feature from every single image, whose identification process is done through chain code method. The third is recognizing the process using Support Vector Machine (SVM) with some kernels. It works through letter image and Hangul word recognition. It consists of 34 letters, each of which has 15 different patterns. The whole patterns are 510, divided into 3 data scenarios. The highest result achieved is 94,7% using SVM kernel polynomial and radial basis function. The level of recognition result is influenced by many trained data. Whilst the recognition process of Hangul word applies to the type 2 Hangul word with 6 different patterns. The difference of these patterns appears from the change of the font type. The chosen fonts for data training are such as Batang, Dotum, Gaeul, Gulim, Malgun Gothic. Arial Unicode MS is used to test the data. The lowest accuracy is achieved through the use of SVM kernel radial basis function, which is 69%. The same result, 72 %, is given by the SVM kernel linear and polynomial.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In underwater environment, for retrieval of information the routing mechanism is used. In routing mechanism there are three to four types of nodes are used, one is sink node which is deployed on the water surface and can collect the information, courier/super/AUV or dolphin powerful nodes are deployed in the middle of the water for forwarding the packets, ordinary nodes are also forwarder nodes which can be deployed from bottom to surface of the water and source nodes are deployed at the seabed which can extract the valuable information from the bottom of the sea. In underwater environment the battery power of the nodes is limited and that power can be enhanced through better selection of the routing algorithm. This paper focuses the energy-efficient routing algorithms for their routing mechanisms to prolong the battery power of the nodes. This paper also focuses the performance analysis of the energy-efficient algorithms under which we can examine the better performance of the route selection mechanism which can prolong the battery power of the node
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The designing of routing algorithms faces many challenges in underwater environment like: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, localization 3D deployment, and underwater obstacles (voids). This paper focuses the underwater voids which affects the overall performance of the entire network. The majority of the researchers have used the better approaches for removal of voids through alternate path selection mechanism but still research needs improvement. This paper also focuses the architecture and its operation through merits and demerits of the existing algorithms. This research article further focuses the analytical method of the performance analysis of existing algorithms through which we found the better approach for removal of voids
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in
1 n R n ), which is of regularity-loss property. By using spectrally resolution, we study the pointwise estimates in the spectral
space of the fundamental solution to the corresponding linear problem. Appealing to this pointwise estimates, we obtain the global
existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
A Review on Parameter Estimation Techniques of Software Reliability Growth Models
International Journal of Computer Applications Technology and Research, Volume 3, Issue 4, 267-272, 2014, www.ijcat.com
A Review on Parameter Estimation Techniques of Software Reliability Growth Models

Karambir Bidhan, University Institute of Engineering and Technology, Kurukshetra University, Haryana, India
Adima Awasthi, University Institute of Engineering and Technology, Kurukshetra University, Haryana, India
Abstract: Software reliability is considered as a quantifiable metric, which is defined as the probability of a software to operate
without failure for a specified period of time in a specific environment. Various software reliability growth models have been proposed
to predict the reliability of a software. These models help vendors to predict the behaviour of the software before shipment. The
reliability is predicted by estimating the parameters of the software reliability growth models. But the model parameters are generally
in nonlinear relationships which creates many problems in finding the optimal parameters using traditional techniques like Maximum
Likelihood and least Square Estimation. Various stochastic search algorithms have been introduced which have made the task of
parameter estimation, more reliable and computationally easier. Parameter estimation of NHPP based reliability models, using MLE
and using an evolutionary search algorithm called Particle Swarm Optimization, has been explored in the paper.
Keywords: Software Reliability, Software Reliability Growth Models, Parameter Estimation, Maximum Likelihood Estimation, Particle Swarm Optimization
1. INTRODUCTION
Software can be defined as an instrument comprising a set of coded statements which takes a discrete set of inputs and transforms them into a discrete set of outputs. Software may contain discrepancies or faults which may result in software failures. Any deviation from the desired behaviour of the software is unwanted for the user, and two approaches have traditionally been used to check the correctness of a program: program proving and program testing. Program proving is a formal and mathematical approach in which a finite sequence of logical statements, ending in the statement to be proved (usually the output specification statement), is constructed. Each of the logical statements is either an axiom or a statement derived from earlier statements by the application of an inference rule. However, Gerhart and Yelowitz [1] presented various programs which were proved correct by this approach but which actually contained faults. Program testing, on the other hand, is a more practical approach and is heuristic in nature. It basically involves symbolic or physical execution of a set of test cases with the objective of exposing embedded faults in the program. Program testing, like program proving, is an imperfect tool for ensuring program correctness. As these approaches cannot completely assure the correctness of a program, a metric is required which can help in assessing the degree of program correctness, and software reliability fulfils this requirement. Reliability of software can be defined as "the probability of failure-free software operation for a specific period of time in a specified environment" [2]. The reliability of software needs to be assessed before it is delivered to the customer. Various software reliability growth models (SRGMs) have been introduced for predicting the reliability of software. However, according to an observation made by M. R. Lyu in [3], "There is no universally acceptable model that can be trusted to give accurate results in all circumstances; users should not trust claims to the contrary." Every model has some advantages and some disadvantages. The choice regarding which model to follow for the prediction depends upon the requirements of the software.
2. SOFTWARE RELIABILITY GROWTH MODELS
A Software Reliability Growth Model can be considered one of the fundamental techniques to assess the reliability of software quantitatively. Any software reliability model presents a mathematical function which illustrates defect detection rates. These models are classified into two categories, concave and S-shaped models, which can be illustrated with the help of Figure 1 [4].
Figure 1: Concave and S-Shaped Models
Concave models are so called because they bend downward. S-shaped models, on the other hand, are first convex and then concave. This reflects their underlying assumption that early testing is not as efficient as later testing, so there is a "ramp-up" period during which the defect-detection rate increases. This period terminates at the inflection point in the S-shaped curve. The most important property of both classes is that they have the same asymptotic behaviour: the defect-detection rate decreases as the number of defects detected increases, and the total number of defects detected asymptotically approaches a finite value.
Several models have been proposed for assessing the reliability of software; the basic assumptions of some of these models are discussed below:
Goel-Okumoto Model:
This model was first introduced by Goel and Okumoto [5] and is based on the following assumptions:
• All faults in the software are mutually independent from the failure detection point of view.
• The number of failures detected at any time is proportional to the current number of faults in the software. This means that the probability of the failures or faults actually occurring, i.e., being detected, is constant.
• The isolated faults are removed prior to future test occasions.
• Each time a software failure occurs, the software error which caused it is immediately removed and no new errors are introduced.
These assumptions lead to the following differential equation:

dm(t)/dt = b(a − m(t))    (1)

In this equation, a represents the expected total number of faults in the software before testing, b stands for the failure detection rate (the failure intensity per fault), and m(t) for the expected number of failures detected at time t. Solving the above equation for m(t), we obtain the following mean value function:

m(t) = a(1 − e^(−bt))    (2)
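As a quick illustration (not part of the original paper), the mean value function in equation (2) can be evaluated directly; the sketch below uses hypothetical parameter values a = 100 and b = 0.05 chosen only for demonstration.

    import numpy as np

    def goel_okumoto_mean(t, a, b):
        """Expected cumulative number of failures m(t) = a * (1 - exp(-b*t))."""
        return a * (1.0 - np.exp(-b * t))

    # Illustrative (hypothetical) parameter values: a = 100 expected faults,
    # b = 0.05 detection rate per fault.
    t = np.linspace(0, 100, 11)
    print(goel_okumoto_mean(t, a=100, b=0.05))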
Yamada S-Shaped Model:
The Yamada S-Shaped model was first introduced in Yamada et al. [6]. The model is based on the following assumptions:
• All faults in the software are mutually independent from the failure detection point of view.
• The probability of failure detection at any time is proportional to the current number of faults in the software.
• The proportionality of failure detection is constant.
• The initial error content of the software is a random variable.
• A software system is subject to failures at random times caused by errors present in the system.
• The time between the (i−1)th and the ith failure depends on the time of the (i−1)th failure.
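The paper does not state this model's mean value function; for reference, the delayed S-shaped form commonly associated with it is m(t) = a(1 − (1 + bt)e^(−bt)), which the sketch below evaluates. The parameter values are illustrative only.

    import numpy as np

    def yamada_s_shaped_mean(t, a, b):
        """Commonly cited delayed S-shaped mean value function:
        m(t) = a * (1 - (1 + b*t) * exp(-b*t))."""
        return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

    # Illustrative parameters only.
    print(yamada_s_shaped_mean(np.array([0.0, 10.0, 50.0, 100.0]), a=100, b=0.05))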
Musa's Basic Execution Time Model [7]:
This model assumes that there are N software faults at the start of testing, each independent of the others and equally likely to cause a failure during testing. A detected fault is removed with certainty in negligible time and no new faults are introduced during the debugging process. The hazard function for this model is given by

z(τ) = f K (N − n_c)    (3)

where τ is the execution time utilized in executing the program up to the present, f is the linear execution frequency (average instruction execution rate divided by the number of instructions in the program), K is a proportionality constant, a fault exposure ratio that relates fault exposure frequency to the linear execution frequency, and n_c is the number of faults corrected during (0, τ). One of the main features of this model is that it explicitly emphasizes the dependence of the hazard function on execution time. Musa also provides a systematic approach for converting the model so that it is applicable to calendar time as well.
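A minimal sketch of this hazard function, using hypothetical values for f, K, N, and the number of corrected faults:

    def musa_hazard(f, K, N, n_c):
        """Hazard rate z = f * K * (N - n_c) for Musa's basic execution time model."""
        return f * K * (N - n_c)

    # Hypothetical values: execution frequency, fault exposure ratio,
    # initial fault count, and faults corrected so far.
    print(musa_hazard(f=2.0e6, K=4.2e-7, N=120, n_c=30))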
3. PARAMETER ESTIMATION TECHNIQUES
A software reliability model is simply a function, and fitting this function to the data means estimating its parameters from the data. One approach to estimating parameters is to input the data directly into equations for the parameters. The most common method for this direct parameter estimation is the maximum likelihood technique. Maximum Likelihood Estimation is a method used for estimating the parameters of a statistical model; applied to a data set for a given statistical model, it provides estimates of that model's parameters. A second approach is fitting the curve described by the function to the data and estimating the parameters from the best fit to the curve. The most common method for this indirect parameter estimation is the least squares technique. Although methods like MLE and LSE provide a way to estimate the parameters of reliability models, the model parameters are normally in nonlinear relationships, and this makes traditional parameter estimation techniques suffer many problems in finding the optimal parameters to tune the model for a better prediction. The parameter estimation problem for nonlinear systems can be stated and formulated as a function optimization problem in which the objective is to obtain a set of parameters that provide the best fit to measured data based on a specific type of function to be optimized. Such parameters are obtained using a search technique in the space of values specified in advance. Searching techniques are bound to the complexity of the search space, and the use of gradient search might find local minima but not optimal solutions. Stochastic search algorithms, on the other hand, such as Evolutionary Algorithms, e.g., Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), present a more reliable functionality in estimating models' parameters.
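As a hedged illustration of the indirect (least squares) approach described above, the sketch below fits the Goel-Okumoto mean value function to a small synthetic failure-count data set using scipy.optimize.curve_fit; the data values are invented for demonstration only.

    import numpy as np
    from scipy.optimize import curve_fit

    def go_mean(t, a, b):
        # Goel-Okumoto mean value function m(t) = a * (1 - exp(-b*t))
        return a * (1.0 - np.exp(-b * t))

    # Synthetic cumulative failure counts (illustrative only).
    t_obs = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
    m_obs = np.array([25, 44, 58, 69, 77, 83, 88, 91], dtype=float)

    # Nonlinear least squares fit; p0 is an initial guess for (a, b).
    (a_hat, b_hat), _ = curve_fit(go_mean, t_obs, m_obs, p0=(100.0, 0.01))
    print("Estimated a =", a_hat, "b =", b_hat)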
4. LITERATURE REVIEW
4.1 Parameter Estimation Using Maximum Likelihood Estimation
Barnard and Bayes [8] depicted that for most mathematical
models, if the number of parameters is large and the observed
data is erroneous, calibration can be performed in maximum
likelihood (ML) framework, where the state estimate is the
parameter which maximizes the likelihood function.
Knafl [9] proposed existence conditions for maximum
likelihood parameter estimates for several commonly
employed two-parameter software reliability models. For
these models, the maximum likelihood equations were
expressed in terms of a single equation in one unknown.
Bounds were given on solutions to these single-equation problems to serve as initial intervals for search algorithms like bisection. Uniqueness of the solutions was established in some cases. Results were given for the case of grouped failure data.
Goel [10] reviewed four analytical software reliability models which are used for estimating and monitoring software reliability. These models include Times Between Failures Models, Failure Count Models, Fault Seeding Models and Input Domain Models. Goel and Okumoto proposed the Non-Homogeneous Poisson Process model, which lies in the category of Failure Count models of software reliability estimation. In this model it is assumed that failures occur during execution of the software at random times, because of faults present in the software. If we represent the cumulative number of failures that have occurred in the system till time t by N(t), then the non-homogeneous Poisson model can be represented as follows:
P{N(t) = n} = ([m(t)]^n / n!) e^(−m(t)),  n = 0, 1, 2, ...    (4)

where

m(t) = a(1 − e^(−bt))    (5)
λ(t) = m'(t) = a b e^(−bt)    (6)

m(t) in the above equations is the expected number of failures observed by time t and λ(t) is the failure intensity function. Here a is the expected number of failures to be observed eventually and b is the fault detection rate per fault.
Knafl and Morgan [11] proposed that the reliability of the software can be estimated using software reliability growth models, i.e., a non-homogeneous Poisson process model with mean value function µ(t). As in the Goel-Okumoto model, µ(t) consists of two parameters a and b, and in order to predict the reliability of any software the values of these parameters need to be estimated. These parameters can be estimated by using the Maximum Likelihood Estimation method. An observation interval (0, t_k] is considered and divided into subintervals (0, t_1], (t_1, t_2], ..., (t_{k-1}, t_k]; the number of failures per subinterval is denoted by n_i (i = 1, 2, ..., k), i.e., the number of failures in (t_{i-1}, t_i]. Thus the likelihood function for the mean value function µ(t) of the G-O model can be given by:

L(a, b) = ∏_{i=1}^{k} ( [µ(t_i) − µ(t_{i−1})]^{n_i} / n_i! ) exp{−[µ(t_i) − µ(t_{i−1})]}    (7)

Taking the natural logarithm on both sides of equation (7):

ln L = ln ∏_{i=1}^{k} ( [µ(t_i) − µ(t_{i−1})]^{n_i} / n_i! ) exp{−[µ(t_i) − µ(t_{i−1})]}    (8)
     = ∑_{i=1}^{k} ln( ( [µ(t_i) − µ(t_{i−1})]^{n_i} / n_i! ) exp{−[µ(t_i) − µ(t_{i−1})]} )    (9)
     = ∑_{i=1}^{k} { n_i ln[µ(t_i) − µ(t_{i−1})] − [µ(t_i) − µ(t_{i−1})] − ln(n_i!) }    (10)
Maximum likelihood estimates of the parameters a and b can be computed by taking the partial derivatives of equation (8) with respect to each model parameter, with µ(t) = a(1 − e^(−bt)), and equating the derivatives to zero one by one:

∂ ln L / ∂a = ( ∑_{i=1}^{k} n_i ) / a − (1 − e^(−b t_k)) = 0    (11)

∂ ln L / ∂b = ∑_{i=1}^{k} n_i (t_i e^(−b t_i) − t_{i−1} e^(−b t_{i−1})) / (e^(−b t_{i−1}) − e^(−b t_i)) − a t_k e^(−b t_k) = 0    (12)

Expressions for a and b can then be obtained by solving the following equations:

a = ( ∑_{i=1}^{k} n_i ) / (1 − e^(−b t_k))    (13)

∑_{i=1}^{k} n_i (t_i e^(−b t_i) − t_{i−1} e^(−b t_{i−1})) / (e^(−b t_{i−1}) − e^(−b t_i)) − ( ∑_{i=1}^{k} n_i ) t_k e^(−b t_k) / (1 − e^(−b t_k)) = 0    (14)
Equation (14) is a nonlinear equation in b and it is not possible to solve it analytically; hence it must be solved numerically, for example using Newton's method.
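A minimal sketch (not from the paper) of this numerical step, solving equation (14) for b with SciPy's root finder on illustrative grouped failure data and then recovering a from equation (13):

    import numpy as np
    from scipy.optimize import brentq

    # Illustrative grouped failure data: interval end points t_i and counts n_i.
    t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])   # t_0 = 0
    n = np.array([30.0, 20.0, 14.0, 9.0, 6.0])          # failures per subinterval

    def dlogL_db(b):
        """Left-hand side of equation (14) as a function of b."""
        e = np.exp(-b * t)
        term = np.sum(n * (t[1:] * e[1:] - t[:-1] * e[:-1]) / (e[:-1] - e[1:]))
        return term - n.sum() * t[-1] * e[-1] / (1.0 - e[-1])

    b_hat = brentq(dlogL_db, 1e-4, 1.0)                 # bracketing interval is a guess
    a_hat = n.sum() / (1.0 - np.exp(-b_hat * t[-1]))    # equation (13)
    print("a =", a_hat, "b =", b_hat)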
A modification of the G-O model was presented by Hossain and Dahiya [12], in which the maximum likelihood (ML) equations were investigated and a sufficient condition for the ML estimators to be finite was given. In the G-O model the probability density function (p.d.f.) of the time to first failure is given by:

g(t) = a b e^(−bt) exp{−a(1 − e^(−bt))}    (15)

This is an improper p.d.f., which is a big drawback of the model, since it integrates to 1 − e^(−a) rather than 1. The modified model uses the proper p.d.f. given by:

f(t) = a b e^(−bt) exp{−a(1 − e^(−bt))} / (1 − e^(−a))    (16)

More generally, the modified (H-D) model uses:

f(t) = a b e^(−bt) exp{−a(1 − e^(−bt))} / (1 − c e^(−a)),   0 ≤ c ≤ 1    (17)

with the hope that it will do better than the G-O model. When c = 0 the model reduces to the G-O model, and when c = 1 the corresponding p.d.f. of the time to failure is proper. With such a model one might anticipate that m(∞) is finite; but when c = 1, m(∞) = ∞, giving rise to a new problem in determining the mean total number of failures in the system. To avoid this situation the modified model needs to choose a c, 0 ≤ c ≤ 1, that gives a better (in some sense) estimated mean number of failures in the system than the G-O estimate. The H-D model is superior to the G-O model because: 1) it is more flexible; 2) it assigns less weight at infinity in the p.d.f. of the time to failure; and 3) the sufficient condition for the existence of a finite solution of the ML equations is the same as the necessary and sufficient condition for the G-O model.
4.2 Parameter Estimation Using Particle Swarm Optimization
Kennedy and Eberhart [13] depicted PSO as a simple model of social learning whose emergent behaviour has found popularity in solving difficult optimization problems. The initial metaphor had two cognitive aspects, individual learning and learning from a social group, where an individual finds itself in a problem space by using its own experience and that of its peers to move itself toward the solution:

v_id = v_id + c1 r1 (p_id − x_id) + c2 r2 (p_gd − x_id)    (18)
x_id = x_id + v_id    (19)

where the constants c1 and c2 determine the balance between the influence of the individual's knowledge and that of the group (both set initially to 2), r1 and r2 are uniformly distributed random numbers defined by some upper limit that is a parameter of the algorithm, p_id and p_gd are the individual's previous best position and the group's previous best position, and x_id is the current position in the dimension considered.
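For illustration only, a compact sketch of these update rules (equations (18)-(19)) minimizing a simple test function; the swarm size, iteration count, and objective are arbitrary choices, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(x):
        # Simple benchmark objective to minimize.
        return np.sum(x ** 2, axis=-1)

    n_particles, dim, iters = 20, 2, 100
    c1 = c2 = 2.0
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                # velocities
    p_best = x.copy()                               # individual best positions
    p_best_val = sphere(x)
    g_best = p_best[np.argmin(p_best_val)]          # group best position

    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # equation (18)
        x = x + v                                                  # equation (19)
        val = sphere(x)
        improved = val < p_best_val
        p_best[improved], p_best_val[improved] = x[improved], val[improved]
        g_best = p_best[np.argmin(p_best_val)]

    print("best position:", g_best, "value:", p_best_val.min())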
This was found to suffer from instability caused by particles accelerating out of the solution space. Eberhart et al. [14] proposed a clamping scheme that limited the speed of each particle to a range [−v_max, +v_max], with v_max usually being somewhere between 0.1 and 1.0 times the maximum position of the particle. This reduced the possibility of particles flying out of the problem space.
The most problematic characteristic of PSO is its propensity to converge prematurely on early best solutions. Many strategies have been developed in attempts to overcome this, but by far the most popular are inertia and constriction. Shi and Eberhart [15] introduced the inertia term w as follows:

v_id = w v_id + c1 r1 (p_id − x_id) + c2 r2 (p_gd − x_id)    (20)
Eberhart and Shi [16] introduced an optimal strategy of initially setting w to 0.9 and reducing it linearly to 0.4, allowing initial exploration followed by acceleration toward an improved global optimum. They also showed that combining the two approaches by setting the inertia weight w to the constriction factor χ improved performance across a wide range of problems.
Clerc and Kennedy [17] proposed the idea of introducing a constriction factor χ, which alleviated the requirement to clamp the velocity and is applied as follows:

χ = 2 / | 2 − φ − sqrt(φ² − 4φ) |    (21)

where φ = c1 + c2, φ > 4.
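A small sketch of equation (21); c1 = c2 = 2.05 (so φ = 4.1) is a commonly used setting, assumed here for illustration. The resulting χ multiplies the whole velocity update.

    import math

    def constriction_factor(c1, c2):
        """Clerc-Kennedy constriction coefficient, valid for phi = c1 + c2 > 4."""
        phi = c1 + c2
        return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

    chi = constriction_factor(2.05, 2.05)
    print(chi)   # approximately 0.729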
Kennedy [18] revisited the constricted PSO and examined whether the added components were necessary, and whether any further components could be removed. Various experiments were performed with a view to paring down the process for further efficiency gains. To achieve this, a Gaussian PSO was developed. In this implementation the entire velocity vector is replaced by a random number generated around the mean (p_id + p_gd)/2 with a Gaussian distribution of width |p_id − p_gd| in each dimension d. This effectively means that the particles no longer 'fly' but are 'teleported'. Kennedy justified this departure on the grounds that it is the social aspect of the swarm that is more important to its effectiveness. This was empirically backed up, with the Gaussian-influenced swarms performing competitively.
Merwe et al. [19] investigated the application of PSO to clustering data vectors. Two algorithms were tested, namely a standard gbest PSO and a hybrid approach where the individuals of the swarm were seeded by the result of the K-means algorithm. The two PSO approaches were compared against K-means clustering, which showed that the PSO approaches have better convergence to lower quantization errors and, in general, larger inter-cluster distances and smaller intra-cluster distances. It was shown how PSO can be used to find the centroids of a user-specified number of clusters. The algorithm was then extended to use K-means clustering to seed the initial swarm; the second algorithm basically used PSO to refine the clusters formed by K-means. The new PSO algorithms were evaluated on six data sets and compared to the performance of K-means clustering. Results showed that both PSO clustering techniques have much potential.
Monson and Seppi [20] showed that particle motion could be further improved by changing the mechanism to a system influenced by Kalman filtering, in which the motion was entirely described by predictions produced by the filter. Once again, the justification for such a radical change to particle movement was the maintenance of the social aspect of PSO, which was achieved through the Kalman filter's sensor model. The approach, whilst providing very accurate optimization, was found to be computationally expensive compared with the canonical PSO.
Parsopoulos and Vrahatis [21] modified the constricted algorithm to harness the explorative behaviour of global search and the exploitative nature of a local neighbourhood scheme. To combine the two, two velocity updates were initially calculated:

G = χ[v + c1 r1 (p_i − x) + c2 r2 (p_g − x)]    (22)
L = χ[v + c1 r1 (p_i − x) + c2 r2 (p_l − x)]    (23)

where G and L are the global and local velocity updates respectively, p_g is the global best particle position and p_l is the particle's local neighbourhood best particle position. These two updates were then combined to form a unified velocity update U, which is then applied to the current position:

U = (1 − u) L + u G    (24)
x = x + U    (25)

where u is a unification factor that balances the global and local aspects of the search, and suggestions were given to add mutation-style influences to each in turn.
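A minimal sketch of the unification step in equations (24)-(25), assuming the global and local updates G and L have already been computed for one particle; all values are illustrative.

    import numpy as np

    def unified_update(x, G, L, u=0.5):
        """Combine global (G) and local (L) velocity updates with unification factor u,
        then move the particle (equations (24)-(25))."""
        U = (1.0 - u) * L + u * G
        return x + U, U

    x = np.array([1.0, -2.0])
    G = np.array([0.3, 0.1])      # illustrative global update
    L = np.array([-0.1, 0.2])     # illustrative local update
    x_new, U = unified_update(x, G, L, u=0.3)
    print(x_new, U)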
Kaewkamnerdpong and Bentley [22] introduced a more realistic model of particle perception to PSO, with the aim of making it
a viable option for applications that might otherwise suffer
from imperfect communication, such as robotics. In the
natural world, a social animal would often suffer imperfect
perception of the other animals in its sociometric
neighbourhood and must rely on information it receives via
limited perceptive acuity. To model this, the particles were
not allowed to communicate directly but only to observe
particles within their pre-set perceptive range (replicating the
stigmergic behaviour of some species, such as dancing
honeybees); this was also realistic in that the information was
regarded as more reliable where its source was closer. Each
particle was also allowed to observe the local area (i.e. sample
the local solution space). If there was a better solution nearby
it used that as its individual best position. The position update
algorithm for this method is dependent on the current state of
the particle and can be summarised as follows:
• If there is an observed better position nearby, then set the individual best (p_id) to it.
• If no neighbouring particles are perceived, then do not use a social component.
• If multiple particles are perceived, use their average current position as the group best (p_gd).
• If a single particle is perceived, then use its position as the group best (p_gd).
Habibi et al. [23] developed a hybrid PSO, with Ant Colony
(AC) and Simulated Annealing (SA). The AC algorithm
replaced the individual best element of PSO, whilst the
cooling process of SA was used to control the exploration of
the group best element, both of which were then applied
within the PSO framework (all random numbers being
generated using a Gaussian distribution function).
Experimentation with known TSP instances indicated that the
hybrid approach was capable of finding good approximations
efficiently.
Sheta [24] proposed the preliminary idea of using the particle swarm optimization (PSO) technique to help in solving the reliability growth modelling problem. The proposed approach was used to estimate the parameters of well-known reliability growth models such as the exponential model, the power model and S-shaped models. The results were promising.
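To connect this back to the paper's topic, here is a hedged sketch (not Sheta's actual implementation) of how a plain PSO with inertia, as in equations (18)-(20), could estimate the Goel-Okumoto parameters a and b by minimizing the sum of squared errors between observed and predicted cumulative failures; the data, bounds, and swarm settings are illustrative only.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative cumulative failure data (time, observed failures).
    t_obs = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
    m_obs = np.array([25, 44, 58, 69, 77, 83, 88, 91], dtype=float)

    def sse(params):
        """Sum of squared errors of the G-O mean value function m(t) = a(1 - exp(-b t))."""
        a, b = params[..., 0], params[..., 1]
        pred = a[:, None] * (1.0 - np.exp(-b[:, None] * t_obs))
        return np.sum((pred - m_obs) ** 2, axis=-1)

    # PSO with inertia (equation (20)); swarm settings are arbitrary choices.
    n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
    lo, hi = np.array([1.0, 1e-4]), np.array([500.0, 1.0])     # search bounds for (a, b)
    x = rng.uniform(lo, hi, (n, 2))
    v = np.zeros((n, 2))
    p_best, p_val = x.copy(), sse(x)
    g_best = p_best[np.argmin(p_val)]

    for _ in range(iters):
        r1, r2 = rng.random((n, 2)), rng.random((n, 2))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = sse(x)
        better = val < p_val
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[np.argmin(p_val)]

    print("PSO estimate: a =", g_best[0], "b =", g_best[1])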
According to Zhu et al. [25], particle swarm optimization (PSO), a swarm intelligence algorithm, had been successfully applied to many engineering optimization problems and had shown high search speed in these applications. However, as the dimension and the number of local optima of optimization problems increase, PSO and most existing improved PSO algorithms, such as the standard particle swarm optimization (SPSO) and the Gaussian particle swarm optimization (GPSO), are easily trapped in local optima. In this paper a novel algorithm based on SPSO, called Euclidean particle swarm optimization (EPSO), was proposed, which greatly improved the ability to escape from local optima. To confirm the effectiveness of EPSO, five benchmark functions were employed to examine it and to compare it with SPSO and GPSO. The experimental results showed that EPSO is significantly better than SPSO and GPSO, with the difference being especially obvious in higher-dimension problems.
Engelbrecht [26] proposed a heterogeneous PSO (HPSO), in which particles were allowed to follow different search behaviours selected from a behaviour pool, thereby efficiently addressing the exploration-exploitation trade-off problem. A preliminary empirical analysis showed that using heterogeneous swarms would give better results.
According to Qin [27], software reliability prediction classifies software modules as fault-prone and less fault-prone modules at an early stage of software development. Addressing the difficult problem of choosing parameters for the Support Vector Machine (SVM), this paper introduced Particle Swarm Optimization (PSO) to automatically optimize the parameters of the SVM, and constructed a software reliability prediction model based on PSO and SVM. Finally, the paper introduced the Principal Component Analysis (PCA) method to reduce the dimension of the experimental data, and fed the reduced data into the software reliability prediction model to run a simulation. The results showed that the proposed prediction model surpassed the traditional SVM in prediction performance.
According to Malhotra et al. [28], software quality includes many attributes, including the reliability of the software. Predicting the reliability of software in the early phases of development would enable software practitioners to develop robust and fault-tolerant systems. The purpose of the paper was to predict software reliability by estimating the parameters of Software Reliability Growth Models (SRGMs). SRGMs are mathematical models which generally reflect the properties of the process of fault detection during testing. Particle Swarm Optimization (PSO) has been applied to several optimization problems and has shown good performance; it is a popular machine learning algorithm under the category of Swarm Intelligence and is an evolutionary algorithm like the Genetic Algorithm (GA). In this paper the use of the PSO algorithm was proposed to estimate the parameters of the delayed S-shaped model, and the results were then compared with those of GA. The results were validated using data obtained from 16 projects. The estimates obtained from PSO had high predictive ability, reflected by low prediction errors, and the results obtained using PSO were better than those obtained from GA.
Jadon et al. [29] proposed an improved version of PSO called Self Adaptive Acceleration Factors in PSO (SAAFPSO) to balance exploration and exploitation. The constant acceleration factors used in standard PSO were converted into functions of a particle's fitness. If a particle is more fit, it gives more importance to the global best particle and less to itself, to avoid local convergence. In later stages particles become fitter, so all move towards the global best particle, which improves the convergence speed. Experiments were performed and compared with standard PSO and Artificial Bee Colony (ABC) on 14 unbiased benchmark optimization functions and one real-world engineering optimization problem (the pressure vessel design problem), and the results showed that the proposed SAAFPSO algorithm dominated the others.
According to Al gargoor and Saleem [30], due to the growth in demand for software with high reliability and safety, software reliability prediction is becoming more and more essential. Software reliability is a key part of software quality. Many models have been proposed for software reliability prediction, but none of them can give accurate predictions in all cases. So, in order to improve the accuracy of software reliability prediction, the proposed model combines software reliability models with neural networks (NN). The particle swarm optimization (PSO) algorithm was chosen and applied in the learning process to select the best architecture of the neural network. The applicability of the proposed model was demonstrated through three software failure data sets. The results showed that the proposed model has good prediction capability and is well suited for software reliability prediction.
5. CONCLUSION
It has been observed that solving optimization problems using evolutionary algorithms like particle swarm optimization is more reliable and computationally easier than using traditional techniques. Parameter estimation of software reliability models is also an optimization problem. Traditionally, MLE and LSE were used for parameter estimation, but evolutionary techniques such as Genetic Algorithms and Swarm Intelligence have shown better results than the traditional methods. The model parameters usually follow nonlinear relationships, which makes traditional parameter estimation techniques suffer many problems in finding the optimal parameters to tune the model for a better prediction; these problems can be overcome by stochastic search methods.
6. ACKNOWLEDGMENTS
I would like to express my sincere gratitude to my guide, Mr. Ajay Bidhan, who encouraged and guided me throughout this paper.
7. REFERENCES
[1] S. Gerhart and L. Yelowitz, "Observations of fallibility in applications of modern programming methodologies", IEEE Transactions on Software Engineering, pp. 195-207, 1976.
[2] ANSI/IEEE, "Standard Glossary of Software Engineering Terminology", ANSI/IEEE, 1991.
[3] M. R. Lyu (Ed.), Handbook of Software Reliability Engineering, IEEE Computer Society Press, 1996.
[4] A. Wood, "Predicting software reliability", IEEE Computer, vol. 29, no. 11, pp. 69-77, 1996.
[5] A. L. Goel and K. Okumoto, "Time-dependent error detection rate model for software reliability and other performance measures", IEEE Transactions on Reliability, pp. 206-211, 1979.
[6] S. Yamada, M. Ohba, and S. Osaki, "S-shaped reliability growth modeling for software error detection", IEEE Transactions on Reliability, pp. 475-484, 1983.
[7] J. D. Musa, "A theory of software reliability and its application", IEEE Transactions on Software Engineering, vol. SE-1, pp. 312-327, 1971.
[8] G. Barnard and T. Bayes, "Studies in the history of probability and statistics: IX. Thomas Bayes's essay towards solving a problem in the doctrine of chances", Biometrika, vol. 45, pp. 293-315, 1985.
[9] G. J. Knafl, "Solving maximum likelihood equations for two-parameter software reliability models using grouped data", Third International Symposium on Software Reliability Engineering, pp. 205-213, 1992.
[10] A. L. Goel, "Software reliability models: assumptions, limitations, and applicability", IEEE Transactions on Software Engineering, pp. 1411-1423, 1985.
[11] G. J. Knafl and J. Morgan, "Solving ML equations for 2-parameter Poisson-process models for ungrouped software-failure data", IEEE Transactions on Reliability, pp. 42-53, 1996.
[12] S. A. Hossain and R. C. Dahiya, "Estimating the parameters of a non-homogeneous Poisson-process model for software reliability", IEEE Transactions on Reliability, pp. 604-612, 1993.
[13] J. Kennedy and R. C. Eberhart, "Particle swarm optimization", Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.
[14] R. C. Eberhart, P. Simpson, and R. Dobbins, Computational Intelligence PC Tools, AP Professional, San Diego, CA, pp. 212-226, 1996.
[15] Y. Shi and R. C. Eberhart, "A modified particle swarm optimizer", Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 69-73, 1998.
[16] R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization", Proceedings of the IEEE Congress on Evolutionary Computation, San Diego, pp. 84-88, 2000.
[17] M. Clerc and J. Kennedy, "The particle swarm: explosion, stability and convergence in a multi-dimensional complex space", IEEE Transactions on Evolutionary Computation, pp. 58-73, 2002.
[18] J. Kennedy, "Bare bones particle swarms", Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, Indiana, USA, pp. 80-87, 2003.
[19] D. W. van der Merwe and A. P. Engelbrecht, "Data clustering using particle swarm optimization", Congress on Evolutionary Computation, vol. 1, IEEE, 2003.
[20] C. K. Monson and K. D. Seppi, "The Kalman swarm", Proceedings of the Genetic and Evolutionary Computation Conference, Seattle, Washington, 2004.
[21] K. E. Parsopoulos and M. N. Vrahatis, "A unified particle swarm optimization scheme", Proceedings of the International Conference on Computational Methods in Sciences and Engineering, pp. 868-873, 2004.
[22] B. Kaewkamnerdpong and P. Bentley, "Perceptive particle swarm optimization", Proceedings of the Seventh International Conference on Adaptive and Natural Computing Algorithms, 2005.
[23] J. Habibi, S. A. Zonouz, and M. Saneei, "A hybrid PS-based optimization algorithm for solving traveling salesman problem", IEEE Symposium on Frontiers in Networking with Applications, pp. 18-20, 2006.
[24] A. Sheta, "Reliability growth modeling for software fault detection using particle swarm optimization", IEEE Congress on Evolutionary Computation, IEEE, 2006.
[25] H. Zhu, C. Pu, K. Eguchi, and J. Gu, "Euclidean particle swarm optimization", Second International Conference on Intelligent Networks and Intelligent Systems, IEEE, 1-3, 2009.
[26] A. P. Engelbrecht, "Heterogeneous particle swarm optimization", Springer Berlin Heidelberg, pp. 191-202, 2010.
[27] L.-N. Qin, "Software reliability prediction model based on PSO and SVM", International Conference on Consumer Electronics, Communications and Networks (CECNet), 16-18, 2011.
[28] R. Malhotra and A. Negi, "Reliability modeling using Particle Swarm Optimization", International Journal of System Assurance Engineering and Management, 2013.
[29] S. S. Jadon et al., "Self Adaptive Acceleration Factor in Particle Swarm Optimization", Proceedings of the Seventh International Conference on Bio-Inspired Computing: Theories and Applications, Springer India, 2013.
[30] R. G. Al gargoor and N. N. Saleem, "Software Reliability Prediction Using Artificial Techniques", IJCSI, vol. 10, issue 4, no. 2, 2013.