This document proposes a new software reliability growth model (SRGM) that incorporates both imperfect debugging and change points. It presents differential equations to derive a mean value function for the proposed model that accounts for faults being introduced during debugging and changes in the fault detection rate over time. The conditional reliability function is also derived. Numerical examples using real and simulated failure data are provided to evaluate the proposed model against existing SRGMs. The model could be extended to consider multiple fault types.
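As a rough illustration of the kind of model described above, the following sketch integrates a mean value function with imperfect debugging and a single change point in the fault detection rate using Euler's method. The specific rate function, the fault-introduction term, and every numeric value here are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch (not the paper's exact model): an NHPP mean value function
# with imperfect debugging (a fraction alpha of new faults introduced per
# removal) and a single change point tau in the fault detection rate,
# integrated by Euler's method.  All parameter values are assumed.

def mean_value_function(t_end, a=100.0, alpha=0.1, b1=0.05, b2=0.1,
                        tau=20.0, dt=0.01):
    """Integrate dm/dt = b(t) * (a(t) - m(t)), with a(t) = a + alpha*m(t)."""
    m, t = 0.0, 0.0
    while t < t_end:
        b = b1 if t < tau else b2          # detection rate changes at tau
        dm = b * ((a + alpha * m) - m)     # imperfect debugging: a(t) grows
        m += dm * dt
        t += dt
    return m

# Expected number of detected faults grows toward a / (1 - alpha).
print(mean_value_function(200.0))
```

For an NHPP, the conditional reliability mentioned in the abstract would then follow as R(x | t) = exp(-(m(t + x) - m(t))).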
Pareto Type II Based Software Reliability Growth Model (Waqas Tariq)
The past four decades have seen the formulation of several software reliability growth models to predict the reliability and error content of software systems. This paper presents the Pareto Type II model as a software reliability growth model, together with expressions for various reliability performance measures. The theory of probability, distribution functions, and probability distributions plays a major role in software reliability model building. This paper presents estimation procedures to assess the reliability of a software system using the Pareto distribution, based on a Non-Homogeneous Poisson Process (NHPP).
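A minimal sketch of what such a model can look like, assuming the common finite-failure NHPP construction m(t) = a·F(t) with F the Pareto Type II (Lomax) distribution function; the parameter values are invented for illustration:

```python
# Hedged sketch: a finite-failure NHPP whose mean value function is built
# from the Pareto Type II (Lomax) distribution function
# F(t) = 1 - (c / (c + t))**k.  Parameters a, c, k are assumptions.

def pareto2_mvf(t, a=100.0, c=10.0, k=2.0):
    """Expected cumulative number of failures by time t."""
    return a * (1.0 - (c / (c + t)) ** k)

def pareto2_intensity(t, a=100.0, c=10.0, k=2.0):
    """Failure intensity lambda(t) = dm/dt."""
    return a * k * c ** k / (c + t) ** (k + 1)

print(pareto2_mvf(10.0), pareto2_intensity(0.0))
```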
Software Process Control on Ungrouped Data: Log-Power Model (Waqas Tariq)
Statistical Process Control (SPC) is the best choice for monitoring the software reliability process. It assists the software development team in identifying the actions to be taken during the software failure process and hence assures better software reliability. In this paper we propose a control mechanism based on cumulative (ungrouped) failure observations, using the infinite-failure mean value function of the Log-Power model, which is based on a Non-Homogeneous Poisson Process (NHPP). The Maximum Likelihood Estimation (MLE) approach is used to estimate the unknown parameters of the model.
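One way such a control mechanism can be sketched, assuming the usual Log-Power mean value function m(t) = a·(log(1 + t))^b and the 0.99865 / 0.00135 probability-style limits that mirror 3-sigma limits; all numeric values are illustrative assumptions, not the paper's fitted parameters:

```python
# Hedged sketch: placing successive differences of the Log-Power mean value
# function on a control chart.  Parameters and limits are assumptions.
import math

def log_power_mvf(t, a=2.0, b=1.5):
    return a * math.log(1.0 + t) ** b

def chart_point(t_prev, t_curr, a=2.0, b=1.5):
    """Successive difference of m(t); points outside the limits signal an
    out-of-control failure process."""
    return log_power_mvf(t_curr, a, b) - log_power_mvf(t_prev, a, b)

UCL, LCL = 0.99865, 0.00135        # probability limits (centre line 0.5)
times = [5.0, 12.0, 30.0, 31.0]    # cumulative failure times (assumed)
diffs = [chart_point(u, v) for u, v in zip(times, times[1:])]
flags = [d < LCL or d > UCL for d in diffs]
print(diffs, flags)
```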
DETECTION OF RELIABLE SOFTWARE USING SPRT ON TIME DOMAIN DATA (IJCSEA Journal)
In classical hypothesis testing, large volumes of data must be collected before conclusions can be drawn, which may take considerable time. Sequential analysis can instead be adopted in order to decide very quickly whether the developed software is reliable or unreliable. The procedure adopted for this is the Sequential Probability Ratio Test (SPRT). In the present paper we study the performance of the SPRT on time-domain data using the Weibull model and analyze the results by applying it to five data sets. The parameters are estimated using Maximum Likelihood Estimation.
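Wald's SPRT can be sketched as follows; for brevity this uses exponential inter-failure likelihoods rather than the Weibull model of the paper, and the rates and error bounds are assumptions:

```python
# Hedged sketch of Wald's Sequential Probability Ratio Test on inter-failure
# times.  H0: failure rate lam0 (reliable) vs H1: rate lam1 (unreliable).
import math

def sprt(times, lam0=0.5, lam1=2.0, alpha=0.05, beta=0.05):
    """Return 'reliable', 'unreliable', or 'continue' after the given data."""
    upper = math.log((1 - beta) / alpha)    # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))    # accept H0 at or below this
    llr = 0.0
    for t in times:
        # log of f1(t)/f0(t) for exponential densities
        llr += math.log(lam1 / lam0) - (lam1 - lam0) * t
        if llr >= upper:
            return "unreliable"
        if llr <= lower:
            return "reliable"
    return "continue"

print(sprt([3.0, 2.5, 4.0]))   # long gaps between failures
```

Long inter-failure times drive the log-likelihood ratio down toward the "reliable" boundary after very few observations, which is the speed advantage the abstract describes.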
Optimal Selection of Software Reliability Growth Model - A Study (IJEEE)
People use software, and sometimes software fails, so we try to quantify software reliability and to understand how and why it fails. For this purpose many software reliability models have been developed to estimate the defects remaining in software when it is delivered to the customer. Although many models now exist, the main issue remains largely unsolved: how to calculate software reliability efficiently. We cannot use one model in every circumstance because no single model can completely represent all features. This paper describes the circumstances and criteria under which a particular model can be selected.
Software reliability models (SRMs) are very important for estimating and predicting software reliability in the testing/debugging phase. The contributions of this paper are as follows. First, a historical review of the Gompertz SRM is given. Then, based on several software failure data sets, the parameters of the Gompertz software reliability model are estimated using two estimation methods, traditional maximum likelihood and least squares. The estimation methods are evaluated using the MSE and R-squared criteria. The results show that least-squares estimation is an attractive method in terms of predictive performance and can be used when the maximum likelihood method fails to give good prediction results.
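A least-squares fit of a Gompertz curve evaluated with the MSE and R-squared criteria can be sketched as below; the Gompertz parameterization, the failure data, and the coarse grid search are all illustrative assumptions rather than the paper's procedure.

```python
# Hedged sketch: least-squares fitting of a Gompertz growth curve
# m(t) = a * exp(-b * exp(-c * t)) to cumulative failure counts by a coarse
# grid search, scored with MSE and R-squared.  Data and grids are synthetic.
import math

data = [(1, 12), (2, 25), (3, 38), (4, 48), (5, 55), (6, 59), (7, 61)]

def gompertz(t, a, b, c):
    return a * math.exp(-b * math.exp(-c * t))

def sse(a, b, c):
    return sum((y - gompertz(t, a, b, c)) ** 2 for t, y in data)

best = min(((a, b, c) for a in range(55, 76)
                      for b in [1.5, 2.0, 2.5, 3.0]
                      for c in [0.3, 0.4, 0.5, 0.6]),
           key=lambda p: sse(*p))

mse = sse(*best) / len(data)
ybar = sum(y for _, y in data) / len(data)
r2 = 1 - sse(*best) / sum((y - ybar) ** 2 for _, y in data)
print(best, round(mse, 2), round(r2, 4))
```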
Software Reliability Growth Model with Logistic-Exponential Testing-Effort F... (IDES Editor)
Software reliability is one of the important factors of software quality. Before software is delivered to the market it is thoroughly checked and errors are removed. Every software company wants to develop software that is error free. Software reliability growth models help the software industry to develop software that is error free and reliable. In this paper an analysis is carried out by incorporating a logistic-exponential testing-effort function into an NHPP software reliability growth model, and the corresponding release policy is also examined. Experiments are performed on real datasets; the parameters are estimated, and the model is observed to fit the datasets well.
Parameter Estimation of Software Reliability Growth Models Using Simulated An... (Editor IJCATR)
The parameter estimation of the Goel-Okumoto model is performed using simulated annealing. The Goel-Okumoto model is based on the exponential model and is a simple non-homogeneous Poisson process (NHPP) model. Simulated annealing is a heuristic optimization technique that provides a way to escape local optima. The data set is fitted using the simulated annealing technique. SA is a stochastic algorithm with better performance than the Genetic Algorithm (GA); it depends on the specification of the neighbourhood structure of a state space and on the parameter settings of its cooling schedule.
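The approach can be sketched as follows, assuming the standard Goel-Okumoto mean value function m(t) = a·(1 - e^(-bt)) and a sum-of-squared-errors objective; the data, neighbourhood, and cooling schedule are invented for illustration.

```python
# Hedged sketch: estimating Goel-Okumoto parameters (a, b) by simulated
# annealing on the sum of squared errors.  Data and schedule are assumptions.
import math, random

random.seed(1)
data = [(10, 27), (20, 43), (30, 54), (40, 64), (50, 72)]

def sse(a, b):
    return sum((y - a * (1 - math.exp(-b * t))) ** 2 for t, y in data)

def anneal(a=50.0, b=0.01, temp=100.0, cooling=0.95, steps=2000):
    best_a, best_b, best_e = a, b, sse(a, b)
    e = best_e
    for _ in range(steps):
        # neighbour: small random perturbation of the current state
        na = a + random.uniform(-2.0, 2.0)
        nb = max(1e-4, b + random.uniform(-0.005, 0.005))
        ne = sse(na, nb)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if ne < e or random.random() < math.exp((e - ne) / temp):
            a, b, e = na, nb, ne
            if e < best_e:
                best_a, best_b, best_e = a, b, e
        temp *= cooling               # geometric cooling schedule
    return best_a, best_b, best_e

a_hat, b_hat, err = anneal()
print(round(a_hat, 1), round(b_hat, 4), round(err, 2))
```

The uphill-acceptance step is what lets SA escape local optima; as the temperature cools, the search hardens into pure hill climbing.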
OPTIMAL GLOBAL THRESHOLD ESTIMATION USING STATISTICAL CHANGE-POINT DETECTION (sipij)
The aim of this paper is to reformulate the global image thresholding problem as a well-founded statistical method known as change-point detection (CPD). Our proposed CPD thresholding algorithm does not assume any prior statistical distribution of background and object grey levels. Further, the method is less influenced by outliers owing to our judicious derivation of a robust criterion function based on the Kullback-Leibler (KL) divergence measure. Experimental results show the efficacy of the proposed method compared to other popular methods available for global image thresholding. In this paper we also propose a performance criterion for the comparison of thresholding algorithms; this criterion does not depend on any ground-truth image. We have used it to compare the results of the proposed thresholding algorithm with the most cited global thresholding algorithms in the literature.
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS (Tanya Makkar)
What is an algorithm, its classification and its complexity
Time complexity
Time-space trade-off
Asymptotic time complexity of an algorithm and its notation
Why do we need to classify the running time of algorithms into growth rates?
Big-O notation and an example
Big-omega notation and an example
Big-theta notation and an example
Best among the three notations
Finding the complexity f(n) for certain cases:
1. Average case
2. Best case
3. Worst case
Searching
Sorting
Complexity of sorting
Conclusion
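The outline above can be made concrete with a small timing experiment comparing an O(n^2) sort against an O(n log n) sort; the input size is an arbitrary illustrative choice.

```python
# Sketch: measuring the running time of two sorting algorithms to observe
# O(n^2) versus O(n log n) growth on the same random input.
import random, time

def bubble_sort(a):
    a = list(a)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def merge_sort(a):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [random.randrange(10**6) for _ in range(2000)]
for sort in (bubble_sort, merge_sort):
    start = time.perf_counter()
    result = sort(data)
    print(sort.__name__, round(time.perf_counter() - start, 4), "s")
    assert result == sorted(data)
```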
Optimization is considered to be one of the pillars of statistical learning and also plays a major role in the design and development of intelligent systems such as search engines, recommender systems, and speech and image recognition software. Machine learning is the study that gives computers the ability to learn, and to think, without being explicitly programmed. A computer is said to learn from experience with respect to a specified task and a performance measure related to that task. Machine learning algorithms are applied to problems to reduce effort; they are used for manipulating data and predicting the output for new data with high precision and low uncertainty. Optimization algorithms are used to make rational decisions in an environment of uncertainty and imprecision. In this paper a methodology is presented for using an efficient optimization algorithm as an alternative to the gradient descent algorithm commonly used in machine learning.
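For reference, the gradient descent baseline that the paragraph discusses alternatives to can be sketched on a one-variable linear model; the data and learning rate are illustrative assumptions.

```python
# Hedged sketch: plain gradient descent fitting y = w*x + b by minimizing
# mean squared error.  Data and hyperparameters are assumptions.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]          # roughly y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # gradients of MSE = mean((w*x + b - y)**2)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))    # close to the generating slope/intercept
```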
Discretizing of linear systems with time-delay Using method of Euler’s and Tu... (IJERA Editor)
Delays deteriorate control performance and can destabilize the overall system in the theory of discrete-time signals and dynamic systems. Whenever a computer is used in measurement, signal processing, or control applications, the data as seen from the computer and the systems involved are naturally discrete-time, because a computer executes program code at discrete points of time. The theory of discrete-time dynamic signals and systems is useful in the design and analysis of control systems, signal filters, state estimators, and model estimation from time series of process data (system identification). In this paper, a new approximate discretization method and digital design for control systems with delays is proposed. The system is transformed to a discrete-time model with time delays. To implement the digital modeling, we use the z-transfer function matrix, a useful model type for discrete-time systems, analogous to the Laplace transform for continuous-time systems. The z-transfer function matrix is employed to obtain an extended discrete-time model. The proposed method can closely approximate the step response of the original continuous time-delayed control system by choosing various energy-loss levels. An illustrative example is simulated to demonstrate the effectiveness of the developed method.
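The simplest of the discretization methods named in the title, forward Euler, can be sketched for a scalar system with input delay; the coefficients and sample time below are illustrative assumptions, not the paper's example.

```python
# Hedged sketch: forward-Euler discretization of dx/dt = a*x(t) + b*u(t - d).
# With sample time T the delay becomes an integer number of steps:
#   x[k+1] = x[k] + T * (a*x[k] + b*u[k - d_steps])

def simulate(a=-1.0, b=1.0, delay=0.5, T=0.01, t_end=5.0):
    d_steps = round(delay / T)           # delay expressed in samples
    n = int(t_end / T)
    u = [1.0] * n                        # unit step input
    x = [0.0] * (n + 1)
    for k in range(n):
        ud = u[k - d_steps] if k >= d_steps else 0.0
        x[k + 1] = x[k] + T * (a * x[k] + b * ud)
    return x

x = simulate()
print(round(x[-1], 3))   # approaches the steady state -b/a once past the delay
```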
Traditional interprocedural data-flow analysis is performed on whole programs; however, such whole-program analysis is not feasible for large or incomplete programs. We propose fragment data-flow analysis as an alternative approach which computes data-flow information for a specific program fragment. The analysis is parameterized by the additional information available about the rest of the program. We describe two frameworks for interprocedural flow-sensitive fragment analysis, the relationship between fragment analysis and whole-program analysis, and the requirements ensuring fragment analysis safety and feasibility. We propose an application of fragment analysis as a second analysis phase after an inexpensive flow-insensitive whole-program analysis, in order to obtain better information for important program fragments. We also describe the design of two fragment analyses derived from an existing whole-program flow- and context-sensitive pointer alias analysis for C programs and present an empirical evaluation of their cost and precision. Our experiments show evidence of significant precision improvements obtainable at a practical cost.
Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat... (Waqas Tariq)
Software cost estimation deals with the financial and strategic planning of software projects. Effectively controlling the expensive investment of software development is of paramount importance. The limitation of algorithmic effort prediction models is their inability to cope with the uncertainty and imprecision surrounding software projects at the early development stage. More recently, attention has turned to a variety of machine learning methods, and soft computing in particular, to predict software development effort. Fuzzy logic is one such technique which can cope with uncertainty. In the present paper, a Particle Swarm Optimization Algorithm (PSOA) is presented to fine-tune the fuzzy estimates for the development of software projects. The efficacy of the developed models is tested on 10 NASA software projects, 18 NASA projects, and the COCOMO 81 database on the basis of various criteria for the assessment of software cost estimation models. A comparison of all the models shows that the developed models provide better estimates.
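The PSO machinery itself can be sketched on a toy cost surface standing in for the fuzzy cost-estimate tuning described above; the swarm size, inertia weight, and test function are illustrative assumptions.

```python
# Hedged sketch of Particle Swarm Optimization minimizing a simple quadratic
# cost surface.  All hyperparameters are assumptions.
import random

random.seed(7)

def cost(p):                      # minimum at (3, -2)
    x, y = p
    return (x - 3.0) ** 2 + (y + 2.0) ** 2

def pso(n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [list(p) for p in pos]            # personal bests
    gbest = min(pbest, key=cost)              # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                # velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = list(pos[i])
        gbest = min(pbest, key=cost)
    return gbest

best = pso()
print([round(v, 3) for v in best])
```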
A report on designing a model for improving CPU Scheduling by using Machine L... (MuskanRath1)
Disclaimer: Please let me know in case some of the portions of the article match your research. I would include the link to your research in the description section of my article.
Description:
Our paper proposes a model for improving CPU scheduling on a uniprocessor system. The model is implemented in a low-level (assembly) language, and LINUX is used for the implementation because it is an open-source environment whose kernel is editable.
There are several methods to predict the length of CPU bursts, such as the exponential averaging method; however, these methods may not give accurate or reliable predicted values. In this paper, we propose a Machine Learning (ML) based approach to estimate the length of the CPU bursts for processes. We make use of Bayesian theory in our model as a classifier tool that decides which process in the ready queue executes first. The proposed approach selects the most significant attributes of the process using feature selection techniques and then predicts the CPU burst for the process in the grid. Furthermore, applying attribute selection techniques improves performance in terms of space, time, and estimation.
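The exponential averaging baseline the paragraph contrasts the ML approach with is simple enough to sketch directly; the initial guess and smoothing factor are illustrative values.

```python
# Sketch of exponential averaging for CPU burst prediction:
#   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
# where t_n is the observed burst and tau_n the previous prediction.

def predict_bursts(observed, alpha=0.5, tau0=10.0):
    """Return the sequence of predictions, one per upcoming burst."""
    predictions = [tau0]
    for t in observed:
        predictions.append(alpha * t + (1 - alpha) * predictions[-1])
    return predictions

# A process whose bursts shorten over time: the predictions lag behind the
# trend, which is the weakness that motivates a learned predictor.
print(predict_bursts([8.0, 6.0, 4.0, 4.0]))
```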
Similar to SRGM with Imperfect Debugging by Genetic Algorithms
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
SRGM with Imperfect Debugging by Genetic Algorithms

Dr. R. Satya Prasad, Associate Professor, Dept. of Computer Science & Engineering, Acharya Nagarjuna University, Nagarjuna Nagar-522524 (profrsp@gmail.com)
O. Naga Raju, Asst. Professor, Dept. of Computer Science & Engineering, Acharya Nagarjuna University, Nagarjuna Nagar-522524 (onrajunrt@gmail.com)
Prof. R. R. L. Kantam, Professor, Dept. of Statistics, Acharya Nagarjuna University, Nagarjuna Nagar-522524
Abstract
Computer software has progressively become an essential component of modern technologies. Penalty costs resulting from software failures are often more considerable than software development costs. Debugging decreases the error content but increases the software development costs. To improve software quality, software reliability engineering plays an important role in many aspects throughout the software life cycle. In this paper, we incorporate both imperfect debugging and the change-point problem into a software reliability growth model (SRGM) based on the well-known exponential distribution. Parameter estimation is studied, and the proposed model is compared with some existing models in the literature and is found to be better.
Key words: Software Reliability, NHPP, Mean value function, Genetic Algorithms
1. Introduction:
We are witnessing our increasing dependence on software systems as they become more and more complex, and thus harder to develop and maintain. Software systems are present in many safety-critical applications such as power plants, health care systems, air-traffic control, etc., and they all require high quality, reliability and safety.
Software reliability is the probability that software will not cause the failure of a product for a specified period of time. This probability is a function of the inputs, as well as a function of the existence of faults in the software.
Various NHPP SRGMs have been studied under various assumptions. Many of the SRGMs assume that each time a failure occurs, the fault that caused it can be immediately removed and no new faults are introduced, which is usually called perfect debugging. Imperfect debugging models have been proposed that relax this assumption (Ohba, 1984; Pham, 1993).
The other assumption of many NHPP SRGMs is that each failure occurs independently and randomly in time according to the same distribution during the fault detection process (Musa et al., 1987). However, in more realistic situations the failure distribution can be affected by many factors, such as the running environment, testing strategy and resource allocation. Once these factors change during the software testing phase, the software failure intensity function may increase or decrease non-monotonically. This is identified as a change-point problem (Zhao, 1993). If a change point exists, its effect should be considered in software reliability estimation; otherwise the estimators of the model cannot express the factual software reliability behavior.
2. A General NHPP Model:
Let {N(t), t ≥ 0} be a counting process representing the cumulative number of software failures by time t. The N(t) process is shown to be an NHPP with a mean value function m(t). The mean value function represents the s-expected number of software failures by time t. Goel and Okumoto (1979) assume that the numbers of software failures during non-overlapping time intervals are s-independent and that the software failure intensity λ(t) is proportional to the residual fault content. Thus m(t) can be obtained by solving the following differential equation:

λ(t) = dm(t)/dt = b[a − m(t)], …………………… (2.1)
where ‘a’ denotes the initial number of faults contained in a program and b represents the fault detection rate. The solution of Eq. (2.1) is

m(t) = a(1 − e^{−bt}). …………………… (2.2)
In software reliability, the initial number of faults and the fault detection
rate are always unknown. The maximum likelihood technique can be used to
evaluate the unknown parameters.
The conditional software reliability, R(x/t), is defined as the probability that no failure is observed in the time period (t, t+x], given that the last failure occurred at a time point t (t ≥ 0, x > 0). Given the mean value function m(t), the conditional software reliability can be shown to be

R(x/t) = e^{−[m(t+x) − m(t)]} = exp{−a e^{−bt}(1 − e^{−bx})}. ……………. (2.3)
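As a quick numerical illustration of Eqs. (2.2) and (2.3), the sketch below evaluates the mean value function and the conditional reliability directly (the parameter values a = 100 and b = 0.1 are hypothetical, chosen only for demonstration):

```python
import math

def m(t, a, b):
    """G-O mean value function, Eq. (2.2): expected failures by time t."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a, b):
    """Conditional reliability, Eq. (2.3): P(no failure in (t, t+x])."""
    return math.exp(-(m(t + x, a, b) - m(t, a, b)))

# Hypothetical parameters, for illustration only.
a, b = 100.0, 0.1
print(m(10.0, a, b))                 # expected faults detected by t = 10
print(reliability(1.0, 10.0, a, b))  # reliability over the next unit of time
```

Note that `reliability` uses only the generic form exp{−[m(t+x) − m(t)]}, so the same function works for any mean value function m(t).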
For a more general NHPP SRGM, we can extend and modify Eq. (2.1) as follows:

λ(t) = dm(t)/dt = b(t)[a(t) − m(t)], ……………………. (2.4)

where a(t) is the time-dependent fault content function, which includes the initial and introduced faults in the program, and b(t) is the time-dependent fault detection rate. One can define a(t) and b(t) to yield a more or less complex analytic solution for m(t). Various choices of a(t) and b(t) express different assumptions about the fault detection process (Pham et al., 1999).
3. Imperfect-software-debugging models.
Following the general NHPP model, a constant a(t) implies the perfect debugging assumption, i.e., no new faults are introduced during the debugging process. Pham (1993) introduced an NHPP SRGM that is subject to imperfect debugging. He assumed that when detected faults are removed, there is a possibility of introducing new faults at a constant rate β. Letting a(t) be the number of faults to be eventually detected (denoted by a) plus the number of new faults introduced into the program by time t, the mean value function m(t) can be given as the solution of the following system of differential equations:
∂m(t)/∂t = b[a(t) − m(t)],  ∂a(t)/∂t = β ∂m(t)/∂t, ………………… (3.1)

with a(0) = a, m(0) = 0, where a is the number of faults to be eventually detected. Solving the above equations, we obtain the mean value function and the conditional software reliability, respectively, as follows:

m(t) = [a/(1 − β)] (1 − e^{−(1−β)bt}), ……………………..(3.2)

R(x/t) = exp{−m(x) e^{−(1−β)bt}}.
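The following sketch (parameter values are illustrative assumptions only) evaluates Eq. (3.2) and checks the identity m(t+x) − m(t) = m(x)·e^{−(1−β)bt} that underlies the reliability expression:

```python
import math

def m_imperfect(t, a, b, beta):
    """Eq. (3.2): mean value function under imperfect debugging."""
    return a / (1.0 - beta) * (1.0 - math.exp(-(1.0 - beta) * b * t))

# Illustrative values; beta is the fault-introduction rate.
a, b, beta = 100.0, 0.1, 0.05
t, x = 10.0, 2.0

# The expected eventual total exceeds a because new faults are introduced:
print(m_imperfect(1e6, a, b, beta))  # approaches a / (1 - beta)

# Identity behind R(x/t) = exp{-m(x) e^{-(1-beta) b t}}:
lhs = m_imperfect(t + x, a, b, beta) - m_imperfect(t, a, b, beta)
rhs = m_imperfect(x, a, b, beta) * math.exp(-(1.0 - beta) * b * t)
print(abs(lhs - rhs))  # ~0
```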
4. An NHPP model with change-point.
Many SRGMs suppose the fault detection rate is a constant or a monotonically increasing function, so the failure intensity is expected to be a continuous function of time. For example, Goel and Okumoto (1979) presented the G-O model with a constant fault detection rate, and Yamada et al. (1983) modified the G-O model with an increasing fault detection rate function, which represents a debugging process with a learning phenomenon. Both models were proposed with continuous failure intensity functions. As mentioned earlier, the fault detection rate can be affected by many factors such as the testing strategy and resource allocation. During a software testing process, there is a possibility that the underlying fault detection rate function changes at some time moment τ, called the ‘change-point’. Considering the change-point problem in software reliability models is intended to bring them closer to reality.
Zhao (1993) modified the Jelinski and Moranda (1972) model to estimate the location of the change-point and the failure intensity function. He assumed that the observed inter-failure times follow the same distribution F at the beginning; after the τ-th failure is observed, the remaining items have distribution G. Distributions F and G are from the same parametric family.
Chang (1997) considered change-point problems in NHPP SRGMs, where the parameters of the NHPP change-point models are estimated by the weighted least squares method. Let the parameter τ be the change point, considered unknown and to be estimated from the data. The fault detection rate function is defined as
b(t) = b₁, when 0 ≤ t ≤ τ,
b(t) = b₂, when t > τ.
By these assumptions, the mean value function m(t) and the intensity function λ(t) can be derived as

m(t) = a(1 − e^{−b₁t}), when 0 ≤ t ≤ τ,
m(t) = a(1 − e^{−b₁τ − b₂(t−τ)}), when t > τ, ……………………… (4.1)

λ(t) = dm(t)/dt = a b₁ e^{−b₁t}, when 0 ≤ t ≤ τ,
λ(t) = a b₂ e^{−b₁τ − b₂(t−τ)}, when t > τ.
Introducing the above mean value function, the conditional software reliability function for any time x given t can be shown to be

R(x/t) = exp{−[m(t+x) − m(t)]}

= exp{−a(e^{−b₁t} − e^{−b₁(t+x)})}, when 0 ≤ t ≤ t+x ≤ τ,
= exp{−a(e^{−b₁t} − e^{−b₁τ − b₂(t+x−τ)})}, when 0 ≤ t ≤ τ < t+x,
= exp{−a(e^{−b₁τ − b₂(t−τ)} − e^{−b₁τ − b₂(t+x−τ)})}, when τ < t. …………. (4.2)
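The change-point mean value function of Eq. (4.1) can be sketched directly; the snippet below (with made-up parameter values) also shows that m(t) stays continuous at the change-point τ even though the intensity jumps there:

```python
import math

def m_cp(t, a, b1, b2, tau):
    """Eq. (4.1): change-point mean value function."""
    if t <= tau:
        return a * (1.0 - math.exp(-b1 * t))
    return a * (1.0 - math.exp(-b1 * tau - b2 * (t - tau)))

def rel_cp(x, t, a, b1, b2, tau):
    """Eq. (4.2) via the generic form exp{-[m(t+x) - m(t)]}."""
    return math.exp(-(m_cp(t + x, a, b1, b2, tau) - m_cp(t, a, b1, b2, tau)))

# Illustrative parameters only.
a, b1, b2, tau = 100.0, 0.05, 0.15, 20.0

# m(t) is continuous at the change-point:
print(m_cp(tau, a, b1, b2, tau), m_cp(tau + 1e-9, a, b1, b2, tau))
print(rel_cp(1.0, 25.0, a, b1, b2, tau))
```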
5. Imperfect-software-debugging model with change-point.
To consider the NHPP SRGM that integrates imperfect debugging with the change-point problem, the following assumptions are made:
(a) When detected faults are removed at time t, it is possible to introduce new faults with introduction rate β(t), where

β(t) = β₁, when 0 ≤ t ≤ τ,
β(t) = β₂, when t > τ (τ is the change-point).
(b) The fault detection rate, represented as the following step function:

b(t) = b₁, when 0 ≤ t ≤ τ,
b(t) = b₂, when t > τ.
(c) An NHPP models the fault detection phenomenon in the software system.
In earlier studies, the parameter τ is considered unknown and is estimated from the collected failure data (Zhao, 1993; Hinkley, 1970; Chang, 1997). Because the testing strategy and resource allocation can be tracked throughout the fault detection process, it may be more reasonable to consider the change point τ as given. Therefore, we assume (though it is not necessary) that the parameter τ is located at a certain time point and is known in advance. Under these assumptions, one can derive a new set of differential equations to obtain the new mean value function:
∂m(t)/∂t = b(t)[a(t) − m(t)],  ∂a(t)/∂t = β(t) ∂m(t)/∂t,

with a(0) = a, m(0) = 0.
Solving the differential equations under assumptions (a) and (b) yields

m(t) = [a/(1 − β₁)] (1 − e^{−(1−β₁)b₁t}), when 0 ≤ t ≤ τ,
m(t) = [a/(1 − β₁)] (1 − e^{−(1−β₁)b₁τ}) + [a/(1 − β₂)] e^{−(1−β₁)b₁τ} (1 − e^{−(1−β₂)b₂(t−τ)}), when t > τ. ……………………………..(5.1)
R(x/t) = exp{−[m(t+x) − m(t)]}
= exp{−[a/(1 − β₁)] (e^{−(1−β₁)b₁t} − e^{−(1−β₁)b₁(t+x)})}, when 0 < t ≤ t+x ≤ τ,
= exp{−[a/(1 − β₂)] e^{−(1−β₁)b₁τ} (e^{−(1−β₂)b₂(t−τ)} − e^{−(1−β₂)b₂(t+x−τ)})}, when t > τ.
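Because Eq. (5.1) is a closed-form solution of the stated system of differential equations, it can be cross-checked numerically. The sketch below (all parameter values are illustrative assumptions, not estimates from the paper's data) integrates the system with a simple forward-Euler scheme and compares the result with the closed form:

```python
import math

def m_closed(t, a, b1, b2, be1, be2, tau):
    """Eq. (5.1): mean value function with imperfect debugging and a
    known change-point tau."""
    if t <= tau:
        return a / (1 - be1) * (1 - math.exp(-(1 - be1) * b1 * t))
    m_tau = a / (1 - be1) * (1 - math.exp(-(1 - be1) * b1 * tau))
    return m_tau + (a * math.exp(-(1 - be1) * b1 * tau) / (1 - be2)
                    * (1 - math.exp(-(1 - be2) * b2 * (t - tau))))

def m_numeric(t_end, a0, b1, b2, be1, be2, tau, dt=1e-4):
    """Forward-Euler integration of dm/dt = b(t)[a(t) - m],
    da/dt = beta(t) dm/dt, as an independent cross-check."""
    t, m, a = 0.0, 0.0, a0
    while t < t_end:
        b = b1 if t <= tau else b2
        be = be1 if t <= tau else be2
        dm = b * (a - m) * dt
        m += dm
        a += be * dm
        t += dt
    return m

# Illustrative parameters only.
a0, b1, b2, be1, be2, tau = 100.0, 0.05, 0.12, 0.03, 0.06, 10.0
print(m_closed(25.0, a0, b1, b2, be1, be2, tau))
print(m_numeric(25.0, a0, b1, b2, be1, be2, tau))  # should agree closely
```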
6. The Present model under study:
To construct the new NHPP SRGM that incorporates imperfect debugging and the change-point problem, the following assumptions are made:
a) When detected faults are removed at time t, it is possible to introduce new faults with introduction rate β.
b) The fault detection rate is represented by the function given below:

b(t) = b₁, when 0 ≤ t.

The unknown parameters are to be estimated from the collected failure data (Zhao, 1993; Hinkley, 1970; Chang, 1997). According to these assumptions, one can derive a new set of differential equations to obtain the new mean value function:
∂m(t)/∂t = b(t)[a − m(t)],

∂a(t)/∂t = β ∂m(t)/∂t;  a(0) = a, m(0) = 0,

b(t) = β m(t) + b₁, so that b(t) = b₁ at t = 0,

where b(t) is the time-dependent fault detection rate entering the general NHPP model; a constant b(t) implies the perfect debugging assumption, i.e., no new faults are introduced during the debugging process. Solving ∂m(t)/∂t = [b₁ + β m(t)][a − m(t)] with m(0) = 0 yields the mean value function

m(t) = a(1 − e^{−(b₁+aβ)t}) / (1 + (aβ/b₁) e^{−(b₁+aβ)t}). ………………..(6.1)
Taking this mean value function, we propose a new SRGM based on an NHPP. Its reliability parameters and predictive validity, and hence its applicability as an SRGM, can be assessed through the goodness of fit of the model.
The developed function is more complex than those of the other models. Let the fault introduction rate be a constant β during the fault detection process; the mean value function then holds for all time t ≥ 0. The model implies that the intensity function λ(t) is not necessarily a continuous function of time. Following the same definition as Goel and Okumoto (1979), the conditional reliability function of this developed model can be obtained as
R(x/t) = exp{−[m(t+x) − m(t)]}

= exp{−[ a(1 − e^{−(b₁+aβ)(t+x)}) / (1 + (aβ/b₁) e^{−(b₁+aβ)(t+x)}) − a(1 − e^{−(b₁+aβ)t}) / (1 + (aβ/b₁) e^{−(b₁+aβ)t}) ]}. …… (6.2)
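A small numerical sketch can sanity-check Eq. (6.1) against the underlying differential equation dm/dt = [b₁ + βm(t)][a − m(t)] by forward-Euler integration (the parameter values below are illustrative assumptions only):

```python
import math

def m6(t, a, b1, beta):
    """Eq. (6.1): logistic-type mean value function solving
    dm/dt = (b1 + beta*m)(a - m), m(0) = 0."""
    k = b1 + a * beta
    return a * (1.0 - math.exp(-k * t)) / (1.0 + (a * beta / b1) * math.exp(-k * t))

def rel6(x, t, a, b1, beta):
    """Eq. (6.2): conditional reliability exp{-[m(t+x) - m(t)]}."""
    return math.exp(-(m6(t + x, a, b1, beta) - m6(t, a, b1, beta)))

a, b1, beta = 100.0, 0.02, 0.001  # illustrative values only

# Forward-Euler integration of dm/dt = (b1 + beta*m)(a - m) as a cross-check.
t_end, dt, t, m = 30.0, 1e-4, 0.0, 0.0
while t < t_end:
    m += (b1 + beta * m) * (a - m) * dt
    t += dt
print(m, m6(t_end, a, b1, beta))  # should agree closely
print(rel6(1.0, 30.0, a, b1, beta))
```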
The R(x/t) model above addresses the problem with a single type of fault. However, based on severity, which assesses the impact of a fault on the user, software faults can be classified into various types.
The further modified model can be applied to the software reliability estimation problem not only for the imperfect debugging and change-point case but also for the multiple-fault-type problem. The only difficulty is that more parameters need to be estimated at the same time.
7. Numerical examples and model evaluation:
To verify the proposed model, which incorporates both imperfect debugging and change-point problems, four data sets are introduced. Two of them are collected from real software development projects and the others are obtained from simulation.
The first set of software failure data analyzed in this section is taken from Misra (1983). The purpose of the first example is to illustrate the process of model creation. In this data set, software faults are classified into three types: critical (type 1), major (type 2) and minor (type 3). The total testing time and the number of software failures for each week are recorded. Pham (1993) used the same data to illustrate the results of his imperfect debugging model with the parameter setting β = 0.05. Under the NHPP assumption, the remaining parameters a and b₁ can be estimated by maximizing the following likelihood function:
L(a, b₁) = ∏_{i=1}^{3} ∏_{j=1}^{n} [m_i(t_j) − m_i(t_{j−1})]^{y_i(t_j) − y_i(t_{j−1})} e^{−[m_i(t_j) − m_i(t_{j−1})]} / [y_i(t_j) − y_i(t_{j−1})]!,

where y_i(t_j) is the cumulative number of type-i faults detected by time t_j and n is the number of observation intervals. It should be pointed out that a similar analysis was conducted using the method of maximum likelihood discussed in Pham (1993). The estimates for the parameters are a = 200 and b₁ = 0.000355.
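The likelihood above is the standard grouped-data NHPP form. The sketch below shows how its logarithm would be evaluated for one fault type under the simple G-O mean value function; the weekly fault-count table and parameter values are made up purely for illustration:

```python
import math

def m(t, a, b):
    """G-O mean value function m(t) = a(1 - e^{-bt})."""
    return a * (1.0 - math.exp(-b * t))

def log_likelihood(times, counts, a, b):
    """Grouped-data NHPP log-likelihood for one fault type.
    times[j] is the end of interval j; counts[j] is the cumulative
    fault count y(t_j)."""
    ll, prev_t, prev_y = 0.0, 0.0, 0
    for t_j, y_j in zip(times, counts):
        dm = m(t_j, a, b) - m(prev_t, a, b)  # expected faults in interval
        dy = y_j - prev_y                    # observed faults in interval
        ll += dy * math.log(dm) - dm - math.lgamma(dy + 1)
        prev_t, prev_y = t_j, y_j
    return ll

# Hypothetical weekly cumulative counts, for illustration only.
times = [1.0, 2.0, 3.0, 4.0, 5.0]
counts = [12, 21, 28, 33, 36]
print(log_likelihood(times, counts, a=40.0, b=0.4))
```

Maximizing this function over (a, b), e.g. with a numerical optimizer, gives the maximum likelihood estimates.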
The results show that the fault detection rates increase for all three types of fault after time τ. Table 1 presents the evaluation results of the developed model, where the mean square error (MSE) measures the weighted average squared distance between the actual data and the model estimates and is defined as

MSE = ∑_{i=1}^{3} ∑_{j=1}^{n} [y_i(t_j) − m_i(t_j)]² / d,

where d is the degrees of freedom. For the purpose of comparison, the MSE uses the degrees of freedom to assign a larger penalty to a model with more parameters; the proposed model, having more parameters, receives the larger penalty. The smaller the MSE value, the better the model fits. The new model yields slightly more conservative results than the other NHPP SRGMs, though not significantly so, since the change-point is imposed and may not exist.
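The MSE criterion above is straightforward to compute; the helper below illustrates it on a made-up table of observed cumulative counts (rows are fault types) and hypothetical model estimates:

```python
def mse(observed, fitted, d):
    """MSE = sum over fault types i and intervals j of
    [y_i(t_j) - m_i(t_j)]^2, divided by the degrees of freedom d."""
    total = 0.0
    for obs_row, fit_row in zip(observed, fitted):
        for y, mhat in zip(obs_row, fit_row):
            total += (y - mhat) ** 2
    return total / d

# Made-up cumulative counts and model estimates, for illustration only.
observed = [[12, 21, 28], [5, 9, 12], [3, 4, 6]]
fitted = [[11.2, 20.5, 27.1], [5.6, 8.8, 12.4], [2.7, 4.5, 5.9]]
print(mse(observed, fitted, d=9 - 4))  # d = observations minus parameters
```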
The proposed model is also examined using the data set collected from System T1 in Musa (1979).
Table-1
Comparison of goodness-of-fit of imperfect debugging NHPP models (data from Misra)

                            Our Model   With change-point   No change-point
Maximized log-likelihood     -177.53        -153.998            -154.584
MSE (critical fault)          0.0734           0.082               0.086
MSE (major fault)             1.9210           1.998               2.028
MSE (minor fault)             4.1315           4.240               4.286
Table 2 summarizes the MSE values for the investigated models. It shows that the MSE (fit) and MSE (predict) values for the new model are smaller than those of the other models, including the one that does not consider the change-point problem (24.50 vs. 25.11 vs. 62.91, and 1.00 vs. 1.01 vs. 1.75, respectively).
Table-2
Comparison of descriptive and predictive power of imperfect debugging NHPP models (data from Musa)

                            Our Model   With change-point   No change-point
Maximized log-likelihood       -199           -215                -223
MSE (fit)                     24.50          25.11               62.91
MSE (predict)                  1.00           1.01                1.75
The results of using simulated data to verify the new model are shown in Table 3. For data set 2, the descriptive error of the model that does not consider the change-point is poor and unacceptable. Compared with the model under the change-point assumption, whose MSE measures for description and prediction are 12.81 and 5.36 respectively, the new model gives very promising results.
Table-3
Comparison of descriptive and predictive power of imperfect debugging NHPP models (simulated data)

                              Our Model   With change-point   No change-point
Data Set 1   MSE (fit)           4.91           5.01                9.88
             MSE (predict)       1.01           1.21                1.63
Data Set 2   MSE (fit)          10.77          12.81              519.18
             MSE (predict)       4.00           5.36                9.45
8. Genetic Algorithms:
We apply a genetic-algorithm-based optimizer to solve the formulated mathematical models of parameter estimation for software reliability. Genetic algorithms (GAs) have been used extensively for dealing with optimization problems. GAs are based on the biological evolution process and were first introduced by Holland (1975) in the 1970s. GAs are useful (Goldberg, 1989) where the search space is large, nonlinear and noisy, and solutions are ill-defined a priori.
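As an illustration of this approach (a minimal real-valued GA sketch, not the authors' actual optimizer; the population size, mutation scales, and synthetic data are all assumptions), the following estimates a and b of the G-O model by minimizing the squared error against a series generated from known parameters:

```python
import math, random

random.seed(1)

def m(t, a, b):
    """G-O mean value function."""
    return a * (1.0 - math.exp(-b * t))

# Synthetic "observed" cumulative counts from known true parameters.
TRUE_A, TRUE_B = 120.0, 0.08
times = [float(t) for t in range(1, 21)]
data = [m(t, TRUE_A, TRUE_B) for t in times]

def fitness(ind):
    """Negative sum of squared errors: larger is better."""
    a, b = ind
    return -sum((y - m(t, a, b)) ** 2 for t, y in zip(times, data))

def evolve(pop_size=40, generations=150):
    pop = [(random.uniform(1, 500), random.uniform(0.001, 1.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            w = random.random()                 # arithmetic crossover
            child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]
            child[0] = max(1.0, child[0] + random.gauss(0, 5.0))    # mutate a
            child[1] = max(1e-4, child[1] + random.gauss(0, 0.01))  # mutate b
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # should be close to (TRUE_A, TRUE_B)
```

In practice, the fitness function would be the log-likelihood or the MSE on the actual failure data rather than on this synthetic series.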
The proposed model is also examined using the data set collected from System T1 in Musa (1979). This data set includes 136 faults found in the test phase. Table 4 summarizes the MSE values of the investigated models. It shows that the MSE (fit) and MSE (predict) values for our model are smaller than those of the others.
Table-4
Comparison of descriptive and predictive power of imperfect debugging NHPP models (data from System T1, Musa)

                 Our Model   With change-point   No change-point
MSE (fit)          25.01          27.734              59.471
MSE (predict)       0.821          0.838               4.688
9. Conclusion:
The developed NHPP SRGM is unique in that it allows the analysis of software failure data with change-points, imperfect debugging, and various fault types. From Tables 1, 2, 3 and 4 we conclude that our proposed model rates better than the other models considered with respect to all the chosen conditions. Genetic algorithms are therefore well suited to our model, causing minimal disturbance compared with the competing models.
10. References:
[1] Chang, I.P., 1997. An analysis of software reliability with change-point models. NSC 85-2121-M031-003, National Science Council, Taiwan.
[2] Fenton, N.E., Pfleeger, S.L., 1997. Software Metrics: A Rigorous and Practical Approach. PWS Publishing Company, Boston.
[3] Goel, A.L., Okumoto, K., 1979. Time-dependent error-detection rate model for software reliability and other performance measures. IEEE Trans. Reliab. R-28, 206–211.
[4] Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley.
[5] Hinkley, D.V., 1970. Inference about the change-point in a sequence of random variables. Biometrika 57, 1–16.
[6] Shyur, H.-J., 2003. A stochastic software reliability model with imperfect-debugging and change-point. The Journal of Systems and Software, 135–141.
[7] Shyur, H.-J., Chen, M.-C. Analyzing software reliability growth model with imperfect-debugging and change-point by genetic algorithms.
[8] Jelinski, Z., Moranda, P.B., 1972. Software reliability research. In: Statistical Computer Performance Evaluation. Academic Press, New York, p. 465.
[9] Misra, P.N., 1983. Software reliability analysis. IBM Syst. J. 22, 262–270.
[10] Musa, J.D., 1979. Software reliability measures applied to system engineering. In: Proceedings COMPCON.
[11] Musa, J.D., Iannino, A., Okumoto, K., 1987. Software Reliability: Measurement, Prediction, Application. McGraw-Hill, New York.
[12] Ohba, M., 1984. Software reliability analysis model. IBM J. Res. Develop. 28, 428–443.
[13] Pham, H., 1993. Software reliability assessment: Imperfect debugging and multiple failure types in software development. EG&G-RAAM-10737, Idaho National Engineering Laboratory.
[14] Pham, H., Nordmann, L., Zhang, X., 1999. A general imperfect-software-debugging model with s-shaped fault-detection rate. IEEE Trans. Reliab. 48, 169–175.
[15] Yamada, S., Ohba, M., Osaki, S., 1983. S-shaped reliability growth modeling for software error detection. IEEE Trans. Reliab. 12, 475–484.
[16] Zhao, M., 1993. Change-point problems in software and hardware reliability. Commun. Statist. – Theory Meth. 22, 757–768.
Authors
Dr. R. Satya Prasad received his Ph.D. degree in Computer Science from the Faculty of Engineering, Acharya Nagarjuna University, Guntur, Andhra Pradesh, India, in 2007. He has a consistent academic track record and received a gold medal from Acharya Nagarjuna University for his outstanding performance, securing first rank in his Master's degree. He is currently working as Associate Professor and Head of the Department in the Department of Computer Science & Engineering, Acharya Nagarjuna University. He has held various academic responsibilities, such as practical examiner, project adjudicator, and external member of boards of examiners for various universities and colleges in and around Andhra Pradesh. His current research is focused on Software Engineering, Image Processing and Database Management Systems. He has published several papers in national and international journals.
O. Naga Raju received his Master's degree in Computer Science & Engineering from Acharya Nagarjuna University, Guntur, Andhra Pradesh, India. He is currently pursuing a Ph.D. at the Department of Computer Science and Engineering, Acharya Nagarjuna University, Guntur, Andhra Pradesh, India. His research interests include Software Engineering, Network Computing, and Image Processing.