This document presents a statistical process control method using the Log-Power model for ungrouped software failure data. It describes the Log-Power model and maximum likelihood estimation approach. It then applies the model and control chart method to two software failure time datasets to estimate parameters and monitor the failure process. The results show the estimated parameters, calculated control limits, and failure distributions plotted on control charts to detect any changes in the failure process.
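As a rough illustration of the estimation step described above, the sketch below fits the Log-Power mean value function m(t) = a·(ln(1+t))^b to ungrouped failure times by maximum likelihood. The closed-form estimators follow from the standard NHPP likelihood and should correspond to the paper's MLE step, though the paper should be consulted for the exact equations; the failure times are made-up placeholders, and the paper's specific control-limit construction is not reproduced here.

```python
import math

def log_power_mle(times):
    """Closed-form MLE for the Log-Power NHPP m(t) = a*(ln(1+t))**b,
    given ungrouped (time-domain) failure times up to the last observed failure."""
    times = sorted(times)
    n, tn = len(times), times[-1]
    # b_hat = n / sum(ln( ln(1+tn) / ln(1+ti) )), a_hat = n / (ln(1+tn))**b_hat
    denom = sum(math.log(math.log(1 + tn) / math.log(1 + ti)) for ti in times)
    b = n / denom
    a = n / math.log(1 + tn) ** b
    return a, b

def mean_value(t, a, b):
    """Expected cumulative number of failures by time t."""
    return a * math.log(1 + t) ** b

# Illustrative failure times (hours); real data would come from the test logs.
failure_times = [10, 25, 47, 80, 120, 190, 260, 410, 700, 1100]
a_hat, b_hat = log_power_mle(failure_times)
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}")
for t in failure_times:
    print(t, round(mean_value(t, a_hat, b_hat), 3))
```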
A Defect Prediction Model for Software Product based on ANFIS - IJSRD
Artificial intelligence techniques are increasingly being applied to classification and prediction problems such as environmental monitoring, stock market analysis, biomedical diagnosis, and software engineering. However, selecting suitable training criteria for the design of artificial intelligence prediction models remains a challenge. This work focuses on developing a defect prediction mechanism using the KC1 software metric data. A subtractive clustering approach is used to generate a fuzzy inference system (FIS). The FIS rules are generated at different radii of influence of the input attribute vectors, and the resulting rules are further tuned with the ANFIS technique to predict the number of defects in a software project using a fuzzy logic system.
Optimization is considered one of the pillars of statistical learning and plays a major role in the design of intelligent systems such as search engines, recommender systems, and speech and image recognition software. Machine learning is the study of giving computers the ability to learn without being explicitly programmed: a computer is said to learn from experience with respect to a specified task and a performance measure for that task. Machine learning algorithms are applied to manipulate data and predict outputs for new data with high precision and low uncertainty, while optimization algorithms are used to make rational decisions in environments of uncertainty and imprecision. In this paper, a methodology is presented for using an efficient optimization algorithm as an alternative to gradient descent in machine learning.
Comparative Study of Machine Learning Algorithms for Sentiment Analysis with ...Sagar Deogirkar
Comparing the performance of state-of-the-art deep learning and machine learning algorithms on TF-IDF vectors for sentiment analysis of an airline Twitter data set.
Testing the performance of the power law process model considering the use of...IJCSEA Journal
Within the class of non-homogeneous Poisson process (NHPP) models, the Power Law Process (PLP) model has received considerable attention in the repairable-systems literature because of the simplicity of its mathematical computations and the attractive physical interpretation of its parameters. In this article, we investigate a new estimation approach, a regression estimation procedure, and its effect on the performance of the parametric PLP model. The regression approach, which estimates the unknown parameters of the PLP model through the mean time between failures (TBF) function, is evaluated against the maximum likelihood estimation (MLE) approach. The results from the regression and MLE approaches are compared on three error evaluation criteria in terms of parameter estimation and its precision; the numerical application shows the effectiveness of the regression approach at enhancing the predictive accuracy of the TBF measure.
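For readers unfamiliar with the PLP, here is a minimal sketch of the two estimation routes compared above: the closed-form MLE for a failure-truncated PLP with m(t) = (t/θ)^β, and a simple log-log least-squares fit of the cumulative failure count as an illustrative regression-style estimate (not necessarily the authors' exact TBF-based procedure); the failure times are placeholders.

```python
import math

def plp_mle(times):
    """MLE for a failure-truncated Power Law Process with m(t) = (t/theta)**beta."""
    times = sorted(times)
    n, tn = len(times), times[-1]
    beta = n / sum(math.log(tn / ti) for ti in times[:-1])
    theta = tn / n ** (1 / beta)
    return beta, theta

def plp_regression(times):
    """Illustrative regression estimate: fit ln(i) = beta*ln(t_i) + c by least squares,
    so the slope estimates beta and theta = exp(-c/beta)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(i + 1) for i in range(len(times))]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return slope, math.exp(-intercept / slope)

failure_times = [3.2, 7.5, 12.1, 20.4, 33.0, 51.8, 80.2, 125.0]   # placeholder data
print("MLE        beta, theta:", plp_mle(failure_times))
print("Regression beta, theta:", plp_regression(failure_times))
```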
AUTOMATIC GENERATION AND OPTIMIZATION OF TEST DATA USING HARMONY SEARCH ALGOR...csandit
Software testing is a primary phase of software development, carried out by running a sequence of test inputs and checking them against expected outputs. The Harmony Search (HS) algorithm is based on the improvisation process of music; compared with other algorithms, HS has gained popularity and shown its strength in the field of evolutionary computation. When musicians compose a harmony they try different possible combinations of pitches stored in harmony memory, and optimization proceeds by adjusting the input pitches to generate the perfect harmony. The test case generation process is used to identify test cases within resource limits and to identify critical domain requirements. In this paper, the role of the Harmony Search meta-heuristic is analyzed for generating random test data and optimizing that test data. Test data are generated and optimized for a case study, a withdrawal task in a bank ATM, using Harmony Search. It is observed that the algorithm generates suitable test cases as well as test data; the paper also gives brief details of the Harmony Search method as used for test data generation and optimization.
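To make the improvisation analogy concrete, here is a minimal Harmony Search sketch for a generic numeric optimization problem (a simple distance function stands in for a test-data fitness measure such as branch distance); the HMS/HMCR/PAR settings and the objective are illustrative choices, not the paper's.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Minimize `objective` over box `bounds` with the classic Harmony Search moves."""
    # Harmony memory: HMS random solution vectors plus their objective values.
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                 # take the pitch from memory...
                value = random.choice(memory)[d]
                if random.random() < par:              # ...and maybe adjust it slightly
                    value += random.uniform(-bw, bw) * (hi - lo)
            else:                                      # otherwise improvise a new pitch
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        new_score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if new_score < scores[worst]:                  # replace the worst harmony
            memory[worst], scores[worst] = new, new_score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Illustrative objective: distance of a 2-D test input from a target branch condition.
target = (40.0, 7.0)
objective = lambda x: (x[0] - target[0]) ** 2 + (x[1] - target[1]) ** 2
print(harmony_search(objective, bounds=[(0, 100), (0, 100)]))
```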
The pertinent single-attribute-based classifier for small datasets classific...IJECEIAES
Classifying a dataset using machine learning algorithms can be a big challenge when the target is a small dataset. The OneR classifier can be used in such cases due to its simplicity and efficiency. In this paper, we reveal the power of a single attribute by introducing the pertinent single-attribute-based heterogeneity-ratio classifier (SAB-HR), which uses a pertinent attribute to classify small datasets. The SAB-HR feature selection method uses the Heterogeneity-Ratio (H-Ratio) measure to identify the most homogeneous attribute among the attributes in the set. Our empirical results on 12 benchmark datasets from the UCI machine learning repository show that the SAB-HR classifier significantly outperforms the classical OneR classifier for small datasets. In addition, using the H-Ratio as a feature selection criterion for selecting the single attribute was more effective than traditional criteria such as Information Gain (IG) and Gain Ratio (GR).
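For context, the classical OneR baseline that SAB-HR is compared against can be sketched in a few lines: for each attribute, build a one-level rule mapping each attribute value to its majority class, then keep the attribute whose rule makes the fewest training errors. The H-Ratio criterion itself is specific to the paper and is not reproduced here; the toy data is illustrative.

```python
from collections import Counter, defaultdict

def one_r(rows, labels):
    """Classic OneR: pick the single attribute whose value->majority-class rule
    has the lowest training error. Attributes are assumed already discretized."""
    n_attrs = len(rows[0])
    best = None
    for a in range(n_attrs):
        by_value = defaultdict(list)
        for row, y in zip(rows, labels):
            by_value[row[a]].append(y)
        rule = {v: Counter(ys).most_common(1)[0][0] for v, ys in by_value.items()}
        errors = sum(rule[row[a]] != y for row, y in zip(rows, labels))
        if best is None or errors < best[2]:
            best = (a, rule, errors)
    return best  # (attribute index, value->class rule, training errors)

# Tiny illustrative dataset: (outlook, windy) -> play
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"),
        ("rain", "yes"), ("overcast", "no"), ("overcast", "yes")]
labels = ["no", "no", "yes", "no", "yes", "yes"]
attr, rule, errs = one_r(rows, labels)
print(f"chosen attribute {attr}, rule {rule}, {errs} training errors")
```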
Automatically Estimating Software Effort and Cost using Computing Intelligenc...cscpconf
In the IT industry, precisely estimating the effort, development cost, and schedule of each software project counts for much to a software company, so accurate estimation of man power is becoming more important. In the past, IT companies estimated the work effort of man power through human experts using statistical methods, but the outcomes often left the management level unsatisfied. Recently it has become an interesting question whether computational intelligence techniques can do better in this field. This research uses computational intelligence techniques, such as the Pearson product-moment correlation coefficient and one-way ANOVA to select key factors, and the K-Means clustering algorithm to cluster projects, in order to estimate software project effort. The experimental results show that using computational intelligence techniques to estimate software project effort gives more precise and more effective estimates than traditional human experts did.
Threshold benchmarking for feature ranking techniques - journalBEEI
In prediction modeling, the choice of features from the original feature set is crucial for accuracy and model interpretability. Feature ranking techniques rank features by importance, but there is no consensus on the number of features at which to cut off. Thus, it becomes important to identify a threshold value or range for removing redundant features. In this work, an empirical study is conducted to identify a threshold benchmark for feature ranking algorithms. Experiments are conducted on the Apache Click dataset with six popular ranker techniques and six machine learning techniques, to deduce a relationship between the total number of input features (N) and the threshold range. The area-under-the-curve analysis shows that ≃ 33-50% of the features are necessary and sufficient to yield a reasonable performance measure, with a variance of 2%, in defect prediction models. Further, we find that log2(N) as the ranker threshold value represents the lower limit of the range.
Robust Fault-Tolerant Training Strategy Using Neural Network to Perform Funct...Eswar Publications
This paper introduces an efficient and robust training mechanism for a neural network that can be used for testing the functionality of software. The traditional neural network architecture is used, consisting of two phases: a training phase and an evaluation phase. The input test cases are trained in the first phase and subsequently behave like normal test cases to predict the output of untrained test cases. The test oracle measures the deviation between the outputs of untrained and trained test cases and authorizes a final decision. Our framework can be applied to systems where the number of test cases outnumbers the functionalities or the system under test is too complex. It can also be applied to test case development when the modules of a system become tedious after modification.
Feature selection in high-dimensional datasets is considered a complex and time-consuming problem. To enhance classification accuracy and reduce execution time, Parallel Evolutionary Algorithms (PEAs) can be used. In this paper, we review the most recent works that use PEAs for feature selection in large datasets. We classify the algorithms in these papers into four main classes: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Scatter Search (SS), and Ant Colony Optimization (ACO). Accuracy is adopted as the measure for comparing the efficiency of these PEAs. Parallel Genetic Algorithms (PGAs) appear to be the most suitable algorithms for feature selection in large datasets, since they achieve the highest accuracy. On the other hand, we found that parallel ACO is time-consuming and less accurate compared with the other PEAs.
A Novel Approach To Detection and Evaluation of Resampled Tampered Images - CSCJournals
Most digital forgeries use an interpolation function, affecting the underlying statistical distribution of the image pixel values, which, when detected, can be used as evidence of tampering. This paper provides a comparison of interpolation techniques, similar to Lehmann [1], using analyses of the Fourier transform of the image signal, a quantitative assessment of the interpolation quality after applying selected interpolation functions, and an appraisal of computational performance using runtime measurements. A novel algorithm is proposed for detecting locally tampered regions, taking the averaged discrete Fourier transform of the zero-crossings of the second difference of the resampled signal (ADZ). The algorithm was contrasted, using precision, recall, and specificity metrics, against those found in the literature, with comparable results. The interpolation comparison results were similar to those of [1]. The results of the detection algorithm showed that it performed well for identifying authentic images, and better than previously proposed algorithms for identifying tampered regions.
A hybrid fuzzy ANN approach for software effort estimation - ijfcstjournal
Software development effort estimation is one of the major activities in software project management. During the project proposal stage there is a high probability that estimates are inaccurate, but this inaccuracy decreases later on. In the field of software development there are certain metrics on which effort estimation is based. To date, various methods have been proposed for software effort estimation, of which the non-algorithmic methods, such as artificial intelligence techniques, have been very successful. A hybrid Fuzzy-ANN model, known as the Adaptive Neuro-Fuzzy Inference System (ANFIS), is particularly suitable in such situations. The present paper is concerned with developing a software effort estimation model based on ANFIS. The study evaluates the efficiency of the proposed ANFIS model using the COCOMO81 datasets. The results obtained are compared with an Artificial Neural Network (ANN) and the Intermediate COCOMO model developed by Boehm, and are analyzed using the Magnitude of Relative Error (MRE) and Root Mean Square Error (RMSE). It is observed that ANFIS provides better results than the ANN and COCOMO models.
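The two evaluation measures used above are standard and worth pinning down; a short sketch, with placeholder actual/predicted effort values, is:

```python
import math

def mre(actual, predicted):
    """Magnitude of Relative Error for a single project."""
    return abs(actual - predicted) / actual

def rmse(actuals, predictions):
    """Root Mean Square Error over all projects."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actuals, predictions)) / len(actuals))

actual    = [120.0, 36.0, 340.0, 57.0]     # placeholder effort values (person-months)
predicted = [110.0, 42.0, 310.0, 61.0]
print("MRE per project:", [round(mre(a, p), 3) for a, p in zip(actual, predicted)])
print("Mean MRE (MMRE):", round(sum(mre(a, p) for a, p in zip(actual, predicted)) / len(actual), 3))
print("RMSE:", round(rmse(actual, predicted), 3))
```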
Applicability of Hooke’s and Jeeves Direct Search Solution Method to Metal c...ijiert bestjournal
The role of optimization in engineering design has become prominent with the advent of computers, and optimization is now part of computer-aided design activities. It is primarily used in those design activities in which the goal is not only to achieve a feasible design but also a design objective. In most engineering design activities, the design objective could simply be to minimize the cost of production or to maximize the efficiency of production. An optimization algorithm is a procedure executed iteratively, comparing various solutions until an optimum or satisfactory solution is found. In many industrial design activities, optimization is achieved indirectly by comparing a few chosen design solutions and accepting the best one; this simplistic approach never guarantees the true optimum, whereas optimization algorithms begin with one or more design solutions supplied by the user and then iteratively generate and check new designs. There are two distinct types of optimization algorithms in use today: first, deterministic algorithms, with specific rules for moving from one solution to another; and second, algorithms with stochastic transition rules.
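Since the entry above stays at the descriptive level, a minimal Hooke and Jeeves pattern-search sketch may help: exploratory moves probe each coordinate around the current base point, a successful exploration triggers a pattern move, and the step size shrinks when no improvement is found. The objective below is an arbitrary illustrative function, not one from the paper.

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10000):
    """Minimize f by Hooke-Jeeves direct (pattern) search; no derivatives needed."""

    def explore(base, s):
        # Try +/- s along each coordinate, keeping any move that improves f.
        point = list(base)
        for i in range(len(point)):
            for delta in (s, -s):
                trial = list(point)
                trial[i] += delta
                if f(trial) < f(point):
                    point = trial
                    break
        return point

    base = list(x0)
    for _ in range(max_iter):
        new = explore(base, step)
        if f(new) < f(base):
            # Pattern move: jump further in the successful direction, then re-explore.
            pattern = [2 * n - b for n, b in zip(new, base)]
            candidate = explore(pattern, step)
            base = candidate if f(candidate) < f(new) else new
        else:
            step *= shrink                     # no improvement: refine the mesh
            if step < tol:
                break
    return base, f(base)

# Illustrative objective: a shifted quadratic standing in for a machining cost model.
cost = lambda x: (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2
print(hooke_jeeves(cost, x0=[0.0, 0.0]))
```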
Model based test case prioritization using neural network classification - cseij
Model-based testing of real-life software systems often requires a large number of tests, not all of which can be run exhaustively due to time and cost constraints. Thus, it is necessary to prioritize the test cases in accordance with the importance the tester perceives. In this paper, this problem is addressed by improving our previous study: a classification approach is applied to its results, and a functional relationship is established between test case prioritization group membership and two attributes, the importance index and the frequency, for all events belonging to a given group. For classification, a neural network (NN) is preferred, and a data set obtained from our study for all test cases is classified using a multilayer perceptron (MLP) NN. The classification results for a commercial test prioritization application show high classification accuracy of about 96%, and acceptable test prioritization performance is achieved.
ESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACH - cscpconf
Due to the intangible nature of software, accurate and reliable software effort estimation is a challenge in the software industry. Very accurate estimates of software development effort are unlikely because of the inherent uncertainty in software development projects and the complex, dynamic interaction of factors that affect software development. Heterogeneity exists in software engineering datasets because data is collected from diverse sources; it can be reduced by defining relationships between data values and classifying them into different clusters. This study focuses on how the combination of clustering and regression techniques can reduce the potential loss of predictive effectiveness caused by heterogeneity of the data. Using a clustered approach creates subsets of data with a degree of homogeneity that enhances prediction accuracy. It was also observed in this study that ridge regression performs better than the other regression techniques used in the analysis.
Estimating project development effort using clustered regression approach - csandit
Due to the intangible nature of software, accurate and reliable software effort estimation is a challenge in the software industry. Very accurate estimates of software development effort are unlikely because of the inherent uncertainty in software development projects and the complex, dynamic interaction of factors that affect software development. Heterogeneity exists in software engineering datasets because data is collected from diverse sources; it can be reduced by defining relationships between data values and classifying them into different clusters. This study focuses on how the combination of clustering and regression techniques can reduce the potential loss of predictive effectiveness caused by heterogeneity of the data. Using a clustered approach creates subsets of data with a degree of homogeneity that enhances prediction accuracy. It was also observed in this study that ridge regression performs better than the other regression techniques used in the analysis.
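A minimal sketch of the cluster-then-regress idea described above, assuming scikit-learn and synthetic data in place of a real effort dataset: K-Means partitions the projects, a ridge regressor is fit per cluster, and a new project is scored by the model of its nearest cluster.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Synthetic "projects": features (e.g. size and team factors) and an effort target.
X = rng.normal(size=(200, 4))
y = X @ np.array([3.0, -1.0, 0.5, 2.0]) + rng.normal(scale=0.5, size=200)

# 1) Cluster the heterogeneous data into more homogeneous subsets.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# 2) Fit one ridge regression model per cluster.
models = {}
for c in range(kmeans.n_clusters):
    mask = kmeans.labels_ == c
    models[c] = Ridge(alpha=1.0).fit(X[mask], y[mask])

# 3) Predict a new project with the model of its nearest cluster.
def predict(x_new):
    c = int(kmeans.predict(x_new.reshape(1, -1))[0])
    return float(models[c].predict(x_new.reshape(1, -1))[0])

print(predict(rng.normal(size=4)))
```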
A Review on Feature Selection Methods For Classification Tasks - Editor IJCATR
In recent years, the application of feature selection methods to medical datasets has greatly increased. The challenging task in feature selection is how to obtain an optimal subset of relevant and non-redundant features that gives an optimal solution without increasing the complexity of the modeling task. Thus, there is a need to make practitioners aware of feature selection methods that have been successfully applied to medical data sets and to highlight future trends in this area. The findings indicate that most existing feature selection methods depend on univariate ranking, which does not take interactions between variables into account; overlook the stability of the selection algorithms; and, where they produce good accuracy, employ a larger number of features. Developing a universal method that achieves the best classification accuracy with fewer features is still an open research area.
VMware EVO - The data room of the future is hyperconverged - Kenneth de Brucq
Dell Solutions Tour 2014 Norway
Roger Samdal, VMware
With Dell PowerEdge 13G servers, VMware VSAN, and NSX you can realize a fully software-defined data center, where all infrastructure is virtualized and runs on industry-standard x86 servers. Simple, flexible, and high performance at half the price.
Assessing Software Reliability Using SPC – An Order Statistics Approach IJCSEA Journal
There are many software reliability models that are based on the times of occurrence of errors during the debugging of software. It is shown that asymptotic likelihood inference is possible for software reliability models based on order statistics or Non-Homogeneous Poisson Processes (NHPP), with asymptotic confidence levels for interval estimates of parameters. In particular, interval estimates from these models are obtained for the conditional failure rate of the software, given the data from the debugging process; the data can be grouped or ungrouped. For someone deciding when to market software, the conditional failure rate is an important parameter. Order statistics are used in a wide variety of practical situations: characterization problems, detection of outliers, linear estimation, study of system reliability, life testing, survival analysis, data compression, and many other fields, as can be seen from many books. Statistical Process Control (SPC) can monitor the forecasting of software failures and thereby contribute significantly to the improvement of software reliability; control charts are widely used for software process control in the software industry. In this paper we propose a control mechanism based on order statistics of the cumulative quantity between observations of time-domain failure data, using the mean value function of the Half Logistics Distribution (HLD) based on an NHPP.
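As a sketch of the control-limit construction this line of work typically uses (an assumption here, since the abstract does not spell it out): with an HLD-based mean value function m(t) = a(1 - e^{-bt})/(1 + e^{-bt}) and already-estimated parameters a and b, the limits are placed where m(t) reaches 0.00135a, 0.5a, and 0.99865a, and each observed failure is judged against them. The parameter values below are placeholders.

```python
import math

def hld_mean_value(t, a, b):
    """NHPP mean value function built from the half-logistic CDF (assumed form)."""
    return a * (1 - math.exp(-b * t)) / (1 + math.exp(-b * t))

def control_times(a, b, probs=(0.00135, 0.5, 0.99865)):
    """Solve m(t) = p*a for each control probability p (LCL, CL, UCL)."""
    # (1 - e^{-bt}) / (1 + e^{-bt}) = p  =>  t = -ln((1 - p) / (1 + p)) / b
    return tuple(-math.log((1 - p) / (1 + p)) / b for p in probs)

a_hat, b_hat = 80.0, 0.004          # placeholder estimates (the paper obtains them by MLE)
t_lcl, t_cl, t_ucl = control_times(a_hat, b_hat)
print("LCL/CL/UCL on the time scale:", round(t_lcl, 2), round(t_cl, 2), round(t_ucl, 2))
```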
Pareto Type II Based Software Reliability Growth Model - Waqas Tariq
The past four decades have seen the formulation of several software reliability growth models to predict the reliability and error content of software systems. This paper presents the Pareto type II model as a software reliability growth model, together with expressions for various reliability performance measures. Probability theory, distribution functions, and probability distributions play a major role in software reliability model building. The paper presents estimation procedures to assess the reliability of a software system using the Pareto distribution, based on a Non-Homogeneous Poisson Process (NHPP).
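For orientation, the standard way to build an NHPP growth model from a distribution is m(t) = a·F(t), where a is the expected total number of faults and F is the chosen CDF; with the Pareto type II (Lomax) distribution this gives the sketch below. The exact parameterization used in the paper may differ, and the numbers are placeholders.

```python
def pareto2_mean_value(t, a, alpha, sigma):
    """m(t) = a * F(t) with the Lomax CDF F(t) = 1 - (1 + t/sigma)**(-alpha)."""
    return a * (1 - (1 + t / sigma) ** (-alpha))

def pareto2_intensity(t, a, alpha, sigma):
    """Failure intensity lambda(t) = m'(t)."""
    return a * alpha / sigma * (1 + t / sigma) ** (-(alpha + 1))

a, alpha, sigma = 100.0, 1.5, 200.0        # placeholder parameter values
for t in (50, 200, 800):
    expected = pareto2_mean_value(t, a, alpha, sigma)
    remaining = a - expected               # expected faults still undetected at time t
    print(t, round(expected, 2), round(remaining, 2))
```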
SRGM with Imperfect Debugging by Genetic Algorithms - ijseajournal
Computer software has progressively become an essential component of modern technologies. Penalty costs resulting from software failures are often more considerable than software development costs, and debugging decreases the error content but expands development costs. To improve software quality, software reliability engineering plays an important role in many aspects of the software life cycle. In this paper, we incorporate both imperfect debugging and the change-point problem into a software reliability growth model (SRGM) based on the well-known exponential distribution; parameter estimation is studied, and the proposed model is compared with some existing models in the literature and found to be better.
DETECTION OF RELIABLE SOFTWARE USING SPRT ON TIME DOMAIN DATA - IJCSEA Journal
In classical hypothesis testing, volumes of data must be collected before conclusions are drawn, which may take much time. Sequential analysis, however, can be adopted in order to decide very quickly whether the developed software is reliable or unreliable; the procedure used for this is the Sequential Probability Ratio Test (SPRT). In the present paper we examine the performance of the SPRT on time-domain data using the Weibull model and analyze the results by applying it to five data sets. The parameters are estimated using Maximum Likelihood Estimation.
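To show the shape of the sequential decision rule, here is Wald's SPRT on inter-failure times with two exponential hypotheses standing in for the "reliable" and "unreliable" failure rates; the paper itself works with a Weibull model, so this is an illustrative simplification, and the rates, error levels, and data are placeholders.

```python
import math

def sprt(interfailure_times, lam0, lam1, alpha=0.05, beta=0.05):
    """Wald's SPRT: H0 failure rate lam0 (reliable) vs H1 rate lam1 (unreliable)."""
    upper = math.log((1 - beta) / alpha)       # cross above -> accept H1 (unreliable)
    lower = math.log(beta / (1 - alpha))       # cross below -> accept H0 (reliable)
    llr = 0.0
    for i, x in enumerate(interfailure_times, start=1):
        # log-likelihood ratio of one exponential observation under H1 vs H0
        llr += math.log(lam1 / lam0) - (lam1 - lam0) * x
        if llr >= upper:
            return f"unreliable after {i} observations"
        if llr <= lower:
            return f"reliable after {i} observations"
    return "continue testing"

times = [30, 45, 28, 60, 75, 90, 120, 150]     # placeholder inter-failure times (hours)
print(sprt(times, lam0=1 / 100, lam1=1 / 20))
```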
A report on designing a model for improving CPU Scheduling by using Machine L...MuskanRath1
Disclaimer: Please let me know in case some of the portions of the article match your research. I would include the link to your research in the description section of my article.
Description:
Our paper proposes a model for improving CPU scheduling on a uniprocessor system. The model is implemented in a low-level (assembly) language, and Linux is used for the implementation since it is an open-source environment with an editable kernel.
There are several methods to predict the length of CPU bursts, such as the exponential averaging method; however, these methods may not give accurate or reliable predictions. In this paper, we propose a machine learning (ML) based approach to estimate the length of CPU bursts for processes. We use Bayesian theory in our model as a classifier that decides which process executes first in the ready queue. The proposed approach selects the most significant attributes of a process using feature selection techniques and then predicts the CPU burst for the process in the grid. Furthermore, applying attribute selection techniques improves performance in terms of space, time, and estimation.
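The exponential averaging baseline mentioned above is compact enough to show directly; the burst history and smoothing factor below are placeholders, and the paper's Bayesian classifier is not reproduced here.

```python
def exponential_average(bursts, alpha=0.5, tau0=10.0):
    """Classic CPU-burst prediction: tau_{n+1} = alpha*t_n + (1 - alpha)*tau_n."""
    tau = tau0
    predictions = []
    for t in bursts:
        predictions.append(tau)        # prediction made before observing burst t
        tau = alpha * t + (1 - alpha) * tau
    return predictions

observed = [6, 4, 6, 4, 13, 13, 13]    # placeholder burst lengths (ms)
print(exponential_average(observed))
```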
Software reliability models (SRMs) are very important for estimating and predicting software reliability in the testing/debugging phase. The contributions of this paper are as follows. First, a historical review of the Gompertz SRM is given. Then, based on several software failure data sets, the parameters of the Gompertz software reliability model are estimated using two estimation methods, the traditional maximum likelihood method and the least squares method. The estimation methods are evaluated using the MSE and R-squared criteria. The results show that least squares estimation is an attractive method in terms of predictive performance and can be used when the maximum likelihood method fails to give good prediction results.
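As a sketch of the least-squares route described above, one common Gompertz parameterization m(t) = a·exp(-b·e^{-ct}) can be fit to cumulative failure counts with scipy's curve_fit; the parameterization and the data below are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """One common Gompertz SRM form: expected cumulative failures by time t."""
    return a * np.exp(-b * np.exp(-c * t))

# Placeholder data: testing weeks and cumulative detected faults.
weeks = np.arange(1, 13, dtype=float)
faults = np.array([4, 9, 17, 27, 38, 48, 56, 62, 66, 69, 71, 72], dtype=float)

params, _ = curve_fit(gompertz, weeks, faults, p0=(80.0, 5.0, 0.3), maxfev=10000)
a, b, c = params
pred = gompertz(weeks, a, b, c)
mse = float(np.mean((faults - pred) ** 2))
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}, MSE={mse:.3f}")
```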
Optimal Selection of Software Reliability Growth Model - A Study - IJEEE
People use software, and sometimes software fails, so researchers try to quantify software reliability and to understand how and why it fails. For this purpose many software reliability models have been developed to estimate the defects in software before delivering it to the customer. Although many models now exist, the main issue remains largely unsolved: how to calculate software reliability efficiently. No single model can be used in every circumstance, because no single model can completely represent all features. This paper describes the circumstances and criteria under which a particular model can be selected.
Determination of Software Release Instant of Three-Tier Client Server Softwar...Waqas Tariq
The quality of any software system mainly depends on how much testing time is spent, what kind of testing methodologies are used, how complex the software is, the amount of effort put in by software developers, and the type of testing environment, subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, leading to more reliable software, but testing cost also increases; conversely, if testing time is too short, software cost can be reduced, provided the customers accept the risk of buying unreliable software. However, this increases cost during the operational phase, since it is more expensive to fix an error in operation than during testing. It is therefore essential to decide when to stop testing and release the software to customers based on cost and reliability assessment. In this paper we present a mechanism for deciding when to stop the testing process and release the software to the end user, by developing a software cost model with a risk factor. Based on the proposed method we specifically address how to decide when to stop testing and release software built on a three-tier client-server architecture, which helps software developers ensure on-time delivery of a software product while achieving a predefined level of reliability and minimizing cost. A numerical example is cited to illustrate the experimental results, showing significant improvements over conventional statistical models based on NHPP.
Predicting disease at an early stage is critical, and the most difficult challenge is to predict it correctly. The prediction is based on an individual's symptoms. The model presented can work like a digital doctor for disease prediction, helping to diagnose disease in time so that the person can take immediate measures. The model is quite accurate in predicting potential ailments. The work was tested with four machine learning algorithms, and the best accuracy was obtained with Random Forest.
A Review on Parameter Estimation Techniques of Software Reliability Growth Mo...Editor IJCATR
Software reliability is considered a quantifiable metric, defined as the probability of software operating without failure for a specified period of time in a specific environment. Various software reliability growth models have been proposed to predict the reliability of software; these models help vendors predict the behaviour of the software before shipment. Reliability is predicted by estimating the parameters of the software reliability growth models, but the model parameters are generally in nonlinear relationships, which creates many problems in finding the optimal parameters using traditional techniques like Maximum Likelihood and Least Squares Estimation. Various stochastic search algorithms have been introduced which make the task of parameter estimation more reliable and computationally easier. Parameter estimation of NHPP-based reliability models, using MLE and using an evolutionary search algorithm called Particle Swarm Optimization, is explored in this paper.
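A minimal sketch of the PSO route to parameter estimation: particles search the (a, b) space of a Goel-Okumoto-style NHPP m(t) = a(1 - e^{-bt}), chosen here only as a familiar example of an NHPP growth model, to minimize the squared error against observed cumulative failures. The data, swarm settings, and choice of model are illustrative assumptions, not the paper's.

```python
import math
import random

# Placeholder observations: times and cumulative failures seen by each time.
times = [10, 20, 40, 60, 90, 130, 180, 250]
cum_failures = [8, 14, 24, 31, 39, 46, 51, 55]

def sse(params):
    """Sum of squared errors between observed counts and the assumed m(t)."""
    a, b = params
    if a <= 0 or b <= 0:
        return float("inf")
    return sum((y - a * (1 - math.exp(-b * t))) ** 2 for t, y in zip(times, cum_failures))

def pso(fitness, bounds, particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO minimizing `fitness` over box `bounds`."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

(a_hat, b_hat), err = pso(sse, bounds=[(1.0, 200.0), (1e-4, 0.1)])
print(f"a = {a_hat:.2f}, b = {b_hat:.4f}, SSE = {err:.2f}")
```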
Similar to Software Process Control on Ungrouped Data: Log-Power Model
The Use of Java Swing's Components to Develop a Widget - Waqas Tariq
A widget is a kind of application that provides a single service, such as a map, news feed, simple clock, or battery-life indicator. This kind of interactive software object has been developed to facilitate user interface (UI) design; a UI function may be implemented using different widgets with the same function. In this article, we present the widget as a platform that is used in various applications, such as the desktop, web browser, and mobile phone. We also describe a visual menu of Java Swing's components that will be used to build a widget, assuming that we have successfully compiled and run a program that uses Swing components.
3D Human Hand Posture Reconstruction Using a Single 2D Image - Waqas Tariq
Passive sensing of the 3D geometric posture of the human hand has been studied extensively over the past decade, but these research efforts have been hampered by the computational complexity caused by inverse kinematics and 3D reconstruction. In this paper, we focus on 3D hand posture estimation from a single 2D image, with robotic applications in mind. We introduce a human hand model with 27 degrees of freedom (DOFs) and analyze some of its constraints to reduce the DOFs without significant degradation of performance. A novel algorithm is proposed to estimate the 3D hand posture from eight 2D projected feature points. Experimental results using real images confirm that our algorithm gives good estimates of the 3D hand pose. Keywords: 3D hand posture estimation; model-based approach; gesture recognition; human-computer interface; machine vision.
Camera as Mouse and Keyboard for Handicap Person with Troubleshooting Ability...Waqas Tariq
The camera mouse has been widely used by handicapped persons to interact with a computer. Most importantly, a camera mouse must be able to replace all the roles of a typical mouse and keyboard: it must provide all mouse click events and keyboard functions (including all shortcut keys) when used by a handicapped person, it must allow users to troubleshoot by themselves, and it must eliminate neck fatigue when used over long periods. In this paper, we propose a camera mouse system with a timer as the left-click event and blinking as the right-click event. We also modify the original on-screen keyboard layout by adding two buttons (a "drag/drop" button for mouse drag-and-drop events and another button to call the task manager for troubleshooting) and by changing the behavior of the CTRL, ALT, SHIFT, and CAPS LOCK keys in order to provide keyboard shortcuts. We also develop a recovery method that allows users to leave the camera and come back again, to eliminate neck fatigue. Experiments involving several users were carried out in our laboratory. The results show that our camera mouse allows users to type, perform left- and right-click events, drag and drop, and troubleshoot without using their hands. With this system, handicapped persons can use a computer more comfortably and reduce dryness of the eyes.
A Proposed Web Accessibility Framework for the Arab Disabled - Waqas Tariq
The Web is providing unprecedented access to information and interaction for people with disabilities. This paper presents a Web accessibility framework that offers ease of Web access for disabled Arab users and facilitates their lifelong learning as well. The proposed framework provides the disabled Arab user with an easy means of access using their mother language, so they do not have to overcome the barrier of learning the target spoken language. The framework is based on analyzing the web page meta-language, extracting its content, and reformulating it in a format suitable for disabled users. Its basic objective is to support the equal right of disabled Arab people to access education and training alongside non-disabled people. Key words: Arabic Moon code, Arabic Sign Language, Deaf, Deaf-blind, E-learning Interactivity, Moon code, Web accessibility, Web framework, Web System, WWW.
Real Time Blinking Detection Based on Gabor Filter - Waqas Tariq
A new blink detection method is proposed. Above all, a blink detection method must be robust against different users, noise, and changes in eye shape. In this paper, we propose a blink detection method that measures the distance between the two arcs of the eye (the upper part and the lower part). We detect the eye arcs by applying a Gabor filter to the eye image; the Gabor filter has an advantage in image processing applications since it extracts spatially localized spectral features, so lines, arcs, and other shapes are more easily detected. After the two eye arcs are detected, we measure the distance between them using a connected-component labeling method. An open eye is indicated when the distance between the two arcs is greater than a threshold, and a closed eye when the distance is less than the threshold. The experimental results show that our proposed method is robust against different users, noise, and eye shape changes, with high accuracy.
Computer Input with Human Eyes-Only Using Two Purkinje Images Which Works in ...Waqas Tariq
A method for computer input using the eyes only, based on two Purkinje images, which works in real time without calibration, is proposed. Experimental results show that cornea curvature can be estimated using Purkinje images derived from two light sources, so no calibration is needed to compensate for person-to-person differences in cornea curvature. The proposed system allows user movements of 30 degrees in the roll direction and 15 degrees in the pitch direction by utilizing the detected face attitude, which is derived from the face plane defined by three feature points on the face: the two eyes and the nose or mouth. It is also found that the proposed system works in real time.
Toward a More Robust Usability concept with Perceived Enjoyment in the contex...Waqas Tariq
Mobile multimedia services are relatively new but have quickly come to dominate people's lives, especially among young people. To explain this popularity, this study applies and modifies the Technology Acceptance Model (TAM) to propose a research model and conduct an empirical study. The goal of the study is to examine the role of Perceived Enjoyment (PE) and which determinants contribute to PE in the context of using mobile multimedia services. The results indicate that PE influences Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), and directly influences Behavioral Intention (BI). Aesthetics and flow are key determinants explaining Perceived Enjoyment (PE) in mobile multimedia usage.
Collaborative Learning of Organisational Knowledge - Waqas Tariq
This paper presents recent research into methods used in Australian Indigenous knowledge sharing and looks at how these can support the creation of suitable collaborative environments for timely organisational learning. The protocols and practices used today and in the past by Indigenous communities are presented and discussed in relation to their relevance to a personalised system of knowledge sharing in modern organisational cultures. This research focuses on user models, knowledge acquisition, and integration of data for constructivist learning in a networked repository of organisational knowledge. The data collected in the repository is searched to provide collections of up-to-date and relevant material for training in a work environment. The aim is to improve knowledge collection and sharing in a team environment; this knowledge can then be collated into a story or workflow that represents the present knowledge in the organisation.
Our research aims to propose a global approach for the specification, design, and verification of context-aware human-computer interfaces (HCI). This is a model-based design (MBD) approach. The methodology describes the ubiquitous environment with ontologies, using the OWL standard. The specification and modeling of human-computer interaction are based on Petri nets (PN), which raises the question of representing Petri nets in XML; for this we use the PNML modeling standard. In this paper, we propose an extension of this standard for the specification, generation, and verification of HCIs. The extension is a methodological approach to constructing PNML from Petri nets, whose design principle uses the composition of elementary Petri net structures as modular PNML. The objective is to obtain a valid interface through verification of the properties of the elementary Petri nets represented in PNML.
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 BoardWaqas Tariq
The main aim of this paper is to build a system that is capable of detecting and recognizing the hand gesture in an image captured by using a camera. The system is built based on Altera’s FPGA DE2 board, which contains a Nios II soft core processor. Image processing techniques and a simple but effective algorithm are implemented to achieve this purpose. Image processing techniques are used to smooth the image in order to ease the subsequent processes in translating the hand sign signal. The algorithm is built for translating the numerical hand sign signal and the result are displayed on the seven segment display. Altera’s Quartus II, SOPC Builder and Nios II EDS software are used to construct the system. By using SOPC Builder, the related components on the DE2 board can be interconnected easily and orderly compared to traditional method that requires lengthy source code and time consuming. Quartus II is used to compile and download the design to the DE2 board. Then, under Nios II EDS, C programming language is used to code the hand sign translation algorithm. Being able to recognize the hand sign signal from images can helps human in controlling a robot and other applications which require only a simple set of instructions provided a CMOS sensor is included in the system.
An overview on Advanced Research Works on Brain-Computer InterfaceWaqas Tariq
A brain–computer interface (BCI) is a proficient result in the research field of human- computer synergy, where direct articulation between brain and an external device occurs resulting in augmenting, assisting and repairing human cognitive. Advanced works like generating brain-computer interface switch technologies for intermittent (or asynchronous) control in natural environments or developing brain-computer interface by Fuzzy logic Systems or by implementing wavelet theory to drive its efficacies are still going on and some useful results has also been found out. The requirements to develop this brain machine interface is also growing day by day i.e. like neuropsychological rehabilitation, emotion control, etc. An overview on the control theory and some advanced works on the field of brain machine interface are shown in this paper.
Exploring the Relationship Between Mobile Phone and Senior Citizens: A Malays...Waqas Tariq
There is growing ageing phenomena with the rise of ageing population throughout the world. According to the World Health Organization (2002), the growing ageing population indicates 694 million, or 223% is expected for people aged 60 and over, since 1970 and 2025.The growth is especially significant in some advanced countries such as North America, Japan, Italy, Germany, United Kingdom and so forth. This growing older adult population has significantly impact the social-culture, lifestyle, healthcare system, economy, infrastructure and government policy of a nation. However, there are limited research studies on the perception and usage of a mobile phone and its service for senior citizens in a developing nation like Malaysia. This paper explores the relationship between mobile phones and senior citizens in Malaysia from the perspective of a developing country. We conducted an exploratory study using contextual interviews with 5 senior citizens of how they perceive their mobile phones. This paper reveals 4 interesting themes from this preliminary study, in addition to the findings of the desirable mobile requirements for local senior citizens with respect of health, safety and communication purposes. The findings of this study bring interesting insight to local telecommunication industries as a whole, and will also serve as groundwork for more in-depth study in the future.
Principles of Good Screen Design in WebsitesWaqas Tariq
Visual techniques for proper arrangement of the elements on the user screen have helped the designers to make the screen look good and attractive. Several visual techniques emphasize the arrangement and ordering of the screen elements based on particular criteria for best appearance of the screen. This paper investigates few significant visual techniques in various web user interfaces and showcases the results for better understanding and their presence.
Virtual teams are used more and more by companies and other organizations to receive benefits. They are a great way to enable teamwork in situations where people are not sitting in the same physical place at the same time. As companies seek to increase the use of virtual teams, a need exists to explore the context of these teams, the virtuality of a team and software that may help these teams working virtualy. Virtual teams have the same basic principles as traditional teams, but there is one big difference. This difference is the way the team members communicate. Instead of using the dynamics of in-office face-to-face exchange, they now rely on special communication channels enabled by modern technologies, such as e-mails, faxes, phone calls and teleconferences, virtual meetings etc. This is why this paper is focused on the issues regarding virtual teams, and how these teams are created and progressing in Albania.
Cognitive Approach Towards the Maintenance of Web-Sites Through Quality Evalu...Waqas Tariq
It is a well established fact that the Web-Applications require frequent maintenance because of cutting– edge business competitions. The authors have worked on quality evaluation of web-site of Indian ecommerce domain. As a result of that work they have made a quality-wise ranking of these sites. According to their work and also the survey done by various other groups Futurebazaar web-site is considered to be one of the best Indian e-shopping sites. In this research paper the authors are assessing the maintenance of the same site by incorporating the problems incurred during this evaluation. This exercise gives a real world maintainability problem of web-sites. This work will give a clear picture of all the quality metrics which are directly or indirectly related with the maintainability of the web-site.
USEFul: A Framework to Mainstream Web Site Usability through Automated Evalua...Waqas Tariq
A paradox has been observed whereby web site usability is proven to be an essential element in a web site, yet at the same time there exist an abundance of web pages with poor usability. This discrepancy is the result of limitations that are currently preventing web developers in the commercial sector from producing usable web sites. In this paper we propose a framework whose objective is to alleviate this problem by automating certain aspects of the usability evaluation process. Mainstreaming comes as a result of automation, therefore enabling a non-expert in the field of usability to conduct the evaluation. This results in reducing the costs associated with such evaluation. Additionally, the framework allows the flexibility of adding, modifying or deleting guidelines without altering the code that references them since the guidelines and the code are two separate components. A comparison of the evaluation results carried out using the framework against published evaluations of web sites carried out by web site usability professionals reveals that the framework is able to automatically identify the majority of usability violations. Due to the consistency with which it evaluates, it identified additional guideline-related violations that were not identified by the human evaluators.
Robot Arm Utilized Having Meal Support System Based on Computer Input by Huma...Waqas Tariq
A robot arm utilized having meal support system based on computer input by human eyes only is proposed. The proposed system is developed for handicap/disabled persons as well as elderly persons and tested with able persons with several shapes and size of eyes under a variety of illumination conditions. The test results with normal persons show the proposed system does work well for selection of the desired foods and for retrieve the foods as appropriate as usersf requirements. It is found that the proposed system is 21% much faster than the manually controlled robotics.
Dynamic Construction of Telugu Speech Corpus for Voice Enabled Text EditorWaqas Tariq
In recent decades speech interactive systems have gained increasing importance. Performance of an ASR system mainly depends on the availability of large corpus of speech. The conventional method of building a large vocabulary speech recognizer for any language uses a top-down approach to speech. This approach requires large speech corpus with sentence or phoneme level transcription of the speech utterances. The transcriptions must also include different speech order so that the recognizer can build models for all the sounds present. But, for Telugu language, because of its complex nature, a very large, well annotated speech database is very difficult to build. It is very difficult, if not impossible, to cover all the words of any Indian language, where each word may have thousands and millions of word forms. A significant part of grammar that is handled by syntax in English (and other similar languages) is handled within morphology in Telugu. Phrases including several words (that is, tokens) in English would be mapped on to a single word in Telugu.Telugu language is phonetic in nature in addition to rich in morphology. That is why the speech technology developed for English cannot be applied to Telugu language. This paper highlights the work carried out in an attempt to build a voice enabled text editor with capability of automatic term suggestion. Main claim of the paper is the recognition enhancement process developed by us for suitability of highly inflecting, rich morphological languages. This method results in increased speech recognition accuracy with very much reduction in corpus size. It also adapts Telugu words to the database dynamically, resulting in growth of the corpus.
An Improved Approach for Word Ambiguity RemovalWaqas Tariq
Word ambiguity removal is a task of removing ambiguity from a word, i.e. correct sense of word is identified from ambiguous sentences. This paper describes a model that uses Part of Speech tagger and three categories for word sense disambiguation (WSD). Human Computer Interaction is very needful to improve interactions between users and computers. For this, the Supervised and Unsupervised methods are combined. The WSD algorithm is used to find the efficient and accurate sense of a word based on domain information. The accuracy of this work is evaluated with the aim of finding best suitable domain of word. Keywords: Human Computer Interaction, Supervised Training, Unsupervised Learning, Word Ambiguity, Word sense disambiguation
Parameters Optimization for Improving ASR Performance in Adverse Real World N...Waqas Tariq
From the existing research it has been observed that many techniques and methodologies are available for performing every step of Automatic Speech Recognition (ASR) system, but the performance (Minimization of Word Error Recognition-WER and Maximization of Word Accuracy Rate- WAR) of the methodology is not dependent on the only technique applied in that method. The research work indicates that, performance mainly depends on the category of the noise, the level of the noise and the variable size of the window, frame, frame overlap etc is considered in the existing methods. The main aim of the work presented in this paper is to use variable size of parameters like window size, frame size and frame overlap percentage to observe the performance of algorithms for various categories of noise with different levels and also train the system for all size of parameters and category of real world noisy environment to improve the performance of the speech recognition system. This paper presents the results of Signal-to-Noise Ratio (SNR) and Accuracy test by applying variable size of parameters. It is observed that, it is really very hard to evaluate test results and decide parameter size for ASR performance improvement for its resultant optimization. Hence, this study further suggests the feasible and optimum parameter size using Fuzzy Inference System (FIS) for enhancing resultant accuracy in adverse real world noisy environmental conditions. This work will be helpful to give discriminative training of ubiquitous ASR system for better Human Computer Interaction (HCI). Keywords: ASR Performance, ASR Parameters Optimization, Multi-Environmental Training, Fuzzy Inference System for ASR, ubiquitous ASR system, Human Computer Interaction (HCI)
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxEduSkills OECD
Andreas Schleicher presents at the OECD webinar ‘Digital devices in schools: detrimental distraction or secret to success?’ on 27 May 2024. The presentation was based on findings from PISA 2022 results and the webinar helped launch the PISA in Focus ‘Managing screen time: How to protect and equip students against distraction’ https://www.oecd-ilibrary.org/education/managing-screen-time_7c225af4-en and the OECD Education Policy Perspective ‘Students, digital devices and success’ can be found here - https://oe.cd/il/5yV
The Art Pastor's Guide to Sabbath | Steve ThomasonSteve Thomason
What is the purpose of the Sabbath Law in the Torah. It is interesting to compare how the context of the law shifts from Exodus to Deuteronomy. Who gets to rest, and why?
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
This is a presentation by Dada Robert in a Your Skill Boost masterclass organised by the Excellence Foundation for South Sudan (EFSS) on Saturday, the 25th and Sunday, the 26th of May 2024.
He discussed the concept of quality improvement, emphasizing its applicability to various aspects of life, including personal, project, and program improvements. He defined quality as doing the right thing at the right time in the right way to achieve the best possible results and discussed the concept of the "gap" between what we know and what we do, and how this gap represents the areas we need to improve. He explained the scientific approach to quality improvement, which involves systematic performance analysis, testing and learning, and implementing change ideas. He also highlighted the importance of client focus and a team approach to quality improvement.
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
Software Process Control on Ungrouped Data: Log-Power Model
International Journal of Software Engineering (IJSE), Volume (5) : Issue (1) : 2014
Satya Prasad Ravi profrsp@gmail.com
Associate Professor,
Dept. of Computer Science & Engg.,
Acharya Nagarjuna University
Andhra Pradesh, India
B. Indira Reddy Indira2259@yahoo.co.in
Associate Professor,
St. Pauls College of Management & IT
Hyderabad, India.
Krishna Mohan Gonuguntla km_mm_2000@yahoo.com
Reader, Dept. of Computer Science,
P.B.Siddhartha College
Andhra Pradesh, India.
Abstract
Statistical Process Control (SPC) is the best choice for monitoring the software reliability process. It assists the software development team in identifying failures and the actions to be taken during the software failure process and hence assures better software reliability. In this paper we propose a control mechanism based on cumulative observations of failures (ungrouped data), using the infinite-failure mean value function of the Log-Power model, which is based on a Non-Homogeneous Poisson Process (NHPP). The Maximum Likelihood Estimation (MLE) approach is used to estimate the unknown parameters of the model.
Keywords: MLE, SPC, Log-Power, Ungrouped Data.
1. INTRODUCTION
Many software reliability models have been proposed in the last 40 years to compute the reliability growth of products during the software development phase. These models can be of two types, i.e.,
static and dynamic. A static model uses software metrics to estimate the number of defects in the
software. A dynamic model uses the past failure discovery rate during software execution over
time to estimate the number of failures. Various software reliability growth models (SRGMs) exist
to estimate the expected number of total defects (or failures) or the expected number of
remaining defects (or failures).
The goal of software engineering is to produce high-quality software at low cost. As human beings are involved in the development of software, there is a possibility of errors in the software. To identify and eliminate human errors in the software development process, and also to improve software reliability, Statistical Process Control concepts and methods are the best choice. SPC concepts and methods are used to monitor the performance of a software process over time in order to verify that the process remains in a state of statistical control. SPC helps in finding assignable causes and in making long-term improvements in the software process. Software quality and reliability can be achieved by eliminating such causes or by improving the software process or its operating procedures [1].
The most popular technique for maintaining process control is control charting. The control chart is one of the seven basic tools of quality control. Software process control is used to ensure that the
quality of the final product conforms to predefined standards. In any process, regardless of how carefully it is maintained, a certain amount of natural variability will always exist. A process is said to be statistically "in control" when it operates with only chance causes of variation. On the other hand, when assignable causes are present, the process is said to be statistically "out of control". Control charts should be capable of raising an alarm when a shift occurs in the level of one or more parameters of the underlying distribution or when non-random behaviour appears. Normally, such a situation is reflected in the control chart by points plotted outside the control limits or by the presence of specific patterns. The most common non-random patterns are cycles, trends, mixtures and stratification [2]. For a process to be in control, the control chart should not show any trend or non-random pattern. The selection of proper SPC charts is essential to effective statistical process control implementation and use. The SPC chart selection is based on the data, the situation and the need [3].
Chan et al. [4] proposed a procedure based on monitoring a cumulative quantity. This approach has been shown to have a number of advantages: it does not involve the choice of a sample size; it raises fewer false alarms; it can be used in any environment; and it can detect further process improvement. Xie et al. [5] proposed the t-chart for reliability monitoring, where the control limits are defined in such a manner that the process is considered to be out of control when the time to one failure is less than the LCL or greater than the UCL. The control limits were defined assuming an acceptable false alarm probability of 0.0027. In Section 5 of the present paper, a method is presented for estimating the parameters and defining the limits. Process control is assessed by taking the successive differences of the mean values.
2. BACKGROUND THEORY
This section presents the theory that underlies NHPP models, the SRGMs under consideration
and maximum likelihood estimation for ungrouped data. Let $t$ be a continuous random variable with pdf $f(t; \theta_1, \theta_2, \ldots, \theta_k)$, where $\theta_1, \theta_2, \ldots, \theta_k$ are $k$ unknown constant parameters which need to be estimated, and cdf $F(t)$. The mathematical relationship between the pdf and the cdf is given by $f(t) = F'(t)$. Let $a$ denote the expected number of faults that would be detected given infinite testing time. Then, the mean value function of the NHPP models can be written as $m(t) = a F(t)$, where $F(t)$ is a cumulative distribution function. The failure intensity function $\lambda(t)$ of NHPP models is given by $\lambda(t) = a F'(t)$ [6].
2.1. NHPP model
Non-Homogeneous Poisson Process (NHPP) based software reliability growth models (SRGMs) have proved to be quite successful in practical software reliability engineering [7]. The main issue in an NHPP model is to determine an appropriate mean value function to denote the expected number of failures experienced up to a certain time point. Model parameters can be estimated using Maximum Likelihood Estimation (MLE). Various NHPP SRGMs have been built upon various assumptions. Many of the SRGMs assume that each time a failure occurs, the fault that caused it can be immediately removed and no new faults are introduced, which is usually called perfect debugging. Imperfect debugging models relax this assumption [8][9].
2.2. Model under consideration: Log-Power model
Software reliability growth models (SRGMs) are useful for assessing reliability for quality management and testing-progress control of software development. They have been grouped into two classes of models: concave and S-shaped. The most important property shared by both classes is the same asymptotic behavior, i.e., the defect detection rate decreases as the number of defects detected (and repaired) increases, and the total number of defects detected asymptotically approaches a finite value. The Log-Power NHPP model has several interesting properties, such as simple graphical interpretations and simple forms of the maximum likelihood estimates of the parameters. This model is characterized by the following mean value function:
$$m(t) = a\,[\log(1+t)]^{b}, \qquad a > 0,\; b > 0,\; t \ge 0.$$

The failure intensity function of the model, defined as the derivative of the mean value function $m(t)$, is given by

$$\lambda(t) = \frac{a\,b\,[\log(1+t)]^{b-1}}{1+t}.$$
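To make these expressions concrete, the sketch below (our own illustration; the function names are not from the paper) evaluates $m(t)$ and $\lambda(t)$ in Python for given parameter values.

```python
import math

def log_power_mean(t, a, b):
    """Mean value function of the Log-Power model: m(t) = a * [log(1 + t)]^b."""
    return a * math.log(1.0 + t) ** b

def log_power_intensity(t, a, b):
    """Failure intensity: lambda(t) = a * b * [log(1 + t)]^(b - 1) / (1 + t)."""
    return a * b * math.log(1.0 + t) ** (b - 1) / (1.0 + t)

# Example with the DS #1 estimates reported later in Table 5.1.1
a, b = 0.022149, 3.747340
print(log_power_mean(296, a, b))       # expected failures by t = 296 h, roughly 15
print(log_power_intensity(296, a, b))  # failure intensity at the end of testing
```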
3. MAXIMUM LIKELIHOOD ESTIMATION
In much of the literature the preferred method of obtaining parameter estimates is to use the
maximum likelihood equations. Likelihood equations are derived from the model equations and
the assumptions which underlie these equations. The parameters are then taken to be those
values which maximize these likelihood functions. These values are found by taking the partial
derivatives of the likelihood function with respect to the model parameters, the maximum likelihood
equations, and setting them to zero. Iterative routines are then used to solve these equations.
Unfortunately, the SRGM literature offers little advice on which iterative routines to use and with what starting values, even though the accuracy of parameter estimates, and thus the accuracy of the models themselves, greatly depends on the ability of the iterative search methods to overcome local minima and find good values for the parameters.
If we conduct an experiment and obtain $N$ independent observations $t_1, t_2, \ldots, t_N$, the likelihood function may be given by the following product:

$$L\left(t_1, t_2, \ldots, t_N \mid \theta_1, \theta_2, \ldots, \theta_k\right) = \prod_{i=1}^{N} f(t_i; \theta_1, \theta_2, \ldots, \theta_k)$$
The likelihood function in terms of $\lambda(t)$ is:

$$L = e^{-m(t_n)} \prod_{i=1}^{n} \lambda(t_i)$$
The log-likelihood function for ungrouped data [10] is given as:

$$\log L = \sum_{i=1}^{n} \log \lambda(t_i) - m(t_n)$$
The maximum likelihood estimators (MLEs) of $\theta_1, \theta_2, \ldots, \theta_k$ are obtained by maximizing $L$ or $\Lambda$, where $\Lambda = \ln L$. By maximizing $\Lambda$, which is much easier to work with than $L$, the maximum likelihood estimators of $\theta_1, \theta_2, \ldots, \theta_k$ are the simultaneous solutions of the $k$ equations

$$\frac{\partial \Lambda}{\partial \theta_j} = 0, \qquad j = 1, 2, \ldots, k.$$
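Because these equations generally require iterative solution, a direct numerical maximization is a convenient cross-check. The sketch below is a minimal illustration, not the authors' implementation: it maximizes the ungrouped-data log-likelihood $\Lambda = \sum_i \log\lambda(t_i) - m(t_n)$ for the Log-Power model of Section 2.2 with scipy; the starting values are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, times):
    """Negative ungrouped-data log-likelihood for the Log-Power model."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf                      # keep the search inside the valid region
    t = np.asarray(times, dtype=float)
    log1p_t = np.log1p(t)
    # log lambda(t_i) = log a + log b + (b - 1) log log(1 + t_i) - log(1 + t_i)
    log_lam = np.log(a) + np.log(b) + (b - 1) * np.log(log1p_t) - np.log(1.0 + t)
    m_tn = a * log1p_t[-1] ** b            # m(t_n)
    return -(log_lam.sum() - m_tn)

# Cumulative failure times for DS #1 (built from Table 4.1)
times = np.cumsum([10, 9, 13, 11, 15, 12, 18, 15, 22, 25, 19, 30, 32, 25, 40])
result = minimize(neg_log_likelihood, x0=[0.1, 1.0], args=(times,), method="Nelder-Mead")
print(result.x)   # should land close to the DS #1 values in Table 5.1.1
```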
3.1. Illustration: Parameter Estimation
We used cumulative time-between-failures data for software reliability monitoring. The use of cumulative quantities is a different and new approach, which is of particular advantage in reliability monitoring. Using the estimators of $a$ and $b$ we can compute $m(t)$.
The likelihood function of the Log-Power model is given as:

$$L = e^{-a[\log(1+t_n)]^{b}} \prod_{i=1}^{n} \frac{a\,b\,[\log(1+t_i)]^{b-1}}{1+t_i} \qquad (3.1.1)$$
Taking the natural logarithm on both sides, the log-likelihood function is given as:

$$\log L = \sum_{i=1}^{n} \log\!\left(\frac{a\,b\,[\log(1+t_i)]^{b-1}}{1+t_i}\right) - a\,[\log(1+t_n)]^{b}. \qquad (3.1.2)$$
Taking the partial derivative of $\log L$ with respect to $a$ and equating it to 0 gives:

$$a = \frac{n}{[\log(1+t_n)]^{b}} \qquad (3.1.3)$$
Taking the partial derivative of $\log L$ with respect to $b$ and equating it to 0 gives:

$$b = \frac{n}{\,n \log\log(1+t_n) - \sum_{i=1}^{n}\log\log(1+t_i)\,} \qquad (3.1.4)$$
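Equations (3.1.3) and (3.1.4) give the estimators in closed form, so no iterative routine is needed for this particular model. A minimal sketch applying them (the helper name and the data handling are ours, not the paper's):

```python
import math

def log_power_mle(cumulative_times):
    """Closed-form MLEs of the Log-Power parameters, equations (3.1.3) and (3.1.4).

    cumulative_times: cumulative failure times t_1 < t_2 < ... < t_n.
    Returns the pair (a_hat, b_hat).
    """
    n = len(cumulative_times)
    t_n = cumulative_times[-1]
    loglog = [math.log(math.log(1.0 + t)) for t in cumulative_times]
    b_hat = n / (n * math.log(math.log(1.0 + t_n)) - sum(loglog))  # eq. (3.1.4)
    a_hat = n / math.log(1.0 + t_n) ** b_hat                       # eq. (3.1.3)
    return a_hat, b_hat

# DS #1: cumulative times built from the inter-failure times of Table 4.1
tbf = [10, 9, 13, 11, 15, 12, 18, 15, 22, 25, 19, 30, 32, 25, 40]
cum = [sum(tbf[:i + 1]) for i in range(len(tbf))]
print(log_power_mle(cum))   # roughly (0.022, 3.75), cf. Table 5.1.1
```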
4. TIME DOMAIN FAILURE DATA SETS
The techniques examined here deal with data about the time at which failures occurred; or
alternatively, data about the time between failure occurrences. These two forms can be
considered equivalent. Although most software reliability growth models use data of this form,
and such models have been in use for several decades, finding suitable data to verify models and
improvement techniques is difficult. Early work generally focused on data based on calendar or
wall clock time. Musa asserts that CPU execution time is a better measure than wall clock time,
during which the actually time spent running a program can vary greatly based on CPU load, man
hours, and other factors [11].
DS #1: On-Line Data Entry IBM Software Package
The data reported by Ohba [8] were recorded from testing an on-line data entry software package developed at IBM. The following table shows the failure numbers and the inter-failure times.
Failure Number    Time Between Failures (h)
1                 10
2                 9
3                 13
4                 11
5                 15
6                 12
7                 18
8                 15
9                 22
10                25
11                19
12                30
13                32
14                25
15                40
TABLE 4.1: DS #1.
DS #2: AT&T System T Project
AT&T's System T is a network-management system developed by AT&T that receives data from telemetry events, such as alarms, facility-performance information and diagnostic messages, and forwards them to operators for further action. The system was tested and failure data were collected [12]. The following table shows the failure numbers and the inter-failure times (in CPU units).
Failure Number    Inter-Failure Time (CPU units)
1                 5.5
2                 1.83
3                 2.75
4                 70.89
5                 3.94
6                 14.98
7                 3.47
8                 9.96
9                 11.39
10                19.88
11                7.81
12                14.6
13                11.41
14                18.94
15                65.3
16                0.04
17                125.67
18                82.69
19                0.46
20                31.61
21                129.31
22                47.6
TABLE 4.2: DS #2.
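Because the two representations are equivalent, the inter-failure times of Tables 4.1 and 4.2 can be converted to the cumulative failure times required by the estimators of Section 3. A short illustrative snippet (the data are transcribed from the tables above; variable names are ours):

```python
import numpy as np

# Inter-failure times from Table 4.1 (hours) and Table 4.2 (CPU units)
ds1_tbf = [10, 9, 13, 11, 15, 12, 18, 15, 22, 25, 19, 30, 32, 25, 40]
ds2_tbf = [5.5, 1.83, 2.75, 70.89, 3.94, 14.98, 3.47, 9.96, 11.39, 19.88,
           7.81, 14.6, 11.41, 18.94, 65.3, 0.04, 125.67, 82.69, 0.46,
           31.61, 129.31, 47.6]

# Cumulative failure times t_1, ..., t_n
ds1_times = np.cumsum(ds1_tbf)
ds2_times = np.cumsum(ds2_tbf)
print(ds1_times[-1], ds2_times[-1])   # total observation times: 296 h and about 680 CPU units
```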
5. RESULTS
The performance of the model under consideration is exemplified by applying it to the data sets given in Tables 4.1 and 4.2.
5.1 Calculation of Control Limits
The control limits for the chart are defined in such a manner that the process is considered to be out of control when the time to observe exactly one failure is less than the LCL or greater than the UCL. Our aim is to monitor the failure process and detect any change in the intensity parameter. Even when the process is normal, there is a small chance of such a signal, commonly known as a false alarm. The traditional false alarm probability is set to 0.27%, although any other false alarm probability can be used; the acceptable false alarm probability should in fact depend on the actual product or process [13].
$$T_U = [\log(1+t_U)]^{b} = 0.99865, \qquad T_C = [\log(1+t_C)]^{b} = 0.5, \qquad T_L = [\log(1+t_L)]^{b} = 0.00135$$
Data set    a           b           m(t_U)      m(t_C)      m(t_L)
DS#1        0.022149    3.747340    0.022136    0.016392    0.001256
DS#2        0.073480    3.040257    0.073437    0.054379    0.004168

TABLE 5.1.1: Estimated Parameters and the Control Limits.
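Assuming the limits follow the equations reconstructed above, $m(t)$ at the three points reduces to $a$ times 0.99865, 0.5 and 0.00135, and the corresponding times follow by inverting $[\log(1+t)]^{b} = q$. The sketch below reflects our reading of the procedure rather than code from the paper; small numerical differences from Table 5.1.1 may arise from rounding of the estimates.

```python
import math

def control_limits(a, b, alpha=0.00135):
    """Control limits of the Log-Power failure control chart, following the
    limit equations [log(1 + t)]^b = 1 - alpha, 0.5, alpha (as reconstructed above)."""
    limits = {}
    for name, q in (("UCL", 1.0 - alpha), ("CL", 0.5), ("LCL", alpha)):
        t_q = math.exp(q ** (1.0 / b)) - 1.0   # time at which [log(1 + t)]^b = q
        limits[name] = (t_q, a * q)            # m(t_q) = a * q
    return limits

# DS #1 estimates from Table 5.1.1
for name, (t_q, m_q) in control_limits(0.022149, 3.747340).items():
    print(name, round(t_q, 4), round(m_q, 6))
```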
5.2 Distribution of Failures
The $m(t)$ values were calculated at each cumulative value of $t$. The successive differences of these values are calculated and plotted as a failure control chart along with the calculated control limits, which vary with the data considered. Tables 5.2.1 and 5.2.2 and the charts in Figures 5.2.1 and 5.2.2 show the performance of the Log-Power model in software process control.
FN m(t) SD FN m(t) SD FN m(t) SD
1 0.587104 0.764908 6 5.069872 1.082693 11 10.191812 1.262075
2 1.352012 1.060919 7 6.152565 0.838939 12 11.453888 1.249374
3 2.412931 0.832128 8 6.991504 1.145406 13 12.703262 0.917759
4 3.245060 1.047639 9 8.136909 1.201109 14 13.621021 1.378979
5 4.292699 0.777173 10 9.338018 0.853794 15 15.000000
TABLE 5.2.1: Mean Values m(t) and their Successive Differences (SD) for DS #1 (IBM); FN denotes the failure number.
FN m(t) SD FN m(t) SD FN m(t) SD
1 0.494209 0.227281 9 8.843671 0.842176 17 16.753902 1.699392
2 0.721490 0.337608 10 9.685847 0.312257 18 18.453294 0.008876
3 1.059098 5.614150 11 9.998104 0.559258 19 18.462171 0.596590
4 6.673248 0.218518 12 10.557362 0.417022 20 19.058761 2.206733
5 6.891766 0.784428 13 10.974384 0.658035 21 21.265494 0.734506
6 7.676194 0.172320 14 11.632419 2.008932 22 22.000000
7 7.848515 0.477406 15 13.641351 0.001129
8 8.325921 0.517750 16 13.642479 3.111423
TABLE 5.2.2: Mean Values m(t) and their Successive Differences (SD) for DS #2 (AT&T); FN denotes the failure number.
[Figure: failure control chart for DS #1, plotting the successive differences of mean values (log scale) against the failure number, with UCL = 0.022136, CL = 0.016392 and LCL = 0.001256.]

FIGURE 5.2.1: Failure Control Chart for DS #1.
[Figure: failure control chart for DS #2, plotting the successive differences of mean values (log scale) against the failure number, with UCL = 0.073437, CL = 0.054379 and LCL = 0.004168.]

FIGURE 5.2.2: Failure Control Chart for DS #2.
A point below the lower control limit $m(t_L)$ indicates an alarming signal. A point above the upper control limit $m(t_U)$ indicates better quality. If the points fall within the control limits, the software process is stable. Figures 5.2.1 and 5.2.2 were obtained by placing the successive differences of mean values from Tables 5.2.1 and 5.2.2 on the y-axis, the failure number on the x-axis, and the control limits on the control chart. Software quality is determined by detecting failures at an early stage.
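The chart logic just described, successive differences of $m(t)$ compared against the limits of Table 5.1.1, can be reproduced in a few lines. The sketch below is our own illustration (names and output format are assumptions, not the paper's):

```python
import numpy as np

def failure_control_chart(cumulative_times, a, b, ucl, lcl):
    """Successive differences of m(t) at the observed failure times,
    flagged against the upper and lower control limits."""
    t = np.asarray(cumulative_times, dtype=float)
    m = a * np.log1p(t) ** b                  # m(t_i) at each cumulative failure time
    diffs = np.diff(m)                        # successive differences plotted on the chart
    flags = ["above UCL" if d > ucl else "below LCL" if d < lcl else "in control"
             for d in diffs]
    return list(zip(range(1, len(diffs) + 1), diffs, flags))

# DS #2 with the estimates and limits of Table 5.1.1
ds2_times = np.cumsum([5.5, 1.83, 2.75, 70.89, 3.94, 14.98, 3.47, 9.96, 11.39,
                       19.88, 7.81, 14.6, 11.41, 18.94, 65.3, 0.04, 125.67,
                       82.69, 0.46, 31.61, 129.31, 47.6])
for fn, d, flag in failure_control_chart(ds2_times, 0.073480, 3.040257,
                                         ucl=0.073437, lcl=0.004168):
    print(fn, round(d, 6), flag)   # point 15 should be flagged below the LCL
```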
6. CONCLUSION
The given time-between-failures data are plotted, through the estimated mean value function, against the failure serial order. The charts show out-of-control signals, i.e., points falling outside the control limits. Hence we conclude that our method of estimation and the control chart give a positive recommendation for their use in identifying a preferable control process or a desirable out-of-control signal. From the control charts it is identified that for DS#1 the failure process lies above the UCL, while for DS#2 an out-of-control situation is detected at the 15th point, which falls below the LCL. Hence our proposed control chart detects the out-of-control situation. The performance of this model is compared with the Rayleigh and exponential models [14][15]. Many of the successive differences fall above the
upper control limit for the present model, whereas in the case of the exponential and Rayleigh models the successive differences fall below the lower control limit or remain within the upper and lower control limits.
7. REFERENCES
[1] Kimura, M., Yamada, S., Osaki, S. (1995). "Statistical software reliability prediction and its applicability based on mean time between failures". Mathematical and Computer Modelling, Volume 22, Issues 10-12, pp. 149-155.

[2] Koutras, M.V., Bersimis, S., Maravelakis, P.E. (2007). "Statistical process control using Shewhart control charts with supplementary runs rules". Springer Science + Business Media, 9:207-224.

[3] MacGregor, J.F., Kourti, T. (1995). "Statistical process control of multivariate processes". Control Engineering Practice, Volume 3, Issue 3, March 1995, pp. 403-414.

[4] Chan, L.Y., Xie, M., Goh, T.N. (2000). "Cumulative quality control charts for monitoring production processes". Int. J. Prod. Res., 38(2):397-408.

[5] Xie, M., Goh, T.N., Ranjan, P. (2002). "Some effective control chart procedures for reliability monitoring". Reliability Engineering and System Safety, 77, 143-150.

[6] Gokhale, Swapna S., Trivedi, Kishore S. (1998). "Log-Logistic Software Reliability Growth Model". The 3rd IEEE International Symposium on High-Assurance Systems Engineering, IEEE Computer Society.

[7] Musa, J.D., Iannino, A., Okumoto, K. (1987). "Software Reliability - Measurement, Prediction, Application". New York.

[8] Ohba, M. (1984). "Software reliability analysis model". IBM J. Res. Develop., 28, 428-443.

[9] Pham, H. (1993). "Software reliability assessment: Imperfect debugging and multiple failure types".

[10] Pham, H. (2006). "System Software Reliability". Springer.

[11] Musa, J.D. (1975). "A Theory of Software Reliability and its Applications". IEEE Trans. on Software Engineering, vol. SE-1(3).

[12] Ehrlich, W., Prasanna, B., Stampfel, J., Wu, J. (1993). "Determining the cost of a stop-testing decision". IEEE Software, pp. 33-42.

[13] Gokhale, S.S., Trivedi, K.S. (1998). "Log-Logistic Software Reliability Growth Model". The 3rd IEEE International Symposium on High-Assurance Systems Engineering, IEEE Computer Society.

[14] Satya Prasad, R., Krishna Mohan, G., Kantham, R.R.L. (2011). "Time Domain based Software Process Control using Weibull Mean Value Function". International Journal of Computer Applications (IJCA), 18(3):18-21, March 2011.

[15] Krishna Mohan, G., Srinivasa Rao, B., Satya Prasad, R. (2012). "A Comparative study of Software Reliability models using SPC on ungrouped data". International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE), Volume 2, Issue 2, February 2012.