The document presents the Burr Type III software reliability growth model, based on a non-homogeneous Poisson process (NHPP), with time-domain data. It describes the background and formulation of the Burr Type III and NHPP models. Parameter estimation for the Burr Type III model is performed using maximum likelihood estimation on ungrouped time-domain failure data. Goodness of fit is analyzed to assess how well the model fits real software failure data sets.
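As a hedged illustration of the estimation step, the sketch below fits an NHPP whose mean value function is built from the Burr Type III CDF, m(t) = a(1 + t^(-c))^(-k), by maximum likelihood on ungrouped failure times. The failure times and starting values are hypothetical placeholders, and the paper's exact parameterization may differ.

```python
# Minimal sketch: MLE for a Burr Type III based NHPP on ungrouped failure times.
import numpy as np
from scipy.optimize import minimize

def burr3_cdf(t, c, k):
    return (1.0 + t ** (-c)) ** (-k)

def burr3_pdf(t, c, k):
    return c * k * t ** (-c - 1.0) * (1.0 + t ** (-c)) ** (-k - 1.0)

def neg_log_likelihood(params, times):
    a, c, k = params
    if a <= 0 or c <= 0 or k <= 0:
        return np.inf
    # Standard NHPP log-likelihood for failure times t_1 < ... < t_n:
    # ln L = sum_i ln(a * f(t_i)) - a * F(t_n)
    lam = a * burr3_pdf(times, c, k)
    return -(np.sum(np.log(lam)) - a * burr3_cdf(times[-1], c, k))

times = np.array([5.0, 12.0, 21.0, 33.0, 48.0, 66.0, 87.0])  # hypothetical data
res = minimize(neg_log_likelihood, x0=[10.0, 1.0, 1.0], args=(times,),
               method="Nelder-Mead")
a_hat, c_hat, k_hat = res.x
print(a_hat, c_hat, k_hat)
```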
Pareto Type II Based Software Reliability Growth Model – Waqas Tariq
The past four decades have seen the formulation of several software reliability growth models to predict the reliability and error content of software systems. This paper presents the Pareto Type II model as a software reliability growth model, together with expressions for various reliability performance measures. Probability theory, distribution functions, and probability distributions play a major role in software reliability model building. This paper presents estimation procedures to assess the reliability of a software system using the Pareto distribution, based on a Non-Homogeneous Poisson Process (NHPP).
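For concreteness, a common way to build such a model (notation assumed here, not taken from the paper) is to scale the Pareto Type II (Lomax) CDF by the expected total fault content a, with shape alpha and scale theta:

```latex
m(t) = a\,F(t) = a\left[1 - \left(\frac{\theta}{\theta + t}\right)^{\alpha}\right],
\qquad
\lambda(t) = m'(t) = \frac{a\,\alpha\,\theta^{\alpha}}{(\theta + t)^{\alpha + 1}}.
```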
Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat... – Waqas Tariq
Software cost estimation deals with the financial and strategic planning of software projects. Controlling the expensive investment of software development effectively is of paramount importance. The limitation of algorithmic effort prediction models is their inability to cope with the uncertainty and imprecision surrounding software projects at the early development stage. More recently, attention has turned to a variety of machine learning methods, and soft computing in particular, to predict software development effort. Fuzzy logic is one such technique that can cope with uncertainty. In the present paper, a Particle Swarm Optimization Algorithm (PSOA) is presented to fine-tune the fuzzy estimates for the development of software projects. The efficacy of the developed models is tested on 10 NASA software projects, 18 NASA projects, and the COCOMO 81 database on the basis of various criteria for the assessment of software cost estimation models. All the models are compared, and the developed models are found to provide better estimates.
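As a hedged sketch of the optimization loop only (not the paper's PSOA or fuzzy model), the code below uses standard PSO to tune a single scale factor applied to fuzzy effort estimates, minimizing the mean magnitude of relative error; the estimates and actuals are hypothetical placeholders.

```python
# Minimal PSO sketch: tune one scale factor k for fuzzy effort estimates.
import random

fuzzy_estimates = [110.0, 52.0, 320.0, 18.0]   # hypothetical fuzzy estimates
actual_efforts  = [100.0, 60.0, 300.0, 20.0]   # hypothetical actual efforts

def mmre(k):
    # Mean magnitude of relative error of the scaled estimates.
    return sum(abs(a - k * e) / a
               for e, a in zip(fuzzy_estimates, actual_efforts)) / len(actual_efforts)

n, w, c1, c2 = 20, 0.7, 1.5, 1.5               # swarm size, inertia, accel. constants
pos = [random.uniform(0.5, 1.5) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                                  # personal bests
gbest = min(pos, key=mmre)                      # global best

for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
        pos[i] += vel[i]
        if mmre(pos[i]) < mmre(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=mmre)

print("tuned scale factor:", round(gbest, 3), "MMRE:", round(mmre(gbest), 4))
```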
A Study of Person Identification using Keystroke Dynamics and Statistical Ana... – Dr. Amarjeet Singh
In this paper, a basic study of closed-set identification using keystroke dynamics and simple statistical analysis has been carried out. Dwell time, flight time and one additional feature called key affinity are used as user-identifying features. The timing information is passed through a statistical layer to produce mean and standard deviation. This information is combined with key affinity to identify a rank-based person list. In conclusion, we compare the performance of this setup with other setups. This work aims to suggest that a keystroke dynamics system relying on pure statistics as its underlying algorithm may not be sufficiently accurate.
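A minimal sketch of the two classic timing features and the statistical layer described above, assuming key events arrive as (key, press_time, release_time) tuples in milliseconds; the paper's third feature, key affinity, is not reproduced here.

```python
# Dwell/flight feature extraction plus the per-user mean/standard-deviation layer.
import statistics

events = [("h", 0, 95), ("i", 140, 230), (" ", 300, 370)]  # hypothetical events

dwell = [rel - press for _, press, rel in events]           # hold time per key
flight = [events[i + 1][1] - events[i][2]                   # release -> next press
          for i in range(len(events) - 1)]

profile = {
    "dwell_mean": statistics.mean(dwell),
    "dwell_sd": statistics.stdev(dwell),
    "flight_mean": statistics.mean(flight),
    "flight_sd": statistics.stdev(flight),
}
print(profile)
```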
International Journal of Computational Engineering Research (IJCER) is an international, monthly, English-language online journal. It publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
The Improved Hybrid Algorithm for the Atheer and Berry-Ravindran Algorithms – IJECEIAES
Exact string matching is one of the important approaches to solving basic problems in computer science. This research proposes a hybrid exact string matching algorithm called E-Atheer. The algorithm combines strong features of its predecessors: the searching technique of the Atheer algorithm and the shifting technique of the Berry-Ravindran algorithm. The proposed algorithm showed better performance in the number of attempts and character comparisons compared to the original, recent, and standard algorithms. The E-Atheer algorithm was evaluated on several types of databases: DNA, Protein, XML, Pitch, English, and Source. The best performance in the number of attempts is achieved when the algorithm is executed on the Pitch dataset, and the worst on the DNA dataset. The best and worst databases in the number of character comparisons with the E-Atheer algorithm are the Source and DNA databases, respectively.
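A sketch of the Berry-Ravindran bad-character shift that E-Atheer reportedly borrows for its shifting step (the Atheer searching step is not reproduced here). The shift values follow the standard Berry-Ravindran definition, keyed on the pair of text characters immediately to the right of the current window.

```python
# Berry-Ravindran shift and a naive search loop built on it.
def br_shift(pat, a, b):
    """Shift for the two text characters (a, b) just right of the window."""
    m = len(pat)
    best = m + 2                      # default: pair absent from the pattern
    if b == pat[0]:
        best = min(best, m + 1)       # b can align with the pattern start
    for i in range(m - 1):
        if pat[i] == a and pat[i + 1] == b:
            best = min(best, m - i)   # align the pair inside the pattern
    if a == pat[-1]:
        best = min(best, 1)           # a can align with the last character
    return best

def br_search(pat, text):
    m, n, hits = len(pat), len(text), []
    j = 0
    while j <= n - m:
        if text[j:j + m] == pat:
            hits.append(j)
        a = text[j + m] if j + m < n else ""
        b = text[j + m + 1] if j + m + 1 < n else ""
        j += br_shift(pat, a, b) if a else 1  # no right context: minimal shift
    return hits

print(br_search("ab", "aabcab"))  # [1, 4]
```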
Laura Florentina Stoica, Florian Mircea Boian, Florin Stoica, "A Distributed CTL Model Checker," Proceedings of the 10th International Conference on e-Business (ICE-B 2013), Reykjavik, Iceland, 29-31 July 2013, paper 33, pp. 379-386, ISBN: 978-989-8565-72-3.
Constructing a classification model is important in machine learning for a particular task. A classification process involves assigning objects into predefined groups or classes based on a number of observed attributes related to those objects. The artificial neural network is one of the classification algorithms that can be used in many application areas. This paper investigates the potential of applying the feed-forward neural network architecture to the classification of medical datasets. A migration-based differential evolution algorithm (MBDE) is chosen and applied to the feed-forward neural network to enhance the learning process, and the network learning is validated in terms of convergence rate and classification accuracy. In this paper, the MBDE algorithm with various migration policies is proposed for classification problems using medical diagnosis data.
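A hedged sketch of the core idea: plain differential evolution (DE/rand/1/bin) evolving the weight vector of a tiny one-hidden-layer feed-forward network. The migration policies that distinguish MBDE are not reproduced here, and the data are hypothetical.

```python
# DE-trained feed-forward network on synthetic binary data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))                     # hypothetical 4-feature data
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # hypothetical binary labels
n_in, n_hid = 4, 5
dim = n_in * n_hid + n_hid + n_hid + 1           # W1, b1, w2, b2

def forward(w, X):
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    w2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ w2 + b2)))      # sigmoid output

def error(w):
    return np.mean((forward(w, X) > 0.5).astype(int) != y)

NP, F, CR = 30, 0.8, 0.9                         # population, scale, crossover rate
pop = rng.normal(size=(NP, dim))
fit = np.array([error(p) for p in pop])
for _ in range(200):
    for i in range(NP):
        a, b, c = pop[rng.choice([k for k in range(NP) if k != i], 3, replace=False)]
        mutant = a + F * (b - c)                 # DE/rand/1 mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True          # guarantee at least one gene
        trial = np.where(cross, mutant, pop[i])  # binomial crossover
        if error(trial) <= fit[i]:               # greedy selection
            pop[i], fit[i] = trial, error(trial)
print("best training error:", fit.min())
```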
Outsourcing of scientific computations is attracting increasing attention, since it enables customers with limited computing resources and storage devices to outsource sophisticated computation workloads to powerful service providers. However, it also raises security and privacy concerns and challenges, such as the input and output privacy of the customers and cheating behaviors of the cloud. Motivated by these issues, this paper focuses on privacy-preserving Linear Fractional Programming (LFP) as a typical and practically relevant case for verifiable secure multiparty computation. We investigate a secure and verifiable scheme with correctness guarantees, using normal multiparty techniques to compute the result of a computation and then using verifiable techniques only to verify that this result was correct.
A SIMPLE PROCESS TO SPEED UP MACHINE LEARNING METHODS: APPLICATION TO HIDDEN ... – cscpconf
An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increases. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper a simple process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training procedure. The idea of the proposed process consists of two steps. The first involves a preprocessing step that reduces the number of training instances without any information loss. In this step, training instances with similar inputs are clustered, and a weight factor representing the frequency of these instances is assigned to each representative cluster. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while maintaining the same initial, transition and emission probability matrices as those obtained with the classical HMM training algorithm. Accordingly, the classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2,200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs; it can be employed for a large variety of ML methods.
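A minimal sketch of the weighting idea only: identical training sequences are collapsed into one representative with a frequency weight, and every count that an EM (Baum-Welch) step would accumulate once per instance is instead accumulated once per representative, multiplied by its weight. The full HMM re-estimation formulas are not reproduced, and `expected_count_fn` is a hypothetical stand-in for the posterior expectation computed by forward-backward.

```python
# Instance deduplication plus weighted count accumulation.
from collections import Counter

sequences = [("a", "b"), ("a", "b"), ("b", "b"), ("a", "b")]  # hypothetical data
weighted = Counter(sequences)          # representative -> frequency weight

def accumulate_counts(expected_count_fn):
    total = 0.0
    for seq, w in weighted.items():
        total += w * expected_count_fn(seq)   # one E-step per representative
    return total

# Example stand-in: expected number of 'a' emissions per sequence.
print(accumulate_counts(lambda s: s.count("a")))   # 3.0, same as per-instance EM
```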
Although fuzzy systems demonstrate their ability to solve different kinds of problems in various applications, there is increasing interest in developing solid mathematical implementations suitable for control applications such as those used in fuzzy logic controllers (FLC). It is well known that a wide range of parameters must be specified before the construction of a fuzzy system. To simplify the design and construction of a general fuzzy system in a systematic way, and without loss of generality, a full parameterization process for a singleton-type FLC is proposed in this paper. The presented methodology is very helpful in developing a universal computing algorithm for standard fuzzy PID-like controllers. An illustrative example shows the simplicity of applying the new paradigm.
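A minimal sketch of singleton-type fuzzy inference (zero-order Sugeno style): triangular input memberships, singleton consequents, and weighted-average defuzzification. The membership parameters and rule base are illustrative assumptions, not the paper's parameterization.

```python
# Singleton FLC: fire triangular memberships, defuzzify by weighted average.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Rules: IF error is <label> THEN output is <singleton value>
rules = [
    (lambda e: tri(e, -2.0, -1.0, 0.0), -0.8),   # error negative -> singleton -0.8
    (lambda e: tri(e, -1.0,  0.0, 1.0),  0.0),   # error zero     -> singleton  0.0
    (lambda e: tri(e,  0.0,  1.0, 2.0),  0.8),   # error positive -> singleton  0.8
]

def flc(e):
    num = sum(mu(e) * s for mu, s in rules)
    den = sum(mu(e) for mu, s in rules)
    return num / den if den else 0.0

print(flc(0.5))   # 0.4: halfway between the "zero" and "positive" singletons
```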
An optimal general type-2 fuzzy controller for Urban Traffic Network – ISA Interchange
The urban traffic network model is illustrated by state charts and an object diagram. However, these have limitations in showing the behavioral perspective of the traffic information flow. Consequently, a state space model is used to calculate the half-value waiting time of vehicles. In this study, a combination of general type-2 fuzzy logic sets and the modified backtracking search algorithm (MBSA) is used to control the traffic signal scheduling and phase succession, so as to guarantee a smooth flow of traffic with the least wait times and average queue length. The parameters of the input and output membership functions are optimized simultaneously by the novel heuristic algorithm MBSA. A comparison is made between the achieved results and those of optimal and conventional type-1 fuzzy logic controllers.
Fault detection based on novel fuzzy modelling – csijjournal
Fault detection based on fuzzy modeling is investigated. A Takagi-Sugeno (TS) fuzzy model can be derived by structure and parameter identification, where only the input-output data of the identified system are available. In the structure identification step, the Gustafson-Kessel clustering algorithm (GKCA) is used to detect clusters of different geometrical shapes in the data set and to obtain the point-wise membership function of the premise. In the parameter identification step, the Unscented Kalman Filter (UKF) is used to estimate the parameters of the premise's membership function. In the consequent part, the Kalman Filter (KF) algorithm is applied as a linear regression to estimate the parameters of the TS model using the input-output data set. The obtained fuzzy model is then used to detect faults. Simulations are provided to demonstrate the effectiveness of the theoretical results.
PREDICTIVE EVALUATION OF THE STOCK PORTFOLIO PERFORMANCE USING FUZZY CMEANS A... – ijfls
The aim of this paper is to investigate the trend of the return of a portfolio formed randomly or by any specific technique. The approach uses two fuzzy techniques: the fuzzy c-means (FCM) algorithm and the fuzzy transform, where the rules used in the fuzzy transform arise from the application of the FCM algorithm. The results show that the proposed methodology is able to predict the trend of the return of a stock portfolio, as well as the tendency of the market index. Real financial market data from 2004 until 2007 are used.
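A minimal sketch of the FCM iteration on one-dimensional returns: the membership update u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)) followed by the weighted centre update. The data and settings are illustrative, not the paper's.

```python
# Fuzzy c-means on 1-D (hypothetical) daily returns.
import numpy as np

x = np.array([0.01, 0.02, -0.015, 0.03, -0.02])   # hypothetical daily returns
c, m = 2, 2.0                                     # clusters, fuzzifier
centres = np.array([-0.01, 0.02])

for _ in range(50):
    d = np.abs(x[None, :] - centres[:, None]) + 1e-12     # c x n distances
    # u[i, k] = 1 / sum_j (d[i, k] / d[j, k]) ** (2 / (m - 1))
    u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
    centres = (u ** m @ x) / np.sum(u ** m, axis=1)       # weighted centre update

print("centres:", centres)
print("memberships:\n", np.round(u, 3))
```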
A hybrid fuzzy ANN approach for software effort estimation – ijfcstjournal
Software development effort estimation is one of the major activities in software project management. During the project proposal stage there is a high probability of estimates being inaccurate, but this inaccuracy decreases later on. In the field of software development there are certain metrics on which effort estimation is based. To date, various methods have been proposed for software effort estimation, of which the non-algorithmic methods, such as artificial intelligence techniques, have been very successful. A hybrid Fuzzy-ANN model, known as the Adaptive Neuro-Fuzzy Inference System (ANFIS), is well suited to such situations. The present paper is concerned with developing a software effort estimation model based on ANFIS. The study evaluates the efficiency of the proposed ANFIS model on the COCOMO81 dataset. The results obtained have been compared with an Artificial Neural Network (ANN) and the Intermediate COCOMO model developed by Boehm. The results were analyzed using the Magnitude of Relative Error (MRE) and Root Mean Square Error (RMSE). It is observed that ANFIS provided better results than the ANN and COCOMO models.
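The two evaluation criteria named above, as they are conventionally defined; the effort values below are hypothetical placeholders.

```python
# MRE per project, its mean (MMRE), and RMSE.
import math

actual    = [100.0, 60.0, 300.0, 20.0]
predicted = [ 90.0, 72.0, 280.0, 25.0]

mre  = [abs(a - p) / a for a, p in zip(actual, predicted)]        # per project
mmre = sum(mre) / len(mre)                                        # mean MRE
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

print(f"MMRE = {mmre:.3f}, RMSE = {rmse:.3f}")
```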
Bayesian analysis of shape parameter of Lomax distribution using different lo... – Premier Publishers
The Lomax distribution, also known as the Pareto distribution of the second kind or Pearson Type VI distribution, has been used in the analysis of income data and business failure data. It may describe the lifetime of a decreasing-failure-rate component as a heavy-tailed alternative to the exponential distribution. In this paper we consider the estimation of the parameter of the Lomax distribution. The Bayes estimator is obtained using Jeffreys' prior and an extension of Jeffreys' prior under the squared error loss function, Al-Bayyati's loss function, and the precautionary loss function. Maximum likelihood estimation is also discussed. These methods are compared using mean square error through a simulation study with varying sample sizes. The study aims to find a suitable estimator of the parameter of the distribution. Finally, we analyze one data set for illustration.
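A minimal sketch of the maximum likelihood part only: with the scale parameter treated as known, the MLE of the Lomax shape parameter has a closed form, alpha_hat = n / sum(log(1 + x_i/lam)). The Bayes estimators under the various priors and loss functions discussed above are not reproduced, and the data are hypothetical.

```python
# Closed-form MLE for the Lomax shape parameter with known scale.
import math

data = [0.8, 1.5, 2.2, 4.0, 7.5]   # hypothetical lifetimes
lam = 2.0                          # scale assumed known

alpha_hat = len(data) / sum(math.log(1 + x / lam) for x in data)
print("MLE of shape:", round(alpha_hat, 4))
```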
POSTERIOR RESOLUTION AND STRUCTURAL MODIFICATION FOR PARAMETER DETERMINATION ... – IJCI JOURNAL
When only a few lower-mode data are available to evaluate a large number of unknown parameters, it is difficult to acquire information about all unknown parameters. The challenge in this kind of updating problem is first to gain confidence about the parameters that are evaluated correctly using the available data, and second to get information about the remaining parameters. In this work, the first issue is resolved employing the sensitivity of the modal data used for updating. Once it is established which parameters are evaluated satisfactorily using the available modal data, the remaining parameters are evaluated employing the modal data of a virtual structure. This virtual structure is created by adding or removing some known stiffness to or from some of the stories of the original structure. A 12-story shear building is considered for the numerical illustration of the approach. Results of the study show that the present approach is an effective tool in system identification problems when only limited data are available for updating.
International Journal of Engineering Research and Development – IJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
A report on designing a model for improving CPU Scheduling by using Machine L... – MuskanRath1
Disclaimer: Please let me know if any portion of this article matches your research; I will include a link to your research in the description section of my article.
Description:
This paper proposes a model for a uniprocessor system to improve CPU scheduling. The model is implemented in a low-level (assembly) language, and Linux is used for the implementation since it is open source and its kernel is editable.
There are several methods to predict the length of CPU bursts, such as the exponential averaging method; however, these methods may not give accurate or reliable predicted values. In this paper, we propose a Machine Learning (ML) based approach to estimate the length of CPU bursts for processes. We use Bayesian theory as the classifier that decides which process will execute first in the ready queue. The proposed approach selects the most significant attributes of the process using feature selection techniques and then predicts the CPU burst for the process in the grid. Furthermore, applying attribute selection techniques improves the performance in terms of space, time, and estimation.
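A minimal Gaussian naive Bayes sketch in the spirit described above: classify a process as a short or long CPU burst from two hypothetical attributes, then let short-burst processes run first. The attribute set and training data are illustrative assumptions; the report's feature selection step is not reproduced.

```python
# Gaussian naive Bayes burst classifier on hypothetical process attributes.
import math

# (prev_burst_ms, io_calls) per class; training data are hypothetical.
train = {
    "short": [(4, 12), (6, 9), (5, 15), (3, 11)],
    "long":  [(42, 1), (55, 0), (38, 2), (60, 1)],
}

def stats(rows):
    """Per-column (mean, sample standard deviation)."""
    out = []
    for col in zip(*rows):
        mu = sum(col) / len(col)
        sd = math.sqrt(sum((v - mu) ** 2 for v in col) / (len(col) - 1))
        out.append((mu, sd))
    return out

model = {cls: stats(rows) for cls, rows in train.items()}
prior = {cls: len(rows) / sum(map(len, train.values())) for cls, rows in train.items()}

def log_gauss(v, mu, sd):
    return -0.5 * math.log(2 * math.pi * sd * sd) - (v - mu) ** 2 / (2 * sd * sd)

def classify(x):
    score = {cls: math.log(prior[cls]) +
                  sum(log_gauss(v, mu, sd) for v, (mu, sd) in zip(x, model[cls]))
             for cls in model}
    return max(score, key=score.get)

print(classify((5, 10)))   # expected: "short"
```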
Mechanism of the Reaction of Plasma Albumin with Formaldehyde in Ethanol - Wa... – IOSR Journals
The spectrophotometric determination of the acid dissociation/ionisation constant (pKa) of the plasma albumin-formaldehyde adduct in both water and ethanol solutions was carried out in this study. The pKa values obtained in both media were used to establish Brønsted-type linear constants from plots of pKa against the logarithm of second-order rate constants obtained at varying pH values. The pKa values obtained in both water solution and ethanol-water mixtures were found to be in the range of 5.0 - 8.0. This pointed to the fact that, of all the known amino acid residues in plasma albumin, only the lysine residue, with a pKa value of 8.3, might have reacted with formaldehyde in this reaction. The corresponding Brønsted-type plot proportionality constants (β) for the reaction in water and ethanol-water mixtures were found to be β = 0.059 and 0.0057, respectively. Reaction mechanisms that have low values for the proportionality constants α or β are considered to have a transition state closely resembling the reactant, with little proton transfer (Cox et al., 1988). Thus, one would suggest that the cross-linking of formaldehyde with plasma albumin in water and ethanol-water mixtures proceeds through little proton transfer.
Balking and Reneging in the Queuing System – IOSR Journals
In this paper, we discuss a steady-state solution of the ordered queuing problem with balking and reneging. The waiting line considered is a chi-square queue with a Poisson balking probability that depends not only on the number of customers in the system, but also on the rate of service in the system.
Corchorus olitorius waste (mulukiya) as a potential sorbent for the removal of c... – IOSR Journals
This work was conducted to determine the practicability of using a new adsorbent, Corchorus olitorius (mulukiya) waste, for the removal of cadmium (Cd(II)) and thorium (Th(IV)) from wastewater. Corchorus olitorius was analyzed by Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), and energy dispersive X-ray spectroscopy (EDX). Parameters that influence the adsorption phenomenon, such as adsorbent dosage, solution pH, initial metal ion concentration, and contact time, were studied. The optimum pH for maximum adsorption of Cd(II) and Th(IV) was found to be 5.55 and 4.50, respectively. The contact time required for reaching equilibrium was 2 hr. The pseudo-second-order kinetic model gave the best fit to the kinetic data. Analysis of the equilibrium adsorption data using the Langmuir and Freundlich models showed that the Langmuir model was well suited to describe the metal ion adsorption.
A Comparison between Natural and Synthetic Food Flavoring Extracts Using Infr... – IOSR Journals
Food is a basic necessity of life. One works hard and earns to satisfy one's hunger, but at the end of the day many of us are not sure of what we eat. We may be eating dangerous flavors and dyes, and often we invite diseases rather than good health. The purpose of this article is to detect the presence of food adulterants in some common foods and to create awareness about artificial flavors and dyes. The IR spectra and optical activity of two of the most commonly used natural and artificial flavors and colors (vanilla and strawberry) were studied. The IR spectra of synthetic vanilla were dominated by specific peaks attributed to the corresponding synthetic pigments: a spectral band of C=O ester stretching of the aldehydic and ketonic groups in the synthetic flavor at 1744.87 cm-1 with a weak shoulder at 1700 cm-1, and C-O stretching of sucrose at 990.49 and 923.70 cm-1. The synthetic strawberry was characterized by specific spectral bands (C=O stretching at 1634.96 cm-1 in the ester and C-O stretching of sucrose at 925 cm-1), while these functional groups disappeared in the natural vanilla and strawberry extracts. The natural flavoring extracts possess levorotatory properties and are optically active, while the synthetic extracts do not rotate the plane of polarization of light passing through the material and are said to be optically inactive. The obtained results indicated that infrared spectra and optical activity could be adapted to detect adulterated products and to differentiate between natural and artificial food flavoring extracts.
Physiological, Biochemical and Modern Biotechnological Approach to Improvemen... – IOSR Journals
Rauwolfia serpentina, also known as Sarpagandha (Apocynaceae), has been an integral part of the Ayurvedic medical system in India for centuries in the treatment of various ailments. The leaves and roots of Rauwolfia serpentina contain alkaloids, which are secondary metabolites. The major alkaloids identified are reserpine, rauwolfine, serpentine, sarpagine, ajmaline, yohimbine and ajmalicine. The present paper is an overview of studies concerning physiological, biochemical and modern biotechnological approaches to the improvement of Rauwolfia serpentina.
Effective Waste Management and Environmental Control – IOSR Journals
There is widespread interest in the world today in methods that enable the re-use of waste. According to Webster's New Practical Dictionary, 'waste' means 'thrown away as worthless after being used', i.e. of no further use to a person, animal or plant. Contrary to this opinion, it has been discovered that what is regarded as waste or worthless, when worked upon, can be manipulated to generate or produce materials that are beneficial for the use of man. This paper throws light on how waste resources can be controlled by analyzing the theories of waste management, recycling, re-use, disposal and composting of organic wastes, and the ways in which farm and municipal waste can be worked upon to produce materials that are beneficial for the use of man.
Electromagnetic fields of time-dependent magnetic monopole – IOSR Journals
Dirac-Maxwell's equations, retained for magnetic monopoles, are generalized by introducing a magnetic scalar field. This allows the magnetic monopoles to be time-dependent and the potentials to be free of the Lorentz gauge. The non-conserved, time-dependent part of the magnetic charge density is responsible for producing the magnetic scalar field, which further contributes to the magnetic and electric vector fields. This contribution makes it possible to create an ideal square-wave magnetic field from an exponentially rising and decaying magnetic charge.
Effect of Poling Field and Non-linearity in Quantum Breathers in Ferroelectrics – IOSR Journals
Lithium tantalate is technologically one of the most important ferroelectric materials, with a low poling field, and has several applications in the fields of photonics and memory switching devices. In a Hamiltonian system such as a dipolar system, the polarization behavior of such ferroelectrics can be well modeled by the Klein-Gordon (K-G) equation. To probe the quantum states related to discrete breathers, the same K-G lattice is quantized to give rise to quantum breathers (QBs), which are described under a periodic boundary condition. The gap between the localized and delocalized phonon bands is a function of impurity content, which is in turn related to the pinning of domains due to antisite tantalum defects in the system, i.e. a point of easier switching within the limited amount of data on the poling field.
Meditation for stress reduction in Indian Army - An Experimental Study – IOSR Journals
Stress is defined as "the non-specific response of the body to any demand made upon it" (Hans Selye, 1956). Lazarus (1966) maintains that stress occurs when demands on the person tax or exceed his adjustive resources. McGrath JE (1990) explains that there is a potential for stress when an environmental situation is perceived as presenting a demand that threatens to exceed the person's capabilities and resources for meeting it, under conditions where he expects a substantial differential in the rewards and costs of meeting the demand versus not meeting it. Stress can be the result of external situations such as an abusive relationship or poor working conditions. Stress can also be the result of internal situations or stressors such as worrying or having pessimistic thoughts about the future. Work, being the central theme of life and a social reality, provides a status to the individual and his bond to society by way of the quality of work or position he attains. This ultimately raises the standard of living, or we can call it the desire to grow, so that we can gain recognition and social status in society. In the ambition to grow we begin to work beyond our capabilities, strain ourselves, and are thus led to stress (Pestonjee & Muncherji, 1991). The duration of stress is another variable that acts as a factor causing a stronger stress response. This is consistent with the uncertainty theory of occupational stress by Beehr and Bhagat (1985). Each individual needs a moderate amount of stress to be alert and capable of functioning effectively in an organization. Hence stress is inherent in the concept of creativity (Pestonjee, 1991; Pareek, 1993).
Parasites Associated with wild-caught houseflies in Awka metropolis – IOSR Journals
An investigation of parasites associated with wild-caught houseflies in Awka metropolis, Anambra State, southeastern Nigeria, was undertaken between April and August 2012. Locally designed fly traps were used to collect flies. The flies were identified to genus and species using their characteristic features. The flies were demobilized by chilling and washed with sterilized distilled water, and the suspension was homogenised before processing for parasites on their external body parts. For internal parasites, the external surfaces of the flies were sterilized with 70% alcohol, the flies were squashed to release the internal contents, and the suspension was homogenized with 100 ml distilled water. Aliquots of the suspensions from both the internal and external contents of the flies were used for parasite isolation and identification using standard parasitological techniques. Eight fly species were processed for parasite identification. Parasites isolated from the flies were Entamoeba histolytica cysts, hookworm ova, Ascaris lumbricoides ova, and Trichuris trichiura ova. All the parasites isolated were from the external surfaces of the flies. This shows that wild-caught flies, especially M. domestica, harbour parasites on their bodies, which can cause diseases. Hence, there is a need for improved sanitation in our urban communities to prevent epidemics associated with poor sanitary conditions.
Antibiotic Susceptibility Pattern of Pyogenic Bacterial Isolates in Sputum – IOSR Journals
Drugs have been used for the treatment of infectious diseases since the 17th century; however, chemotherapy as a science began with Paul Ehrlich in the first decade of the 20th century. Paul Ehrlich (1854-1915) was one of the earliest pioneers in the field of antimicrobial chemotherapy [1]. Ehrlich formulated the principles of "selective toxicity", i.e. selective inhibition of the growth of microorganisms without damage to the host [2]. Resistance has been documented not only against antibiotics of natural and semi-synthetic origin, but also against purely synthetic compounds (fluoroquinolones) and those which do not even enter the cells (vancomycin) [3]. However, the euphoria over the potential conquest of infectious diseases was short-lived: almost as soon as antibacterial drugs were deployed, bacteria responded by manifesting various forms of resistance [4]. Considered "wonder drugs", antibiotics are often prescribed inappropriately and inadequately and have thus become one of the most highly abused agents [5].
Determination of baseline Widal titre among apparently healthy population in ... – IOSR Journals
The present study was conducted to determine the baseline Widal titre of the healthy population of Dehradun city. A total of 300 serum samples were collected from healthy individuals with no history of fever who had not received any vaccination for enteric fever. The tube agglutination test was done with commercially available antigens containing the Salmonella enterica serovar Typhi O and H antigens, the Salmonella enterica serovar Paratyphi A H antigen, and the Paratyphi B H antigen. In the present study, the agglutination titre for TO was 1:20 in 28% of samples and 1:40 in 24%, followed by 1:80 and 1:160 at 10% and 4%, respectively. The most frequent anti-H titre was 1:20 (22%), followed by 1:40 (17%). Based upon the results of the study, it is recommended that a single Widal test can be significant in an endemic region when a higher titre (1:160) is obtained.
Public Expenditure on Education; A Measure for Promoting Economic Development – IOSR Journals
The rational utilization and allocation of public expenditure would result in the economic development of the country. It has been observed that the allocation and utilization of expenditure in Pakistan has contributed very little to development. The allocation of current expenditure such as debt servicing and defense has increased by a greater percentage every year as compared to education. Had the money borrowed by the economy over the years been put toward development projects, the economy would have seen much higher development and growth. The objective of the research lies in evaluating public expenditure and its role in economic development by considering education as an indicator of social development in Pakistan.
Structural elucidation, Identification, quantization of process related impur... – IOSR Journals
A major process-related unknown impurity associated with the synthesis of Hydralazine hydrochloride bulk drug was detected by high performance liquid chromatography (HPLC) and was subjected to high resolution accurate-mass liquid chromatography mass spectrometry (HR/AM-LCMS) for identification. The proposed impurity was isolated from Hydralazine hydrochloride active pharmaceutical ingredient (API) by a preparative chromatographic method and was injected on HPLC for comparison of retention time with that of the unknown process-related impurity in Hydralazine hydrochloride. The molecular ion peaks of the preparatively isolated impurity and of the unknown process-related impurity in Hydralazine hydrochloride were compared for confirmation. The postulated structure was unambiguously confirmed with the help of HR/AM-LC MS/MS, NMR and FTIR data and proposed to be 1-(2-phthalazin-1-ylhydrazino)phthalazine (Hazh dimer). This impurity of Hydralazine hydrochloride has not been previously reported. A rapid Acquity H-Class gradient method with a runtime of 15.0 min was developed for quantitation on a Unisphere Cyano column and validated for parameters such as accuracy, precision, linearity and range, and robustness. The LOD and LOQ of the method were 0.0081% and 0.0246%, respectively.
Effect of Annealing and Time of Crystallization on Structural and Optical Pro... – IOSR Journals
In this report, pure poly(vinylidene fluoride) (PVDF) films were prepared by the casting method using acetone as solvent. The crystallization of both the α and β phases from acetone solvent, with varying crystallization times, is described. This paper also describes the enhancement of the β phase under different annealing conditions. β-phase-dominant thin films were obtained when the as-cast thin films were annealed at 90 ºC for 5 hours, while films with a dominant α phase were obtained when the crystallization time was extended. X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR) confirmed the presence of the crystalline α and β phases in each sample, and that the PVDF thin films cast from acetone solution and annealed at 90 ºC for 5 hours have the maximum percentage of the β phase. We found that PVDF crystallized from its acetone solutions formed the β phase. UV-visible optical absorption analysis revealed a change in the optical gap and a shift in the absorption edge with annealing temperature.
SRGM with Imperfect Debugging by Genetic Algorithms – ijseajournal
Computer software has progressively become an essential component of modern technologies. Penalty costs resulting from software failures are often more considerable than software development costs. Debugging decreases the error content but expands the software development costs. To improve software quality, software reliability engineering plays an important role in many aspects throughout the software life cycle. In this paper, we incorporate both imperfect debugging and the change-point problem into a software reliability growth model (SRGM) based on the well-known exponential distribution. The parameter estimation is studied, and the proposed model is compared with some existing models in the literature and is found to be better.
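A minimal genetic-algorithm sketch in the spirit described above: estimate the parameters (a, b) of the plain exponential SRGM mean value function m(t) = a(1 - exp(-bt)) by minimizing the sum of squared errors against hypothetical cumulative failure counts. The paper's imperfect-debugging and change-point terms are not reproduced.

```python
# GA fit of the exponential SRGM parameters by SSE minimization.
import math
import random

t = [1, 2, 3, 4, 5, 6, 7, 8]
y = [5, 9, 12, 15, 17, 18, 19, 20]          # hypothetical cumulative failures

def sse(ind):
    a, b = ind
    return sum((yi - a * (1 - math.exp(-b * ti))) ** 2 for ti, yi in zip(t, y))

pop = [(random.uniform(10, 40), random.uniform(0.01, 1.0)) for _ in range(30)]
for _ in range(200):
    pop.sort(key=sse)
    parents = pop[:10]                       # truncation selection
    children = []
    while len(children) < 20:
        p1, p2 = random.sample(parents, 2)
        w = random.random()                  # arithmetic crossover
        child = [w * p1[i] + (1 - w) * p2[i] for i in range(2)]
        if random.random() < 0.2:            # Gaussian mutation, kept positive
            child[0] = abs(child[0] + random.gauss(0, 1.0))
            child[1] = abs(child[1] + random.gauss(0, 0.05))
        children.append(tuple(child))
    pop = parents + children

best = min(pop, key=sse)
print("a, b =", best, "SSE =", round(sse(best), 3))
```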
Software Process Control on Ungrouped Data: Log-Power Model – Waqas Tariq
Statistical Process Control (SPC) is a natural choice for monitoring the software reliability process. It assists the software development team in identifying failures and the actions to be taken during the software failure process, and hence assures better software reliability. In this paper we propose a control mechanism based on cumulative observations of failures (ungrouped data) using the infinite-failure mean value function of the Log-Power model, which is based on a Non-Homogeneous Poisson Process (NHPP). The Maximum Likelihood Estimation (MLE) approach is used to estimate the unknown parameters of the model.
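A hedged sketch of the estimation step, assuming the commonly cited Log-Power form m(t) = a ln(1+t)^b (the paper's exact parameterization is not reproduced): the parameters are fitted by numerically maximizing the NHPP log-likelihood on hypothetical cumulative failure times, after which control limits would be placed on the fitted mean value function.

```python
# Numeric MLE for a Log-Power NHPP, m(t) = a * ln(1+t)**b.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    u = np.log1p(t)
    lam = a * b * u ** (b - 1) / (1 + t)        # intensity m'(t)
    return -(np.sum(np.log(lam)) - a * np.log1p(t[-1]) ** b)

t = np.array([3.0, 9.0, 18.0, 30.0, 47.0, 70.0, 100.0])   # hypothetical times
res = minimize(neg_loglik, x0=[5.0, 1.5], args=(t,), method="Nelder-Mead")
print("a, b =", res.x)
```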
DETECTION OF RELIABLE SOFTWARE USING SPRT ON TIME DOMAIN DATA – IJCSEA Journal
In classical hypothesis testing, volumes of data must be collected before conclusions are drawn, which may take considerable time. Sequential analysis, by contrast, can be adopted to decide very quickly whether the developed software is reliable or unreliable. The procedure adopted for this is the Sequential Probability Ratio Test (SPRT). In the present paper we evaluate the performance of the SPRT on time domain data using the Weibull model and analyze the results by applying it to 5 data sets. The parameters are estimated using Maximum Likelihood Estimation.
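A minimal Wald SPRT sketch: the log likelihood ratio is updated after each observation and compared against ln(B) and ln(A), with A = (1-beta)/alpha and B = beta/(1-alpha). It is shown here for exponential inter-failure times testing a high failure rate against a low one; the paper's Weibull-based ratio would replace the per-observation term, and all numbers are hypothetical.

```python
# Wald's SPRT on exponential inter-failure times.
import math

alpha, beta = 0.05, 0.05
lnA = math.log((1 - beta) / alpha)     # cross above: decide "unreliable"
lnB = math.log(beta / (1 - alpha))     # cross below: decide "reliable"
lam0, lam1 = 0.01, 0.05                # hypothetical reliable/unreliable rates

llr = 0.0
for x in [12.0, 8.0, 15.0, 6.0, 9.0]:  # hypothetical inter-failure times
    # log f1(x)/f0(x) for exponential densities lam*exp(-lam*x)
    llr += math.log(lam1 / lam0) - (lam1 - lam0) * x
    if llr >= lnA:
        print("decide: unreliable"); break
    if llr <= lnB:
        print("decide: reliable"); break
else:
    print("continue testing")
```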
Duplicate Code Detection using Control Statements – Editor IJCATR
Code clone detection is an important area of research, as reusability is a key factor in software evolution. Duplicate code degrades the design and structure of software and software qualities such as readability, changeability, and maintainability. Code clones increase the maintenance cost, as incorrect changes in copied code may lead to more errors. In this paper we address structural code similarity detection and propose new methods to detect structural clones using the structure of control statements, i.e. the order of control statements used in the source code. We have considered two orderings of control structures: (i) the sequence of control statements as it appears in the source, and (ii) the execution flow of the control statements.
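A minimal sketch of the first ordering described above: extract the sequence of control statements in source order, then compare sequences to flag structural clone candidates. It uses Python's ast module on Python source purely for illustration; the paper's target language and matching rules may differ.

```python
# Control-statement sequence extraction for structural clone candidates.
import ast

CONTROL = (ast.If, ast.For, ast.While, ast.Try, ast.With)

def control_sequence(src):
    """Control statement type names in order of appearance in the source."""
    found = [(node.lineno, node.col_offset, type(node).__name__)
             for node in ast.walk(ast.parse(src))
             if isinstance(node, CONTROL)]
    return [name for _, _, name in sorted(found)]

f1 = "for i in range(n):\n    if i % 2:\n        total += i\n"
f2 = "for x in xs:\n    if x > 0:\n        s += x\n"

print(control_sequence(f1))                            # ['For', 'If']
print(control_sequence(f1) == control_sequence(f2))    # True: clone candidate
```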
Assessing Software Reliability Using SPC – An Order Statistics Approach IJCSEA Journal
There are many software reliability models that are based on the times of occurrences of errors in the debugging of software. It is shown that it is possible to do asymptotic likelihood inference for software reliability models based on order statistics or Non-Homogeneous Poisson Processes (NHPP), with asymptotic confidence levels for interval estimates of parameters. In particular, interval estimates from these models are obtained for the conditional failure rate of the software, given the data from the debugging process. The data can be grouped or ungrouped. For someone making a decision about when to market software, the conditional failure rate is an important parameter. Order statistics are used in a wide variety of practical situations. Their use in characterization problems, detection of outliers, linear estimation, study of system reliability, life-testing, survival analysis, data compression and many other fields can be seen from the many books. Statistical Process Control (SPC) can monitor the forecasting of software failure and thereby contribute significantly to the improvement of software reliability. Control charts are widely used for software process control in the software industry. In this paper we proposed a control mechanism based on order statistics of cumulative quantity between observations of time domain
failure data using mean value function of Half Logistics Distribution (HLD) based on NHPP.
Assessing Software Reliability Using SPC – An Order Statistics ApproachIJCSEA Journal
There are many software reliability models that are based on the times of occurrences of errors in the debugging of software. It is shown that it is possible to do asymptotic likelihood inference for software reliability models based on order statistics or Non-Homogeneous Poisson Processes (NHPP), with asymptotic confidence levels for interval estimates of parameters. In particular, interval estimates from these models are obtained for the conditional failure rate of the software, given the data from the debugging process. The data can be grouped or ungrouped. For someone making a decision about when to market software, the conditional failure rate is an important parameter. Order statistics are used in a wide variety of practical situations. Their use in characterization problems, detection of outliers, linear estimation, study of system reliability, life-testing, survival analysis, data compression and many other fields can be seen from the many books. Statistical Process Control (SPC) can monitor the forecasting of software failure and thereby contribute significantly to the improvement of software reliability. Control charts are widely used for software process control in the software industry. In this paper we proposed a control mechanism based on order statistics of cumulative quantity between observations of time domain
failure data using mean value function of Half Logistics Distribution (HLD) based on NHPP.
Software reliability models (SRMs) are very important for estimating and predicting software
reliability in the testing/debugging phase. The contributions of this paper are as follows. First, a
historical review of the Gompertz SRM is given. Based on several software failure data, the
parameters of the Gompertz software reliability model are estimated using two estimation
methods, the traditional maximum likelihood and the least square. The methods of estimation are
evaluated using the MSE and R-squared criteria. The results show that the least square
estimation is an attractive method in term of predictive performance and can be used when the
maximum likelihood method fails to give good prediction results.
From the past many years many software defects prediction models are developed to solve the various issues in software project development. Software reliability is the significant in software quality which evaluates and predicts the quality of the software based on the defects prediction. Many software companies are trying to improve the software quality and also trying to reduce the cost of the software development. Rayleigh model is one of the significant models to analyze the software defects based on the generated data. Analysis of means (ANOM) is statistical technique which gives the quality assurance based on the situations. In this paper, an improved software defect prediction models (ISDPM) are used for predicting defects occur at the time of five phases such as analysis, planning, design, testing and maintenance. To improve the performance of the proposed methodology an order statistics is adopted for better prediction. The experiments are conducted on 2 synthetic projects that are used to analyze the defects.
Fuzzy Type Image Fusion Using SPIHT Image Compression TechniqueIJERA Editor
This paper presents a fuzzy type image fusion technique using Set Partitioning in Hierarchical Trees (SPIHT).
It is concluded that fusion with higher single levels provides better fusion quality. This technique can be used
for fusion of fuzzy images as well as multi model image fusion. The proposed algorithm is very simple, easy to
implement and could be used for real time applications. This is paper also provided comparatively studied
between proposed and previous existing technique and validation of the proposed algorithm as Peak Signal to
Noise Ratio (PSNR), Root Mean Square Error (RMSE).
CLASSIFIER SELECTION MODELS FOR INTRUSION DETECTION SYSTEM (IDS)ieijjournal1
Any abnormal activity can be assumed to be anomalies intrusion. In the literature several techniques and
algorithms have been discussed for anomaly detection. In the most of cases true positive and false positive
parameters have been used to compare their performance. However, depending upon the application a
wrong true positive or wrong false positive may have severe detrimental effects. This necessitates inclusion
of cost sensitive parameters in the performance. Moreover the most common testing dataset KDD-CUP-99
has huge size of data which intern require certain amount of pre-processing. Our work in this paper starts
with enumerating the necessity of cost sensitive analysis with some real life examples. After discussing
KDD-CUP-99 an approach is proposed for feature elimination and then features selection to reduce the
number of more relevant features directly and size of KDD-CUP-99 indirectly. From the reported
literature general methods for anomaly detection are selected which perform best for different types of
attacks. These different classifiers are clubbed to form an ensemble. A cost opportunistic technique is
suggested to allocate the relative weights to classifiers ensemble for generating the final result. The cost
sensitivity of true positive and false positive results is done and a method is proposed to select the elements
of cost sensitivity metrics for further improving the results to achieve the overall better performance. The
impact on performance trade of due to incorporating the cost sensitivity is discussed.
Application of Lifetime Models in Maintenance (Case Study: Thermal Electricit...iosrjce
IOSR Journal of Mathematics(IOSR-JM) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of mathemetics and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in mathematics. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Elevating Tactical DDD Patterns Through Object Calisthenics
I017144954
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 17, Issue 1, Ver. IV (Jan – Feb. 2015), PP 49-54
www.iosrjournals.org
DOI: 10.9790/0661-17144954
Burr Type III Software Reliability Growth Model
Ch. Smitha Chowdary(1), Dr. R. Satya Prasad(2), K. Sobhana(3)
(1) Research Scholar, Dept. of Computer Science, Krishna University, Machilipatnam, Andhra Pradesh, INDIA-521001
(2) Associate Professor, Dept. of Computer Science & Engineering, Acharya Nagarjuna University, Nagarjuna Nagar, INDIA-522510
(3) Research Scholar, Dept. of Computer Science, Krishna University, Machilipatnam, Andhra Pradesh, INDIA-521001
Abstract: A Software Reliability Growth Model (SRGM) is used to assess software reliability quantitatively, for tracking and measuring the growth of reliability. The potentiality of an SRGM is judged by its capability to fit the software failure data. In this paper we propose a Burr type III software reliability growth model based on a Non Homogeneous Poisson Process (NHPP) with time domain data. The Maximum Likelihood (ML) estimation method is used for finding the unknown parameters of the model on ungrouped data, and the goodness of fit of the model to the data is also evaluated. To assess the performance of the considered SRGM, we have carried out parameter estimation on real software failure data sets, and we present an analysis of goodness of fit and reliability for the given failure data sets.
Keywords: Burr type III, Goodness of fit, NHPP, ML estimation, Software Reliability, Time domain data.
I. Introduction
Software reliability is the probability of failure-free operation of software in a specified environment during a specified duration [1][2][3]. Over the decades, different statistical models have been discussed for assessing software reliability; Wood (1996), Pham (2005), Goel & Okumoto (1979), Satya Prasad (2007) and Satya Prasad & Geetha Rani (2011) are some examples [4][5][6][7][8]. The models applicable for
the assessment of software reliability are called Software Reliability Growth Models (SRGMs). An SRGM is a mathematical model of how software reliability improves as faults are detected and repaired [9]. One of the important classes of SRGMs that has been studied widely is the Non Homogeneous Poisson Process (NHPP), which forms one of the main classes of the existing SRGMs due to its mathematical tractability and wide applicability. NHPP based software reliability growth models have proved quite successful in practical software reliability engineering [3].
Software reliability can be estimated once the mean value function has been determined. To determine this mean value function, the model parameters can be estimated using Maximum Likelihood Estimation (MLE), and the resulting estimates can be computed numerically by the Newton-Raphson method.
The success of the mathematical modeling approach to reliability evaluation depends heavily upon the quality of the failure data collected and the goodness of fit of the model. If the selected model does not fit the collected software testing data reasonably well, we may expect low predictive ability from the model, and decisions based on its analysis may be far from optimal [10].
This paper presents the Burr type III model to analyze the reliability of a software system using time domain data. The layout of the paper is as follows: Section 2 gives the details of the formulation and interpretation of the model for the underlying NHPP. Section 3 describes the background of the Burr type III model. Section 4 discusses parameter estimation of the Burr type III model based on time domain data. Section 5 describes the techniques used for software failure data analysis on live data. Section 6 gives the performance analysis of the presented model, and Section 7 contains the conclusion.
II. NHPP Model
Software reliability models can be classified according to probabilistic assumptions about the type of failure process. In the first class, the failure process is represented by a Markov process, and the resultant model is a Markovian model. The second class consists of fault counting models, which describe the failure phenomenon by a stochastic process such as the Homogeneous Poisson Process (HPP), the Non Homogeneous Poisson Process (NHPP) or the Compound Poisson Process (CPP). Most of the failure count models are based upon NHPP and are described in the following lines.
A software system is subject to failures at random times, caused by errors present in the system. Let {N(t), t > 0} be a counting process representing the cumulative number of failures by time 't'. As there are no failures at t = 0, we have

N(0) = 0
The assumption is that the numbers of software failures during non overlapping time intervals do not affect each other. In other words, for any finite collection of times t1 < t2 < … < tn, the 'n' random variables N(t1), {N(t2) - N(t1)}, …, {N(tn) - N(tn-1)} are independent. This implies that the counting process {N(t), t > 0} has independent increments. Suppose m(t) represents the expected number of software failures by time 't'. The expected number of errors remaining in the system at any time is finite; hence m(t) is a bounded, non decreasing function of 't' with the following boundary conditions:
m(t) = 0 at t = 0, and m(t) → a as t → ∞.

Here the above mentioned 'a' is the expected number of software errors to be eventually detected. Let N(t) have a Poisson probability mass function with parameter m(t), i.e.
P[N(t) = n] = { [m(t)]^n / n! } e^(-m(t)), n = 0, 1, 2, …, ∞
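For instance, once m(t) has been evaluated at some time t, the probability of observing exactly n failures by t is an ordinary Poisson probability. A one-line check in Python (the values of m(t) and n here are illustrative assumptions of ours, not from the paper):

import math

m_t, n = 25.0, 20  # assumed illustrative values of m(t) and n
print(m_t ** n * math.exp(-m_t) / math.factorial(n))  # P[N(t) = n]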
Then the stochastic behavior of the software failure phenomenon can be described through the N(t) process. In this paper we consider m(t) as given by

m(t) = a(1 + t^(-c))^(-b) ---------------- (2.1)

where [m(t)/a] is the cumulative distribution function of the Burr type III distribution for the present choice.
Substituting m(t) into the Poisson probability mass function and letting t → ∞ gives

P[N(∞) = n] = lim_{t→∞} P[N(t) = n] = (a^n / n!) e^(-a)

This is also a Poisson model with mean 'a'. Let N̄(t) = N(∞) - N(t) be the number of errors remaining in the system at time 't'. Then

E[N̄(t)] = E[N(∞)] - E[N(t)] = a - m(t) = a[1 - (1 + t^(-c))^(-b)]
Let S_k be the time between the (k-1)th and kth failures of the software product, and let X_k be the time up to the kth failure. The probability that the time between the (k-1)th and kth failures exceeds 's', i.e. the software reliability function, is given by

R_{S_k}(s | X_{k-1} = x) = e^(-[m(x+s) - m(x)]) --------- (2.2)

This expression is referred to as the software reliability.
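To make these expressions concrete, here is a minimal Python sketch of the mean value function (2.1) and the reliability function (2.2); the function names and the illustrative parameter values are our own, not the paper's.

import math

def mean_value(t, a, b, c):
    # m(t) = a(1 + t^(-c))^(-b): expected cumulative failures by time t
    return a * (1.0 + t ** (-c)) ** (-b)

def reliability(s, x, a, b, c):
    # R(s | x) = exp(-[m(x+s) - m(x)]): probability of no failure in (x, x+s]
    return math.exp(-(mean_value(x + s, a, b, c) - mean_value(x, a, b, c)))

# Illustrative values only:
print(mean_value(2.5, a=30.0, b=1.5, c=1.8))
print(reliability(0.5, 2.5, a=30.0, b=1.5, c=1.8))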
III. Background Theory
This section presents the theory that underlies the Burr Type III based NHPP model. The Burr Type XII distribution covers a wide range of skewness and kurtosis, which may be used to fit any given set of unimodal data [11]. The 'reciprocal Burr' (Burr Type III) covers a wide region that includes the region covered by Burr Type XII. Burr suggested a number of forms of cumulative distribution functions (cdf) that would be useful for fitting data [12]. The Burr Type XII distribution function F(x) is given as
F(x) = 1 - (1 + x^c)^(-b), x > 0, c > 0, b > 0 ------ (3.1)

Here both 'c' and 'b' are shape parameters.
Let X be the random variable with cdf given by equation (3.1) and consider the transformation t=1/X.
F(t) = (1 + t^(-c))^(-b) --------- (3.2)

which is one of the many forms of distribution functions (Burr Type III) given by Burr.
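As a quick sanity check of the transformation t = 1/X, the following sketch (our own, with arbitrary shape parameters) samples from the Burr Type XII distribution (3.1) by inverse-cdf sampling and verifies empirically that the reciprocals follow the Burr Type III cdf (3.2).

import math, random

def burr12_sample(b, c):
    # Invert F(x) = 1 - (1 + x^c)^(-b): x = ((1-u)^(-1/b) - 1)^(1/c)
    u = random.random()
    return ((1.0 - u) ** (-1.0 / b) - 1.0) ** (1.0 / c)

b, c, t = 1.5, 2.0, 0.8
n = 200000
empirical = sum(1.0 / burr12_sample(b, c) <= t for _ in range(n)) / n
theoretical = (1.0 + t ** (-c)) ** (-b)  # Burr Type III cdf (3.2)
print(empirical, theoretical)  # the two values should agree closely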
IV. Parameters Of Burr Type III Based On Time Domain Data
Burr [12] introduced twelve different forms of cumulative distribution functions for modeling data. The task of building a mathematical model is incomplete until the unknown model parameters are estimated and validated on actual software failure data sets. In this section we develop expressions to estimate the parameters of the Burr type III model based on time domain data. Parameter estimation is of primary importance for software reliability prediction, and it can be achieved by applying MLE, the most widely used estimation technique. A set of failure data is usually collected in one of two common ways, as time domain data or as interval domain data; here the failure data is collected as time domain data.
The mean value function of the Burr type III model, as given in equation (2.1), is

m(t) = a(1 + t^(-c))^(-b), t > 0, a, b, c > 0

To assess the software reliability, 'a', 'b' and 'c' must be known, or else they must be estimated from software failure data. Expressions for estimating 'a', 'b' and 'c' for the Burr type III model are derived below. Assume the data give the occurrence times of successive failures, i.e., realizations of the random variables Tj for j = 1, 2, …, n with 0 < t1 ≤ t2 ≤ … ≤ tn; these can be converted into times between failures xi = ti - ti-1 for i = 1, 2, …, n. Given the recorded failure times, the log likelihood function (LLF) takes the following form:
LLF = Σ_{i=1}^{n} log m'(t_i) - m(t_n) --------- (4.1)
Log L = Σ_{i=1}^{n} log[ a b c t_i^(-c-1) (1 + t_i^(-c))^(-(b+1)) ] - a(1 + t_n^(-c))^(-b) -------- (4.2)
Log L = n log a + n log b + n log c - (c+1) Σ_{i=1}^{n} log t_i - (b+1) Σ_{i=1}^{n} log(1 + t_i^(-c)) - a(1 + t_n^(-c))^(-b) ----- (4.3)
Accordingly, the parameters 'a', 'b' and 'c' are the solutions of the equations obtained by equating the partial derivatives of Log L to zero. Setting ∂Log L/∂a = 0 gives

a = n(1 + t_n^(-c))^b --------- (4.4)
Setting ∂Log L/∂b = 0 gives

b = n / [ Σ_{i=1}^{n} log(1 + t_i^(-c)) - n log(1 + t_n^(-c)) ] ----------- (4.5)
The parameter 'c' is estimated by the iterative Newton-Raphson method using

c_{i+1} = c_i - g(c_i) / g'(c_i)

where g(c) and g'(c) are expressed as follows.
g(c) = ∂Log L/∂c = n/c - Σ_{i=1}^{n} log t_i + (b+1) Σ_{i=1}^{n} [ t_i^(-c) log t_i / (1 + t_i^(-c)) ] - a b t_n^(-c) log t_n (1 + t_n^(-c))^(-(b+1)) = 0 ------ (4.6)
g'(c) = ∂^2 Log L/∂c^2 = -n/c^2 - (b+1) Σ_{i=1}^{n} [ t_i^(-c) (log t_i)^2 / (1 + t_i^(-c))^2 ] + a b t_n^(-c) (log t_n)^2 (1 + t_n^(-c))^(-(b+2)) [ (1 + t_n^(-c)) - (b+1) t_n^(-c) ] ------------ (4.7)
The value of 'c' in equations (4.6) and (4.7) above can be obtained using the Newton-Raphson iterative method; solving these equations yields the point estimate of the parameter 'c'.
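The estimation scheme of equations (4.4)-(4.7) can be sketched numerically as follows. This is a minimal illustration, not the authors' code: it profiles 'a' and 'b' out via (4.4) and (4.5), and runs a Newton-Raphson iteration on 'c' using central finite differences of the log likelihood (4.3) in place of the closed forms (4.6) and (4.7).

import math

def b_hat(c, t):
    # Equation (4.5): b expressed in terms of c and the failure times t[0..n-1]
    n = len(t)
    s = sum(math.log(1.0 + ti ** (-c)) for ti in t)
    return n / (s - n * math.log(1.0 + t[-1] ** (-c)))

def a_hat(c, t):
    # Equation (4.4): a expressed in terms of c (with b from (4.5))
    return len(t) * (1.0 + t[-1] ** (-c)) ** b_hat(c, t)

def log_lik(c, t):
    # Profile log likelihood: equation (4.3) with a and b substituted
    n, a, b = len(t), a_hat(c, t), b_hat(c, t)
    return (n * (math.log(a) + math.log(b) + math.log(c))
            - (c + 1) * sum(math.log(ti) for ti in t)
            - (b + 1) * sum(math.log(1.0 + ti ** (-c)) for ti in t)
            - a * (1.0 + t[-1] ** (-c)) ** (-b))

def estimate_c(t, c0=1.0, h=1e-5, tol=1e-8, max_iter=100):
    # Newton-Raphson on g(c) = dLogL/dc, approximated by finite differences
    c = c0
    for _ in range(max_iter):
        g = (log_lik(c + h, t) - log_lik(c - h, t)) / (2.0 * h)
        gp = (log_lik(c + h, t) - 2.0 * log_lik(c, t) + log_lik(c - h, t)) / h ** 2
        step = g / gp
        c -= step
        if abs(step) < tol:
            break
    return c

Fed with the cumulative failure times of Table 1, a routine of this kind should converge to values in the neighbourhood of the estimates reported in Table 2, although Newton-Raphson is sensitive to the starting value c0.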
V. Data Validity Analysis
The set of software errors analyzed here is borrowed from a software development project, as published in Pham [13, 14]. The data set is truncated into different proportions and used for estimating the parameters of the proposed model. Table 1 shows the times between failures, presented according to the size of the data set.
NTDS Data
The data set consists of 26 failures in 250 days. During the production phase 26 software errors are
found and during the test phase five additional errors are found. During the user phase one error is observed and
two more errors are noticed in a subsequent test phase indicating that a network of the module has taken place
after the user error is found. In this paper, a numerical conversion of data (Failure Time (hours)*0.01) is done in
order to facilitate the parameter estimation [15] [16] [17].
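For clarity, the first few rows of Table 1 can be reproduced from the raw inter-failure times; the scaled column is simply the cumulative time multiplied by 0.01 (this short Python sketch reflects our reading of the conversion):

s_k = [9, 12, 11, 4, 7, 2, 5]  # first inter-failure times from Table 1
cumulative, total = [], 0
for sk in s_k:
    total += sk
    cumulative.append(total)
scaled = [round(0.01 * x, 2) for x in cumulative]
print(cumulative)  # [9, 21, 32, 36, 43, 45, 50]
print(scaled)      # [0.09, 0.21, 0.32, 0.36, 0.43, 0.45, 0.5]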
Table-1: NTDS Data Set
Failure Number n | Time between Failures S_k (days) | Cumulative Time X_n = Σ S_k (days) | Failure Time (hours)*0.01
Production (Checkout) Phase
1 9 9 0.09
2 12 21 0.21
3 11 32 0.32
4 4 36 0.36
5 7 43 0.43
6 2 45 0.45
7 5 50 0.5
8 8 58 0.58
9 5 63 0.63
10 7 70 0.7
11 1 71 0.71
12 6 77 0.77
13 1 78 0.78
14 9 87 0.87
15 4 91 0.91
16 1 92 0.92
17 3 95 0.95
18 3 98 0.98
19 6 104 1.04
20 1 105 1.05
21 11 116 1.16
22 33 149 1.49
23 7 156 1.56
24 91 247 2.47
25 2 249 2.49
26 1 250 2.5
Test Phase
27 87 337 3.37
28 47 384 3.84
29 12 396 3.96
30 9 405 4.05
31 135 540 5.4
User Phase
32 258 798 7.98
Test Phase
33 16 814 8.14
34 35 849 8.49
Solving the equations in Section IV by the Newton-Raphson (N-R) method for the NTDS software failure data, the iterative solutions for the MLEs of a, b and c are as below:

a = 34.465706, b = 1.763647, c = 1.810222
Table-2: Parameters Estimated through MLE
Data set | Number of samples | a | b | c
NTDS | 26 | 34.465706 | 1.763647 | 1.810222
AT&T | 22 | 26.839829 | 1.658692 | 1.000000
SONATA | 30 | 79.831359 | 6.742810 | 0.602440
XIE | 30 | 33.310426 | 2.270095 | 1.371974
IBM | 15 | 20.624785 | 1.711630 | 1.447815
Here, these three values can be accepted as the MLEs of 'a', 'b' and 'c'. The estimate of the reliability function from equation (2.2), at time x = 250 days and for a further s = 50 days, is given by

R_{S_k}(s | X_{k-1} = x) = e^(-[m(x+s) - m(x)]) = e^(-[m(250+50) - m(250)]) = 0.999221
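As a numerical cross-check, evaluating (2.2) with the NTDS estimates from Table 2, using the mean_value and reliability helpers sketched after equation (2.2) with times taken in days, reproduces this figure:

a, b, c = 34.465706, 1.763647, 1.810222  # NTDS row of Table 2
print(reliability(50.0, 250.0, a, b, c))  # prints approximately 0.999221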
VI. Performance Analysis For Goodness Of Fit
The potentiality of an SRGM is judged by its capability to fit the software failure data, where the term goodness of fit denotes the question "how well does the mathematical model fit the data?" Experiments on a set of actual software failure data have been performed to validate the model under study and to assess its performance. By the criterion adopted here, the considered model fits best the data set whose Log Likelihood is most negative. The application of the considered distribution function and its Log Likelihood to different data sets collected from real world failure data is given below in Table 3.
Table-3: Log likelihood on different data sets
Data set | Log L (MLE) | Reliability at t_n + 50 (MLE)
NTDS | -168.070414 | 0.999221
AT&T | -160.264155 | 0.995543
SONATA | -218.086332 | 0.917342
XIE | -256.630664 | 0.999247
IBM | -109.246261 | 0.998116
VII. Conclusion
In this paper we propose a Burr type III software reliability growth model. This model is useful primarily for estimating and monitoring software reliability, viewed as a measure of software quality. Equations are developed to obtain the maximum likelihood estimates of the parameters based on time domain data. The proposed model has been validated and evaluated on actual software failure data cited from real software development projects and compared with an existing NHPP based model. The results are encouraging in terms of goodness of fit and predictive validity, owing to the model's applicability and flexibility. To validate the proposed approach, parameter estimation was carried out on the collected data sets [18][19]. The Xie data set has the best fit among all the data sets, as it has the most negative value of the log likelihood. The reliability figures for all the data sets are given in Table 3; the reliability of the model over the Xie data is the highest among the data sets considered.
References
[1]. Musa, J.D. (1998), "Software Reliability Engineering", McGraw-Hill.
[2]. Lyu, M.R. (1996), "Handbook of Software Reliability Engineering", McGraw-Hill, New York.
[3]. Musa, J.D., Iannino, A. and Okumoto, K. (1987), "Software Reliability: Measurement, Prediction, Application", McGraw-Hill, New York.
[4]. Wood, A. (1996), "Predicting Software Reliability", IEEE Computer, 2253-2264.
[5]. Pham, H. (2005), "A Generalized Logistic Software Reliability Growth Model", Opsearch, Vol. 42, No. 4, 332-331.
[6]. Goel, A.L. and Okumoto, K. (1979), "Time-dependent error-detection rate model for software reliability and other performance measures", IEEE Transactions on Reliability, R-28, 206-211.
[7]. Satya Prasad, R. (2007), "Half logistic software reliability growth model", Ph.D. Thesis, ANU, India.
[8]. Satya Prasad, R. and Geetha Rani, N. (2011), "Pareto type II software reliability growth model", International Journal of Software Engineering, Volume 2, Issue 4, 81-86.
[9]. Quadri, S.M.K. and Ahmad, N. (2010), "Software reliability growth modelling with new modified Weibull testing-effort and optimal release policy", International Journal of Computer Applications, Vol. 6, No. 12.
[10]. Xie, M., Yang, B. and Gaudoin, O. (2001), "Regression goodness-of-fit test for software reliability model validation", ISSRE and Chillarege Corp.
[11]. Tadikamalla, Pandu R. (1980), "A Look at the Burr and Related Distributions", International Statistical Review / Revue Internationale de Statistique, 48(3), 337-344.
[12]. Burr, I.W. (1942), "Cumulative Frequency Functions", Annals of Mathematical Statistics, 13, 215-232.
[13]. Pham, H. (2003), "Handbook of Reliability Engineering", Springer.
[14]. Pham, H. (2006), "System Software Reliability", Springer.
[15]. Barraza, N.R. (2013), "Parameter Estimation for the Compound Poisson Software Reliability Model", International Journal of Software Engineering and Its Applications, Vol. 7, No. 1, January, 137-148. http://www.sersc.org/journals/IJSEIA/vol7_no1_2013/11.pdf
[16]. Inayat, M. Asim Noor and Z. Inayat (2012), "Successful Product-based Agile Software Development without Onsite Customer: An Industrial Case Study", International Journal of Software Engineering and Its Applications, Vol. 6, No. 2, April, 1-14. http://www.sersc.org/journals/IJSEIA/vol6_no2_2012/1.pdf
[17]. Najadat, H. and Alsmadi, I. (2012), "Enhance Rule Based Detection for Software Fault Prone Modules", International Journal of Software Engineering and Its Applications, Vol. 6, No. 1, January, 75-86. http://www.serc.org/journals/IJSEIA/vol6_no1_2012/6.pdf
[18]. Xie, M., Goh, T.N. and Ranjan, P. (2002), "Some effective control chart procedures for reliability monitoring", Reliability Engineering and System Safety, 77, 143-150.
[19]. Ashoka, M. (2010), "Sonata Software Limited" data set, Bangalore.