This paper introduces an efficient and robust training mechanism for a neural network that can be used to test software functionality. The traditional two-phase neural network setup is used, consisting of a training phase and an evaluation phase. Test cases are trained in the first phase and then serve as a baseline against which untrained test cases are evaluated. The test oracle measures the deviation between the outputs of untrained and trained test cases and issues a final decision. Our framework can be applied to systems where the number of test cases outnumbers the functionalities, or where the system under test is too complex. It can also be applied to test case development when the modules of a system become tedious to retest after modification.
ANALYSIS OF MACHINE LEARNING ALGORITHMS WITH FEATURE SELECTION FOR INTRUSION ... (IJNSA Journal)
In recent times, various machine learning classifiers have been used to improve network intrusion detection, and researchers have proposed many intrusion detection solutions in the literature. However, these classifiers are often trained on older datasets, which limits their detection accuracy, so there is a need to train them on recent data. In this paper, UNSW-NB15, a recent dataset, is used to train machine learning classifiers. K-Nearest Neighbors (KNN), Stochastic Gradient Descent (SGD), Random Forest (RF), Logistic Regression (LR), and Naïve Bayes (NB) classifiers are selected from the taxonomy of lazy and eager learners. Chi-Square, a filter-based feature selection technique, is applied to the UNSW-NB15 dataset to remove irrelevant and redundant features. Classifier performance is measured in terms of Accuracy, Mean Squared Error (MSE), Precision, Recall, F1-Score, True Positive Rate (TPR), and False Positive Rate (FPR), both with and without feature selection, and a comparative analysis of the classifiers is carried out.
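The filter-based Chi-Square selection this abstract describes can be sketched in plain Python. This is an illustrative toy, not the paper's implementation: the statistic below assumes categorical feature values, and the dataset layout and `select_top_k` helper are invented for the example.

```python
from collections import defaultdict

def chi_square_score(feature_values, labels):
    """Chi-square statistic between one categorical feature and the class labels."""
    obs = defaultdict(int)            # observed count per (feature value, class)
    feat_totals = defaultdict(int)
    class_totals = defaultdict(int)
    n = len(labels)
    for f, c in zip(feature_values, labels):
        obs[(f, c)] += 1
        feat_totals[f] += 1
        class_totals[c] += 1
    score = 0.0
    for f in feat_totals:
        for c in class_totals:
            # Expected count under independence of feature and class
            expected = feat_totals[f] * class_totals[c] / n
            diff = obs[(f, c)] - expected
            score += diff * diff / expected
    return score

def select_top_k(dataset, labels, k):
    """Rank feature columns by chi-square score and keep the k highest."""
    n_features = len(dataset[0])
    scores = [(chi_square_score([row[j] for row in dataset], labels), j)
              for j in range(n_features)]
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]
```

A feature perfectly associated with the label scores high; an independent feature scores zero, so it is dropped first.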
A Defect Prediction Model for Software Product based on ANFIS (IJSRD)
Artificial intelligence techniques are increasingly involved in classification and prediction tasks such as environmental monitoring, stock market forecasting, biomedical diagnosis, and software engineering. However, the challenge of selecting training criteria for the design of AI prediction models remains unresolved. This work focuses on developing a defect prediction mechanism using the KC1 software metric dataset. A subtractive clustering approach is used to generate a fuzzy inference system (FIS). The FIS rules are generated at different radii of influence of the input attribute vectors, and the rules are then refined with the ANFIS technique to predict the number of defects in a software project using a fuzzy logic system.
The comparison of the text classification methods to be used for the analysis... (ijcsit)
Text classification is used to prevent the leakage of highly important institutional data through unauthorized channels. The results of the text classification process should be integrated into the DLP (Data Loss Prevention) architecture immediately: data flowing through the network requires instant control, and the flow of sensitive data must be blocked. Machine learning methods are required to perform the text classification that will be integrated into the DLP architecture. Experimental results comparing the text classification methods to be used in an interface written on the ICAP protocol were prepared in the networked architecture developed for the DLP system, and the text classification method to be used for instant control of sensitive data was selected. The DLP text classification architecture developed here helps choose the classification method by examining data in motion. The chosen method is applied over the ICAP protocol, providing analysis of sensitive data and confidentiality.
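A text classifier of the kind such a DLP pipeline would call could, for example, be a multinomial Naive Bayes over bag-of-words counts. The sketch below is a generic stand-in, not the method the paper selected; the class name and the sensitive/public example documents are invented.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Multinomial Naive Bayes over whitespace-tokenized text."""

    def fit(self, documents, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)   # class -> word -> count
        self.vocab = set()
        for doc, label in zip(documents, labels):
            for word in doc.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, document):
        words = document.lower().split()
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # Log prior plus Laplace-smoothed log likelihoods
            score = math.log(self.class_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in words:
                count = self.word_counts[label][word] + 1
                score += math.log(count / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

In a DLP setting the "sensitive" class decision would then gate whether the data in motion is allowed through the ICAP interface.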
Intrusion Detection System Based on K-Star Classifier and Feature Set Reduction (IOSR Journals)
Abstract: Network security and Intrusion Detection Systems (IDSs) form an important security-related research area. This paper applies the K-star algorithm with filtering analysis to build a network intrusion detection system. For our experimental analysis, and as a case study, we used the NSL-KDD dataset, a modified version of the KDD Cup 1999 intrusion detection benchmark dataset. With a split of 66.0% for the training set and the remainder for the testing set, a two-class classification was implemented. WEKA, a Java-based open-source suite of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and that it takes less learning time than other existing approaches for efficient network intrusion detection.
Keywords: Information Gain, Intrusion Detection System, Instance-based classifier, K-Star, Weka.
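K-star is an entropy-based instance learner; as a rough stand-in for the instance-based setup and the 66%/34% split described above, here is a hedged pure-Python sketch using a plain nearest-neighbor rule (not K-star's entropic distance) with a shuffled split. The toy rows, labels, and seed are invented.

```python
import random

def train_test_split(rows, labels, train_fraction=0.66, seed=0):
    """Shuffle and split into train/test portions, as in the 66%/34% split."""
    indices = list(range(len(rows)))
    random.Random(seed).shuffle(indices)
    cut = int(len(rows) * train_fraction)
    train, test = indices[:cut], indices[cut:]
    return ([rows[i] for i in train], [labels[i] for i in train],
            [rows[i] for i in test], [labels[i] for i in test])

def nearest_neighbor_predict(train_rows, train_labels, query):
    """Instance-based prediction: the label of the closest stored instance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train_rows)), key=lambda i: dist(train_rows[i], query))
    return train_labels[best]
```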
A novel ensemble modeling for intrusion detection system (IJECEIAES)
The vast increase in data carried by internet services has made computer systems more vulnerable and harder to protect from malicious attacks, so intrusion detection systems (IDSs) must become more capable at monitoring intrusions. We therefore build an effective IDS architecture that employs a simple classification model and achieves low false alarm rates and high accuracy. Notably, IDSs endure enormous volumes of traffic containing redundant and irrelevant features, which degrade IDS performance; good feature selection reduces unrelated and redundant features and attains better classification accuracy. This paper proposes a novel ensemble model for IDS based on two algorithms: Fuzzy Ensemble Feature Selection (FEFS) and Fusion of Multiple Classifiers (FMC). FEFS unifies five feature scores, obtained from feature-class distance functions and aggregated with a fuzzy union operation. FMC fuses three classifiers and works via an ensemble decision function. Experiments on the KDD Cup 99 dataset show that the proposed system outperforms well-known methods such as Support Vector Machines (SVMs), K-Nearest Neighbors (KNN), and Artificial Neural Networks (ANNs). These examinations clearly confirm the value of ensemble methodology for modeling IDSs; the system is robust and efficient.
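The FEFS/FMC internals are not spelled out in the abstract, but the two aggregation ideas it names, a fuzzy union of feature scores and a plurality-vote fusion of classifiers, can be sketched generically. Both helpers below are illustrative assumptions, not the paper's code.

```python
from collections import Counter

def fuzzy_union(score_lists):
    """Fuzzy union (elementwise max) of several per-feature score lists in [0, 1]."""
    return [max(scores) for scores in zip(*score_lists)]

def fuse_by_majority(classifiers, sample):
    """Fusion of multiple classifiers: each votes, the plurality label wins."""
    votes = [clf(sample) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]
```

With real feature-class distance functions, `fuzzy_union` would merge their normalized scores into one ranking; `fuse_by_majority` then combines the decisions of the trained classifiers.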
Comparative Performance Analysis of Machine Learning Techniques for Software ... (csandit)
Machine learning techniques can be used to analyse data from different perspectives and enable developers to retrieve useful information, and they have proven useful for software bug prediction. In this paper, a comparative performance analysis of different machine learning techniques is carried out for software bug prediction on publicly available data sets. The results show that most of the machine learning methods performed well on the software bug datasets.
Implementation of reducing features to improve code change based bug predicti... (eSAT Journals)
Abstract: Today we encounter many bugs in software because of variations in software and hardware technologies. Bugs are software faults that pose a severe challenge to system reliability and dependability, and bug prediction is a convenient approach to identifying them. Machine learning classifiers have recently been developed to flag the presence of a bug in a source code file. Because of the huge number of machine-learned features, current classifier-based bug prediction has two major problems: (i) inadequate precision for practical use and (ii) slow prediction time. In this paper we use two techniques: first, the cos-triage algorithm, which attempts to improve accuracy and lower the cost of bug prediction, and second, feature selection methods that eliminate less significant features. Reducing features improves the quality of the knowledge extracted and also speeds up computation. Keywords: Efficiency, Bug Prediction, Classification, Feature Selection, Accuracy
In a power plant with a Distributed Control System (DCS), process parameters are continuously stored in databases at discrete intervals. The data in these databases may not appear to contain valuable relational information, but in practice such relations exist. A large number of process parameter values change with time in a power plant, and these parameters are part of rules framed by domain experts for the expert system. As the parameters change, there is a high possibility of forming new rules from the dynamics of the process itself. We present an efficient algorithm that generates all significant rules from the real data. Association-based algorithms were compared, and the best-suited algorithm for this process application was selected. The application of the learning system is studied in a power plant domain, and a SCADA interface was developed to acquire online plant data.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Performance analysis of binary and multiclass models using azure machine lear... (IJECEIAES)
Network data is expanding at an alarming rate, and the sophisticated attack tools used by hackers create a volatile cyber threat landscape. Traditional machine learning models for network intrusion detection emphasize improving attack detection rates and reducing false alarms, but time efficiency is often overlooked. To address this limitation, a modern solution is presented using a Machine-Learning-as-a-Service platform. The proposed work analyses the performance of eight two-class and three multiclass algorithms using UNSW-NB15, a modern intrusion detection dataset; 82,332 testing samples were used to evaluate the algorithms. The proposed two-class decision forest model exhibited 99.2% accuracy and took 6 seconds to learn 175,341 network instances. A multiclass classification task was also undertaken, in which attack types such as generic, exploits, shellcode, and worms were classified with recall of 99%, 94.49%, 91.79%, and 90.9% respectively by the multiclass decision forest model, which also led the others in training and execution time.
Artificial Neural Network and Multi-Response Optimization in Reliability Meas... (inventionjournals)
The neural network is an important tool for reliability analysis, including the estimation of reliability or utility functions that are too complicated to express analytically for large or complex systems. It has been demonstrated that the neural network achieves a significant improvement in parameter estimation accuracy over the traditional chi-square test. Many parameters of a neural network must be determined when training on a dataset, since different algorithm parameter setups affect estimation performance in both accuracy and computational efficiency. In this paper, neural network training is used to estimate the utility function for the parallel-series redundancy allocation problem, and a weighted principal-component-based multi-response optimization method is applied to find the optimal setting of the neural network parameters, so that training error and computing time are minimized simultaneously.
EMPIRICAL APPLICATION OF SIMULATED ANNEALING USING OBJECT-ORIENTED METRICS TO... (ijcsa)
This work uses a Simulated Annealing algorithm to optimize the parameters of an effort estimation model, which can reduce the difference between the actual and estimated effort used in model development. The model has been tested on an object-oriented dataset obtained from NASA for research purposes. Model equation parameters were found from the data set, consisting of two independent variables, Lines of Code (LOC) and one further attribute, with software development effort (DE) as the dependent variable. The results were compared with the author's earlier work on Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), and it was observed that the developed SA-based model provides better estimates of software development effort than ANN and ANFIS.
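The core simulated annealing loop used for such parameter optimization can be sketched generically: perturb the current parameters, always accept improvements, and accept worse moves with a probability that shrinks as the temperature cools. The cost function, step size, and cooling schedule below are illustrative assumptions, not the paper's settings.

```python
import math
import random

def simulated_annealing(cost, initial, step=0.5, t_start=10.0, t_end=1e-3,
                        cooling=0.95, seed=0):
    """Minimize cost(params) by random perturbation with
    temperature-controlled acceptance of worse moves."""
    rng = random.Random(seed)
    current = list(initial)
    current_cost = cost(current)
    best, best_cost = list(current), current_cost
    t = t_start
    while t > t_end:
        candidate = [p + rng.uniform(-step, step) for p in current]
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        # Metropolis criterion: always take improvements, sometimes take worse
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current, current_cost = candidate, candidate_cost
        if current_cost < best_cost:
            best, best_cost = list(current), current_cost
        t *= cooling
    return best, best_cost
```

Here `cost` could be, for instance, the squared error of a hypothetical linear effort model `a * LOC + b` against historical project data.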
ESTIMATING HANDLING TIME OF SOFTWARE DEFECTS (csandit)
The problem of accurately predicting handling time for software defects is of great practical
importance. However, it is difficult to suggest a practical generic algorithm for such estimates,
due in part to the limited information available when opening a defect and the lack of a uniform
standard for defect structure. We suggest an algorithm to address these challenges that is
implementable over different defect management tools. Our algorithm uses machine learning
regression techniques to predict the handling time of defects based on past behaviour of similar
defects. The algorithm relies only on a minimal set of assumptions about the structure of the
input data. We show how an implementation of this algorithm predicts defect handling time with promising accuracy.
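One simple regression technique that matches the "past behaviour of similar defects" idea is k-nearest-neighbor regression over defect feature vectors. The sketch below is a hedged illustration, not the paper's algorithm; the numeric feature encoding and the choice of `k` are assumptions.

```python
def knn_regress(past_features, past_times, query, k=3):
    """Predict handling time as the mean time of the k most similar past defects."""
    def dist(a, b):
        # Squared Euclidean distance between two numeric feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(range(len(past_features)),
                    key=lambda i: dist(past_features[i], query))
    nearest = ranked[:k]
    return sum(past_times[i] for i in nearest) / len(nearest)
```

This relies only on being able to turn a defect into a fixed-length numeric vector, matching the paper's goal of a minimal set of assumptions about defect structure.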
Model based test case prioritization using neural network classification (cseij)
Model-based testing of real-life software systems often requires a large number of tests, not all of which can be run exhaustively due to time and cost constraints. It is therefore necessary to prioritize the test cases according to the importance the tester perceives. In this paper, this problem is addressed by extending our previous study: a classification approach is applied to its results, and a functional relationship is established between test case prioritization group membership and two attributes, importance index and frequency, for all events belonging to a given group. For classification, a neural network (NN) is preferred, and a data set obtained from our study for all test cases is classified using a multilayer perceptron (MLP) NN. The classification results for a commercial test prioritization application show high classification accuracy of about 96%, and acceptable test prioritization performance is achieved.
Performance Comparison of Machine Learning Algorithms (Dinusha Dilanka)
This paper compares the performance of two classification algorithms. It is useful to differentiate algorithms by computational performance rather than classification accuracy alone: although classification accuracy between the algorithms is similar, computational performance can differ significantly and can affect the final results. The objective of this paper is therefore to perform a comparative analysis of two machine learning algorithms, K-Nearest Neighbor classification and Logistic Regression. A large dataset of 7,981 data points and 112 features is considered, and the performance of the above-mentioned algorithms is examined. The processing time and accuracy of the different machine learning techniques are estimated on the collected data set, with 60% used for training and the remaining 40% for testing. The paper is organized as follows. Section I contains the introduction and background analysis of the research; Section II, the problem statement. Section III briefly describes our application, the data analysis process, the testing environment, and the methodology of our analysis. Section IV comprises the results of the two algorithms. Finally, the paper concludes with a discussion of future research directions that eliminate the problems in the current research methodology.
Neural network based numerical digits recognition using NNT in MATLAB (ijcses)
Artificial neural networks are models inspired by the human nervous system that are capable of learning. One important application of artificial neural networks is character recognition, which finds use in areas such as banking, security products, hospitals, and robotics. This paper describes a system that recognizes an English numeral given by the user, having already been trained on the features of the numbers to be recognized using the NNT (Neural Network Toolbox). The system has a neural network at its core, which is first trained on a database: training extracts the features of the English numbers and stores them in the database. The next phase of the system is to recognize the number given by the user; its features are extracted, compared with the feature database, and the recognized number is displayed.
ANALYSIS AND COMPARISON STUDY OF DATA MINING ALGORITHMS USING RAPIDMINER (IJCSEA Journal)
A comparison study of algorithms is very much required before implementing them for the needs of any organization. Comparisons of algorithms depend on various parameters such as data frequency, types of data, and relationships among the attributes in a given data set. A number of learning and classification algorithms are available to analyse data, learn patterns, and categorize it, but the problem is to find the best algorithm for the problem and the desired output. The desired result has always been higher accuracy in predicting future values or events from the given dataset. The algorithms taken for this comparison study are Neural Net, SVM, Naïve Bayes, BFT, and Decision Stump. These are among the most influential data mining algorithms in the research community, widely used in the field of knowledge discovery and data mining.
Artificial-intelligence-based pattern recognition is one of the most important tools in process control for identifying process problems. The objective of this study was to evaluate the relative performance of a feature-based recognizer compared with a raw-data-based recognizer. The study focused on recognition of seven commonly researched patterns plotted on the quality chart. The artificial-intelligence-based pattern recognizer trained on the three selected statistical features performed significantly better than the raw-data-based recognizer.
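The abstract does not name its three statistical features; a plausible illustrative trio for a chart window is the mean, the standard deviation, and the least-squares slope, computed as follows. These specific features are an assumption for the example, not the paper's selection.

```python
def chart_features(series):
    """Three simple statistical features of a control-chart window:
    mean, (population) standard deviation, and least-squares slope."""
    n = len(series)
    mean = sum(series) / n
    variance = sum((v - mean) ** 2 for v in series) / n
    std = variance ** 0.5
    # Slope of the least-squares line through (0, y0), (1, y1), ...
    t_mean = (n - 1) / 2
    num = sum((t - t_mean) * (v - mean) for t, v in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    return mean, std, slope
```

A feature-based recognizer would feed these three numbers, rather than the raw window, into the classifier, which is what makes the comparison in the study possible.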
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG... (IAEME Publication)
This paper presents an approach based on applying an aggregated predictor, formed from multiple versions of a multilayer neural network with a back-propagation optimization algorithm, to help the engineer obtain a list of the most appropriate well-test interpretation models for a given set of pressure/production data. The proposed method consists of three stages: (1) data decorrelation through principal component analysis, to reduce the covariance between the variables and the dimension of the input layer of the artificial neural network; (2) bootstrap replicates of the learning set, where the data is repeatedly sampled with a random split into training sets and these are used as new learning sets; and (3) automatic reservoir model identification through the aggregated predictor, formed by a plurality vote when predicting a new class. The method is described in detail to ensure successful replication of the results. The required training and test datasets were generated using analytical solution models; 600 samples were used: 300 for training, 100 for cross-validation, and 200 for testing. Different network structures were tested during this study to arrive at an optimum network design. We note that the single-net methodology always brings about confusion in selecting the correct model, even though the training results for the constructed networks are close to 1. We also note that principal component analysis is an effective strategy for reducing the number of input features, simplifying the network structure, and lowering the training time of the ANN. The results obtained show that the proposed model performs better when predicting new data, with a coefficient of correlation of approximately 95% compared with 80% for a previous approach; the combination of PCA and ANN is more stable and determines more accurate results with less computational complexity than was previously feasible. Clearly, the aggregated predictor is more stable and shows fewer bad classes than the previous approach.
Content-Based Image Retrieval (CBIR) systems have been used for the searching of relevant images in various research areas. In CBIR systems features such as shape, texture and color are used. The extraction of features is the main step on which the retrieval results depend. Color features in CBIR are used as in the color histogram, color moments, conventional color correlogram and color histogram. Color space selection is used to represent the information of color of the pixels of the query image. The shape is the basic characteristic of segmented regions of an image. Different methods are introduced for better retrieval using different shape representation techniques; earlier the global shape representations were used but with time moved towards local shape representations. The local shape is more related to the expressing of result instead of the method. Local shape features may be derived from the texture properties and the color derivatives. Texture features have been used for images of documents, segmentation-based recognition,and satellite images. Texture features are used in different CBIR systems along with color, shape, geometrical structure and sift features.
The cyber attacks have become most prevalent in the past few years. During this time, attackers have discovered new vulnerabilities to carry out malicious activities on the internet. Both the clients and the servers have been victimized by the attackers. Clickjacking is one of the attacks that have been adopted by the attackers to deceive the innocuous internet users to initiate some action. Clickjacking attack exploits one of the vulnerabilities existing in the web applications. This attack uses a technique that allows cross domain attacks with the help of userinitiated clicks and performs unintended actions. This paper traces out the vulnerabilities that make a website vulnerable to clickjacking attack and proposes a solution for the same.
Performance Analysis of Audio and Video Synchronization using Spreaded Code D...Eswar Publications
The audio and video synchronization plays an important role in speech recognition and multimedia communication. The audio-video sync is a quite significant problem in live video conferencing. It is due to use of various hardware components which introduces variable delay and software environments. The objective of the synchronization is used to preserve the temporal alignment between the audio and video signals. This paper proposes the audio-video synchronization using spreading codes delay measurement technique. The performance of the proposed method made on home database and achieves 99% synchronization efficiency. The audio-visual
signature technique provides a significant reduction in audio-video sync problems and the performance analysis of audio and video synchronization in an effective way. This paper also implements an audio- video synchronizer and analyses its performance in an efficient manner by synchronization efficiency, audio-video time drift and audio-video delay parameters. The simulation result is carried out using mat lab simulation tools and simulink. It is automatically estimating and correcting the timing relationship between the audio and video signals and maintaining the Quality of Service.
Due to the availability of complicated devices in industry, models for consumers at lower cost of resources are developed. Home Automation systems have been developed by several researchers. The limitations of home automation includes complexity in architecture, higher costs of the equipment, interface inflexibility. In this paper as we have proposed, the working protocol of PIC 16F72 technology is which is secure, cost efficient, flexible that leads to the development of efficient home automation systems. The system is operational to control various home appliances like fans, Bulbs, Tube light. The following paper describes about components used and working of all components connected. The home automation system makes use of Android app entitled “Home App” which gives
flexibility and easy to use GUI.
Semantically Enchanced Personalised Adaptive E-Learning for General and Dysle...Eswar Publications
E-learning plays an important role in providing required and well formed knowledge to a learner. The medium of e- learning has achieved advancement in various fields such as adaptive e-learning systems. The need for enhancing e-learning semantically can enhance the retrieval and adaptability of the learning curriculum. This paper provides a semantically enhanced module based e-learning for computer science programme on a learnercentric perspective. The learners are categorized based on their proficiency for providing personalized learning environment for users. Learning disorders on the platform of e-learning still require lots of research. Therefore, this paper also provides a personalized assessment theoretical model for alphabet learning with learning objects for
children’s who face dyslexia.
Agriculture plays an important role in the economy of our country. Over 58 percent of the rural households depend on the agriculture sector as their means of livelihood. Agriculture is one of the major contributors to Gross Domestic Product(GDP). Seeds are the soul of agriculture. This application helps in reducing the time for the researchers as well as farmers to know the seedling parameters. The application helps the farmers to know about the percentage of seedlings that will grow and it is very essential in estimating the yield of that particular crop. Manual calculation may lead to some error, to minimize that error, the developed app is used. The scientist and farmers require the app to know about the physiological seed quality parameters and to take decisions regarding their farming activities. In this article a desktop app for seed germination percentage and vigour index calculation are developed in PHP scripting language.
What happens when adaptive video streaming players compete in time-varying ba...Eswar Publications
Competition among adaptive video streaming players severely diminishes user-QoE. When players compete at a bottleneck link many do not obtain adequate resources. This imbalance eventually causes ill effects such as screen flickering and video stalling. There have been many attempts in recent years to overcome some of these problems. However, added to the competition at the bottleneck link there is also the possibility of varying network bandwidth which can make the situation even worse. This work focuses on such a situation. It evaluates current heuristic adaptive video players at a bottleneck link with time-varying bandwidth conditions. Experimental setup includes the TAPAS player and emulated network conditions. The results show PANDA outperforms FESTIVE, ELASTIC and the Conventional players.
WLI-FCM and Artificial Neural Network Based Cloud Intrusion Detection SystemEswar Publications
Security and Performance aspects of cloud computing are the major issues which have to be tended to in Cloud Computing. Intrusion is one such basic and imperative security problem for Cloud Computing. Consequently, it is essential to create an Intrusion Detection System (IDS) to detect both inside and outside assaults with high detection precision in cloud environment. In this paper, cloud intrusion detection system at hypervisor layer is developed and assesses to detect the depraved activities in cloud computing environment. The cloud intrusion detection system uses a hybrid algorithm which is a fusion of WLI- FCM clustering algorithm and Back propagation artificial Neural Network to improve the detection accuracy of the cloud intrusion detection system. The proposed system is implemented and compared with K-means and classic FCM. The DARPA’s KDD cup dataset 1999 is used for simulation. From the detailed performance analysis, it is clear that the proposed system is able to detect the anomalies with high detection accuracy and low false alarm rate.
Spreading Trade Union Activities through Cyberspace: A Case StudyEswar Publications
This report present the outcome of an investigative research conducted to examine the modu-operandi of academic staff union of polytechnics (ASUP) YabaTech. The investigation covered the logistics and cost implication for spreading union activities among members. It was discovered that cost of management and dissemination of information to members was at high side, also logistics problem constitutes to loss of information in transit hence cut away some members from union activities. To curtail the problem identified, we proposed the
design of secure and dynamic website for spreading union activities among members and public. The proposed system was implemented using HTML5 technology, interface frameworks like Bootstrap and j query which enables the responsive feature of the application interface. The backend was designed using PHPMYSQL. It was discovered from the evaluation of the new system that cost of managing information has reduced considerably, and logistic problems identified in the old system has become a forgotten issue.
Identifying an Appropriate Model for Information Systems Integration in the O...Eswar Publications
Nowadays organizations are using information systems for optimizing processes in order to increase coordination and interoperability across the organizations. Since Oil and Gas Industry is one of the large industries in whole of the world, there is a need to compatibility of its Information Systems (IS) which consists three categories of systems: Field IS, Plant IS and Enterprise IS to create interoperability and approach the
optimizing processes as its result. In this paper we introduce the different models of information systems integration, identify the types of information systems that are using in the upstream and downstream sectors of petroleum industry, and finally based on expert’s opinions will identify a suitable model for information systems integration in this industry.
Link-and Node-Disjoint Evaluation of the Ad Hoc on Demand Multi-path Distance...Eswar Publications
This work illustrates the AOMDV routing protocol. Its ancestor, the AODV routing protocol is also described. This tutorial demonstrates how forward and reverse paths are created by the AOMDV routing protocol. Loop free paths formulation is described, together with node and link disjoint paths. Finally, the performance of the AOMDV routing protocol is investigated along link and node disjoint paths. The WSN with the AOMDV routing protocol using link disjoint paths is better than the WSN with the AOMDV routing protocol using node disjoint paths for energy consumption.
Bridging Centrality: Identifying Bridging Nodes in Transportation NetworkEswar Publications
To identify the importance of node of a network, several centralities are used. Majority of these centrality measures are dominated by components' degree due to their nature of looking at networks’ topology. We propose a centrality to identification model, bridging centrality, based on information flow and topological aspects. We apply bridging centrality on real world networks including the transportation network and show that the nodes distinguished by bridging centrality are well located on the connecting positions between highly connected regions. Bridging centrality can discriminate bridging nodes, the nodes with more information flowed through them and locations between highly connected regions, while other centrality measures cannot.
Now a days we are living in an era of Information Technology where each and every person has to become IT incumbent either intentionally or unintentionally. Technology plays a vital role in our day to day life since last few decades and somehow we all are depending on it in order to obtain maximum benefit and comfort. This new era equipped with latest advents of technology, enlightening world in the form of Internet of Things (IoT). Internet of things is such a specified and dignified domain which leads us to the real world scenarios where each object can perform some task while communicating with some other objects. The world with full of devices, sensors and other objects which will communicate and make human life far better and easier than ever. This paper provides an overview of current research work on IoT in terms of architecture, a technology used and applications. It also highlights all the issues related to technologies used for IoT, after the literature review of research work. The main purpose of this survey is to provide all the latest technologies, their corresponding
trends and details in the field of IoT in systematic manner. It will be helpful for further research.
Automatic Monitoring of Soil Moisture and Controlling of Irrigation SystemEswar Publications
In past couple of decades, there is immediate growth in field of agricultural technology. Utilization of proper method of irrigation by drip is very reasonable and proficient. A various drip irrigation methods have been proposed, but they have been found to be very luxurious and dense to use. The farmer has to maintain watch on irrigation schedule in the conventional drip irrigation system, which is different for different types of crops. In remotely monitored embedded system for irrigation purposes have become a new essential for farmer to accumulate his energy, time and money and will take place only when there will be requirement of water. In this approach, the soil test for chemical constituents, water content, and salinity and fertilizer requirement data collected by wireless and processed for better drip irrigation plan. This paper reviews different monitoring systems and proposes an automatic monitoring system model using Wireless Sensor Network (WSN) which helps the farmer to improve the yield.
Multi- Level Data Security Model for Big Data on Public Cloud: A New ModelEswar Publications
With the advent of cloud computing the big data has emerged as a very crucial technology. The certain type of cloud provides the consumers with the free services like storage, computational power etc. This paper is intended to make use of infrastructure as a service where the storage service from the public cloud providers is going to leveraged by an individual or organization. The paper will emphasize the model which can be used by anyone without any cost. They can store the confidential data without any type of security issue, as the data will be altered
in such a way that it cannot be understood by the intruder if any. Not only that but the user can retrieve back the original data within no time. The proposed security model is going to effectively and efficiently provide a robust security while data is on cloud infrastructure as well as when data is getting migrated towards cloud infrastructure or vice versa.
Impact of Technology on E-Banking; Cameroon PerspectivesEswar Publications
The financial services industry is experiencing rapid changes in services delivery and channels usage, and financial companies and users of financial services are looking at new technologies as they emerge and deciding whether or not to embrace them and the new opportunities to save and manage enormous time, cost and stress.
There is no doubt about the favourable and manifold impact of technology on e-banking as pictured in this review paper, almost all banks are with the least and most access e-banking Technological equipments like ATMs and Cards. On the other Hand cheap and readily available technology has opened a favourable competition in ebanking services business with a lot of wide range competitors competing with Commercial Banks in Cameroon in providing digital financial services.
Classification Algorithms with Attribute Selection: an evaluation study using...Eswar Publications
Attribute or feature selection plays an important role in the process of data mining. In general the data set contains more number of attributes. But in the process of effective classification not all attributes are relevant.
Attribute selection is a technique used to extract the ranking of attributes. Therefore, this paper presents a comparative evaluation study of classification algorithms before and after attribute selection using Waikato Environment for Knowledge Analysis (WEKA). The evaluation study concludes that the performance metrics of the classification algorithm, improves after performing attribute selection. This will reduce the work of processing irrelevant attributes.
Mining Frequent Patterns and Associations from the Smart meters using Bayesia...Eswar Publications
In today’s world migration of people from rural areas to urban areas is quite common. Health care services are one of the most challenging aspect that is must require to the people with abnormal health. Advancements in the technologies lead to build the smart homes, which contains various sensor or smart meter devices to automate the process of other electronic device. Additionally these smart meters can be able to capture the daily activities of the patients and also monitor the health conditions of the patients by mining the frequent patterns and
association rules generated from the smart meters. In this work we proposed a model that is able to monitor the activities of the patients in home and can send the daily activities to the corresponding doctor. We can extract the frequent patterns and association rules from the log data and can predict the health conditions of the patients and can give the suggestions according to the prediction. Our work is divided in to three stages. Firstly, we used to record the daily activities of the patient using a specific time period at three regular intervals. Secondly we applied the frequent pattern growth for extracting the association rules from the log file. Finally, we applied k means clustering for the input and applied Bayesian network model to predict the health behavior of the patient and precautions will be given accordingly.
Network as a Service Model in Cloud Authentication by HMAC AlgorithmEswar Publications
Resource pooling on internet-based accessing on use as pay environmental technology and ruled in IT field is the
cloud. Present, in every organization has trusted the web, however, the information must flow but not hold the
data. Therefore, all customers have to use the cloud. While the cloud progressing info by securing-protocols. Third
party observing and certain circumstances directly stale in flow and kept of packets in the virtual private cloud.
Global security statistics in the year 2017, hacking sensitive information in cloud approximately maybe 75.35%,
and the world security analyzer said this calculation maybe reached to 100%. For this cause, this proposed
research work concentrates on Authentication-Message-Digest-Key with authentication in routing the Network as
a Service of packets in OSPF (Open Shortest Path First) implementing Cloud with GNS3 has tested them to
securing from attackers.
Microstrip patch antennas are recently used in wireless detection applications due to their low power consumption, low cost, versatility, field excitation, ease of fabrication etc. The microstrip patch antennas are also called as printed antennas which is suffer with an array elements of antenna and narrow bandwidth. To overcome the above drawbacks, Flame Retardant Material is used as the substrate. Rectangular shape of microstrip patch antenna with FR4 material as the substrate which is more suitable for the explosive detection applications. The proposed printed antenna was designed with the dimension of 60 x 60 mm2. FR-4 material has a dielectric constant value of 4.3 with thickness 1.56 mm, length and width 60 mm and 60 mm respectively. One side of the substrate contains the ground plane of dimensions 60 x60 mm2 made of copper and the other side of the substrate contains the patch which have dimensions 34 x 29 mm2 and thickness 0.03mm which is also made of copper. RMPA without slot, Vertical slot RMPA, Double horizontal slot RMPA and Centre slot RMPA structures were
designed and the performance of the antennas were analysed with various parameters such as gain, directivity, Efield, VSWR and return loss. From the performance analysis, double horizontal slot RMPA antenna provides a better result and it provides maximum gain (8.61dB) and minimum return loss (-33.918dB). Based on the E-field excitation value the SEMTEX explosive material is detected and it was simulated using CST software.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Robust Fault-Tolerant Training Strategy Using Neural Network to Perform Functional Testing of Software
Int. J. Advanced Networking and Applications
Volume: 09 Issue: 03 Pages: 3455-3460 (2017) ISSN: 0975-0290
Manas Kumar Yogi
Department of Computer Science, Pragati Engineering College, Kakinada City, India
Email: manas.yogi@gmail.com
L. Yamuna
Department of Computer Science, Pragati Engineering College, Kakinada City, India
Email: yamuna.lakkamsani@gmail.com
--------------------------------------------------------------------------ABSTRACT--------------------------------------------------------
This paper introduces an efficient and robust training mechanism for a neural network that can be used to test the functionality of software. The traditional neural network architecture is used, consisting of two phases: a training phase and an evaluation phase. The network is trained on input test cases in the first phase, and it subsequently predicts the expected outputs for untrained test cases. The test oracle measures the deviation between the outputs of untrained test cases and those of trained test cases and issues a final decision. Our framework can be applied to systems where the number of test cases outnumbers the functionalities, or where the system under test is too complex. It can also be applied to test case development when testing the modules of a system becomes tedious after modification.
Keywords - ATNN, Fault, Neural, Test Case, Test Oracle
---------------------------------------------------------------------------------------------------------------------------------------------------
Date of Submission: Oct 23, 2017 Date of Acceptance: Dec 01, 2017
---------------------------------------------------------------------------------------------------------------------------------------------------
I. INTRODUCTION
In software testing, what matters most is how closely an
application conforms to its specifications. In practice,
agreement documents indicate the accepted level up to
which the required functionality must be achieved.
Software testing consumes a substantial amount of time
and effort, so strategies have to be developed to carry out
functionality testing efficiently, delivering quality software
with minimum effort and time. In the past, Artificial
Neural Networks (ANNs) have been used to handle
aspects of testing.
ANNs are developed to mimic the structure and
information processing powers of the human brain. The
architectural components of a neural network are units
analogous to the neurons of the brain. A neural network is
formed from one or more layers of these neurons, the
interconnections of which have associated synaptic
weights. Each neuron in the network is able to perform
calculations that contribute to the overall learning process,
or training of the network. The neuron interconnections
are associated with synaptic weights that store the
information computed during the training of the network.
It is rightly said that a neural network is a massively
parallel information-processing system that uses
distributed control to learn and store knowledge about its
environment. Clearly, the two crucial factors that affect
the superior computational capability of the neural
network are its distributed design working in parallel
layers and its ability to extrapolate the learned information
to yield outputs for inputs not presented during training
phase. These properties of the neural network allow
multiple complex problems to be solved.
Data mining, pattern recognition, and function
approximation are some of the tasks that can be handled
by neural networks. In this paper, a design of Artificial
Testing Neural Network (ATNN) is proposed to train on a
suite of test cases developed manually. Manual test cases
are sometimes found to have a greater fault-finding
ability, and this efficient element of manual test cases is
used to train the ATNN. The result is a set of superior,
trained test cases that have the ability to find a fault in the
functionality of the application in minimum time. If this
approach is repeated over time, these trained test cases
can show better fault-finding ability on other programs
under test.
II. EXISTING WORK
Domain-based testing models already exist which predict
faults by taking into account traditional fault-exposing
metrics. Tools like SLEUTH use this model for effective
test suite generation with the help of test case metrics; a
synthetic test oracle judges individual test cases for error
classification. The neural network is trained on the test
metric input sequence and maps it to the test oracle's
error classification system. Once trained, the network acts
as a test case effectiveness predictor. The metrics used for
the experiment were loosely based on coverage metrics for
Domain Based Testing. In a real testing environment, the
set of metrics needed for an arbitrary testing criterion is
not known well in advance. This raises a significant
challenge in selecting a dynamic approach for finalizing
test case metrics.
The results from training four networks showed how well
each network predicted individual fault severities. The no
error net predicted best, at 94.4%, and the second-best
predictor was the most-severe-fault net, at 91.7%. The
incorrectly classified tests were placed into one of three
categories: False Positive, False Negative, and Other
Incorrect. A False Positive response was recorded for
severity 1-3 errors when the network predicted a fault
that does not truly exist. For the no-error severity, a False
Positive meant that the network predicted there was no
error when the test case would indeed have uncovered
one. For the other severity classes, a False Negative
response was recorded when the neural net predicted that
no fault was exposed while the test case in fact indicated
a fault. Other Incorrect referred to tests that were
classified by the neural net as exposing a fault, but of the
incorrect type. We have used this information to analyze
three test data generation objectives.
Objective 1: Minimize Number of Test Cases
Objective 2: Widen the scope of severe severity classes
Objective 3: Reduce error rate during training
III. PROPOSED MECHANISM
3.1 Phases of Proposed Mechanism
The proposed mechanism consists of two phases: a
training phase and an evaluation phase.
3.1.1 Phase 1
Hybrid Training Mechanism - Training Phase
(Construction of Trained Test Cases)
Figure 1. Mechanism of training phase
Hybrid training mechanism is resident on an ATNN
(Artificial Testing Neural Network). The ATNN is
modelled after a Feed-Forward Neural Network which has
two layers namely, Hidden Layer and Output
Layer/Visible Layer. In this work, only two layers are
considered, i.e., input layer and output layer. If we denote
yi(l) as the output of the i-th test case of layer l, the
function of the network is represented as:

  yi(l) = f[ Σj wij(l,l-1) yj(l-1) + Øi(l) ],  where l = 1, 2, …, n   (1)

where wij(l,l-1) represents the weight from test case j of
layer l-1 to test case i of layer l, and Øi(l) is the threshold
of test case i of layer l.
The decision factor takes into account the number of test
cases in layer l. The weight of a test case is indicated by
its fault detection ability. We limit the weight value of a
test case to between 0 and 1.
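As an illustration, the layer computation of Eq. (1) can be sketched in Python; the activation function f and the array shapes are assumptions for the example, and the weight clipping follows the 0-to-1 limit described above:

```python
import numpy as np

def layer_output(w, y_prev, theta, f=np.tanh):
    """Eq. (1): yi(l) = f( sum_j wij(l,l-1) * yj(l-1) + theta_i(l) ).
    The choice of f is an assumption; test case weights are clipped
    to [0, 1] as the text prescribes."""
    w = np.clip(w, 0.0, 1.0)          # limit weights to [0, 1]
    return f(w @ y_prev + theta)      # weighted sum plus threshold
```

Each row of `w` holds the weights feeding one test case of layer l from the test cases of layer l-1.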
With the help of the above principles, we present a hybrid
training mechanism for a test suite Ts. After applying a
learning algorithm to the test suite Ts, we obtain what we
term Trained Test Cases.
Let x(1), x(2), …, x(m) be the given input vectors
containing test case input values, and y(1), y(2), …, y(m)
the corresponding desired output vectors. We apply the
back-propagation algorithm as our basis to adjust the
weights and thresholds of the test cases.
We calculate the sum of squared errors:

  E = Σm=1..M E(m),  E(m) = ||y(m) - y(L)||²   (2)

where y(L) denotes the vector of network outputs when the
input is x(m). We repeat the weight adjustments so that the
network maps each x(m) to y(m) as closely as possible.
When this occurs, we say the test cases x(1), x(2), …, x(m)
are trained. The threshold is decided based on the overall
testing time available.
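A minimal sketch of this weight-adjustment loop, using plain gradient descent on the squared error E of Eq. (2); the single sigmoid layer and the sample data are simplifying assumptions rather than the full ATNN back-propagation:

```python
import numpy as np

def train_test_cases(X, Y, epochs=500, lr=0.5, seed=0):
    """Minimize E = sum_m ||y(m) - y(L)(m)||^2 (Eq. 2) by gradient
    descent on a single sigmoid layer -- an illustrative stand-in
    for back-propagation through the full ATNN."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-0.5, 0.5, (Y.shape[1], X.shape[1]))
    b = np.zeros(Y.shape[1])
    for _ in range(epochs):
        out = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # network output y(L)
        err = out - Y                               # y(L) - y(m)
        grad = err * out * (1.0 - out)              # chain rule through sigmoid
        W -= lr * (grad.T @ X)
        b -= lr * grad.sum(axis=0)
    return W, b, float(np.sum((Y - out) ** 2))
```

On a small mapping such as logical OR, the loop drives E close to zero, at which point the training inputs are "trained" in the sense above.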
3.1.2 Phase 2
Evaluation Phase (Decision Making based on nature of
output)
Figure 2. Mechanism of evaluation phase
In this stage, the designed test cases of a test suite Ts
are applied to the Program under Test (PUT), and the
results are given as input to the Test Oracle. In a parallel
procedure, the trained test cases generate predicted
outputs, which are also fed to the Test Oracle. Finally, the
Oracle decides on the functionality of the program based
on the outcomes; the Test Oracle compares the outputs of
the trained test cases with the outputs of the designed test
cases.
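The oracle's comparison step can be sketched as follows; the tolerance parameter is an assumption, since the paper does not fix a deviation threshold:

```python
import numpy as np

def oracle_verdict(designed_outputs, predicted_outputs, tol=0.1):
    """Test-oracle sketch: compare the PUT's outputs on the designed
    test cases against the outputs predicted by the trained test
    cases. A deviation above `tol` flags a functional fault; `tol`
    is an assumed tolerance, not a value from the paper."""
    deviation = np.abs(np.asarray(designed_outputs, float)
                       - np.asarray(predicted_outputs, float))
    return deviation <= tol  # True = output consistent with prediction
```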
3.2 Advantages of proposed training mechanism
The following are the advantages of the proposed work:
i. The trained test cases can act as a Test Oracle
themselves for the next phase of the test process,
such as regression testing.
ii. For a complex program to be tested, the test cases
can be designed in such a manner that during
regression testing only the test cases which are to
be executed on the modified version of the program
under test (PUT) are compared against the
corresponding trained test cases for decisive
outcome. In this way, a large number of test cases
won’t be re-executed, thus saving testing time and
effort to an appreciable level.
iii. The Test Oracle we employ is unbiased, unlike
human testers, who may be biased due to prior
knowledge of the program. The ATNN contains
layers of test cases with specific weights.
Figure 3. Architectural design for ATNN
Here, th11 … th21 indicate the test cases in the hidden
layers. These test cases have enhanced weight values, and
on application of different test data, i.e., x1, x2, …, xn,
their weights change. For example, consider a test case
for checking a password field. The validation rule is at
least 6 characters, 2 of which should be special symbols.
The input test case, say t, will have all 6 characters as
special symbols, so this test case will fail due to high
deviation from the validation rule, giving it a very high
weight, say wh, where h = high.
In the second layer, i.e., the first hidden layer, the test
case th1 should be designed with an even higher weight.
So we give test data input with all 6 characters blank. This
gives it a weight, say wvh, where vh = very high.
In the third layer, i.e., the second hidden layer, we design
the test case with 1 special character and the rest general
characters. This test case does not deviate much from the
validation rule, so we assign it a weight, say wm, where
m = medium.
IV. EXPERIMENTAL RESULTS
We used a credit card approval system to verify the
effectiveness of our proposed mechanism. The process
that the experiment follows begins with the generation of
test cases. The input attributes are created using the
specification of the program that is being tested, while the
outputs are generated by executing the tested program.
The data undergo a preprocessing procedure in which all
continuous input attributes are normalized (the range is
determined by finding the maximum and minimum values
for each attribute), and the binary inputs and outputs are
either assigned a value of 0 or 1. The continuous output is
treated in a different manner, and the output of each
example is placed into the correct interval specified by the
range of possible values and the number of intervals used.
The processed data are used as the data set for training the
neural network. The network parameters are determined
before the training algorithm begins. The training of the
network includes presenting the entire data set for one
epoch, and the number of epochs for training is also
specified. The back propagation training algorithm
concludes when the maximum number of epochs has been
reached or the minimum error rate has been achieved. The
network is then used as an “oracle” to predict the correct
outputs for the subsequent regression tests.
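The preprocessing steps described above can be sketched as follows; the function names are illustrative:

```python
import numpy as np

def normalize(col):
    """Min-max scale a continuous input attribute to [0, 1], the range
    found from the attribute's minimum and maximum values."""
    col = np.asarray(col, dtype=float)
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if hi > lo else np.zeros_like(col)

def bin_output(values, n_intervals, lo, hi):
    """Place each continuous output into one of `n_intervals` equal-width
    intervals over [lo, hi], as done for the continuous output."""
    values = np.asarray(values, dtype=float)
    idx = ((values - lo) / (hi - lo) * n_intervals).astype(int)
    return np.clip(idx, 0, n_intervals - 1)  # last interval is closed
```

Binary attributes pass through unchanged as 0/1 values.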
Table I. Input attributes of the data

Attribute name        | Type    | Attribute type | Details
Aadhar id             | integer | Input          | unique
Citizenship           | integer | Input          | 0: Indian, 1: others
State                 | integer | Input          | 0-29
Age                   | integer | Input          | 1-100
Sex                   | integer | Input          | 0: Female, 1: Male
Region                | integer | Input          | 0-6 for different regions in India
Income class          | integer | Input          | 0 if income p.a. < Rs. 10k; 1 if income p.a. ≥ Rs. 10k; 2 if income p.a. ≥ Rs. 25k; 3 if income p.a. ≥ Rs. 50k
Number of dependents  | integer | Input          | 1-4
Marital status        | integer | Input          | 0: Single, 1: Married
Credit approved       | integer | Output         | 0: No, 1: Yes
Credit amount         | integer | Output         | ≥ 0
Table II. Sample data used during training (before preprocessing)

Aadhar id | Citizenship | State | Age | Sex | Region | Income class | Number of dependents | Marital status | Credit approved | Credit amount
1         | 1           | 1     | 23  | 1   | 1      | 1            | 1                    | 1              | 0               | 0
2         | 1           | 12    | 45  | 1   | 3      | 1            | 1                    | 0              | 0               | 0
3         | 1           | 22    | 65  | 0   | 4      | 1            | 0                    | 0              | 1               | 10000
4         | 1           | 11    | 34  | 0   | 6      | 1            | 0                    | 1              | 0               | 0
5         | 1           | 5     | 26  | 0   | 2      | 2            | 2                    | 1              | 1               | 20000
6         | 1           | 7     | 28  | 1   | 2      | 0            | 1                    | 0              | 0               | 0
7         | 0           | 7     | 41  | 1   | 5      | 2            | 2                    | 0              | 1               | 10000
8         | 0           | 8     | 55  | 1   | 6      | 2            | 2                    | 0              | 0               | 0
9         | 1           | 19    | 58  | 1   | 4      | 2            | 3                    | 1              | 1               | 20000
In this experimental setup in MATLAB 2016, we used
eight input units for the eight relevant input attributes (the
first is not used, as it is a descriptor for the example), and
twelve output computational units for the output attributes.
The first two output units are used for the binary output.
For training purposes, the unit with the higher output value
is said to be the “winner”. The remaining ten units are
used for the continuous output. The initial synaptic
weights of the neural network are obtained randomly and
covered a range between –0.5 and 0.5. Experimenting with
the neural network and the training data, we concluded
that one hidden layer with twenty-four units was sufficient
for the neural network to approximate the original
application to within a reasonable accuracy. A learning
rate of 0.50 was used, and the network required 1,200
epochs to produce a 0.2 percent misclassification rate on
the binary output and 5.38 percent for the continuous
output. The minimum error rate for the continuous output
(low threshold = 0.10, high threshold = 0.90) is
summarized in Table III.
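The network configuration described in this paragraph can be sketched as follows; the layer-construction helper and function names are illustrative assumptions:

```python
import numpy as np

def init_atnn(n_in=8, n_hidden=24, n_out=12, seed=0):
    """Configuration from the experiment: eight input units, one hidden
    layer of twenty-four units, and twelve output units (two
    winner-take-all units for the binary output plus ten interval units
    for the continuous output). Initial synaptic weights are drawn
    uniformly from [-0.5, 0.5]."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-0.5, 0.5, (n_hidden, n_in))
    W2 = rng.uniform(-0.5, 0.5, (n_out, n_hidden))
    return W1, W2

def binary_winner(outputs):
    """The binary decision is made by the first two output units: the
    unit with the higher output value is the 'winner'."""
    return int(np.argmax(outputs[:2]))
```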
Table III. The minimum error rate for the continuous output (low threshold = 0.10, high threshold = 0.90)

Injected fault number | Number of correct outputs | Number of incorrect outputs | Correct outputs classified as incorrect (%) | Incorrect outputs classified as correct (%)
2                     | 140                       | 860                         | 28.14                                       | 1.43
3                     | 307                       | 693                         | 6.78                                        | 49.51
4                     | 587                       | 413                         | 5.33                                        | 21.12
5                     | 822                       | 178                         | 8.99                                        | 3.89
6                     | 69                        | 931                         | 23.63                                       | 13.04
7                     | 559                       | 441                         | 11.11                                       | 5.72
8                     | 355                       | 645                         | 21.86                                       | 7.89
9                     | 217                       | 783                         | 8.17                                        | 73.27
10                    | 303                       | 697                         | 7.60                                        | 52.15
11                    | 238                       | 762                         | 7.74                                        | 66.39
12                    | 276                       | 724                         | 24.17                                       | 10.51
13                    | 371                       | 629                         | 23.05                                       | 6.47
14                    | 99                        | 901                         | 22.86                                       | 23.23
15                    | 65                        | 935                         | 23.32                                       | 33.85
16                    | 407                       | 593                         | 23.27                                       | 4.91
17                    | 273                       | 727                         | 22.56                                       | 13.55
18                    | 20                        | 980                         | 24.49                                       | 50.00
19                    | 71                        | 929                         | 24.54                                       | 1.41
20                    | 1000                      | 0                           | 0.00                                        | 4.20
21                    | 125                       | 875                         | 20.91                                       | 50.40
Percentage average    |                           |                             | 16.93                                       | 24.65
Total average: 20.79
Table III summarizes the results for the minimum error
rate of the continuous output (credit amount). The table
includes the injected fault number, the number of correct
and incorrect outputs as determined by the “oracle,” and
the percentages of correct outputs classified as incorrect
and of incorrect outputs classified as correct. The
percentages were obtained by comparing the classification
of the “oracle” with that of the original version of the
application. The original version is assumed to be
fault-free and is used as a control to evaluate the results
of the comparison tool. The best thresholds were selected
to minimize the overall average of the two error rates.
Due to the increased complexity involved in evaluating
the continuous output, there is a significant change in the
capability of the neural network to distinguish between
correct and faulty test cases: a minimum average error of
8.31 was achieved for the binary output, versus a
minimum average error of 20.79 for the continuous
output. Varying the threshold values also did not produce
an evident change in the overall average percentage of
error for the continuous output.
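The two error-rate columns of Table III can be computed as in this sketch, where each label is a boolean meaning "classified as correct"; this encoding is an assumption for the example:

```python
def oracle_error_rates(oracle_labels, true_labels):
    """Sketch of the two percentages in Table III: the fraction of
    truly correct outputs the oracle flags as incorrect, and the
    fraction of truly incorrect outputs the oracle accepts as
    correct. Labels: True/1 = classified as correct."""
    correct = [o for o, t in zip(oracle_labels, true_labels) if t]
    incorrect = [o for o, t in zip(oracle_labels, true_labels) if not t]
    false_flag = sum(not o for o in correct) / len(correct) if correct else 0.0
    missed = sum(o for o in incorrect) / len(incorrect) if incorrect else 0.0
    return 100 * false_flag, 100 * missed
```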
V. CONCLUSION
The main aim of this paper was to put forward a new
mechanism to test the functionality of a complex system
using principles from the field of neural networks. An
Artificial Testing Neural Network was used to learn how
manually developed test cases work, and a test oracle was
used to improve the fault-detecting ability of the trained
test cases. In future we intend to apply the proposed
mechanism to test an application system that already has
a test case suite. The future of test automation is ushering
in a new era with the use of neural networks in software
testing, which will bring about a revolution in the way
automated software testing is done now. Researchers in
this field are working diligently to evolve hybrid
techniques, like the one we proposed in this paper, to save
a considerable amount of testing effort and time.