Embedded systems must be tested carefully, particularly in their critical regions. Inputs to an embedded system can arrive in many orders, and many relationships can exist among the input sequences. Accounting for these sequences and the relationships among them is essential to verifying the expected behavior of embedded systems. Combinatorial approaches, meanwhile, help derive a small number of test cases that are still sufficient to test such systems effectively. In this paper, an Optimal Mining Technique over a multi-input domain, based on built-in combinatorial approaches, is presented. The method exploits multi-input sequences and the relationships that exist among multi-input vectors. The technique has been applied to testing an embedded system that monitors and controls the temperature within nuclear reactors.
Metamorphic Security Testing for Web Systems (Lionel Briand)
Metamorphic testing is proposed to address the oracle problem in web security testing. Relations capture necessary properties between multiple inputs and outputs that must hold when a system is not vulnerable. Experiments on commercial and open source systems show the approach has high sensitivity (58.33-83.33%) and specificity (99.43-99.50%), detecting vulnerabilities without many false alarms. Extensive experiments with 22 relations achieved similar results for Jenkins and Joomla.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Abstract—Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. To date, most research in combinatorial testing has aimed to propose novel approaches that generate test suites of minimum size while still covering all the pairwise, triple, or n-way combinations of factors. Since this problem has been shown to be NP-hard, existing approaches are designed to generate optimal or near-optimal combinatorial test suites in polynomial time. In this paper, we apply particle swarm optimization (PSO), a kind of meta-heuristic search technique, to pairwise testing (i.e., the special case of combinatorial testing that aims to cover all pairwise combinations). To systematically build pairwise test suites, we propose two different PSO-based algorithms: one based on a one-test-at-a-time strategy and the other on an IPO-like strategy. In both algorithms, PSO is used to construct a single test. To apply PSO successfully to cover more uncovered pairwise combinations during this construction, we describe in detail how to formulate the search space, define the fitness function, and choose heuristic settings. To verify the effectiveness of our approach, we implement these algorithms and choose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare it to other well-known approaches. The final empirical results show the effectiveness and efficiency of our approach.
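The fitness function mentioned in the abstract can be made concrete with a small sketch: score a candidate test by how many still-uncovered pairwise combinations it would add to the suite. The set encoding and the three-boolean-factor example below are illustrative assumptions, not the paper's actual implementation.

```python
from itertools import combinations

def pairs_of(test):
    # All ((factor, value), (factor, value)) pairs covered by one test,
    # where a test is a tuple of one value per factor.
    return {((i, test[i]), (j, test[j])) for i, j in combinations(range(len(test)), 2)}

def fitness(candidate, uncovered):
    # PSO fitness: number of still-uncovered pairwise combinations the candidate covers.
    return len(pairs_of(candidate) & uncovered)

# Three boolean factors: 2^3 = 8 exhaustive tests, but only
# C(3,2) * 2 * 2 = 12 distinct pairwise combinations to cover.
all_pairs = set()
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            all_pairs |= pairs_of((a, b, c))

print(len(all_pairs))                 # 12 distinct pairwise combinations
print(fitness((0, 0, 0), all_pairs))  # a single test covers 3 of the 12
```

A one-test-at-a-time strategy would repeatedly maximize this fitness and remove the newly covered pairs from `uncovered` until the set is empty.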
This document presents a method for prioritizing test cases for event-driven software using a genetic algorithm. It proposes a single abstract model that can test both graphical user interface (GUI) and web applications. Test cases are executed and assigned a fitness value, then stored in a training database. When test cases have equal fitness values, prioritization criteria like fault detection, time, and code coverage are applied using the genetic algorithm to determine the optimal testing order. The approach was experimentally tested on small GUI and web apps and showed a reduction in latency time compared to other techniques. Future work could involve applying the algorithm to larger real-world software.
Implementation of reducing features to improve code change based bug predicti... (eSAT Journals)
Abstract: Today, software contains plenty of bugs because of rapid variation in software and hardware technologies. Bugs are software faults, and they pose a severe challenge to system reliability and dependability. Bug prediction is a convenient approach to identifying bugs in software. Recently, machine-learning classifiers have been developed to predict the presence of a bug in a source code file. Because of the huge number of machine-learned features, current classifier-based bug prediction has two major problems: i) inadequate precision for practical usage, and ii) slow prediction. In this paper we use two techniques: first, the CosTriage algorithm, which attempts to enhance accuracy and lower the cost of bug prediction; and second, feature selection methods, which eliminate less significant features. Reducing features improves the quality of the knowledge extracted and boosts the speed of computation. Keywords: Efficiency, Bug Prediction, Classification, Feature Selection, Accuracy
Model based test case prioritization using neural network classification (cseij)
Model-based testing of real-life software systems often requires a large number of tests, not all of which can be run exhaustively due to time and cost constraints. Thus, it is necessary to prioritize the test cases according to the importance the tester perceives. In this paper, this problem is addressed by improving our previous study: applying a classification approach to its results, a functional relationship is established between test case prioritization group membership and two attributes, the importance index and the frequency, for all events belonging to a given group. For classification purposes, a neural network (NN), one of the most advanced techniques, is preferred, and a data set obtained from our study for all test cases is classified using a multilayer perceptron (MLP) NN. The classification results for a commercial test prioritization application show high classification accuracies of about 96%, and acceptable test prioritization performance is achieved.
Design Verification and Test Vector Minimization Using Heuristic Method of a ... (ijcisjournal)
As feature size shrinks, the probability increases that a manufacturing defect in an IC will result in a faulty chip: at small feature sizes, even a very small defect can easily produce a faulty transistor or interconnecting wire. Testing is required to guarantee fault-free products, regardless of whether the product is a VLSI device or an electronic system, while simulation is used to verify the correctness of the design. To test an n-input circuit exhaustively, 2^n test vectors are required; as the number of inputs grows, this exponential growth in the required number of vectors makes testing the circuit very time-consuming. It is therefore necessary to find testing methods that reduce the number of test vectors. Here, a heuristic approach is designed to test a ripple-carry adder; Modelsim and Xilinx tools are used to verify and synthesize the design.
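The exponential growth in exhaustive test vectors is easy to quantify; the function name and the 9-input ripple-carry-adder example below are illustrative.

```python
def vectors_needed(n_inputs):
    # Exhaustive testing of an n-input combinational circuit needs 2^n vectors.
    return 2 ** n_inputs

# Two 4-bit operands plus carry-in give a 4-bit ripple-carry adder 9 inputs.
print(vectors_needed(9))   # 512
print(vectors_needed(32))  # 4294967296 -- over four billion, hence heuristic reduction
```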
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Model-Driven Run-Time Enforcement of Complex Role-Based Access Control Policies (Lionel Briand)
This document describes a model-driven approach for enforcing complex role-based access control (RBAC) policies at runtime. It presents a GemRBAC+CTX model that formally specifies RBAC policies and contexts. An enforcement process is used to check access requests and update access control data based on constraints defined in the model. This approach provides an expressive and standardized way to enforce a wide range of RBAC policies through model-driven techniques.
Verification of the protection services in antivirus systems by using nusmv m... (ijfcstjournal)
In this paper, a model of the protection services in an antivirus system is proposed. The antivirus system's behavior is separated into preventive and control behaviors. We extract the properties expected of the antivirus system model from its control behavior, in the form of CTL and LTL temporal logic formulas. To implement the behavior models of the antivirus system, the ArgoUML tool and the NuSMV model checker are employed. The results show that the approach can check fairness, reachability, and deadlock freedom, and that several properties of the proposed model are verified using the NuSMV model checker.
A Defect Prediction Model for Software Product based on ANFIS (IJSRD)
Artificial intelligence techniques are increasingly involved in classification- and prediction-based processes such as environmental monitoring, stock exchange analysis, biomedical diagnosis, and software engineering. However, challenges remain in selecting training criteria for the design of artificial intelligence models used for prediction. This work focuses on developing a defect prediction mechanism using software metric data from KC1. We take a subtractive clustering approach to generate a fuzzy inference system (FIS). The FIS rules are generated at different radii of influence of the input attribute vectors, and the developed rules are further tuned by the ANFIS technique to predict the number of defects in a software project using a fuzzy logic system.
Automating System Test Case Classification and Prioritization for Use Case-Dr... (Lionel Briand)
This document proposes an approach to automate system test case classification, creation of new test cases, and prioritization for use case-driven testing in product line engineering. The approach classifies previous test cases as reusable, retestable or obsolete based on use case scenario models. It generates guidance to update test cases based on differences between old and new scenarios. It also identifies new scenarios not covered by previous tests to define new test cases. Finally, it prioritizes the test suite for a new product using test execution history and other information. The overall goal is to maximize reuse of test assets from existing products and rely on requirements rather than source code analysis.
Composite Intrusion Detection in Process Control Networks (guest8fdee6)
This dissertation develops a multi-algorithm intrusion detection approach for process control networks. It uses estimation-inspection algorithms and probabilistic models to detect anomalies in RAM content evolutions from normal network traffic and physical processes. It also leverages specification-based detection from supervisory and automatic control applications. The approach was tested on a process control network testbed and exhibited high detection rates with low false alarms.
A NOVEL APPROACH FOR TEST CASE PRIORITIZATION (IJCSEA Journal)
This paper proposes a novel approach to test case prioritization that calculates the product of a test case's statement coverage and number of function calls to determine priority. The test cases are ordered based on this product metric, with the highest product value first. An algorithm is presented and evaluations show the approach improves fault detection rates over non-prioritized test cases, as measured by the APFD metric. The approach addresses potential ambiguities when multiple test cases have the same product value by further prioritizing based on number of function calls or execution order. The results demonstrate the effectiveness of the proposed prioritization formula and algorithm.
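The APFD metric used in the evaluation above rewards orders that reveal faults early: APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where TF_i is the position of the first test exposing fault i, n is the number of tests, and m the number of faults. A minimal sketch follows; the test ids and the fault matrix are invented for illustration.

```python
def apfd(order, detects):
    """Average Percentage of Faults Detected for a given test order.

    order: test-case ids in execution order.
    detects: test id -> set of faults that test reveals.
    """
    all_faults = set().union(*detects.values())
    n, m = len(order), len(all_faults)
    first_pos = {}  # fault -> 1-based position of the first revealing test
    for pos, test in enumerate(order, start=1):
        for fault in detects.get(test, ()):
            first_pos.setdefault(fault, pos)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)

detects = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": set()}
print(apfd(["t2", "t1", "t3"], detects))  # 0.8333... -- both faults found by the first test
print(apfd(["t3", "t1", "t2"], detects))  # 0.3333... -- faults found late
```

A prioritization scheme like the one in the paper is then judged by how much it raises APFD over an unprioritized order.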
Can we predict the quality of spectrum-based fault localization? (Lionel Briand)
The document discusses predicting the effectiveness of spectrum-based fault localization techniques. It proposes defining metrics to capture aspects of source code, test executions, test suites, and faults. A dataset of 341 instances with 70 variables is generated from Defects4J projects, classifying instances as "effective" or "ineffective" based on fault ranking. Analysis identifies the most influential metrics, finding a combination of static, dynamic, and test metrics can construct a prediction model with excellent discrimination, achieving an AUC of 0.86-0.88. The results suggest effectiveness depends more on code and test complexity than fault type/location, and entangled dynamic call graphs hinder localization.
SENSITIVITY ANALYSIS OF INFORMATION RETRIEVAL METRICS (ijcsit)
Average Precision, Recall, and Precision are the main metrics of Information Retrieval (IR) system performance. Using mathematical and empirical analysis, in this paper we show the properties of those metrics. Mathematically, it is demonstrated that all those parameters are very sensitive to relevance judgment, which is not usually very reliable. We show that shifting a relevant document downwards within the ranked list decreases Average Precision. The variation of the Average Precision value is strong in positions 1 to 10, while from the 10th position on this variation is negligible. In addition, we estimate the regularity of the changes in Average Precision when an arbitrary number of relevance judgments within the existing ranked list is switched from non-relevant to relevant. Empirically, it is shown that 6 relevant documents at the end of a 20-document list have approximately the same Average Precision value as a single relevant document at the beginning of the list, while Recall and Precision values increase linearly, regardless of document position in the list. We also show that in the case of a Serbian-to-English human-translated query followed by English-to-Serbian machine translation, relevance judgment changes significantly and, therefore, all the parameters for measuring IR system performance are also subject to change.
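Average Precision's sensitivity to rank position is easy to reproduce. The tiny ranked lists below are illustrative: moving a lone relevant document from rank 1 to rank 2 halves AP, while moving it from rank 10 to rank 11 barely changes it.

```python
def average_precision(ranked_relevance):
    # ranked_relevance: 0/1 relevance judgments down the ranked list.
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant hit
    return sum(precisions) / len(precisions) if precisions else 0.0

print(average_precision([1, 0, 0]))     # 1.0
print(average_precision([0, 1, 0]))     # 0.5  -- one position cost half the AP
print(average_precision([0] * 9 + [1]))   # 0.1
print(average_precision([0] * 10 + [1]))  # ~0.0909 -- same shift, negligible cost
```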
ANALYSIS OF MACHINE LEARNING ALGORITHMS WITH FEATURE SELECTION FOR INTRUSION ... (IJNSA Journal)
This document summarizes a research paper that analyzes machine learning algorithms for intrusion detection using the UNSW-NB15 dataset. It compares the performance of classifiers like KNN, SGD, Random Forest, Logistic Regression, and Naive Bayes, both with and without feature selection. Chi-Square feature selection is applied to reduce irrelevant features before training the classifiers. The classifiers' performance is evaluated based on metrics like accuracy, precision, recall, F1-score, true positive rate and false positive rate. The paper finds that feature selection can improve classifiers' performance for intrusion detection.
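Chi-Square feature selection, as used above, scores each feature by the chi-square statistic between its values and the class labels, dropping low-scoring (irrelevant) features before training. A toy sketch of the scoring itself; the two hand-made features stand in for real UNSW-NB15 attributes.

```python
from collections import Counter

def chi_square(feature_values, labels):
    """Chi-square statistic between a categorical feature and class labels."""
    n = len(labels)
    observed = Counter(zip(feature_values, labels))
    f_totals = Counter(feature_values)
    l_totals = Counter(labels)
    stat = 0.0
    for f in f_totals:
        for l in l_totals:
            expected = f_totals[f] * l_totals[l] / n  # under independence
            stat += (observed.get((f, l), 0) - expected) ** 2 / expected
    return stat

labels     = ["attack", "attack", "normal", "normal"]
relevant   = [1, 1, 0, 0]  # perfectly aligned with the label
irrelevant = [1, 0, 1, 0]  # independent of the label

print(chi_square(relevant, labels))    # 4.0 -> keep
print(chi_square(irrelevant, labels))  # 0.0 -> drop
```

Ranking features by this score and keeping the top k is the usual filter-style selection step before training the classifiers.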
A practical guide for using Statistical Tests to assess Randomized Algorithms... (Lionel Briand)
This document provides guidance on using statistical tests to properly evaluate randomized algorithms in software engineering. It discusses challenges in determining if a technique is better than others and the need for multiple runs to avoid conclusions based solely on randomness. Statistical tests help identify if enough runs were performed but may find tiny differences significant with sufficient runs. The document advocates more runs and statistical tests to validate results rather than relying on luck, and presents examples where this was crucial, including search-based test generation and vulnerability testing techniques.
Using evolutionary testing to improve efficiency and quality (Faysal Ahmed)
Evolutionary testing uses genetic algorithms to automatically generate test cases that improve software testing efficiency and quality. It transforms testing goals into optimization problems solved through evolutionary algorithms. Experiments show evolutionary testing outperforms manual, random, and static testing by generating more effective test cases. It provides a systematic, automated approach for testing temporal behavior, safety, and structural coverage.
Testing of Cyber-Physical Systems: Diversity-driven Strategies (Lionel Briand)
Lionel Briand discusses strategies for testing cyber-physical systems using diversity-driven approaches. He outlines challenges in verifying controllers and decision-making components in cyber-physical systems due to large input spaces and expensive model execution. Briand proposes maximizing diversity of test cases to improve fault detection. He describes using diversity of input signals, output signals, and failure patterns to generate test cases. Search algorithms are used to find test cases that maximize diversity or reveal specific failure patterns. The strategies are shown to significantly outperform coverage-based and random testing on Simulink models.
An SPRT Procedure for an Ungrouped Data using MMLE Approach (IOSR Journals)
This document describes a sequential probability ratio test (SPRT) procedure for analyzing ungrouped software failure data using a modified maximum likelihood estimation (MMLE) approach. The SPRT procedure can help quickly detect unreliable software by making decisions with fewer observed failures than traditional hypothesis testing methods. Parameters are estimated using MMLE, which approximates functions in the maximum likelihood equation with linear functions to simplify calculations compared to other estimation methods. The document provides details on how to apply the SPRT procedure and MMLE parameter estimation to a software reliability growth model to analyze software failure data sequentially and detect unreliable software components earlier.
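The core of any SPRT is a running log-likelihood ratio compared against two bounds derived from the desired error rates, stopping as soon as either bound is crossed. The sketch below uses a plain Bernoulli failure model rather than the paper's NHPP/MMLE formulation, so the rates and thresholds are illustrative assumptions.

```python
import math

def sprt_decision(failures, n, p0=0.01, p1=0.05, alpha=0.05, beta=0.05):
    """Wald SPRT on a Bernoulli failure rate: H0 p<=p0 (reliable) vs H1 p>=p1.

    Returns 'accept' (reliable), 'reject' (unreliable), or 'continue'.
    """
    # Log-likelihood ratio after n trials with the given number of failures.
    llr = (failures * math.log(p1 / p0)
           + (n - failures) * math.log((1 - p1) / (1 - p0)))
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this bound
    upper = math.log((1 - beta) / alpha)   # reject H0 at or above this bound
    if llr <= lower:
        return "accept"
    if llr >= upper:
        return "reject"
    return "continue"

print(sprt_decision(failures=0, n=75))  # many clean runs -> accept reliability
print(sprt_decision(failures=5, n=20))  # early failures -> reject quickly
print(sprt_decision(failures=1, n=20))  # evidence still inconclusive -> continue
```

The appeal, as the abstract notes, is that a decision often arrives after far fewer observations than a fixed-sample hypothesis test would need.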
Automated Inference of Access Control Policies for Web Applications (Lionel Briand)
This document proposes an approach to automatically infer access control policies for web applications through dynamic analysis and machine learning. The approach involves exploring the application to discover resources, analyzing resource access data, inferring access rules using decision trees, assessing rule consistency, and targeted testing. An evaluation on two applications found the approach was effective in discovering resources and inferring correct policies for one application. Inconsistencies in the inferred rules also helped detect some access control issues in the applications.
Parameter Estimation of Software Reliability Growth Models Using Simulated An... (Editor IJCATR)
The parameter estimation of the Goel-Okumoto model is performed using simulated annealing. The Goel-Okumoto model is based on the exponential model and is a simple non-homogeneous Poisson process (NHPP) model. Simulated annealing is a heuristic optimization technique that provides a way to escape local optima. The data set is optimized using the simulated annealing technique. SA is a randomized algorithm with better performance than the genetic algorithm (GA); it depends on the specification of the neighbourhood structure of a state space and on the parameter settings of its cooling schedule.
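The accept-worse-moves-with-probability exp(-delta/T) rule is what lets simulated annealing escape local optima as the abstract describes. A generic sketch follows; the toy 1-D objective, step size, and cooling schedule are assumptions for illustration, whereas the paper applies SA to Goel-Okumoto parameter estimation.

```python
import math
import random

def simulated_annealing(cost, start, neighbor, t0=1.0, cooling=0.995, steps=5000, seed=1):
    # Always accept an improving move; accept a worsening move with
    # probability exp(-delta / T), which shrinks as the temperature cools.
    rng = random.Random(seed)
    current = best = start
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current  # track the best state ever visited
        t *= cooling
    return best

# Toy objective: local minimum near x = -1, global minimum near x = 2,
# separated by a barrier that pure greedy descent cannot cross.
f = lambda x: (x + 1) ** 2 * (x - 2) ** 2 - x
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)

best = simulated_annealing(f, start=-1.0, neighbor=step)
print(best, f(best))
```

Because the best-so-far state is tracked separately, the returned solution is never worse than the starting point, and with a slow enough cooling schedule the search typically settles in the global basin.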
IMPLEMENTATION OF COMPACTION ALGORITHM FOR ATPG GENERATED PARTIALLY SPECIFIED... (VLSICS Design)
In this paper an ATPG is implemented using C++. The ATPG is based on the fault equivalence concept, in which the number of faults is reduced before the compaction method is applied. It uses line justification and error propagation to find test vectors for the reduced fault set, with the aid of controllability and observability. The single stuck-at fault model is considered. Programs are developed for the fault equivalence method, controllability/observability, automatic test pattern generation, and test data compaction, using the object-oriented language C++. The ISCAS 85 C17 circuit was used for analysis, along with other circuits, in the standard ISCAS (International Symposium on Circuits And Systems) netlist format. Flow charts and results for the ISCAS 85 C17 circuit and the other netlists are given in the paper. The test vectors generated by the ATPG are further compacted to reduce the test vector data; the compaction algorithm is developed and discussed along with results.
This document contains instructions for a final examination in an introduction to software engineering course. It provides the date, time, location of the exam, as well as instructions that students are to write their name, student ID, and signature on the cover and top of the exam. It also states that the exam is closed book and notes, calculators are permitted, and to circle one answer for multiple choice questions. The exam contains two sections - a multiple choice section and an essay question section where students should write their answers in a separate booklet.
DYNAMUT: A MUTATION TESTING TOOL FOR INDUSTRY-LEVEL EMBEDDED SYSTEM APPLICATIONS (ijesajournal)
The document describes DynaMut, a tool developed to automate mutation testing for embedded system applications written in C++. DynaMut inserts conditional mutations into the code during compilation rather than requiring multiple recompilations. This reduces the time needed for mutation testing by 48-67% compared to traditional methods. The document also evaluates different sampling techniques for reducing the number of mutations tested while maintaining representative results, finding that dithered sampling is most effective.
An enhanced pairwise search approach for generating (Alexander Decker)
This document summarizes a research paper that proposes an enhanced pairwise search approach for generating test data. The approach aims to generate an optimal number of test cases and reduce execution time. It works by first generating pairs of parameters and values, then combining pairs to construct test cases that maximize covered pairs. An evaluation of the approach on 5 systems found it generated the smallest test suites in 3 of 5 cases compared to other methods. The authors conclude the approach is efficient in reducing test suites and execution time to meet industry needs.
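A common way to combine pairs into test cases that maximize covered pairs is a greedy one-test-at-a-time loop. The sketch below is a generic version of that idea under stated assumptions, not the authors' exact algorithm; in particular, enumerating every candidate test only works for small domains.

```python
from itertools import combinations, product

def all_pairs(domains):
    # Every pairwise combination (factor i, value u, factor j, value v) to cover.
    pairs = set()
    for i, j in combinations(range(len(domains)), 2):
        for u, v in product(domains[i], domains[j]):
            pairs.add((i, u, j, v))
    return pairs

def covered(test):
    # Pairwise combinations covered by one concrete test.
    return {(i, test[i], j, test[j]) for i, j in combinations(range(len(test)), 2)}

def greedy_pairwise(domains):
    # One-test-at-a-time: repeatedly pick the candidate covering most uncovered pairs.
    uncovered = all_pairs(domains)
    candidates = list(product(*domains))
    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: len(covered(t) & uncovered))
        suite.append(best)
        uncovered -= covered(best)
    return suite

domains = [(0, 1), (0, 1), (0, 1), (0, 1)]  # 4 boolean factors: 16 exhaustive tests
suite = greedy_pairwise(domains)
print(len(suite))  # pairwise coverage needs far fewer than 16 tests
```

The loop terminates because every remaining pair is covered by at least one candidate, and the resulting suite covers all pairwise combinations by construction.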
Software testing is an important activity in the software development process and its most effort-consuming phase. One would like to minimize the effort and maximize the number of faults detected, and automated test case generation helps reduce cost and time. Hence, test case generation may be treated as an optimization problem. In this paper we use a genetic algorithm to optimize test cases generated by applying condition coverage on source code. Test case data generated automatically by the genetic algorithm is optimized and outperforms test cases generated by random testing.
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Model-Driven Run-Time Enforcement of Complex Role-Based Access Control Policies (Lionel Briand)
This document describes a model-driven approach for enforcing complex role-based access control (RBAC) policies at runtime. It presents a GemRBAC+CTX model that formally specifies RBAC policies and contexts. An enforcement process is used to check access requests and update access control data based on constraints defined in the model. This approach provides an expressive and standardized way to enforce a wide range of RBAC policies through model-driven techniques.
Verification of the protection services in antivirus systems by using nusmv m... (ijfcstjournal)
In this paper, a model of the protection services in an antivirus system is proposed. The antivirus system's behavior is separated into preventive and control behaviors. From the control behavior, we extract the properties expected of the antivirus system model in the form of CTL and LTL temporal logic formulas. To implement the behavior models, the ArgoUML tool and the NuSMV model checker are employed. The results show that fairness, reachability, and deadlock freedom, along with other properties of the proposed model, can be verified using the NuSMV model checker.
A Defect Prediction Model for Software Product based on ANFIS (IJSRD)
Artificial intelligence techniques are increasingly involved in classification and prediction tasks such as environmental monitoring, stock exchange analysis, biomedical diagnosis, and software engineering. Still, selecting the training criteria for the design of artificial intelligence prediction models remains a challenge. This work focuses on developing a defect prediction mechanism using software metric data from the KC1 project. We take a subtractive clustering approach to generate a fuzzy inference system (FIS). The FIS rules are generated at different radii of influence of the input attribute vectors, and the resulting rules are further tuned by the ANFIS technique to predict the number of defects in a software project using a fuzzy logic system.
Automating System Test Case Classification and Prioritization for Use Case-Dr... (Lionel Briand)
This document proposes an approach to automate system test case classification, creation of new test cases, and prioritization for use case-driven testing in product line engineering. The approach classifies previous test cases as reusable, retestable or obsolete based on use case scenario models. It generates guidance to update test cases based on differences between old and new scenarios. It also identifies new scenarios not covered by previous tests to define new test cases. Finally, it prioritizes the test suite for a new product using test execution history and other information. The overall goal is to maximize reuse of test assets from existing products and rely on requirements rather than source code analysis.
Composite Intrusion Detection in Process Control Networks (guest8fdee6)
This dissertation develops a multi-algorithm intrusion detection approach for process control networks. It uses estimation-inspection algorithms and probabilistic models to detect anomalies in RAM content evolutions from normal network traffic and physical processes. It also leverages specification-based detection from supervisory and automatic control applications. The approach was tested on a process control network testbed and exhibited high detection rates with low false alarms.
A NOVEL APPROACH FOR TEST CASE PRIORITIZATION (IJCSEA Journal)
This paper proposes a novel approach to test case prioritization that calculates the product of a test case's statement coverage and number of function calls to determine priority. The test cases are ordered based on this product metric, with the highest product value first. An algorithm is presented and evaluations show the approach improves fault detection rates over non-prioritized test cases, as measured by the APFD metric. The approach addresses potential ambiguities when multiple test cases have the same product value by further prioritizing based on number of function calls or execution order. The results demonstrate the effectiveness of the proposed prioritization formula and algorithm.
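The prioritization rule described above, including its tie-breaking, can be sketched in a few lines. The field names (`coverage`, `calls`, `order`) are hypothetical placeholders for the paper's metrics, not its actual notation.

```python
def prioritize(test_cases):
    """Order test cases by the product of statement coverage and number
    of function calls (highest product first); ties are broken by the
    number of function calls, then by original execution order."""
    return sorted(test_cases,
                  key=lambda t: (-t["coverage"] * t["calls"],
                                 -t["calls"],
                                 t["order"]))

tests = [
    {"name": "T1", "coverage": 10, "calls": 3, "order": 0},
    {"name": "T2", "coverage": 6,  "calls": 5, "order": 1},
    {"name": "T3", "coverage": 15, "calls": 2, "order": 2},
]
# All three products equal 30, so the tie is broken by call count.
ranked = [t["name"] for t in prioritize(tests)]
```

The example deliberately makes all products equal so the secondary criterion (number of function calls) decides the order.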
Can we predict the quality of spectrum-based fault localization? (Lionel Briand)
The document discusses predicting the effectiveness of spectrum-based fault localization techniques. It proposes defining metrics to capture aspects of source code, test executions, test suites, and faults. A dataset of 341 instances with 70 variables is generated from Defects4J projects, classifying instances as "effective" or "ineffective" based on fault ranking. Analysis identifies the most influential metrics, finding a combination of static, dynamic, and test metrics can construct a prediction model with excellent discrimination, achieving an AUC of 0.86-0.88. The results suggest effectiveness depends more on code and test complexity than fault type/location, and entangled dynamic call graphs hinder localization.
SENSITIVITY ANALYSIS OF INFORMATION RETRIEVAL METRICS (ijcsit)
Average Precision, Recall and Precision are the main metrics of Information Retrieval (IR) system performance. Using mathematical and empirical analysis, this paper shows the properties of those metrics. Mathematically, it is demonstrated that all of these parameters are very sensitive to relevance judgments, which are not usually very reliable. We show that shifting a relevant document downwards within the ranked list decreases Average Precision. The variation of the Average Precision value is pronounced in positions 1 to 10, while from the 10th position on this variation is negligible. In addition, we estimate the regularity of the changes in the Average Precision value when an arbitrary number of relevance judgments within the existing ranked list is switched from non-relevant to relevant. Empirically, it is shown that 6 relevant documents at the end of a 20-document list have approximately the same Average Precision value as a single relevant document at the beginning of the list, while Recall and Precision values increase linearly regardless of the document's position in the list. We also show that in the case of a Serbian-to-English human-translated query followed by English-to-Serbian machine translation, the relevance judgments change significantly and, therefore, all the parameters for measuring IR system performance are also subject to change.
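The "6 relevant documents at the end versus 1 at the top" observation can be reproduced with a small TREC-style Average Precision computation. This is a sketch under the assumption that AP is normalized by the total number of relevant documents for the query (here assumed to be six); the paper's exact AP variant may differ.

```python
def average_precision(relevance, total_relevant=None):
    """TREC-style AP: sum of precision@k over relevant ranks, divided by
    the total number of relevant documents for the query."""
    if total_relevant is None:
        total_relevant = sum(relevance)
    hits, score = 0, 0.0
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            score += hits / k
    return score / total_relevant if total_relevant else 0.0

# One relevant document at the top of a 20-document list (six relevant
# documents assumed to exist for the query)...
ap_top = average_precision([1] + [0] * 19, total_relevant=6)
# ...versus six relevant documents at the end of the same-length list.
ap_end = average_precision([0] * 14 + [1] * 6)
```

Under this assumption `ap_top` is about 0.167 and `ap_end` about 0.192, which are roughly comparable, matching the paper's empirical claim.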
ANALYSIS OF MACHINE LEARNING ALGORITHMS WITH FEATURE SELECTION FOR INTRUSION ... (IJNSA Journal)
This document summarizes a research paper that analyzes machine learning algorithms for intrusion detection using the UNSW-NB15 dataset. It compares the performance of classifiers like KNN, SGD, Random Forest, Logistic Regression, and Naive Bayes, both with and without feature selection. Chi-Square feature selection is applied to reduce irrelevant features before training the classifiers. The classifiers' performance is evaluated based on metrics like accuracy, precision, recall, F1-score, true positive rate and false positive rate. The paper finds that feature selection can improve classifiers' performance for intrusion detection.
A practical guide for using Statistical Tests to assess Randomized Algorithms... (Lionel Briand)
This document provides guidance on using statistical tests to properly evaluate randomized algorithms in software engineering. It discusses challenges in determining if a technique is better than others and the need for multiple runs to avoid conclusions based solely on randomness. Statistical tests help identify if enough runs were performed but may find tiny differences significant with sufficient runs. The document advocates more runs and statistical tests to validate results rather than relying on luck, and presents examples where this was crucial, including search-based test generation and vulnerability testing techniques.
Using evolutionary testing to improve efficiency and quality (Faysal Ahmed)
Evolutionary testing uses genetic algorithms to automatically generate test cases that improve software testing efficiency and quality. It transforms testing goals into optimization problems solved through evolutionary algorithms. Experiments show evolutionary testing outperforms manual, random, and static testing by generating more effective test cases. It provides a systematic, automated approach for testing temporal behavior, safety, and structural coverage.
Testing of Cyber-Physical Systems: Diversity-driven Strategies (Lionel Briand)
Lionel Briand discusses strategies for testing cyber-physical systems using diversity-driven approaches. He outlines challenges in verifying controllers and decision-making components in cyber-physical systems due to large input spaces and expensive model execution. Briand proposes maximizing diversity of test cases to improve fault detection. He describes using diversity of input signals, output signals, and failure patterns to generate test cases. Search algorithms are used to find test cases that maximize diversity or reveal specific failure patterns. The strategies are shown to significantly outperform coverage-based and random testing on Simulink models.
An SPRT Procedure for an Ungrouped Data using MMLE Approach (IOSR Journals)
This document describes a sequential probability ratio test (SPRT) procedure for analyzing ungrouped software failure data using a modified maximum likelihood estimation (MMLE) approach. The SPRT procedure can help quickly detect unreliable software by making decisions with fewer observed failures than traditional hypothesis testing methods. Parameters are estimated using MMLE, which approximates functions in the maximum likelihood equation with linear functions to simplify calculations compared to other estimation methods. The document provides details on how to apply the SPRT procedure and MMLE parameter estimation to a software reliability growth model to analyze software failure data sequentially and detect unreliable software components earlier.
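The sequential decision rule being applied can be sketched as Wald's classic SPRT on exponential inter-failure times: accumulate the log-likelihood ratio of an "unreliable" failure rate against a "reliable" one and stop as soon as either threshold is crossed. The rates and error levels below are illustrative assumptions; the paper applies SPRT to a software reliability growth model with MMLE-estimated parameters.

```python
import math

def sprt(times, lam0, lam1, alpha=0.05, beta=0.05):
    """Wald's SPRT on exponential inter-failure times:
    H0 = failure rate lam0 (reliable) vs H1 = lam1 (unreliable, lam1 > lam0)."""
    upper = math.log((1 - beta) / alpha)   # cross upward: accept H1
    lower = math.log(beta / (1 - alpha))   # cross downward: accept H0
    llr = 0.0
    for n, t in enumerate(times, start=1):
        # Log-likelihood ratio of one exponential observation.
        llr += math.log(lam1 / lam0) - (lam1 - lam0) * t
        if llr >= upper:
            return "unreliable", n
        if llr <= lower:
            return "reliable", n
    return "continue", len(times)

# Short inter-failure times trigger an early "unreliable" decision;
# long ones an early "reliable" decision.
fast = sprt([0.1] * 10, 0.1, 1.0)
slow = sprt([20.0] * 5, 0.1, 1.0)
```

Note how few observations are needed before a decision is reached, which is exactly the advantage the abstract claims over fixed-sample hypothesis testing.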
Automated Inference of Access Control Policies for Web Applications (Lionel Briand)
This document proposes an approach to automatically infer access control policies for web applications through dynamic analysis and machine learning. The approach involves exploring the application to discover resources, analyzing resource access data, inferring access rules using decision trees, assessing rule consistency, and targeted testing. An evaluation on two applications found the approach was effective in discovering resources and inferring correct policies for one application. Inconsistencies in the inferred rules also helped detect some access control issues in the applications.
Parameter Estimation of Software Reliability Growth Models Using Simulated An... (Editor IJCATR)
The parameter estimation of the Goel-Okumoto model is performed using simulated annealing. The Goel-Okumoto model is based on the exponential model and is a simple non-homogeneous Poisson process (NHPP) model. Simulated annealing is a heuristic optimization technique that provides a way to escape local optima. The data set is optimized using the simulated annealing technique. SA is a stochastic algorithm that depends on the specification of the neighbourhood structure of a state space and on the parameter settings of its cooling schedule, and it performs better than a genetic algorithm (GA).
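The estimation step can be sketched as simulated annealing over the squared error of the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)). The step sizes, cooling rate, and synthetic data below are illustrative assumptions, not the paper's settings.

```python
import math, random

def fit_goel_okumoto(times, counts, iters=20000, t0=1.0, cooling=0.999, seed=1):
    """Estimate (a, b) of the Goel-Okumoto mean value function
    m(t) = a * (1 - exp(-b t)) by simulated annealing on squared error."""
    random.seed(seed)
    def sse(a, b):
        return sum((c - a * (1 - math.exp(-b * t))) ** 2
                   for t, c in zip(times, counts))
    a, b = max(counts), 0.1            # crude starting point
    cur_cost = sse(a, b)
    best = (a, b, cur_cost)
    temp = t0
    for _ in range(iters):
        na = a + random.gauss(0, 1.0)
        nb = max(1e-6, b + random.gauss(0, 0.01))
        if na <= 0:
            continue
        nc = sse(na, nb)
        # Accept downhill moves always; uphill moves with Boltzmann
        # probability, which lets the search escape local optima.
        if nc < cur_cost or random.random() < math.exp(-(nc - cur_cost) / temp):
            a, b, cur_cost = na, nb, nc
            if nc < best[2]:
                best = (na, nb, nc)
        temp *= cooling
    return best

# Synthetic cumulative failure counts generated from a = 100, b = 0.05.
times = [10 * i for i in range(1, 11)]
counts = [100 * (1 - math.exp(-0.05 * t)) for t in times]
a_hat, b_hat, cost = fit_goel_okumoto(times, counts)
```

Starting from a deliberately wrong b, the annealing run should reduce the fitting error substantially on this synthetic data.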
IMPLEMENTATION OF COMPACTION ALGORITHM FOR ATPG GENERATED PARTIALLY SPECIFIED... (VLSICS Design)
In this paper an ATPG is implemented using C++. The ATPG is based on the fault equivalence concept, in which the number of faults is reduced before the compaction step. It uses line justification and error propagation to find test vectors for the reduced fault set with the aid of controllability and observability measures. The single stuck-at fault model is considered. Programs were developed for the fault equivalence method, controllability/observability, automatic test pattern generation, and test data compaction in the object-oriented language C++. The ISCAS 85 C17 circuit was used for analysis along with other circuits, in the standard ISCAS (International Symposium on Circuits And Systems) netlist format. Flow charts and results for the ISCAS 85 C17 circuit and other netlists are given in this paper. The test vectors generated by the ATPG are further compacted to reduce the test data volume, and the compaction algorithm is described along with its results.
This document contains instructions for a final examination in an introduction to software engineering course. It provides the date, time, location of the exam, as well as instructions that students are to write their name, student ID, and signature on the cover and top of the exam. It also states that the exam is closed book and notes, calculators are permitted, and to circle one answer for multiple choice questions. The exam contains two sections - a multiple choice section and an essay question section where students should write their answers in a separate booklet.
DYNAMUT: A MUTATION TESTING TOOL FOR INDUSTRY-LEVEL EMBEDDED SYSTEM APPLICATIONS (ijesajournal)
The document describes DynaMut, a tool developed to automate mutation testing for embedded system applications written in C++. DynaMut inserts conditional mutations into the code during compilation rather than requiring multiple recompilations. This reduces the time needed for mutation testing by 48-67% compared to traditional methods. The document also evaluates different sampling techniques for reducing the number of mutations tested while maintaining representative results, finding that dithered sampling is most effective.
IRJET - Neural Network based Leaf Disease Detection and Remedy Recommenda... (IRJET Journal)
This document describes a neural network-based system for detecting leaf diseases and recommending remedies. It uses a convolutional neural network (CNN) and deep learning techniques to classify images of plant leaves with different diseases. The system is trained on a dataset of 5000 leaf images across 4 disease classes. It aims to help farmers more easily identify leaf diseases and receive treatment recommendations without needing to directly contact experts. The document outlines the existing problems, proposed solution, literature review on related techniques like boosting and support vector machines, software and algorithms used including Python, Anaconda and Spyder. It also describes the implementation process involving modules for data loading, preprocessing, feature extraction using CNN, disease prediction, and recommending remedies.
This paper presents a set of methods that use a genetic algorithm for automatic test-data generation in software testing. Over the years, researchers have proposed several methods for generating test data, each with different drawbacks. In this paper, we present various genetic algorithm (GA)-based test methods with different parameters to automate structure-oriented test data generation based on the internal program structure. The discovered factors are used in evaluating the fitness function of the genetic algorithm to select the best possible test method. These methods take test populations as input and evaluate the test cases for the program. This integration helps improve the overall performance of the genetic algorithm in search-space exploration and exploitation, with a better convergence rate.
Test case optimization in configuration testing using ripper algorithm (eSAT Journals)
Abstract
Software systems are highly configurable. Although configurability brings many advantages, it is difficult to test for the unique errors hiding in particular configurations. To overcome this problem, combinatorial interaction testing (CIT) selects a strength and computes a covering array that includes all configuration option combinations of that strength. However, CIT identifies the effective configuration space poorly, so the cost required for testing increases. In this work, the techniques include a hierarchical clustering algorithm and the RIPPER algorithm. They yield high-strength interactions that can be missed by the CIT approach, and they identify the effective configuration space. We evaluated and compared the coverage achieved by CIT and by RIPPER classification with hierarchical clustering. Using this approach we validate loop-based as well as statement-based configurations. Our results strongly suggest that the proto-interactions formed by RIPPER classification with hierarchical clustering cover sets of configurations more effectively than traditional CIT.
Keywords: Configuration options, Hierarchical Clustering, RIPPER Algorithm
Aco based solution for tsp model for evaluation of software test suite (IAEME Publication)
This document discusses using ant colony optimization (ACO) to evaluate software test suites. It begins by introducing software testing and describing how test cases are generated and test suites created. It then proposes using ACO to execute the test suite by formulating it as a traveling salesman problem (TSP) and having "ants" find optimal paths through test cases. The paper outlines the ACO algorithm and applies it to a sample test suite evaluation. It evaluates the accuracy and efficiency of the approach using metrics like precision, recall, iterative best cost, and average node branching. The technique is shown to evaluate test suites more efficiently than other algorithms like Dijkstra's algorithm.
Aco based solution for tsp model for evaluation of software test suite (IAEME Publication)
The document discusses evaluating software test suites using an ant colony optimization (ACO) approach. It proposes formulating the testing problem as a traveling salesman problem that can be solved using ACO. Specifically:
1) It describes generating test cases using equivalence class partitioning and discusses tools that can automatically generate test cases.
2) It explains how ACO can be used to execute the test suite by modeling it as a TSP and having "ants" find optimal test case execution orders.
3) It provides pseudocode for the ACO algorithm to evaluate the test suite and discusses updating pheromone trails to bias testing toward high quality solutions.
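The ant-based tour construction and pheromone update outlined in the pseudocode can be sketched as a miniature ACO, with test cases as TSP nodes and any pairwise dissimilarity as the "distance". This is an illustrative sketch, not the paper's exact algorithm or parameter settings.

```python
import random

def aco_order(dist, ants=10, iters=50, evap=0.5, q=1.0, seed=7):
    """Ant colony optimization for a TSP-style ordering of test cases:
    each ant builds a tour node by node; pheromone is evaporated and
    then reinforced along the best tour found so far."""
    random.seed(seed)
    n = len(dist)
    pher = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                cur = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                # Selection probability proportional to pheromone times
                # visibility (inverse distance).
                weights = [pher[cur][j] / dist[cur][j] for j in choices]
                tour.append(random.choices(choices, weights)[0])
            length = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate, then deposit pheromone along the best tour so far,
        # biasing later ants toward high-quality orderings.
        pher = [[p * (1 - evap) for p in row] for row in pher]
        for i in range(n - 1):
            pher[best_tour[i]][best_tour[i + 1]] += q / best_len
    return best_tour, best_len

# Four test cases laid out on a line; the optimal open tour has length 3.
dist = [[abs(i - j) for j in range(4)] for i in range(4)]
tour, length = aco_order(dist)
```

On this toy instance the ants quickly converge on an ordering that visits the nodes in line order, the shortest possible tour.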
1. Write test cases from given software models using the following test design techniques. (K3)
   a. equivalence partitioning;
   b. boundary value analysis;
   c. decision tables;
   d. state transition testing.
2. Understand the main purpose of each of the four techniques, what level and type of testing could use the technique, and how coverage may be measured. (K2)
3. Understand the concept of use case testing and its benefits.
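For a numeric input range, the first two techniques in the list can be sketched as small value generators. This is a minimal illustration for a hypothetical integer field valid from lo to hi, not an exhaustive treatment.

```python
def boundary_values(lo, hi):
    """Boundary value analysis: test just below, on, and just above
    each edge of a valid integer range [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_partitions(lo, hi):
    """Equivalence partitioning: one representative per partition,
    i.e. below the range, inside it, and above it."""
    return [lo - 1, (lo + hi) // 2, hi + 1]

# An input field that accepts values 1..100:
bva = boundary_values(1, 100)
ep = equivalence_partitions(1, 100)
```

Equivalence partitioning picks one value per group that the system should treat identically, while boundary value analysis concentrates tests where off-by-one defects cluster.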
Specification based or black box techniques (muhammad afif)
The document discusses various specification-based or black-box testing techniques including equivalence partitioning, boundary value analysis, decision tables, state transition testing, and use case testing. It provides definitions and explanations of each technique, how they are used to design test cases, and their benefits in testing software specifications and identifying bugs.
Testing and test case generation by using fuzzy logic and n (IAEME Publication)
The document discusses using fuzzy logic and natural language processing techniques for software testing and test case generation. Specifically, it proposes using these techniques to enable search-based testing, automatic test case generation, and automatic fault finding. Fuzzy logic would be used to generate test cases from natural language requirements and specifications. This approach aims to make testing more efficient. The document provides background on different types of software testing (e.g. black box vs. white box) and discusses gray-box testing and test automation frameworks. It also outlines exploratory testing and functional vs. non-functional testing. The key idea is applying fuzzy logic and NLP to requirements to automatically generate test cases and detect faults.
The document discusses software testing processes and techniques. It covers topics like test case design, validation testing vs defect testing, unit testing vs integration testing, interface testing, system testing, acceptance testing, regression testing, test management, deriving test cases from use cases, and test coverage. The key points are that software testing involves designing test cases, running programs with test data, comparing results to test cases, and reporting test results. Different testing techniques like unit testing, integration testing, and system testing address different levels or parts of the system. Test cases are derived from use case scenarios to validate system functionality.
Specification based or black box techniques (andika m) (Andika Mardanu)
This document discusses specification-based or black box testing techniques, specifically equivalence partitioning, boundary value analysis, decision tables, state transition testing, and use case testing. It provides definitions and explanations of each technique, including that equivalence partitioning divides test conditions into groups that should be handled equivalently by the system, decision tables deal with combinations of inputs and conditions, state transition testing models systems that can be in different states, and use case testing identifies test cases that exercise full system transactions.
Specification based or black box techniques (Irvan Febry)
- The document discusses four specification-based black-box testing techniques: equivalence partitioning, boundary value analysis, decision tables, and state transition testing.
- It provides details on each technique, including how equivalence partitioning divides test conditions into groups that should be treated equivalently, how decision tables deal with combinations of inputs and conditions, and how state transition testing is used for systems that can be described as finite state machines.
- The document also briefly discusses use case testing and how use cases describe interactions between actors and the system to achieve tasks from start to finish.
Multi objective genetic algorithm for regression testing reduction (eSAT Journals)
Abstract: Location-based authentication is a new direction in the development of authentication techniques in the area of security. In this paper, the geographical position of the user is an important attribute for user authentication. It provides strong authentication, as a location characteristic can never be stolen or spoofed. Data hiding in encrypted images is proposed as an effective and popular means of privacy protection. In our application we provide secure message-passing facilities, for which OTP (One Time Password) and steganography techniques are used. This technique is a relatively new approach to information security. Keywords: Location based authentication, GPS device, Image encryption, Cryptography, Steganography, and OTP
The document discusses four specification-based black-box testing techniques: equivalence partitioning, boundary value analysis, decision tables, and state transition testing. It provides definitions and explanations of each technique. For example, it explains that equivalence partitioning involves dividing test conditions into groups that should be handled equivalently by the system, and then testing one condition from each group. It also discusses use case testing and how use cases can help uncover integration defects.
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
This document discusses test case prioritization, which aims to execute test cases in an order that increases their effectiveness at detecting faults. It describes regression testing and different prioritization techniques, including genetic algorithms. Genetic algorithms represent test cases as chromosomes and use selection, crossover and mutation to evolve solutions. The document concludes that prioritization techniques can improve fault detection rates but their effectiveness varies across programs and test suites, so practitioners must choose techniques carefully for different testing scenarios.
Similar to Testing embedded system through optimal mining technique (OMT) based on multi-input domain
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... (IJECEIAES)
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par... (IJECEIAES)
Wide application of proportional-integral-differential (PID)-regulator in industry requires constant improvement of methods of its parameters adjustment. The paper deals with the issues of optimization of PID-regulator parameters with the use of neural network technology methods. A methodology for choosing the architecture (structure) of neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, as well as the form and type of activation function. Algorithms of neural network training based on the application of the method of minimizing the mismatch between the regulated value and the target value are developed. The method of back propagation of gradients is proposed to select the optimal training rate of neurons of the neural network. The neural network optimizer, which is a superstructure of the linear PID controller, allows increasing the regulation accuracy from 0.23 to 0.09, thus reducing the power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
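The control law whose parameters the neural optimizer tunes is the standard discrete PID update. Below is a generic sketch of that update, independent of the optimizer itself; the gains and the trivial first-order plant are illustrative assumptions.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt          # accumulate error (I term)
        deriv = (err - self.prev_err) / self.dt  # error slope (D term)
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a trivial integrator plant (x' = u) toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    x += pid.step(1.0, x) * 0.01
```

A neural optimizer such as the one described above would adjust kp, ki, and kd online to minimize the mismatch between the regulated value and the target; here fixed gains already bring the plant close to the setpoint.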
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed
simplified modulation technique paves the way for more straightforward and
efficient control of multilevel inverters, enabling their widespread adoption and
integration into modern power electronic systems. Through the amalgamation of
sinusoidal pulse width modulation (SPWM) with a high-frequency square wave
pulse, this controlling technique attains energy equilibrium across the coupling
capacitor. The modulation scheme incorporates a simplified switching pattern
and a decreased count of voltage references, thereby simplifying the control
algorithm.
A review on features and methods of potential fishing zoneIJECEIAES
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features like sea surface temperature (SST) and sea surface height (SSH), along with classification methods such as classifiers. The features like SST, SSH, and different classifiers used to classify the data, have been figured out in this review study. This study underscores the importance of examining potential fishing zones using advanced analytical techniques. It thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centers on data characteristics and the application of classification algorithms for classification of potential fishing zones. Furthermore, the prediction of potential fishing zones relies significantly on the effectiveness of classification algorithms. Previous research has assessed the performance of models like support vector machines, naïve Bayes, and artificial neural networks (ANN). In the previous result, the results of support vector machine (SVM) were 97.6% more accurate than naive Bayes's 94.2% to classify test data for fisheries classification. By considering the recent works in this area, several recommendations for future works are presented to further improve the performance of the potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f...IJECEIAES
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make little devices that boost productivity. The goal is to optimize device density. Scientists are reducing connection delays to improve circuit performance. This helped them understand three-dimensional integrated circuit (3D IC) concepts, which stack active devices and create vertical connections to diminish latency and lower interconnects. Electrical involvement is a big worry with 3D integrates circuits. Researchers have developed and tested through silicon via (TSV) and substrates to decrease electrical wave involvement. This study illustrates a novel noise coupling reduction method using several electrical involvement models. A 22% drop in electrical involvement from wave-carrying to victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption increased quickly, contributing to climate change
that is evident in unusual flooding and draughts, and global warming. Over
the past ten years, women's involvement in society has grown dramatically,
and they succeeded in playing a noticeable role in reducing climate change.
A bibliometric analysis of data from the last ten years has been carried out to
examine the role of women in addressing the climate change. The analysis's
findings discussed the relevant to the sustainable development goals (SDGs),
particularly SDG 7 and SDG 13. The results considered contributions made
by women in the various sectors while taking geographic dispersion into
account. The bibliometric analysis delves into topics including women's
leadership in environmental groups, their involvement in policymaking, their
contributions to sustainable development projects, and the influence of
gender diversity on attempts to mitigate climate change. This study's results
highlight how women have influenced policies and actions related to climate
change, point out areas of research deficiency and recommendations on how
to increase role of the women in addressing the climate change and
achieving sustainability. To achieve more successful results, this initiative
aims to highlight the significance of gender equality and encourage
inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition state from gridconnected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems that are associated to power systems is islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis shows that the overall
weight of passive methods (24.7%), active methods (7.8%), hybrid methods
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
An In-Depth Exploration of Natural Language Processing: Evolution, Applicatio...DharmaBanothu
Natural language processing (NLP) has
recently garnered significant interest for the
computational representation and analysis of human
language. Its applications span multiple domains such
as machine translation, email spam detection,
information extraction, summarization, healthcare,
and question answering. This paper first delineates
four phases by examining various levels of NLP and
components of Natural Language Generation,
followed by a review of the history and progression of
NLP. Subsequently, we delve into the current state of
the art by presenting diverse NLP applications,
contemporary trends, and challenges. Finally, we
discuss some available datasets, models, and
evaluation metrics in NLP.
This study Examines the Effectiveness of Talent Procurement through the Imple...DharmaBanothu
In the world with high technology and fast
forward mindset recruiters are walking/showing interest
towards E-Recruitment. Present most of the HRs of
many companies are choosing E-Recruitment as the best
choice for recruitment. E-Recruitment is being done
through many online platforms like Linkedin, Naukri,
Instagram , Facebook etc. Now with high technology E-
Recruitment has gone through next level by using
Artificial Intelligence too.
Key Words : Talent Management, Talent Acquisition , E-
Recruitment , Artificial Intelligence Introduction
Effectiveness of Talent Acquisition through E-
Recruitment in this topic we will discuss about 4important
and interlinked topics which are
A high-Speed Communication System is based on the Design of a Bi-NoC Router, ...DharmaBanothu
The Network on Chip (NoC) has emerged as an effective
solution for intercommunication infrastructure within System on
Chip (SoC) designs, overcoming the limitations of traditional
methods that face significant bottlenecks. However, the complexity
of NoC design presents numerous challenges related to
performance metrics such as scalability, latency, power
consumption, and signal integrity. This project addresses the
issues within the router's memory unit and proposes an enhanced
memory structure. To achieve efficient data transfer, FIFO buffers
are implemented in distributed RAM and virtual channels for
FPGA-based NoC. The project introduces advanced FIFO-based
memory units within the NoC router, assessing their performance
in a Bi-directional NoC (Bi-NoC) configuration. The primary
objective is to reduce the router's workload while enhancing the
FIFO internal structure. To further improve data transfer speed,
a Bi-NoC with a self-configurable intercommunication channel is
suggested. Simulation and synthesis results demonstrate
guaranteed throughput, predictable latency, and equitable
network access, showing significant improvement over previous
designs
ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 9, No. 3, June 2019: 2141-2151
Combinatorial testing is a commonly used black-box practice that can dramatically reduce the number of test cases while remaining highly effective at detecting software faults. The method derives test cases from the input domain of the system under test. However, when the input domain is large and the output domain is much smaller, it is preferable to test the output domain either exhaustively [1] or as thoroughly as possible.
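As a concrete illustration of this reduction, the following sketch counts the exhaustive test cases for a small system against the value pairs that pairwise combinatorial testing must cover. The three-parameter input domain is hypothetical, invented purely for illustration:

```python
from itertools import product, combinations

# Hypothetical input domain for a small system under test:
# three parameters, each with a handful of discrete values.
domain = {
    "sensor_mode": ["normal", "degraded", "off"],
    "alarm_level": ["low", "high"],
    "coolant_flow": ["min", "nominal", "max"],
}

# Exhaustive testing enumerates every combination of values.
exhaustive = list(product(*domain.values()))
print(len(exhaustive))  # 3 * 2 * 3 = 18 test cases

# Pairwise combinatorial testing only requires that every value
# pair of every parameter pair appear in at least one test case.
pairs_to_cover = set()
for (p1, v1s), (p2, v2s) in combinations(domain.items(), 2):
    for v1 in v1s:
        for v2 in v2s:
            pairs_to_cover.add(((p1, v1), (p2, v2)))
print(len(pairs_to_cover))  # 3*2 + 3*3 + 2*3 = 21 pairs
```

Since each test case covers one pair from each of the three parameter pairings, the 21 pairs can be covered with roughly nine cases (the size of the largest two-parameter cross product), about half the exhaustive count; the gap widens rapidly as parameters and values are added.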
For some safety-critical embedded systems, building test cases drawn from the multi-input domain is a necessity, because multiple inputs can occur at the same time. However, an embedded system must also be tested from other perspectives, including the input, output, input-output, and multi-output domains. Generating test cases from the multi-input perspective is more suitable than the other perspectives, as it guarantees that all, or as many as possible, input combinations are comprehensively tested.
Exhaustive testing [2] over non-multi-input domains is out of the question when many input variables exist and act in several combinations. Pseudo-exhaustive testing aims to consider only those combinations that are most likely to result in failure conditions. The Optimal Mining Technique (OMT) derives test cases by choosing certain combinations of either the inputs or the outputs, based on the possibility of occurrence in the multi-input or multi-output domain of an embedded system such as TMCNRS, which monitors and controls temperatures within nuclear reactor systems.
In the case of TMCNRS, temperature events within the nuclear reactor can occur simultaneously or independently. In an embedded system, processing tasks are designed to handle multiple inputs that occur simultaneously. These tasks must be tested thoroughly to guarantee the proper working of the embedded system. Test cases should be generated to verify that the proper outputs are produced for each occurrence of a multi-input.
Problem
In embedded systems, inputs occur as sets in addition to occurring independently. The behaviour of an embedded system when multi-inputs occur must be tested to determine whether the system has been properly developed to process multi-inputs that arrive simultaneously.
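The multi-input test stimuli described above can be enumerated directly: every non-empty subset of the independent inputs is a candidate simultaneous occurrence. The input names below are hypothetical stand-ins for TMCNRS signals, not taken from the actual system:

```python
from itertools import chain, combinations

# Hypothetical independent inputs a reactor-temperature monitor
# such as TMCNRS might receive; names are illustrative only.
inputs = ["core_temp_high", "coolant_temp_high", "sensor_fault"]

def multi_input_vectors(inputs):
    """Every non-empty subset of inputs that may occur simultaneously."""
    return list(chain.from_iterable(
        combinations(inputs, k) for k in range(1, len(inputs) + 1)))

vectors = multi_input_vectors(inputs)
print(len(vectors))  # 2**3 - 1 = 7 multi-input test stimuli
for v in vectors:
    print(v)
```

For n independent inputs this yields 2^n - 1 stimuli, which is exactly why pseudo-exhaustive selection of the likely-to-fail subsets matters once n grows.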
Proposed solution
An improved Optimal Mining Technique is presented in this paper. The ranges of values to be used for generating the test data have been pre-identified and mapped to output variables. The multi-input relationships, together with the input-output relationships, can be used as a database for mining relationship patterns for further modelling. The pattern of occurrence of the input variables can be determined by using a mining algorithm or through manual inspection.
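A minimal sketch of that mining step follows, assuming a hypothetical log of observed simultaneous-input sets; simple frequency counting stands in for a full mining algorithm, and the input names are invented for illustration:

```python
from collections import Counter

# Hypothetical log of observed simultaneous-input sets (one frozenset
# per sampling instant); in practice this comes from system monitoring.
observed = [
    frozenset({"core_temp_high"}),
    frozenset({"core_temp_high", "coolant_temp_high"}),
    frozenset({"core_temp_high", "coolant_temp_high"}),
    frozenset({"sensor_fault"}),
]

# Stand-in for the mining algorithm: count how often each
# multi-input pattern occurs, then keep patterns above a threshold.
pattern_counts = Counter(observed)
threshold = 2
frequent_patterns = {p for p, n in pattern_counts.items() if n >= threshold}
print(frequent_patterns)  # multi-input patterns worth prioritizing as test cases
```

Patterns that survive the threshold identify the multi-input combinations that actually occur in operation, which is the basis on which OMT prioritizes test-case generation.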
2. RELATED WORKS
Lakshmi Prasad et al. [3] presented a comprehensive survey on combinatorial testing. In [4], a method is proposed for generating test cases from the input domain, with special consideration given to testing standalone embedded systems. The algorithm successfully generates pairs for related input parameters and eliminates unrelated input pairs, thereby reducing the size of the test suite to a minimum. Lakshmi Prasad et al. [5] proposed several combinatorial methods for testing an embedded system.
D. M. Cohen et al. [6] introduced the combinatorial design approach for generating test cases automatically. They described an application developed using their method and showed that the time required to develop the test plan was reduced considerably, and that the entire code was covered by the test cases generated from that plan.
Cohen D. M., et al., [7] presented a system called AETG. Their approach considers combinations
of input parameters of pair-wise, triple-wise and, in general, n-wise strength, generating all valid test pairs
while discarding the invalid ones. The number of test cases grows only logarithmically with the number of
input variables used. AETG has been used for different types of testing, including unit, functional,
acceptance, system, integration, regression and inter-operability testing.
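AETG itself relies on a more sophisticated randomized candidate-generation scheme, but the core greedy idea (each added test should cover as many still-uncovered value pairs as possible) can be sketched as follows; the parameter model is a made-up example:

```python
from itertools import combinations, product

def all_pairs(params):
    """Every (column pair, value pair) that pair-wise testing must cover."""
    pairs = set()
    for (i, vi), (j, vj) in combinations(enumerate(params), 2):
        for a, b in product(vi, vj):
            pairs.add((i, j, a, b))
    return pairs

def greedy_pairwise(params):
    """Repeatedly add the candidate test covering the most uncovered pairs."""
    uncovered = all_pairs(params)
    candidates = list(product(*params))  # feasible for small models only
    tests = []
    while uncovered:
        best = max(candidates, key=lambda t: sum(
            (i, j, t[i], t[j]) in uncovered
            for i, j in combinations(range(len(t)), 2)))
        tests.append(best)
        for i, j in combinations(range(len(best)), 2):
            uncovered.discard((i, j, best[i], best[j]))
    return tests

params = [["0", "1"], ["low", "high"], ["on", "off"]]
suite = greedy_pairwise(params)
# The suite covers all 12 value pairs with fewer than the 8 exhaustive tests.
```

This illustrates the logarithmic-growth claim in miniature: pair-wise coverage needs far fewer tests than exhaustive enumeration.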
Cohen D. M., [8] presented a method and a system for enumerating a minimal number of test
cases for systems with interacting elements, given the relationships between the elements and the number of
characteristics evaluated for each element. The user enters values for each element and then defines
relationships between the elements. The method enumerates a table of test cases for each relationship,
using deterministic procedures when applicable and random procedures when deterministic procedures are
not applicable. After a table of test cases is generated for each relationship, the method combines the
relationships into a single table of test cases.
3. Int J Elec & Comp Eng ISSN: 2088-8708
Intelligent fire detection and alert system using labVIEW (Fakrulradzi Idris)
2143
The significant expansion of autonomous control and information processing capabilities in the
coming generation of mission software systems results in a qualitatively larger space of behaviours that needs
to be "covered" during testing, not only at the system level but also at subsystem and unit levels (Tung, [9]).
A major challenge in this area is to automatically generate a relatively small set of test cases that,
collectively, guarantees a selected degree of coverage of the behaviour space. They described an algorithm
for a parametric test case generation tool that applies a combinatorial design approach to the selection of
candidate test cases. Evaluation of this algorithm on test parameters from the Deep Space One mission
revealed a valuable reduction in the number of test cases compared to an earlier home-brewed generator.
Lei Y., et al., [10] used a test-specification-based criterion which requires that, for each pair of input
variables, every pair of their selected values is covered by at least one test case. They evolved this strategy
for carrying out pair-wise testing.
Covering arrays were augmented further by Cohen M. B., et al., [11] through the use of simulated
annealing. This led to a special array containing several sub-arrays which together contain all the t-tuples,
each tuple appearing at least once; the strength of the array is the number t, measured over all the
sub-arrays contained in it. They analysed arrays of strength 3 using recursive combinatorial construction
and search techniques, leveraging combinatorial construction for optimality and heuristic search for
efficiency of the array size.
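The strength-t property just described can be made concrete with a small checker; the 4-row array below is a standard textbook example, not one from the cited work:

```python
from itertools import combinations, product

def is_covering_array(rows, t, values_per_column):
    """True if every t-tuple of values appears in every choice of t columns."""
    k = len(values_per_column)
    for cols in combinations(range(k), t):
        seen = {tuple(r[c] for c in cols) for r in rows}
        needed = set(product(*(values_per_column[c] for c in cols)))
        if not needed <= seen:
            return False
    return True

# A classic strength-2 covering array for three binary factors in 4 rows:
ca = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_covering_array(ca, 2, [[0, 1]] * 3))  # True
```

Dropping any row breaks the property, which is what makes constructing minimal covering arrays a hard search problem.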
A further addition to recursive combinatorial generation combined with heuristic search was made
through the detection of interactions of multiple components that lead to different kinds of failures.
R. Kuhn, et al., [12] applied this approach to real-world applications and obtained analytical results. The
way a system is tested depends on the type of the system, and the choice of an appropriate testing method is
crucial for making testing effective and rational. Shiomi J., et al., [13] used a combinatorial method to test
an application modelling the flow of water inside a single-walled carbon nanotube in which several
temperature gradients exist.
Ochoa, et al., [14] introduced Box-Fusion, an approach to improve pair-wise testing. The
Box-Fusion approach was evaluated through a case study using two software implementations: the Simple
LTL Generator, which builds Linear Temporal Logic (LTL) formulae with atomic propositions, and the
Prospect algorithm, which can produce LTL formulae from more than 31,000 possible input combinations.
They evaluated Box-Fusion against the pair-wise testing, annotated control flow graph, and regression
testing approaches.
M. Lakshmi Prasad, et al., [15]-[19] built test cases by particle swarm optimization (PSO) for
multi-output domain embedded systems using combinatorial techniques. They also used a neural-network-
based strategy for automated construction of test cases for testing an embedded system using combinatorial
techniques, developed a method for generating test cases for testing web sites through neural networks and
input pairs, and generated test cases using combinatorial methods based on the multi-output domain of an
embedded system through a process of optimal selection.
Abdul Rahman, et al., [20] presented a survey on input-output relationship (IOR) based test data
generation strategies. They reviewed the existing combinatorial test data generation strategies supporting
IOR features, taking nature-inspired algorithms as the main basis. Benchmarking results illustrate the
comparative performance of existing nature-inspired algorithm-based strategies supporting IOR.
Combinatorial methods can also be used for testing software that predominantly uses logical
expressions based on Boolean or binary inputs. Boolean expressions are used extensively in the software of
most safety-critical applications. S. Vilkomir [21] studied the effectiveness of combinatorial testing when
binary inputs are used.
Deepa Gupta, et al., [22] proposed sequence generation of test cases using a pair-wise approach.
Their sequence-based approach generates the required number of test cases such that all available relations
between all instruction pairs are covered at least once; test case selection in this approach is based on
combinatorial testing.
Jose Torres-Jimenez, et al., [23] note that covering arrays are combinatorial structures with
applications in fields such as software testing and hardware Trojan detection. They proposed a two-stage
simulated annealing algorithm to construct covering arrays, instantiated through the construction of ternary
covering arrays of strength three, and obtained 579 new upper bounds. To show the generality of their
proposal, they defined a new benchmark composed of 25 MCA instances taken from the literature, all of
which were improved.
S. Route [24] observes that clients today want more for less, and that the IBM test mantra of "Test
Less, Test Right" addresses this by placing Combinatorial Test Design (CTD) at the heart of the solution.
The work presents two case studies of CTD implementation in client engagements and focuses on the
4. ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 9, No. 3, June 2019 : 2141 - 2151
2144
approach, process and challenges addressed to scale up the implementation and make CTD a mainstream
activity. The IBM Focus tool was used in both cases to implement CTD for optimization of tests and for
reducing test effort while increasing test coverage.
P. S., M. B., et al., [25] describe Combinatorial Testing as a test design methodology that aims to
detect the interaction failures existing in the software under test. The combinatorial input space model
comprises the parameters and the values they can take; building this model is a domain-knowledge- and
experience-intensive task. The objective of their paper is to assist the test designer in building this test
model. A rule-based semi-automatic approach is proposed to derive the input space model elements from
use case specifications and UML use case diagrams. A natural-language-processing-based parser and an
XMI-based parser are implemented. The formulated rules are applied to synthetic case studies, and the
output model is evaluated using precision and recall metrics. The results are promising, and the approach
should be of good use to the test designer.
Y. Yao, et al., [26] note that, as an effective software testing technique, combinatorial testing has
been gradually applied in various types of test practice. It is therefore necessary to provide useful
combinatorial testing tools to support the application of the technique in industrial scenarios, as well as
academic research on it. To this end, on the basis of the research results of their group, they developed a
suite of combinatorial testing tools whose functions include test case generation, test case optimization, and
test prioritization. To meet the requirements of both industrial and academic scenarios, the tools are
configurable, scalable and modular. Their paper gives a brief introduction to the design and implementation
of these tools.
3. APPLICATION OF OPTIMAL MINING TECHNIQUE TO PILOT PROJECT
The modified OMT algorithm and its application to the pilot project are presented below:
3.1. Steps of Optimal Mining Technique
a. Step-1
Determine the regular and embedded-system-specific input variables from the test requirements
specification. In this OMT method, only the multi-input variables that are continuous in nature have been
considered. The regular and ES-specific input variables selected are shown in Table 1 and Table 2, and the
regular and ES-specific output variables are shown in Table 3 and Table 4. The range of values to be used
for generating the test data has also been pre-identified and mapped to the input variables. The relationships
that exist between the input variables and their corresponding related input variables can be used as the
basis for generating the test cases.
Table 1. Regular Input Variables Traced Out of Test Specification of the Pilot Project
Variable Serial No   Input Variable Name   Type of Variable   Start Value   End Value
I1 INIT-MESSAGE Discrete "TMCNRS" "TMCNRS"
I2 KEY-1 Discrete 0 Z
I3 KEY-2 Discrete 0 Z
I4 KEY-3 Discrete 0 Z
I5 KEY-4 Discrete 0 Z
I6 KEY-5 Discrete 0 Z
I7 MSG-PW-ENTRY Discrete "Enter Password" "Enter Password"
I8 PASS-WD Discrete "abcde" "abcde"
I9 TEMP1 Continuous 1 255
I10 REF1 Continuous 1 255
I11 TEMP2 Continuous 1 255
I12 REF2 Continuous 1 255
Table 2. Embedded Specific Input Variables Traced Out of Test Specification of the Pilot Project
Variable Serial No   Input Variable Name   Type of Variable   Start Value   End Value
I13 R-Command Constant String - -
I14 Th-Command Constant String - -
I15 T-Command Constant String - -
I16 O-Command Constant String - -
I17 V-Command Constant String - -
I18 MEM-LOC-1 Constant 0 255
I19 MEM-LOC-2 Constant 0 255
I20 TEST-PORT1 Constant String - -
I21 TEST-PORT2 Constant String - -
I22 TEST-PORT3 Constant String - -
I23 TEST-PORT4 Constant String - -
I24 TEST-PORT5 Constant String - -
Table 3. Details of the Regular Output Variables Related to the Pilot Project
Variable Serial No.   Output Variable Name   Type of Variable   Start Value   End Value
O1 LCD-STAT Discrete N Y
O2 LCD-WRITE Discrete "ABC" "ABC"
O3 HOST-STAT Discrete N Y
O4 HOST-WRITE Discrete "ABC" "ABC"
O5 TEMP1-OSIG Discrete N Y
O6 TEMP2-OSIG Discrete N Y
O7 PUMP1-OSIG Discrete N Y
O8 PUMP2-OSIG Discrete N Y
O9 BUZZER-OSIG Discrete N Y
Table 4. Details of the Embedded Specific Output Variables related to the Pilot Project
Variable Serial No.   Output Variable Name   Type of Variable   Start Value   End Value
O12 TEMP1-TSIG Continuous 0 32756
O13 TEMP1-VSIG Continuous 0 32756
O14 TEMP1-RES-TIME Continuous 0 32756
O15 TEMP1-THRU-TIME Continuous 0 32756
O16 TEMP1-RANGE-STA Discrete N Y
O17 PUMP1-TSIG Continuous 0 32756
O18 PUMP1-VSIG Continuous 0 32756
O19 PUMP1-RES-TIME Continuous 0 32756
O20 PUMP1-THRU-TIME Continuous 0 32756
O21 TEMP2-TSIG Continuous 0 32756
O22 TEMP2-VSIG Continuous 0 32756
O23 TEMP2-RES-TIME Continuous 0 32756
O24 TEMP2-THRU-TIME Continuous 0 32756
O25 TEMP2-RANGE-STA Discrete N Y
O26 PUMP2-TSIG Continuous 0 32756
O27 PUMP2-VSIG Continuous 0 32756
O28 PUMP2-RES-TIME Continuous 0 32756
O29 PUMP2-THRU-TIME Continuous 0 32756
O30 BUZZER-TSIG Continuous 0 32756
O31 BUZZER-VSIG Continuous 0 32756
O32 BUZZER-RES-TIME Continuous 0 32756
O33 BUZZER-THRU-TIME Continuous 0 32756
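The variables traced in Step-1 lend themselves to a simple programmatic representation. The sketch below is an assumption about how the pre-identified ranges of Tables 1-4 could be held and turned into boundary test data; the `InputVariable` class and `boundary_values` helper are hypothetical, not part of the paper's tooling:

```python
from dataclasses import dataclass

@dataclass
class InputVariable:
    name: str
    kind: str      # "Discrete", "Continuous", or "Constant"
    start: object
    end: object

    def boundary_values(self):
        """Classic boundary candidates for continuous ranges: min, min+1, max-1, max."""
        if self.kind != "Continuous":
            return [self.start, self.end]
        lo, hi = int(self.start), int(self.end)
        return sorted({lo, lo + 1, hi - 1, hi})

# TEMP1 from Table 1: Continuous, range 1..255.
temp1 = InputVariable("TEMP1", "Continuous", 1, 255)
print(temp1.boundary_values())  # [1, 2, 254, 255]
```

Such boundary candidates are one natural source of test data once the ranges have been mapped to variables.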
b. Step-2
Determine the input-input relationships and also the relationships with the output variables, which
can be used as a database for mining relationship patterns for further modelling. Sample associativity and
the relationships among the input and output variables are shown in Table 5.
Table 5. Input Variables and its Relationship with Output Variables
c. Step-3
A set of input variables occurs in union, and such a set behaves in a pattern. The pattern of
occurrence of the input variables can be determined by using a mining algorithm or through manual
inspection. Following are the input sets, and the pattern of occurrence of those sets, which can be mined or
manually determined.
Set-I1 with Patterns: INIT-MESSAGE, MSG-PW-ENTRY, PASS-WD
Set-I2 with Patterns: TEST-PORT-1, TEST-PORT-2, TEST-PORT-3, TEST-PORT-4, TEST-PORT-5
Set-I3 with Patterns: Key-1, Key-2, Key-3, Key-4, Key-5
Set-I4 with Patterns: TEMP1, TEMP2
Set-I5 with Patterns: REF1, REF2
Set-I6 with Patterns: R-Command, Th-Command, T-Command, O-Command, V-Command
Set-I7 with Patterns: MEM-LOC-1, MEM-LOC-2
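The mining alternative mentioned in Step-3 can be sketched as simple co-occurrence counting over recorded input sequences. The logs below are invented for illustration; only the variable names come from the pilot project, and a real miner (e.g. a frequent-itemset algorithm) would generalise beyond pairs:

```python
from collections import Counter
from itertools import combinations

def mine_cooccurring_sets(sequences, min_support):
    """Return variable pairs appearing together in at least min_support sequences."""
    counts = Counter()
    for seq in sequences:
        for pair in combinations(sorted(set(seq)), 2):
            counts[pair] += 1
    return {p for p, c in counts.items() if c >= min_support}

logs = [
    ["TEMP1", "TEMP2"],            # temperatures sampled together
    ["TEMP1", "TEMP2", "REF1"],
    ["REF1", "REF2"],
    ["TEMP1", "TEMP2"],
]
print(mine_cooccurring_sets(logs, min_support=3))  # {('TEMP1', 'TEMP2')}
```

Variables that pass the support threshold form candidate input sets such as Set-I4 above.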
d. Step-4
Relate the input patterns through coherence between an input variable and an input variable set, as
shown below.
Set-I4 with Patterns ↔ Set-I5 with Patterns: TEMP1 ↔ REF1, TEMP2 ↔ REF2
Set-I4 with Patterns ↔ Set-I4 with Patterns: TEMP1 ↔ TEMP2, TEMP2 ↔ TEMP1
Set-I6 with Patterns ↔ Set-I6 with Patterns: R-Command, Th-Command, T-Command, O-Command, V-Command ↔ R-Command, Th-Command
e. Step-5
Trace out the output vectors having variables of similar nature and domain. Following are the
output vectors related to the example application.
Set-O1 with Patterns: LCD-STAT -; LCD-STAT LCD-WRITE
Set-O2 with Patterns: HOST-STAT -; HOST-STAT HOST-WRITE
Set-O3 with Patterns: TEMP1-OSIG -; TEMP1-OSIG TEMP1-TSIG; TEMP1-OSIG TEMP1-VSIG
Set-O4 with Patterns: BUZZER-OSIG -; BUZZER-OSIG BUZZER-TSIG; BUZZER-OSIG BUZZER-VSIG
Set-O5 with Patterns: BUZZER-RES-TIME -; BUZZER-THRU-TIME -
Set-O6 with Patterns: PUMP1-OSIG -; PUMP1-OSIG PUMP1-TSIG; PUMP1-OSIG PUMP1-VSIG
Set-O7 with Patterns: PUMP1-RES-TIME -; PUMP1-THRU-TIME -
Set-O8 with Patterns: TEMP2-OSIG -; TEMP2-OSIG TEMP2-TSIG; TEMP2-OSIG TEMP2-VSIG
Set-O9 with Patterns: PUMP2-OSIG -; PUMP2-OSIG PUMP2-TSIG; PUMP2-OSIG PUMP2-VSIG
Set-O10 with Patterns: PUMP2-RES-TIME -; PUMP2-THRU-TIME -
Set-O11 with Patterns: REF1-RANGE-STA -; REF2-RANGE-STA -
f. Step-6
Relate the output patterns through coherence between an output variable and an output variable set.
g. Step-7
Develop the input vector relationships with the output vector patterns.
Set-I1 with Patterns (INIT-MESSAGE, MSG-PW-ENTRY, PASS-WD) → Set-O1 with Patterns (LCD-STAT -; LCD-STAT LCD-WRITE)
Set-I2 with Patterns (Key-1, Key-2, Key-3, Key-4, Key-5) → Set-O1 with Patterns (LCD-STAT -; LCD-STAT LCD-WRITE)
Set-I3 with Patterns:
TEST-PORT-1 → Set-O1 with Patterns (LCD-STAT -; LCD-STAT LCD-WRITE)
TEST-PORT-5 → Set-O2 with Patterns (HOST-STAT -; HOST-STAT HOST-WRITE)
TEST-PORT-2 → Set-O6 with Patterns (PUMP1-OSIG -; PUMP1-OSIG PUMP1-TSIG; PUMP1-OSIG PUMP1-VSIG)
TEST-PORT-3 → Set-O9 with Patterns (PUMP2-OSIG -; PUMP2-OSIG PUMP2-TSIG; PUMP2-OSIG PUMP2-VSIG)
TEST-PORT-4 → Set-O4 with Patterns (BUZZER-OSIG -; BUZZER-OSIG BUZZER-TSIG; BUZZER-OSIG BUZZER-VSIG)
Generate test cases
Commencing from the input cells, trace out the patterns and pattern relationships until no linkage
exists, so that all tracks are identified. This can be considered as a mapping function commencing from the
input to the last output variable. Each such track is a test case by itself. The test cases generated are shown
in Table 6.
Table 6. Generated Test cases for TMCNRS Using OMT
h. Step-8: Stop ()
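The track-tracing step above can be sketched as a depth-first traversal of the pattern linkage. The `links` dictionary below is a hypothetical fragment assumed from the Step-7 relationships (TEST-PORT-1 driving the Set-O1 patterns), not the paper's full linkage:

```python
def trace_tracks(links, start):
    """Depth-first enumeration of all tracks from `start` through `links`;
    each complete track corresponds to one test case."""
    succs = links.get(start, [])
    if not succs:
        return [[start]]
    tracks = []
    for nxt in succs:
        for tail in trace_tracks(links, nxt):
            tracks.append([start] + tail)
    return tracks

links = {
    "TEST-PORT-1": ["LCD-STAT"],
    "LCD-STAT": ["-", "LCD-WRITE"],
}
for track in trace_tracks(links, "TEST-PORT-1"):
    print(" -> ".join(track))
# TEST-PORT-1 -> LCD-STAT -> -
# TEST-PORT-1 -> LCD-STAT -> LCD-WRITE
```

Each printed track is one generated test case, mirroring how Table 6 is populated.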
4. COMPARATIVE ANALYSIS
Three methods exist in the literature for generating test cases for testing embedded systems:
generation of test cases using the input domain, generation using the output domain, and generation of test
cases for semi- or pseudo-exhaustive testing using genetic algorithms. None of these methods takes into
account the interrelationships between input variables; the association between variables is limited to
adjacency. The methods are compared considering the testing requirements of embedded systems and the
techniques that must be used for undertaking such testing. Table 7 shows a comparison based on suitability
to generate test cases for testing different features of an embedded system. From the table it can be seen
that OMT is designed for testing embedded systems considering all the features that are related to them.
Table 7. Comparison of the Test case generation methods (Combinatorial Methods) based on
the suitability of the same for testing the embedded systems
S.No.  Parameter for comparison                                                                    Pseudo Exhaustive Testing   Optimal Mining Technique Procedure
1.     Test case generation based on multiple inputs.                                              X                           √
2.     Ability to generate test cases for device testing.                                          X                           √
3.     Ability to generate test cases related to performance testing (throughput, response time).  X                           √
4.     Ability to generate test cases that deal with internal signal processing.                   X                           √
5.     Ability to test external interfacing.                                                       X                           √
5. CONCLUSIONS
The Optimal Mining Technique (OMT) is implemented to derive test cases for a multi-input
domain embedded system such as TMCNRS. The methods in the literature are not well suited for testing
embedded systems, as they do not consider aspects specific to them: elements such as response time,
throughput, and the proper working of devices cannot be tested by the existing methods. The proposed
OMT considers all aspects of an embedded system that must be tested. Experimental outcomes show that
this approach can generate test cases efficiently, even when only a few observable outputs from the
embedded system are considered.
REFERENCES
[1] D. R. Kuhn, et al., “Software Fault Interactions and Implications for Software Testing,” IEEE Transactions on
Software Engineering, vol/issue: 30(6), 2004.
[2] D. R. Kuhn and V. Okun, “Pseudo-Exhaustive Testing for Software,” 30th Annual IEEE/NASA Software
Engineering Workshop (SEW'06), 2006.
[3] M. L. Prasad and J. K. R. Sastry, “A Comprehensive Survey on Combinatorial Testing,” PONTE Journal,
vol/issue: 73(2), pp. 187-261, 2017.
[4] M. L. Prasad and J. K. R. Sastry, “A Graph Based Strategy (GBS) For Generating Test Cases Meant For Testing
Embedded Systems Using Combinatorial Approaches,” Journal of Advanced Research in Dynamical and Control
Systems, vol/issue: 10(01), pp. 314-324, 2018.
[5] M. L. Prasad and J. K. R. Sastry, “Testing Embedded Systems using test cases generated through Combinatorial
Methods,” International Journal of Engineering Technology, vol/issue: 7(1), pp. 146-158, 2018.
[6] Gray, et al., “The combinatorial design approach to automatic test generation,” IEEE Software, vol/issue: 13(5), pp.
83-88, 1996.
[7] D. M. Cohen, et al., “The AETG system: an approach to testing based on combinatorial design,” IEEE
Transactions on Software Engineering, vol/issue: 23(7), pp. 437-444, 1997.
[8] D. M. Cohen, et al., “Method and system for automatically generating efficient test cases for systems having
interacting elements,” 1996.
[9] Y. W. Tung and W. S. Aldiwan, “Automating test case generation for the new generation mission software
system,” IEEE Aerospace Conference, pp. 431-437, 2000.
[10] Y. Lei and K. C. Tai, “A Test Generating Strategy for Pair-wise Testing,” IEEE Transactions on Software
Engineering, 2002.
[11] Cohen B., et al., “Combinatorial aspects of covering arrays,” Le Matematiche (Catania), vol. 58, pp. 121-167,
2004.
[12] R. Kuhn, et al., “Practical Combinatorial Testing: beyond Pair wise,” IEEE Computer Society - IT Professional,
vol/issue: 10(3), 2008.
[13] J. Shiomi and S. Maruyama, “Water transport inside a single-walled carbon nano-tube driven by a temperature
gradient,” Nanotechnology, 2009.
[14] Ochoa, et al., “Box-fusion: An Approach to Enhance Pairwise Testing,” University of Texas at El Paso, 2016.
[15] M. L. Prasad and J. K. R. Sastry, “Building Test Cases by Particle Swarm Optimization (PSO) For Multi Output
Domain Embedded Systems Using Combinatorial Techniques,” Journal of Advanced Research in Dynamical and
Control Systems, 2018.
[16] M. L. Prasad and J. K. R. Sastry, “A Neural Network Based Strategy (NNBS) For Automated Construction Of Test
Cases For Testing An Embedded System Using Combinatorial Techniques,” International Journal of Engineering
Technology, vol/issue: 7(1), pp. 74-81, 2018.
[17] M. L. Prasad and J. K. R. Sastry, “Generating Test cases for Testing WEB sites through Neural Networks and Input
Pairs,” International Journal of Applied Engineering Research, vol/issue: 9(22), 2014.
[18] M. L. Prasad and V. C. Prakash, “Generation of less number of pair wise test cases using artificial neural networks
(ANN),” International Journal of Applied Engineering Research, vol/issue: 9(22), 2014.
[19] M. L. Prasad and J. K. R. Sastry, “Generation of Test Cases Using Combinatorial Methods Based Multi-Output
Domain of an Embedded System through the Process of Optimal Selection,” International Journal of Pure and
Applied Mathematics, vol/issue: 118(20), pp. 181-189, 2018.
[20] A. R. A. Alsewari, et al., “Survey on input output relation based combination test data generation strategies,” ARPN
Journal of Engineering and Applied Sciences, vol/issue: 10(18), 2015.
[21] S. Vilkomir, “Combinatorial Testing of Software with Binary Inputs: A State-of-the-Art Review,” IEEE
International Conference on Software Quality, Reliability and Security Companion (QRS-C), Vienna, pp. 55-60,
2016.
[22] D. Gupta, et al., “Sequence Generation of Test Case Using Pairwise Approach Methodology,” Advances in
Computer and Computational Sciences, pp. 79-85, 2016.
[23] J. T. Jimenez, et al., “A two-stage algorithm for combinatorial testing,” Optimization Letters, vol/issue: 11(3),
pp. 457-469, 2017.
[24] S. Route, “Test Optimization Using Combinatorial Test Design: Real-World Experience in Deployment of
Combinatorial Testing at Scale,” IEEE International Conference on Software Testing, Verification and Validation
Workshops (ICSTW), Tokyo, pp. 278-279, 2017.
[25] P. S., M. B., M. S. Narayan and K. Rangarajan, “Building Combinatorial Test Input Model from Use Case
Artefacts,” IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW),
pp. 220-228, 2017.
[26] Y. Yao, et al., “Design and Implementation of Combinatorial Testing Tools,” 2017 IEEE International Conference
on Software Quality, Reliability and Security Companion (QRS-C), Prague, pp. 320-325, 2017.