In this paper, a new solution is proposed for testing simple two-stage electronic circuits. It minimizes the number of tests required to determine whether the circuit is fault-free. The main idea behind the present work is to identify the maximum number of indistinguishable faults in the given circuit and to minimize the number of test cases based on the faults that have been detected. A heuristic approach is used for the test-minimization step, which identifies the essential tests among all test cases. The results show that test minimization varies from 50% to 99%, with the lowest figure corresponding to a circuit with four gates. For Boolean expressions with the same number of symbols, test minimization is lower for circuits whose gates have fewer input leads than for circuits whose gates have more input leads. The 99% reduction is achieved because a large number of tests detect the same faults. The new approach is implemented for simple circuits, and the results show potential for both smaller test sets and lower CPU times.
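The essential-test idea described above can be sketched as a greedy cover over a fault table. This is an illustrative reconstruction, not the paper's exact heuristic; the fault table below is invented.

```python
# Greedy minimization over a fault table: rows are tests, entries are
# the faults each test detects (all names here are made up).
fault_table = {
    "t1": {"f1", "f2"},
    "t2": {"f2", "f3"},
    "t3": {"f1", "f3", "f4"},
    "t4": {"f4"},
}

def minimize_tests(table):
    all_faults = set().union(*table.values())
    chosen = set()
    # Essential tests: a fault detected by exactly one test makes that
    # test mandatory.
    for fault in all_faults:
        detectors = [t for t, fs in table.items() if fault in fs]
        if len(detectors) == 1:
            chosen.add(detectors[0])
    covered = set().union(*(table[t] for t in chosen)) if chosen else set()
    # Greedy cover for the remaining faults: repeatedly pick the test
    # detecting the most still-uncovered faults.
    while covered != all_faults:
        best = max(table, key=lambda t: len(table[t] - covered))
        chosen.add(best)
        covered |= table[best]
    return chosen
```

Here `minimize_tests(fault_table)` keeps only two of the four tests, since the others find no fault that the chosen pair misses.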
IMPLEMENTATION OF COMPACTION ALGORITHM FOR ATPG GENERATED PARTIALLY SPECIFIED...VLSICS Design
In this paper, an ATPG is implemented in C++. The ATPG is based on the fault-equivalence concept, in which the number of faults is reduced before the compaction step. It uses line justification and error propagation to find test vectors for the reduced fault set, with the aid of controllability and observability measures. The single stuck-at fault model is considered. Programs are developed for the fault-equivalence method, controllability/observability analysis, automatic test pattern generation, and test data compaction using the object-oriented language C++. The ISCAS 85 C17 circuit was used for analysis, along with other circuits, in the standard ISCAS (International Symposium on Circuits and Systems) netlist format. Flow charts and results for the ISCAS 85 C17 circuit and the other netlists are given in this paper. The test vectors generated by the ATPG are further compacted to reduce the test data volume; the compaction algorithm is described along with its results.
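Fault equivalence, the concept this ATPG relies on, can be illustrated by exhaustively simulating every single stuck-at fault on one AND gate and grouping faults with identical output behaviour. This is a toy sketch, not the paper's implementation.

```python
from itertools import product

# Inject a single stuck-at fault on line "a", "b", or "z" (output)
# of a 2-input AND gate and return the resulting output.
def and_gate(a, b, fault=None):
    if fault == ("a", 0): a = 0
    if fault == ("a", 1): a = 1
    if fault == ("b", 0): b = 0
    if fault == ("b", 1): b = 1
    z = a & b
    if fault == ("z", 0): z = 0
    if fault == ("z", 1): z = 1
    return z

faults = [(line, v) for line in ("a", "b", "z") for v in (0, 1)]

def collapse(faults):
    # Faults whose output columns agree over all input patterns are
    # indistinguishable, so they fall into one equivalence class.
    classes = {}
    for f in faults:
        sig = tuple(and_gate(a, b, f) for a, b in product((0, 1), repeat=2))
        classes.setdefault(sig, []).append(f)
    return list(classes.values())
```

The six stuck-at faults collapse to four classes: all three stuck-at-0 faults force the output to 0 on every input pattern, so one test suffices for the whole class.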
Fault detection based on novel fuzzy modelling csijjournal
Fault detection based on fuzzy modelling is investigated. A Takagi-Sugeno (TS) fuzzy model can be derived by structure and parameter identification when only the input-output data of the identified system are available. In the structure-identification step, the Gustafson-Kessel clustering algorithm (GKCA) is used to detect clusters of different geometrical shapes in the data set and to obtain the point-wise membership function of the premise. In the parameter-identification step, an unscented Kalman filter (UKF) is used to estimate the parameters of the premise membership function. In the consequent part, a Kalman filter (KF) is applied as a linear regression to estimate the parameters of the TS model from the input-output data set. The resulting fuzzy model is then used to detect faults. Simulations demonstrate the effectiveness of the theoretical results.
Single Hard Fault Detection in Linear Analog Circuits Based On Simulation bef...IJERD Editor
This document presents a method for detecting single hard faults in linear analog circuits using a simulation before testing approach. The method generates test vectors for nominal and tolerance-bound component values to form a fault dictionary. The circuit under test is then simulated with injected faults. Fault variables are derived by comparing nominal and faulty responses. The component with the lowest relative standard deviation of its fault variable is identified as faulty. The method was tested on benchmark circuits like a linear voltage divider and correctly identified injected short and open faults in components. The technique addresses tolerance challenges in analog circuit testing.
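The fault-dictionary idea can be sketched for the voltage-divider example. The component values, the extreme-resistance model of hard faults, and the nearest-entry lookup below are illustrative assumptions, not the paper's exact procedure.

```python
# Fault-dictionary sketch for a resistive divider Vout = Vin*R2/(R1+R2).
VIN, R1, R2 = 10.0, 1e3, 1e3

def divider(r1, r2):
    return VIN * r2 / (r1 + r2)

# Hard faults modelled as extreme resistances (an assumption here).
SHORT, OPEN = 1e-3, 1e9

dictionary = {
    "nominal":  divider(R1, R2),
    "R1_short": divider(SHORT, R2),
    "R1_open":  divider(OPEN, R2),
    "R2_short": divider(R1, SHORT),
    "R2_open":  divider(R1, OPEN),
}

def diagnose(measured):
    # Nearest-entry lookup: pick the dictionary entry whose simulated
    # response is closest to the measured output voltage.
    return min(dictionary, key=lambda k: abs(dictionary[k] - measured))
```

Note that in this tiny example some faults are inherently ambiguous (an R1 short and an R2 open both drive Vout toward Vin), which is exactly the kind of situation the paper's tolerance-aware fault variables are meant to handle.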
A New Hybrid Robust Fault Detection of Switching Systems by Combination of Ob...IJECEIAES
In this paper, the problem of robust fault detection (FD) for continuous-time switched systems is tackled using a hybrid approach combining a switching observer with the bond graph (BG) method. The main criteria of an FD system, namely fault sensitivity and the disturbance attenuation level in the presence of parametric uncertainties, are considered in the proposed FD system. In the first stage, an optimal switching observer based on the state-space representation of the BG model is designed, in which fault sensitivity and a prescribed disturbance attenuation level are satisfied simultaneously using the H∞ index. In the second stage, the global analytical redundancy relations (GARRs) of the switching system are derived from the output estimation error of the observer; these are called error-based global analytical redundancy relations (EGARRs). The parametric uncertainties are included in the EGARRs, which define adaptive thresholds on the residuals; a constant term accounting for the effect of disturbances is also included in the thresholds. The result is a two-stage FD system in which different criteria may be addressed at each stage. The efficiency of the proposed method is demonstrated on a two-tank system.
PERFORMANCE ASSESSMENT OF ANFIS APPLIED TO FAULT DIAGNOSIS OF POWER TRANSFORMER ecij
Continuous monitoring of a power transformer is essential during its operation, and incipient faults inside the tank and winding insulation need careful attention. Traditional ratio methods and the Duval triangle can be employed to diagnose incipient faults, but correct diagnosis is often impossible because of borderline problems and the existence of multiple faults. Artificial intelligence (AI) techniques can handle the non-linearity and complexity of the input data. In the proposed work, an adaptive neuro-fuzzy inference system (ANFIS) is used to classify nine incipient fault conditions, including the healthy condition, of a power transformer from a sufficient set of DGA transformer-oil samples. The diagnosis performance of the methods and the feasibility of ANFIS for the problem are compared; the classification error on the oil samples and the network structure are the main considerations of the present study.
Neural Network-Based Actuator Fault Diagnosis for a Non-Linear Multi-Tank SystemISA Interchange
The paper is devoted to the problem of robust actuator fault diagnosis for dynamic non-linear systems. In the proposed method, it is assumed that the diagnosed system can be modelled by a recurrent neural network, which can be transformed into a linear parameter-varying form. Such a system description allows a design scheme for a robust unknown-input observer to be developed within the H∞ framework for a class of non-linear systems. The approach is designed so that a prescribed disturbance attenuation level is achieved with respect to the actuator fault estimation error while guaranteeing convergence of the observer. The robust unknown-input observer enables actuator fault estimation, which makes the developed approach applicable to fault-tolerant control tasks.
BOOLEAN SPECIFICATION BASED TESTING TECHNIQUES: A SURVEYcscpconf
Boolean expressions are a major focus of specifications and are very prone to the introduction of faults. This survey presents the various Boolean specification-based testing techniques, covering more than 30 papers. Techniques such as the cause-effect graph, Foster's strategy, the meaningful impact strategy, the Branch Operator Strategy (BOR), and Modified Condition/Decision Coverage (MC/DC) are compared on the basis of their fault detection effectiveness and the size of the test suites they produce. This collection represents most of the existing work on Boolean specification-based testing. The survey describes the basic algorithms used by these strategies and includes the operator and operand fault categories used to evaluate the performance of the above-mentioned techniques. Finally, it contains short summaries of all the papers that use Boolean specification-based testing techniques. These techniques have been empirically evaluated by various researchers on a simplified safety-related real-time control system.
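The operator-fault category mentioned in the survey can be made concrete: a test detects an operator fault exactly when the original expression and its mutated version disagree on that test. The expressions below are illustrative, not taken from the surveyed papers.

```python
from itertools import product

# An operator reference fault replaces one "and" with "or".
original = lambda a, b, c: (a and b) or c
mutant   = lambda a, b, c: (a or b) or c      # "and" -> "or" fault

def detecting_tests(f, g):
    # A test detects the fault iff the two expressions disagree on it.
    return [t for t in product((False, True), repeat=3)
            if f(*t) != g(*t)]
```

For this pair only the tests with c false and exactly one of a, b true distinguish the expressions, so a fault-detection-effective test suite must include at least one of them.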
Surrogate-based design is an effective approach for modeling computationally expensive system behavior. In such applications, it is often challenging to characterize the expected accuracy of the surrogate. In addition to global and local error measures, regional error measures can be used to understand and interpret the surrogate accuracy in the regions of interest. This paper develops the Regional Error Estimation of Surrogate (REES) method to quantify the level of error in any given subspace (or region) of the entire domain, when all the available training points have been invested to build the surrogate. In this approach, the accuracy of the surrogate in each subspace is estimated by modeling the variations of the mean and the maximum error in that subspace with an increasing number of training points, in an iterative process; a regression model is used for this purpose. At each iteration, an intermediate surrogate is constructed using a subset of the entire training data and tested over the remaining points. The errors evaluated at the intermediate test points at each iteration are used to train the regression model that represents the error variation with the number of sample points. The effectiveness of the proposed method is illustrated using standard test problems: the predicted regional errors of the surrogate constructed using all the training points are compared with the regional errors estimated over a large set of test points.
Farsi character recognition using new hybrid feature extraction methodsijcseit
Recognition of visual words and writing has long been one of the most essential and attractive tasks in image processing; it has been studied for decades and has applications in security, traffic control, psychology, medicine, engineering, and other fields. Previous techniques for recognizing visual writing are largely similar in their analysis and, depending on the needs of the application, have used different feature-extraction methods. Changes in writing style and font, and rotations of words, are among the challenges of character recognition. In this study, a Persian character recognition system uses independent orthogonal moments, namely Zernike moments and Fourier-Mellin moments, as the feature-extraction technique. The magnitudes of Zernike moments, being rotation-invariant characteristics, have been used for classification in the past, while their real and imaginary components have been neglected individually because, together with the phase coefficients, each of them changes under rotation. In this study, Zernike and Fourier-Mellin moments are investigated for detecting Persian characters in noisy and noise-free images. An improvement to the k-Nearest Neighbour (k-NN) classifier is also proposed for character recognition. Comparing the proposed method with current salient methods such as Back Propagation (BP) and Radial Basis Function (RBF) neural networks in terms of feature extraction in words shows that, on the Hoda database, the proposed method reaches an acceptable recognition rate (96.5%).
IRJET- Domestic Water Conservation by IoT (Smart Home)IRJET Journal
This document discusses singular system identification for a constrained rigid robot model. It begins by introducing constrained robot models and noting they can be considered singular systems. It then discusses the importance of singular system equivalency in identification, as an inappropriate equivalency can cause large errors. The document proposes using strong equivalency to transform the constrained robot model before identification. It applies recursive least squares identification to the strongly equivalent system. Simulation results show this approach improves identification error convergence and output tracking compared to previous techniques for constrained robot models.
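The recursive least squares step used for identification can be sketched in its generic form. This is textbook RLS on a made-up linear regression, not the strongly equivalent robot formulation of the paper.

```python
import numpy as np

# One RLS update for y = phi . theta + noise: theta is the current
# estimate, P the inverse-correlation matrix, lam a forgetting factor.
def rls_update(theta, P, phi, y, lam=1.0):
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                  # covariance update
    return theta, P

# Illustrative run on synthetic data with known parameters.
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
theta, P = np.zeros(2), np.eye(2) * 1e3
for _ in range(200):
    phi = rng.standard_normal(2)
    y = phi @ true_theta + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
```

The estimate converges to the true parameters; the identification-error convergence the paper reports is this same mechanism applied to the transformed robot model.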
An SPRT Procedure for an Ungrouped Data using MMLE ApproachIOSR Journals
This document describes a sequential probability ratio test (SPRT) procedure for analyzing ungrouped software failure data using a modified maximum likelihood estimation (MMLE) approach. The SPRT procedure can help quickly detect unreliable software by making decisions with fewer observed failures than traditional hypothesis testing methods. Parameters are estimated using MMLE, which approximates functions in the maximum likelihood equation with linear functions to simplify calculations compared to other estimation methods. The document provides details on how to apply the SPRT procedure and MMLE parameter estimation to a software reliability growth model to analyze software failure data sequentially and detect unreliable software components earlier.
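Wald's SPRT decision rule can be sketched for exponentially distributed inter-failure times. This is a generic illustration with assumed rates and error levels; the paper's SRGM-based likelihoods and MMLE estimates are not reproduced here.

```python
import math

# Decide between failure rates lam0 (reliable) and lam1 (unreliable)
# from a stream of inter-failure times, with error levels alpha, beta.
def sprt(times, lam0=0.1, lam1=0.5, alpha=0.05, beta=0.05):
    a = math.log(beta / (1 - alpha))       # accept-H0 boundary
    b = math.log((1 - beta) / alpha)       # accept-H1 boundary
    llr = 0.0
    for i, t in enumerate(times, start=1):
        # Log-likelihood ratio increment for one exponential observation.
        llr += math.log(lam1 / lam0) - (lam1 - lam0) * t
        if llr <= a:
            return "reliable", i
        if llr >= b:
            return "unreliable", i
    return "continue", len(times)
```

Short inter-failure times push the log-likelihood ratio toward the upper boundary and trigger an early "unreliable" verdict, which is the sequential early-detection property the document emphasizes.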
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document describes a study on enhancing the serial estimation of discrete choice models through the use of standardization, warm starting, and early stopping techniques. It first reviews literature on accelerating discrete choice model estimation and quasi-Newton optimization methods. It then details the three techniques used: standardizing variables, initializing subsequent models with previous solutions, and stopping optimization early based on log-likelihood trends. Two sequences of 100 discrete choice models are tested to evaluate the effectiveness of the techniques. Results show that warm starting parameters and the Hessian matrix from previous solutions significantly reduces estimation time compared to estimating models separately.
This document proposes an unsupervised feature selection method that combines weighted principal components analysis with a thresholding algorithm. It begins by introducing weighted principal components, which represent the contribution of each original feature to the principal components. It then proposes a moving range-based thresholding algorithm to identify significant features based on their weighted principal component loadings. The method was evaluated on simulated and real datasets, demonstrating high sensitivity and specificity in identifying significant features, while requiring less computation time than existing methods.
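The loading-based scoring idea can be sketched as follows. The simple mean-cutoff threshold below stands in for the paper's moving-range thresholding algorithm, and the data are synthetic.

```python
import numpy as np

# Score each feature by the sum of its absolute principal-component
# loadings, weighted by the explained-variance fraction of each PC.
def wpc_scores(X):
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)       # ascending eigenvalues
    w = eigval / eigval.sum()                  # variance weights
    return np.abs(eigvec) @ w                  # one score per feature

# Synthetic data: two informative features driven by one signal,
# two pure-noise features.
rng = np.random.default_rng(1)
signal = rng.standard_normal((200, 1))
X = np.hstack([signal,
               2 * signal + 0.01 * rng.standard_normal((200, 1)),
               0.01 * rng.standard_normal((200, 2))])
scores = wpc_scores(X)
selected = np.where(scores > scores.mean())[0]  # naive cutoff
```

The informative features load heavily on the dominant component and score far above the noise features, so the cutoff keeps exactly those two.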
APPLYING DYNAMIC MODEL FOR MULTIPLE MANOEUVRING TARGET TRACKING USING PARTICL...IJITCA Journal
In this paper, we apply a dynamic model for manoeuvring targets in the SIR particle filter algorithm to improve the tracking accuracy of multiple manoeuvring targets. In the proposed approach, a colour distribution model is used to detect changes in the target model, and the approach controls the deformation of the target model: if the deformation exceeds a predetermined threshold, the model is updated. The Global Nearest Neighbour (GNN) algorithm is used for data association. We name the proposed method the Deformation Detection Particle Filter (DDPF). DDPF is compared with the basic SIR-PF algorithm on real air-show videos. The comparison shows that the basic SIR-PF algorithm cannot track manoeuvring targets when rotation or scaling occurs in the target model, whereas DDPF updates the target model in those cases and is therefore able to track manoeuvring targets more efficiently and accurately.
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...ijistjournal
System identification from experimental data plays a vital role in model-based controller design, since deriving a process model from first principles is often difficult because of its complexity. The first stage in the development of any control and monitoring system is the identification and modelling of the system. Each model is developed within the context of a specific control problem, so a general system-identification framework is warranted: it should be able to adapt and emphasize different properties based on the control objective and the nature of the system's behaviour. System identification has therefore been a valuable tool for identifying the model of a system from input and output data for controller design. The present work is concerned with the identification of transfer-function models using statistical model identification, the process reaction curve method, the ARX model, and a genetic algorithm, and with modelling using neural networks and fuzzy logic, for interacting and non-interacting tank processes. The identification techniques and models used are subject to parameter changes and disturbances. The proposed methods are used to identify the mathematical model and the intelligent model of interacting and non-interacting processes from real-time experimental data.
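ARX identification, one of the techniques listed above, reduces to ordinary least squares on lagged input-output data. The first-order model and the simulated data below are illustrative stand-ins for the tank-process measurements.

```python
import numpy as np

# Simulate a first-order ARX process y[k] = a*y[k-1] + b*u[k-1] + noise,
# then recover (a, b) by least squares from the input-output record.
rng = np.random.default_rng(2)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(300)
y = np.zeros(300)
for k in range(1, 300):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

Phi = np.column_stack([y[:-1], u[:-1]])        # regressor matrix
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
```

With real plant data the same regression is formed from logged inputs and outputs; higher-order ARX models simply add more lagged columns to the regressor matrix.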
Regression test selection model: a comparison between ReTSE and pythiaTELKOMNIKA JOURNAL
As software systems change and evolve over time, regression tests have to be run to validate these changes. Regression testing is an expensive but essential activity in software maintenance. The purpose of this paper is to compare a new regression test selection model, called ReTSE, with Pythia. The ReTSE model uses decomposition slicing to identify the relevant regression tests; decomposition slicing provides a technique capable of identifying the unchanged parts of a system. Pythia is a regression test selection technique based on textual differencing. Both techniques are compared using a Power program taken from Vokolos and Frankl's paper. The analysis of this comparison shows promising results in reducing the number of tests to be run after changes are introduced.
Sensitivity analysis in a lidar camera calibrationcsandit
In this paper, variability analysis was performed on the model calibration methodology between a multi-camera system and a LiDAR (Light Detection and Ranging) laser sensor. Both sensors are used to digitize urban environments. A practical and complete methodology is presented to predict the error propagation inside the LiDAR-camera calibration. We perform a sensitivity analysis in a local and a global way. The local approach analyses the output variance with respect to the input; only one parameter is varied at a time. In the global sensitivity approach, all parameters are varied simultaneously and sensitivity indexes are calculated over the total variation range of the input parameters. We quantify the uncertainty behaviour in the intrinsic camera parameters and the relationship between the noisy data of both sensors and their calibration. We calculated the sensitivity indexes by two techniques, Sobol and FAST (Fourier amplitude sensitivity test). Statistics of the sensitivity analysis are displayed for each sensor, the sensitivity ratio in laser-camera calibration data
This document discusses feature selection techniques for classification problems. It begins by outlining class separability measures like divergence, Bhattacharyya distance, and scatter matrices. It then discusses feature subset selection approaches, including scalar feature selection which treats features individually, and feature vector selection which considers feature sets and correlations. Examples are provided to demonstrate calculating class separability measures for different feature combinations on sample datasets. Exhaustive search and suboptimal techniques like forward, backward, and floating selection are discussed for choosing optimal feature subsets. The goal of feature selection is to select a subset of features that maximizes class separation.
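The scatter-matrix separability measure discussed above can be computed directly. The sketch below evaluates the standard J = trace(Sw⁻¹Sb) criterion on synthetic two-class data; larger J means better-separated classes.

```python
import numpy as np

# Within-class (Sw) and between-class (Sb) scatter matrices, combined
# into the scalar separability measure J = trace(Sw^-1 Sb).
def separability(X, y):
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (d @ d.T)
    return np.trace(np.linalg.inv(Sw) @ Sb)

# Two well-separated Gaussian classes in two features.
rng = np.random.default_rng(3)
X0 = rng.standard_normal((50, 2))
X1 = rng.standard_normal((50, 2)) + np.array([4.0, 0.0])
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)
```

Feature subset selection then amounts to computing J for each candidate subset (exhaustively or with forward/backward/floating search) and keeping the subset with the largest value.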
This document presents a new 3-level approach to simultaneously select the best surrogate model type, kernel function, and hyper-parameters for approximation models. The approach uses Regional Error Estimation of Surrogates (REES) to evaluate error and select models. It compares a cascaded technique that performs sequential optimization versus a one-step technique. Numerical examples on benchmark problems show the one-step technique reduces maximum and median errors by at least 60% with lower computational cost compared to the cascaded approach. Future work involves applying the one-step method to more complex problems and developing an online platform for collaborative surrogate model selection.
BEARINGS PROGNOSTIC USING MIXTURE OF GAUSSIANS HIDDEN MARKOV MODEL AND SUPPOR...IJNSA Journal
Prognostics of the future health state relies on estimating the Remaining Useful Life (RUL) of physical systems or components from their current health state. RUL can be estimated by three main approaches: model-based, experience-based, and data-driven. This paper deals with a data-driven prognostics method based on transforming the data provided by the sensors into models able to characterize the degradation behaviour of bearings. For this purpose, we used the Support Vector Machine (SVM) as the modelling tool. Experiments on the recently published database taken from the PRONOSTIA platform clearly show the superiority of the proposed approach compared to well-established methods in the literature such as Mixture-of-Gaussians Hidden Markov Models (MoG-HMMs).
Fault detection and test minimization methodspraveenkaundal
This document summarizes and compares different fault detection and test minimization methods for combinational circuits. It discusses traditional methods like fault table, path sensitizing, and Boolean difference approaches. It also covers more recent heuristic and genetic algorithm methods. The conclusion states that iterative methods provide optimal solutions for circuits of varying complexity, while traditional methods require constructing large fault tables. The document aims to survey fault detection and test minimization techniques for combinational logic.
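The Boolean difference approach mentioned above has a compact definition, dF/dx = F|x=0 ⊕ F|x=1: a test detects x stuck-at-v exactly when it drives x to the opposite of v and makes the Boolean difference equal to 1. The expression below is an illustrative example, not one from the surveyed paper.

```python
from itertools import product

# Truth vector of dF/dx_i over all assignments to the other variables.
def boolean_difference(f, i, n):
    rest = list(product((0, 1), repeat=n - 1))
    def with_x(bits, v):
        bits = list(bits)
        bits.insert(i, v)          # re-insert variable i with value v
        return f(*bits)
    return [with_x(r, 0) ^ with_x(r, 1) for r in rest]

# Example: F = (a AND b) OR c, so dF/da = b AND NOT c.
F = lambda a, b, c: (a & b) | c
diff = boolean_difference(F, 0, 3)   # over (b, c) in order 00, 01, 10, 11
```

The resulting truth vector [0, 0, 1, 0] matches b AND NOT c: only assignments with b = 1 and c = 0 sensitize a path from a to the output.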
This document summarizes a study of built-in self-test (BIST) approaches for detecting single stuck-at faults in combinational logic circuits. Pseudorandom test patterns generated by a linear feedback shift register (LFSR) were applied in parallel and serially to benchmark circuits. Applying patterns in parallel via test-per-clock achieved high fault coverage but required a large LFSR for circuits with many inputs. Reseeding the LFSR improved coverage when an initial seed was ineffective. Seed selection and minimum LFSR size for different application methods were evaluated to optimize BIST fault detection.
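The LFSR-based pattern generation this study evaluates can be sketched in a few lines. Below is a minimal, illustrative Fibonacci LFSR in Python; the 4-bit width, seed, and tap positions (for the maximal-length polynomial x^4 + x^3 + 1) are assumptions for the example, not values taken from the study:

```python
def lfsr_patterns(seed, taps, width, count):
    """Generate pseudorandom test patterns from a Fibonacci LFSR.

    seed:  nonzero initial register contents (int)
    taps:  bit positions XORed together to form the feedback bit
    width: register length in bits (number of circuit inputs driven)
    count: number of patterns to produce
    """
    state = seed
    patterns = []
    for _ in range(count):
        patterns.append(state)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)
    return patterns

# A 4-bit maximal-length LFSR cycles through all 15 nonzero states
# before repeating, so every nonzero input combination is eventually
# applied to the circuit under test.
pats = lfsr_patterns(seed=0b1000, taps=(3, 2), width=4, count=15)
```

Reseeding, as the study discusses, simply means restarting the same register from a different nonzero `seed` when the first sequence leaves faults undetected.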
A Simulation Experiment on a Built-In Self Test Equipped with Pseudorandom Te...VLSICS Design
This paper investigates the impact of changes in the characteristic polynomials and initial loadings on the behaviour of aliasing errors in a parallel signature analyzer (Multi-Input Shift Register, MISR) used in an LFSR-based digital circuit testing technique. The investigation is carried out through an extensive simulation study of the effectiveness of the technique. The results show that when identical characteristic polynomials of order n are used in both the pseudo-random test-pattern generator and the parallel MISR signature analyzer, the probability of aliasing errors remains unchanged regardless of the initial loading of the pseudo-random test-pattern generator.
Wavelet Based on the Finding of Hard and Soft Faults in Analog and Digital Si...ijcisjournal
In this paper, methods for detecting both hard and soft faults in analog and digital signal circuits are presented. They are based on the wavelet transform (WT). The threshold that governs fault detectability for the reference circuits is set by statistically processing data obtained from a set of fault-free circuits. Two wavelet-based algorithms, both built on wavelet energy calculation, are introduced: one uses a discrimination factor based on Euclidean distances, the other on Mahalanobis distances. Simulation results from applying the proposed test methods to known analog and digital signal circuit benchmarks are given. The results show the effectiveness of the two proposed test metrics against three other test methods, namely a test method based on the RMS value of the measured signal and a test method utilizing the harmonic magnitude component of the measured signal waveform
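As a rough illustration of the second discrimination factor, the sketch below computes a Mahalanobis distance for a two-component feature vector (e.g. wavelet-energy features) against a fault-free reference distribution. The numbers are invented; with an identity covariance the distance reduces to the ordinary Euclidean case:

```python
def mahalanobis2(x, mean, cov):
    """Mahalanobis distance of a 2-D feature vector from a reference
    distribution with the given mean and 2x2 covariance matrix."""
    dx = (x[0] - mean[0], x[1] - mean[1])
    (a, b), (c, d) = cov
    det = a * d - b * c                     # invert the 2x2 covariance
    inv = ((d / det, -b / det), (-c / det, a / det))
    # quadratic form dx^T * inv(cov) * dx
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return q ** 0.5

# Identity covariance reduces to Euclidean distance: here 5.0
d = mahalanobis2((3.0, 4.0), (0.0, 0.0), ((1.0, 0.0), (0.0, 1.0)))
```

A circuit would be flagged faulty when this distance from the fault-free reference exceeds the statistically derived threshold.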
Farsi character recognition using new hybrid feature extraction methodsijcseit
Identification of handwritten words and scripts has long been one of the most essential and attractive problems in image processing; it has been studied for decades and has applications in security, traffic control, psychology, medicine, engineering, and other fields. Previous techniques for script identification are very similar in most of their analysis stages and, depending on the needs of the application, differ mainly in the feature extraction they present. Changes in writing style and font, rotations of words, and related issues are the main challenges of character identification. In this study, a Persian character identification system uses independent orthogonal moments, namely Zernike moments and Fourier-Mellin moments, as the feature extraction technique. The magnitudes of Zernike moments, being rotation-invariant, have been used for classification in the past, while their real and imaginary components have been neglected individually, since together with the phase coefficients they change under rotation. In this study, Zernike and Fourier-Mellin moments are investigated for detecting Persian characters in noisy and noise-free images. Also, an improvement on the k-Nearest Neighbor (k-NN) classifier is proposed for character recognition. Comparing the proposed method with salient current methods such as Back Propagation (BP) and Radial Basis Function (RBF) neural networks in terms of feature extraction, it is shown that on the Hoda database the proposed method reaches an acceptable detection rate (96.5%).
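The baseline classifier the study improves upon can be sketched as a plain majority-vote k-NN (the paper's improved variant is not reproduced here). The toy feature vectors and labels below are invented stand-ins for moment descriptors of two characters:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Plain k-NN: classify `query` by majority vote among the k
    training samples with smallest squared Euclidean distance.

    train: list of (feature_vector, label) pairs
    query: feature vector to classify
    """
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(vec, query)), label)
        for vec, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Two toy clusters standing in for moment features of two characters:
train = [((0.0, 0.1), 'alef'), ((0.1, 0.0), 'alef'),
         ((1.0, 1.1), 'be'),   ((1.1, 1.0), 'be')]
label = knn_predict(train, (0.05, 0.05), k=3)
```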
IRJET- Domestic Water Conservation by IoT (Smart Home)IRJET Journal
This document discusses singular system identification for a constrained rigid robot model. It begins by introducing constrained robot models and noting they can be considered singular systems. It then discusses the importance of singular system equivalency in identification, as an inappropriate equivalency can cause large errors. The document proposes using strong equivalency to transform the constrained robot model before identification. It applies recursive least squares identification to the strongly equivalent system. Simulation results show this approach improves identification error convergence and output tracking compared to previous techniques for constrained robot models.
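The recursive least squares step mentioned above can be illustrated in scalar form. The paper applies RLS to the transformed strongly equivalent system; this sketch only shows the generic RLS update on made-up, noise-free data:

```python
def rls(us, ys, lam=1.0):
    """Scalar recursive least squares: estimate theta in y = theta*u.

    us, ys: input/output measurement sequences
    lam:    forgetting factor (1.0 = ordinary least squares)
    """
    theta, p = 0.0, 1e6          # initial estimate and (large) covariance
    for u, y in zip(us, ys):
        k = p * u / (lam + u * p * u)   # gain
        theta += k * (y - theta * u)    # correct with the prediction error
        p = (p - k * u * p) / lam       # covariance update
    return theta

# Noise-free data with true theta = 2.5: the estimate converges to 2.5
us = [1.0, 2.0, 3.0, 4.0]
theta = rls(us, [2.5 * u for u in us])
```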
An SPRT Procedure for an Ungrouped Data using MMLE ApproachIOSR Journals
This document describes a sequential probability ratio test (SPRT) procedure for analyzing ungrouped software failure data using a modified maximum likelihood estimation (MMLE) approach. The SPRT procedure can help quickly detect unreliable software by making decisions with fewer observed failures than traditional hypothesis testing methods. Parameters are estimated using MMLE, which approximates functions in the maximum likelihood equation with linear functions to simplify calculations compared to other estimation methods. The document provides details on how to apply the SPRT procedure and MMLE parameter estimation to a software reliability growth model to analyze software failure data sequentially and detect unreliable software components earlier.
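The sequential decision logic behind an SPRT can be sketched as follows. This is the classical Wald test on exponential inter-failure times, not the paper's MMLE-based procedure; the failure rates and error risks are illustrative assumptions:

```python
import math

def sprt(times, lam0, lam1, alpha=0.05, beta=0.05):
    """Wald SPRT on exponential inter-failure times.

    H0: failure rate lam0 (reliable)  vs  H1: rate lam1 (unreliable).
    Returns 'accept H0', 'accept H1', or 'continue' if undecided.
    """
    upper = math.log((1 - beta) / alpha)   # crossing -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing -> accept H0
    llr = 0.0
    for t in times:
        # log-likelihood ratio contribution of one exponential observation
        llr += math.log(lam1 / lam0) - (lam1 - lam0) * t
        if llr >= upper:
            return 'accept H1'
        if llr <= lower:
            return 'accept H0'
    return 'continue'

# Short inter-failure times point to the higher rate lam1 (unreliable):
verdict = sprt([0.05] * 20, lam0=0.5, lam1=5.0)
```

The early-stopping behaviour is the point: the verdict is usually reached after only a few observed failures, rather than after a fixed sample size.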
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document describes a study on enhancing the serial estimation of discrete choice models through the use of standardization, warm starting, and early stopping techniques. It first reviews literature on accelerating discrete choice model estimation and quasi-Newton optimization methods. It then details the three techniques used: standardizing variables, initializing subsequent models with previous solutions, and stopping optimization early based on log-likelihood trends. Two sequences of 100 discrete choice models are tested to evaluate the effectiveness of the techniques. Results show that warm starting parameters and the Hessian matrix from previous solutions significantly reduces estimation time compared to estimating models separately.
This document proposes an unsupervised feature selection method that combines weighted principal components analysis with a thresholding algorithm. It begins by introducing weighted principal components, which represent the contribution of each original feature to the principal components. It then proposes a moving range-based thresholding algorithm to identify significant features based on their weighted principal component loadings. The method was evaluated on simulated and real datasets, demonstrating high sensitivity and specificity in identifying significant features, while requiring less computation time than existing methods.
APPLYING DYNAMIC MODEL FOR MULTIPLE MANOEUVRING TARGET TRACKING USING PARTICL...IJITCA Journal
In this paper, we applied a dynamic model for manoeuvring targets in the SIR particle filter algorithm to improve the tracking accuracy of multiple manoeuvring targets. In our proposed approach, a color distribution model is used to detect changes in the target's model, and the approach controls the deformation of the target's model: if the deformation exceeds a predetermined threshold, the model is updated. The Global Nearest Neighbor (GNN) algorithm is used for data association. We name the proposed method the Deformation Detection Particle Filter (DDPF). DDPF is compared with the basic SIR-PF algorithm on real airshow videos. The comparison results show that the basic SIR-PF algorithm cannot track the manoeuvring targets when rotation or scaling occurs in the target's model, whereas DDPF updates the target's model in those cases. Thus, the proposed approach tracks manoeuvring targets more efficiently and accurately.
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...ijistjournal
System identification from experimental data plays a vital role in model-based controller design. Deriving a process model from first principles is often difficult due to its complexity. The first stage in the development of any control and monitoring system is the identification and modeling of the system. Each model is developed within the context of a specific control problem; thus, a general system identification framework is warranted. The proposed framework should be able to adapt and emphasize different properties based on the control objective and the nature of the behaviour of the system. System identification has therefore been a valuable tool for identifying the model of a system from input and output data for controller design. The present work is concerned with the identification of transfer function models using statistical model identification, the process reaction curve method, the ARX model, and a genetic algorithm, and with modeling using neural networks and fuzzy logic, for interacting and non-interacting tank processes. The identification and modeling techniques used are sensitive to parameter changes and disturbances. The proposed methods are used for identifying the mathematical model and an intelligent model of interacting and non-interacting processes from real-time experimental data.
Regression test selection model: a comparison between ReTSE and pythiaTELKOMNIKA JOURNAL
As software systems change and evolve over time, regression tests have to be run to validate these changes. Regression testing is an expensive but essential activity in software maintenance. The purpose of this paper is to compare a new regression test selection model called ReTSE with Pythia. The ReTSE model uses decomposition slicing to identify the relevant regression tests; decomposition slicing provides a technique capable of identifying the unchanged parts of a system. Pythia is a regression test selection technique based on textual differencing. Both techniques are compared using a Power program taken from Vokolos and Frankl's paper. The analysis of this comparison has shown promising results in reducing the number of tests to be run after changes are introduced.
Sensitivity analysis in a lidar camera calibrationcsandit
In this paper, variability analysis was performed on the model calibration methodology between a multi-camera system and a LiDAR laser sensor (Light Detection and Ranging). Both sensors are used to digitize urban environments. A practical and complete methodology is presented to predict the error propagation inside the LiDAR-camera calibration. We perform a sensitivity analysis in a local and global way. The local approach analyses the output variance with respect to the input; only one parameter is varied at once. In the global sensitivity approach, all parameters are varied simultaneously and sensitivity indexes are calculated on the total variation range of the input parameters. We quantify the uncertainty behaviour in the intrinsic camera parameters and the relationship between the noisy data of both sensors and their calibration. We calculated the sensitivity indexes by two techniques, Sobol and FAST (Fourier amplitude sensitivity test). Statistics of the sensitivity analysis are displayed for each sensor, the sensitivity ratio in laser-camera calibration data
This document discusses feature selection techniques for classification problems. It begins by outlining class separability measures like divergence, Bhattacharyya distance, and scatter matrices. It then discusses feature subset selection approaches, including scalar feature selection which treats features individually, and feature vector selection which considers feature sets and correlations. Examples are provided to demonstrate calculating class separability measures for different feature combinations on sample datasets. Exhaustive search and suboptimal techniques like forward, backward, and floating selection are discussed for choosing optimal feature subsets. The goal of feature selection is to select a subset of features that maximizes class separation.
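The forward-selection idea described above can be sketched with a crude per-feature separability score, here a Fisher-ratio-like criterion for two classes. Both the criterion details and the toy data are illustrative assumptions, not taken from the document:

```python
def fisher_score(data, feats):
    """Crude two-class separability score for a feature subset: squared
    distance between class means over summed within-class variance,
    accumulated feature by feature (correlations ignored)."""
    score = 0.0
    for f in feats:
        cols = {c: [x[f] for x in xs] for c, xs in data.items()}
        means = {c: sum(v) / len(v) for c, v in cols.items()}
        var = sum(sum((x - means[c]) ** 2 for x in v) / len(v)
                  for c, v in cols.items())
        m = list(means.values())
        score += (m[0] - m[1]) ** 2 / (var + 1e-9)
    return score

def forward_select(data, n_feats, target):
    """Greedy forward selection: repeatedly add the feature that most
    improves the separability score until `target` features are chosen."""
    chosen = []
    while len(chosen) < target:
        best = max((f for f in range(n_feats) if f not in chosen),
                   key=lambda f: fisher_score(data, chosen + [f]))
        chosen.append(best)
    return chosen

# Two classes: feature 0 separates them, feature 1 is pure noise.
data = {'A': [(0.0, 5.0), (0.1, 4.0)], 'B': [(3.0, 4.5), (3.1, 5.5)]}
sel = forward_select(data, n_feats=2, target=1)
```

Scalar selection like this ignores feature correlations, which is exactly why the document contrasts it with feature vector selection and floating search.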
Design Verification and Test Vector Minimization Using Heuristic Method of a ...ijcisjournal
The reduction in feature size increases the probability that a manufacturing defect in the IC will result in a faulty chip: a very small defect can easily produce a faulty transistor or interconnecting wire when the feature size is small. Testing is required to guarantee fault-free products, regardless of whether the product is a VLSI device or an electronic system, and simulation is used to verify the correctness of the design. To test an n-input circuit exhaustively, 2^n test vectors are required; as the number of circuit inputs grows, this exponential growth in the required number of vectors makes testing the circuit very time-consuming. It is therefore necessary to find testing methods that reduce the number of test vectors. Here, a heuristic approach is designed to test the ripple carry adder. ModelSim and Xilinx tools are used to verify and synthesize the design.
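The idea of cutting the exhaustive 2^n set down to a few essential tests can be illustrated on a toy two-gate circuit, y = (a AND b) OR c, under the single stuck-at fault model. The greedy covering heuristic below is a generic sketch, not the paper's algorithm:

```python
from itertools import product

def simulate(a, b, c, fault=None):
    """y = (a AND b) OR c with an optional single stuck-at fault.
    fault = (line, value), line in {'a','b','c','n','y'}."""
    def inject(line, v):
        return fault[1] if fault and fault[0] == line else v
    a, b, c = inject('a', a), inject('b', b), inject('c', c)
    n = inject('n', a & b)          # internal AND-gate output
    return inject('y', n | c)

vectors = list(product((0, 1), repeat=3))        # all 2^3 input vectors
faults = [(l, v) for l in 'abcny' for v in (0, 1)]
# Fault table: the faults each vector detects (faulty output differs).
detects = {t: {f for f in faults if simulate(*t, f) != simulate(*t)}
           for t in vectors}
testable = set().union(*detects.values())
# Greedy minimization: repeatedly keep the vector covering the most
# still-undetected faults, until every testable fault is covered.
chosen, remaining = [], set(testable)
while remaining:
    best = max(vectors, key=lambda t: len(detects[t] & remaining))
    chosen.append(best)
    remaining -= detects[best]
```

For this circuit all ten stuck-at faults are testable, and the greedy pass keeps 4 of the 8 exhaustive vectors, the kind of reduction the head abstract reports at larger scale.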
Peak- and Average-Power Reduction in Check-Based BIST by using Bit-Swapping ...IOSR Journals
This document proposes a technique called a bit-swapping linear feedback shift register (BS-LFSR) to reduce average and peak power in built-in self-tests (BISTs). The BS-LFSR swaps the values of two cells in an LFSR based on the value of a third cell, reducing transitions by 50% compared to a standard LFSR. It is combined with an algorithm that orders check cells to further reduce average and peak power during testing. Experimental results on benchmark circuits show up to 65% reduction in average power and 55% reduction in peak power with little impact on fault coverage or test time.
NEURAL NETWORKS WITH DECISION TREES FOR DIAGNOSIS ISSUEScscpconf
1) The document presents a new technique for fault detection and isolation that uses neural networks to generate models of normal and faulty system behaviors. A decision tree is then used to evaluate residuals and isolate faults.
2) The technique is demonstrated on a benchmark process for an electro-pneumatic valve actuator. Neural networks are used to generate models of the actuator's normal and 19 possible faulty behaviors.
3) A decision tree structure is proposed to simplify online fault diagnosis by only evaluating the most significant residuals needed at each step to isolate faults. This reduces computational effort compared to evaluating all residuals.
NEURAL NETWORKS WITH DECISION TREES FOR DIAGNOSIS ISSUEScsitconf
This paper presents a new fault detection and isolation (FDI) technique applied to an industrial system. The technique is based on Neural Network fault-free and faulty behaviour models (NNFMs). The NNFMs are used for residual generation, while a decision tree architecture is used for residual evaluation. The decision tree is built from data collected at the NNFMs' outputs and is used to isolate detectable faults depending on a computed threshold; each part of the tree corresponds to a specific residual. With the decision tree, it becomes possible to take the appropriate decision regarding the actual process behaviour by evaluating only a few residuals. Compared to the usual systematic evaluation of all residuals, the proposed technique requires less computational effort and can be used for on-line diagnosis. An application example is presented to illustrate and confirm the effectiveness and accuracy of the proposed approach.
At present, research on fault detection and diagnosis technology is significant for improving the reliability of equipment, which can greatly improve its safety and efficiency. This paper proposes a new fault detection and diagnosis method based on the FOA-LSSVM algorithm. Experimental results demonstrate that the algorithm is effective for the detection and diagnosis of analog circuit faults. In addition, the model also demonstrates good generalization ability.
A Fault Dictionary-Based Fault Diagnosis Approach for CMOS Analog Integrated ...VLSICS Design
In this paper, we propose a simulation-before-test (SBT) fault diagnosis methodology based on a fault dictionary approach. This technique allows the detection and localization of the most likely open-circuit defects occurring in the interconnects of Complementary Metal-Oxide-Semiconductor (CMOS) analog integrated circuits (ICs). The fault dictionary is built by simulating, at the layout level, the most likely defects causing the faults to be detected. Then, for each injected fault, the simulated frequency responses and power consumption are stored in a table which constitutes the fault dictionary; each line of the dictionary is a fault signature used to identify and locate a considered defect. When testing, the circuit under test is excited with the same stimulus, and the responses obtained are compared to the stored ones. To prove the efficiency of the proposed technique, a full-custom CMOS operational amplifier is implemented in 0.25 µm technology, and the most likely open-circuit faults are deliberately injected and simulated at the layout level.
This document discusses a technique called Linear Feedback Shift Register - Bit Complement Test Pattern Generation (LFSR-BCTPG) to minimize test power in VLSI circuits. The LFSR-BCTPG generates test patterns with the LFSR but complements output bits to reduce repeated patterns. This increases unique test vectors and improves fault coverage while reducing circuit size and dynamic power compared to a conventional LFSR. The technique generates test patterns by alternating the enable signals to the two halves of the LFSR flip-flops. Evaluation on ISCAS benchmark circuits showed this method reduces dynamic power consumption during testing.
Cellular wireless systems like GSM suffer from congestion, resulting in overall system degradation and poor service delivery. When the traffic demand in a geographical area is high, the input traffic rate will exceed the capacity of the output lines. This work focuses on a homogeneous wireless network (one whose network traffic and resource dimensioning are statistically identical), so that the network performance evaluation can be reduced to a system with a single cell and a single traffic type. Such a system can employ a queuing model to evaluate the performance metric of a cell in terms of blocking probability. Five congestion control models were compared in the work to ascertain their peculiarities: Erlang B, Erlang C, Engset (cleared), Engset (buffered), and Bernoulli. To analyze the system, an aggregate one-dimensional Markov chain was derived that describes a call arrival process under the assumption that it is Poisson distributed. The models were simulated and their results show varying performances; however, the Bernoulli model (Pb5) tends to allow more users access to the system while the congestion level remains unaffected despite increases in the number of users and the offered traffic into the system.
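Of the models compared above, Erlang B has a particularly compact closed recursion for the blocking probability. The sketch below evaluates it for an assumed offered load and channel count (illustrative numbers, not the paper's scenario):

```python
def erlang_b(traffic, channels):
    """Call blocking probability via the numerically stable recursion
    B(A, 0) = 1,  B(A, n) = A*B(A, n-1) / (n + A*B(A, n-1)),
    where A is the offered traffic in erlangs and n the channel count."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)
    return b

# 2 erlangs offered to 4 channels: blocking probability = 2/21
pb = erlang_b(2.0, 4)
```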
Multiple Sensors Soft-Failure Diagnosis Based on Kalman Filtersipij
Sensors are necessary components of the engine control system; therefore, more work must be done to improve sensor reliability. Soft failures are small bias or drift errors in the sensed values that accumulate relatively slowly with time; they must be detected because they can very easily be mistaken for noise. Simultaneous multiple sensor failures are rare events but must be considered. To solve this problem, a revised multiple-failure-hypothesis testing approach is investigated. This approach uses multiple Kalman filters, each designed on a specific hypothesis for detecting a specific sensor fault, and then uses the Weighted Sum of Squared Residuals (WSSR) to process the Kalman filter residuals; the residual signals are compared with a threshold to make fault detection decisions. The simulation results show that the proposed method can detect multiple sensor soft failures quickly and accurately.
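The WSSR-plus-threshold decision step can be sketched as below. The window length, noise level, residual sequence, and alarm threshold are illustrative assumptions; in practice the threshold would be set from a chi-square quantile for the chosen window:

```python
def wssr_alarm(residuals, sigma, window, threshold):
    """Weighted sum of squared residuals over a sliding window,
    compared to a fixed alarm threshold; returns one alarm flag
    per window position."""
    alarms = []
    for i in range(len(residuals) - window + 1):
        w = sum((r / sigma) ** 2 for r in residuals[i:i + window])
        alarms.append(w > threshold)
    return alarms

# Residuals drift upward after sample 5, as a slow soft failure would:
res = [0.1, -0.2, 0.1, 0.0, -0.1, 1.5, 1.6, 1.7, 1.8, 1.9]
flags = wssr_alarm(res, sigma=0.5, window=4, threshold=20.0)
```

Early windows stay below the threshold while the drifting tail trips the alarm, which is why WSSR catches slow soft failures that a single-sample test would dismiss as noise.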
A FAULT DIAGNOSIS METHOD BASED ON SEMI-SUPERVISED FUZZY C-MEANS...IJCI JOURNAL
Machine learning approaches are generally adopted in many fields including data mining, image processing, intelligent fault diagnosis, etc. As a classic unsupervised learning technology, fuzzy C-means cluster analysis plays a vital role in machine learning based intelligent fault diagnosis. With the rapid development of science and technology, the monitoring signal data is numerous and keeps growing fast. Only typical fault samples can be obtained and labeled. Thus, how to apply semi-supervised learning technology in fault diagnosis is significant for guaranteeing the equipment safety. According to this, a novel fault diagnosis method based on semi-supervised fuzzy C-means (SFCM) cluster analysis is proposed. Experimental results on the Iris data set and the steel plates faults data set show that this method is superior to traditional fuzzy C-means clustering analysis.
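The underlying fuzzy C-means iteration can be sketched in plain Python. This is ordinary FCM on made-up 1-D data; the semi-supervised variant referred to above would additionally fix the memberships of the labeled fault samples:

```python
def fcm_1d(xs, centers, m=2.0, iters=20):
    """Plain fuzzy C-means on 1-D data: alternate the membership
    update and the (membership-weighted) center update."""
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        u = []
        for x in xs:
            row = []
            for c in centers:
                d = abs(x - c) + 1e-12
                row.append(1.0 / sum((d / (abs(x - k) + 1e-12))
                                     ** (2.0 / (m - 1.0)) for k in centers))
            u.append(row)
        # center update: weighted means with weights u^m
        centers = [sum(u[j][i] ** m * xs[j] for j in range(len(xs)))
                   / sum(u[j][i] ** m for j in range(len(xs)))
                   for i in range(len(centers))]
    return centers

# Two well-separated toy clusters; centers converge near 0.1 and 5.1:
cs = sorted(fcm_1d([0.0, 0.1, 0.2, 5.0, 5.1, 5.2], centers=[1.0, 4.0]))
```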
Similar to Heuristic approach to optimize the number of test cases for simple circuits (20)
Build the Next Generation of Apps with the Einstein 1 Platform.
Join Philippe Ozil for a workshop session that walks you through the details of the Einstein 1 platform, the importance of data for building artificial intelligence applications, and the different tools and technologies Salesforce offers to bring you all the benefits of AI.
Rainfall intensity duration frequency curve statistical analysis and modeling...bijceesjournal
Using data from 41 years in Patna, India, the study's goal is to analyze the trends of how often it rains on a weekly, seasonal, and annual basis (1981-2020). First, the historical rainfall data set for Patna, India, over this 41-year period was evaluated for quality by statistically analyzing rainfall using the intensity-duration-frequency (IDF) curve and its associated relationships. Changes in the hydrologic cycle as a result of increased greenhouse gas emissions are expected to induce variations in the intensity, length, and frequency of precipitation events. One strategy to lessen vulnerability is to quantify probable changes and adapt to them. Techniques such as the log-normal, normal, and Gumbel extreme value type I (EV-I) distributions are used. Distributions were created with durations of 1, 2, 3, 6, and 24 h and return periods of 2, 5, 10, 25, and 100 years. Mathematical correlations between rainfall and recurrence interval were also established.
Findings: Based on findings, the Gumbel approach produced the highest intensity values, whereas the other approaches produced values that were close to each other. The data indicates that 461.9 mm of rain fell during the monsoon season’s 301st week. However, it was found that the 29th week had the greatest average rainfall, 92.6 mm. With 952.6 mm on average, the monsoon season saw the highest rainfall. Calculations revealed that the yearly rainfall averaged 1171.1 mm. Using Weibull’s method, the study was subsequently expanded to examine rainfall distribution at different recurrence intervals of 2, 5, 10, and 25 years. Rainfall and recurrence interval mathematical correlations were also developed. Further regression analysis revealed that short wave irrigation, wind direction, wind speed, pressure, relative humidity, and temperature all had a substantial influence on rainfall.
Originality and value: The results of the rainfall IDF curves can provide useful information to policymakers in making appropriate decisions in managing and minimizing floods in the study area.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
AI for Legal Research with applications, toolsmahaffeycheryld
AI applications in legal research include rapid document analysis, case law review, and statute interpretation. AI-powered tools can sift through vast legal databases to find relevant precedents and citations, enhancing research accuracy and speed. They assist in legal writing by drafting and proofreading documents. Predictive analytics help foresee case outcomes based on historical data, aiding in strategic decision-making. AI also automates routine tasks like contract review and due diligence, freeing up lawyers to focus on complex legal issues. These applications make legal research more efficient, cost-effective, and accessible.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Design and optimization of ion propulsion dronebjmsejournal
Electric propulsion technology is widely used in many kinds of vehicles in recent years, and aircrafts are no exception. Technically, UAVs are electrically propelled but tend to produce a significant amount of noise and vibrations. Ion propulsion technology for drones is a potential solution to this problem. Ion propulsion technology is proven to be feasible in the earth’s atmosphere. The study presented in this article shows the design of EHD thrusters and power supply for ion propulsion drones along with performance optimization of high-voltage power supply for endurance in earth’s atmosphere.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...
Heuristic approach to optimize the number of test cases for simple circuits
International journal of VLSI design & Communication Systems (VLSICS) Vol.1, No.3, September 2010
DOI: 10.5121/vlsic.2010.1302
Heuristic approach to optimize the number of test
cases for simple circuits
SM. Thamarai 1, K. Kuppusamy 2 and T. Meyyappan 3
1 Research Scholar, lotusmeys@yahoo.com
2 Associate Professor & Research Supervisor, kkdiksamy@yahoo.com
3 Associate Professor, meyslotus@yahoo.com
Department of Computer Science and Engineering, Alagappa University,
Karaikudi-630003, Tamilnadu, South India.
ABSTRACT
In this paper, a new solution is proposed for testing simple two-stage electronic circuits. It minimizes the
number of tests to be performed to determine whether the circuit is fault-free. The main idea behind the
present research work is to identify the maximum number of indistinguishable faults present in the given
circuit and to minimize the number of test cases based on the number of faults that have been detected. A
heuristic approach is used for the test minimization part, which identifies the essential tests from the
overall test cases. From the results it is observed that test minimization varies from 50% to 99%, with the
lowest value corresponding to a circuit with four gates. Test minimization is lower for circuits whose gates
have fewer input leads than for circuits whose gates have more input leads, for Boolean expressions with
the same number of symbols. The 99% reduction is achieved because a large number of tests detect the same
faults. The new approach is implemented for simple circuits. The results show potential for both smaller
test sets and lower CPU times.
KEYWORDS
Adaptive Scheduled Fault Detection, Combinational Circuits, Fault Library, Heuristic Approach, Test
Minimization.
1. INTRODUCTION
In recent years, the development of integrated circuit technology has accelerated rapidly; MSI and
LSI techniques promise to make today's functional-level devices tomorrow's (even today's) basic
components. Accordingly, digital systems are built with more and more complexity, and the fault
testing and diagnosis of digital circuits has become an important and indispensable part of the
manufacturing process.
Performance, area, power and testing are some of the most important attributes of complex VLSI
systems [7]. With the current reduction in device sizes it is possible to fit increasingly larger
designs onto a single chip. As chip density increases, the probability of defects occurring in a chip
increases as well. The quality, reliability and cost of the product are directly proportional to the
degree of testing of the product [1]. Deterministic test generation algorithms for combinational
circuits [10] and sequential circuits [8],[9],[11],[12],[13],[14],[15] have been used in the past, but
their execution times are often long, due to the large number of backtracks that often occur.
The objective of this paper is to develop a new test generation algorithm using heuristic
optimization techniques. This algorithm reduces the number of test vector sets of a combinational
logic circuit for fault detection and diagnosis respectively. With increasing integration levels in
today's VLSI chips, the complexity of testing them is also increasing, because the internal
chip modules have become increasingly difficult to access. Testing costs have become a
significant fraction of the total manufacturing cost, so there is a necessity to reduce the testing
cost. The factor that has the biggest impact on the testing cost of a chip is the time required to test it.
This time can be decreased if the number of tests required to test the chip is reduced, so we
simply need to devise a test set that is small in size [1].
1.1 Problem Statement
The problems solved in this paper are:
1. Finding a fault dictionary for all stuck-at faults in a combinational circuit.
2. Formulating an exact method for minimizing the given diagnostic test set.
3. Calculating the execution time, to overcome the computational limitations of the exact
method of item 2.
1.2 Original contributions:
We have developed a test minimization algorithm to produce a minimal test set for combinational
circuits. The method is founded on three steps: 1) identifying the distinguishable faults,
2) generating test cases for them, and 3) minimizing the number of test cases. Steps 1 and 2 give
us a non-exhaustive vector set, which on compaction will give a minimal test set. Using fault
detection and location, we have modeled a fault dictionary. The fault dictionary is a table in which
rows identify test numbers and columns identify the distinguishable faults. A heuristic approach
is adopted to optimize the number of test cases. Simple two-stage circuits and their Boolean
expressions (in sum-of-products form) are experimented with using the proposed method, and the test
minimization is found to be satisfactory.
The earlier research work on test set compaction is discussed in section 2. In section 3, the steps
involved in the heuristic approach for test minimization are discussed. In section 4, the complete
algorithm for test set compaction is presented. In section 5, the outcomes for various simple circuits
are tabulated and the results are interpreted. In the final section, conclusions and future research
directions are outlined.
2. BACKGROUND
This paper addresses the problem of minimal test pattern generation for simple combinational
logic circuits only. However, it should be noted that nearly all sequential logic, i.e. circuits
containing state-holding elements (flip-flops), is tested in a way that transforms its operation
under test from sequential to combinational [2]. A number of basic analytic and heuristic methods
are found in the literature, namely the Fault Table method, the Path Sensitizing method, the
Equivalent-Normal-Form (ENF) method, the Karnaugh map and tabular method, the ENF Karnaugh map
method, the Boolean Difference method, and the SPOOF method [3].
The fault table method [3][4] is the most classic approach to the problem. It is completely
general and always yields the minimum set of diagnostic tests. However, it suffers from the fact
that it requires a very large fault table to be constructed. To avoid the construction of this very
large fault table, the concept of path sensitizing was introduced.
A heuristic, systematic procedure derived from the concept of path sensitizing, known as the
equivalent normal form (ENF) method [4], was then introduced. Although this method eliminates
the two imperfections of the fault table and path sensitizing methods, it introduces an
unattractive new feature: the requirement of the cumbersome computation of a 'score function'
for every literal in the ENF and the complemented ENF of the circuit.
The ENF Karnaugh map method [6] is the combination of the ENF method and the Karnaugh map and
tabular method. Neither the ENF method nor the ENF-Karnaugh map method guarantees minimal
experiments, nor is there a guarantee that a set of sensitized paths can be found for every circuit.
The Boolean Difference method and the SPOOF method [4] are two convenient general methods
for deriving tests that detect any single and/or multiple faults in any part of the circuit without
using any fault tables or maps.
In the Fixed Scheduled Fault Detection method, three tables are generated for the given
combinational circuit (Boolean expression in sum-of-products form): the fault table, the fault
detection table and the fault location table. The essential test set is found using a new algorithm [16].
It removes redundancy in the test set by grouping test numbers that detect the same faults, and it
removes redundant test numbers that detect the same fault. Test numbers that alone detect single
faults are also collected as essential test numbers.
3. HEURISTIC METHOD FOR TEST MINIMIZATION
In the heuristic method, fault detection and test minimization consist of two phases. In the first
phase the fault dictionary alone is created. In the second phase the number of tests is
minimized. In this paper a diagnosing tree is created by dissecting the fault table matrix into two
sub-matrices based on an essential test number. That test number is added to the essential test set.
The column numbers of these two matrices are added to the root node of the tree as right and left
siblings: the left child contains the fault-free output column numbers from the matrix (0s) and the
right child contains the faulty output column numbers from the matrix (1s). The process is repeated
until both the left and right children contain a single column number. The essential test set is found
after removing redundant test numbers from it [4].
In this method, economies can be achieved by choosing each test input to be
applied to the circuit on the basis of the outcomes of all previous tests in the schedule. The choice
of test schedule depends upon the outcomes of the individual tests in the sequence. A convenient
way to present such a sequence of tests and their outcomes is to use a diagnosing tree.
3.1 Diagnosing Tree
A diagnosing tree is a directed graph whose nodes are tests. The outgoing branches from a node
represent the different outcomes of the particular test. Diagnosing trees [3] are shown in figures
1, 2 and 3. From the diagnosing tree of figure 1, we see that the circuit is fault-free if and only if
the output sequence for the test sequence (2,3,6,5) is (0,1,1,0), as indicated by the path of dark
lines. Thus this tree can be simplified to the one shown in figure 2. Applying tests 2,3,5,6 in any
order will not shorten the length of the experiment. Another diagnosing tree for the test set
{2,3,5,6} is shown in figure 3, which still requires all four tests. Unlike fault detection,
adaptive-scheduled fault location in general yields a shorter experiment. Using the heuristic
approach, fault location for the circuit of figure 1 requires a minimal test length of 4.
Figure 1 (Diagnosing Tree for fault Detection)
3.2 Heuristic Method
This method consists of two phases:
Phase 1: Construction of the fault dictionary
Phase 2: Proposed method for test minimization
Figure 2 (Simplified tree)          Figure 3 (Another simplified tree)
Phase 1: Construction of the fault dictionary
If x1, x2, …, xn are the input variables to a single-output circuit whose fault-free (correct)
output is z = z(x1, …, xn), and zα1, zα2, …, zαi are the erroneous outputs, each
corresponding to one of the possible faults α1, α2, …, αi, a multiple-output table of the
combinations may be obtained. This is called a fault dictionary, F, shown in Table I.
Table I Fault Dictionary

Row Number | x1 x2 … xn | z | zα1 | zα2 | … | zαi
0          | 0  0  … 0  | 0 | 1   | 0   | … | 0
1          | 0  0  … 1  | 1 | 1   | 0   | … | 1
…          | …          | … | …   | …   | … | …
2^n − 1    | 1  1  … 1  | 0 | 0   | 0   | … | 1
Phase 2: Proposed method for test minimization
This method adopts a heuristic approach for test minimization. After two test inputs have been
applied, the four partial test schedules that may follow can all differ in content and
length, and so on for successive test inputs. The objective of this method is to find a test schedule
with the minimum number of levels in the diagnosing tree. The following factors influence the
method adopted for finding the minimal test set:
1. The tests chosen must not be confined to any particular given set of tests. They must be
chosen from the rows of the fault table.
2. The construction of an adaptive schedule of tests for fault location with a minimal
number of levels needs at least NL tests.
3. At each step, the test that will distinguish between the largest number of faults not
already distinguished should be chosen.
One way to select the appropriate row (test) at each step of the procedure is to try all possible
remaining rows (tests). For even a small fault table, however, the number of possible graph
labelings that must be tried to determine the minimal number of levels is astronomical. This
approach is therefore impractical.
Let Wi0 and Wi1 denote the numbers of 0's and 1's in row i, respectively. A simple heuristic
method for finding a nearly minimal adaptive-scheduled fault-location experiment is:
select the row i which maximizes the number of (0,1) pairs between digits in that row, that is,
which maximizes the expression

Ri = Wi0 · Wi1

This number is maximized by the row that has the most nearly equal distribution of 0's and 1's
[i.e. the row (or one of the subset of rows) for which |Wi0 − Wi1| is minimal]. The use of this
criterion appears to work very well for many problems.
4. COMPLETE ALGORITHM FOR TEST MINIMIZATION
This algorithm accepts the given expression in sum-of-products form, produces the fault table,
detects faults and outputs the essential tests after eliminating redundancy, if any. The pseudocode
for the proposed algorithm is given below:
Start
Initialize a binary tree data structure.
Step 1: Read the Boolean expression (sum-of-products form).
Step 2: Substitute binary values for all the symbols and find the s-a-0 and s-a-1 faults.
Step 3: Eliminate duplicate faults.
Step 4: Create the fault table by substituting all binary combinations of the n symbols in the
given expression.
Step 5: Eliminate duplicate rows and columns in the fault table.
Repeat
    Step 6: Find the sum of 1's and the sum of 0's in each row.
    Step 7: Find the difference between the sums of 0's and 1's.
    Step 8: Find the minimum of the differences.
    Step 9: Select the row number with the minimum difference as an essential test number.
    Step 10: Split the matrix into two sub-matrices, one containing 0's and another
    containing 1's, based on the essential test row.
    Step 11: Store the column numbers of each sub-matrix as the left and right child of the
    binary tree.
    Step 12: Eliminate the selected row number from further analysis.
Until the left and right child of the binary tree contain a single column number.
Step 13: Eliminate redundant test numbers.
Step 14: Output the essential tests for the given circuit.
Stop
5. RESULTS AND DISCUSSION
This algorithm was implemented in the C language and experimented with and tested on different
simple combinational circuits. It finds the essential test numbers for the given circuits. The
results are tabulated in Table II, from which the following inferences are made. The execution
time of the algorithm increases with the number of gates. The 99% reduction is achieved because a
large number of tests detect the same faults; hence, they are grouped, one among them is retained
and the rest are eliminated. The diagnosing tree created occupies little storage space.
Part of the algorithm (section 4) proposed in this research work uses the binary tree structure to
identify test numbers covering more than one fault and eliminates redundant tests to be
performed. This method suffers from keeping a very large fault detection and location table, which
increases the memory requirement of the data structures used. The increase in memory
requirement is directly proportional to the total number of inputs to the gates.
Table II Results

Sl. | No. of   | Total No. of  | No. of | Minimized | Execution Time | Minimization in
No  | Inputs n | Tests 2^n (a) | faults | Tests (b) | in Seconds     | Percentage (a−b)/a·100
1   | 4        | 16            | 8      | 7         | 0.5            | 50.0
2   | 6        | 64            | 11     | 10        | 1              | 82.8
3   | 8        | 256           | 14     | 13        | 2              | 94.9
4   | 9        | 512           | 14     | 13        | 3              | 97.4
5   | 10       | 1024          | 17     | 15        | 6              | 98.5
6   | 11       | 2048          | 17     | 15        | 13             | 99.3
7   | 12       | 4096          | 20     | 17        | 28             | 99.6
8   | 13       | 8192          | 22     | 19        | 30             | 99.7
9   | 14       | 16384         | 25     | 23        | 34             | 99.8
10  | 15       | 32768         | 29     | 24        | 35             | 99.9
6. CONCLUSION
To minimize the test generation problem in simple two-stage combinational circuits, a new
heuristic algorithm has been developed. This algorithm was implemented and tested with different
combinational circuits. From the implementation it was observed that the execution time of the
algorithm increases with the number of gates. Test minimization varies from 50% to 99%, with the
lowest value corresponding to a circuit with four gates. In the case of complex circuits, the
number of faults is naturally larger, and the test minimization percentage is reduced. This
algorithm requires a very large fault table to be constructed, but it provides an optimal
solution, and the procedure is quite simple and easy to apply. The drawback of this method is that
it requires a large amount of computer storage space to store the fault table. The next phase of
the research work is to develop a suitable heuristic search algorithm, such as a genetic
algorithm, to overcome the difficulties of the proposed method. This work may also be extended to
very large scale integration (VLSI) benchmark circuits.
REFERENCES:
[1] Mohammed Ashfaq Shukoor, "Fault testing diagnostic test minimization", M.Sc. Thesis,
Auburn University, Auburn.
[2] M.L. Bushnell and V.D. Agrawal, "Essentials of Electronic Testing for Digital, Memory, and
Mixed-Signal VLSI Circuits", Springer, 2000.
[3] B. Holdsworth, "Digital Logic Design", Second Edition, Prentice Hall of India, 1975.
[4] Samuel C. Lee, "Digital Circuits and Logic Design", Prentice Hall of India.
[5] D.K. Pradhan, "Fault Tolerant Computing – Theory and Techniques", Prentice Hall of
India, 1986.
[6] W.H. Kautz, "Fault Testing and Diagnosis in Combinational Digital Circuits", IEEE Trans.
Computers, vol. C-17, pp. 352-366, Apr. 1968.
[7] P. Kalpana and K. Gunavathy, "Fault Oriented Test Pattern Generator for Digital to Analog
Converters", Academic Open Journal, vol. 13, 2004.
[8] W.-T. Cheng, "The BACK algorithm for sequential test generation",
Proc. Int. Conf. Computer Design, pp. 66-69, Oct. 1988.
[9] A. Ghosh, S. Devadas and A.R. Newton, "Test generation for highly sequential
circuits", Proc. Int. Conf. Computer-Aided Design, pp. 362-365, Nov. 1989.
[10] P. Goel, "An implicit enumeration algorithm to generate tests for combinational logic
circuits", IEEE Trans. Computers, vol. C-30, no. 3, pp. 215-222, March 1981.
[11] D.H. Lee and S.M. Reddy, "A new test generation method for sequential circuits",
Proc. Int. Conf. Computer-Aided Design, pp. 446-449, Nov. 1991.
[12] H.-K. T. Ma, S. Devadas, A.R. Newton and A. Sangiovanni-Vincentelli, "Test generation for
sequential circuits", IEEE Trans. Computer-Aided Design, vol. 7, no. 10, pp. 1081-1093, Oct.
1988.
[13] R. Marlett, "An effective test generation system for sequential circuits", Proc.
Design Automation Conf., pp. 250-256, June 1986.
[14] T.M. Niermann and J.H. Patel, "HITEC: A test generation package for sequential circuits",
Proc. European Conf. Design Automation (EDAC), pp. 214-218, Feb. 1991.
[15] M.H. Schulz and E. Auth, "Essential: An efficient self-learning test pattern generation
algorithm for sequential circuits", Proc. Int. Test Conf., pp. 28-37, Aug. 1989.
[16] S.M. Thamarai, K. Kuppusamy and T. Meyyappan, "A new algorithm for fault based test
minimization in combinational circuits", International Journal of Computing and
Applications, ISSN 0973-5704, vol. 5, no. 1, pp. 77-88, 2010.
Authors
Prof. Dr. K. Kuppusamy is working as an Associate Professor in the Department of
Computer Science and Engineering, Alagappa University, Karaikudi, Tamilnadu,
India. He received his Ph.D. in Computer Science and Engineering from
Alagappa University, Karaikudi, Tamilnadu in the year 2007. He has published
many papers in international journals and presented papers at national and
international conferences. His areas of research interest include
Information/Network Security, Algorithms, Neural Networks, Fault Tolerant
Computing, Software Engineering & Testing and Operational Research.
Prof. T. Meyyappan is currently an Associate Professor in the Department of Computer
Science and Engineering, Alagappa University, Karaikudi, TamilNadu. He
submitted his Ph.D. thesis in Computer Science in May 2010 and has published a
number of research papers in national and international journals and
conferences. He has developed software packages for examination and admission
processing and the official website of Alagappa University. His research areas
include Operational Research, Digital Image Processing, Fault Tolerant
Computing, Network Security and Data Mining.
SM. Thamarai received her Diploma in Electronics and Communication
Engineering from the Department of Technical Education, TamilNadu in 1989 and her
B.C.A., M.Sc. (First Rank holder and Gold Medalist) and M.Phil. (First Rank holder)
degrees in Computer Science (1998-2005) from Alagappa University. She has
published two research papers in international journals and presented papers at
six national and one international conference. Her current research interests
are in Operational Research and Fault Tolerant Computing. She is currently
pursuing her Ph.D. at Alagappa University, Karaikudi, TamilNadu.