This paper investigates how changes to the characteristic polynomials and initial loadings affect the aliasing-error behaviour of a parallel signature analyzer (Multi-Input Shift Register), as used in an LFSR-based digital circuit testing technique. The investigation is carried out through an extensive simulation study of the technique's effectiveness. The results show that when identical characteristic polynomials of order n are used both in the pseudo-random test-pattern generator and in the parallel Multi-Input Shift Register (MISR) signature analyzer, the probability of aliasing errors is unaffected by changes in the initial loading of the pseudo-random test-pattern generator.
An optimal general type-2 fuzzy controller for Urban Traffic Network (ISA Interchange)
This document presents an optimal general type-2 fuzzy controller (OGT2FC) for controlling traffic signal scheduling and phase succession to minimize wait times and average queue length. The OGT2FC uses a combination of general type-2 fuzzy logic sets and the Modified Backtracking Search Algorithm (MBSA) to optimize the membership function parameters. Simulation results show the OGT2FC performs better than conventional type-1 fuzzy controllers in regulating urban traffic flow.
The document evaluates the performance of a parallel bubble sort algorithm implemented using MPI on a supercomputer. It finds that the running time of the parallel bubble sort improves as the number of processors increases. However, parallel efficiency is highest with a small number of processors, making the algorithm most efficient on a limited number of cores. Testing on Jordan's IMAN1 supercomputer showed efficiency decreasing as more processors were used, owing to growing communication overhead.
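The tension between falling runtime and falling efficiency can be illustrated with a toy cost model; the constants below are illustrative stand-ins, not measurements from the paper:

```python
def parallel_time(n, p, t_compare=1e-8, t_comm=0.5):
    """Crude cost model: O(n^2) comparison work split over p processors,
    plus a communication overhead term that grows with the processor count.
    All constants are illustrative, not measured values."""
    return (n * n / p) * t_compare + t_comm * p

n = 100_000
t1 = parallel_time(n, 1)
for p in (1, 2, 4, 8, 16):
    tp = parallel_time(n, p)
    speedup = t1 / tp
    # runtime keeps falling, but efficiency (speedup / p) erodes
    print(f"p={p:2d}  time={tp:7.2f}  speedup={speedup:5.2f}  efficiency={speedup / p:4.2f}")
```

Under this model the total time decreases monotonically with p while efficiency decreases, matching the qualitative trend the study reports.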
Techniques for Minimizing Power Consumption in DFT during Scan Test Activity (IJTET Journal)
1. The document discusses techniques for minimizing power consumption during scan test activity. It proposes a novel circuit technique to eliminate switching in combinational logic during scan shifting by masking logic inputs. Blocking transistors are added to gate the power supply to first-level gates at flip-flop outputs during scan shifting.
2. A selective trigger scan architecture is proposed that reduces switching between scan cells and test vectors by using NOR gates to compare previous and next test vector values and only applying differences. Scan chain reordering is also used to minimize transitions between flip-flops.
3. Experimental results on ISCAS89 benchmarks show the proposed techniques reduce static power by 6.1-11% and area overhead by 47% compared to
Peak- and Average-Power Reduction in Scan-Based BIST by using Bit-Swapping ... (IOSR Journals)
This document proposes a bit-swapping linear feedback shift register (BS-LFSR) to reduce average and peak power in built-in self-test (BIST). The BS-LFSR swaps the values of two cells in an LFSR based on the value of a third cell, reducing transitions by 50% compared to a standard LFSR. It is combined with a scan-chain ordering algorithm to further reduce average and peak power during testing. Experimental results on benchmark circuits show up to a 65% reduction in average power and a 55% reduction in peak power, with little impact on fault coverage or test time.
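The bit-swapping idea can be sketched in a few lines: run a small maximal-length LFSR and, at each state, read one cell's output either directly or swapped with a neighbour depending on a third cell. The tap positions and cell choices below are illustrative, not the paper's design, so the exact transition counts differ from the 50% figure:

```python
def lfsr_states(seed=0b0001, taps=(3, 2), nbits=4, cycles=15):
    """Fibonacci LFSR over nbits cells; yields successive register states."""
    state = seed
    for _ in range(cycles):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)

def transitions(bits):
    """Number of 0->1 / 1->0 changes along a bit sequence."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

states = list(lfsr_states())
# Cell 0 taken as-is, versus cell 0 swapped with cell 1 whenever cell 2 is 1.
plain = [(s >> 0) & 1 for s in states]
swapped = [((s >> 1) & 1) if ((s >> 2) & 1) else ((s >> 0) & 1)
           for s in states]
print("transitions without swap:", transitions(plain))
print("transitions with swap:   ", transitions(swapped))
```

Even in this toy 4-bit example the conditionally swapped output makes fewer transitions than the raw cell output, which is the mechanism the BS-LFSR exploits at scale.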
Robust and efficient nonlinear structural analysis using the central differen... (openseesdays)
This document compares the central difference time integration scheme to the traditional Newmark average acceleration scheme for nonlinear structural analysis. The central difference scheme is explicit, does not require iteration, and is well-suited for parallelization. Incremental dynamic analyses using the central difference scheme converged for more ground motions and estimated a 10% higher median collapse capacity compared to the Newmark scheme. Analyses using the central difference scheme were also up to 5 times faster when run in parallel on high-performance computers. Parallel algorithms were developed to efficiently conduct multiple stripe analyses and incremental dynamic analyses using computational resources.
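The explicit, iteration-free character of the scheme is easy to see on a single-degree-of-freedom oscillator; the sketch below integrates undamped free vibration m·u'' + k·u = 0 with the classic central-difference update (the system and step size are illustrative, not from the paper):

```python
import math

def central_difference(m, k, u0, v0, dt, nsteps):
    """Explicit central-difference integration of m*u'' + k*u = 0.
    Each step is a direct algebraic update: no iteration, no solver."""
    a0 = -k * u0 / m
    u_prev = u0 - dt * v0 + 0.5 * dt * dt * a0   # fictitious step u_{-1}
    u = u0
    history = [u0]
    for _ in range(nsteps):
        a = -k * u / m
        u_next = 2.0 * u - u_prev + dt * dt * a  # explicit update
        u_prev, u = u, u_next
        history.append(u)
    return history

m, k = 1.0, (2.0 * math.pi) ** 2       # natural period T = 1 s
dt, nsteps = 0.001, 1000               # dt well below the stability limit
us = central_difference(m, k, u0=1.0, v0=0.0, dt=dt, nsteps=nsteps)
exact = [math.cos(2.0 * math.pi * i * dt) for i in range(nsteps + 1)]
err = max(abs(a - b) for a, b in zip(us, exact))
print(f"max error over one period: {err:.2e}")
```

Because every step depends only on the previous two states, the per-element updates in a large structural model are independent within a step, which is what makes the scheme well suited to parallelization. The price is conditional stability: dt must stay below a limit set by the highest natural frequency.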
A Text-Independent Speaker Identification System based on The Zak Transform (CSCJournals)
This paper presents a novel text-independent speaker identification system based on the discrete Zak transform. The system uses the Zak transform coefficients as features to model 23 speakers from the ELSDSR database. During identification, the Euclidean distance between the Zak transform of the test speech and each speaker model is calculated. The speaker with the minimum distance is identified. The system achieves an identification efficiency of 91.3% using a single test file and 100% using two test files. The Zak-based method is also faster and has comparable accuracy to MFCC-based speaker identification. The paper also explores dividing signals into segments and averaging the Zak transforms, which improves efficiency while only slightly increasing modeling time.
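The decision rule described above is plain nearest-neighbour classification; a minimal sketch follows, with made-up numbers standing in for Zak-transform coefficient vectors and hypothetical speaker ids:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(test_features, speaker_models):
    """Return the speaker id whose stored model is closest to the test features."""
    return min(speaker_models,
               key=lambda s: euclidean(test_features, speaker_models[s]))

# Toy models: in the paper these would be Zak-transform coefficients per speaker.
models = {"spk01": [0.2, 0.9, 0.4],
          "spk02": [0.8, 0.1, 0.5],
          "spk03": [0.5, 0.5, 0.5]}
test = [0.75, 0.15, 0.45]
print(identify(test, models))
```

The segment-averaging variant the paper explores would simply replace each model vector with the mean of the Zak transforms of several segments before applying the same rule.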
Unconstrained Optimization Method to Design Two Channel Quadrature Mirror Fil... (CSCJournals)
This paper proposes an efficient method for designing a two-channel quadrature mirror filter (QMF) bank for subband image coding. The choice of filter bank is important, as it affects both image quality and system design complexity. The design problem is formulated as a weighted sum of the reconstruction error in the time domain and the passband and stopband energies of the low-pass analysis filter of the filter bank. The objective function is minimized directly using a nonlinear unconstrained method. Experimental results on images show that the proposed method performs better than existing methods. The impact of filter characteristics such as stopband attenuation, stopband edge, and filter length on the quality of the reconstructed images is also investigated.
Using the black-box approach with machine learning methods in ... (butest)
The document discusses two experiments using machine learning methods to improve job scheduling in grid computing environments. In the first experiment, machine learning methods were used to assist basic resource selection algorithms. In the second experiment, machine learning methods directly selected resources for job execution. The results showed that machine learning approaches could achieve improvements or comparable results to traditional scheduling methods.
Drilling rate prediction using Bourgoyne and Young model associated with gene... (Avinash B)
The Bourgoyne and Young model (BYM) has been the dominant method for drilling rate prediction. It expresses the relation between the drilling rate and the parameters that affect it. Eight variables influence the drilling rate; they depend on the ground formation type and must be determined from data gathered in advance. Bourgoyne and Young proposed a multiple regression method for determining the unknown coefficients, but this method can produce coefficients that are physically meaningless in some circumstances. To resolve this flaw, new mathematical methods have been introduced, but using them incurs a loss of accuracy in the drilling rate prediction. Our proposed method addresses both deficiencies: physically meaningless coefficients and the decrease in accuracy. We employ a genetic algorithm (GA) to determine the unknown parameters of the BYM. Our practical data sets come from nine wells of the "Khangiran" Iranian gas field. Simulation results demonstrate the efficiency of the new method in determining the constant coefficients of the Bourgoyne and Young model compared with previous approaches.
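The coefficient search can be sketched with a minimal real-coded genetic algorithm. This is an illustration of the idea only: the model below is a toy two-coefficient line, not the eight-parameter BYM, and the bounds, population size, and operators are assumed settings. The point is that candidates stay inside physically meaningful bounds by construction, which is the claimed advantage over unconstrained regression:

```python
import random

random.seed(0)
BOUNDS = [(0.0, 5.0), (0.0, 5.0)]             # assumed physical bounds per coefficient
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.9, 4.1, 5.0, 6.1]                      # toy data, roughly y = 2 + x

def error(c):
    """Sum of squared residuals of the toy model y = c0 + c1*x."""
    return sum((c[0] + c[1] * x - y) ** 2 for x, y in zip(xs, ys))

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(30)]
for _ in range(200):
    pop.sort(key=error)
    elite = pop[:10]                           # elitist selection
    children = []
    while len(children) < 20:
        p1, p2 = random.sample(elite, 2)
        w = random.random()
        child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]   # blend crossover
        child = [clamp(g + random.gauss(0, 0.1), lo, hi)        # bounded mutation
                 for g, (lo, hi) in zip(child, BOUNDS)]
        children.append(child)
    pop = elite + children

best = min(pop, key=error)
print(f"c0={best[0]:.2f}  c1={best[1]:.2f}  sse={error(best):.4f}")
```

Replacing the toy error function with the BYM residuals over field data gives the shape of the approach the abstract describes.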
This document presents three techniques to accelerate the estimation of a sequence of discrete choice models (DCMs): standardization, warm start, and early stopping. Standardization rescales independent variables to a common scale. Warm start initializes subsequent models with the optimized parameters from previous models. Early stopping monitors the log likelihood improvement over iterations and stops estimation early if improvement plateaus. These techniques are tested on two sequences of 100 DCMs estimated using the BFGS algorithm. Results show that each technique individually and combined provide significant speedups for estimating sequences of DCMs.
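Of the three techniques, early stopping is the easiest to isolate: halt once the log-likelihood improvement between iterations stays below a tolerance for several consecutive checks. The sketch below uses a toy concave log-likelihood and a damped 1-D update in place of BFGS; tolerances and patience are illustrative:

```python
def estimate(step, ll, tol=1e-6, patience=3, max_iter=1000):
    """Iterate `step`, stopping early once the log-likelihood improvement
    stays below `tol` for `patience` consecutive iterations."""
    theta = 0.0
    prev_ll = ll(theta)
    stalled = 0
    for it in range(1, max_iter + 1):
        theta = step(theta)
        cur_ll = ll(theta)
        if cur_ll - prev_ll < tol:
            stalled += 1
            if stalled >= patience:
                return theta, it           # plateau detected: stop early
        else:
            stalled = 0
        prev_ll = cur_ll
    return theta, max_iter

# Toy concave log-likelihood ll(t) = -(t - 2)^2 with an update that
# halves the distance to the optimum each iteration.
ll = lambda t: -(t - 2.0) ** 2
step = lambda t: t + 0.5 * (2.0 - t)
theta, iters = estimate(step, ll)
print(f"stopped after {iters} iterations at theta={theta:.4f}")
```

Warm starting then amounts to replacing the fixed `theta = 0.0` initialization with the optimized parameters of the previous model in the sequence, so later estimations begin near their optimum and trigger the stopping rule sooner.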
Time domain analysis and synthesis using Pth norm filter design (CSCJournals)
This document discusses time domain analysis and synthesis using pth norm filter design. It proposes using the least pth algorithm to design multirate filter banks for analysis and synthesis. This allows exploring stability and other properties using MATLAB. The algorithm does not require adapting the weighting function or constraints during optimization. Examples are provided to illustrate the effectiveness of designing filters with good signal-to-noise ratio using analysis and synthesis. Key concepts discussed include time domain analysis using impulse response, basic multirate building blocks like decimators and interpolators, and the design formulation using pth norm and infinity norm.
DESIGN OF DELAY COMPUTATION METHOD FOR CYCLOTOMIC FAST FOURIER TRANSFORM (sipij)
In this paper, a delay computation method for the common subexpression elimination (CSE) algorithm is implemented on the cyclotomic fast Fourier transform. The combination of the CSE algorithm with the delay computation method is known as the Gate-Level Delay Computation with Common Subexpression Elimination (GLDC-CSE) algorithm. Common subexpression elimination is an effective optimization method for reducing the number of adders in the cyclotomic Fourier transform. The delay computation method is based on a delay matrix and is suitable for implementation on computers. The gate-level delay computation method is used to find the critical-path delay and is analyzed on various finite-field elements. The presented algorithm is demonstrated through a case study of the cyclotomic fast Fourier transform over a finite field. A direct implementation of the cyclotomic fast Fourier transform has high additive complexity; applying the GLDC-CSE algorithm reduces the additive complexity, as well as the area and the area-delay product.
Robust design of a 2 dof gmv controller a direct self-tuning and fuzzy schedu... (ISA Interchange)
This paper presents a study of self-tuning control strategies with generalized minimum variance control in a fixed two-degree-of-freedom structure (GMV2DOF), from two adaptive perspectives: one, from the process-model point of view, uses a recursive least squares estimator for direct self-tuning design; the other uses a Mamdani fuzzy GMV2DOF parameter-scheduling technique based on analytical and physical interpretations drawn from a robustness analysis of the system. Both strategies are assessed in simulation and on real plants: a damped pendulum and a wind tunnel under development at the Department of Automation and Systems of the Federal University of Santa Catarina.
Performance Analysis of Input Vector Monitoring Concurrent Built In Self Repa... (IJTET Journal)
Abstract—Built-in self-test (BIST) techniques constitute an attractive and practical solution to the problem of testing VLSI circuits and systems. Input vector monitoring concurrent BIST schemes perform testing during the normal operation of the circuit, without needing to take the circuit offline to run the test. These schemes are evaluated on hardware overhead and concurrent test latency (CTL), where the CTL is the time required for the test to complete while the circuit operates normally. In this brief, we present a novel input vector monitoring concurrent BIST scheme based on monitoring a set (called a window) of vectors reaching the circuit inputs during normal operation, using a static-RAM-like structure to store the relative locations of the vectors that reach the circuit inputs within the examined window. The proposed scheme is shown to perform significantly better than previously proposed schemes with respect to the hardware overhead and CTL tradeoff. The proposed method also incorporates novel BISD and BISR schemes based on fail-pattern identification, together with a binary-matrix lossless compression scheme to compress the test patterns in test cubes.
Design and Implementation of Multiplier using Advanced Booth Multiplier and R... (IRJET Journal)
This document describes a proposed design for a high-speed multiplier that uses a modified Booth algorithm and adaptive hold logic (AHL) along with razor flip-flops to mitigate performance degradation due to aging effects like NBTI and PBTI. The proposed design uses a radix-4 Booth multiplier in place of traditional row/column bypass multipliers to increase throughput. It also includes an AHL circuit that can dynamically determine if an operation requires one or two cycles to complete and can adjust this determination over time to account for aging. Simulation results showed that the proposed design achieved higher performance than fixed-latency multipliers and was more resistant to aging effects.
FEEDBACK LINEARIZATION AND BACKSTEPPING CONTROLLERS FOR COUPLED TANKS (ieijjournal)
This paper investigates the use of sophisticated and advanced nonlinear control algorithms to control a nonlinear coupled tanks system. The first procedure, feedback linearization control (FLC), is found to achieve global exponential asymptotic stability with a very short response time, no significant overshoot, and a negligible error norm. The second procedure, backstepping control (BC), is a recursive procedure that interlaces the choice of a Lyapunov function with the design of the feedback control; simulation results show that this method preserves tracking and robust control and can often solve stabilization problems under less restrictive conditions than other methods. Finally, both proposed control schemes guarantee asymptotic stability of the closed-loop system while meeting the trajectory-tracking objectives.
This document describes a study on enhancing the serial estimation of discrete choice models through the use of standardization, warm starting, and early stopping techniques. It first reviews literature on accelerating discrete choice model estimation and quasi-Newton optimization methods. It then details the three techniques used: standardizing variables, initializing subsequent models with previous solutions, and stopping optimization early based on log-likelihood trends. Two sequences of 100 discrete choice models are tested to evaluate the effectiveness of the techniques. Results show that warm starting parameters and the Hessian matrix from previous solutions significantly reduces estimation time compared to estimating models separately.
Nonlinear combination of intensity measures for response prediction of RC bui... (openseesdays)
This document discusses using nonlinear combinations of intensity measures (IMs) to more accurately predict engineering demand parameters (EDPs) for response analysis of reinforced concrete (RC) buildings. It presents an evolutionary polynomial regression (EPR) technique to model complex nonlinear relationships between IMs and EDPs without assumptions about the form of the relationship. The EPR technique is applied to dynamic analyses of an RC framed building to predict maximum inter-story drift ratio and maximum floor acceleration under earthquake ground motions, demonstrating more accurate predictions compared to a single IM, especially for base-isolated buildings. The document concludes more accurate EDP predictions can be obtained through nonlinear IM combinations and advocates improving data-driven modeling before proposing new IMs.
Predicting (Nk) factor of (CPT) test using (GP): Comparative Study of MEPX & GN7 (Ahmed Ebid)
The static cone penetration test (CPT) is a broadly accepted and dependable geotechnical in-situ tool that quickly provides a substantial amount of reliable data about soil classification, stratification, and properties. The undrained shear strength of clay (cu) is one of the principal soil parameters that can reasonably be evaluated from CPT results, as it is directly connected to the tip resistance through the empirical cone factor (Nk). Earlier research showed that the (Nk) value depends on the type of soil, its nature, the stress-history conditions, and many other variables. Construction development in locations with thick deposits of soft to very soft clays has motivated extensive research to define a reasonable value of the (Nk) factor for such clays. The present study concentrated on using the genetic programming (GP) technique to predict the (Nk) value of clay from consistency limits that can easily be determined in the laboratory. A set of 102 records was gathered from CPT site investigations, together with the corresponding consistency limits and other physical properties, and divided into a training set of 72 records and a validation set of 30 records. Both the (GN7) and (MEPX) software packages were used to apply GP to the available data. Four trials per package, with different chromosome lengths, were performed to correlate the (Nk) factor with the clay consistency limits, water content (wc), and unit weight (γ) using the training set; the resulting relations were then tested on the validation set. The four formulas generated by (GN7) showed accuracies between 93% and 97% and coefficients of determination (R2) between 0.7 and 0.9, while the four formulas from (MEPX) showed accuracy not exceeding 95% and (R2) between 0.45 and 0.75.
CARI2020: A CGM-Based Parallel Algorithm Using the Four-Russians Speedup for ... (Mokhtar SELLAMI)
The document describes a parallel algorithm for 1-D sequence alignment using the Coarse-Grained Multicomputer (CGM) model. The algorithm partitions the sequence alignment problem into smaller subproblems that are distributed across multiple processors. It requires O(n²/(p log n)) time using O(p) communication rounds on p processors, improving on the sequential O(n²)-time algorithm. Each processor first computes a lookup table efficiently in parallel, before equal-sized macro-blocks of the task graph are distributed across processors to calculate the sequence alignment in parallel.
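For reference, the serial baseline being parallelized is a quadratic dynamic program; the sketch below computes edit distance, a simple form of 1-D sequence alignment, to show the cell-by-cell dependency structure (top, left, and diagonal neighbours) that the CGM algorithm partitions into macro-blocks. This is a generic textbook DP, not the paper's algorithm:

```python
def edit_distance(a, b):
    """O(len(a)*len(b)) Levenshtein distance with a rolling row.
    Each cell depends on its top, left, and top-left neighbours,
    the dependency pattern the block-parallel scheme exploits."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # match / substitution
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # classic example: distance 3
```

The Four-Russians speedup replaces the inner cell-by-cell computation with lookups into a precomputed table of small-block results, which is where the log n factor in the stated complexity comes from.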
A Template Matching Approach to Classification of QAM Modulation using Geneti... (CSCJournals)
The automatic recognition of the modulation format of a detected signal, the intermediate step between signal detection and demodulation, is a major task of an intelligent receiver, with various civilian and military applications. With no knowledge of the transmitted data and many unknown parameters at the receiver, such as signal power, carrier frequency and phase offsets, and timing information, blind identification of the modulation is a difficult task, and it becomes even more challenging under real-world conditions. In this paper, modulation classification for QAM is performed by a genetic algorithm followed by template matching on the constellation of the received signal. In addition, the classification finds the decision boundary of the signal, which is critical information for bit detection. I have proposed and implemented a technique that casts modulation recognition as shape recognition; the constellation diagram is a traditional and powerful tool for the design and evaluation of digital modulations. The simulation results show that the method classifies modulation with high accuracy and appropriate convergence in the presence of noise.
Effect of Residual Modes on Dynamically Condensed Spacecraft Structure (IRJET Journal)
This document discusses the effect of residual modes on the fundamental frequencies of a condensed spacecraft structure. It presents the modeling and dynamic analysis of a spacecraft bus structure using finite element analysis. The structure is condensed using the Craig-Bampton method to reduce the degrees of freedom. Residual modes are then computed and included to recover data lost during condensation. The results show that including residual modes provides frequencies for the condensed structure that closely match those of the original full structure model, demonstrating the effectiveness of using residual modes for data recovery after structural condensation.
Symbol Based Modulation Classification using Combination of Fuzzy Clustering ... (CSCJournals)
Most approaches to modulation recognition and classification have been founded on the modulated signal's components. In this paper, we develop an algorithm that applies fuzzy clustering followed by hierarchical clustering to the constellation of the received signal to identify the modulation type of a communication signal automatically. The simulations conducted show the high capability of this method for recognizing modulation levels in the presence of noise, and the method is applicable to digital modulations of arbitrary size and dimensionality. In addition, the classification finds the decision boundary of the signal, which is critical information for bit detection.
Fault detection and diagnosis for non-Gaussian stochastic distribution system... (ISA Interchange)
This document proposes a new method for fault detection and diagnosis of non-Gaussian stochastic distribution systems using radial basis function neural networks. It introduces an RBF neural network technique to approximate the output probability density functions of such systems. It then presents a nonlinear adaptive observer-based algorithm for fault detection and diagnosis. The algorithm introduces a tuning parameter to make the residual as sensitive as possible to faults. It analyzes the stability and convergence of the error dynamics for both fault detection and diagnosis. An example is provided to demonstrate the effectiveness of the proposed method.
Transpose Form FIR Filter Design for Fixed and Reconfigurable Coefficients (IRJET Journal)
This document discusses the design of transpose form finite impulse response (FIR) filters for both fixed and reconfigurable coefficients. Transpose form FIR filters naturally support the multiple constant multiplication technique, which can reduce computational delay. For fixed coefficients, a low-complexity design using multiple constant multiplication is implemented, reducing area and delay compared to direct form FIR filters. For reconfigurable coefficients, a multiplier-based design is used. Simulation results show the transpose form FIR filter achieves lower area and delay than the direct form structure.
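The structural difference is easy to show in software. In the transpose form, each input sample is broadcast to all coefficient multipliers at once (which is what enables multiple-constant-multiplication sharing in hardware), and the products flow through a delay line of partial sums. The coefficients below are arbitrary example values, and a direct-form implementation is included for comparison:

```python
def transpose_fir(coeffs, samples):
    """Transpose-form FIR: one shared input feeds every tap; the delay
    registers hold partial sums rather than delayed input samples."""
    n = len(coeffs)
    regs = [0.0] * n                  # regs[1..n-1] hold partial sums
    out = []
    for x in samples:
        y = coeffs[0] * x + (regs[1] if n > 1 else 0.0)
        for i in range(1, n):         # ascending order reads the old regs[i+1]
            regs[i] = coeffs[i] * x + (regs[i + 1] if i + 1 < n else 0.0)
        out.append(y)
    return out

def direct_fir(coeffs, samples):
    """Direct-form FIR: the delay line holds past input samples."""
    hist = [0.0] * len(coeffs)
    out = []
    for x in samples:
        hist = [x] + hist[:-1]
        out.append(sum(c * h for c, h in zip(coeffs, hist)))
    return out

coeffs = [0.5, 1.0, -0.25]
samples = [1.0, 2.0, 3.0, 4.0]
print(transpose_fir(coeffs, samples))
print(direct_fir(coeffs, samples))
```

Both structures compute the same convolution; the hardware advantages the abstract cites come from how the transpose form arranges the multipliers and adders, not from a different input-output behaviour.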
Maximum likelihood estimation-assisted ASVSF through state covariance-based 2... (TELKOMNIKA JOURNAL)
The smooth variable structure filter (SVSF) is a relatively new, robust predictor-corrector method for state estimation. To be used effectively, an SVSF requires an accurate system model and exact prior knowledge of both the process and measurement noise statistics. Unfortunately, the system model is always somewhat inaccurate because of simplifications made at the outset, and the small additive noises are only partially known or even unknown. This limitation can degrade the performance of the SVSF or even lead to divergence. For this reason, this paper proposes an adaptive smooth variable structure filter (ASVSF) obtained by conditioning the probability density function of a measurement on the unknown parameters at each iteration. The proposed method is applied to the localization and direct point-based observation task of a wheeled mobile robot, a TurtleBot2. Finally, realistic simulation and comparison against a conventional method show that the proposed method achieves better accuracy and stability in terms of the root mean square error (RMSE) of the estimated map coordinate (EMC) and the estimated path coordinate (EPC).
Reduced Test Pattern Generation of Multiple SIC Vectors with Input and Output... (IJERA Editor)
In recent years, design for low power has become one of the greatest challenges in high-performance very large scale integration (VLSI) design. Most methods focus on power consumption during normal mode operation, while test mode operation has not normally been a predominant concern. However, it has been found that the power consumed during test mode operation is often much higher than during normal mode operation [1]. This is because most of the consumed power results from switching activity in the nodes of the circuit under test (CUT), which is much higher during test mode than during normal mode operation [1]–[3]. In the proposed scheme, each generated vector applied to each scan chain is a single-input-change (SIC) vector, which minimizes input transitions and reduces test power. In VLSI testing, power reduction is achieved by increasing the correlation between consecutive test patterns.
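The switching-activity argument can be made concrete by counting bit transitions between consecutive test vectors: fewer transitions at the scan inputs mean less dynamic power during shifting. The vectors below are illustrative examples, not patterns from the paper:

```python
def transitions(prev, cur):
    """Hamming distance between two equal-length bit-string test vectors."""
    return sum(a != b for a, b in zip(prev, cur))

def total_switching(vectors):
    """Total input transitions over an ordered test sequence."""
    return sum(transitions(a, b) for a, b in zip(vectors, vectors[1:]))

random_like = ["1010", "0101", "1100", "0011"]   # low inter-pattern correlation
sic_like    = ["0000", "1000", "1100", "1110"]   # single-input-change sequence
print("random-like sequence:", total_switching(random_like))
print("SIC-like sequence:   ", total_switching(sic_like))
```

A single-input-change sequence contributes exactly one transition per pattern, which is the mechanism behind the correlation-based power reduction described above.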
This document summarizes a study of built-in self-test (BIST) approaches for detecting single stuck-at faults in combinational logic circuits. Pseudorandom test patterns generated by a linear feedback shift register (LFSR) were applied in parallel and serially to benchmark circuits. Applying patterns in parallel via test-per-clock achieved high fault coverage but required a large LFSR for circuits with many inputs. Reseeding the LFSR improved coverage when an initial seed was ineffective. Seed selection and minimum LFSR size for different application methods were evaluated to optimize BIST fault detection.
Cellular wireless systems such as GSM suffer from congestion, resulting in overall system degradation and poor service delivery. When the traffic demand in a geographical area is high, the input traffic rate exceeds the capacity of the output lines. This work focused on a homogeneous wireless network (one whose traffic and resource dimensioning are statistically identical across cells), so that performance evaluation reduces to a system with a single cell and a single traffic type. Such a system can employ a queuing model to evaluate the performance of a cell in terms of blocking probability. Five congestion control models were compared to ascertain their peculiarities: Erlang B, Erlang C, Engset (cleared), Engset (buffered), and Bernoulli. To analyze the system, an aggregate one-dimensional Markov chain was derived describing a call arrival process assumed to be Poisson distributed. The models were simulated and their results show varying performance; the Bernoulli model (Pb5), however, admits more users to the system while the congestion level remains unaffected despite increases in the number of users and the offered traffic.
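Of the five models compared, Erlang B has the best-known closed form: the blocking probability of an M/M/c/c loss system. A short sketch of the standard recursion (the paper's Markov-chain derivation and simulations are not reproduced here):

```python
def erlang_b(traffic_erlangs, channels):
    """Blocking probability of an M/M/c/c loss system via the
    numerically stable Erlang B recursion:
        B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b
```

For example, 2 Erlangs offered to 2 channels gives a blocking probability of 0.4, and adding channels at fixed load strictly reduces blocking, which is the dimensioning trade-off the compared models quantify differently.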
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Drilling rate prediction using bourgoyne and young model associated with gene...Avinash B
The Bourgoyne and Young model (BYM) has been the dominant method for drilling rate prediction. It demonstrates a relation between the drilling rate and the parameters affecting it. There are eight variables influencing the drilling rate; they depend on the ground formation type and must be determined from data gathered in advance. Bourgoyne and Young proposed a multiple regression method for determining the unknown coefficients, but that method can lead to outcomes that are physically meaningless in some circumstances. To resolve this flaw, new mathematical methods have been introduced, but using them entails a loss in the accuracy of drilling rate prediction. Our proposed method addresses both deficiencies, physically meaningless coefficients and decreased accuracy, by employing a genetic algorithm (GA) to determine the unknown parameters of the BYM. The practical data sets were nine wells of the "Khangiran" Iranian gas field. Simulation results demonstrate the efficiency of the new method for determining the constant coefficients of the Bourgoyne and Young model over previous ones.
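The core idea, bounding coefficients to a physically plausible range while minimizing squared error by evolutionary search instead of unconstrained regression, can be sketched on synthetic data. The BYM equations, the field data, and the actual GA configuration are not taken from the paper; everything below is an illustrative stand-in:

```python
import random

def ga_fit(X, y, n_coef, pop=40, gens=120, lo=0.0, hi=5.0, seed=0):
    """Minimal elitist GA: fit coefficients of y ~ X.w by minimizing squared
    error, with every coefficient clipped to [lo, hi] so the result stays
    physically plausible (unlike unconstrained multiple regression)."""
    rng = random.Random(seed)

    def sse(w):
        return sum((sum(wi * xi for wi, xi in zip(w, row)) - yi) ** 2
                   for row, yi in zip(X, y))

    population = [[rng.uniform(lo, hi) for _ in range(n_coef)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=sse)
        survivors = population[:pop // 2]        # elitism: keep the best half
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)      # uniform crossover
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            i = rng.randrange(n_coef)            # point mutation, clipped
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.2)))
            children.append(child)
        population = survivors + children
    return min(population, key=sse)
```

Because every candidate is generated inside the bounds, the search can never report a negative or otherwise implausible coefficient, which is the property the regression approach lacks.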
This document presents three techniques to accelerate the estimation of a sequence of discrete choice models (DCMs): standardization, warm start, and early stopping. Standardization rescales independent variables to a common scale. Warm start initializes subsequent models with the optimized parameters from previous models. Early stopping monitors the log likelihood improvement over iterations and stops estimation early if improvement plateaus. These techniques are tested on two sequences of 100 DCMs estimated using the BFGS algorithm. Results show that each technique individually and combined provide significant speedups for estimating sequences of DCMs.
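The warm-start and early-stopping ideas can be sketched on a toy objective. Plain gradient descent on a quadratic stands in for BFGS on a discrete choice log likelihood, and the slowly drifting targets are an invented stand-in for the sequence of related models:

```python
def minimize(f_grad, w0, lr=0.1, tol=1e-6, max_iter=10000):
    """Gradient descent with early stopping: quit as soon as the
    improvement in the objective drops below `tol` (the plateau test)."""
    w = list(w0)
    prev = float("inf")
    for it in range(max_iter):
        loss, grad = f_grad(w)
        if prev - loss < tol:          # improvement has plateaued
            return w, it
        prev = loss
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w, max_iter

def quadratic(target):
    def f_grad(w):
        loss = sum((wi - ti) ** 2 for wi, ti in zip(w, target))
        grad = [2 * (wi - ti) for wi, ti in zip(w, target)]
        return loss, grad
    return f_grad

# Estimate a sequence of nearby models: warm-starting each one from the
# previous solution needs far fewer iterations than cold-starting at zero.
targets = [[1.0 + 0.01 * k, -2.0 + 0.01 * k] for k in range(5)]
cold = warm = 0
w_prev = [0.0, 0.0]
for t in targets:
    _, it_cold = minimize(quadratic(t), [0.0, 0.0])
    w_prev, it_warm = minimize(quadratic(t), w_prev)
    cold += it_cold
    warm += it_warm
```

Since consecutive models differ only slightly, the previous optimum is already close to the new one, so the early-stopping test fires after a handful of iterations in the warm-started runs.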
Time domain analysis and synthesis using Pth norm filter designCSCJournals
This document discusses time domain analysis and synthesis using pth norm filter design. It proposes using the least pth algorithm to design multirate filter banks for analysis and synthesis. This allows exploring stability and other properties using MATLAB. The algorithm does not require adapting the weighting function or constraints during optimization. Examples are provided to illustrate the effectiveness of designing filters with good signal-to-noise ratio using analysis and synthesis. Key concepts discussed include time domain analysis using impulse response, basic multirate building blocks like decimators and interpolators, and the design formulation using pth norm and infinity norm.
DESIGN OF DELAY COMPUTATION METHOD FOR CYCLOTOMIC FAST FOURIER TRANSFORMsipij
In this paper, a delay computation method for the common subexpression elimination algorithm is implemented on the cyclotomic fast Fourier transform. The common subexpression elimination algorithm combined with the delay computing method is known as the Gate Level Delay Computation with Common Subexpression Elimination (GLDC-CSE) algorithm. Common subexpression elimination is an effective optimization method used to reduce the number of adders in the cyclotomic Fourier transform. The delay computing method is based on a delay matrix and is suitable for implementation on computers. The gate-level delay computation method is used to find the critical path delay and is analyzed over various finite field elements. The presented algorithm is demonstrated through a case study of the cyclotomic fast Fourier transform over a finite field. A direct implementation of the cyclotomic fast Fourier transform has high additive complexity; applying the GLDC-CSE algorithm reduces the additive complexity as well as the area and the area-delay product.
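The adder-reduction half of the algorithm can be illustrated in miniature: greedily factor the most frequently shared pair of XOR terms out of a set of parity equations into an intermediate signal. The delay-matrix and critical-path parts are not reproduced; this is only a sketch of the CSE step:

```python
from collections import Counter
from itertools import combinations

def xor_count(eqs):
    """XOR gates (adders over GF(2)) to compute each equation separately."""
    return sum(len(e) - 1 for e in eqs)

def cse(eqs):
    """Greedy common-subexpression elimination: while some pair of terms
    appears in two or more equations, factor it into a shared intermediate.
    Returns the total XOR count after sharing."""
    eqs = [set(e) for e in eqs]
    shared = 0                       # XORs spent computing intermediates
    next_tmp = 0
    while True:
        pairs = Counter()
        for e in eqs:
            for p in combinations(sorted(e), 2):
                pairs[p] += 1
        if not pairs:
            break
        best, freq = pairs.most_common(1)[0]
        if freq < 2:                 # nothing shared is left
            break
        tmp = f"t{next_tmp}"
        next_tmp += 1
        shared += 1
        for e in eqs:
            if best[0] in e and best[1] in e:
                e -= set(best)
                e.add(tmp)
    return xor_count(eqs) + shared
```

On three equations sharing the pair (a, b), the greedy pass drops the count from 7 XORs to 4, the same kind of additive-complexity reduction the GLDC-CSE algorithm pursues at gate level.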
Robust design of a 2 dof gmv controller a direct self-tuning and fuzzy schedu...ISA Interchange
This paper presents a study of self-tuning control strategies with generalized minimum variance control in a fixed two-degree-of-freedom structure, or simply GMV2DOF, within two adaptive perspectives: one, from the process model point of view, using a recursive least squares estimator for direct self-tuning design; the other, a Mamdani fuzzy GMV2DOF parameter-scheduling technique based on analytical and physical interpretations from a robustness analysis of the system. Both strategies are assessed in simulation and in real-plant experiments on a damped pendulum and a wind tunnel under development at the Department of Automation and Systems of the Federal University of Santa Catarina.
Performance Analysis of Input Vector Monitoring Concurrent Built In Self Repa...IJTET Journal
Abstract—Built-in self-test (BIST) techniques constitute an attractive and practical solution to the problem of testing VLSI circuits and systems. Input vector monitoring concurrent BIST schemes perform testing during the normal operation of the circuit, without requiring the circuit to be taken offline for the test. These schemes are evaluated on hardware overhead and on concurrent test latency (CTL), the time required for the test to complete while the circuit operates normally. In this brief, we present a novel input vector monitoring concurrent BIST scheme based on monitoring a set (called a window) of vectors reaching the circuit inputs during normal operation, using a static-RAM-like structure to store the relative locations of the vectors that reach the circuit inputs in the examined window; the proposed scheme is shown to perform significantly better than previously proposed schemes with respect to the hardware overhead and CTL tradeoff. The proposed method also incorporates novel BISD and BISR schemes based on fail pattern identification, together with a lossless binary matrix compression scheme to compress the test patterns in the test cubes.
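The window-monitoring idea can be sketched as a small state machine: mark each input vector that falls inside the current window, and advance once every vector of the window has been observed. This is a toy model of the concept only; the RAM organization, comparator logic, and BISD/BISR extensions of the actual scheme are not reproduced:

```python
class WindowMonitor:
    """Toy input-vector-monitoring scheme. While the circuit runs normally,
    mark each input vector falling in the current window [base, base + size).
    When every vector of the window has appeared, advance to the next window;
    the test completes once all windows have been fully exercised."""

    def __init__(self, n_inputs, window_bits):
        self.space = 1 << n_inputs           # total input vector space
        self.size = 1 << window_bits         # vectors per window
        self.base = 0
        self.seen = [False] * self.size      # the RAM-like occurrence bitmap
        self.remaining = self.size
        self.windows_left = self.space // self.size

    def observe(self, vector):
        """Record one vector from normal operation; True when testing is done."""
        offset = vector - self.base
        if 0 <= offset < self.size and not self.seen[offset]:
            self.seen[offset] = True
            self.remaining -= 1
            if self.remaining == 0:          # window fully exercised
                self.base += self.size
                self.seen = [False] * self.size
                self.remaining = self.size
                self.windows_left -= 1
        return self.windows_left == 0
```

The CTL of such a scheme is the number of normal-operation clocks until `observe` returns True, which is exactly the quantity traded against hardware overhead in the brief.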
Design and Implementation of Multiplier using Advanced Booth Multiplier and R...IRJET Journal
This document describes a proposed design for a high-speed multiplier that uses a modified Booth algorithm and adaptive hold logic (AHL) along with razor flip-flops to mitigate performance degradation due to aging effects like NBTI and PBTI. The proposed design uses a radix-4 Booth multiplier in place of traditional row/column bypass multipliers to increase throughput. It also includes an AHL circuit that can dynamically determine if an operation requires one or two cycles to complete and can adjust this determination over time to account for aging. Simulation results showed that the proposed design achieved higher performance than fixed-latency multipliers and was more resistant to aging effects.
FEEDBACK LINEARIZATION AND BACKSTEPPING CONTROLLERS FOR COUPLED TANKSieijjournal
This paper investigates the use of advanced nonlinear control algorithms to control a nonlinear coupled tanks system. The first control procedure is feedback linearisation control (FLC); this type of control proved successful in achieving global exponential asymptotic stability, with a very short response time, no significant overshoot, and a negligible error norm. The second is backstepping control (BC), a recursive procedure that interlaces the choice of a Lyapunov function with the design of feedback control; simulation results show that this method preserves tracking and robust control, and that it can often solve stabilization problems under less restrictive conditions than those encountered in other methods. Finally, both proposed control schemes guarantee asymptotic stability of the closed-loop system while meeting the trajectory-tracking objectives.
This document describes a study on enhancing the serial estimation of discrete choice models through the use of standardization, warm starting, and early stopping techniques. It first reviews literature on accelerating discrete choice model estimation and quasi-Newton optimization methods. It then details the three techniques used: standardizing variables, initializing subsequent models with previous solutions, and stopping optimization early based on log-likelihood trends. Two sequences of 100 discrete choice models are tested to evaluate the effectiveness of the techniques. Results show that warm starting parameters and the Hessian matrix from previous solutions significantly reduces estimation time compared to estimating models separately.
Nonlinear combination of intensity measures for response prediction of RC bui...openseesdays
This document discusses using nonlinear combinations of intensity measures (IMs) to more accurately predict engineering demand parameters (EDPs) for response analysis of reinforced concrete (RC) buildings. It presents an evolutionary polynomial regression (EPR) technique to model complex nonlinear relationships between IMs and EDPs without assumptions about the form of the relationship. The EPR technique is applied to dynamic analyses of an RC framed building to predict maximum inter-story drift ratio and maximum floor acceleration under earthquake ground motions, demonstrating more accurate predictions compared to a single IM, especially for base-isolated buildings. The document concludes more accurate EDP predictions can be obtained through nonlinear IM combinations and advocates improving data-driven modeling before proposing new IMs.
Predicting (Nk) factor of (CPT) test using (GP): Comparative Study of MEPX & GN7Ahmed Ebid
The static cone penetration test (CPT) is a widely accepted and dependable geotechnical in-situ tool that quickly provides a substantial amount of reliable data about soil classification, stratification, and properties. The undrained shear strength of clay (cu) is one of the principal soil parameters that can be reasonably evaluated from CPT results, as it is directly connected to the tip resistance through the empirical cone factor (Nk). Earlier research showed that the (Nk) value depends on the soil type, its nature, stress history conditions, and many other variables. Construction development in some locations with thick deposits of soft to very soft clays has motivated extensive research to define a reasonable (Nk) value for such clays. The performed study concentrated on using the genetic programming technique (GP) to predict the (Nk) value of clay from the consistency limits, which are easily determined in the laboratory. A set of 102 records was gathered from CPT site investigations together with the corresponding consistency limits and other physical properties, and divided into a training set of 72 records and a validation set of 30 records. Both the (GN7) and (MEPX) software were used to apply (GP) to the available data. Four trials per software package, with different chromosome lengths, were performed to correlate the (Nk) factor with the clay consistency limits, water content (wc), and unit weight (γ) using the training set; the produced relations were then tested on the validation set. The four formulas generated by (GN7) showed accuracies between 93% and 97% and coefficients of determination (R2) between 0.7 and 0.9, while the four formulas from (MEPX) showed accuracies not exceeding 95% and (R2) between 0.45 and 0.75.
CARI2020: A CGM-Based Parallel Algorithm Using the Four-Russians Speedup for ...Mokhtar SELLAMI
The document describes a parallel algorithm for 1-D sequence alignment using the Coarse-Grained Multicomputer (CGM) model. The algorithm partitions the sequence alignment problem into smaller subproblems distributed across multiple processors. It requires O(n²/p log n) time using O(p) communication rounds on p processors, improving on the sequential O(n²)-time algorithm. Each processor first computes a lookup table efficiently in parallel (the Four-Russians speedup) before equal-sized macro-blocks of the task graph are distributed across processors to compute the sequence alignment in parallel.
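For reference, the sequential dynamic-programming baseline that the CGM algorithm tiles into macro-blocks looks like this. A minimal global-alignment (Needleman-Wunsch style) scorer with illustrative scoring parameters; the parallel distribution and the Four-Russians lookup table are not shown:

```python
def align_score(a, b, match=1, mismatch=-1, gap=-1):
    """Sequential O(n*m) global alignment score. Each cell (i, j) depends on
    its three upper-left neighbors, which is the data dependence the CGM
    algorithm exploits when distributing macro-blocks of the table."""
    n, m = len(a), len(b)
    prev = [j * gap for j in range(m + 1)]       # row 0: all gaps
    for i in range(1, n + 1):
        cur = [i * gap] + [0] * m
        for j in range(1, m + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur[j] = max(diag,                   # align a[i-1] with b[j-1]
                         prev[j] + gap,          # gap in b
                         cur[j - 1] + gap)       # gap in a
        prev = cur
    return prev[m]
```

Because cell (i, j) needs only row i-1 and the current row, macro-blocks along an anti-diagonal are mutually independent, which is what allows them to be evaluated on different processors in the same communication round.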
A Template Matching Approach to Classification of QAM Modulation using Geneti...CSCJournals
The automatic recognition of the modulation format of a detected signal, the intermediate step between signal detection and demodulation, is a major task of an intelligent receiver, with various civilian and military applications. With no knowledge of the transmitted data and many unknown parameters at the receiver, such as the signal power, carrier frequency and phase offsets, and timing information, blind identification of the modulation is a difficult task, and it becomes even more challenging in real-world conditions. In this paper, modulation classification for QAM is performed by a genetic algorithm followed by template matching, considering the constellation of the received signal. In addition, the classification finds the decision boundary of the signal, which is critical information for bit detection. I have proposed and implemented a technique that casts modulation recognition as shape recognition; the constellation diagram is a traditional and powerful tool for the design and evaluation of digital modulations. The simulation results show the capability of this method for modulation classification with high accuracy and appropriate convergence in the presence of noise.
Effect of Residual Modes on Dynamically Condensed Spacecraft StructureIRJET Journal
This document discusses the effect of residual modes on the fundamental frequencies of a condensed spacecraft structure. It presents the modeling and dynamic analysis of a spacecraft bus structure using finite element analysis. The structure is condensed using the Craig-Bampton method to reduce the degrees of freedom. Residual modes are then computed and included to recover data lost during condensation. The results show that including residual modes provides frequencies for the condensed structure that closely match those of the original full structure model, demonstrating the effectiveness of using residual modes for data recovery after structural condensation.
Symbol Based Modulation Classification using Combination of Fuzzy Clustering ...CSCJournals
Most approaches to the recognition and classification of modulation have been founded on the modulated signal's components. In this paper, we develop an algorithm using fuzzy clustering followed by hierarchical clustering, considering the constellation of the received signal, to identify the modulation types of communication signals automatically. The simulations conducted show the high capability of this method for recognizing modulation levels in the presence of noise, and the method is applicable to digital modulations of arbitrary size and dimensionality. In addition, the classification finds the decision boundary of the signal, which is critical information for bit detection.
Fault detection and diagnosis for non-Gaussian stochastic distribution system...ISA Interchange
This document proposes a new method for fault detection and diagnosis of non-Gaussian stochastic distribution systems using radial basis function neural networks. It introduces an RBF neural network technique to approximate the output probability density functions of such systems. It then presents a nonlinear adaptive observer-based algorithm for fault detection and diagnosis. The algorithm introduces a tuning parameter to make the residual as sensitive as possible to faults. It analyzes the stability and convergence of the error dynamics for both fault detection and diagnosis. An example is provided to demonstrate the effectiveness of the proposed method.
Transpose Form Fir Filter Design for Fixed and Reconfigurable CoefficientsIRJET Journal
This document discusses the design of transpose form finite impulse response (FIR) filters for both fixed and reconfigurable coefficients. Transpose form FIR filters naturally support the multiple constant multiplication technique, which can reduce computational delay. For fixed coefficients, a low-complexity design using multiple constant multiplication is implemented, reducing area and delay compared to direct form FIR filters. For reconfigurable coefficients, a multiplier-based design is used. Simulation results show the transpose form FIR filter achieves lower area and delay than the direct form structure.
Maximum likelihood estimation-assisted ASVSF through state covariance-based 2...TELKOMNIKA JOURNAL
The smooth variable structure filter (SVSF) has been regarded as a relatively new robust predictor-corrector method for state estimation. To be used effectively, an SVSF requires an accurate system model and exact prior knowledge of both the process and measurement noise statistics. Unfortunately, the system model is always somewhat inaccurate because of simplifications made at the outset, and the small additive noises are partially known or even unknown. This limitation can degrade the performance of the SVSF or even lead to divergence. For this reason, this paper proposes an adaptive smooth variable structure filter (ASVSF) obtained by conditioning the probability density function of a measurement on the unknown parameters at each iteration. The proposed method is applied to the localization and direct point-based observation task of a wheeled mobile robot, TurtleBot2. Finally, realistic simulation and comparison with a conventional method show that the proposed method achieves better accuracy and stability in terms of the root mean square error (RMSE) of the estimated map coordinate (EMC) and estimated path coordinate (EPC).
International Journal of Computational Engineering Research (IJCER) is an international online journal in English, published monthly. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Optimization of Test Pattern Using Genetic Algorithm for Testing SRAMIJERA Editor
This document proposes optimizing test patterns for testing SRAM using genetic algorithms. The genetic algorithm approach generates a compact set of test vectors requiring fewer configurations while detecting all stuck-at, open, and bridging faults. It reduces redundancy and optimizes the test patterns, resulting in reduced testing time and power consumption compared to previous methods. The optimized test patterns are generated using a linear feedback shift register and genetic operations including selection, crossover and mutation. Results show the proposed method achieves fault detection with a minimum time constraint of 1.5-2 ns and low power consumption of 220-230 mW.
This document describes a new recursive Monte Carlo simulation algorithm called the Sampled Path Set Algorithm (SPSA) for modeling complex k-out-of-n reliability systems. The SPSA uses a graph representation of a reliability block diagram and recursively searches the graph to determine system response based on the system state vector at each simulation iteration, allowing modeling of systems with general component failure and repair distributions and large numbers of components. Existing methods for analyzing such systems using tie/cut sets have limitations as the number of sets grows non-linearly with increased system complexity. The SPSA provides a more efficient alternative with linear growth in processing and memory requirements.
Heuristic approach to optimize the number of test cases for simple circuitsVLSICS Design
In this paper a new solution is proposed for testing simple two-stage electronic circuits. It minimizes the number of tests that must be performed to determine whether the circuit is genuine. The main idea behind the present research work is to identify the maximum number of indistinguishable faults present in the given circuit and to minimize the number of test cases based on the number of faults detected. A heuristic approach is used for the test minimization step, which identifies the essential tests among the overall test cases. The results show that test minimization varies from 50% to 99%, with the lowest figure corresponding to a circuit with four gates. For Boolean expressions with the same number of symbols, test minimization is lower for circuits whose gates have fewer input leads than for those with more. The 99% reduction is achieved because a large number of tests find the same faults. The new approach is implemented for simple circuits, and the results show potential for both smaller test sets and lower CPU times.
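Identifying essential tests is an instance of set cover over the fault-coverage relation. A minimal greedy sketch of the idea (the paper's specific heuristic and fault model are not reproduced; this is the textbook greedy approximation):

```python
def minimize_tests(coverage):
    """Greedy essential-test selection. `coverage` maps each test to the set
    of faults it detects. Repeatedly keep the test covering the most
    still-undetected faults; a heuristic for the NP-hard minimum test-set
    problem, exploiting the fact that many tests find the same faults."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining faults are undetectable by any test
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen
```

When one test subsumes the coverage of several others, as in circuits with many indistinguishable faults, the chosen set shrinks sharply, which is the mechanism behind the 50-99% reductions reported.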
Compensation of Data-Loss in Attitude Control of Spacecraft Systems rinzindorjej
In this paper, a comprehensive comparison of two robust estimation techniques, compensated closed-loop Kalman filtering and open-loop Kalman filtering, is presented. The common problem of data loss in a real-time control system is investigated through these two schemes. The open-loop scheme for handling data loss suffers from several shortcomings. These are overcome in the compensated scheme, where an accommodating observation signal is obtained through a linear prediction technique (a closed-loop setting) and adopted in the a posteriori update step; computing and employing this signal does, however, add computational complexity. For simulation purposes, a linear time-invariant spacecraft model is obtained from the nonlinear spacecraft attitude dynamics through linearization at nonzero equilibrium points, found offline through a Levenberg-Marquardt iterative scheme. The selected example is analyzed from multiple perspectives in order to display the performance of the two techniques.
Probability Measurement of Setup And Hold Time With Statistical Static Timing...IJERA Editor
This document summarizes a methodology for performing statistical static timing analysis (SSTA) that accounts for the statistical codependence of setup and hold times. It presents a 3-step approach: 1) approximating probability mass functions (pmf) of codependent setup and hold time contours considering variability, 2) computing pmfs of required setup and hold times for each flip-flop, and 3) using these pmfs to compute probabilities of timing constraint violations. The method was applied to true single phase clocking flip-flops to generate piecewise linear characterization curves. An example design was used to demonstrate successful statistical timing verification that considers variability impacts.
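Step 3 of the approach, turning pmfs into violation probabilities, reduces to a comparison of two discrete distributions. A minimal sketch under the simplifying assumption that the required setup time and the available slack are independent discrete pmfs (the paper models their codependence, which is not reproduced here):

```python
def violation_probability(required_pmf, slack_pmf):
    """P(timing violation) = P(required setup time > available slack),
    with both quantities given as discrete pmfs {value: probability}.
    Independence is assumed for this sketch."""
    return sum(pr * ps
               for r, pr in required_pmf.items()
               for s, ps in slack_pmf.items()
               if r > s)
```

For instance, if the required setup time is 1 or 3 time units with equal probability and the slack is fixed at 2 units, the constraint is violated with probability 0.5; the full SSTA flow evaluates this per flip-flop using the characterized piecewise-linear curves.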
IRJET- Performance Analysis of a Synchronized Receiver over Noiseless and Fad...IRJET Journal
This document summarizes the performance analysis of a synchronized receiver over noiseless and fading channels. It presents a baseband communications system that implements data phase transmission using a single-tone waveform. The behavior of the received signal when transmitted over a noiseless channel and a fading frequency selective channel with additive white Gaussian noise is analyzed. Key plots analyzed include the channel output power spectrum, cross-spectral phase between the equalizer input and output, control signal for the equalizer, and scatter plots of the equalizer input, output, and descrambler output. The analysis shows distortion of the signal due to noise in the fading channel.
Data-driven adaptive predictive control for an activated sludge processjournalBEEI
Data-driven control requires no information of the mathematical model of the controlled process. This paper proposes the direct identification of controller parameters of activated sludge process. This class of data-driven control calculates the predictive controller parameters directly using subspace identification technique. By updating input-output data using receding window mechanism, the adaptive strategy can be achieved. The robustness test and stability analysis of direct adaptive model predictive control are discussed to realize the effectiveness of this adaptive control scheme. The applicability of the controller algorithm to adapt into varying kinetic parameters and operating conditions is evaluated. Simulation results show that by a proper and effective excitation of direct identification of controller parameters, the convergence and stability of the implicit predictive model can be achieved.
IMPLEMENTATION OF COMPACTION ALGORITHM FOR ATPG GENERATED PARTIALLY SPECIFIED...VLSICS Design
In this paper an ATPG is implemented in C++. The ATPG is based on the fault equivalence concept, in which the number of faults is reduced before the compaction step. It uses line justification and error propagation, aided by controllability and observability, to find test vectors for the reduced fault set. The single stuck-at fault model is considered. Programs were developed for the fault equivalence method, controllability and observability, automatic test pattern generation, and test data compaction, all in the object-oriented language C++. The ISCAS 85 C17 circuit was used for analysis along with other circuits, in the standard ISCAS (International Symposium on Circuits And Systems) netlist format. Flow charts and results for the ISCAS 85 C17 circuit and other netlists are given in this paper. The test vectors generated by the ATPG are further compacted to reduce the test vector data; the compaction algorithm is described along with its results.
Fault detection based on novel fuzzy modelling csijjournal
Fault detection based on fuzzy modeling is investigated. A Takagi-Sugeno (TS) fuzzy model can be derived by structure and parameter identification when only the input-output data of the identified system are available. In the structure identification step, the Gustafson-Kessel clustering algorithm (GKCA) is used to detect clusters of different geometrical shapes in the data set and to obtain the point-wise membership function of the premise. In the parameter identification step, the unscented Kalman filter (UKF) is used to estimate the parameters of the premise's membership function. For the consequent part, the Kalman filter (KF) algorithm is applied as a linear regression to estimate the parameters of the TS model from the input-output data set. The resulting fuzzy model is then used to detect faults. Simulations demonstrate the effectiveness of the theoretical results.
1) The planning of Physical Cell Identities (PCI) has a strong impact on the performance of LTE networks. PCI planning determines the frequency locations of cell-specific reference signals, which can collide between neighboring cells and degrade signal quality estimates.
2) Most research on PCI planning has focused on avoiding collisions and confusion between PCI assignments. However, no study has quantified the impact of PCI planning on downlink throughput performance.
3) This paper develops an analytical model to quantify the influence of PCI planning on reference signal collisions. It then uses a system-level simulator to test different PCI planning schemes and analyze their impact on call blocking, dropping, and user throughput under various traffic conditions.
Robust fault detection for switched positive linear systems with time varying...ISA Interchange
This paper investigates the problem of robust fault detection for a class of switched positive linear systems with time-varying delays. The fault detection filter is used as the residual generator, in which the filter parameters are dependent on the system mode. Attention is focused on designing the positive filter such that, for model uncertainties, unknown inputs and the control inputs, the error between the residual and fault is minimized. The problem of robust fault detection is converted into a positive L1 filtering problem. Subsequently, by constructing an appropriate multiple co-positive type Lyapunov–Krasovskii functional, as well as using the average dwell time approach, sufficient conditions for the solvability of this problem are established in terms of linear matrix inequalities (LMIs). Two illustrative examples are provided to show the effectiveness and applicability of the proposed results.
This document provides a tutorial on the RESTART rare event simulation technique. RESTART introduces nested sets of states (C1 to CM) defined by importance thresholds. When the process enters a set Ci, Ri simulation retrials are performed within that set until exiting. This oversamples high importance regions to estimate rare event probabilities more efficiently than crude simulation. The document analyzes RESTART's efficiency, deriving formulas for the variance of its estimator and gain over crude simulation. Guidelines are also provided for choosing an optimal importance function to maximize efficiency. Examples applying the guidelines to queueing networks and reliable systems are presented.
This document discusses a technique called Linear Feedback Shift Register - Bit Complement Test Pattern Generation (LFSR-BCTPG) to minimize test power in VLSI circuits. The LFSR-BCTPG generates test patterns with the LFSR but complements output bits to reduce repeated patterns. This increases unique test vectors and improves fault coverage while reducing circuit size and dynamic power compared to a conventional LFSR. The technique generates test patterns by alternating the enable signals to the two halves of the LFSR flip-flops. Evaluation on ISCAS benchmark circuits showed this method reduces dynamic power consumption during testing.
Similar to A Simulation Experiment on a Built-In Self Test Equipped with Pseudorandom Test Pattern Generator and Multi-Input Shift Register (MISR) (20)
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
A Simulation Experiment on a Built-In Self Test Equipped with Pseudorandom Test Pattern Generator and Multi-Input Shift Register (MISR)
International journal of VLSI design & Communication Systems (VLSICS) Vol.1, No.4, December 2010
DOI : 10.5121/vlsic.2010.1401
A Simulation Experiment on a Built-In Self Test
Equipped with Pseudorandom Test Pattern
Generator and Multi-Input Shift Register (MISR)
Afaq Ahmad
Department of Electrical and Computer Engineering
College of Engineering, Sultan Qaboos University
P. O. Box 33, Postal Code 123; Muscat, Sultanate of Oman
Telephone: (+ 968) 2414 1327
Fax: (+968) 2441 3416
afaq@squ.edu.om
ABSTRACT
This paper investigates the impact of changes in the characteristic polynomials and initial loadings on the behaviour of aliasing errors of a parallel signature analyzer (Multi-Input Shift Register), used in an LFSR-based digital circuit testing technique. The investigation is carried out through an extensive simulation study of the effectiveness of the LFSR-based digital circuit testing technique. The results of the study show that when identical characteristic polynomials of order n are used in both the pseudo-random test-pattern generator and the Multi-Input Shift Register (MISR) signature analyzer (parallel type), the probability of aliasing errors remains unchanged under changes in the initial loading of the
pseudo-random test-pattern generator.
KEYWORDS
LFSR, MISR, BIST, Characteristic Polynomial, Primitive
1.0 INTRODUCTION
Reliability is one of the main considerations in any circuit design. It involves a correct and
predictable behaviour of the circuit according to design specifications over a sufficiently long
period of time. To achieve this goal, the logic-circuit design is aimed at an error-free circuit
operation. Unfortunately, in spite of all possible care being bestowed on design and simulation,
hardware faults resulting from physical defects (e.g. mask defects, manufacturing process flaws)
will occur in the hardware implementation of the logic circuit. Hence, when a fault does occur, one must be able to detect its presence and, if desired, to pinpoint its location. This task is accomplished by testing the circuit. System maintenance draws heavily
upon the testing capability of the logic system [1], [2].
Digital circuit manufacturers are well aware of the need to incorporate testability features early in the design stage; otherwise they incur higher testing costs later. An empirical relationship that has been used for estimating the cost of finding a faulty chip indicates that the cost increases by a factor of 10 as fault finding advances from one level to the next [3] – [7]. However, recent studies have shown that the cost of testing and fault finding, at system and field level, is higher than this factor of 10 and increases exponentially [7] – [9].
Thus, if a fault can be detected at chip or board level, then significantly larger costs per fault can
be avoided. This is the prime reason that attention has now focused on providing testability at
chip, module or even at board level.
Any test methodology usually consists of
(i) A test strategy for generating the test-stimuli,
(ii) A strategy for evaluating output responses, and
(iii) Implementation mechanisms to realize the appropriate strategies in test-generation and
response evaluation.
Present day philosophy to achieve economical and cost effective testing of Very Large Scale
Integration (VLSI) components and systems is to provide on-chip testing. Though these
techniques involve additional chip area for the added test circuitry, they have provided
reasonable testability levels. In fact, it has been reported that, at the cost of about 20% additional silicon area, more than 98% of the chip design can be checked using structured Design For Testability (DFT) approaches [7] – [11].
As a natural outcome of the structured design approach for DFT, built-in testing has drawn
considerable attention. Usually a built-in test methodology is defined as the one that
incorporates both test-pattern generation and response data compression mechanisms internally
in the chip itself. If this methodology is self-sufficient in detecting the faults of its internal test
circuitry also, then such a methodology is referred to as Built-In Self-Test (BIST) in test
literature. The main emphasis in BIST designs is to provide close to one hundred percent testing of combinational circuits [10], [12]. In particular, pseudo-random test-pattern generation followed by the compression of response data by signature analysis has become a standard form of testing in the BIST environment. Linear Feedback Shift Registers (LFSRs) have been proposed as an integral part of a sequential logic design, such that they can be used to both generate and compact the results of a test. Undoubtedly, LFSR-based pseudo-random test-pattern generation is an extremely simple tool for generating the desired sequence as well as the desired length of the test-stimuli. Many estimation schemes are readily available for computing the length of test patterns, where the desired sequence of test-patterns can be obtained from the predetermined seed of the LFSR. Further, the effective testing of large circuits uses the concept of 'pseudo-exhaustive testing', where the principle of divide and conquer is applied [13] – [19].
Difficulty arises when the resulting response data obtained from the Circuit Under Test (CUT) is compressed into small signatures using a Signature Analyzer (SAZ). Although this scheme of SAZ is easily implemented by an LFSR, either in the form of a Single Input Shift Register (SISR) – serial SAZ – or a MISR – parallel SAZ – it leads to loss of information, because erroneous response patterns can get compressed into the same signature as the fault-free signature of the CUT. Thus, some of the faults might go undetected due to this masking phenomenon. This compression can therefore reduce the fault-coverage in the BIST scheme. This problem of error masking is called aliasing [20] – [25].
Methods to determine the extent of fault escape caused by a response compressor are not readily
available. However, various attempts have been made to analyze and improve the basic SAZ realization methods [26] – [33]. The end goal of the above schemes, individually or in combination, is to reduce the deception volume [26]. Methods that consider both the test pattern generator and the response data compressor in totality and simultaneously when analyzing the aliasing behaviour of the circuit are not available. There is a growing need for such an approach, which comprehensively looks at aliasing problems with respect to both the test pattern generator and the response data compressor and reflects the true aliasing characteristics.
This is the prime justification for the research work in this area. Accordingly, research work in this direction has been reported in [34], [35]. Through these papers, different studies were carried out to investigate the roles of the characteristic polynomials used in the Pseudo-Random Test-Pattern Generator (PRTPG) as well as in the SAZ, and of the initial loading of the PRTPG, in the behaviour of aliasing errors of the SAZ. The work in [34], [35] used different structures (internal and external exclusive-OR types) of LFSRs. However, both papers considered the SISR type of SAZ. This paper is an extension of the previous research, where the MISR type of SAZ is used. In this work, an external exclusive-OR type LFSR is used in the PRTPG and an internal exclusive-OR type LFSR in the MISR. The results of the study show that the
probability of aliasing errors remains unchanged due to the change in the initial loading when
the identical characteristic polynomials of order n are used in both PRTPG, as well as in SAZ
(MISR type).
2.0 MATHEMATICAL CHARACTERIZATION OF LFSR
In this section we briefly consider the mathematical background, definitions and theorems related to an LFSR. Details can be found in the literature [36] – [39]. Basic definitions and theorems are given below for the sake of completeness.
Let [A] represent the state transition matrix of order n × n, for an n stage LFSR shown in Figure
1. Assume the state at any time ‘t’ be represented by vector [Q(t)] = [qn(t), ... ,qj(t), ... ,q2(t),
q1(t)] (which is effectively the contents of the LFSR) where each qj represents the state of the jth
stage of the LFSR. Further, let the LFSR feedback stages be numbered from C0 to Cn,
proceeding in the same direction as the shifting occurs i.e. left to right. Let the present state of
the LFSR be represented by [Q(t)] and, one clock later, the next state by [Q(t+1)]; then the
relationship between the two states is given by Equation (1).
Figure 1. An n-bit LFSR model

[Q(t+1)] =
| 0    1    0   ...   0  |
| 0    0    1   ...   0  |
| :    :    :         :  |
| 0    0    0   ...   1  |
| Cn  Cn-1 Cn-2 ...  C1  |  [Q(t)]        (1)
where Cj = 0 or 1 for 1 ≤ j ≤ n-1, and Cj = 1 for j = n.        (2)

In Equation (2), the values of Cj show the existence or absence of a feedback connection from the jth stage of the LFSR. Equation (1) can be written as
[Q(t+1)] = [A][Q(t)] (3)
If [Q] = [Q(0)] represents a particular initial loading of the LFSR, then the sequence of states through which the LFSR will pass during successive times is given by

[Q(t)], [A][Q(t)], [A]^2[Q(t)], [A]^3[Q(t)], …

Let the matrix 'period' be the smallest integer p for which [A]^p = I, where I is an identity matrix. Then [A]^p[Q(t)] = [Q(t)] for any non-zero initial vector [Q(0)], indicating the 'cycle length (or period)' of the LFSR is p.
Thus, on the basis of this property of periodicity of the LFSR and Equation (3), it follows that

[Q(t)] = [Q(t+p)] = [A]^p [Q(t)]        (4)
Definition 1: The cycle length p for the vector [Q(0)] = 0 is always 1, which is independent of the matrix [A].

Definition 2: The period p of an n-bit LFSR will only be maximal when p = m = 2^n - 1.
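Definition 2 can be checked numerically from the state-transition matrix. The sketch below is a minimal Python illustration (the degree-3 polynomials used here are assumed examples, not prescriptions of the paper) that finds the smallest p with [A]^p = I:

```python
# Sketch: period of an LFSR from its transition matrix [A] over GF(2).
# The feedback row [C_3, C_2, C_1] = [1, 0, 1] corresponds to the
# primitive polynomial 1 + x + x^3 (an assumed example).

def companion_matrix(coeffs):
    """Build [A]: shifted identity plus the feedback row [C_n, ..., C_1]."""
    n = len(coeffs)
    A = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1              # superdiagonal carries the shift
    A[n - 1] = list(coeffs)          # bottom row carries the feedback taps
    return A

def mat_mult(X, Y):
    """Matrix product over GF(2)."""
    n = len(X)
    return [[sum(X[i][k] & Y[k][j] for k in range(n)) % 2
             for j in range(n)] for i in range(n)]

def lfsr_period(coeffs):
    """Smallest p with [A]^p = I (Definition 2); requires C_n = 1."""
    n = len(coeffs)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    A = companion_matrix(coeffs)
    P, p = A, 1
    while P != I:
        P = mat_mult(P, A)
        p += 1
    return p

print(lfsr_period([1, 0, 1]))   # -> 7 = 2^3 - 1 (primitive, maximal period)
```

A non-primitive choice such as 1 + x + x^2 + x^3 (feedback row [1, 1, 1]) yields a shorter, non-maximal period of 4, which is why only primitive polynomials are of interest for maximal-length PRTPGs.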
For the matrix [A] of the LFSR, the characteristic equation is given by determinant [A - λI] = 0. Thus,

F(λ) = det | -λ    1    0   ...   0   |
           |  0   -λ    1   ...   0   |
           |  :    :    :         :   |
           |  0    0    0   ...   1   |
           | Cn  Cn-1  Cn-2 ... C1-λ  |        (5)
Definition 3: For the matrix [A] of an LFSR, the polynomial {determinant [A - λI]} is called the characteristic polynomial F(λ) of the LFSR and can be written as

F(λ) = 1 + Σ_{j=1}^{n} Cj λ^j ;  Cn = 1.        (6)
Let T(λ) denote the characteristic polynomial of an n-stage LFSR used in the PRTPG, and let {a-1, a-2, ..., a-n+1, a-n} be the initial state of the shift register. The sequence of numbers a0, a1, a2, ..., aq, ... can be associated with a polynomial, called the generating function M(λ), by the rule

M(λ) = a0 + a1 λ + ... + aq λ^q + …

Let {aq} = a0, a1, a2, ... represent the output sequence generated by an LFSR used as the PRTPG, where ai = 0 or 1. Then this sequence can be represented as

M(λ) = Σ_{q=0}^{∞} aq λ^q        (7)
From the structure of this type of LFSR, shown in Figure 1, it can be seen that if the current state of the jth flip-flop is a(q-j), for j = 1, 2, ..., n, then the output obeys the recurrence relation

aq = Σ_{j=1}^{n} Cj a(q-j)        (8)
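The recurrence of Equation (8) can be illustrated by a short Python sketch (an assumed example: the primitive polynomial T(λ) = 1 + λ + λ^3, i.e. C1 = 1, C2 = 0, C3 = 1, and two arbitrary seeds, none prescribed by the paper):

```python
# Sketch: PRTPG output {a_q} from a_q = sum_j C_j * a_{q-j} (mod 2),
# Equation (8), for the assumed polynomial 1 + x + x^3.

def prtpg_sequence(taps, seed, length):
    """taps[j-1] = C_j; seed = [a_-1, a_-2, ..., a_-n] (initial loading)."""
    history = list(seed)                   # history[j-1] holds a_{q-j}
    out = []
    for _ in range(length):
        a_q = sum(c & h for c, h in zip(taps, history)) % 2
        out.append(a_q)
        history = [a_q] + history[:-1]     # slide the window one step
    return out

taps = [1, 0, 1]                           # C_1, C_2, C_3
s1 = prtpg_sequence(taps, [1, 0, 0], 7)    # initial loading a_-1 = 1
s2 = prtpg_sequence(taps, [0, 1, 0], 7)    # initial loading a_-2 = 1
print(s1, s2)
# Both seeds yield the same length-7 m-sequence up to a cyclic shift: the
# initial loading fixes only the phase of the generated test-stimuli.
```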
Substituting (8) in (7),

M(λ) = Σ_{q=0}^{∞} Σ_{j=1}^{n} Cj a(q-j) λ^q        (9)
Or, by solving for the generating function, it can be shown that M(λ) is given by

M(λ) = [ Σ_{j=1}^{n} Cj λ^j ( a(-j) λ^(-j) + a(-j+1) λ^(-j+1) + ... + a(-1) λ^(-1) ) ] / [ 1 + Σ_{j=1}^{n} Cj λ^j ]        (10)
M(λ) = [ Σ_{j=1}^{n} Cj ( a(-j) + a(-j+1) λ + ... + a(-1) λ^(j-1) ) ] / [ 1 + Σ_{j=1}^{n} Cj λ^j ]        (11)
Or,

M(λ) = B(λ) / T(λ)        (12)
Thus, the PRTP (represented by the polynomial M(λ)) generated by the LFSR can be obtained through the long division of the function B(λ) by T(λ). Therefore, it can be inferred from Equation (11) that the generated sequence M(λ) is a function of the initial loading as well as of the characteristic polynomial of the LFSR used in the realization of the PRTPG.
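Equation (12) can be checked by performing the long division B(λ)/T(λ) over GF(2) as a power-series expansion. The sketch below assumes the illustrative choice T(λ) = 1 + λ + λ^3 with initial loading a(-1) = 1, a(-2) = a(-3) = 0, for which Equation (11) gives B(λ) = 1 + λ^2 (these values are examples, not taken from the paper):

```python
# Sketch: power-series expansion of M(lambda) = B(lambda)/T(lambda) over
# GF(2), Equation (12). Coefficient lists have the constant term first.

def series_divide(b, t, terms):
    """First `terms` coefficients of B/T over GF(2); requires t[0] = 1."""
    b = b + [0] * terms                 # pad the numerator with zeros
    m = []
    for q in range(terms):
        m_q = b[q]
        # subtract (XOR) contributions of earlier quotient bits times T
        for j in range(1, min(q, len(t) - 1) + 1):
            m_q ^= m[q - j] & t[j]
        m.append(m_q)
    return m

T = [1, 1, 0, 1]        # 1 + x + x^3
B = [1, 0, 1]           # 1 + x^2, from the seed via Equation (11)
print(series_divide(B, T, 7))   # -> [1, 1, 0, 1, 0, 0, 1]
```

The expanded series coincides with the sequence produced by the recurrence of Equation (8) for the same seed, as Equation (12) asserts.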
3.0 MULTI-INPUT SHIFT REGISTER (MISR) MODEL
Figure 2 shows a typical MISR configuration. This configuration of MISR is based on internal-EXOR feedback. In the figure, n denotes the length of the MISR, i.e., the number of FFs in the register. Also, the LFSR feedback stages, numbered from Cn to C1, are the binary coefficients of the characteristic polynomial P(λ) of the MISR:
Figure 2. An n-bit MISR model
P(λ) = 1 + Σ_{j=1}^{n} Cj λ^j        (13)
Let ri be the content of the ith FF; then the state of the MISR (i.e., the sequence r = rn + rn-1 + ... + r2 + r1) can be represented by the state polynomial

s(λ) = rn λ^(n-1) + rn-1 λ^(n-2) + ... + r2 λ + r1        (14)
Similarly, an n-bit input sequence (d = Dn + Dn-1 + ... + D2 + D1) can be represented by the input polynomial

d(λ) = Dn λ^(n-1) + Dn-1 λ^(n-2) + ... + D2 λ + D1        (15)
The signature obtained by the SAZ (either SISR or MISR) is defined as the final state of the register after the input sequence d has been entered into the register. Then the MISR signature can be given as

s(λ) = d(λ) mod P(λ)        (16)

Thus, the theory behind the use of the LFSR for the SAZ is based on the concept of the polynomial division process, where the remainder left in the register after the last bit of input data is sampled corresponds to the final signature, whereas the output sequence from the nth bit of the LFSR defines the quotient QO of the division. In general, the shift register is initialized by the reset or by the parallel-load function of the register, at the time of fault-free evaluation as well as every time a new fault is injected in the CUT. Assume that the CUT is of combinational nature with n primary inputs and k primary outputs. If the initial state of the LFSR is all 0's, let the final state of the LFSR be represented by the polynomial s(λ). Then it can be shown that these polynomials are related by the equation
d(λ) / P(λ) = QO(λ) + s(λ) / P(λ)        (17)

where P(λ) is the characteristic polynomial of the LFSR structure used in the MISR-SAZ. Hence an LFSR carries out polynomial division on the input data stream by the characteristic polynomial, producing an output stream corresponding to the quotient QO(λ) and a remainder s(λ).
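The division view of Equations (16) and (17) can be sketched in Python as follows (the divisor P(λ) = 1 + λ + λ^3 and the 8-bit response stream are invented for illustration, not data from the paper):

```python
# Sketch: signature analysis as polynomial long division over GF(2),
# d(lambda) = QO(lambda) * P(lambda) + s(lambda), Equation (17).
# Bit lists are written highest-degree coefficient first.

def poly_divmod(d, p):
    """Return (quotient QO, remainder s = signature) of d / p over GF(2)."""
    d = list(d)
    n = len(p)
    q = []
    for i in range(len(d) - n + 1):
        q.append(d[i])
        if d[i]:
            for j in range(n):          # subtract (XOR) the shifted divisor
                d[i + j] ^= p[j]
    return q, d[-(n - 1):]

P = [1, 0, 1, 1]                        # x^3 + x + 1, high-to-low
d = [1, 1, 0, 1, 0, 0, 1, 1]            # an assumed 8-bit response stream
quotient, signature = poly_divmod(d, P)
print(signature)                        # -> [0, 0, 1], the 3-bit signature
```

Any faulty response stream that leaves the same 3-bit remainder would alias with this signature, which is the masking phenomenon discussed in the introduction.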
4.0 SIMULATION STUDY
The testing model employed in the simulation study for the PRTPG and the MISR is as described in the preceding sections and shown in Figures 1 and 2. Various combinational circuits have been simulated using the manufacturers' logic diagrams with gate-level descriptions. A single stuck-at fault model is assumed, where s-a-0 and s-a-1 faults are postulated on each of the NL individual branches of each CUT. For each n-input CUT, all possible characteristic polynomials of order n are applied, identically, to both the PRTPG and the MISR-SAZ. All 2^n - 1 possible initial loadings of the PRTPG are exhausted to monitor the aliasing error behaviour of the MISR-SAZ. These characteristic polynomials are generated using the algorithms developed and reported in [2], [40] and [41]. For readability, the simulation procedure used to study the aliasing behaviour is described below.
Simulation procedure

Begin (for an n-input CUT)
    NL = total number of branches in the CUT;
    NP = total number of possible characteristic polynomials of order n over GF(2);
    Li = periodicity of the ith characteristic polynomial of order n;
    NS = total number of possible initial loadings {NS = 2^n - 1, excluding S = [000...0]};
    RC = aliasing count;
    For i = 1 to NP, do
        For r = 1 to NS, do
        Begin
            Choose the ith characteristic polynomial for the PRTPG as well as for the SAZ {i.e., Ti(λ) and Pi(λ), respectively};
            Choose the rth initial loading Sr;
            Generate test stimuli of length Li;
            Choose circuit response dg {fault-free circuit response};
            Compute sg {fault-free signature};
            Initialize aliasing count RC;
            For t = 1 to 2NL, do
            Begin
                Choose circuit response dft {df1 has fault number 1 inserted, df2 has fault number 2 inserted, etc.};
                Compute signature sft {signature when the tth fault is inserted};
                Compare sft with sg; if sft = sg then increment aliasing count RC {the fault aliases};
            End do;
            Write aliasing count {one each for Sr, Ti(λ), Pi(λ)};
        End do;
    End do;
End.
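A minimal end-to-end Python sketch of the procedure above, on a hypothetical 3-input, 3-output CUT (the gate functions, the output-line fault list, and the polynomial choice 1 + x + x^3 are invented for illustration; they are not the Table 1 circuits), reproduces the paper's qualitative finding in this toy setting:

```python
# Sketch: aliasing count of a MISR-SAZ over every non-zero initial loading
# of the PRTPG, with the same primitive polynomial in both (assumed setup).

TAPS = (1, 0, 1)                          # C_1, C_2, C_3 of 1 + x + x^3
A = [[0, 1, 0], [0, 0, 1], [1, 0, 1]]     # MISR transition matrix, same taps

def prtpg_patterns(seed):
    """All 2^3 - 1 test patterns over one full period L_i = 7."""
    s, out = tuple(seed), []
    for _ in range(7):
        out.append(s)
        fb = (s[0] & TAPS[0]) ^ (s[1] & TAPS[1]) ^ (s[2] & TAPS[2])
        s = (fb, s[0], s[1])
    return out

def misr_signature(responses):
    """Parallel compression: r(t+1) = [A] r(t) + d(t) over GF(2)."""
    r = (0, 0, 0)
    for d in responses:
        r = tuple((sum(A[i][k] & r[k] for k in range(3)) + d[i]) % 2
                  for i in range(3))
    return r

def cut(x):                               # hypothetical fault-free CUT
    a, b, c = x
    return (a & b, b ^ c, a | c)

def cut_faulty(x, fault):                 # stuck-at fault on an output line
    line, value = fault
    y = list(cut(x))
    y[line] = value
    return tuple(y)

FAULTS = [(line, v) for line in range(3) for v in (0, 1)]

def aliasing_count(seed):
    """RC: faults whose signature collides with the fault-free signature."""
    pats = prtpg_patterns(seed)
    s_good = misr_signature([cut(x) for x in pats])
    return sum(misr_signature([cut_faulty(x, f) for x in pats]) == s_good
               for f in FAULTS)

seeds = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)][1:]
counts = [aliasing_count(s) for s in seeds]
print(counts)   # one count per initial loading; all identical, as reported
```

The invariance is expected here: over a full period the seed only rotates the stimulus stream, and since [A]^7 = I the rotated error stream maps to an invertibly transformed error signature, so its zero/non-zero status is unchanged.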
The above procedure is used to observe the effect of the characteristic polynomials used in
PRTPG as well as in MISR-SAZ along with the initial loading of PRTPG on aliasing counts of
MISR-SAZ. The aliasing counts for the circuits of Table 1 were monitored for the sets of all NP
and NS of order n.
Table 1. Summary of simulated circuits

Circuit Number | Module of IC Number | Circuit Specifications (n-inputs, k-outputs) | Number of Faults Injected | NP / NS of order n
C-1 | SN-74LS139: DUAL 2-TO-4 LINE DECODER/DEMULTIPLEXER | 3 inputs; 4 outputs; 9 gates | 58 | 2 / 7
C-2 | SN-74LS82: 2-BIT BINARY FULL ADDER | 5 inputs; 3 outputs; 21 gates | 148 | 6 / 31
C-3 | SN-74H87: 4-BIT TRUE/COMPLEMENT, ZERO/ONE ELEMENT | 6 inputs; 4 outputs; 14 gates | 64 | 6 / 63
The observed results demonstrate that when identical characteristic polynomials are used in both the PRTPG and the MISR-SAZ, any change in the initial loading of the PRTPG does not change the value of the aliasing count. Due to the complexity of the result sets and space limitations, only a sample of the results for the circuits summarized in Table 1 is shown in Tables 2 to 4. In Tables 2 – 4 the aliasing count is shown; these values of the aliasing counts remain unchanged for all possible changes of the initial loading of the PRTPG.
Table 2. Aliasing errors for all possible NS for circuit C-1

PRTPG T(λ)          | MISR-SAZ P(λ)
                    | 1+λ+λ^4 | 1+λ^3+λ^4
1+λ+λ^3             |    9    |    13
1+λ^2+λ^3           |   13    |     9

Table 3. Aliasing errors for all possible NS for circuit C-2

PRTPG T(λ)          | MISR-SAZ P(λ)
                    | 1+λ+λ^3 | 1+λ^2+λ^3
1+λ^3+λ^5           |    4    |    12
1+λ^2+λ^5           |   17    |    15
1+λ^2+λ^3+λ^4+λ^5   |   13    |     9
1+λ+λ^3+λ^4+λ^5     |   14    |    16
1+λ+λ^2+λ^4+λ^5     |   10    |    18
1+λ+λ^2+λ^3+λ^5     |    7    |    11

Table 4. Aliasing errors for all possible NS for circuit C-3

PRTPG T(λ)          | MISR-SAZ P(λ)
                    | 1+λ+λ^4 | 1+λ^3+λ^4
1+λ^5+λ^6           |   24    |    19
1+λ+λ^6             |    6    |    18
1+λ^2+λ^3+λ^5+λ^6   |   17    |    22
1+λ+λ^4+λ^5+λ^6     |   21    |    16
1+λ+λ^3+λ^4+λ^6     |   23    |    26
1+λ+λ^2+λ^5+λ^6     |   19    |    28
5.0 CONCLUSIONS
It has been demonstrated through this simulation study that the change of the initial loading of
PRTPG does not have any impact on the effectiveness of an LFSR based digital circuit testing
technique that uses identical characteristic polynomials in both the PRTPG and MISR-SAZ as
well. Thus, this result restricts the outright use of the results of findings; that the effectiveness of
LFSR based digital circuit testing techniques can be improved by changing the initial loading of
PRTPG. Thus, for effective use of initial loading of PRTPG of LFSR based digital circuit
testing technique, it is essential to analyze the role of characteristic polynomials used in PRTPG
as well as in MISR-SAZ. Although, our investigation is limited with small sizes of circuits but
the trend of results suggests for further through analytical investigation.
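One structural reason such invariances can arise is the linearity of the compactor. The toy sketch below (an illustrative check, not the paper's experiment; the taps realize an assumed polynomial 1 + x + x^4 and the response streams are random data) verifies a related property: because the MISR is a linear machine over GF(2), the XOR difference between the signatures of a fault-free and a faulty response stream, and hence the occurrence of aliasing, is independent of the MISR's own initial loading.

```python
import random

# Toy check of MISR linearity: the signature difference between two response
# streams is the same for every initial loading of the MISR.

def misr_signature(taps, seed, n_bits, responses):
    mask = (1 << n_bits) - 1
    state = seed
    for r in responses:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1          # feedback = XOR of tapped bits
        state = (((state << 1) | fb) ^ r) & mask
    return state

random.seed(0)
good = [random.randrange(16) for _ in range(20)]    # fault-free responses
faulty = list(good)
faulty[7] ^= 0b0101                                 # inject one error word

# Sweep all 16 possible initial loadings: the difference set collapses to a
# single value, so "alias or not" cannot depend on the MISR seed.
diffs = {misr_signature((3, 0), s, 4, good) ^ misr_signature((3, 0), s, 4, faulty)
         for s in range(16)}
assert len(diffs) == 1
```

The paper's stronger observation, that the aliasing count is also invariant under changes of the PRTPG seed when identical characteristic polynomials are used, concerns reordering of the test patterns themselves and is what the simulation study above establishes empirically.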
5.0 REFERENCES
[ 1] Ahmad, A., Reliability Computation of Microprocessor Based Mechatronic Systems – A
Highlight for Engineers, to appear in: Caledonian Journal of Engineering (CJE) vol. 6, no. 2, pp.
1 – 7, 2010.
[ 2] Ahmad, A. and Dawood Al-Abri, Design of an Optimal Test Simulator for Built-In Self Test
Environment. To appear in: The Journal of Engineering Research, vol. 7, no. 2, pp. 69 – 79,
2010.
[ 3] Muehldorf, E.I. and Savkar, A.D., LSI Logic Testing - An Overview. IEEE Transactions
on Computers, C-30(1), pp 1-17, 1981.
[ 4] Bennetts, R.G., Introduction to Digital Board Testing. Crane Russak Ltd., New York, 1982.
[ 5] Abadir, M. S., and Reghbati, H. K., LSI Testing Techniques. IEEE Micro, 3(1), pp 34-51, 1983.
[ 6] Williams, T. W. and Parker, K. P., Design for Testability - A Survey, IEEE Proceedings, 71(1),
pp 98-112, 1983.
[ 7] Ahmad, A., Testing of complex integrated circuits (ICs) – The bottlenecks and solutions. Asian
Journal of Information Technology, vol. 4, no. 9, pp. 816 – 822.
[ 8] Williams, T. W., VLSI Testing. IEEE Computer, 17(10), pp 126-136, 1984.
[ 9] Ahmad, A., Achievement of Higher Testability Goals through the Modification of Shift
Registers In LFSR-based Testing. International Journal of Electronics, 82(3), pp 249-260, 1997.
[ 10] Buehler, M. G., and Sievers, M. W., Off-line Built-in Testing Techniques for VLSI Circuits.
IEEE Computer, 5(6), pp 69-82, 1982.
[ 11] Bierman, H., VLSI Test Gear Keeps With Chip Advances - Special Report. Electronics, pp 125-
128, 1984.
[ 12] Mc-Cluskey, E.J., Built-in Self-Test Techniques. IEEE Design & Test of Computers, 2(2), pp
21-28, 1985.
[ 13] Ibarra, O.H., and Sahni, S.K., Polynomially Complete Fault Detection Problems. IEEE
Transactions on Computers, C-24(3), pp 24-29, 1975.
[ 14] Mc-Cluskey, E. J., and Boizorgui-Nesbat, S., Design for Autonomous Test. IEEE Transactions
on Computers, C-30(11), pp 866-875, 1981.
[ 15] Hung, A. C., and Wang, F. C., A Method for Test-Generation from Testability Analysis.
Proceedings of the IEEE. International Test Conference (ITC-1985 IEEE Computer Society
Press), pp 62-78, 1985.
[ 16] Dufaza, C. and Cambon, G., LFSR Based Deterministic and Pseudo-Random Test Pattern
Generator Structure. Proceedings of European Test Conference, pp 27-34, 1991.
[ 17] Touba, N.A. and Pouya, B., Testing Core-Based Designs Using Partial Isolation Rings. IEEE
Design & Test Magazine, 14(5), pp 52-59, 1997.
[ 18] Touba, N.A. and Mc-Cluskey, E.J., RP-SYN: Synthesis of Random-Pattern Testable Circuits
with Test Point Insertion. IEEE Transactions on Computer-Aided Design, 18(8), pp 1202-1213,
1999.
[ 19] Ahmad A, Al-Lawati, A. M. J. and Ahmed M. Al-Naamany, Identification of test point insertion
location via comprehensive knowledge of digital system’s nodal controllability through a
simulated tool, Asian Journal of Information Technology (AJIT), vol. 3, no. 3, pp. 142 – 147,
2004.
[ 20] Frohwerk, R.A., Signature Analysis: A New Digital Field Service Method. Hewlett-Packard
Journal, 5, pp 2-8, 1977.
[ 21] Smith, J.E., Measure of Effectiveness of Fault in Signature Analysis. IEEE Transactions on
Computers, C-29(6), pp 510-514, 1980.
[ 22] Kiryonov, K.G., On Theory of Signature Analysis - Communications Equipment. Radioizm
Tekh, 27(2), pp 1-46, 1980.
[ 23] Akl, S.G., Digital Signatures - A Tutorial Survey. IEEE Computer, 16(2), pp 15-24, 1983.
[ 24] Davies, D.W., Applying the RSA Digital Signature to Electronic Mail. Computer, 16(2), pp 55-
65, 1983.
[ 25] Yarmolik, V.N., On the Validity of Binary Data Sequence by Signature Analyzer. Electron
Model, 6, pp 49-57, 1985.
[ 26] Agrawal, V.K., Increased Effectiveness of Built-in Testing by Output Data Modification. Digest
of the 13th int’l Symposium on Fault-Tolerant computing (FTCS-13 IEEE Computer Society
Press), pp 227-234, 1983.
[ 27] Eichelberger, E.B. and Lindbloom, E., Random Pattern Coverage Enhancement and Diagnosis
for LSSD Logic Self-Test. IBM Journal, 27(3), pp 265-272, 1983.
[ 28] Hassan, S. Z., and Mc-Cluskey, E. J., Increased Fault-Coverage through Multiple Signatures.
Digest of 14th Int’l Symposium on Fault-Tolerant Computing (FTCS-14 IEEE Computer
Society Press), pp 354-359, 1984.
[ 29] Bhavsar, D. K., and Krishnamurthy, B., Can We Eliminate Fault Escape in Self-Testing by
Polynomial Division? Proceedings of the IEEE Int’l Test Conference (ITC - IEEE Computer
Society Press), pp 134-139, 1984.
[ 30] Williams, T.W., Daehn, W., Gruetzner, M. and Starke, C.W., Bounds and Analysis of Aliasing
Errors in Linear Feedback Shift Registers. IEEE Transactions on Computer-Aided Design, 7(1),
pp 75-83, 1988.
[ 31] Robinson, J. P., and Saxena, N.R., Simultaneous Signature and Syndrome Compression. IEEE
Transactions on Computer-Aided Design, CAD-7, pp 589-594, 1988.
[ 32] Ahmad, A., Nanda, N.K., and Garg, K., Are Primitive Polynomials Always Best in Signature
Analyzers? IEEE Design & Test of Computers, 7(4), pp 36-38, 1990.
[ 33] Raina, R. and Marinos, P.N., Signature Analysis with Modified Linear Feedback Shift Registers
(M-LFSRs). Digest of 21st Int’l Symposium on Fault-Tolerant Computing (FTCS-21, IEEE
Computer Soc. Press), pp 88-95, 1991.
[ 34] Ahmad, A., Investigation of a constant behaviour of aliasing errors in signature analysis due to
the use of different ordered test-patterns in BIST testing techniques. Journal of Microelectronics
and Reliability (PERGAMON, Elsevier Science), vol. 42, pp. 967 – 974, 2002.
[ 35] Ahmad, A., Constant error masking behaviour of an internal XOR type signature analyzer due to
the changed polynomial seeds. Journal of Computers & Electrical Engineering (PERGAMON,
Elsevier Science), vol. 28, no. 6, pp. 577 – 585, 2002.
[ 36] Peterson, W.W., and Weldon, E.J., Error Correcting Codes. MIT Press, Cambridge, London,
1972.
[ 37] Matyas, S.M., and Meyer, C.H., Electronic Signature for Data Encryption Standard. IBM
Technical Bulletin, 24(5), pp 2332-34, 1981.
[ 38] Golomb, S.W., Shift Register Sequences. Aegean Park Press, Laguna Hills - U.S.A., 1982.
[ 39] Ahmad, A., Development of State Model Theory for External Exclusive NOR Type LFSR
Structures, Enformatika, Volume 10, December 2005, pp. 125 – 129, 2005
[ 40] Ahmad, A. and Elabdalla, A. M., An efficient method to determine linear feedback connections in
shift registers that generate maximal length pseudo-random up and down binary sequences.
Computer & Electrical Engineering - An Int’l Journal (USA), vol. 23, no. 1, pp. 33-39, 1997.
[ 41] Ahmad, A., Al-Musharafi, M.J., and Al-Busaidi, S., A new algorithmic procedure to test m-
sequences generating feedback connections of stream cipher’s LFSRs. Proceedings IEEE
Conference on Electrical and Electronic Technology (TENCON’01), Singapore, August 19 – 22,
2001, vol. 1, pp. 366 – 369, 2001.
ACKNOWLEDGEMENTS
The author thanks the authorities of Sultan Qaboos University (Sultanate of Oman) for providing
generous research support grants and an environment conducive to carrying out this research
work.
Author (Short Biography)
Afaq Ahmad is with the Department of Electrical and Computer Engineering at Sultan Qaboos
University, Sultanate of Oman. He holds B.Sc. Eng., M.Sc. Eng., DLLR and Ph.D. degrees, and
received his Ph.D. from IIT Roorkee, India, in 1990. Before joining Sultan Qaboos University,
Dr. Ahmad was an Associate Professor at Aligarh Muslim University, India. Prior to starting
his career at Aligarh, he worked as a consultant engineer with Light & Co., as a lecturer with
REC Srinagar, and as a senior research fellow with CSIR, India.
Dr. Ahmad is a Fellow of IETE (India), a senior member of the IEEE Computer Society (USA), a
life member of SSI (India), a senior member of IACSIT, and a member of IAENG and WSEAS.
He has published over 100 technical papers and serves as an editor and reviewer for many
reputed journals. He has delivered keynote and invited addresses and extension lectures,
organized conferences and short courses, and conducted tutorials at universities of global
repute. He has chaired many technical sessions at international conferences, workshops,
symposiums, seminars, and short courses, and has satisfactorily completed many highly reputed
and challenging consultancy and project works. His research interests are fault diagnosis and
digital system testing, data security, graph-theoretic approaches, microprocessor-based
systems, and computer programming.
Dr. Ahmad’s field of specialization is VLSI testing and fault-tolerant computing.