The paper proposes a design strategy to preserve the correct output of digital circuits in the presence of stuck-at faults on their interconnect lines. The procedure designs a combinational architecture with built-in attributes to identify stuck-at faults on the intermediate lines, together with a healing mechanism to redress them. The simulated fault-injection procedure introduces both single and multiple stuck-at faults at the interconnect level of a two-level combinational circuit under the direction of a control signal. The inherent heal facility attached to the formulation allows the fault-free output to be reached even in the presence of faults. Modelsim-based simulation results for the Circuit Under Test (CUT), implemented using a Read Only Memory (ROM), demonstrate the ability of the system to withstand the influence of faults. A comparison with traditional Triple Modular Redundancy (TMR) shows the superiority of the scheme in terms of fault coverage and area overhead.
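The fault model and the TMR baseline used for comparison can be illustrated with a minimal Python sketch (an illustration only, not the paper's ROM-based formulation): a stuck-at fault is injected on an intermediate line of a two-level circuit under control of a `fault` signal, and a majority voter recovers the fault-free output.

```python
# Hypothetical sketch: inject a stuck-at fault on an intermediate line of a
# two-level AND-OR circuit and recover the fault-free output with a
# TMR-style majority vote over three replicas.

def circuit(a, b, c, fault=None):
    """Two-level circuit y = (a AND b) OR c, with an optional stuck-at
    fault injected on the intermediate line t = a AND b."""
    t = a & b
    if fault == "sa0":
        t = 0          # stuck-at-0 on the interconnect
    elif fault == "sa1":
        t = 1          # stuck-at-1 on the interconnect
    return t | c

def majority(x, y, z):
    """Bitwise 2-out-of-3 majority voter."""
    return (x & y) | (y & z) | (x & z)

def tmr_output(a, b, c, faults=(None, None, None)):
    """Three replicas of the circuit; one faulty replica is out-voted."""
    replicas = [circuit(a, b, c, fault=f) for f in faults]
    return majority(*replicas)
```

With a single faulty replica, the voter still produces the fault-free value; the paper's scheme targets the same outcome with less redundancy.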
Implementation of Area Efficient High Speed EDDR Architecture (Kumar Goud)
Abstract-This project presents an error detection and data recovery (EDDR) design, based on the residue-and-quotient (RQ) code, to embed into motion estimation (ME) for video coding testing applications. An error in the processing elements (PEs), the key components of an ME, can be detected and recovered effectively by the EDDR design. The proposed EDDR design for ME testing detects errors and recovers data with an acceptable area overhead and timing penalty. Functional verification and synthesis are performed with Xilinx ISE. Compared to the existing design, the implemented design reduces both area and timing.
Index Terms—Area overhead, data recovery, error detection, reliability, residue-and-quotient (RQ) code, Xilinx ISE
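The RQ code itself is simple to sketch. In the following hedged Python illustration (protecting a stored value rather than the ME processing elements, and with an assumed modulus), a value is split into quotient and residue, errors are detected by re-deriving the code, and the data is recovered from the code words:

```python
# Minimal sketch of residue-and-quotient (RQ) coding for error detection
# and data recovery. The modulus M is an assumption; a power of two keeps
# the hardware cost of the divider low.

M = 64

def rq_encode(x, m=M):
    """Split x into its RQ code: x = q*m + r."""
    return x // m, x % m

def rq_check(x, q, r, m=M):
    """Detect a corrupted x by re-deriving its RQ code."""
    return x == q * m + r

def rq_recover(q, r, m=M):
    """Recover the original data from its RQ code."""
    return q * m + r
```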
Elliptic Curve Cryptography (ECC) is capable of constructing public-key cryptosystems. Specifically, the security of ECC reduces to the difficulty of solving the Discrete Logarithm Problem (DLP) in the group of points of an elliptic curve (ECDLP). ECC based on the ECDLP is on the list of algorithms recommended for use by NIST (National Institute of Standards and Technology) and the NSA (National Security Agency). Given that ECDLP-based cryptosystems are in widespread use, continuous monitoring of the effectiveness of new attacks, and of improvements to pre-existing attacks, on the ECDLP over large prime fields is a significant part of that effort. This paper aims to provide a secure, effective, and flexible method to improve data security in cloud computing. It presents a novel algorithm that uses MapReduce and Pollard's rho approach to solve ECDLP instances and to enhance the security level.
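Pollard's rho for discrete logarithms can be sketched compactly in its classical setting, the multiplicative group mod p, rather than on an elliptic curve, and without the paper's MapReduce parallelization. Given h = g^x (mod p) and the group order n, a pseudo-random walk finds a collision g^a·h^b = g^A·h^B, from which x is solved modulo n (parameters and the retry strategy below are illustrative assumptions):

```python
# Hedged sketch of Pollard's rho for discrete logarithms in Z_p^*.
import math
import random

def pollard_rho_dlog(g, h, p, n, tries=64, seed=1):
    rng = random.Random(seed)

    def step(x, a, b):
        # Classic three-way partition of the walk.
        s = x % 3
        if s == 0:
            return (x * x) % p, (2 * a) % n, (2 * b) % n
        if s == 1:
            return (x * g) % p, (a + 1) % n, b
        return (x * h) % p, a, (b + 1) % n

    for _ in range(tries):
        a, b = rng.randrange(n), rng.randrange(n)
        x = pow(g, a, p) * pow(h, b, p) % p
        X, A, B = x, a, b
        for _ in range(n):                 # Floyd cycle detection
            x, a, b = step(x, a, b)
            X, A, B = step(*step(X, A, B))
            if x == X:
                break
        r = (B - b) % n
        if r == 0:
            continue                       # degenerate collision, retry
        d = math.gcd(r, n)
        rhs = (a - A) % n
        if rhs % d:
            continue
        m = n // d
        x0 = (rhs // d) * pow(r // d, -1, m) % m
        for k in range(d):                 # d candidate solutions mod n
            cand = (x0 + k * m) % n
            if pow(g, cand, p) == h:
                return cand
    return None
```

As a usage example, since 2^10 = 1024 ≡ 5 (mod 1019), the discrete log of 5 to base 2 modulo 1019 is 10.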
Robust Malware Detection using Residual Attention Network (Shamika Ganesan)
In this paper, we explore the use of an attention-based mechanism known as Residual Attention for malware detection and compare it with existing CNN-based methods and conventional Machine Learning algorithms with the help of GIST features. The proposed method outperformed traditional malware detection methods that use Machine Learning and CNN-based Deep Learning algorithms, demonstrating an accuracy of 99.25%.
This paper has been accepted at the International Conference on Consumer Electronics (ICCE 2021).
High Performance Energy Efficient Multicore Embedded Computing (Ankit Talele)
The high-performance energy-efficient multicore embedded computing (HPEEC) domain addresses the unique design challenges of high-performance and low-power/energy embedded computing.
A BIST GENERATOR CAD TOOL FOR NUMERIC INTEGRATED CIRCUITS (VLSICS Design)
This paper describes a training and research tool for learning basic issues related to BIST (Built-In Self-Test) generation. The main didactic aim of the tool is to present complicated concepts in a comprehensive graphical and analytical way. The paper describes a computer-aided design (CAD) tool that automatically generates the BIST logic for any digital circuit. This software technique attempts to reduce the amount of extra hardware and the cost of the circuit. To make the software easily available, we used the Java platform, which is supported by most operating systems. The multi-platform Java runtime environment allows for easy access and usage of the tool.
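The two core blocks such a BIST generator emits can be sketched in a few lines: an LFSR that produces pseudo-random test patterns and a MISR that compacts the circuit responses into a signature. The Python model below is illustrative; the tap positions and register width are assumptions, not the tool's actual configuration.

```python
# Sketch of BIST pattern generation and response compaction.

def lfsr_patterns(width=4, taps=(3, 2), seed=0b1001, count=8):
    """Fibonacci LFSR: the feedback bit is the XOR of the tapped bits."""
    state, out = seed, []
    for _ in range(count):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

def misr_signature(responses, width=4, taps=(3, 2), seed=0):
    """Multiple-input signature register: shift like the LFSR while
    XOR-ing each response word into the state."""
    state = seed
    for r in responses:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (((state << 1) | fb) ^ r) & ((1 << width) - 1)
    return state
```

In use, the signature of the circuit under test is compared against a golden signature computed from a known-good simulation; a mismatch flags a fault.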
Systematic Model based Testing with Coverage Analysis (IDES Editor)
Aviation safety has come a long way in over one hundred years of implementation. In aeronautics, requirements are commonly expressed as Simulink models. Considering this, many conventional low-level testing methods have been adopted by test engineers. This paper proposes a method to undertake low-level testing and debugging in a comparatively easier and faster way. As a first step, an attempt is made to simulate the developed safety-critical control blocks within a specified simulation time. For this, the developed blocks are tested in the Simulink environment. What we propose here is a processor-in-the-loop test method using RTDX. The idea is to simulate the model (the requirement) in parallel with handwritten code (not generated code) running on a specified target, subjected to the same inputs (test cases). By comparing the results of the model and the target, fidelity can be assured. This paper suggests a development workflow starting with a model created in Simulink and proceeding through generating verified and profiled code for the processor.
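The comparison step at the heart of processor-in-the-loop testing can be sketched as follows. In this hedged Python illustration, a reference "model" and an independently handwritten "target" implementation (here, two algebraically equivalent saturating first-order filters, chosen purely for illustration) are driven with the same test cases and compared within a tolerance; the real flow runs the model in Simulink and the code on a DSP target over RTDX.

```python
# Sketch of model-vs-target fidelity checking.

def model(samples, k=0.5, limit=100.0):
    """Reference model: first-order low-pass with output saturation."""
    y, out = 0.0, []
    for x in samples:
        y = y + k * (x - y)
        y = max(-limit, min(limit, y))
        out.append(y)
    return out

def target_code(samples, k=0.5, limit=100.0):
    """'Handwritten' implementation: algebraically equivalent update."""
    y, out = 0.0, []
    for x in samples:
        y = (1.0 - k) * y + k * x
        y = min(limit, max(-limit, y))
        out.append(y)
    return out

def fidelity_check(test_cases, tol=1e-9):
    """Run both against the same stimuli; report pass/fail per case."""
    results = []
    for case in test_cases:
        a, b = model(case), target_code(case)
        results.append(all(abs(u - v) <= tol for u, v in zip(a, b)))
    return results
```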
COVERAGE DRIVEN FUNCTIONAL TESTING ARCHITECTURE FOR PROTOTYPING SYSTEM USING ... (VLSICS Design)
The time and effort spent on functional testing of digital logic is a big chunk of the overall project cycle in the VLSI industry. Progress of functional testing is measured by functional coverage, where the test-plan defines what needs to be covered and the test-results indicate the quality of the stimulus. Claiming closure of functional testing requires that functional coverage hits 100% of the original test-plan. Depending on the complexity of the design and the availability of resources and budget, various methods are used for functional testing. Software simulation, using logic simulators available from Electronic Design Automation (EDA) companies, is the primary method for functional testing. The next level in functional testing is pre-silicon verification using Field Programmable Gate Array (FPGA) prototype and/or emulation platforms for stress testing the Design Under Test (DUT). With all these efforts, the purpose is to gain confidence in the maturity of the DUT, ensuring first-time silicon success that meets the time-to-market needs of the industry. For any test environment, the bottleneck in achieving verification closure is controllability and observability, that is, the quality of the stimulus needed to unearth issues at an early stage, and coverage calculation. Software simulation, FPGA prototyping, and emulation each have their own limitations, be it test time, ease of use, or the cost of software, tools, and hardware platform. Compared to software simulation, FPGA prototyping and emulation methods pose greater challenges in quality stimulus generation and coverage calculation. Many researchers have identified the problems of bug detection and localization, but very few have touched on the concept of quality stimulus generation that leads to better functional coverage and thereby uncovers hidden bugs in an FPGA prototype verification setup. This paper presents a novel approach to address the above-mentioned issues by embedding a synthesizable active-agent and coverage collector into the FPGA prototype. The proposed architecture has been tested for functional and stress testing of the Universal Serial Bus (USB) Link Training and Status State Machine (LTSSM) logic module as the DUT in an FPGA prototype. The proposed solution is fully synthesizable and hence can be used both in software simulation and in a prototype system. The biggest advantage is the plug-and-play nature of the active-agent component, which allows its reuse in any USB 3.0 LTSSM digital core.
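The role of a coverage collector can be sketched in software terms. The Python model below is a hedged illustration of the concept only (the paper's collector is synthesizable hardware, and the bin names here are invented, not the real USB LTSSM states): bins are declared from a test-plan, stimuli are sampled, and closure means every bin has been hit.

```python
# Conceptual sketch of a functional coverage collector.

class CoverageCollector:
    def __init__(self, bins):
        # One hit counter per test-plan bin.
        self.hits = {name: 0 for name in bins}

    def sample(self, event):
        # Record an observed event if the test-plan cares about it.
        if event in self.hits:
            self.hits[event] += 1

    def coverage(self):
        # Percentage of bins hit at least once.
        covered = sum(1 for n in self.hits.values() if n > 0)
        return 100.0 * covered / len(self.hits)

    def holes(self):
        # Bins never exercised: where new stimulus is needed.
        return [name for name, n in self.hits.items() if n == 0]
```

The `holes()` list is what drives further stimulus generation: closure is claimed only when it is empty and `coverage()` reports 100%.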
A Unique Test Bench for Various System-on-a-Chip (IJECEIAES)
This paper discusses a standard flow for how an automated, constrained-random test bench environment can verify an SoC efficiently for functionality and coverage. Today, in the era of multimillion-gate ASICs, reusable intellectual property (IP), and system-on-a-chip (SoC) designs, verification consumes about 70% of the design effort. Automation means a machine completes a task autonomously, quicker and with predictable results; it requires standard processes with well-defined inputs and outputs. By using this methodology it is possible to provide a general-purpose automation solution for verification with today's technology. Tools automating various portions of the verification process are being introduced. Here, we consider a communication-based SoC, and the paper discusses the methodology used to verify such an SoC-based environment. Cadence Efficient Verification Methodology libraries are explored as a solution to this problem, which can be taken as a state-of-the-art approach to verifying SoC environments. The goal of this paper is to present a unique testbench for different SoCs using efficient verification constructs implemented in SystemVerilog for SoC verification.
Network Function Modeling and Performance Estimation (IJECEIAES)
This work introduces a methodology for modeling network functions that focuses on identifying recurring execution patterns as basic building blocks and that aims to provide a platform-independent representation. By mapping each modeling building block onto specific hardware, the performance of the network function can be estimated in terms of the maximum throughput the network function can achieve on that execution platform. The approach is such that once the basic modeling building blocks have been mapped, the estimate can be computed automatically for any modeled network function. Experimental results on several sample network functions show that although the approach cannot be very accurate without taking traffic characteristics into consideration, it is very valuable for those applications where even loose estimates are key. One such example is orchestration in network functions virtualization (NFV) platforms, as well as in general virtualization platforms where virtual machine placement is also based on the performance of the network services offered to them. Being able to automatically estimate the performance of a virtualized network function (VNF) on different execution hardware enables optimal placement of the VNFs themselves, as well as the virtual hosts they serve, while efficiently utilizing available resources.
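The estimation idea can be sketched in a few lines: once each building block has a measured per-packet cost on the target platform, the throughput of any modeled network function follows automatically as the reciprocal of the summed cost along its packet path. In the hedged Python sketch below, the block names and costs are made-up numbers for illustration, not real measurements from the paper.

```python
# Sketch of building-block-based throughput estimation.
# Per-packet cost of each building block on a hypothetical platform, in ns.
BLOCK_COST_NS = {"parse": 40.0, "lookup": 120.0, "rewrite": 60.0, "checksum": 80.0}

def estimate_throughput_mpps(blocks, costs=BLOCK_COST_NS):
    """Sum the per-packet cost along the packet path and invert it.

    1e9 ns/s divided by total ns/packet gives packets/s; dividing by 1e6
    yields Mpps, which simplifies to 1e3 / total_ns.
    """
    total_ns = sum(costs[b] for b in blocks)
    return 1e3 / total_ns
```

For example, a function modeled as parse → lookup → rewrite → checksum costs 300 ns per packet on this hypothetical platform, i.e. roughly 3.33 Mpps.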
Coding Schemes for Implementation of Fault Tolerant Parallel Filter (IJAAS Team)
Digital filters are used as part of signal processing and communication systems. In some cases, the reliability of those systems is critical, and fault-tolerant filter implementations are needed. Over the years, many techniques that exploit the filters' structure and properties to achieve fault tolerance have been proposed. As technology scales, it enables more complex systems that incorporate many filters; in those complex systems, it is common for some of the filters to operate in parallel. A scheme based on error-correction coding has recently been proposed to protect parallel filters. In that scheme, each filter is treated as a bit, and redundant filters that act as parity-check bits are introduced to detect and correct errors. In this brief, applying coding techniques to protect parallel filters is addressed in a more general way, which reduces the protection overhead and makes the number of redundant filters independent of the number of parallel filters. The proposed technique is first described and then illustrated with two case studies. Finally, both the effectiveness in protecting against errors and the cost are evaluated for a field-programmable gate array implementation.
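The parity-filter idea rests on linearity: a redundant filter fed the element-wise sum of the inputs must output the sum of the parallel outputs, so a mismatch reveals an erroneous filter. The Python sketch below illustrates detection only (correction, as in the paper, needs additional check filters) for parallel filters sharing one impulse response.

```python
# Sketch of parity checking across parallel FIR filters.

def fir(x, h):
    """Direct-form FIR filter: y[n] = sum_k h[k] * x[n-k]."""
    y = []
    for n in range(len(x)):
        acc = 0
        for k, hk in enumerate(h):
            if n - k >= 0:
                acc += hk * x[n - k]
        y.append(acc)
    return y

def parity_check(inputs, outputs, h):
    """Compare the sum of the parallel outputs with the output of a
    redundant filter processing the element-wise sum of the inputs."""
    summed_in = [sum(col) for col in zip(*inputs)]
    expected = fir(summed_in, h)
    actual = [sum(col) for col in zip(*outputs)]
    return all(a == e for a, e in zip(actual, expected))
```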
An Investigation towards Effectiveness in Image Enhancement Process in MPSoC (IJECEIAES)
Image enhancement plays a fundamental role in vision-based applications. It involves processing the input image to boost its visual quality for various applications. The primary objective is to filter out unwanted noise and clutter and to correct over-sharpening or blur. Characteristics such as resolution and contrast are constructively altered to obtain an enhanced image in the biomedical field. The paper highlights the different techniques proposed for the digital enhancement of images. After surveying the methods that utilize a Multiprocessor System-on-Chip (MPSoC), it is concluded that these methodologies have limited accuracy and hence none of them is capable of efficiently enhancing digital biomedical images.
Modifying Hamming code and using the replication method to protect memory aga... (TELKOMNIKA JOURNAL)
As technology scaling increases computer memory's bit-cell density and reduces the voltage of semiconductors, the number of soft errors due to radiation-induced single event upsets (SEU) and multi-bit upsets (MBU) also increases. To address this, error-correcting codes (ECC) can be used to detect and correct soft errors, while x-modular redundancy improves fault tolerance. This paper presents a technique that provides high error-correction performance, high speed, and low complexity. The proposed technique ensures that only correct values are passed to the system output or processed, despite the presence of up to three-bit errors. The Hamming code is modified to provide a high probability of MBU detection. In addition, the paper describes the new technique and the associated analysis scheme for its implementation. The new technique has been simulated, evaluated, and compared to error-correction codes with similar decoding complexity to better understand the required overheads, the gained ability to protect data against three-bit errors, and the reduction of the misdetection and false-detection probabilities for four-bit errors.
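The base code that the paper modifies is the classic Hamming code. As a point of reference, here is a minimal Hamming(7,4) sketch in Python: it encodes 4 data bits with 3 parity bits and corrects any single-bit error (the paper's extensions for MBU detection and the replication method are not shown).

```python
# Minimal Hamming(7,4) encoder/decoder with single-error correction.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]
```

A two-bit error in this base code is miscorrected, which is exactly the weakness that the paper's modified code and replication scheme address.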
AN EFFICIENT INTRUSION DETECTION SYSTEM WITH CUSTOM FEATURES USING FPA-GRADIE... (IJCNCJournal)
An efficient Intrusion Detection System must be given high priority when connecting systems to a network, to protect a system before an attack happens. It is a big challenge for the network security community to protect systems from the many new types of attack that appear as technology grows. In this paper, an efficient model to detect intrusions is proposed to predict attacks with high accuracy and a low false-negative rate, by deriving custom features (UNSW-CF) from the benchmark intrusion dataset UNSW-NB15. To reduce the learning complexity, custom features are derived and then significant features are constructed by applying the meta-heuristic FPA (Flower Pollination Algorithm) and MRMR (Minimum Redundancy Maximum Relevance), which reduces learning time and also increases prediction accuracy. An ENC (ElasticNet Classifier), KRRC (Kernel Ridge Regression Classifier), and IGBC (Improved Gradient Boosting Classifier) are employed to classify the attacks in the UNSW-CF and UNSW datasets; UNSW-CF with derived custom features using IGBC integrated with FPA provided a high accuracy of 97.38% and a low error rate of 2.16%. The sensitivity and specificity rates for IGBC also reach high values of 97.32% and 97.50%, respectively.
An Efficient Approach Towards Mitigating Soft Errors Risks (sipij)
Smaller feature size, higher clock frequency, and lower power consumption are core concerns of today's nano-technology, resulting from the continuous downscaling of CMOS technologies. The resultant 'device shrinking' reduces the soft-error tolerance of VLSI circuits, as very little energy is needed to change their states. Safety-critical systems are very sensitive to soft errors: a bit flip due to a soft error can change the value of a critical variable, and consequently the system control flow can change completely, leading to system failure. To minimize soft-error risks, a novel methodology is proposed to detect and recover from soft errors by considering only 'critical code blocks' and 'critical variables', rather than all variables and/or blocks in the whole program. The proposed method reduces space and time overhead in comparison to existing dominant approaches.
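The duplicate-and-compare idea behind protecting 'critical variables' can be sketched as follows. This Python illustration is a hedged simplification of the general technique (the paper's compiler-level selection of critical blocks is not modeled): each critical value is stored twice, a mismatch on read signals a soft error, and the value is restored from a known-good checkpoint.

```python
# Sketch of duplicate-and-compare protection for a critical variable.

class CriticalVar:
    def __init__(self, value):
        self.primary = value
        self.shadow = value       # redundant copy, kept in sync on writes

    def write(self, value):
        self.primary = value
        self.shadow = value

    def read(self):
        if self.primary != self.shadow:      # a bit flip hit one copy
            raise RuntimeError("soft error detected")
        return self.primary

    def recover(self, checkpoint):
        """Restore both copies from the last known-good checkpoint."""
        self.primary = checkpoint
        self.shadow = checkpoint
```

Because only variables flagged as critical carry the second copy, the space and time overhead stays proportional to the critical subset rather than the whole program.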
System reliability is an important issue in designing modern multiprocessor systems. This paper proposes a fault-tolerant, scalable multiprocessor system architecture that adopts a pipeline scheme. To verify the performance of the proposed system, the SimEvents/Stateflow tools of MATLAB were used to simulate it. The proposed system uses twelve processors (P), connected in a linear array, to build a ten-stage system with two backup processors (BP). However, the system can be expanded by adding more processors to increase pipeline stages and performance, and more backup processors to increase system reliability. The system can automatically reorganize itself in the event of a failure of one or two processors, and execution continues without interruption. Each processor communicates with its neighboring processors through input/output (I/O) ports, which are used as bypass links between the processors. In the event of a processor failure, the function of the faulty processor is assigned to the next fault-free processor. The fast Fourier transform (FFT) algorithm is implemented on the simulated circuit to evaluate the performance of the proposed system. The results showed that the system can continue to execute even if one or two processors fail, without a noticeable decrease in performance.
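The reorganization step can be sketched abstractly: ten pipeline stages are mapped onto a twelve-processor linear array, and when up to two processors fail, their stages slide onto the next fault-free processors, so the two spares absorb the failures. The Python sketch below models only this remapping (the MATLAB SimEvents simulation itself is not reproduced).

```python
# Sketch of stage-to-processor remapping around faulty processors.

def remap_stages(num_processors=12, num_stages=10, faulty=()):
    """Assign each stage to the next fault-free processor in the array."""
    healthy = [p for p in range(num_processors) if p not in set(faulty)]
    if len(healthy) < num_stages:
        raise RuntimeError("not enough fault-free processors")
    return {stage: healthy[stage] for stage in range(num_stages)}
```

With no faults the mapping is the identity; a fault at processor 3 shifts stages 3..9 one position down the array, using the first backup.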
COVERAGE DRIVEN FUNCTIONAL TESTING ARCHITECTURE FOR PROTOTYPING SYSTEM USING ...VLSICS Design
Time and efforts for functional testing of digital logic is big chunk of overall project cycle in VLSI industry. Progress of functional testing is measured by functional coverage where test-plan defines what needs to be covered, and test-results indicates quality of stimulus. Claiming closer of functional testing requires that functional coverage hits 100% of original test-plan. Depending on the complexity of the design, availability of resources and budget, various methods are used for functional testing. Software simulations using various logic simulators, available from Electronic Design Automation (EDA) companies, is primary method for functional testing. The next level in functional testing is pre-silicon verification using Field Programmable Gate Array (FPGA) prototype and/or emulation platforms for stress testing the Design Under Test (DUT). With all the efforts, the purpose is to gain confidence on maturity of DUT to ensuresfirst time silicon success that meets time to market needs of the industry. For any test-environment the bottleneck, in achieving verification closer, is controllability and observability that is quality of stimulus to unearth issues at early stage and coverage calculation. Software simulation, FPGA prototype, or emulation, each method has its own limitations, be it test-time, ease of use, or cost of software, tools and
hardware-platform. Compared to software simulation, FPGA prototyping and emulation methods pose greater challenges in quality stimulus generation and coverage calculation. Many researchers have identified the problems of bug-detection / localization, but very few have touched the concept of quality stimulus generation that leads to better functional coverage and thereby uncover hidden bugs in FPGA prototype verification setup. This paper presents a novel approach to address above-mentioned issues by embedding synthesizable active-agent and coverage collector into FPGA prototype. The proposed architecture has been experimented for functional and stress testing of Universal Serial Bus (USB) Link Training and Status State Machine (LTSSM) logic module as DUT in FPGA prototype. The proposed solution is fully synthesizable and hence can be used in both software simulation as well as in prototype system. The biggest advantage is plug and play nature of this active-agent component, that allows its reusability in any USB3.0 LTSSM digital core.
To Get any Project for CSE, IT ECE, EEE Contact Me @ 09666155510, 09849539085 or mail us - ieeefinalsemprojects@gmail.com-Visit Our Website: www.finalyearprojects.org
A Unique Test Bench for Various System-on-a-Chip IJECEIAES
This paper discusses a standard flow on how an automated test bench environment which is randomized with constraints can verify a SOC efficiently for its functionality and coverage. Today, in the time of multimillion gate ASICs, reusable intellectual property (IP), and system-ona-chip (SoC) designs, verification consumes about 70 % of the design effort. Automation means a machine completes a task autonomously, quicker and with predictable results. Automation requires standard processes with welldefined inputs and outputs. By using this efficient methodology it is possible to provide a general purpose automation solution for verification, given today’s technology. Tools automating various portions of the verification process are being introduced. Here, we have Communication based SOC The content of the paper discusses about the methodology used to verify such a SOC-based environment. Cadence Efficient Verification Methodology libraries are explored for the solution of this problem. We can take this as a state of art approach in verifying SOC environments. The goal of this paper is to emphasize the unique testbench for different SOC using Efficient Verification Constructs implemented in system verilog for SOC verification.
Network Function Modeling and Performance EstimationIJECEIAES
This work introduces a methodology for the modelization of network functions focused on the identification of recurring execution patterns as basic building blocks and aimed at providing a platform independent representation. By mapping each modeling building block on specific hardware, the performance of the network function can be estimated in terms of maximum throughput that the network function can achieve on the specific execution platform. The approach is such that once the basic modeling building blocks have been mapped, the estimate can be computed automatically for any modeled network function. Experimental results on several sample network functions show that although our approach cannot be very accurate without taking in consideration traffic characteristics, it is very valuable for those application where even loose estimates are key. One such example is orchestration in network functions virtualization (NFV) platforms, as well as in general virtualization platforms where virtual machine placement is based also on the performance of network services offered to them. Being able to automatically estimate the performance of a virtualized network function (VNF) on different execution hardware, enables optimal placement of VNFs themselves as well as the virtual hosts they serve, while efficiently utilizing available resources.
Coding Schemes for Implementation of Fault Tolerant Parrallel FilterIJAAS Team
Digital filters are utilized as a one of flag handling and correspondence frameworks. At times, the unwavering quality of those frameworks is basic, and blame tolerant channel executions are needed. Throughout the years, numerous systems that endeavor the channels' structure and properties to accomplish adaptation to internal failure have been proposed. As innovation scales, it empowers more unpredictable frameworks that join many channels. In those perplexing frameworks, it is regular that a portion of the channels work in parallel. A plan in view of big rectification coding has been as of late proposed to protect parallel channels. In that plan, each channel is deal with as a bit, and excess channels that go about as equality check bits are acquainted with distinguish and rectify blunders. In this short, applying coding systems to secure parallel channels is tended to in a broader manner. This decreases the assurance overhead and makes the quantity of excess channels autonomous of the quantity of parallel channels. The proposed technique is first described and then illustrated with two case studies. Finally, both the effectiveness in protecting against errors and the cost are evaluated for a field-programmable gate array implementation.
An Investigation towards Effectiveness in Image Enhancement Process in MPSoC IJECEIAES
Image enhancement has a primitive role in the vision-based applications. It involves the processing of the input image by boosting its visualization for various applications. The primary objective is to filter the unwanted noises, clutters, sharpening or blur. The characteristics such as resolution and contrast are constructively altered to obtain an outcome of an enhanced image in the bio-medical field. The paper highlights the different techniques proposed for the digital enhancement of images. After surveying these methods that utilize Multiprocessor System-on-Chip (MPSoC), it is concluded that these methodologies have little accuracy and hence none of them are efficiently capable of enhancing the digital biomedical images.
EDEM is a general-purpose CAE tool that uses state-of-the-art discrete element modeling technology for the simulation and analysis of particle handling and manufacturing operations.
Modifying Hamming code and using the replication method to protect memory aga...TELKOMNIKA JOURNAL
As technology scaling increases computer memory’s bit-cell density and reduces the voltage of semiconductors, the number of soft errors due to radiation induced single event upsets (SEU) and multi-bit upsets (MBU) also increases. To address this, error-correcting codes (ECC) can be used to detect and correct soft errors, while x-modular-redundancy improves fault tolerance. This paper presents a technique that provides high error-correction performance, high speed, and low complexity. The proposed technique ensures that only correct values get passed to the system output or are processed in spite of the presence of up to three-bit errors. The Hamming code is modified in order to provide a high probability of MBU detection. In addition, the paper describes the new technique and associated analysis scheme for its implementation. The new technique has been simulated, evaluated, and compared to error correction codes with similar decoding complexity to better understand the overheads required, the gained capabilities to protect data against three-bit errors, and to reduce the misdetection probability and false-detection probability of four-bit errors.
AN EFFICIENT INTRUSION DETECTION SYSTEM WITH CUSTOM FEATURES USING FPA-GRADIE...IJCNCJournal
An efficient Intrusion Detection System has to be given high priority while connecting systems with a network to prevent the system before an attack happens. It is a big challenge to the network security group to prevent the system from a variable types of new attacks as technology is growing in parallel. In this paper, an efficient model to detect Intrusion is proposed to predict attacks with high accuracy and less false-negative rate by deriving custom features UNSW-CF by using the benchmark intrusion dataset UNSW-NB15. To reduce the learning complexity, Custom Features are derived and then Significant Features are constructed by applying meta-heuristic FPA (Flower Pollination algorithm) and MRMR (Minimal Redundancy and Maximum Redundancy) which reduces learning time and also increases prediction accuracy. ENC (ElasicNet Classifier), KRRC (Kernel Ridge Regression Classifier), IGBC (Improved Gradient Boosting Classifier) is employed to classify the attacks in the datasets UNSW-CF, UNSW and recorded that UNSW-CF with derived custom features using IGBC integrated with FPA provided high accuracy of 97.38% and a low error rate of 2.16%. Also, the sensitivity and specificity rate for IGB attains a high rate of 97.32% and 97.50% respectively.
An Efficient Approach Towards Mitigating Soft Errors Risks (sipij)
Smaller feature size, higher clock frequency, and lower power consumption are core concerns of today's nano-technology, resulting from continuous downscaling of CMOS technologies. The resulting 'device shrinking' reduces the soft error tolerance of VLSI circuits, as very little energy is needed to change their states. Safety-critical systems are very sensitive to soft errors. A bit flip due to a soft error can change the value of a critical variable, and consequently the system control flow can be completely changed, leading to system failure. To minimize soft error risks, a novel methodology is proposed to detect and recover from soft errors considering only 'critical code blocks' and 'critical variables' rather than all variables and/or blocks in the whole program. The proposed method reduces space and time overhead in comparison to existing dominant approaches.
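The recovery idea for critical variables can be illustrated with the simplest redundancy primitive: keep three copies and majority-vote on read. This is a generic sketch of the underlying principle, not the paper's exact detection and recovery mechanism.

```python
# Hedged sketch: a critical variable stored in triplicate survives a
# single corrupted copy because the two clean copies outvote it.

def vote(a, b, c):
    # Majority vote: returns the value held by at least two copies.
    return a if a == b or a == c else b

class CriticalVar:
    def __init__(self, value):
        self.copies = [value, value, value]

    def read(self):
        healed = vote(*self.copies)
        self.copies = [healed] * 3   # scrub: repair any corrupted copy
        return healed

    def write(self, value):
        self.copies = [value, value, value]
```

Applying this only to variables flagged as critical, rather than to the whole program state, is what keeps the space and time overhead low.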
System reliability is an important issue in designing modern multiprocessor systems. This paper proposes a fault-tolerant, scalable, multiprocessor system architecture that adopts a pipeline scheme. To verify the performance of the proposed system, the SimEvent/Stateflow tool of the MATLAB program was used to simulate the system. The proposed system uses twelve processors (P), connected in a linear array, to build a ten-stage system with two backup processors (BP). However, the system can be expanded by adding more processors to increase pipeline stages and performance, and more backup processors to increase system reliability. The system can automatically reorganize itself in the event of a failure of one or two processors and execution continues without interruption. Each processor communicates with its neighboring processors through input/output (I/O) ports which are used as bypass links between the processors. In the event of a processor failure, the function of the faulty processor is assigned to the next processor that is free from faults. The fast Fourier transform (FFT) algorithm is implemented on the simulated circuit to evaluate the performance of the proposed system. The results showed that the system can continue to execute even if one or two processors fail without a noticeable decrease in performance.
DESIGN APPROACH FOR FAULT TOLERANCE IN FPGA ARCHITECTURE (VLSICS Design)
Failures of nano-metric technologies owing to defects and shrinking process tolerances give rise to significant challenges for IC testing. In recent years the application space of reconfigurable devices has grown to include many platforms with a strong need for fault tolerance. While these systems frequently contain hardware redundancy to allow continued operation in the presence of operational faults, the need to recover faulty hardware and return it to full functionality quickly and efficiently is great. In addition to providing functional density, FPGAs provide a level of fault tolerance generally not found in mask-programmable devices by including the capability to reconfigure around operational faults in the field. Reliability and process variability are serious issues for FPGAs in the future. With advances in process technology, the feature size is decreasing, which leads to higher defect densities; more sophisticated techniques at increased cost are required to avoid defects. If nano-technology fabrication is applied, the yield may go down to zero, since avoiding defects during fabrication will not be a feasible option; hence, future architectures have to be defect tolerant. In a regular structure like an FPGA, redundancy is commonly used for fault tolerance. In this work we present a solution in which the configuration bit-stream of the FPGA is modified by a hardware controller present on the chip itself. The technique uses redundant devices to replace faulty devices and increases the yield.
Design Verification and Test Vector Minimization Using Heuristic Method of a ... (ijcisjournal)
The reduction in feature size increases the probability that a manufacturing defect in the IC will result in a faulty chip. A very small defect can easily result in a faulty transistor or interconnecting wire when the feature size is small. Testing is required to guarantee fault-free products, regardless of whether the product is a VLSI device or an electronic system. Simulation is used to verify the correctness of the design. To test an n-input circuit, 2^n test vectors are required. As the number of inputs of a circuit grows, the exponential growth of the required number of vectors makes testing very time-consuming. It is therefore necessary to find testing methods that reduce the number of test vectors. Here, a heuristic approach is designed to test the ripple carry adder. Modelsim and Xilinx tools are used to verify and synthesize the design.
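The exponential blow-up motivating the heuristic can be made concrete in a few lines; the 4-bit adder width used in the comment is an illustrative assumption, not taken from the paper.

```python
# An n-input circuit needs 2**n exhaustive test vectors. For a 4-bit
# ripple carry adder (two 4-bit operands plus carry-in: 9 inputs) that
# is already 512 patterns, which is what heuristic vector reduction
# tries to avoid.
from itertools import product

def exhaustive_vectors(n):
    # Every 0/1 assignment to the n inputs.
    return list(product([0, 1], repeat=n))
```

Doubling the operand width to 8 bits (17 inputs) already pushes the exhaustive set to 131,072 vectors, which is why heuristic subsets become attractive.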
Recently, research has picked up a fervent pace in the area of fault diagnosis of electric vehicles, such as failures of position, voltage, and current sensors. Three-phase induction motors are the "workhorses" of industry and are the most widely used electrical machines. This paper presents a scheme for Fault Detection and Isolation (FDI). The proposed approach is a sensor-based technique using the mains current measurement. Current sensors are widespread in power converter control and in electrical drives. Thus, to ensure continuous operation with reconfiguration control, fast sensor fault detection and isolation is required. In this paper, a new and fast faulty-current-sensor detection and isolation method is presented, derived from intelligent techniques. The main interest of field programmable gate arrays is their extremely fast computation capability, which allows fast residual generation when a sensor fault occurs. Using Xilinx System Generator in Matlab/Simulink allows real-time simulation and implementation on a field programmable gate array chip without any VHSIC Hardware Description Language coding. The sensor fault detection and isolation algorithm was implemented targeting a Virtex5. Simulation results are given to demonstrate the efficiency of this FDI approach.
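At its core, the residual test in such sensor FDI schemes reduces to comparing each measured current with a model estimate; the threshold value below is an illustrative placeholder, not a figure from the paper.

```python
# Hedged sketch of residual-based sensor fault detection: flag a sample
# whenever the measurement deviates from the model estimate by more
# than a fixed threshold.

def detect_sensor_fault(measured, estimated, threshold=0.5):
    residuals = [abs(m - e) for m, e in zip(measured, estimated)]
    return [r > threshold for r in residuals]
```

On an FPGA this comparison runs in parallel for every sensor, which is what makes the residual generation fast enough for reconfiguration control.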
APPROXIMATE ARITHMETIC CIRCUIT DESIGN FOR ERROR RESILIENT APPLICATIONS (VLSICS Design)
When the application context is ready to accept different levels of exactness in solutions, supported by human perception quality, the term 'Approximate Computing', coined about a decade ago, becomes the first priority. Even though computer hardware and software are designed to generate exact results, approximate results are preferred whenever the error is within a predefined bound and adaptive. This reduces power demand and critical path delay and improves other circuit metrics. Traditional arithmetic circuits, which generate correct results with limitations on performance, are rapidly being replaced by approximate arithmetic circuits, which are the need of the hour; this work discusses their design.
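One widely cited approximate-adder design, the lower-part-OR adder (LOA), illustrates the trade-off described above. The version below is a simplified sketch that drops the lower-part carry entirely; the width and split-point parameters are illustrative assumptions.

```python
# Simplified lower-part-OR adder (LOA) sketch: the low k bits are
# approximated by a carry-free bitwise OR, the high bits are added
# exactly. Small errors in the low bits buy a much shorter carry chain.

def loa_add(a, b, width=8, k=3):
    mask = (1 << k) - 1
    low = (a | b) & mask                 # approximate lower part
    high = ((a >> k) + (b >> k)) << k    # exact upper part (carry-in 0)
    return (low + high) & ((1 << width) - 1)
```

For example, loa_add(3, 5) yields 7 instead of the exact 8, because the cheap OR misses the lower-part carry; when no low bits collide, the result is exact.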
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched... (IJECEIAES)
Cloud Computing is the most powerful computing model of our time. While the major IT providers and consumers are competing to exploit the benefits of this computing model to grow their profits, most cloud computing platforms are still built on operating systems that use basic CPU (Central Processing Unit) scheduling algorithms lacking the intelligence needed for such an innovative computing model. Correspondingly, this paper presents the benefits of applying Artificial Neural Network algorithms to enhancing CPU scheduling for the Cloud Computing model. Furthermore, a set of characteristics and theoretical metrics are proposed for comparing the different Artificial Neural Network algorithms and finding the most accurate algorithm for Cloud Computing CPU scheduling.
Bibliometric analysis highlighting the role of women in addressing climate ch... (IJECEIAES)
Fossil fuel consumption has increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... (IJECEIAES)
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using the switching strategy, the low-order harmonics in the output current and voltage are not produced, and consequently, the size of the output filter would be reduced. In fact, the suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in the MG test systems, and the simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo... (IJECEIAES)
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a survey (IJECEIAES)
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to the research trends mentioned above by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level about smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions are highlighted. To that effect, we searched the Web of Science (WoS) and the Scopus databases. We obtained 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. With the extraction limitation in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ... (IJECEIAES)
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences for the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, which are divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Additionally, each approach is compared and assessed based on several factors, including implementation costs, non-detection zones, declining power quality, and response times, using the analytical hierarchy process (AHP). Based on the comparison of all criteria together, the multi-criteria decision-making analysis yields overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods. Thus, it can be seen from the total weights that hybrid approaches are the least suitable choice, while signal processing-based methods are the most appropriate islanding detection method to select and implement in a power system with respect to the aforementioned factors. Using Expert Choice software, the proposed hierarchy model is studied and examined.
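The AHP weighting step behind such percentages can be sketched in a few lines: normalize each column of a pairwise comparison matrix, then average each row to obtain priority weights. The matrices used in the checks are illustrative, not the paper's comparison data.

```python
# Hedged AHP sketch: derive priority weights from a pairwise
# comparison matrix by column normalization and row averaging.

def ahp_weights(matrix):
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]
```

For instance, a 2x2 matrix stating "criterion A is twice as important as B" yields weights of 2/3 and 1/3; a full AHP study would also check the consistency ratio of the judgments.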
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi... (IJECEIAES)
The power generated by photovoltaic (PV) systems is influenced by environmental factors. This variability hampers the control and utilization of solar cells' peak output. In this study, a single-stage grid-connected PV system is designed to enhance power quality. Our approach employs fuzzy logic in the direct power control (DPC) of a three-phase voltage source inverter (VSI), enabling seamless integration of the PV connected to the grid. Additionally, a fuzzy logic-based maximum power point tracking (MPPT) controller is adopted, which outperforms traditional methods like incremental conductance (INC) in enhancing solar cell efficiency and minimizing the response time. Moreover, the inverter's real-time active and reactive power is directly managed to achieve a unity power factor (UPF). The system's performance is assessed through MATLAB/Simulink implementation, showing marked improvement over conventional methods, particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m², the results show that the proposed method reduces the total harmonic distortion (THD) of the injected current to the grid by approximately 46% and 38% compared to conventional methods, respectively. Furthermore, we compare the simulation results with IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b... (IJECEIAES)
Photovoltaic systems have emerged as a promising energy resource that caters to the future needs of society, owing to their renewable, inexhaustible, and cost-free nature. The power output of these systems relies on solar cell radiation and temperature. In order to mitigate the dependence on atmospheric conditions and enhance power tracking, a conventional approach has been improved by integrating various methods. To optimize the generation of electricity from solar systems, the maximum power point tracking (MPPT) technique is employed. To overcome limitations such as steady-state voltage oscillations and improve transient response, two traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb and observe (P&O), have been modified. This research paper aims to simulate and validate the step size of the proposed modified P&O and FLC techniques within the MPPT algorithm using MATLAB/Simulink for efficient power tracking in photovoltaic systems.
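The classical P&O rule that such work modifies can be stated in a few lines: if the last voltage perturbation increased the extracted power, keep perturbing in the same direction; otherwise reverse. The fixed step size below is a placeholder for the adaptive step the modified methods introduce.

```python
# One perturb-and-observe (P&O) MPPT step, the conventional baseline:
# climb the power-voltage curve by comparing the last two samples.

def po_step(v, p, v_prev, p_prev, step=0.01):
    dp = p - p_prev
    dv = v - v_prev
    if dp == 0:
        return v                  # at (or near) the peak: hold
    if (dp > 0) == (dv > 0):
        return v + step           # power rose with the move: continue
    return v - step               # power fell: reverse direction
```

The fixed step is exactly what causes the steady-state oscillation the abstract mentions: near the peak the reference keeps bouncing by one step, which adaptive-step and fuzzy variants aim to shrink.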
Adaptive synchronous sliding control for a robot manipulator based on neural ... (IJECEIAES)
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed, the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results. Simulation and experimental results show that the proposed controller is effective, with small synchronous tracking errors, and that the chattering phenomenon is significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de... (IJECEIAES)
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m... (IJECEIAES)
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba... (IJECEIAES)
Rapid and remote monitoring of solar cell system status parameters, i.e. solar irradiance, temperature, and humidity, is a critical issue in enhancing their efficiency. Hence, in the present article an improved smart prototype of an internet of things (IoT) technique based on an embedded system using the NodeMCU ESP8266 (ESP-12E) was realized experimentally. Three different regions in Egypt (the cities of Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live via Ubidots over the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m² respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly support Luxor and Cairo, rather than El-Beheira, as suitable places to build a solar cell system station.
An efficient security framework for intrusion detection and prevention in int... (IJECEIAES)
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ... (IJECEIAES)
This research develops an incubator system that integrates the internet of things and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. Then, the data is sent in real-time to the internet of things (IoT) broker Eclipse Mosquitto using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory network (LSTM) method and displayed in a web application using an application programming interface (API) service. Furthermore, the experiments produced 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and other attributes ranges from 0.23 to 0.48. Next, several experiments were conducted to evaluate the model-predicted values on the test data. The best results are obtained using a two-layer LSTM configuration model, each layer with 60 neurons and a lookback setting of 6. This model produces an R² value of 0.934, with a root mean square error (RMSE) of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R² value was also evaluated for each attribute used as input, with values between 0.590 and 0.845.
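The RMSE and MAE figures quoted above follow the standard definitions; a direct transcription is shown below (the sample values in the checks are illustrative, not the paper's data).

```python
# Standard regression error metrics used to score the LSTM predictions.

def rmse(y_true, y_pred):
    # Root mean square error: penalizes large deviations more heavily.
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
            / len(y_true)) ** 0.5

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the deviations.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

Because RMSE squares each deviation before averaging, it is always at least as large as MAE on the same data, which matches the 0.015 versus 0.008 figures reported.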
A review on internet of things-based stingless bee's honey production with im... (IJECEIAES)
Honey is produced exclusively by honeybees and stingless bees, both of which are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees are known for producing small amounts of honey with a unique flavor profile. A problem identified is that many stingless bee colonies collapse due to weather, temperature, and environment. It is critical to understand the relationship between the production of stingless bee honey and environmental conditions to improve honey production. Thus, this paper presents a review of stingless bee honey production and prediction modeling. About 54 previous studies have been analyzed and compared to identify the research gaps. A framework for modeling the prediction of stingless bee honey is derived. The result presents the comparison and analysis of internet of things (IoT) monitoring systems, honey production estimation, convolutional neural networks (CNNs), and automatic identification methods for bee species. Based on image detection methods, the top three in efficiency are CNN at 98.67%, densely connected convolutional networks with YOLO v3 at 97.7%, and DenseNet201 convolutional networks at 99.81%. This study is significant in assisting researchers in developing a model for predicting the honey output of stingless bees, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for interop... (IJECEIAES)
The internet of things (IoT) is a revolutionary innovation in many aspects of our society including interactions, financial activity, and global security such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are the long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. Using the compact Contiki/Cooja simulator, the performance and design of the suggested framework are validated. The simulation findings are evaluated based on the detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced the packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJoules at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbers (IJECEIAES)
In real world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers offer a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can be different from the non-membership value, in which case we can model using intuitionistic fuzzy numbers, as they provide flexibility by defining both a membership and a non-membership function. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the previous polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. This method is given in a simple tableau formulation, and then applied on numerical examples for clarity.
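The membership/non-membership pair that distinguishes intuitionistic fuzzy numbers from ordinary ones can be sketched for the triangular special case (a polygonal number with a single breakpoint). The supports below are illustrative: the non-membership support is deliberately chosen wider so that membership plus non-membership never exceeds 1 and the leftover models hesitation.

```python
# Triangular intuitionistic fuzzy number sketch: membership mu peaks at
# b over [a, c]; non-membership nu vanishes at b over the wider support
# [a2, c2] (a2 <= a, c2 >= c), so mu + nu <= 1 and the leftover
# pi = 1 - mu - nu models hesitation.

def mu_tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def nu_tri(x, a2, b, c2):
    if x <= a2 or x >= c2:
        return 1.0
    return (b - x) / (b - a2) if x <= b else (x - b) / (c2 - b)
```

A polygonal number generalizes this by replacing each straight side with a piecewise-linear chain of n breakpoints, which is the form the modified simplex tableau operates on.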
The performance of artificial intelligence in prostate magnetic resonance im... (IJECEIAES)
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The possibility of these characteristics is explored by examining how well timedomain signals work in the detection of epileptic signals using intracranial Freiburg Hospital (FH), scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) databases, and Temple University Hospital (TUH) EEG. To test the viability of this approach, two types of experiments were carried out. Firstly, binary class classification (preictal, ictal, postictal each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using CHB-MIT database was 84.4%, while the Freiburg database's time-domain signals had an accuracy of 79.7% and the highest accuracy of 94.02% for classification in the TUH EEG database when comparing interictal stage to preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but the increase in acceleration speeds and their maneuverability leads to a dangerous driving style for some drivers. In these conditions, the development of a method that allows you to track the behavior of the driver is relevant. The article provides an overview of existing methods and models for assessing the functioning of motor vehicles and driver behavior. Based on this, a combined algorithm for recognizing driving style is proposed. To do this, a set of input data was formed, including 20 descriptive features: About the environment, the driver's behavior and the characteristics of the functioning of the car, collected using OBD II. The generated data set is sent to the Kohonen network, where clustering is performed according to driving style and degree of danger. Getting the driving characteristics into a particular cluster allows you to switch to the private indicators of an individual driver and considering individual driving characteristics. The application of the method allows you to identify potentially dangerous driving styles that can prevent accidents.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution of greater areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. The HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, the HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been completed. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve semantic object inherent features; This study addresses the research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining object intrinsic qualities; Lastly, a soft-margins kernel is proposed for multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on Binary Heap data structure. It is similar to the selection sort where we first find the minimum element and place the minimum element at the beginning. Repeat the same process for the remaining elements.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERSveerababupersonal22
It consists of cw radar and fmcw radar ,range measurement,if amplifier and fmcw altimeterThe CW radar operates using continuous wave transmission, while the FMCW radar employs frequency-modulated continuous wave technology. Range measurement is a crucial aspect of radar systems, providing information about the distance to a target. The IF amplifier plays a key role in signal processing, amplifying intermediate frequency signals for further analysis. The FMCW altimeter utilizes frequency-modulated continuous wave technology to accurately measure altitude above a reference point.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...ssuser7dcef0
Power plants release a large amount of water vapor into the
atmosphere through the stack. The flue gas can be a potential
source for obtaining much needed cooling water for a power
plant. If a power plant could recover and reuse a portion of this
moisture, it could reduce its total cooling water intake
requirement. One of the most practical way to recover water
from flue gas is to use a condensing heat exchanger. The power
plant could also recover latent heat due to condensation as well
as sensible heat due to lowering the flue gas exit temperature.
Additionally, harmful acids released from the stack can be
reduced in a condensing heat exchanger by acid condensation. reduced in a condensing heat exchanger by acid condensation.
Condensation of vapors in flue gas is a complicated
phenomenon since heat and mass transfer of water vapor and
various acids simultaneously occur in the presence of noncondensable
gases such as nitrogen and oxygen. Design of a
condenser depends on the knowledge and understanding of the
heat and mass transfer processes. A computer program for
numerical simulations of water (H2O) and sulfuric acid (H2SO4)
condensation in a flue gas condensing heat exchanger was
developed using MATLAB. Governing equations based on
mass and energy balances for the system were derived to
predict variables such as flue gas exit temperature, cooling
water outlet temperature, mole fraction and condensation rates
of water and sulfuric acid vapors. The equations were solved
using an iterative solution technique with calculations of heat
and mass transfer coefficients and physical properties.
ISSN: 2088-8708, IJECE Vol. 7, No. 5, October 2017: 2451 – 2458
Fault injection techniques include hardware-implemented fault injection, software-implemented fault injection and simulated fault injection. Advantages of simulated fault injection include cost reduction in the design process due to early diagnosis, avoidance of redesign in case of error and thus a shorter time-to-market [13], [14]. Very High Speed Integrated Circuit Hardware Description Language [VHDL] based fault injection receives widespread recognition due to its flexibility combined with a high degree of controllability and observability over all the components of the simulated model [12].
A cost-effective, non-intrusive technique similar to duplication with comparison, wherein the duplicated function module and a comparator together act as a function checker to detect any erroneous response of the original function module, has been presented in [15]. Based on VHDL descriptions, the implementation of separable codes for concurrent error detection within VLSI ICs has been described in [16]. An approach for generating two-level combinational circuits with concurrent error detection capability, based on three different techniques, has been proposed in [17].
An in-depth review of the literature related to self healing, along with a detailed survey and synthesis based on a developed taxonomy, has been presented in [18]. A self healing architecture based on the human immune system, suitable for tolerating soft errors occurring in VLSI based digital systems, has been proposed in [19]. The error in the digital circuit is treated as an antigen, and a distributed defence mechanism is evolved to heal the system from the effect of the error.
Three different architectures using online checkers for error detection, which in turn initiate the reconfiguration of the faulty unit, are presented in [20]. The modification of fault tolerant architectures into partially reconfigurable modules and the significant advantages of partial dynamic reconfiguration when employed in fault tolerant system design are demonstrated. A hybrid fault tolerant architecture has been proposed in [21] to improve the robustness of logic CMOS circuits; it combines different types of redundancy to tolerate transient as well as permanent faults. A new hybrid fault-tolerant architecture to improve the robustness of digital CMOS circuits and systems has been presented in [22]. This architecture employs information redundancy for error detection, timing redundancy for transient error correction and hardware redundancy for permanent error correction.
Despite all these attempts, there is still an exigency for a paradigm shift in the design of fault tolerant digital systems. The proposed scheme applies a healing strategy to counteract stuck at faults in multi level combinational circuits with a view to nullifying their influence on the system. The scheme employs the simulated fault injection technique on the basis of its ability to validate the dependability of systems during the design phase. It distinguishes itself from the current literature in that it unveils a novel black box modelling approach for the formation of the healing mechanism, which tolerates single as well as multiple bit faults and makes the system truly fault tolerant.
The rest of the paper is organized under three sections: design methodology, results and discussion, and conclusion.
2. DESIGN METHODOLOGY
The primary aim is to evolve a self healing strategy to survive stuck at faults occurring in the combinational logic of any digital circuit. The procedure seeks to create a thoroughly reliable system that provides fault free output even on the occurrence of faults. It involves the simulated fault injection procedure to inject faults at the interconnect levels of the system and lays down measures to identify their occurrence in order to formulate the healing sequence. The functional status of the proposed architecture is verified on the Modelsim platform, and its practical suitability is ensured with the help of a Xilinx FPGA.
The central theme of the scheme is to negate the effect of stuck-at faults present at the interconnect levels of combinational circuits and bring out the expected behaviour. The procedure strives to ensure reliability in the flow of signals and deliver the desired performance. The scheme incorporates a built-in healing procedure that can eliminate the impact of stuck at faults present at the interconnect levels and produce true values on the primary output lines of the system, thereby making it a self healing system by its very nature.
Internally, the ROM belongs to the category of combinational circuits: it can be implemented with AND gates connected as a decoder and a number of OR gates equal to the number of outputs. The ROM is thus a two level implementation in sum of minterms form, so that each of its outputs provides a sum of minterms of the "n" inputs. The Boolean functions of the CUT, expressed in sum of minterms, are:
F1(A2,A1,A0) = ∑(1,3,5,7)
F2(A2,A1,A0) = ∑(0,1,4,5)
Black Box Model based Self Healing Solution for Stuck at Faults in Digital Circuits (S. Meyyappan)
F3(A2,A1,A0) = ∑(2,3,4,5,6)
F4(A2,A1,A0) = ∑(3,4,7)
F5(A2,A1,A0) = ∑(0,1,2,3)
F6(A2,A1,A0) = ∑(0,5,6)
F7(A2,A1,A0) = ∑(2,3,4,5)
F8(A2,A1,A0) = ∑(1,2,3,4,5,6)
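As a behavioural sketch (in Python rather than the paper's VHDL, with the minterm sets transcribed from the functions above), the CUT's first stage can be modelled as a 3-to-8 decoder and the second stage as OR functions over the listed minterms:

```python
# Behavioural model of the 8x8 ROM CUT: a 3-to-8 decoder (first stage)
# followed by eight OR functions, one per output, defined by minterm sets.
MINTERMS = {
    "F1": {1, 3, 5, 7},
    "F2": {0, 1, 4, 5},
    "F3": {2, 3, 4, 5, 6},
    "F4": {3, 4, 7},
    "F5": {0, 1, 2, 3},
    "F6": {0, 5, 6},
    "F7": {2, 3, 4, 5},
    "F8": {1, 2, 3, 4, 5, 6},
}

def decoder(a):
    """First stage: one-hot decode of the 3-bit address a (0..7)."""
    return [1 if i == a else 0 for i in range(8)]

def rom_outputs(a):
    """Second stage: each output ORs the decoder lines of its minterms."""
    lines = decoder(a)
    return {f: int(any(lines[m] for m in ms)) for f, ms in MINTERMS.items()}

out = rom_outputs(0b100)   # address A2A1A0 = 100, i.e. minterm 4
# Exactly the functions whose minterm sets contain 4 (F2, F3, F4, F7, F8)
# are asserted for this address.
```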
The chosen CUT is a ROM of size 8X8 with three inputs and eight outputs. Figures 1 and 2 show the logic diagram of the CUT implemented using the 8X8 ROM and the block diagram of the self healing mechanism respectively.
Figure 1. Logic diagram of the CUT implemented using 8X8 ROM
In general, a model is one which represents the behaviour of a physical system. A model can be a black box model, a white box model, or a combination of both called a grey box model. In black box modelling, the system is simply considered as a black box in the sense that the tester does not possess any prior knowledge of the internal structure and functions of the system but thoroughly knows that a particular input should return a certain invariable output. In other words, the tester is aware of what the system is supposed to do but not of how it does it. The proposed scheme utilizes this fact to build the healing architecture, which assumes the role of the tester and brings out the desired output based on the already established input-output relationship of the system, regardless of the presence of faults at the interconnect levels.
Figure 2. Block diagram of the self healing mechanism (the actual and desired first stage outputs feed a comparator; on error, the healer corrects the line before the second stage drives the primary outputs)
It is in this perspective that the scheme houses a healing circuit, consisting of eight EX-OR gates equal to the number of outputs in the first stage of the 8X8 ROM, as an integral part of the system. The desired output and the corresponding actual outcome from each output line of the decoder together form the inputs to each EX-OR gate. The approach monitors the outputs of the EX-OR gates and senses a fault when the output of any EX-OR gate goes high. It then toggles the logic state of the faulty interconnect line and reverts it to its fault free state. The rigorous process of continuous monitoring of signal flow and the ability to take corrective action on the fly make the system a truly self healing one. The algorithm below enumerates the steps involved in harmonizing the fault injection mechanism and the healing part with the rest of the system.
Algorithm
1. Determine the outputs of the system stage by stage for the given set of primary inputs
2. Check the status of the control signal
3. If “control” is not enabled then
4. Get the fault free outputs of the system without fault injection
5. Else
6. Choose any of the interconnect line(s) in the first stage of the system randomly
7. Inject fault on the chosen line(s)
8. Heal the system with the built in self healing facility
9. Obtain the fault free primary outputs of the system
10. End if
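The detect-and-heal step above can be illustrated with a small Python sketch (a behavioural stand-in for the paper's VHDL; the list ordering and signal names are illustrative): the EX-OR gates flag every decoder line that differs from its expected value, and each flagged line is toggled back.

```python
def heal_first_stage(actual, desired):
    """XOR each actual decoder line with its desired value; a 1 flags a
    stuck-at fault, and toggling the flagged line restores the true value."""
    flags = [a ^ d for a, d in zip(actual, desired)]   # EX-OR comparator
    healed = [a ^ f for a, f in zip(actual, flags)]    # toggle faulty lines
    return healed, flags

desired = [0, 0, 0, 1, 0, 0, 0, 0]   # fault-free decoder pattern, int(7..0)
faulty  = [0, 0, 0, 1, 0, 0, 1, 0]   # int(1) stuck-at 1 injected
healed, flags = heal_first_stage(faulty, desired)
# healed matches the desired pattern regardless of how many lines were stuck
```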
3. RESULTS AND DISCUSSION
When the CUT operates in the fault free state, it produces the output "11110000" for the input combination "100", in accordance with its input-output relationship. On the other hand, the appearance of stuck at faults on the intermediate lines causes the circuit to generate faulty output. Figure 3 depicts one such state of the CUT, in which the interconnect lines int(0) and int(4) are stuck at 1 and stuck at 0 respectively.
Figure 3. The CUT in the presence of stuck at faults
Table 1 displays the different outputs generated by the circuit for the same input combination as a consequence of stuck at faults occurring at the interconnect levels.
Table 1. Faulty operating condition of the CUT
* int(1) is stuck at 1
** int(1) and int(4) are s-a-1 & s-a-0 respectively
The Modelsim based simulation results displayed in Figures 4 to 7 elucidate the ability of the proposed healing strategy to keep the CUT in its fault tolerant state. The response in Figure 4 shows that as long as the control signal is at logic 0, it inhibits the fault injection campaign and the system operates in an error free environment.
Figure 4. Output of the CUT when the control signal “con” is at logic 0
However, enabling the control signal, as seen in Figures 5 and 6, permits the introduction of stuck-at faults based on the logic values of the 4 bit fault generator "fg": at 500 ns the second interconnect line, i.e., int(1), is made stuck-at logic 1, and at 1000 ns the fifth interconnect line, i.e., int(4), is made stuck-at logic 0. Despite the introduction of faults, the circuit continues to generate correct information on its primary output lines, thanks to the innate ability of the healing mechanism inherently associated with it.
Figure 5. Output of the CUT when the intermediate line “int(1)” is stuck-at 1
Figure 6. Output of the CUT when the intermediate lines “int(1 & 4)” are stuck at 1 & 0 respectively
Figure 7. Output of the self healing ROM when the control signal “con” reverts to logic 0
Figure 7 elaborates the retraction of the system into its fault free state as the control signal reverts to logic 0 at 1500 ns. Table 2 highlights the logic states of the signals associated with the CUT as a consequence of the sequence of events occurring at different simulation instants.
Table 2. Logic states of signals at different simulation instants

Simulation Instant    con (1)   fg (4)   A (3)   int (8)       F (8)
0 ns to 499 ns        0         0100     100     00010000      11110000
500 ns to 999 ns      1         0100     100     00010010 *    11110000
1000 ns to 1499 ns    1         1000     100     00000010 **   11110000
1500 ns to 2000 ns    0         1000     100     00010000      11110000

* int(1) is stuck at 1
** int(1) and int(4) are s-a-1 & s-a-0 respectively
The simulation results clearly show that the proposed self healing architecture exhibits excellent resilience and meets the design specifications even under faulty conditions. The reconfigurability of the FPGA allows the designer to incorporate fault tolerant features and helps realize the logical implementation of the specified design. The real time implementation of the proposed self healing scheme on an XC3S500E FPGA using the Xilinx Foundation series ISE 9.2i platform validates the simulated performance of the VHDL code developed for the CUT and endorses its practical suitability.
3.1. Performance Analysis
Table 3 compares the fault coverage and the required area overhead of the proposed scheme with TMR for the CUT. Fault coverage expresses the ability of the system to tolerate faults at a given point in time, while overhead indicates the amount of additional chip area needed to implement the design. The percentage overhead is calculated using the following relation:
% Overhead = (No. of redundant gates / No. of actual gates to implement the logic) × 100
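As a quick check of this relation in Python, the reported percentages are consistent with 16 redundant gates (eight XOR and eight NOT, one pair per intermediate line) over 19 original gates for the proposed scheme, and 70 redundant gates over the same 19 for TMR; these gate counts are inferred from the ratios, not stated explicitly in the paper:

```python
def overhead_pct(redundant, actual):
    """% Overhead = redundant gates / actual gates * 100 (relation above)."""
    return round(redundant / actual * 100, 2)

# Inferred gate counts: 16 redundant (8 XOR + 8 NOT) over 19 original
# gates reproduces the reported 84.21% for the proposed scheme.
proposed = overhead_pct(8 + 8, 19)   # 84.21
# 70 redundant gates (two CUT copies plus voters) over 19 reproduces
# the reported 368.42% for TMR.
tmr = overhead_pct(70, 19)           # 368.42
```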
Table 3. Comparison Summary

Methodology                           Fault Coverage                                  Area Overhead
Proposed Self healing Architecture    100% for both single and multiple bit faults    84.21%
Triple Modular Redundancy             Tolerates only single bit faults                368.42%
The proposed self healing architecture enjoys a clear edge over TMR, which does not yield a cost effective solution. Furthermore, TMR does not guarantee fault free output: if two or all three of the inputs to the voter turn out to be faulty, it allows the faulty signal to proceed further in the system. The other important issue with TMR is its area overhead, which goes as high as 368.42% since it requires two identical copies of the CUT and the voter along with the original CUT. The proposed scheme, on the other hand, requires only as many XOR gates and NOT gates as there are intermediate lines in the CUT, which results in a significantly lower area overhead of 84.21% while providing one hundred per cent fault coverage against single as well as multiple bit errors.
4. CONCLUSION
A black box model based self healing scheme has been formulated to detect and correct stuck at faults occurring at the interconnect levels of multi level combinational circuits. The merits of the simulated fault injection procedure have been utilized to attain a very high degree of controllability and observability in generating faults. Reliability of the system performance has been ensured through the self healing ability of the proposed methodology. The Modelsim based simulation results obtained for the chosen combinational circuit implemented using a ROM add strength to the simplicity and veracity of the proposed approach.
The VHDL code developed for the proposed self healing architecture has been validated on an XC3S500E FPGA using the Xilinx Foundation series ISE 9.2i to exhibit its suitability for use in practice. The comparative analysis with the traditional TMR based approach brought out the benefits of the scheme, as it comprehensively outperforms TMR in terms of fault coverage and area overhead.
REFERENCES
[1] Kshirsagar R.V, Patrikar R.M. A New Design Approach for Tolerating Interconnect Faults in Digital Circuits. Proceedings of SPIT-IEEE Colloquium and International Conference, India. 2007; 2: 117-120.
[2] Kshirsagar R.V, Patrikar R.M. A Novel Fault Tolerant Design and an Algorithm for Tolerating Faults in Digital Circuits. 3rd International Design and Test Workshop, Tunisia, December 2008.
[3] Cristian Constantinescu. Intermittent Faults in VLSI Circuits. 2nd Workshop on Silicon Errors in Logic - System Effects (SELSE2), April 2006.
[4] Cristian Constantinescu. Impact of Intermittent Faults on Nanocomputing Devices. WDSN-07, Edinburgh, UK, June 2007.
[5] Gracia J, Saiz L.J, Baraza J.C, Gil D, Gil P.J. Analysis of the Influence of Intermittent Faults in a Microcontroller. 11th IEEE International Workshop on Design and Diagnostics of Electronic Circuits and Systems, April 2008; 80-85.
[6] Saiz L.J, Gracia J, Baraza J.C, Gil D, Gil P.J. Applying Fault Injection to Study the Effects of Intermittent Faults. 7th European Dependable Computing Conference, Kaunas. May 2008: 67-69.
[7] Parag K. Lala. Self-Checking and Fault Tolerant Digital Design. Morgan Kaufmann Publishers. 2001.
[8] Lala P.K, Kiran Kumar B, Parkerson J.P. On Self-Healing Digital System Design. Microelectronics Journal. 2006; 37(4): 353-362.
[9] Marwa Ben Hammouda, Mohamed Najeh Lakhoua, Lilia El Amraoui. Dependability Evaluation and Supervision in Thermal Power Plants. International Journal of Electrical and Computer Engineering (IJECE). 2015; 5(5): 905-917.
[10] Hamid Bagheri, Mohammad Ali Torkamani, Zhaleh Ghaffari. Architectural Approaches for Self-Healing Systems Based on Multi Agent Technologies. International Journal of Electrical and Computer Engineering (IJECE). 2013; 3(6): 779-783.
[11] Baraza J.C, Gracia J, Gil D, Gil P.J. A Prototype of a VHDL-Based Fault Injection Tool: Description and Application. Journal of Systems Architecture. 2002; 47(10): 847-867.
[12] Gil D, Gracia J, Baraza J.C, Gil P.J. Study, Comparison and Application of Different VHDL-Based Fault Injection Techniques for the Experimental Validation of a Fault-Tolerant System. Microelectronics Journal. 2003; 34(1): 41-51.
[13] Jongwhoa Na, Lee D.W. Simulated Fault Injection Using Simulator Modification Technique. ETRI Journal. 2011; 33(1): 50-59.
[14] Lee D.W, Na J.W. Novel Simulation Fault Injection Method for Dependability Analysis. IEEE Design & Test of Computers. 2009; 26(6): 50-61.
[15] Goran Lj. Djordjevic, Mile K. Stojcev, Tatjana R. Stankovic. Approach to Partially Self-Checking Combinational Circuits Design. Microelectronics Journal. 2004; 35(12): 945-952.
[16] Stojcev M.K, Djordjevic G.Lj, Stankovic T.R. Implementation of Self-Checking Two-Level Combinational Logic on FPGA and CPLD Circuits. Journal of Microelectronics Reliability. 2004; 44(1): 173-178.
[17] Tatjana R. Stankovic, Mile K. Stojcev, Goran Lj. Djordjevic. On VHDL Synthesis of Self-Checking Two-Level Combinational Circuits. Electrical Energy. 2004; 17: 69-79.
[18] Debanjan Ghosh, Raj Sharman, H. Raghav Rao, Shambhu Upadhyaya. Self-Healing Systems - Survey and Synthesis. Decision Support Systems. 2006; 42(4): 2164-2185.
[19] Lala P.K, B. Kiran Kumar. An Architecture for Self-Healing Digital Systems. Journal of Electronic Testing: Theory and Applications, Kluwer Academic Publishers. 2003; 19: 523-535.
[20] Straka M, Kastil J, Kotasek Z. Modern Fault Tolerant Architectures Based on Partial Dynamic Reconfiguration in FPGAs. 13th IEEE International Symposium on Design and Diagnostics of Electronic Circuits and Systems, IEEE Computer Society, USA. 2010; 336-341.
[21] Tran D.A, Virazel A, Bosio A, Dilillo L, Girard P, Pravossoudovitch S, Wunderlich H.-J. A Hybrid Fault Tolerant Architecture for Robustness Improvement of Digital Circuits. Proceedings of the 20th IEEE Asian Test Symposium (ATS11). 2011; 136-141.
[22] Tran D.A, Virazel A, Bosio A, Dilillo L, Girard P, Pravossoudovitch S, Wunderlich H.-J. A New Hybrid Fault-Tolerant Architecture for Digital CMOS Circuits and Systems. Journal of Electronic Testing: Theory and Applications (JETTA). 2014; doi: 10.1007/s10836-014-5459-3.
BIOGRAPHIES OF AUTHORS
S. Meyyappan has been a faculty member in Engineering Education for the past 13
years and is currently working in the capacity of Assistant Professor in the Department
of Electronics and Instrumentation Engineering, Annamalai University, India. His
research interests include Fault Tolerant Digital Systems, Embedded Control and
Process Automation.
V. Alamelumangai is presently holding the post of Professor in the Department of
Electronics and Instrumentation Engineering, Annamalai University, India. Her
teaching experience spans over 20 years. Her areas of specialization are Digital System
Design, Process Control and Industrial Instrumentation.