IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
UNIT-II CPLD & FPGA Architectures and Applications - Dr.YNM
This document provides an overview of Xilinx programmable gate array (PGA) architecture and its components. The key components are configurable logic blocks (CLBs) that contain programmable combinational logic and flip-flops, input/output blocks (IOBs) that provide interfaces, and a programmable interconnect that allows any two points to be connected. The architecture uses these components along with an external memory chip to implement user logic functions by loading a configuration onto the chip.
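The CLB's programmable combinational logic can be understood as a look-up table: a k-input LUT is simply a 2^k-entry truth table addressed by the inputs. The following is a minimal Python sketch of that idea; the names and table format are illustrative, not any vendor's actual configuration format.

```python
# Sketch: a k-input LUT modeled as a truth table, the way a CLB's
# programmable combinational logic works conceptually.

def make_lut(truth_table):
    """truth_table[i] is the output when the inputs encode integer i."""
    def lut(*inputs):
        index = 0
        for bit in reversed(inputs):   # inputs[0] is the least significant bit
            index = (index << 1) | bit
        return truth_table[index]
    return lut

# "Configure" a 3-input LUT as a majority-vote function.
majority = make_lut([0, 0, 0, 1, 0, 1, 1, 1])
```

Reconfiguring the device amounts to loading a different truth table, which is why the same CLB can implement any function of its inputs.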
Fault Injection Approach for Network on Chip - ijsrd.com
Packet-based on-chip interconnection networks, or Networks-on-Chip (NoCs), are progressively replacing global on-chip interconnects in multi-processor systems-on-chip (MP-SoCs) thanks to better performance and lower power consumption. However, modern generations of MP-SoCs are increasingly sensitive to faults because of progressive technology shrinking. Consequently, evaluating the fault sensitivity of NoC architectures requires an accurate test solution that can assess their fault-tolerance capability. This paper presents an innovative test architecture based on a dual-processor system that can extensively test mesh-based NoCs. The proposed solution improves on previously developed methods because it is based on a physical NoC implementation, which allows the effects induced by several kinds of faults to be investigated through on-line fault injection within all network-interface and router resources during NoC run-time operation. The solution has been physically implemented on an FPGA platform using a NoC emulation model that adopts standard communication protocols. The results demonstrate the effectiveness of the developed solution in terms of testability and diagnostic capability and make it suitable for testing large-scale NoCs.
The document discusses programmable logic arrays (PLAs) and their minimization and testing. It describes how PLAs can be used to implement combinational and sequential logic. PLA minimization techniques include removing redundant product terms and raising terms to optimize area usage. Folding is also described as a technique to minimize the PLA by allowing columns and rows to be shared, reducing the overall size. Column and row folding algorithms are discussed as well as their complexity.
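Redundant-term removal can be demonstrated concretely. Below is a hedged sketch in Python: a PLA is a list of product terms (cubes), and a cube is redundant if dropping it leaves the function unchanged on every input. The exhaustive check is only viable for small input counts; real minimizers use smarter algorithms, and the example function is illustrative.

```python
from itertools import product

# Sketch: PLA as a list of product terms (cubes) over n inputs; each cube
# maps a variable index to its required literal (1 or 0).

def eval_pla(cubes, assignment):
    """Sum-of-products: OR over cubes, AND over each cube's literals."""
    return any(all(assignment[v] == lit for v, lit in cube.items())
               for cube in cubes)

def remove_redundant(cubes, n):
    kept = list(cubes)
    for cube in list(kept):
        trial = [c for c in kept if c is not cube]
        if all(eval_pla(trial, a) == eval_pla(kept, a)
               for a in product((0, 1), repeat=n)):
            kept = trial            # cube was redundant; drop it
    return kept

# f = ab + a'c + bc over (a, b, c); bc is the consensus term, hence redundant.
cubes = [{0: 1, 1: 1}, {0: 0, 2: 1}, {1: 1, 2: 1}]
minimized = remove_redundant(cubes, 3)
```

After minimization only the two essential terms remain, and the function is unchanged on all eight input combinations.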
This document discusses built-in self-test (BIST) techniques for testing combinational circuits using FPGAs. It begins with an introduction to BIST and discusses reasons for on-chip testing compared to off-chip testing. It then reviews different types of faults in integrated circuits and compares digital and analog testing techniques. The document also compares design-for-test (DFT) and BIST techniques. Finally, it classifies existing BIST techniques and reviews approaches based on input vectors, test operation modes, and response analysis domain.
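The two standard BIST building blocks are a pseudo-random pattern generator (typically an LFSR) and a response compactor (typically a MISR) that folds circuit outputs into a short signature. The sketch below models both in Python; the tap positions, widths, and circuit-under-test are illustrative assumptions, not from any specific design.

```python
# Sketch of BIST pattern generation and response compaction.

def lfsr_patterns(seed, taps, width, count):
    """Fibonacci LFSR: next input bit is the XOR of the tapped bits.
    Taps (3, 0) with width 4 correspond to x^4 + x + 1 (maximal length)."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def misr_signature(responses, width):
    """Multiple-input signature register: rotate, then XOR each response in."""
    sig = 0
    for r in responses:
        sig = (((sig << 1) | (sig >> (width - 1))) ^ r) & ((1 << width) - 1)
    return sig

# BIST a toy 4-bit circuit-under-test.
cut = lambda x: (x ^ (x >> 1)) & 0xF
patterns = list(lfsr_patterns(seed=0b1001, taps=(3, 0), width=4, count=15))
good_sig = misr_signature((cut(p) for p in patterns), width=4)
# A stuck-at-1 fault on bit 0 changes the responses and, usually, the signature
# (aliasing, where a faulty circuit produces the good signature, is possible).
faulty_sig = misr_signature((cut(p) | 0b0001 for p in patterns), width=4)
```

The 4-bit LFSR cycles through all 15 non-zero states, so the circuit is exercised with every non-trivial input pattern before the signature is compared against the known-good value.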
This document discusses fault tolerance techniques for field programmable gate arrays (FPGAs). It begins with an abstract and introduction describing how FPGAs are complex and must work reliably. It then covers FPGA architecture and different types of faults that can occur. The main methods of fault detection discussed are functional redundancy, off-line testing (built-in self-test), and roving testing. Functional redundancy uses additional logic for detection and has fast detection but high area overhead. Off-line testing has no performance impact but can only detect faults during dedicated test modes. Roving testing exploits run-time reconfiguration to test parts of the FPGA online with low overhead.
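The trade-off described for functional redundancy (fast detection, high area overhead) can be sketched as duplication-with-comparison: run two copies of the logic and flag a fault the moment they disagree. The Python model below is illustrative only; the module names and fault model are assumptions.

```python
# Sketch: functional redundancy by duplication-with-comparison
# (fast detection, roughly 2x area overhead).

def duplicated(module_a, module_b):
    def checked(*inputs):
        out_a, out_b = module_a(*inputs), module_b(*inputs)
        return out_a, out_a != out_b      # (result, fault_detected)
    return checked

good = lambda a, b: a ^ b                 # intended logic: 2-input XOR
stuck_at_1 = lambda a, b: 1               # faulty copy: output stuck-at-1

healthy = duplicated(good, good)
faulty = duplicated(good, stuck_at_1)
# Note: a fault is only detected on inputs where the two copies disagree;
# faulty(1, 0) produces 1 from both copies and the fault stays hidden.
```

This also shows why detection is immediate (the comparator fires in the same cycle as the erroneous output) but not free: every protected block is instantiated twice.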
This document discusses fault tolerance in FPGA-based systems. It begins by defining an FPGA and its architecture, consisting of programmable logic blocks and a routing matrix. Fault tolerance aims to prevent failures caused by defects introduced during manufacturing. Methods of fault detection discussed include redundant error detection, offline testing, and roving tests. The document also covers single and multiple fault tolerance, as well as hardware, configuration, and system-level approaches. It provides VHDL code examples and concludes that the best solution combines dynamic and static fault tolerance methods to adapt to different environments.
This document summarizes a new fault injection approach for testing network-on-chip (NoC) architectures. The approach uses a dual-processor system on an FPGA to inject faults into a NoC design under test and evaluate the effects. Faults are injected by modifying the FPGA configuration memory to physically implement different fault models. The approach allows testing of routing and logic resources without intrusive test modules. Experimental results demonstrate the effectiveness of classifying faults in a mesh NoC case study implemented on the FPGA.
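The key mechanism (altering the configuration memory so the fault is physically implemented) can be illustrated with a LUT model: flipping one configuration bit changes the logic function the fabric realizes. This Python sketch uses an assumed, simplified configuration format, not the FPGA's real bitstream layout.

```python
# Sketch: fault injection by flipping one bit of a LUT's configuration
# "memory", mimicking configuration-memory-based fault injection.

def lut_eval(config_bits, inputs):
    index = sum(bit << i for i, bit in enumerate(inputs))
    return (config_bits >> index) & 1

def inject(config_bits, bit_position):
    return config_bits ^ (1 << bit_position)   # flip one configuration bit

AND2 = 0b1000                                  # truth table of a 2-input AND
faulty = inject(AND2, bit_position=0)          # fault: output is now also 1
                                               # for inputs (0, 0)
```

No extra test circuitry is inserted into the design under test: the fault exists because the configuration itself was changed, which is exactly the non-intrusive property the approach claims.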
UNIT-III CASE STUDIES - FPGA & CPLD ARCHITECTURES & APPLICATIONS - Dr.YNM
voltage circuits from the programming voltage.
This document discusses different types of programming technologies used in field programmable gate arrays (FPGAs). It describes SRAM-based programming technology, which is the most commonly used technology due to its re-programmability and use of standard CMOS processes. Flash programming technology and anti-fuse programming technology are also discussed. Each technology has advantages and disadvantages related to factors like area efficiency, volatility, re-programmability, and process requirements. The document provides detailed information on how each technology works at a circuit level.
The document discusses the architecture of CPLDs and FPGAs. It begins by explaining the problems with using basic logic gates on PCBs and introduces programmable logic devices as a solution. It then describes different types of PLDs including PLA, PAL, GAL, CPLD and FPGA. CPLDs have a complexity between FPGAs and basic PLDs, containing non-volatile memory and supporting larger logic than PLDs. FPGAs contain logic cells, interconnects, and can implement thousands of gates. The document provides examples of implementing logic with different PLDs and describes the architecture and programming of CPLDs and FPGAs.
The document provides an overview of FPGA routing, which is an important step in the CAD process that connects logic blocks placed on the FPGA. It discusses the routing resources in Xilinx FPGAs including connection boxes, switch boxes, and wire segments. It also describes the FPGA routing model commonly used in academia, which simplifies the island-style architecture of commercial FPGAs. Efficient routing aims to minimize wiring area and critical path lengths to improve circuit performance.
This document summarizes a seminar on FPGA, CPLD, and VHDL programming basics. The seminar schedule includes sessions comparing FPGA technologies with earlier programmable devices such as CPLDs, an overview of Microsemi FPGA devices, and an introduction to VHDL. There is also an application example of using an FPGA for an Ethernet bus interface board and a discussion of current trends and technologies.
This document discusses the architecture of Xilinx CoolRunner CPLDs. It provides an overview of Xilinx CPLD technologies including CoolRunner XPLA3 and CoolRunner-II. For the CoolRunner XPLA3, it describes the features and specifications, and details the architecture including the high-level block diagram, function block, macrocell, and I/O cell. For the CoolRunner-II, it lists the key features and specifications. The document is intended to explain the architectures of these Xilinx CPLD families.
This document provides an overview of FPGAs and VHDL. It describes what an FPGA is and its advantages over an integrated circuit. It explains the basic architecture of an FPGA including configurable logic blocks, slices, look-up tables, multiplexers, carry chains and flip-flops. It also discusses VHDL in terms of abstraction levels, behavioral and register transfer level descriptions. Examples of combinational and sequential logic blocks in VHDL are provided including a 3-to-8 decoder and an ALU.
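The document's examples are in VHDL; as a language-neutral companion, here is a behavioral Python model of the same 3-to-8 decoder, with an enable input added as an illustrative assumption (the original VHDL example may not include one).

```python
# Behavioral sketch of a 3-to-8 decoder: a one-hot output selected by a
# 3-bit input, the combinational building block cited above.

def decoder_3to8(a2, a1, a0, enable=1):
    """Bit i of the result is 1 iff the inputs encode integer i."""
    if not enable:
        return [0] * 8
    index = (a2 << 2) | (a1 << 1) | a0
    return [1 if i == index else 0 for i in range(8)]
```

On an FPGA this whole function fits comfortably in a handful of LUTs, since each of the eight outputs is a simple function of the three address bits and the enable.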
FPGAs are a special form of programmable logic device (PLD) with higher densities compared to custom ICs, capable of implementing functionality in a short period of time using computer-aided design (CAD) software. - by mathewsubin3388@gmail.com
UNIT-I CPLD & FPGA ARCHITECTURE & APPLICATIONS - Dr.YNM
Dr. Y.Narasimha Murthy Ph.D introduces programmable logic devices and their evolution from PLDs to CPLDs and FPGAs. The document discusses the basic architecture and applications of ROM, RAM, PLDs including PLA, PAL and GAL. It provides details on the programmable AND and OR planes in a PLA and compares device types based on their AND and OR array programmability. SPLDs, CPLDs and FPGAs are the main types of PLDs discussed.
This document discusses the programming technologies and interconnect architectures used in different FPGA devices. It covers antifuse-based OTP technologies used in Actel FPGAs, SRAM-based reprogrammable technologies used in Xilinx FPGAs, and EPROM/EEPROM technologies used in Altera CPLDs. It also describes the segmented channel routing interconnect architecture used in Actel FPGAs and the LCA architecture used in Xilinx FPGAs.
The document discusses various programmable chip and board implementation technologies including PLDs, CPLDs, and FPGAs. It describes the basic components and features of these technologies. PLDs contain programmable logic arrays that can implement sum-of-products logic functions. CPLDs are an evolution of PLDs, containing multiple PALs and an interconnect matrix. FPGAs provide even higher densities by placing programmable logic elements in an array with a programmable routing fabric between them. The document discusses the logic elements, interconnect, memory blocks, I/O and other features of example FPGA families from Altera and Actel.
This document discusses the architectures and applications of CPLDs and FPGAs. It begins by classifying programmable logic devices and describing simple programmable logic devices like PLDs, PALs, and GALs. It then discusses more complex programmable logic devices like CPLDs, describing their architecture which consists of logic blocks, I/O blocks, and a global interconnect. Finally, it covers field programmable gate arrays including their architecture of configurable logic blocks, I/O blocks, and a programmable interconnect, as well as describing Xilinx's logic cell array architecture for FPGAs.
The document discusses different types of programmable logic devices including CPLDs and FPGAs. It provides details on the architecture and workings of the Xilinx XC9500 CPLD family and Xilinx XC4000 FPGA family. The XC9500 CPLD uses function blocks containing macrocells with programmable AND and OR arrays. The XC4000 FPGA uses configurable logic blocks containing function generators, flip-flops and programmable multiplexers to implement logic functions. Both devices use programmable interconnects to route signals between blocks.
Voice Activity Detector of Wake-Up-Word Speech Recognition System Design on FPGA - IJERA Editor
A typical speech recognition system is push-to-talk operated and requires manual activation. For users in hands-busy applications, however, movement may be restricted or impossible. One alternative is a speech-only interface. The proposed method, called Wake-Up-Word Speech Recognition (WUW-SR), uses such an interface: a WUW-SR system allows the user to activate systems (cell phone, computer, etc.) with speech commands alone instead of manual activation. The trend in WUW-SR hardware design is toward implementing a complete system on a single chip intended for various applications. This paper presents an experimental FPGA design and implementation of a novel real-time feature extraction processor that includes a Voice Activity Detector (VAD) and feature extraction (MFCC, LPC, and ENH_MFCC). In the WUW-SR system, the recognizer front-end with VAD is located at the terminal, which is typically connected over a data network (e.g., to a server) for remote back-end recognition. The VAD segments the signal into speech-like and non-speech-like segments; for any given frame it reports one of two possible states, VAD_ON or VAD_OFF. The back-end is then responsible for scoring the features segmented during the VAD_ON stage. The most important characteristic of the presented design is that it should guarantee virtually 100% correct rejection of non-WUW (out-of-vocabulary, OOV) words while maintaining a correct acceptance rate of 99.9% or higher for in-vocabulary (INV) words. This requirement sets WUW-SR apart from other speech recognition tasks, because no existing system can guarantee 100% reliability by any measure.
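The frame-wise VAD_ON/VAD_OFF decision can be sketched with a simple short-time-energy detector. The real front-end described above uses MFCC/LPC features; the energy threshold and sample values here are illustrative assumptions.

```python
# Sketch: frame-wise voice activity detection by short-time energy.

def frame_energy(frame):
    return sum(s * s for s in frame) / len(frame)

def vad(frames, threshold):
    """Report one of two states per frame, as the front-end above does."""
    return ["VAD_ON" if frame_energy(f) > threshold else "VAD_OFF"
            for f in frames]

silence = [0.01, -0.02, 0.01, 0.0]
speech = [0.6, -0.5, 0.7, -0.4]
states = vad([silence, speech, silence], threshold=0.05)
```

Only the frames labeled VAD_ON would be forwarded to the back-end for scoring, which is what keeps the recognizer from wasting effort on non-speech segments.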
An FPGA (Field-Programmable Gate Array) is an integrated circuit device that can be reconfigured to implement different logic functions. It contains a matrix of configurable logic blocks and programmable interconnects. Unlike processors, FPGAs use dedicated hardware rather than an operating system, allowing truly parallel processing. FPGAs can be reconfigured after deployment to change their internal circuitry. A single FPGA can replace thousands of discrete components. FPGAs are classified based on their internal structure and the technology used for user programmable switches. The FPGA design flow involves system design, design description, synthesis, implementation, verification and testing.
The document discusses the evolution of programmable logic from TTL to FPGAs. It describes how early programmable logic arrays (PLAs) combined logic gates and registers into single devices with programmable connections. Modern FPGAs arrange logic blocks in an array with programmable interconnect to implement complex digital designs with high density, performance and reprogrammability. The document outlines FPGA architecture including look-up tables, routing resources and specialized blocks to efficiently implement applications like high-speed data processing.
FPGA BASED IMPLEMENTATION OF DELAY OPTIMISED DOUBLE PRECISION IEEE FLOATING-P... - Somsubhra Ghosh
This document summarizes an algorithm for implementing a double precision floating point adder according to the IEEE 754 standard. The algorithm uses several optimization techniques to reduce latency, including separating the computation into two parallel paths based on the operands and operation, reducing the number of IEEE rounding modes, using a sign-magnitude representation for subtraction, and performing prefix addition of the significands. Analysis using the logical effort model estimates the delay of this optimized design is 30.6 FO4 delays, an improvement over prior designs.
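The two-path split works because massive cancellation (which needs a full-width normalization shift) can only occur on an effective subtraction of nearly equal operands. The sketch below decomposes IEEE 754 doubles and routes an operation to a "close" or "far" path; it covers path selection only, not the full adder, and the exponent-difference criterion of 1 is the commonly used rule, assumed here rather than taken from the paper.

```python
import struct

# Sketch: IEEE 754 binary64 field extraction and two-path classification.

def fields(x):
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF          # biased exponent, 11 bits
    mantissa = bits & ((1 << 52) - 1)        # 52-bit fraction
    return sign, exponent, mantissa

def choose_path(a, b, subtract):
    sa, ea, _ = fields(a)
    sb, eb, _ = fields(b)
    # The operand signs decide whether the hardware really subtracts.
    effective_sub = subtract ^ (sa != sb)
    return "close" if effective_sub and abs(ea - eb) <= 1 else "far"
```

Since the paths are mutually exclusive, each can be optimized separately (no full normalizer on the far path, no full alignment shifter on the close path), which is where much of the latency reduction comes from.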
FPGA Optimized Fuzzy Controller Design for Magnetic Ball Levitation using Gen... - IDES Editor
This paper presents an optimal approach to designing a fuzzy controller for a nonlinear system using FPGA technology with a Genetic Algorithm (GA) as the optimization tool. A magnetic levitation system is considered as a case study, and the fuzzy controller is designed to keep a magnetic object suspended in the air, counteracting the object's weight. The fuzzy controller is implemented on an FPGA chip. A Genetic Algorithm is used to optimize the membership functions, output gain, and input gains of the fuzzy controller. The design uses a high-level hardware description language (HDL), with the Xfuzzy tools used to translate the fuzzy logic controller into HDL code. This paper advocates a novel approach to implementing the fuzzy logic controller for a magnetic ball levitation system using an FPGA with GA.
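The "membership" parameters a GA would tune can be made concrete with a triangular membership function: its corner points are exactly the kind of real-valued genes the optimizer evolves. The fuzzy set name and corner values below are illustrative assumptions, not taken from the paper.

```python
# Sketch: a triangular membership function with GA-tunable corners (a, b, c).

def triangular(a, b, c):
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)     # rising edge
        return (c - x) / (c - b)         # falling edge
    return mu

# Membership of the levitation error in the fuzzy set "near zero":
near_zero = triangular(-1.0, 0.0, 1.0)
```

A GA chromosome for the controller would concatenate such (a, b, c) triples for every fuzzy set, plus the input and output gains, with the closed-loop tracking error as the fitness to minimize.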
Fault simulation – application and methods - Subash John
The document summarizes a seminar presentation on fault simulation techniques. It discusses (1) different fault simulation methods like serial, parallel, and concurrent fault simulation, (2) how concurrent fault simulation works using an example circuit, and (3) applications of fault simulation like measuring fault coverage, generating test vectors, and creating fault dictionaries. The presentation concludes with references for further reading on fault simulation and testing techniques.
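Parallel fault simulation, one of the methods listed, packs the fault-free machine and several faulty machines into separate bit positions of one word, so a single pass of bitwise operations simulates them all. The toy circuit and fault sites below are illustrative assumptions.

```python
# Sketch: parallel fault simulation with one machine per bit position.

WIDTH = 4                   # bit 0: fault-free machine; bits 1..3: faulty copies
ALL = (1 << WIDTH) - 1

def replicate(value):
    """Broadcast one logic value to every machine's bit position."""
    return ALL if value else 0

def simulate(a, b):
    """Toy circuit y = (a AND b) OR (NOT a), evaluated for all machines at
    once. Stuck-at faults are injected on the AND output: machine 1
    stuck-at-0, machine 2 stuck-at-1 (machine 3 is a fault-free control)."""
    and_out = a & b
    and_out = (and_out & ~0b0010) | 0b0100
    return (and_out | (~a & ALL)) & ALL

# Apply test vector a=1, b=0 and compare each machine against machine 0.
y = simulate(replicate(1), replicate(0))
detected = [(y >> i) & 1 != y & 1 for i in range(1, WIDTH)]
```

This vector detects the stuck-at-1 fault (machine 2 disagrees with the good machine) but not the stuck-at-0 fault, illustrating how fault coverage is accumulated over a set of vectors.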
This document discusses various digital system implementation options including ROM, PROM, EPROM, EEPROM, sequential circuits using ROMs, PLDs, ASICs and FPGAs. It describes the basic structure and characteristics of ROMs, PROMs, PLDs like PLA, PAL, CPLDs and different types of ASICs including full-custom, standard-cell based, gate-array based and structured gate arrays. It also provides examples of implementing functions using PAL and discusses the core structure of FPGAs.
An integrated approach for designing and testing specific processors - VLSICS Design
This paper proposes a validation method for the design of a CPU in which, in parallel with the development of the CPU, a testbench is manually described that performs automated testing of the instructions being implemented. The testbench consists of the original program memory of the CPU and is also coupled to the internal registers, ports, stack, and other components of the project. The program memory sends the instructions requested by the processor and checks the results of those instructions, advancing or halting the tests accordingly. The proposed method produced a CPU compatible with the instruction set and registers of the PIC16F628 microcontroller. To demonstrate the usability and success of the debugging method employed, this work shows that the developed CPU is capable of running real programs generated by compilers existing on the market. The proposed CPU was mapped onto an FPGA and, using Cadence tools, synthesized in silicon.
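The self-checking program-memory idea can be sketched as follows: the testbench hands the CPU model one instruction at a time and checks the architectural state after each step, halting on the first mismatch. The tiny two-instruction CPU, its ISA subset, and the checks are all illustrative, not the PIC16F628-compatible design from the paper (though MOVLW/ADDLW mirror PIC mnemonics).

```python
# Sketch: a program memory that doubles as a self-checking testbench.

class TinyCPU:
    def __init__(self):
        self.W = 0                      # accumulator (W register)

    def execute(self, op, arg):
        if op == "MOVLW":               # load literal into W
            self.W = arg & 0xFF
        elif op == "ADDLW":             # add literal to W, 8-bit wraparound
            self.W = (self.W + arg) & 0xFF
        else:
            raise ValueError(op)

def run_testbench(cpu, program):
    """program: list of (op, arg, expected_W); stop at the first mismatch."""
    for step, (op, arg, expected) in enumerate(program):
        cpu.execute(op, arg)
        if cpu.W != expected:
            return f"FAIL at step {step}"
    return "PASS"

verdict = run_testbench(TinyCPU(), [
    ("MOVLW", 0x10, 0x10),
    ("ADDLW", 0xF5, 0x05),              # 0x10 + 0xF5 = 0x105, wraps to 0x05
])
```

Because the expected value travels with each instruction, the testbench grows alongside the CPU: every newly described instruction immediately gains an automated regression check.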
New optimization scheme for cooperative spectrum sensing taking different snr... - eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
The document discusses the architecture of CPLDs and FPGAs. It begins by explaining the problems with using basic logic gates on PCBs and introduces programmable logic devices as a solution. It then describes different types of PLDs including PLA, PAL, GAL, CPLD and FPGA. CPLDs have a complexity between FPGAs and basic PLDs, containing non-volatile memory and supporting larger logic than PLDs. FPGAs contain logic cells, interconnects, and can implement thousands of gates. The document provides examples of implementing logic with different PLDs and describes the architecture and programming of CPLDs and FPGAs.
The document provides an overview of FPGA routing, which is an important step in the CAD process that connects logic blocks placed on the FPGA. It discusses the routing resources in Xilinx FPGAs including connection boxes, switch boxes, and wire segments. It also describes the FPGA routing model commonly used in academia, which simplifies the island-style architecture of commercial FPGAs. Efficient routing aims to minimize wiring area and critical path lengths to improve circuit performance.
This document summarizes a seminar on FPGA, CPLD, and VHDL programming basics. The seminar schedule includes sessions on FPGA technologies compared to previous programmable devices like CPLD, Microsemi FPGA devices and VHDL introduction. There is also an application example of using an FPGA for an Ethernet bus interface board and a discussion of current trends and technologies.
This document discusses the architecture of Xilinx Cool Runner CPLDs. It provides an overview of Xilinx CPLD technologies including Cool Runner XPLA3 and Cool Runner-II. For the Cool Runner XPLA3, it describes the features and specifications, and details the architecture including the high-level block diagram, function block, macrocell, and I/O cell. For the Cool Runner-II, it lists the key features and specifications. The document is intended to explain the architectures of these Xilinx CPLD families.
This document provides an overview of FPGAs and VHDL. It describes what an FPGA is and its advantages over an integrated circuit. It explains the basic architecture of an FPGA including configurable logic blocks, slices, look-up tables, multiplexers, carry chains and flip-flops. It also discusses VHDL in terms of abstraction levels, behavioral and register transfer level descriptions. Examples of combinational and sequential logic blocks in VHDL are provided including a 3-to-8 decoder and an ALU.
FPGA are a special form of Programmable logic devices(PLDs) with higher densities as compared to custom ICs and capable of implementing functionality in a short period of time using computer aided design (CAD) software....by mathewsubin3388@gmail.com
UNIT I- CPLD & FPGA ARCHITECTURE & APPLICATIONSDr.YNM
Dr. Y.Narasimha Murthy Ph.D introduces programmable logic devices and their evolution from PLDs to CPLDs and FPGAs. The document discusses the basic architecture and applications of ROM, RAM, PLDs including PLA, PAL and GAL. It provides details on the programmable AND and OR planes in a PLA and compares device types based on their AND and OR array programmability. SPLDs, CPLDs and FPGAs are the main types of PLDs discussed.
This document discusses the programming technologies and interconnect architectures used in different FPGA devices. It covers antifuse-based OTP technologies used in Actel FPGAs, SRAM-based reprogrammable technologies used in Xilinx FPGAs, and EPROM/EEPROM technologies used in Altera CPLDs. It also describes the segmented channel routing interconnect architecture used in Actel FPGAs and the LCA architecture used in Xilinx FPGAs.
The document discusses various programmable chip and board implementation technologies including PLDs, CPLDs, and FPGAs. It describes the basic components and features of these technologies. PLDs contain programmable logic arrays that can implement sum-of-products logic functions. CPLDs are an evolution of PLDs, containing multiple PALs and an interconnect matrix. FPGAs provide even higher densities by placing programmable logic elements in an array with a programmable routing fabric between them. The document discusses the logic elements, interconnect, memory blocks, I/O and other features of example FPGA families from Altera and Actel.
This document discusses the architectures and applications of CPLDs and FPGAs. It begins by classifying programmable logic devices and describing simple programmable logic devices like PLDs, PALs, and GALs. It then discusses more complex programmable logic devices like CPLDs, describing their architecture which consists of logic blocks, I/O blocks, and a global interconnect. Finally, it covers field programmable gate arrays including their architecture of configurable logic blocks, I/O blocks, and a programmable interconnect, as well as describing Xilinx's logic cell array architecture for FPGAs.
The document discusses different types of programmable logic devices including CPLDs and FPGAs. It provides details on the architecture and workings of the Xilinx XC9500 CPLD family and Xilinx XC4000 FPGA family. The XC9500 CPLD uses function blocks containing macrocells with programmable AND and OR arrays. The XC4000 FPGA uses configurable logic blocks containing function generators, flip-flops and programmable multiplexers to implement logic functions. Both devices use programmable interconnects to route signals between blocks.
Voice Activity Detector of Wake-Up-Word Speech Recognition System Design on FPGAIJERA Editor
A typical speech recognition system is push-to-talk operated that requires activation. However for those who use hands-busy applications, movement may by restricted or impossible. One alternative is to use Speech-Only Interface. The proposed method that is called Wake-Up-Word Speech Recognition (WUW-SR) that utilizes speech only interface. A WUW-SR system would allow the user to activate systems (Cell phone, Computer, etc.) with only speech commands instead of manual activation. The trend in WUW-SR hardware design is towards implementing a complete system on a single chip intended for various applications. This paper presents an experimental FPGA design and implementation of a novel architecture of a real time feature extraction processor that includes: Voice Activity Detector (VAD), and features extraction, MFCC, LPC, and ENH_MFCC. In the WUW-SR system, the recognizer front-end with VAD is located at the terminal which is typically connected over a data network(e.g., server)for remote back-end recognition. VAD is responsible for segmenting the signal into speech-like and non-speech-like segments. For any given frame VAD reports one of two possible states: VAD_ON or VAD_OFF. The back-end is then responsible to score the features that are being segmented during VAD_ON stage. The most important characteristic of the presented design is that it should guarantee virtually 100% correct rejection for non-WUW (out of vocabulary words - OOV) while maintaining correct acceptance rate of 99.9% or higher (in vocabulary words - INV). This requirement sets apart WUW-SR from other speech recognition tasks because no existing system can guarantee 100% reliability by any measure.
An FPGA (Field-Programmable Gate Array) is an integrated circuit device that can be reconfigured to implement different logic functions. It contains a matrix of configurable logic blocks and programmable interconnects. Unlike processors, FPGAs use dedicated hardware rather than an operating system, allowing truly parallel processing. FPGAs can be reconfigured after deployment to change their internal circuitry. A single FPGA can replace thousands of discrete components. FPGAs are classified based on their internal structure and the technology used for user programmable switches. The FPGA design flow involves system design, design description, synthesis, implementation, verification and testing.
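The configurable logic blocks mentioned above are built around look-up tables (LUTs). A k-input LUT is, conceptually, just a programmable 2^k-entry truth table; the class below is an illustrative software model, not a vendor API.

```python
# Minimal model of a k-input LUT, the basic combinational element of an
# FPGA logic block. "Programming" the device amounts to filling in the
# truth-table entries. Purely illustrative.

class LUT:
    def __init__(self, k, truth_table):
        assert len(truth_table) == 2 ** k
        self.k = k
        self.table = truth_table  # list of 0/1 values, one per input row

    def eval(self, *inputs):
        # The input bits select one row of the truth table, MSB first.
        index = 0
        for bit in inputs:
            index = (index << 1) | bit
        return self.table[index]

# Configure a 2-input LUT as XOR: rows 00, 01, 10, 11 -> 0, 1, 1, 0.
xor_lut = LUT(2, [0, 1, 1, 0])
print(xor_lut.eval(1, 0))  # -> 1
```

Reprogramming the same LUT as AND would only change the table contents (`[0, 0, 0, 1]`), not the hardware, which is the essence of FPGA reconfigurability.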
The document discusses the evolution of programmable logic from TTL to FPGAs. It describes how early programmable logic arrays (PLAs) combined logic gates and registers into single devices with programmable connections. Modern FPGAs arrange logic blocks in an array with programmable interconnect to implement complex digital designs with high density, performance and reprogrammability. The document outlines FPGA architecture including look-up tables, routing resources and specialized blocks to efficiently implement applications like high-speed data processing.
FPGA BASED IMPLEMENTATION OF DELAY OPTIMISED DOUBLE PRECISION IEEE FLOATING-P...Somsubhra Ghosh
This document summarizes an algorithm for implementing a double precision floating point adder according to the IEEE 754 standard. The algorithm uses several optimization techniques to reduce latency, including separating the computation into two parallel paths based on the operands and operation, reducing the number of IEEE rounding modes, using a sign-magnitude representation for subtraction, and performing prefix addition of the significands. Analysis using the logical effort model estimates the delay of this optimized design is 30.6 FO4 delays, an improvement over prior designs.
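The two-path split mentioned above is driven by inspecting the operands. The sketch below unpacks IEEE 754 doubles into sign, biased exponent, and significand, and picks a "near" or "far" path; the path names and the exponent-difference threshold follow the general two-path adder idea from the literature, not this paper's exact design.

```python
import struct

# Decompose an IEEE 754 binary64 value into its three fields, then use
# the exponent difference to choose the adder path: effective subtraction
# with a small exponent difference ("near" path) can suffer massive
# cancellation and is handled separately from the "far" path.

def decompose(x):
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF          # 11-bit biased exponent
    significand = bits & ((1 << 52) - 1)     # 52-bit fraction field
    return sign, exponent, significand

def choose_path(a, b, subtract):
    ea, eb = decompose(a)[1], decompose(b)[1]
    return 'near' if subtract and abs(ea - eb) <= 1 else 'far'

print(decompose(1.0))               # -> (0, 1023, 0)
print(choose_path(1.5, 1.25, True)) # -> 'near'
print(choose_path(1.5, 1e9, True))  # -> 'far'
```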
FPGA Optimized Fuzzy Controller Design for Magnetic Ball Levitation using Gen...IDES Editor
This paper presents an optimal approach to designing a fuzzy controller for a nonlinear system using FPGA technology with a Genetic Algorithm (GA) as the optimization tool. A magnetic levitation system is considered as a case study: the fuzzy controller is designed to keep a magnetic object suspended in the air by counteracting its weight. The fuzzy controller is implemented on an FPGA chip. The GA optimizes the membership functions, output gain, and input gains of the fuzzy controller. The design uses a hardware description language (HDL), with the Xfuzzy tools translating the fuzzy logic controller into HDL code. The paper thus advocates a novel approach to implementing a fuzzy logic controller for a magnetic ball levitation system using an FPGA with GA optimization.
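The core fuzzy inference the abstract describes can be sketched as follows: fuzzify an error input through triangular membership functions, fire a couple of rules, and defuzzify with a weighted average. The breakpoints, rule outputs, and gains below are placeholders of the kind the GA would tune; this is not the paper's tuned controller.

```python
# Minimal two-rule fuzzy controller sketch with triangular membership
# functions and weighted-average defuzzification. All numeric constants
# are illustrative stand-ins for GA-optimized parameters.

def tri(x, a, b, c):
    # Triangular membership: rises over [a, b], falls over [b, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_control(error):
    mu_neg = tri(error, -2.0, -1.0, 0.0)   # "error is negative"
    mu_pos = tri(error, 0.0, 1.0, 2.0)     # "error is positive"
    # Rules: negative error -> output -1.0, positive error -> output +1.0.
    num = mu_neg * -1.0 + mu_pos * 1.0
    den = mu_neg + mu_pos
    return num / den if den else 0.0

print(fuzzy_control(0.5))   # -> 1.0
print(fuzzy_control(-0.5))  # -> -1.0
print(fuzzy_control(0.0))   # -> 0.0
```

In the FPGA version, Xfuzzy would translate an equivalent rule base into HDL rather than running Python at run time.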
Fault simulation – application and methodsSubash John
The document summarizes a seminar presentation on fault simulation techniques. It discusses (1) different fault simulation methods like serial, parallel, and concurrent fault simulation, (2) how concurrent fault simulation works using an example circuit, and (3) applications of fault simulation like measuring fault coverage, generating test vectors, and creating fault dictionaries. The presentation concludes with references for further reading on fault simulation and testing techniques.
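The serial method listed above is the simplest of the three: inject one fault at a time and re-simulate the whole circuit against the fault-free version. A toy sketch on a two-gate netlist (the circuit, net names, and fault list are illustrative):

```python
# Serial fault simulation on a toy netlist: y = (a AND b) OR c.
# Each stuck-at fault is injected one at a time; a fault is "detected"
# by a test vector if the faulty output differs from the good output.

def circuit(a, b, c, fault=None):
    n1 = a & b
    if fault == ('n1', 0): n1 = 0   # stuck-at-0 on internal net n1
    if fault == ('n1', 1): n1 = 1   # stuck-at-1 on internal net n1
    y = n1 | c
    if fault == ('y', 0): y = 0     # stuck-at-0 on output y
    if fault == ('y', 1): y = 1     # stuck-at-1 on output y
    return y

faults = [('n1', 0), ('n1', 1), ('y', 0), ('y', 1)]
test = (1, 1, 0)  # a single test vector
detected = [f for f in faults if circuit(*test, fault=f) != circuit(*test)]
print(detected)  # -> [('n1', 0), ('y', 0)]
```

Fault coverage for this one vector would be 2/4 = 50%; concurrent fault simulation gets the same answer while simulating the good and faulty circuits together.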
This document discusses various digital system implementation options including ROM, PROM, EPROM, EEPROM, sequential circuits using ROMs, PLDs, ASICs and FPGAs. It describes the basic structure and characteristics of ROMs, PROMs, PLDs like PLA, PAL, CPLDs and different types of ASICs including full-custom, standard-cell based, gate-array based and structured gate arrays. It also provides examples of implementing functions using PAL and discusses the core structure of FPGAs.
An integrated approach for designing and testing specific processorsVLSICS Design
This paper proposes a validation method for the design of a CPU in which, in parallel with the development of the CPU, a testbench is manually described that performs automated testing on the instructions being implemented. The testbench consists of the original program memory of the CPU and is also coupled to the internal registers, ports, stack, and other components of the project. The program memory sends the instructions requested by the processor and checks the results of those instructions, deciding whether to progress with the tests. The proposed method resulted in a CPU compatible with the instruction set and registers of the PIC16F628 microcontroller. To show the usability and success of the debugging method employed, this work demonstrates that the developed CPU is capable of running real programs generated by compilers available on the market. The proposed CPU was mapped to an FPGA and, using Cadence tools, synthesized on silicon.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
New optimization scheme for cooperative spectrum sensing taking different snr...eSAT Publishing House
Localization based range map stitching in wireless sensor network under non l...eSAT Publishing House
Discovering adaptive wireless sensor network using β synchronizereSAT Publishing House
Reuse of inorganic sludge as a coagulant on colloidal suspension removal in r...eSAT Publishing House
Design and verification of pipelined parallel architecture implementation in ...eSAT Publishing House
Hydrostatic transmission as an alternative to conventional gearboxeSAT Publishing House
This document describes a fuzzy logic controller (FLC) model for determining the step size of perturbation in the duty cycle of a photovoltaic system to track the maximum power point (MPP). The FLC uses the slope of the power-voltage curve and the perturbation from the previous step as inputs to determine the step size. The FLC model is simulated in MATLAB/Simulink. Simulation results show that the variable step size MPPT technique using FLC extracts maximum power more efficiently under changing environmental conditions compared to fixed step size techniques.
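The variable-step idea above can be illustrated with a perturb-and-observe loop whose step is scaled by the slope dP/dV: large steps far from the MPP, small steps near it. The real paper derives the step from fuzzy rules; the proportional rule, toy P-V curve, and gain below are simplified stand-ins.

```python
# Variable-step MPPT sketch: the duty-cycle perturbation (here modelled
# directly as a voltage step) is proportional to the measured P-V slope.
# The P-V curve and gain are illustrative, not from the paper.

def pv_power(v):
    # Toy parabolic P-V curve with its maximum power point at v = 17.0.
    return 100.0 - (v - 17.0) ** 2

def mppt(v=10.0, gain=0.4, steps=50):
    dv = 0.5  # initial perturbation
    for _ in range(steps):
        slope = (pv_power(v + dv) - pv_power(v)) / dv
        if abs(slope) > 1e-9:
            dv = gain * slope  # steep slope -> big step; near MPP -> small step
        v += dv
    return v

print(round(mppt(), 3))  # converges near the MPP at v = 17.0
```

A fixed-step tracker would instead oscillate around 17.0 with a constant amplitude set by its step size, which is the inefficiency the FLC-based scheme addresses.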
Contractual implications of cash flow on owner and contractor in villa constr...eSAT Publishing House
This document summarizes a study analyzing the contractual implications of cash flow on owners and contractors for villa construction projects in Oman. Data from 25 villa projects was used to calculate the minimum fund contractors require to continue work in cases of delayed interim payments. The analysis found that for a maximum 4-week payment delay, contractors require on average 8.5% of the contract value in minimum funds. Delayed payments can result in work stoppages, delayed project completion, penalties for the contractor, and even project incompletion. Ensuring adequate minimum funds mitigates these risks for contractors while also helping owners receive projects on schedule.
This document summarizes the performance analysis of VRLA batteries under continuous operation. It discusses testing various capacity VRLA battery banks to analyze electrical and thermal characteristics. The batteries were tested with 80% depth of discharge over 32-43 hours. A battery regenerator was used to reduce sulfation and a battery measurement system monitored individual cell voltages. Testing showed battery capacity and lifespan increased after regeneration, with backups extending 1-2 hours. Larger 550Ah-682Ah batteries showed greater improvements than the 300Ah batteries tested. Regenerating existing batteries can save significant power compared to replacing them.
Semantic approach utilizing data mining and case based reasoning for it suppo...eSAT Publishing House
This document discusses using a semantic approach combining data mining and case-based reasoning to improve IT support services. It proposes a system that uses web crawlers to extract IT problem/solution data from public resources. The data is preprocessed and latent semantic analysis is applied to represent the data semantically in lower dimensions. This allows IT teams to semantically retrieve relevant problem/solution cases from the knowledge base to resolve new issues. The system aims to dynamically increase the IT experience knowledge base and guarantee accurate and efficient case retrieval to enhance IT support service quality and efficiency.
Hybrid fingerprint matching algorithm for high accuracy and reliabilityeSAT Publishing House
Process monitoring, controlling and load management system in an induction motoreSAT Publishing House
Design and analysis of worm pair used in self locking system with development...eSAT Publishing House
This document describes the design and analysis of a worm pair system used for self-locking. A worm pair system combines two threaded rods or worm screws that are meshed together to provide self-locking properties with over 90% efficiency, compared to around 40% for a conventional worm gear system. The design process involves selecting materials and dimensions for the input shaft, output shaft, load drum hub, and worm screws. Calculations are shown for torque capacities, shear stresses, and efficiencies. Experimental results validate that the worm pair system has higher efficiency than a conventional worm gear self-locking system. A manual clutch is also designed to allow quickly releasing the load by disengaging the load drum from the output shaft.
This document summarizes a research paper that proposes a technique called reconfigurable built-in self test (RBIST) to detect and correct faults in field programmable gate arrays (FPGAs). The RBIST approach uses the partial reconfiguration capability of FPGAs to dynamically reconfigure logic blocks and implement a self-test controller for fault detection. The self-test controller coordinates test pattern generation, response verification, and identification of faults. The technique was implemented on a Xilinx FPGA board to demonstrate fault detection and correction without disrupting the normal operation of other logic blocks.
Wavelet Based on the Finding of Hard and Soft Faults in Analog and Digital Si...ijcisjournal
In this paper, methods for detecting both hard and soft faults in analog and digital signal circuits are presented. They are based on the wavelet transform (WT). The detection limit for the reference circuits is set by statistically processing data obtained from a set of fault-free circuits. The wavelet analysis uses two algorithms, both built on wavelet energy calculation: one discrimination factor based on Euclidean distances and the other on Mahalanobis distances. Simulation results from the proposed test methods on well-known analog and digital benchmark circuits are given. The results show the effectiveness of the two test metrics against three other test methods, among them a method based on the RMS value of the measured signal and a method utilizing the harmonic magnitude components of the measured signal waveform.
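The wavelet-energy test described above can be sketched with a one-level Haar transform and a distance check against a fault-free reference (with a single energy feature, the Euclidean distance reduces to an absolute difference). The signals and the limit below are illustrative, not the paper's benchmark data.

```python
# Wavelet-energy fault test sketch: compute the energy of the one-level
# Haar detail coefficients and flag a fault when the distance from a
# fault-free reference energy exceeds a limit derived from fault-free
# circuits. Signals and the limit are illustrative.

def haar_detail_energy(signal):
    # One-level Haar details: scaled pairwise differences of samples.
    details = [(signal[i] - signal[i + 1]) / 2 ** 0.5
               for i in range(0, len(signal) - 1, 2)]
    return sum(d * d for d in details)

def is_faulty(signal, reference_energy, limit):
    distance = abs(haar_detail_energy(signal) - reference_energy)
    return distance > limit

good = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.0]
ref = haar_detail_energy(good)
noisy = [1.0, 2.5, -0.5, 1.8, 0.2, 2.0, 1.0, 1.0]

print(is_faulty(good, ref, limit=0.5))   # -> False
print(is_faulty(noisy, ref, limit=0.5))  # -> True
```

With several energy features the scalar difference would be replaced by a Euclidean or Mahalanobis distance over the feature vector, matching the two discrimination factors the paper compares.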
DESIGN APPROACH FOR FAULT TOLERANCE IN FPGA ARCHITECTUREVLSICS Design
Failures of nanometric technologies owing to defects and shrinking process tolerances give rise to significant challenges for IC testing. In recent years the application space of reconfigurable devices has grown to include many platforms with a strong need for fault tolerance. While these systems frequently contain hardware redundancy to allow continued operation in the presence of operational faults, the need to recover faulty hardware and return it to full functionality quickly and efficiently is great. In addition to providing functional density, FPGAs provide a level of fault tolerance generally not found in mask-programmable devices by including the capability to reconfigure around operational faults in the field. Reliability and process variability are serious issues for future FPGAs. As process technology advances, the feature size decreases, which leads to higher defect densities, and increasingly sophisticated techniques at increased cost are required to avoid defects. If nano-technology fabrication is applied, the yield may go down to zero, as avoiding defects during fabrication will not be a feasible option; hence, future architectures have to be defect tolerant. In regular structures like FPGAs, redundancy is commonly used for fault tolerance. In this work we present a solution in which the configuration bitstream of the FPGA is modified by a hardware controller present on the chip itself. The technique uses redundant devices to replace faulty devices and increases the yield.
Abstract: Design verification is an essential step in the development of any product; it ensures that the product as designed is the same as the product as intended. Software simulation is the common approach for validating hardware designs; unfortunately, it can take hours to execute. Difficulties in validation arise from the complexity of the design and from the lack of on-chip observability. One common solution is to instrument the prototype with trace buffers that record a subset of internal signals into on-chip memory for subsequent analysis. In the proposed system, an example circuit is implemented to perform the tracing operation, and several trace buffers are designed to record different stages of internal signal states. The resulting signal states, such as error outputs, are stored. Low-power methodologies are also applied to reduce power consumption. The errors are stored separately in memory for signal analysis, which can guide changes in the logic wherever needed. This tracing is thus performed to monitor the signal states of an FPGA.
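The trace-buffer mechanism described above can be sketched as a fixed-depth circular buffer that samples internal signals each cycle, plus a separate error store for cycles whose outputs mismatched expectations. Buffer depth, signal names, and the injected mismatch below are illustrative.

```python
from collections import deque

# Sketch of an on-chip trace buffer: a circular buffer of the last
# `depth` cycles of sampled signals, with mismatching cycles copied to
# a separate error store for later analysis.

class TraceBuffer:
    def __init__(self, depth):
        self.samples = deque(maxlen=depth)  # oldest entries are overwritten
        self.errors = []                    # only mismatching cycles kept

    def capture(self, cycle, signals, expected):
        self.samples.append((cycle, dict(signals)))
        if signals != expected:
            self.errors.append((cycle, dict(signals)))

tb = TraceBuffer(depth=4)
for cycle in range(6):
    observed = {'q': cycle % 2}
    expected = {'q': 0 if cycle == 3 else cycle % 2}  # inject a mismatch at cycle 3
    tb.capture(cycle, observed, expected)

print(len(tb.samples))  # -> 4 (only the last `depth` cycles are retained)
print(tb.errors)        # -> [(3, {'q': 1})]
```

On real hardware the comparison against expected values typically happens offline; keeping only error cycles, as here, is one way to stretch limited on-chip memory.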
This document discusses two methods for diagnosing faulty logic blocks in FPGA fabrics: the algebraic logic method and the vector-logical method. The algebraic logic method is more useful for processing sparse fault tables with fewer than 20% 1-values, as it reduces the fault table size and simplifies the computations that generate sum-of-products expressions for diagnosis. The method involves removing rows and columns containing only 0s, then constructing a product-of-sums term for each 1 in the response vector and converting it to sum-of-products form. The vector-logical method is better for dense fault tables, as it can more easily analyze tables where 1s predominate over 0s. Both methods aim to localize faults in the FPGA logic blocks.
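The reduction step of the algebraic logic method described above is easy to sketch: drop every all-zero row (a test that detects nothing) and every all-zero column (a fault no test detects) before building the sum-of-products diagnosis. The table below is illustrative.

```python
# Fault-table reduction sketch: rows are test vectors, columns are
# candidate faults, and entry 1 means "this test detects this fault".
# All-zero rows and columns carry no diagnostic information and are removed.

def reduce_table(table):
    rows = [r for r in table if any(r)]              # drop all-zero rows
    if not rows:
        return []
    keep = [j for j in range(len(rows[0])) if any(r[j] for r in rows)]
    return [[r[j] for j in keep] for r in rows]      # drop all-zero columns

table = [
    [1, 0, 1, 1],   # test detects faults 0, 2, 3
    [0, 0, 0, 0],   # test detects nothing -> row removed
    [0, 0, 0, 1],   # test detects fault 3; fault 1 is never detected -> column removed
]
print(reduce_table(table))  # -> [[1, 1, 1], [0, 0, 1]]
```

The product-of-sums construction over the response vector would then run on this smaller table, which is why the method pays off on sparse tables.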
Microcontroller Based Testing of Digital IP-CoreVLSICS Design
Testing a core-based System-on-Chip [1] is a challenge for test engineers. To test the complete SoC at one time with maximum fault coverage, test engineers prefer to test each IP core separately. At-speed testing using external testers is expensive because of gigahertz processor speeds. The purpose of this paper is to develop a cost-efficient and flexible test methodology for testing digital IP cores [2]. The prominent feature of the approach is the use of a microcontroller to test the IP core. The novel feature is that there is no need for a separate test pattern generator or output response analyzer, as the microcontroller performs both functions. This approach has several advantages, such as at-speed testing, low cost, small area overhead, and greater flexibility, since most of the testing process is based on software.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Vlsi Design of Low Transition Low Power Test Pattern Generator Using Fault Co...iosrjce
Nowadays, highly integrated multi-layer boards with ICs are virtually impossible to access physically for testing. The major problems encountered when testing a circuit include test generation and gate-to-I/O-pin access. In the design of any circuit, low power consumption and low hardware utilization are important design parameters. Reliable testing methods are therefore introduced that reduce both the cost of the required hardware and the power consumed by the device. In this project, a new fault-coverage test pattern generator is built around a linear feedback shift register, called FC-LFSR, which can perform fault analysis and reduces the total power of the circuit. The generator inserts three intermediate patterns between the random patterns, which reduces the transition activity of the primary inputs so that switching activity inside the circuit under test is reduced. The generated test patterns are applied to the c17 benchmark circuit, and the fault coverage of the circuit under test is reported. The design is simulated with Xilinx ISE software using the Verilog hardware description language.
This document describes the design of a low transition, low power test pattern generator using a fault coverage circuit. It begins with background on the need for built-in self-test (BIST) techniques due to challenges with external testing. It then presents a new technique that generates three intermediate patterns between random patterns to reduce switching activity and power. The design is implemented using a linear feedback shift register (LFSR) modified with additional logic. Simulation results on a C17 benchmark circuit show the fault coverage achieved by the low power patterns.
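The building block the FC-LFSR extends is an ordinary LFSR test pattern generator. A minimal software model of a 4-bit Fibonacci LFSR with feedback taps at bits 3 and 2 (polynomial x^4 + x^3 + 1, which is primitive and so cycles through all 15 non-zero states) is shown below; the intermediate-pattern insertion of the paper is not modelled here.

```python
# 4-bit Fibonacci LFSR sketch: the feedback bit is the XOR of the two
# tap bits, shifted into the LSB each cycle. Seed and pattern count are
# illustrative.

def lfsr_patterns(seed=0b1000, n=5):
    state, patterns = seed, []
    for _ in range(n):
        patterns.append(state)
        feedback = ((state >> 3) ^ (state >> 2)) & 1
        state = ((state << 1) | feedback) & 0b1111
    return patterns

print([f"{p:04b}" for p in lfsr_patterns()])
# -> ['1000', '0001', '0010', '0100', '1001']
```

A fault-coverage-oriented generator like FC-LFSR would post-process this stream, inserting low-transition intermediate patterns between consecutive states to cut switching activity in the circuit under test.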
Advancing VLSI Design Reliability: A Comprehensive Examination of Embedded De...IRJET Journal
The document summarizes research on Embedded Deterministic Test (EDT) logic insertion's impact on VLSI designs. Key findings include:
1) EDT insertion enhances test and fault coverage, but also increases the number of test patterns required.
2) There are significant shifts in fault sub-classes like untestable faults and tied cells after EDT insertion, highlighting its nuanced effects.
3) Results provide empirical evidence for designers to optimize testability by strategically integrating EDT logic.
COMPARATIVE ANALYSIS OF SIMULATION TECHNIQUES: SCAN COMPRESSION AND INTERNAL ...IJCI JOURNAL
This document compares two design for testability (DFT) pattern simulation techniques: scan compression and internal scan. Scan compression divides long scan chains into shorter chains using a compressor and decompressor, reducing simulation time significantly with little area overhead. An experiment on benchmark circuits found scan compression detects more faults, achieves higher coverage, and reduces simulation time by up to 99.7% compared to internal scan, though it increases area by 10-20%. In conclusion, scan compression is more time efficient than internal scan for testing large designs.
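The simulation-time saving above follows from simple arithmetic: scan load/unload time per pattern is proportional to the longest chain, so splitting one chain into n shorter internal chains behind a decompressor shortens every shift by roughly a factor of n. A back-of-the-envelope model with illustrative numbers:

```python
# Toy model of scan shift cost: total shift cycles = longest chain
# length x number of patterns. Compression splits one long chain into
# many short internal chains. Flop and pattern counts are illustrative.

def shift_cycles(flops, chains, patterns):
    chain_len = -(-flops // chains)  # ceiling division: longest chain
    return chain_len * patterns

internal = shift_cycles(flops=100_000, chains=1, patterns=1_000)
compressed = shift_cycles(flops=100_000, chains=100, patterns=1_000)
print(internal, compressed, f"{1 - compressed / internal:.1%} fewer cycles")
# -> 100000000 1000000 99.0% fewer cycles
```

This model ignores the extra patterns compression sometimes needs and the decompressor/compressor area, which is why measured savings (up to 99.7% in the cited experiment) come with a 10-20% area cost.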
Implementation of a bit error rate tester of a wireless communication system ...eSAT Publishing House
A Unique Test Bench for Various System-on-a-Chip IJECEIAES
This paper discusses a standard flow for how an automated, constrained-random test bench environment can verify an SoC efficiently for functionality and coverage. Today, in the era of multimillion-gate ASICs, reusable intellectual property (IP), and system-on-a-chip (SoC) designs, verification consumes about 70% of the design effort. Automation means a machine completes a task autonomously, quicker and with predictable results; it requires standard processes with well-defined inputs and outputs. Using this methodology it is possible to provide a general-purpose automation solution for verification with today's technology, and tools automating various portions of the verification process are being introduced. Here a communication-based SoC is considered, and the paper discusses the methodology used to verify such an SoC-based environment. The Cadence Efficient Verification Methodology libraries are explored as a solution to this problem, which can be taken as a state-of-the-art approach to verifying SoC environments. The goal of this paper is to present a unique testbench for different SoCs using efficient verification constructs implemented in SystemVerilog.
COVERAGE DRIVEN FUNCTIONAL TESTING ARCHITECTURE FOR PROTOTYPING SYSTEM USING ...VLSICS Design
Time and effort for functional testing of digital logic form a big chunk of the overall project cycle in the VLSI industry. Progress in functional testing is measured by functional coverage: the test plan defines what needs to be covered, and the test results indicate the quality of the stimulus. Claiming closure of functional testing requires that functional coverage hit 100% of the original test plan. Depending on the complexity of the design and the availability of resources and budget, various methods are used for functional testing. Software simulation using logic simulators available from Electronic Design Automation (EDA) companies is the primary method. The next level is pre-silicon verification using Field Programmable Gate Array (FPGA) prototypes and/or emulation platforms for stress testing the Design Under Test (DUT). With all these efforts, the purpose is to gain confidence in the maturity of the DUT and ensure first-time silicon success that meets the time-to-market needs of the industry. For any test environment, the bottleneck in achieving verification closure is controllability and observability, that is, the quality of the stimulus to unearth issues at an early stage, together with coverage calculation. Software simulation, FPGA prototyping, and emulation each have their own limitations, be it test time, ease of use, or the cost of software, tools, and hardware platform. Compared to software simulation, FPGA prototyping and emulation pose greater challenges in quality stimulus generation and coverage calculation. Many researchers have identified the problems of bug detection and localization, but very few have touched on quality stimulus generation, which leads to better functional coverage and thereby uncovers hidden bugs in an FPGA prototype verification setup. This paper presents a novel approach to address these issues by embedding a synthesizable active agent and coverage collector into the FPGA prototype. The proposed architecture has been used for functional and stress testing of a Universal Serial Bus (USB) Link Training and Status State Machine (LTSSM) logic module as the DUT in an FPGA prototype. The proposed solution is fully synthesizable and hence can be used both in software simulation and in the prototype system. Its biggest advantage is the plug-and-play nature of the active-agent component, which allows its reuse in any USB 3.0 LTSSM digital core.
This document describes a proposed architecture for functional testing of a USB Link Training and Status State Machine (LTSSM) logic module using a synthesizable active agent embedded in an FPGA prototyping system. The active agent controls stimulus generation and injects errors to target time-sensitive link training and low power states. It also includes a coverage collector to provide observability for closed-loop functional testing. The active agent is fully synthesizable, making it reusable for both software simulation and FPGA prototyping. Experimental results showed the architecture was able to better generate stimuli and improve functional coverage for stress testing the LTSSM module.
Design Verification and Test Vector Minimization Using Heuristic Method of a ...ijcisjournal
The reduction in feature size increases the probability that a manufacturing defect in the IC will result in a faulty chip. A very small defect can easily result in a faulty transistor or interconnecting wire when the feature size is small. Testing is required to guarantee fault-free products, regardless of whether the product is a VLSI device or an electronic system, and simulation is used to verify the correctness of the design. To test an n-input circuit exhaustively, 2^n test vectors are required; as the number of inputs grows, the exponential growth in the required number of vectors makes testing the circuit very time-consuming. It is therefore necessary to find testing methods that reduce the number of test vectors. Here, a heuristic approach is designed to test the ripple carry adder. ModelSim and Xilinx tools are used to verify and synthesize the design.
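The exponential blow-up mentioned above is easy to quantify. The sketch below is simple arithmetic, not part of the paper: for a ripple-carry adder with two w-bit operands and a carry-in, the circuit has n = 2w + 1 inputs, so exhaustive testing needs 2^(2w+1) vectors, which is why heuristic vector reduction becomes necessary.

```python
# Sketch of why exhaustive testing explodes: an n-input circuit needs
# 2**n vectors. For a ripple-carry adder with two w-bit operands plus a
# carry-in, n = 2*w + 1. Pure arithmetic, for illustration only.

def exhaustive_vectors(width):
    """Number of vectors to exhaustively test a w-bit ripple-carry adder."""
    n_inputs = 2 * width + 1       # operands a, b plus carry-in
    return 2 ** n_inputs

for w in (4, 8, 16, 32):
    print(w, exhaustive_vectors(w))
```

Already at 16-bit operands the exhaustive set exceeds eight billion vectors, so a reduced heuristic test set is the only practical option.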
Optimal and Power Aware BIST for Delay Testing of System-On-ChipIDES Editor
Test engineering for fault-tolerant VLSI systems is encumbered with optimization requisites for hardware overhead, test power and test time. The high level of quality of these complex high-speed VLSI circuits can be assured only through delay testing, which involves checking for accurate temporal behavior. In the present paper, a data-path based built-in test pattern generator (TPG) that generates iterative pseudo-exhaustive two-patterns (IPET) for parallel delay testing of modules with different input cone capacities is implemented. Further, in the present study a CMOS implementation of a low power architecture (LPA) for scan-based built-in self test (BIST) for delay testing and combinational testing is carried out. This reduces test power dissipation in the circuit under test (CUT). Experimental results and comparisons with pre-existing methods prove the reduction in hardware overhead and test time.
Fault Modeling of Combinational and Sequential Circuits at Register Transfer ...VLSICS Design
As the complexity of Very Large Scale Integration (VLSI) designs grows, testing becomes tedious and tougher. At present, fault models are used to test digital circuits at the gate level or below. Using fault models at these lower levels makes testing cumbersome and leads to delays in the design cycle. In addition, developments in deep submicron technology open the door to new defects. Efficient fault detection and location methods must be developed in order to reduce manufacturing costs and time to market. There is thus a need for a new approach of testing circuits at higher levels to speed up the design cycle. This paper proposes Register Transfer Level (RTL) fault modeling for digital circuits and computes the fault coverage. The results obtained through this work establish that the fault coverage with the RTL fault model is comparable to the gate-level fault coverage.
This document summarizes research on modeling faults at the register transfer level (RTL) for digital circuit testing. It proposes a new RTL fault model that models stuck-at faults by inserting buffers for each bit in the variables of the RTL code. Fault simulation is performed on faulty circuits generated from the RTL code to determine fault coverage. Results on combinational and sequential circuits show the RTL fault coverage obtained matches closely with gate-level fault coverage obtained through logic synthesis and gate-level fault simulation. The proposed RTL fault model provides a way to estimate fault coverage earlier in the design cycle compared to traditional gate-level fault simulation.
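The bit-level stuck-at idea summarized above can be illustrated with a toy fault simulator. This is a sketch of the general technique, not the paper's tool: the 2-bit adder DUT, the test set, and the fault list are invented for the example. Each fault forces one bit of an RTL variable to 0 or 1, the faulty machine is simulated against the test set, and fault coverage is the fraction of faults whose output diverges from the fault-free output.

```python
# Minimal sketch of RTL stuck-at fault simulation (illustrative DUT).
def adder(a, b, fault=None):
    """2-bit adder DUT; 'fault' forces one input bit: (var, bit, value)."""
    if fault is not None:
        var, bit, value = fault
        if var == "a":
            a = (a & ~(1 << bit)) | (value << bit)   # stuck-at on a[bit]
        else:
            b = (b & ~(1 << bit)) | (value << bit)   # stuck-at on b[bit]
    return (a + b) & 0x7                             # 3-bit result incl. carry

tests = [(0, 0), (3, 3), (1, 2), (2, 1)]             # illustrative test set
faults = [(v, bit, val) for v in "ab" for bit in (0, 1) for val in (0, 1)]

# A fault is detected if any test shows a faulty output != fault-free output.
detected = sum(
    1 for f in faults
    if any(adder(a, b) != adder(a, b, f) for a, b in tests)
)
coverage = detected / len(faults)
print(f"fault coverage: {coverage:.0%}")
```

Gate-level fault simulation follows the same scheme with a much larger fault list; the paper's observation is that coverage numbers computed this way at RTL track the gate-level figures closely.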
Similar to An application specific reconfigurable architecture (20)
Hudhud cyclone caused extensive damage in Visakhapatnam, India in October 2014, especially to tree cover. This will likely impact the local environment in several ways: increased air pollution as trees absorb less; higher temperatures without tree canopy; increased erosion and landslides. It also created large amounts of waste from destroyed trees. Proper management of solid waste is needed to prevent disease spread. Suggested measures include restoring damaged plants, building fountains to reduce heat, mandating light-colored buildings, improving waste management, and educating public on health risks. Overall, changes are needed to water, land, and waste practices to rebuild the environment after the cyclone removed green cover.
Impact of flood disaster in a drought prone area – case study of alampur vill...eSAT Publishing House
1) In September-October 2009, unprecedented heavy rainfall and dam releases caused widespread flooding in Alampur village in Mahabub Nagar district, a historically drought-prone area.
2) The flood damaged or destroyed homes, buildings, infrastructure, crops, and documents. It displaced many residents and cut off the village.
3) The socioeconomic conditions and mud-based construction of homes in the village exacerbated the flood's impacts, making damage more severe and recovery more difficult.
The document summarizes the Hudhud cyclone that struck Visakhapatnam, India in October 2014. It describes the cyclone's formation, rapid intensification to winds of 175 km/h, and landfall near Visakhapatnam. The cyclone caused extensive damage estimated at over $1 billion and at least 109 deaths in India and Nepal. Infrastructure like buildings, bridges, and power lines were destroyed. Crops and fishing boats were also damaged. The document then discusses coping strategies and improvements needed to disaster management plans to better prepare for future cyclones.
Groundwater investigation using geophysical methods a case study of pydibhim...eSAT Publishing House
This document summarizes the results of a geophysical investigation using vertical electrical sounding (VES) methods at 13 locations around an industrial area in India. The VES data was interpreted to generate geo-electric sections and pseudo-sections showing subsurface resistivity variations. Three main layers were typically identified - a high resistivity topsoil, a weathered middle layer, and a basement rock. Pseudo-sections revealed relatively more weathered areas in the northwest and southwest. Resistivity sections helped identify zones of possible high groundwater potential based on low resistivity anomalies sandwiched between more resistive layers. The study concluded the electrical resistivity method was useful for understanding subsurface geology and identifying areas prospective for groundwater exploration.
Flood related disasters concerned to urban flooding in bangalore, indiaeSAT Publishing House
1. The document discusses urban flooding in Bangalore, India. It describes how factors like heavy rainfall, population growth, and improper land use have contributed to increased flooding in the city.
2. Flooding events in 2013 are analyzed in detail. A November rainfall caused runoff six times higher than the drainage capacity, inundating low-lying residential areas.
3. Impacts of urban flooding include disrupted daily life, damaged infrastructure, and decreased economic activity in affected areas. The document calls for improved flood management strategies to better mitigate urban flooding risks in Bangalore.
Enhancing post disaster recovery by optimal infrastructure capacity buildingeSAT Publishing House
This document discusses enhancing post-disaster recovery through optimal infrastructure capacity building. It presents a model to minimize the cost of meeting demand using auxiliary capacities when disaster damages infrastructure. The model uses genetic algorithms to select optimal capacity combinations. The document reviews how infrastructure provides vital services supporting recovery activities and discusses classifying infrastructure into six types. When disaster reduces infrastructure services, a gap forms between community demands and available support, hindering recovery. The proposed research aims to identify this gap and optimize capacity selection to fill it cost-effectively.
Effect of lintel and lintel band on the global performance of reinforced conc...eSAT Publishing House
This document analyzes the effect of lintels and lintel bands on the seismic performance of reinforced concrete masonry infilled frames through non-linear static pushover analysis. Four frame models are considered: a frame with a full masonry infill wall; a frame with a central opening but no lintel/band; a frame with a lintel above the opening; and a frame with a lintel band above the opening. The results show that the full infill wall model has 27% higher stiffness and 32% higher strength than the model with just an opening. Models with lintels or lintel bands have slightly higher strength and stiffness than the model with just an opening. The document concludes lintels and lintel
Wind damage to trees in the gitam university campus at visakhapatnam by cyclo...eSAT Publishing House
1) A cyclone with wind speeds of 175-200 kph caused massive damage to the green cover of Gitam University campus in Visakhapatnam, India. Thousands of trees were uprooted or damaged.
2) A study assessed different types of damage to trees from the cyclone, including defoliation, salt spray damage, damage to stems/branches, and uprooting. Certain tree species were more vulnerable than others.
3) The results of the study can help in selecting more wind-resistant tree species for future planting and reducing damage from future storms.
Wind damage to buildings, infrastrucuture and landscape elements along the be...eSAT Publishing House
1) A visual study was conducted to assess wind damage from Cyclone Hudhud along the 27km Visakha-Bheemli Beach road in Visakhapatnam, India.
2) Residential and commercial buildings suffered extensive roof damage, while glass facades on hotels and restaurants were shattered. Infrastructure like electricity poles and bus shelters were destroyed.
3) Landscape elements faced damage, including collapsed trees that damaged pavements, and debris in parks. The cyclone wiped out over half the city's green cover and caused beach erosion around protected areas.
1) The document reviews factors that influence the shear strength of reinforced concrete deep beams, including compressive strength of concrete, percentage of tension reinforcement, vertical and horizontal web reinforcement, aggregate interlock, shear span-to-depth ratio, loading distribution, side cover, and beam depth.
2) It finds that compressive strength of concrete, tension reinforcement percentage, and web reinforcement all increase shear strength, while shear strength decreases as shear span-to-depth ratio increases.
3) The distribution and amount of vertical and horizontal web reinforcement also affects shear strength, but closely spaced stirrups do not necessarily enhance capacity or performance.
Role of voluntary teams of professional engineers in dissater management – ex...eSAT Publishing House
1) A team of 17 professional engineers from various disciplines called the "Griha Seva" team volunteered after the 2001 Gujarat earthquake to provide technical assistance.
2) The team conducted site visits, assessments, testing and recommended retrofitting strategies for damaged structures in Bhuj and Ahmedabad. They were able to fully assess and retrofit 20 buildings in Ahmedabad.
3) Factors observed that exacerbated the earthquake's impacts included unplanned construction, non-engineered buildings, improper prior retrofitting, and defective materials and workmanship. The professional engineers' technical expertise was crucial for effective post-disaster management.
This document discusses risk analysis and environmental hazard management. It begins by defining risk, hazard, and toxicity. It then outlines the steps involved in hazard identification, including HAZID, HAZOP, and HAZAN. The document presents a case study of a hypothetical gas collecting station, identifying potential accidents and hazards. It discusses quantitative and qualitative approaches to risk analysis, including calculating a fire and explosion index. The document concludes by discussing hazard management strategies like preventative measures, control measures, fire protection, relief operations, and the importance of training personnel on safety.
Review study on performance of seismically tested repaired shear wallseSAT Publishing House
This document summarizes research on the performance of reinforced concrete shear walls that have been repaired after damage. It begins with an introduction to shear walls and their failure modes. The literature review then discusses the behavior of original shear walls as well as different repair techniques tested by other researchers, including conventional repair with new concrete, jacketing with steel plates or concrete, and use of fiber reinforced polymers. The document focuses on evaluating the strength retention of shear walls after being repaired with various methods.
Monitoring and assessment of air quality with reference to dust particles (pm...eSAT Publishing House
This document summarizes a study on monitoring and assessing air quality with respect to dust particles (PM10 and PM2.5) in the urban environment of Visakhapatnam, India. Sampling was conducted in residential, commercial, and industrial areas from October 2013 to August 2014. The average PM2.5 and PM10 concentrations were within limits in residential areas but moderate to high in commercial and industrial areas. Exceedance factor levels indicated moderate pollution for residential areas and moderate to high pollution for commercial and industrial areas. There is a need for management measures like improved public transport and green spaces to combat particulate air pollution in the study areas.
Low cost wireless sensor networks and smartphone applications for disaster ma...eSAT Publishing House
This document describes a low-cost wireless sensor network and smartphone application system for disaster management. The system uses an Arduino-based wireless sensor network comprising nodes with various sensors to monitor the environment. The sensor data is transmitted to a central gateway and then to the cloud for analysis. A smartphone app connected to the cloud can detect disasters from the sensor data and send real-time alerts to users to help with early evacuation. The system aims to provide low-cost localized disaster detection and warnings to improve safety.
Coastal zones – seismic vulnerability an analysis from east coast of indiaeSAT Publishing House
This document summarizes an analysis of seismic vulnerability along the east coast of India. It discusses the geotectonic setting of the region as a passive continental margin and reports some moderate seismic activity from offshore in recent decades. While seismic stability cannot be assumed given events like the 2004 tsunami, no major earthquakes have been recorded along this coast historically. The document calls for further study of active faults, neotectonics, and implementation of improved seismic building codes to mitigate vulnerability.
Can fracture mechanics predict damage due disaster of structureseSAT Publishing House
This document discusses how fracture mechanics can be used to better predict damage and failure of structures. It notes that current design codes are based on small-scale laboratory tests and do not account for size effects, which can lead to more brittle failures in larger structures. The document outlines how fracture mechanics considers factors like size effect, ductility, and minimum reinforcement that influence the strength and failure behavior of structures. It provides examples of how fracture mechanics has been applied to problems like evaluating shear strength in deep beams and investigating a failure of an oil platform structure. The document argues that fracture mechanics provides a more scientific basis for structural design compared to existing empirical code provisions.
This document discusses the assessment of seismic susceptibility of reinforced concrete (RC) buildings. It begins with an introduction to earthquakes and the importance of vulnerability assessment in mitigating earthquake risks and losses. It then describes modeling the nonlinear behavior of RC building elements and performing pushover analysis to evaluate building performance. The document outlines modeling RC frames and developing moment-curvature relationships. It also summarizes the results of pushover analyses on sample 2D and 3D RC frames with and without shear walls. The conclusions emphasize that pushover analysis effectively assesses building properties but has limitations, and that capacity spectrum method provides appropriate results for evaluating building response and retrofitting impact.
A geophysical insight of earthquake occurred on 21 st may 2014 off paradip, b...eSAT Publishing House
1) A 6.0 magnitude earthquake occurred off the coast of Paradip, Odisha in the Bay of Bengal on May 21, 2014 at a depth of around 40 km.
2) Analysis of magnetic and bathymetric data from the area revealed the presence of major lineaments in NW-SE and NE-SW directions that may be responsible for seismic activity through stress release.
3) Movements along growth faults at the margins of large Bengal channels, due to large sediment loads, could also contribute to seismic events by triggering movements along the faults.
Effect of hudhud cyclone on the development of visakhapatnam as smart and gre...eSAT Publishing House
This document discusses the effects of Cyclone Hudhud on the development of Visakhapatnam as a smart and green city through a case study and preliminary surveys. The surveys found that 31% of participants had experienced cyclones, 9% floods, and 59% landslides previously in Visakhapatnam. Awareness of disaster alarming systems increased from 14% before the 2004 tsunami to 85% during Cyclone Hudhud, while awareness of disaster management systems increased from 50% before the tsunami to 94% during Hudhud. The surveys indicate that initiatives after the tsunami improved awareness and preparedness. Developing Visakhapatnam as a smart, green city should consider governance
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities. Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) algorithms. We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. The results of our experiments show that our CNN-LSTM method is much better at finding smart grid intrusions than other deep learning algorithms used for classification. In addition, our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy rate of 99.50%.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
2. Operations Strategy in a Global Environment.ppt
An application specific reconfigurable architecture
1. IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
_______________________________________________________________________________________
Volume: 03 Issue: 02 | Feb-2014, Available @ http://www.ijret.org 645
AN APPLICATION SPECIFIC RECONFIGURABLE ARCHITECTURE
FOR FAULT TESTING AND DIAGNOSIS: A SURVEY
A. R. Kasetwar¹, Gaurav Kumar², S. M. Gulhane³
¹Research Scholar, Dept. of Electronics & Telecommunication, BDCE, Sewagram, Maharashtra, India
²P.G. Student, Dept. of Electronics & Telecommunication, DBNCOET, Yavatmal, Maharashtra, India
³Professor and Head, Dept. of Electronics & Telecommunication Engg., JDIET, Yavatmal, Maharashtra, India
Abstract
Nowadays many VLSI designers implement real-time applications on FPGAs. Although these designs work efficiently, they do not always achieve their expected goals. This is because of faults which occur in the FPGA at the runtime of the application. Those faults remain in the circuitry, as there is no provision for removing them at the application level, so there is a great need for fault detection and removal. Interconnect faults, logical faults and delay faults are the main faults which reduce the performance of an FPGA. Although manufacturers are trying to decrease the faults present in the FPGA, it is very necessary to remove those faults at run time of the particular application. This paper includes a brief discussion of the occurrence of different faults and various methods to remove those faults.
Key Words: Fault diagnosis, field-programmable gate array (FPGA), testing.
1. INTRODUCTION
A Field Programmable Gate Array (FPGA) is a logic device
that is used to implement a number of digital circuits. FPGA
is widely used in many applications due to their reprogram
ability, flexibility characteristic. It has also the advantage of
short design & implementation cycles with low non-
recurring engineering cost. As compare to Application
specific integrated circuits (ASIC) FPGA results in faster
design and debug cycle due to its reprogram ability. Though
the density capability and speed of FPGA is increased, it
becomes more vulnerable to various types of faults, but the
FPGA test can be substantially more complex than
application –Specific integrated circuit test. The basic
architecture of FPGA consists of three major components:
programmable logic blocks which implements the logic
functions, programmable routing (interconnects) to
implement these functions and IO blocks to make off-chip
connections. We can Program FPGA for combination and
sequential functions. All Programmable logic blocks (PLB)
are Identical before programming. An illustration of typical
FPGA architecture is shown in figure
Fig. 1: Basic architecture of an FPGA
Fig. 2: Typical PLB structure
Fig. 2 shows the typical structure of a programmable logic block (PLB); it consists of a memory block that can function as a look-up table (LUT) or RAM, a number of flip-flops (FFs), and multiplexing output logic. The LUT/RAM block may also contain special-purpose logic for arithmetic functions (counters, adders, multipliers, etc.). The RAM may be configured in various modes of operation such as synchronous, asynchronous, single-port or dual-port. The FFs can also be configured as latches, and may have programmable clock-enable, preset/clear, and data selector functions [2].
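The LUT at the heart of the PLB can be understood as nothing more than a small memory addressed by the logic inputs. The following sketch is an illustrative behavioral model, not tied to any specific FPGA family: the configuration bitstream fills a 2^k-entry truth table, and evaluating the LUT is just an indexed read.

```python
# Illustrative model of a k-input LUT: configuration bits fill a
# 2**k-entry truth table; the inputs form the read address.

class LUT:
    def __init__(self, k, config_bits):
        assert len(config_bits) == 2 ** k
        self.k = k
        self.mem = list(config_bits)       # memory matrix, loaded at config time

    def eval(self, *inputs):
        addr = 0
        for i, bit in enumerate(inputs):   # inputs address the memory matrix
            addr |= (bit & 1) << i
        return self.mem[addr]

# Program a 2-input LUT as XOR: one truth-table entry per input pattern.
xor_lut = LUT(2, [0, 1, 1, 0])
print([xor_lut.eval(a, b) for a in (0, 1) for b in (0, 1)])
```

Because any k-input function is just a different fill of the same memory, reprogramming the FPGA never changes the hardware, only the table contents.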
FPGA manufacturers constantly try to decrease the number of faults present in their designed FPGAs. Detecting faults and identifying the type of fault present in the circuit is known as fault detection. Fault diagnosis is the process of locating the fault in the circuit and replacing or removing the faulty circuit with a good one.
In general, FPGA testing is of two types:
1) Application independent
2) Application dependent
Application-independent tests are performed at the manufacturer level. In this test, the entire FPGA's resources are tested for the presence of faults. The faults usually focused
in this testing are logical faults and interconnection faults
[21].
Compared to application-independent test and diagnosis, application-dependent test and diagnosis is faster, with higher diagnosis resolution over a more comprehensive set of faults [12]. This is because an application-dependent test focuses only on the specific part of the FPGA used for a particular design instead of diagnosing the complete FPGA. The faults of interest in this type of testing are only those that can affect the operation of that specific part of the FPGA. It includes diagnosis of faults related to logic blocks, interconnect faults, and delay faults, which can strongly affect the timing characteristics of the circuitry. In this paper, we focus on a detailed study of various application-dependent fault diagnosis methods. Application-dependent fault diagnosis covers faults in the logic blocks, interconnect faults, and faults due to the presence of delay.
Faults in the logic blocks are those related to the Look-Up Table (LUT),
the multiplexers, and the flip-flops. For an LUT, a fault can occur in
the memory matrix, the decoder, or the input/output lines. A faulty
memory matrix makes some memory cells incapable of storing the correct
logic values. If the fault is in the decoder, a wrong address may lead to
reading the contents of the wrong cell. Faults on the input/output lines
typically manifest as stuck-at faults. Multiplexer faults are functional
faults, because the internal connection of the multiplexers in an FPGA
differs from application to application; a faulty multiplexer may fail to
select the correct input applied to it. Faults in the flip-flops are also
functional faults: a fault can cause a flip-flop to receive no data, to
be unable to trigger on the correct clock edge, or to be unable to be set
or reset.
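A minimal software model of these LUT fault classes, assuming a simple memory-matrix-plus-decoder structure. The class and its fault-injection hooks are hypothetical, not taken from the paper:

```python
# Hypothetical model: a k-input LUT as a memory matrix plus an address
# decoder, with the two fault classes described above injected separately.

class LUT:
    def __init__(self, k, contents):
        self.k = k
        self.mem = list(contents)   # memory matrix: 2**k cells
        self.stuck = {}             # memory-matrix fault: cell -> stuck value
        self.addr_swap = None       # decoder fault: pair of confused addresses

    def read(self, *inputs):
        addr = sum(bit << i for i, bit in enumerate(inputs))
        if self.addr_swap and addr in self.addr_swap:
            # decoder fault: the wrong cell is selected
            a, b = self.addr_swap
            addr = b if addr == a else a
        # memory-matrix fault: the cell cannot hold the correct value
        return self.stuck.get(addr, self.mem[addr])

# Program a 2-input XOR: contents indexed by address (b1 b0).
lut = LUT(2, [0, 1, 1, 0])
print(lut.read(1, 0))   # 1  (fault-free XOR)
lut.stuck[1] = 0        # inject: cell 1 stuck-at-0
print(lut.read(1, 0))   # 0  (memory-matrix fault observed)
```

Setting `addr_swap` instead of `stuck` models the decoder fault, where a correct address reads the wrong cell's contents.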
Interconnect faults are faults generated in the connecting wires [22].
They may be open faults, short faults, or stuck-at faults. The other type
of fault is the delay fault, which may severely affect the timing
characteristics of the output. It is an important task to test whether an
operation completes within the specified clock cycle or not: even though
a circuit may be free of logical and interconnection faults, delay faults
can still badly affect its output characteristics.
2 PREVIOUS WORK
2.1 LOGICAL FAULTS
Tomoo Inoue et al. proposed a universal fault diagnosis method for
unprogrammed FPGAs, based on a test procedure for configurable logic
blocks (CLBs) developed by Michinishi [1]. The method is used to locate
the fault in only one CLB. Their assumption is that there is at least one
CLB containing faults such as stuck-at faults, interconnect faults, and
multiple-access faults of the look-up table. The procedure is performed
repeatedly by implementing a configuration and then applying an input
sequence to that configuration. The test procedure TPCLB is represented
by a sequence of pairs, each consisting of a configuration and the input
sequence applied to it:
TPCLB = [(C1, S1), (C2, S2), ..., (C2k+1, S2k+1)]
This test procedure detects any fault in the faulty block. However, the
method requires repetitive computations, which makes testing
time-consuming.
A hybrid fault model for FPGA testing was introduced in 2001, which
permits the detection of all single faults (i.e., stuck-at and functional
faults) along with some multiple faults [2]. Repeated FPGA reprogramming
is used, and the interconnects and IOBs are assumed to have already been
tested. The main objectives of the proposed method are:
a) 100% fault coverage with neither delay nor area overhead.
b) Ease of test pattern generation, because test patterns are generated
for CLBs rather than for the complete FPGA.
c) Efficient implementation of the testing process.
d) A number of programming phases that is as small as possible.
The authors generate test patterns in two phases according to the CLB
partitioning. The LUT memory matrix can be tested by reading all the
memory bits in two phases, where the second phase is the complement of
the first. For testing stuck-at faults the scenario is different: the
contents of the LUT must be arranged such that the Boolean difference is
one for the input under test, for which multiple patterns are required.
For the multiplexers, each data input must be activated in at least one
phase, because a multiplexer selects a single output from all its inputs;
that is, at least n_i phases are required to test a multiplexer with n_i
inputs. The advantage of this method is that the time required to test
all the CLBs is the same as that required to test a single CLB with
perfect controllability/observability. All the CLBs can be under test
simultaneously, which is not possible with other methods such as the BIST
approach or the naive approach.
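The two-phase memory-matrix test can be sketched as follows. The `write_cell`/`read_cell` callbacks are hypothetical stand-ins for LUT reconfiguration and readback:

```python
# Sketch of the two-phase memory-matrix test: phase 1 writes and reads a
# background of zeros, phase 2 the complement (ones), so every cell is
# checked against both stuck-at-0 and stuck-at-1. The write/read
# callbacks are hypothetical stand-ins for LUT reconfiguration.

def two_phase_test(write_cell, read_cell, n_cells):
    """Return the set of addresses whose cell failed either phase."""
    failing = set()
    for phase_value in (0, 1):            # phase 2 complements phase 1
        for addr in range(n_cells):
            write_cell(addr, phase_value)
        for addr in range(n_cells):
            if read_cell(addr) != phase_value:
                failing.add(addr)
    return failing

# Model: a 16-cell memory matrix with cell 5 stuck-at-1.
mem = [0] * 16
def write_cell(addr, v):
    mem[addr] = 1 if addr == 5 else v     # cell 5 ignores writes of 0
def read_cell(addr):
    return mem[addr]

print(two_phase_test(write_cell, read_cell, 16))  # {5}: phase 1 exposes it
```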
The method proposed in [2] correctly addresses faults in the CLBs, but
the problem of testing faulty multiplexers was solved incorrectly. A new
built-in self-test (BIST) approach, able to detect and accurately
diagnose all single and practically all multiple faulty PLBs in an FPGA
with maximum diagnostic resolution, was proposed by M. Abramovici and
C. Stroud [5]. The logic and interconnect faults are tested separately;
it is an offline testing method. The problem with the conventional BIST
approach is area overhead and delay penalties, which result in speed
degradation that is unacceptable in high-performance systems. The BIST
methods were first proposed for testing PLBs and then extended to testing
interconnect faults.
Groups of PLBs are configured as Test Pattern Generators (TPGs), Output
Response Analyzers (ORAs), and Blocks Under Test (BUTs), as shown in
Fig. 3(a). The BUT is reconfigured repeatedly to test it in all modes of
operation. Once the BUTs are tested, the roles of the PLBs are reversed,
so that
in the next test session the previous BUTs become TPGs or ORAs and vice
versa. Figs. 4(a) and 4(b) give the floor plans for the two test
sessions. The authors used a pseudo-exhaustive testing method.
FIG. 3. BIST Architecture: TPG, BUT and ORA Connection.
FIG. 4. a) Floor Plan for the First Test Session; b) Floor Plan for the
Second Test Session.
The following claims were made: any single faulty PLB is guaranteed to be
detected; a group of faulty PLBs in the same row is guaranteed to be
detected; and a group of faulty PLBs in the middle rows of the same
column that has at least two adjacent fault-free PLBs is also guaranteed
to be detected.
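A comparison-based ORA of this kind can be sketched as below, assuming (as in comparator-style BIST) that two identically configured BUTs receive the same TPG patterns, so no golden responses need to be stored. The block functions are hypothetical:

```python
# Sketch (assumed structure): the TPG drives two identically configured
# blocks under test, and the ORA is a comparator, so no golden responses
# need to be stored. The block functions here are hypothetical.

from itertools import product

def tpg(n_inputs):
    """Exhaustive test pattern generator for n-input blocks."""
    return product((0, 1), repeat=n_inputs)

def ora(but_a, but_b, patterns):
    """Output response analyzer: report patterns where the BUTs disagree."""
    return [p for p in patterns if but_a(*p) != but_b(*p)]

fault_free = lambda a, b, c: (a & b) ^ c
faulty     = lambda a, b, c: (a & b) ^ 0   # input c stuck-at-0

mismatches = ora(fault_free, faulty, tpg(3))
print(len(mismatches))   # 4: every pattern with c=1 is caught
```

Note the comparator cannot tell which of the two blocks is faulty; in the scheme above this is resolved by swapping roles across test sessions.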
Most authors had focused on application-independent diagnosis of FPGAs.
Application-dependent diagnosis techniques for the logic and interconnect
resources of FPGAs were introduced by M. B. Tahoori [17]. For logic
diagnosis, the configurations of the used logic blocks remain unchanged,
while the configurations of the interconnect resources and the unused
logic blocks are modified. Any single functional fault in the logic
blocks, including all stuck-at faults, is accurately diagnosed in only
one test configuration.
The problem with the previous testing methods is that they diagnose only
those blocks which are used by a particular application, whereas the
other blocks, which are not used for that operation, may introduce new
faults. This affects the reliability of the system. A new technique for
online testing and diagnosis of non-transient faults in PLBs with the
help of roving self-testing areas (STARs) was introduced [6]. A STAR is a
temporarily offline section of the FPGA in which self-testing proceeds
without affecting the actual operation of the FPGA. The BIST approach
detects any combination of faulty PLBs. During the testing process, the
BIST approach is used to test all PLBs in the BISTER tile. The main
advantage of this testing is that if a particular fault is not located,
the suspected faulty PLBs are divided into subsets and retested. The
diagnosis time is very short.
Further, J. Emmert et al. introduced new fault-tolerance (FT) techniques
for PLBs [9]. In previous FT techniques, faults are detected within the
working part of the system and then located or bypassed as quickly as
possible so that the operation of the FPGA is not affected. In the STAR
technique, the FPGA is divided into two parts: the STARs, where the BIST
and diagnosis take place, and the working area, where normal operation is
carried out. When the test of one part is completed, the STAR roves to
another part so that it eventually covers the complete FPGA. The main
advantage of this technique is that faults are detected inside a STAR, so
they do not affect the working of the system. This technique is used only
for logical faults. The authors determine whether the system can continue
working in the presence of the located faults; in many situations this is
possible, in which case no reconfiguration is needed, but if a fault
affects the system function, alternative configurations are determined
that avoid the faulty resources. This method allows more time for
accurate diagnosis and for computing any required fault-bypassing
configuration, and it determines the faulty LUTs or faulty FFs inside a
PLB.
M. B. Tahoori focused on logic and interconnect faults [11]. For the
logic faults, a Built-In Self-Diagnosis (BISD) method is used, in which
the configuration of the used blocks remains unchanged while the
configuration of the interconnect resources and unused logic blocks is
modified. In this method, any functional fault is accurately diagnosed.
In this scheme, all used logic blocks are tested exhaustively. The global
interconnect is reprogrammed so that test signals are routed to each
logic block. A linear feedback shift register (LFSR) is used for
generating the test vectors, which are connected to all logic blocks, and
the output of the logic blocks is connected to an internal response
compactor. The number of test sessions required by this technique is
smaller than in the other methods, which also take more time and even
then focus only on single faults.
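An LFSR of the kind mentioned can be sketched as follows. The 4-bit width and tap positions (a maximal-length polynomial) are illustrative, not taken from the paper:

```python
# Sketch of LFSR-based test-vector generation: a 4-bit Fibonacci LFSR
# with taps at bits 3 and 2, which cycles through all 15 nonzero states
# before repeating. The width and taps are illustrative choices.

def lfsr_vectors(seed, taps, width, count):
    """Yield 'count' pseudo-random test vectors from a Fibonacci LFSR."""
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:                       # XOR the tapped bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

vectors = list(lfsr_vectors(seed=0b0001, taps=(3, 2), width=4, count=15))
print(len(set(vectors)))   # 15: maximal-length sequence, all nonzero states
```

In hardware this costs only a handful of flip-flops and one XOR gate, which is why LFSRs are the standard TPG in BIST schemes.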
FIG. 5. Application-Dependent Self-Test Architecture for Logic Blocks:
a) Original Configuration; b) BIST Configuration.
2.2 INTERCONNECTION FAULTS
Detection of interconnection faults in FPGA circuits is a difficult
problem. In 2002, M. B. Tahoori proposed a new method to diagnose open
defects present in the circuit. An open defect is a discontinuity in the
connection between two circuit nodes that should be completely connected
[13]. The author proposed a two-step diagnosis process to identify the
faulty interconnect blocks. The first step is a coarse-grain step, which
localizes the fault to a small portion of the FPGA; the second,
fine-grain step then precisely locates the fault inside that portion of
the FPGA.
FIG. 6. A Test Configuration for Interconnect.
In the above figure, the test configuration consists of a number of wires
under test (WUTs). A WUT consists of the routing paths which connect the
output of one logic block to the input of another logic block. During the
diagnosis process, the logic value of the WUT is captured and stored in
the flip-flop connected to it in the next cycle. The value stored in the
flip-flop is then compared against the applied test vector, and the
faulty WUT is identified. The input to the fine-grain diagnosis is a
defective WUT, which is the output of the coarse-grain step. In this
step, the goal is to identify the faulty resources. The basic idea is
that a portion of the WUT is removed and the connection is re-made using
some other routing resources. If the new WUT still fails, the removed
portion is fault-free, i.e., the fault is located in the non-removed
resources; otherwise, the opposite conclusion is made. The time required
for interconnect diagnosis is large because the complete process is
performed in two parts, and 100% fault removal is also not guaranteed.
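The elimination logic of the fine-grain step can be sketched as below. The segment names and the failure model are hypothetical:

```python
# Sketch (hypothetical model) of the fine-grain step: each segment of a
# failing WUT is bypassed in turn via a spare route; if the reduced WUT
# still fails, the bypassed segment was fault-free.

def fine_grain_diagnose(segments, fails):
    """'fails(active_segments)' models retesting a rerouted WUT.
    Return the segments that remain suspect after elimination."""
    suspects = set(segments)
    for seg in segments:
        rerouted = [s for s in segments if s != seg]  # bypass one segment
        if fails(rerouted):
            suspects.discard(seg)   # still failing: 'seg' is fault-free
    return suspects

# A 5-segment wire under test with an open defect in segment 'w2'.
wut = ["w0", "w1", "w2", "w3", "w4"]
fails = lambda active: "w2" in active
print(fine_grain_diagnose(wut, fails))   # {'w2'}
```

Each bypass-and-retest round requires a new configuration, which is why the text notes that the overall diagnosis time is large.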
Further, G. Hariss et al. suggested a new method to diagnose faults in
cluster-based FPGAs [14]. Fault detection in cluster-based FPGAs is very
difficult because of their high densities. The authors used a BIST method
and focused on two possible faults present in FPGAs: an open fault, in
which a single line is broken or a connectable cable is left
unconnected, and a short defect, which causes two lines to be crossed.
However, the diagnostic resolution was limited to a given set of WUTs,
and far from the diagnostic resolution required for efficient
fault-tolerant applications.
According to M. B. Tahoori, for interconnect diagnosis the configurations
of the used logic blocks are modified while the interconnect
configuration remains unchanged [11]. Any single fault (open, stuck-at,
or bridging fault) in the interconnect can be uniquely identified in a
small number of test configurations. The author categorized the diagnosis
procedure into two parts: (a) the adaptive and (b) the non-adaptive
approach. In the adaptive approach, the selection of the next step
depends on the result of the previous step, whereas in the non-adaptive
approach all the steps are performed first and the result is then deduced
from the failing pattern. The non-adaptive approach is preferred over the
adaptive approach because its time requirement is lower.
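The trade-off can be illustrated on the simplified problem of locating one faulty wire among n candidates. This is a hypothetical model, not the paper's exact procedure:

```python
# Sketch contrasting the two approaches on locating one faulty wire
# among n candidates. The wiring and failure model are hypothetical.

from math import ceil, log2

def non_adaptive(n, fails):
    """All ceil(log2 n) configurations are fixed up front; the failing
    pattern is then decoded in one shot, so sessions can be batched."""
    bits = ceil(log2(n))
    pattern = [fails([w for w in range(n) if (w >> b) & 1])
               for b in range(bits)]
    return sum(int(p) << b for b, p in enumerate(pattern))

def adaptive(n, fails):
    """Each configuration depends on the previous result (binary search),
    so the sessions must run strictly one after another."""
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if fails(list(range(lo, mid + 1))):
            hi = mid
        else:
            lo = mid + 1
    return lo

fails = lambda group: 5 in group        # wire 5 is faulty
print(non_adaptive(8, fails), adaptive(8, fails))   # 5 5
```

Both locate the faulty wire, but the non-adaptive configurations are known in advance, which shortens total test time on hardware.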
2.3. DELAY FAULT
Delay fault diagnosis is more difficult than interconnect fault
diagnosis, because a delay fault model depends on the size of the delay
defect and is therefore harder to define. Path delay testing of FPGAs is
very important because an FPGA that is free of logical and interconnect
faults may still fail to work properly due to delay. The basic idea is to
test a set of interconnects, or paths, between two logic blocks for delay
faults by creating a race condition between the signals propagating on
those paths. The particular set of paths under test (PUTs) between two
logic blocks is configured such that their fault-free propagation delays
are nearly identical, so that a signal transition occurring
simultaneously at the start of the PUTs should also occur simultaneously
at the end of the PUTs [15].
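In simulation, the race-based check reduces to comparing arrival times on two nominally matched paths. The delay values and tolerance below are hypothetical:

```python
# Sketch (simulated, hypothetical delays) of the race-based delay test:
# two nominally equal-delay paths carry the same transition, and a delay
# fault shows up as the transition arriving at measurably different times.

def race_test(path_a_delay_ns, path_b_delay_ns, tolerance_ns=0.1):
    """Flag a delay fault when the arrival-time skew exceeds tolerance."""
    skew = abs(path_a_delay_ns - path_b_delay_ns)
    return skew > tolerance_ns

print(race_test(2.0, 2.05))   # False: paths are nominally matched
print(race_test(2.0, 3.4))    # True: one path has a delay defect
```

The attraction of the race scheme is that no absolute timing reference is needed: each path is measured against its twin.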
In the above method, an iterative logic array model is used which tests a
number of similar sections of interconnect simultaneously [16]. The
authors suggested a new approach which partitions the target paths into
subsets that are used in the same test configuration. They tested all the
paths with all combinations of signal inversions. As there are a large
number of paths present in the circuit, only a limited set of paths is
tested, with the guarantee that the maximum delay along the tested paths
will not exceed the clock period during normal operation. They proposed
two methods: in the first, known as the single-phase method, paths are
selected so that all paths in each configuration can be tested in
parallel, whereas the second, multi-phase method attempts to test the
paths in a configuration with a sequence of test phases, each of which
tests a set of paths in parallel.
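The multi-phase grouping can be sketched as a greedy partition in which no two paths in a phase share a routing resource. The path and resource names are hypothetical:

```python
# Sketch of the multi-phase idea: partition the target paths into phases
# such that no two paths tested in the same phase share a routing
# resource. Greedy grouping; path/resource names are hypothetical.

def partition_into_phases(paths):
    """paths: dict of path name -> set of routing resources it uses.
    Return a list of phases, each a list of mutually disjoint paths."""
    phases = []
    for name, res in paths.items():
        for phase in phases:
            # a path joins the first phase it conflicts with nobody in
            if all(res.isdisjoint(paths[other]) for other in phase):
                phase.append(name)
                break
        else:
            phases.append([name])   # no compatible phase: open a new one
    return phases

paths = {
    "p1": {"wireA", "sw1"},
    "p2": {"wireB", "sw2"},
    "p3": {"wireA", "sw3"},   # shares wireA with p1
}
print(partition_into_phases(paths))   # [['p1', 'p2'], ['p3']]
```

Fewer phases means fewer reconfigurations, so in practice the grouping heuristic directly drives total test time.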
All the previous authors performed delay fault testing for general
integrated circuits. Nur A. Touba et al. proposed a new method for delay
fault diagnosis [26]. The method is simple and
easy to apply: the CLBs are reprogrammed, and the modified circuit is
then tested using the same test pattern which caused the circuit to fail.
As the same test pattern is re-used, the time required for generating
additional diagnostic vectors is eliminated. This technique targets the
common case of a single-point delay defect that increases the delay
through a CLB or an interconnect, causing it to exceed its timing
specification.
3. SUMMARY AND CONCLUSION
Fault diagnosis has particular importance in the context of field
programmable gate arrays (FPGAs) because faults can be avoided by
reconfiguration at almost no real cost. The main faults in FPGAs are
interconnect faults, logic block faults, and delay faults of an arbitrary
design; for an FPGA to work properly, all three must be removed. FPGA
testing is of two types, application dependent and application
independent, and the testing process for each is different. The time
required for application-dependent testing is lower, as it focuses only
on a specific part of the device, whereas application-independent testing
tests the complete FPGA. Faults related to logic blocks, interconnect
faults, and delay faults are the problems facing the FPGA user, and a
number of fault diagnosis methods exist.
In this paper, we have made a detailed survey of a number of fault
diagnosis methods. From this survey we conclude that for interconnect
diagnosis the method explained by M. B. Tahoori [11] is better than the
other methods, because all single and multiple faults (open, stuck-at, or
bridging faults) are uniquely identified and removed. For logic block
diagnosis, the BISD approach is selected because multiple faults are
uniquely identified in a single test configuration with a fixed test
time, while all other methods are limited to faulty CLBs and single
faults [11]. For delay faults, the method proposed by Jayabrata Ghosh
[26] is preferred: it is used for both manufacturing and
user-configuration tests, with the guarantee that the maximum delay along
the tested paths will not exceed the clock period during normal
operation.
TABLE 1: Comparison of Different Methods

Reference | Logic Blocks | Interconnection | Delay Faults | Conclusion
[1]       | Yes          | No              | No           | Only one fault considered
[2]       | Yes          | No              | No           | Only one fault considered
[5]       | Yes          | No              | No           | Two test sessions for all CLBs
[6]       | Yes          | No              | No           | Accurate only for a single faulty PLB and for some multiple PLBs
[9]       | Yes          | No              | No           | Used for online operation
[11]      | Yes          | Yes             | No           | For logic blocks, only one test configuration; for interconnect, logarithmic in the FPGA size
[25]      | No           | No              | Yes          | Used by manufacturer
[26]      | No           | No              | Yes          | Time required is small; used by manufacturer and user
REFERENCES
[1] Tomoo Inoue, S. Miyazaki, and H. Fujiwara, "Universal fault diagnosis for look-up tables," IEEE Design & Test of Computers, 1998, pp. 39-44.
[2] W.-J. Huang and E. J. McCluskey, "Column-based precompiled configuration techniques for FPGA fault tolerance," in Proc. IEEE Symp. Field-Program. Custom Comput. Mach., 2001, pp. 137-146.
[3] Abderrahim Doumar and Hideo Ito, "Detecting, diagnosing, and tolerating faults in SRAM-based field programmable gate arrays: A survey," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 11, no. 3, June 2003.
[4] C. Stroud, J. Nall, M. Lashinsky, and M. Abramovici, "BIST-based diagnosis of FPGA interconnect," in Proc. Int. Test Conf., Oct. 2002, pp. 618-627.
[5] M. Abramovici and C. Stroud, "BIST-based test and diagnosis of FPGA logic blocks," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 9, 2001, pp. 159-172.
[6] M. Abramovici, J. Emmert, and C. Stroud, "Online BIST and BIST-based diagnosis of FPGA logic blocks," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 12, 2004, pp. 1284-1294.
[7] W. Shi and W. K. Fuchs, "Optimal interconnect diagnosis of wiring networks," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 3, no. 3, pp. 430-436, Mar. 1995.
[8] W. K. Huang, X. T. Cheng, and F. Lombardi, "On the diagnosis of programmable interconnect systems: Theory and application," in Proc. IEEE VLSI Test Symp., Princeton, NJ, 1996, pp. 204-209.
[9] M. Abramovici, J. Emmert, and C. Stroud, "Online fault tolerance for FPGA logic blocks," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 15, 2007, pp. 216-226.
[10] S. Mitra, P. P. Shirvani, and E. J. McCluskey, "Fault location in FPGA-based reconfigurable systems," in Proc. IEEE Int. High Level Design Validation Test Workshop, 1998, pp. 12-14.
[11] M. B. Tahoori, "High resolution application specific fault diagnosis of FPGAs," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 19, 2011, pp. 1775-1786.
[12] C. Stroud and M. Abramovici, "BIST-based diagnosis of FPGA interconnect," in Proc. Int. Test Conf., 2002, pp. 618-627.
[13] M. B. Tahoori, "Diagnosis of open defects in FPGA interconnects," in Proc. Design Autom. Conf., 2003, pp. 678-681.
[14] G. Hariss and R. Tessier, "Testing and diagnosis of interconnect faults in cluster-based FPGA architectures," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 21, 2002, pp. 1337-1343.
[15] E. Chmelar, "FPGA interconnect delay fault testing," in Proc. IEEE Int. Test Conf., NC, 2003, pp. 1239-1247.
[16] P. Menon, W. Xu, and R. Tessier, "Design-specific path delay testing in lookup-table-based FPGAs," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 23, 2005, pp. 1337-1343.
[17] M. B. Tahoori, "Application dependent testing of FPGA interconnects," in Proc. Defect Fault Toler. VLSI, 2003, pp. 409-416.
[18] M. Abramovici and C. Stroud, "BIST-based test and diagnosis of FPGA logic blocks," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 9, no. 1, pp. 159-172, Feb. 2001.
[19] X. Sun, J. Xu, B. Chan, and P. Trouborst, "Novel technique for built-in self-test of FPGA interconnects," in Proc. Int. Test Conf., 2000, pp. 795-803.
[20] D. Das and N. A. Touba, "A low cost approach for detecting, locating, and avoiding interconnect faults in FPGA-based reconfigurable systems," in Proc. Int. Conf. VLSI Des., 1999, pp. 266-269.
[21] M. B. Tahoori and Subhasish Mitra, "Application-independent testing of FPGA interconnects," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 24, no. 11, Nov. 2005.
[22] M. Renovell, J. M. Portal, J. Figueras, and Y. Zorian, "Testing the configurable interconnect/logic interface of SRAM-based FPGAs," in Proc. IEEE Int. Conf. Design, Automation and Test in Europe, Munich, Germany, 1999, pp. 618-622.
[23] Shantanu Dutt, Vinay Verma, and Vishal Suthar, "Built-in self-test of FPGAs with provable diagnosabilities and high diagnostic coverage with application to online testing," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 27, no. 2, Feb. 2008.
[24] M. B. Tahoori, E. J. McCluskey, M. Renovell, and P. Faure, "A multi-configuration strategy for an application dependent testing of FPGAs," in Proc. VLSI Test Symp., 2004, pp. 154-159.
[25] A. George, "An efficient design for application specific fault diagnosis of FPGA," in Third National Conference on Modern Trends in Electronics, Communication & Signal Processing, 2013.
[26] Jayabrata Ghosh and Nur A. Touba, "Improving diagnosis resolution of delay faults in FPGAs by exploiting reconfigurability," in Proc. Int. Symp. on Defect and Fault Tolerance in VLSI Systems, 2001.
[27] T. Nandha Kumar and Fabrizio Lombardi, "A novel heuristic method for application-dependent testing of a SRAM-based FPGA interconnect," IEEE Trans. Comput., vol. 62, no. 1, Jan. 2013.
[28] M. B. Tahoori, S. Mitra, S. Toutounchi, and E. J. McCluskey, "Fault grading FPGA interconnect test configurations," in Proc. Int. Test Conf., Baltimore, MD, 2002, pp. 608-617.
[29] W. H. Kautz, "Testing for faults in wiring networks," IEEE Trans. Comput., vol. C-23, no. 4, pp. 358-363, Apr. 1974.
BIOGRAPHIES
A. R. Kasetwar is a research scholar at Bapurao Deshmukh College of
Engineering, Wardha, working in the field of DSP-VLSI (phone:
9766813382; e-mail: abhaykasetwar@gmail.com).

Gaurav Kumar is pursuing his M.E. at Dr. Bhausaheb Nandurkar College of
Engineering and Technology, Yavatmal, India (phone: 9860402620; e-mail:
singhgauravsep@gmail.com).

Dr. S. M. Gulhane is working as a Professor and Head of the Dept. of
Electronics & Telecommunication Engg., and holds the charge of Dean
(Admin.), at Jawaharlal Darda Inst. of Engg. & Tech., Yavatmal (phone:
9881832100; e-mail: smgulhane67@rediffmail.com).