Implementation of an FPGA-based video capture card. The video input is taken from an analog VGA source, i.e. a PC. The video is in RGB format at a resolution of 1024x768 pixels with a color depth of 24 bits, i.e. RGB(8:8:8). At 30 fps the raw data rate comes out to about 566 Mbps. The output is to be sent to a target PC through a USB port. The maximum signaling rate of USB 2.0 at hi-speed is 480 Mbps, but the practical throughput does not go much beyond 200 Mbps.
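The 566 Mbps figure follows directly from the frame parameters; a quick arithmetic check in Python:

```python
# Raw data rate of the 1024x768, 24-bit, 30 fps RGB video stream.
width, height = 1024, 768
bits_per_pixel = 24          # RGB(8:8:8)
fps = 30

raw_bps = width * height * bits_per_pixel * fps
print(raw_bps / 1e6)         # 566.23104 -> ~566 Mbps, above USB hi-speed's 480 Mbps
```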
Hence a video compression technique has to be applied to reduce the data rate so that the data can be sent over USB. The data should be captured by the target PC and stored there in some standard video format, e.g. AVI, 3GP, ASF, or WMV. The video should then be played back through any standard or custom media player. The design is implemented on an FPGA rather than an ASIC because of the FPGA's fast parallel processing capability.
FPGA Mezzanine Connectors
The FMC-Video daughter board (hereafter referred to as FMC-Video) includes several video interfaces. A DVI connector supports both analog and digital video data. SDTV input is supported through S-Video and composite inputs. The Digital Visual Interface (DVI) input on this board supports the DVI 1.0 specification for combined single-link digital and analog video.
DDC-EDID
The DVI input supports identification through the use of an Extended Display Identification Data (EDID) structure available through the Display Data Channel (DDC) interface. This consists of an I2C EEPROM that is powered through the DVI connector and accessible through the connector. The FMC-Video board also includes the ability to access this EEPROM internally for programming.
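For illustration, the 128-byte EDID base block fetched over DDC can be sanity-checked in software: it starts with a fixed 8-byte header and its bytes sum to 0 modulo 256. A minimal Python sketch using a synthetic block (not data read from the actual EEPROM):

```python
# Fixed 8-byte header defined by the EDID standard.
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def edid_valid(block: bytes) -> bool:
    """Check header and checksum of a 128-byte EDID base block."""
    if len(block) != 128 or block[:8] != EDID_HEADER:
        return False
    return sum(block) % 256 == 0          # checksum byte makes the sum 0 mod 256

# Build a dummy block and fill in the checksum byte (offset 127).
blk = bytearray(128)
blk[:8] = EDID_HEADER
blk[127] = (-sum(blk)) % 256
print(edid_valid(bytes(blk)))             # True
```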
Verilog/VHDL design in the ISE integrated simulation environment: first the code has to be written in a hardware description language, Verilog or VHDL; this HDL code is then synthesized into registers and combinational logic. This approach is very difficult for this project, as a very high level of HDL programming is required.
Xilinx Platform Studio: the MicroBlaze soft processor handles the software portion of the design, and different HDL cores handle the hardware portion, integrated with MicroBlaze through the PLB.
System Generator (.mdl): only very basic and a limited number of blocks are available.
AccelDSP: works from MATLAB code.
XPS enables you to design a complete embedded processor system (MicroBlaze) for implementation in a Xilinx FPGA device. XPS is used primarily for embedded processor hardware system development. Configuration of the microprocessor, the peripherals, and the interconnection of these components takes place in XPS. All the tools were studied and it was decided that Xilinx Platform Studio was the better option, because input and output interfaces are easy to develop in XPS given its built-in IP cores for different interfaces. It was then realized, however, that the video compression and USB cores were not available free of cost, so they had to be developed by some other means. The base platform for video capturing was prepared using XPS. A study of Verilog was started first, but it became clear that writing a Verilog implementation of video compression would require a very high level of programming. Instead, the compression algorithm is written in MATLAB, implemented in AccelDSP, exported to System Generator as a block, simulated and tested in Simulink, and then passed to XPS as a peripheral core, where it is attached to MicroBlaze through a bus, i.e. the PLB or FSL.
The base platform includes the following Processor IP blocks:
• MicroBlaze™ 32-bit Soft Microprocessor
• Local Memory Bus (LMB)
• LMB Block RAM Controller
• Block RAM Block Memory
• Processor Local Bus (PLB)
• XPS Uartlite
• XPS General Purpose Input/Output (GPIO)
• XPS Inter-Integrated Circuit (IIC) Controller
• XPS System ACE™ Compact Flash Controller
• External Multi-Port Memory Controller (MPMC)
• MicroBlaze Debug Module (MDM)
• Clock Generator
• Processor System Reset
Video capturing is the capture of frames from analog VGA signals or digital DVI streams. Capturing images means reading data from the VGA or DVI signal and converting this data into a digital image. The Frame Grabber synchronizes itself with the video source to capture images at the resolution and color depth output by the video source, or at the maximum color depth and resolution supported by the Frame Grabber.
IIC Programming for the dvi_in PCORE
The dvi_in PCORE supports multiple modes and video resolutions. IIC programming is used to set the VSK into the desired mode and resolution. The IIC programming is not handled by the dvi_in PCORE itself; the MicroBlaze processor performs the IIC processing by way of the XPS_IIC PCORE. The dvi_in PCORE brings in the input signals from the input chip, registers the signals, and groups the video signals into a unified bus that can be connected to other EDK PCOREs for processing. A bus interface called DVI_VIDEO_OUT has been defined for the dvi_in PCORE outputs.
The de_gen PCORE generates a Data Enable signal when the video source is analog. The Data Enable signal indicates when active video is present. It does this by analyzing the input HSYNC and VSYNC signals combined with front-porch and back-porch values. The MicroBlaze processor writes the porch values to the block over the PLB interface. The de_gen PCORE is inactive when the video source is DVI, because the Data Enable signal is already generated by the source. The output is a fully digital signal which can be used by other peripherals for further processing. Software registers read the values of VSYNC and HSYNC along with the data; after its calculations, the MicroBlaze writes the front-porch and back-porch values to the block over the PLB interface.
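The counting behind Data Enable generation can be sketched in software: DE is asserted only while a pixel counter sits inside the active window defined by the sync width and porch values. The model below is a simplified single-line sketch; the timing constants are the standard VESA 1024x768@60 values, used here only as an illustration of what the MicroBlaze would program over the PLB.

```python
# Simplified model of de_gen for one video line: DE goes high only
# during the active-pixel window, positioned after HSYNC plus the
# back porch.  Counts follow VESA 1024x768@60 line timing.
HSYNC_W, BACK_PORCH, ACTIVE, FRONT_PORCH = 136, 160, 1024, 24
LINE_TOTAL = HSYNC_W + BACK_PORCH + ACTIVE + FRONT_PORCH   # 1344 clocks

def de(pixel_count: int) -> bool:
    """Data Enable for a given clock count within the line."""
    start = HSYNC_W + BACK_PORCH
    return start <= pixel_count < start + ACTIVE

active = sum(de(c) for c in range(LINE_TOTAL))
print(active)   # 1024 active pixels per line
```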
The video capturing design has been prepared in Xilinx Platform studios.
USB, as its name suggests, is a serial bus. It uses 4 shielded wires, of which two are power (+5 V and GND); the remaining two are a twisted pair carrying differential data signals. It uses an NRZI (Non-Return-to-Zero Inverted) encoding scheme to send data, with a sync field to synchronize the host and receiver clocks. The Universal Serial Bus is host controlled: there can be only one host per bus, and the specification itself does not support any form of multi-master arrangement. Up to 127 devices can be connected to any one USB bus at a given time, and each USB device has a unique address.
USB Protocols
Unlike RS-232 and similar serial interfaces, where the format of the data being sent is not defined, USB is made up of several layers of protocols. Each USB transaction consists of a Token packet (a header defining what it expects to follow), a Data packet (containing the payload), and a Status packet (used to acknowledge transactions and to provide a means of error correction).
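The NRZI scheme mentioned above is simple to state: a '1' bit keeps the current line level, a '0' bit toggles it. A minimal Python sketch of the encoder (real USB additionally bit-stuffs a 0 after six consecutive 1s so the receiver never loses clock sync; that is omitted here):

```python
# NRZI as used on the USB wire: '1' keeps the level, '0' toggles it.
def nrzi_encode(bits, level=1):
    out = []
    for b in bits:
        if b == 0:
            level ^= 1        # transition on every 0 bit
        out.append(level)
    return out

print(nrzi_encode([1, 0, 0, 1, 1, 0]))  # [1, 0, 1, 1, 1, 0]
```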
EZ-Host has an HPI interface. The HPI interface provides DMA access to the EZ-Host internal memory by an external host, plus a bidirectional mailbox register for supporting high-level communication protocols. This port is designed to be the primary high-speed connection to a host processor. Complete control of EZ-Host can be accomplished through this interface via an extensible API and communication protocol.
• Understanding of different languages, i.e. Verilog, VHDL, C/C++, and MATLAB
• Late arrival of the kit: work had to start on the interfaces of a different kit, the Spartan-3E 1600, before the current kit arrived
• Learning of different tools, i.e. XPS, AccelDSP, and System Generator
• Developing a USB controller is very difficult because the protocol is very complex
• Developing a software application on the PC that can grab and store data from its USB port
• Development of a custom media player that can decompress the data according to our compression technique
• Integrating an external RAM, i.e. DDR2, into the design, because a frame is too large to be stored within the FPGA
A single uncompressed color image or video frame with a medium resolution of 500 x 500 pixels would require roughly 100 seconds for transmission over an ISDN (Integrated Services Digital Network) link having a capacity of 64 Kbps.
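The arithmetic behind that figure (assuming 24-bit color, consistent with the rest of the document):

```python
# One uncompressed 500x500 frame at 24 bits/pixel over a 64 kbps ISDN link.
bits = 500 * 500 * 24          # 6,000,000 bits per frame
seconds = bits / 64_000        # 93.75 s, i.e. roughly 100 s
print(seconds)
```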
Run-length encoding is the simplest and earliest data compression scheme developed:
• Sampled images and audio and video data streams often contain sequences of identical bytes
• By replacing these sequences with the byte pattern to be repeated and the number of its occurrences, the data can be reduced substantially
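A minimal Python sketch of this idea, emitting (value, count) pairs:

```python
def rle_encode(data):
    """Replace runs of identical bytes with (value, count) pairs."""
    out = []
    for b in data:
        if out and out[-1][0] == b:
            out[-1][1] += 1            # extend the current run
        else:
            out.append([b, 1])         # start a new run
    return [(v, n) for v, n in out]

print(rle_encode(b"AAAABBBCCD"))
# [(65, 4), (66, 3), (67, 2), (68, 1)]
```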
AccelDSP converts MATLAB code (with slight modifications) directly into an FPGA implementation. AccelDSP comes with the AccelWare toolkit, which contains IP cores for common DSP algorithms.
Floating- to Fixed-Point Conversion
During algorithm development, floating-point numbers (float, 32 bits; double, 64 bits) are used because they offer very high precision. When it comes time to realize the algorithm in hardware, floating-point numbers are often not practical: they consume more hardware resources (gates, adders, multipliers, etc.). The solution is to convert the precise floating-point numbers to less precise fixed-point numbers. In MATLAB this conversion process is called quantization and is done using the quantizer and quantize functions. The process introduces additional noise, called quantization noise, into the signal. Determining the optimum number of bits requires knowing the dynamic range of the signal at the various stages of the design.
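The core of the conversion can be sketched in a few lines: round each sample to the nearest multiple of the fixed-point LSB, 2^-frac_bits. The rounding error is the quantization noise, bounded by half an LSB. This mirrors a rounding fixed-point mode of MATLAB's quantizer/quantize, without the overflow/saturation handling a real flow also needs:

```python
# Floating- to fixed-point conversion ("quantization"): round to the
# nearest multiple of the LSB 2**-frac_bits.  The error introduced is
# quantization noise, bounded by half an LSB (2**-(frac_bits + 1)).
def quantize(x: float, frac_bits: int) -> float:
    lsb = 2.0 ** -frac_bits
    return round(x / lsb) * lsb

x = 0.30078
q = quantize(x, 8)                       # LSB = 1/256
print(q, abs(x - q) <= 2.0 ** -9)        # noise within half an LSB
```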
IMPLEMENTATION OF AN FPGA BASED VIDEO CAPTURE CARD
PRESENTED BY: SHEHRYAR
PROJECT AIM
• FPGA based video capture card
• Input from an analog VGA source
  – Resolution = 1024x768 pixels
  – Frame rate = 30 fps
  – Format = RGB (8:8:8)
  – Data rate = 566 Mbps
• Output through a USB 2.0 port
  – Interfacing VGA with USB
  – Data rate = 480 Mbps at hi-speed
PROJECT AIM (CONTD.)
• Video compression
  – Data rate reduction
  – Memory reduction
• Capture and storage of compressed video data on the PC
  – Standard video format
  – Playback through a media player
• Major parts of the project
  – Video capturing
  – USB interfacing
  – Video compression
VIDEO CAPTURE (CONTD.)
• DE-Gen PCORE
  – Data Enable signal
  – Generated from the HSYNC and VSYNC signals combined with front-porch and back-porch values
  – Output: fully captured video signal
UNIVERSAL SERIAL BUS (CONTD.)
[Block diagram: on the Spartan 3A DSP 3400 board, the MicroBlaze (with ILMB/DLMB BRAM) connects over the Processor Local Bus to a Host Port Interface into the Cypress CY7C67300 USB controller (RISC core and SIE), whose USB port connects to the PC's USB port.]
CHALLENGES
• Understanding of different languages
  – HDL
  – C/C++
• Learning of XPS
  – Integration of PCOREs
• Developing the driver
  – Integrating software with hardware
• Development of the USB controller
  – Difficult, as a RISC processor is involved
VIDEO COMPRESSION
• What is video
• What is compression
• Need for compression
  – Space requirements, storage constraints
  – Bandwidth requirements, channel constraints
VIDEO COMPRESSION (CONTD.)
• Spatial redundancies
  – Correlation between adjacent data points
  – Intra: within the frame
• Temporal redundancies
  – Correlation between different frames in a video
  – Inter: across the frames
  – Uses block-based motion compensation
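Block-based motion compensation, named above, can be sketched as a brute-force search: for a block in the current frame, find the offset in the reference frame that minimizes the sum of absolute differences (SAD). The frames below are toy lists of lists with a known shift; a real encoder searches 16x16 macroblocks over a larger window.

```python
# Sketch of block-based motion estimation via exhaustive SAD search.
def sad(cur, ref, bx, by, dx, dy, n):
    """Sum of absolute differences between an n x n block of `cur` at
    (bx, by) and the block of `ref` displaced by (dx, dy)."""
    return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
               for y in range(n) for x in range(n))

def best_motion_vector(cur, ref, bx, by, n, search=2):
    candidates = [(dx, dy) for dy in range(-search, search + 1)
                           for dx in range(-search, search + 1)]
    return min(candidates, key=lambda d: sad(cur, ref, bx, by, d[0], d[1], n))

# Toy frames: `cur` is `ref` shifted right by one pixel, so the best
# vector for an interior block is (-1, 0).
ref = [[x + 10 * y for x in range(8)] for y in range(8)]
cur = [[row[(x - 1) % 8] for x in range(8)] for row in ref]
print(best_motion_vector(cur, ref, 3, 3, 2))   # (-1, 0)
```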
VIDEO COMPRESSION (CONTD.)
• To encode a frame, each operation is performed at the macroblock (MB) level (an n x n block of pixels, n = 16)
• Intra-coded frame (I): every MB of the frame is coded using spatial redundancy
• Inter-coded frame (P): most of the MBs of the frame are coded exploiting temporal redundancy (from the past)
• Bi-predictive frame (B): most of the MBs of the frame are coded exploiting temporal redundancy from both the past and the future
• Group of Pictures (GOP): sequence of pictures between two I-frames
VIDEO COMPRESSION (CONTD.)
• Exploiting spatial redundancies
  – RGB to YCC
  – Chrominance vs. luminance
  – Chroma subsampling
  – Sensitivity of the human eye
• 8x8 blocks
• DCT2
  – Frequency domain
  – Real part of the FFT
  – DC and AC coefficients
  – Easy to implement
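The RGB-to-YCC step can be sketched with the standard BT.601 full-range conversion: the eye resolves luma (Y) detail much better than chroma (Cb, Cr), so the chroma planes can afterwards be subsampled (e.g. 4:2:0) with little visible loss. A small Python sketch:

```python
# BT.601 full-range RGB -> YCbCr: Y is a weighted luma sum; Cb and Cr
# are offset color-difference channels centered on 128.
def rgb_to_ycbcr(r, g, b):
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 255, 255))   # (255, 128, 128): white has no chroma
print(rgb_to_ycbcr(0, 0, 0))         # (0, 128, 128): black likewise
```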
VIDEO COMPRESSION (CONTD.)
• Exploiting spatial redundancies
• Zig-zag scan
  – Scan pattern
• Quantization
  – Quantization table
  – Quantization threshold value
  – Truncation of coefficients
  – High-frequency values approaching zero
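The zig-zag scan orders the 8x8 DCT coefficients from low to high frequency, so that after quantization the trailing (high-frequency) values are mostly zeros and run-length-encode well. A compact Python sketch generating the JPEG-style scan order:

```python
# (row, col) pairs in zig-zag order: walk the anti-diagonals (r + c),
# alternating direction on even/odd diagonals as in JPEG.
def zigzag_order(n=8):
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))

order = zigzag_order()
print(order[:6])   # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```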
VIDEO COMPRESSION (CONTD.)
• Exploiting spatial redundancies
• Run-length encoding
  – Consecutive coefficients with the same value
  – Assigning the number of repetitions of the same value
• Huffman encoding
  – Ordering by probability of occurrence
  – Huffman table
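Huffman encoding assigns shorter codes to more frequent symbols. A minimal Python sketch building a code table with a heap (a real codec would instead use the predefined tables of the chosen standard; single-symbol inputs are not handled here):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table for the symbols in `data`."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + p for s, p in c1.items()}
        merged.update({s: "1" + p for s, p in c2.items()})
        heapq.heappush(heap, (f1 + f2, i, merged))
        i += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
print(codes)   # 'a' (most frequent) gets the shortest code
```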