Hardware Implementation of a Digital Watermarking System for Video Authentication
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. IEEE Transactions on Circuits and Systems for Video Technology, Paper ID: 5318

Sonjoy Deb Roy, Xin Li, Yonatan Shoshan, Alexander Fish, Member, IEEE, and Orly Yadid-Pecht, Fellow, IEEE

Abstract—This paper presents a hardware implementation of a digital watermarking system that can insert invisible, semi-fragile watermark information into compressed video streams in real time. The watermark embedding is processed in the discrete cosine transform (DCT) domain. To achieve high performance, the proposed system architecture employs a pipeline structure and uses parallelism. A hardware implementation using a field programmable gate array (FPGA) has been done, and an experiment was carried out using a custom versatile breadboard for overall performance evaluation. Experimental results show that such a hardware-based video authentication system using this watermarking technique features minimum video quality degradation and can withstand certain potential attacks, i.e. cover-up attacks, cropping, and segment removal on video sequences. Furthermore, the proposed hardware-based watermarking system features low power consumption, low cost implementation, high processing speed, and reliability.

Index Terms—Digital video watermarking, hardware implementation, real-time data hiding, video authentication, VLSI.

I. INTRODUCTION

RECENTLY, the advances in electronic and information technology, together with the rapid growth of techniques for powerful digital signal and multimedia processing, have made the distribution of video data much easier and faster. However, concerns regarding authentication of digital video are mounting, since digital video sequences are very susceptible to manipulations and alterations using widely available editing tools. This issue becomes more significant when the video sequence is to be used as evidence. In such cases, the video data should be credible. Consequently, authentication techniques are needed in order to maintain authenticity, integrity, and security of digital video content. As a result, digital watermarking (WM), a data hiding technique, has been considered one of the key authentication methods. Digital watermarking is the process of embedding additional, identifying information within a host multimedia object, such as text, audio, image, or video. By adding a transparent watermark to the multimedia content, it is possible to detect hostile alterations as well as to verify the integrity and the ownership of the digital media.

Nowadays, digital video WM techniques are widely used in various video applications. For video authentication, WM can ensure that the original content has not been altered. WM is used in fingerprinting to track back a malicious user, and also in copy control systems with WM capability to prevent unauthorized copying. Because of its commercial potential, current digital WM techniques have focused on multimedia data and in particular on video content. Over the past few years, researchers have investigated the embedding process of visible or invisible digital watermarks into raw and uncompressed digital video, on both software and hardware platforms.

Contrary to still image WM techniques, new problems and new challenges have emerged in video WM applications. Shoshan et al. and Li et al. presented an overview of the various existing video WM techniques and showed their features and specific requirements, possible applications, benefits, and drawbacks.

The main objective of this article is to describe an efficient hardware-based concept of a digital video WM system which features low power consumption, efficient and low cost implementation, high processing speed, reliability, and invisible, semi-fragile watermarking in compressed video streams. It works in the discrete cosine transform (DCT) domain in real time. The proposed WM system can be integrated with a video compressor unit, and it achieves performance that matches complex software algorithms within a simple, efficient hardware implementation. The system also features minimum video quality degradation and can withstand certain potential attacks, i.e. cover-up attacks, cropping, and segment removal on video sequences. The above-mentioned design objectives were achieved via a combined parallel hardware architecture with a simplified design approach for each of the components. This enhances the suitability of this design approach to fit easily in devices that require high tampering resistance, such as surveillance cameras and video protection apparatus. The proposed WM system is implemented using the Verilog hardware description language (HDL), synthesized into a field programmable gate array (FPGA), and then experimented with using a custom versatile breadboard for performance evaluation.

The remainder of this paper is organized as follows. Section II provides a survey of previous related work on video WM technologies. The details of the proposed novel video WM system solution are described in Section III. Section IV presents the hardware architecture of the proposed video WM system, followed by a description of the FPGA-based prototyping of the hardware architecture. Section V discusses the experimental setup and verification methodology used to analyze the FPGA experimental results, followed by comparisons with existing approaches. Conclusions are presented in Section VI.

Manuscript received February 15, 2011; revised November 7, 2011; accepted April 13, 2012. This paper was recommended by Associate Editor Mladen Berekovic. S. D. Roy, X. Li, Y. Shoshan and O. Yadid-Pecht are with the Integrated Sensors Intelligent System (ISIS) Lab, Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada (e-mail: orly.firstname.lastname@example.org). A. Fish is now with the VLSI Systems Center, Ben-Gurion University, Beer-Sheva, Israel (e-mail: email@example.com).

Copyright (c) 2012 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending an email to firstname.lastname@example.org.
II. RELATED WORK ON VIDEO WATERMARKING SYSTEMS

A. Robustness Level of WM for Video Authentication

The level of robustness of a WM can be categorized into three main divisions: fragile, semi-fragile, and robust. A watermark is called fragile if it fails to be detectable after the slightest modification. A watermark is called robust if it resists a designated class of transformations. A semi-fragile watermark is one which is able to withstand certain legitimate modifications, but cannot resist malicious transformations. There is no absolute robustness scale, and the definition is very much dependent on the requirements of the application at hand, as well as the set of possible attacks. Different applications will have different requirements.

In copyright protection applications, the attacker wishes to remove the WM without causing severe damage to the image. This can be done in various ways, including digital-to-analog and analog-to-digital conversions, cropping, scaling, segment removal, and others. Robust WM is used in these applications so that the watermark remains detectable even after these attacks are applied, provided that the host image is not severely damaged. For image integrity applications, fragile watermarks are commonly used so that even the slightest change in the image can be detected. Most of the fragile WM methods perform the embedding of added information in the spatial domain.

Unlike fragile WM techniques, a semi-fragile watermark, such as that proposed in the present paper, is designed to withstand certain legitimate manipulations, i.e. lossy compression and mild geometric changes of images, but is capable of rejecting malicious changes, i.e. cropping, segment removal, etc. Furthermore, the semi-fragile approaches are generally processed in the frequency domain. Frequency-domain WM methods are more robust than spatial-domain techniques. In practical video storage and distribution systems, video sequences are stored and transmitted in a compressed format, and during compression the image is transformed from the spatial domain to the frequency domain. Thus, a watermark that is embedded and detected directly in the compressed video stream can minimize computationally demanding operations. Therefore, working on compressed rather than uncompressed video is beneficial for practical WM applications.

B. Watermark Implementations: Hardware vs. Software

A WM system can be implemented on either software or hardware platforms, or some combination of the two. In a software implementation, the WM scheme can simply be implemented in a PC environment. The WM algorithm's operations can be performed as machine code running on an embedded processor. By programming the code and making use of available software tools, it can be easy to design and implement any WM algorithm at various levels of complexity. Over the last decade, numerous software implementations of WM algorithms for relatively low data rate signals (like audio and image data) have been invented. While the software approach has the advantage of flexibility, computational limitations may arise when attempting to utilize these WM methods for video signals or in portable devices. Therefore, there is a strong incentive to apply hardware-based implementation for real-time WM of video streams. The hardware-level design offers several distinct advantages over the software implementation in terms of low power consumption, reduced area, and reliability. It enables the addition of a tiny, fast, and potentially cheap watermark embedder as a part of portable consumer electronic devices. Such devices can be digital cameras, camcorders, or other multimedia devices, where the multimedia data are watermarked at the origin. On the other hand, hardware implementations of WM techniques require flexibility in the implementation of both computational and design complexity. The algorithm must be carefully designed to minimize any susceptibility while maintaining a sufficient level of security.

Fig. 1. Overview of the proposed video WM system.

C. Past Research on Video Watermarking

In the past few years, a great deal of research effort has been focused on efficient WM system implementations using hardware platforms. For example, Strycker et al. proposed a well-known video WM scheme, called Just Another Watermarking System (JAWS), for television (TV) broadcast monitoring and implemented the system on a Philips Trimedia TM-1000 Very Long Instruction Word (VLIW) processor. The experimental results proved the feasibility of WM in a professional TV broadcast monitoring system. Mathai et al. presented an Application Specific Integrated Circuit (ASIC) implementation of the JAWS WM algorithm using 1.8V, 0.18μm complementary metal oxide semiconductor (CMOS) technology for real-time video stream embedding. With a core area of 3.53 mm² and an operating frequency of 75 MHz, that chip implemented watermarking of raw digital video streams at a peak pixel rate of over 3 Mpixels/s while consuming only 60 mW of power. A new real-time WM VLSI architecture for the spatial and transform domains was presented by Tsai and Wu. Maes et al. presented the
Millennium watermarking system for copyright protection of DVD video, and some specific issues such as watermark detector location and copy generation control were also addressed in their work. An FPGA prototype of HAAR-wavelet-based real-time video watermarking was presented by Jeong et al. A real-time video watermarking system using DSP and VLIW processors, which embeds the watermark using fractal approximation, was presented by Petitjean et al. Mohanty et al. presented a concept of a secure digital camera (SDC) with a built-in invisible-robust watermarking and encryption facility. Another watermarking algorithm and corresponding VLSI architecture, which inserts a broadcaster's logo (a visible watermark) into video streams in real time, was presented by the same group.

In general, digital WM techniques proposed so far for media authentication are usually designed to be visible, invisible-robust, or invisible-fragile watermarks, according to the level of required robustness. Each of the schemes is equally important due to its unique applications. In this work, however, we present the hardware implementation of an invisible semi-fragile watermarking system for video authentication. The motivation here is to integrate the video watermarking system with a surveillance video camera for real-time watermarking at the source end. Our work is the first semi-fragile watermarking scheme for video streams with a hardware architecture.

III. PROCEDURE FOR DIGITAL VIDEO WATERMARKING SYSTEM

In this section, a detailed description of the hardware architecture of the proposed digital video WM system is provided. Fig. 1 illustrates the general block diagram of the proposed system, which is comprised of four main modules: a video camera, a video compression unit, a watermark generation unit, and a watermark embedding unit.

The watermark embedding approach is designed to be performed in the DCT domain. This holds several advantages. DCT is used in the most popular still and video compression formats, including JPEG, MPEG, and H.26x. This allows the integration of both watermarking and compression into a single system. Compression is divided into three elementary phases: DCT transformation, quantization, and Huffman encoding. Embedding the watermark after quantization makes the watermark robust to DCT compression with a quantization of equal or lower degree than that used during the watermarking process. Another advantage of this approach is that in image or video compression the image or frames are first divided into 8x8 blocks. By embedding the WM specifically in each 8x8 block, tamper localization and better detection ratios are achieved.

Each of the video frames undergoes 8x8 block DCT and quantization. The frames are then passed to the watermark embedding module. The watermark generation unit produces specific watermark data for each video frame, based upon initial predefined secret keys. The watermark embedding module inserts the watermark data into the quantized DCT coefficients of each video frame according to the algorithm detailed below. Finally, the watermarked DCT coefficients of each video frame are encoded by the video compression unit, which outputs the compressed frame with embedded authentication watermark data.

A. Video Compression

Currently all popular standards for video compression, namely the MPEG-x (ISO standard) and H.26x (ITU-T standard) formats, use the same basic hybrid coding scheme that applies the principle of motion-compensated prediction and block-based transform coding using DCT. The MPEG-2 video compression standard is described below as a representative case for utilizing the WM algorithm with more advanced DCT-based compression methods.

Generally, a video sequence is divided into multiple groups of pictures (GOP), representing sets of video frames which are neighboring in display order. An encoded MPEG-2 video sequence is made up of two kinds of frame-encoded pictures: intra-frames (I-frames) and inter-frames (P-frames or B-frames). P-frames are forward prediction frames and B-frames are bidirectional prediction frames. Within a typical sequence of an encoded GOP, P-frames may be 10% of the size of I-frames, and B-frames are about 2% of the I-frames.

There can be two types of redundancy in video frames: temporal redundancy and spatial redundancy. The MPEG-2 video compression technique reduces these redundancies to compress the images. Within a GOP, the temporal redundancy among the video frames is reduced by applying temporal differential pulse code modulation (DPCM). The major video coding standards like H.261, H.263, MPEG-1, MPEG-2, MPEG-4, and H.264 are all based on the hybrid DPCM/DCT codec, which incorporates motion estimation and motion compensation functions, a transform stage, and an entropy encoder. As illustrated in Fig. 2, an input video frame Fn is compared with a reference frame (previously encoded) F'n-1, and a motion estimation function finds a region in F'n-1 that matches the current macro-block in Fn. The offset between the current macro-block position and the chosen reference region is a motion vector, dk. Based on this dk, a motion-compensated prediction F''n is generated and is then subtracted from the current macro-block to produce a residual or prediction error, e. For proper decoding, this motion vector dk has to be transmitted as well.

The spatial redundancy in the prediction error e (also called the displaced frame difference) of the predicted frames and the I-frame is reduced by the following operations: each frame is split into blocks of 8 × 8 pixels, which are compressed using the DCT followed by quantization (Q) and entropy coding (run-level coding and Huffman coding) (Fig. 2).

Hardware implementation of the MPEG-2 standard is not that simple. To simplify the implementation of the video compressor module, the Motion JPEG (MJPEG) video encoding technique rather than MPEG-2 can also be considered, and it was chosen in our experiment. The MJPEG standard was developed by the Xiph.org Foundation for Theora encoders (based on VP3 made by On2 Technologies) to compete efficiently with MPEG encoders.
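The per-block front end described above, an 8 × 8 DCT followed by quantization, and its inverse on the reconstruction path, can be modelled in software. The sketch below is an illustrative NumPy model, not the paper's Verilog implementation; the quantization table shown is the standard JPEG luminance table, since the paper does not specify its own.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix C, so that Y = C @ X @ C.T."""
    c = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            c[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    c *= np.sqrt(2.0 / n)
    c[0, :] /= np.sqrt(2.0)
    return c

C = dct_matrix()

# Illustrative quantization table (JPEG luminance); the paper's table is unspecified.
Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=float)

def block_dct_quant(block):
    """Forward 8x8 DCT and quantization of one pixel block (values 0..255)."""
    coeffs = C @ (block - 128.0) @ C.T       # level shift, then 2-D DCT
    return np.round(coeffs / Q).astype(int)  # quantized coefficients

def block_dequant_idct(q):
    """Inverse quantization and inverse DCT (reconstruction path)."""
    return C.T @ (q * Q) @ C + 128.0
```

In this model, watermark embedding would operate on the integer output of `block_dct_quant`, before entropy coding, which mirrors the paper's choice of embedding after quantization.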
The encoding process, performed on the raw data, is similar in both MPEG-2 and MJPEG. The only difference is the motion-compensated prediction, which is used in MPEG to encode the inter-frames (P and B frames).

B. Watermark Generation

Since simple watermark data can be easily cracked, it is essential that the primitive watermark sequence be encoded by an encipher. This ensures that the primitive watermark data is secured before being embedded into each video frame. Currently, there are different approaches to convert a primitive watermark into a secured pattern. In contrast to existing solution approaches, a novel video watermark generator is proposed. The WM generator generates a secure watermark sequence for each video frame using a meaningful primitive watermark sequence and secret input keys.

According to the recommendation by J. Dittman et al. for the features of a video watermark, a primitive watermark pattern can be defined as a meaningful identifying sequence for each video frame. As shown in Fig. 3, the unique meaningful watermark data for each video frame contains the time, date, camera ID, and frame serial number (which is related to its creation). This establishes a unique relationship of the video stream frames with the time instant, the specific video camera, and the frame number. Any manipulation, such as frame exchange, cut, or substitution, will be detected by the specific watermark. The corresponding N-bit (64-bit) binary valued pattern, ai, will be used as a primitive watermark sequence. This generates a different watermark for every frame (time-varying) because of the instantaneously changing serial number and time.

The block diagram of the proposed novel watermark generator is depicted in Fig. 4. A secure watermark pattern is generated by performing expansion, scrambling, and modulation on a primitive watermark sequence. There are two digital secret keys: Key 1 is used for scrambling and Key 2 is used for the RNG module, which generates a pseudo-random sequence.

Initially, the primitive binary watermark sequence ai (of 64 bits) is expanded (ai') and stored in a memory buffer. It is expanded by a factor cr. For example, if we use a 64-bit primitive watermark sequence, then for a 256 × 256 pixel video frame, cr will be (256 × 256)/(8 × 8), or 1024. This is done to meet the appropriate length for the video frame.

Scrambling is actually a sequence of XOR operations among the contents (bytes) of the expanded primitive WM in the buffer. Key 1 initiates the scrambling process by specifying two different addresses (Add1 and Add2) of the buffer between which the XOR operation is performed. The basic purpose of scrambling is to add complexity and encryption to the primitive watermark structure. After that, the expanded and scrambled sequence ci is obtained. The bit size of ci is the same as the size of the video frame.

Finally, the expanded and scrambled watermark sequence ci is modulated by a binary pseudo-random sequence to generate the secured watermark sequence wi. Due to the random nature of the pseudo-random sequence pi, modulation makes the watermark sequence ci a pseudo-random sequence and thus difficult to detect, locate, and manipulate. The secure pseudo-random sequence pi used for the modulation is generated by a random number generator (RNG) structure using Key 2. The RNG is based on a Gollmann cascade of filtered feedback with carry shift register (F-FCSR) cores, presented by Li et al.

Fig. 2. Block diagram of the hybrid DPCM/DCT coding scheme.
Fig. 3. Structure of the primitive watermark.
Fig. 4. Block diagram of the proposed watermark generator.
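The three generator stages above (expansion by cr, key-driven XOR scrambling, and modulation by a keyed pseudo-random sequence) can be sketched behaviourally as follows. This is a software model under stated assumptions: the F-FCSR-based RNG of Li et al. is replaced by Python's seeded `random` module purely for illustration, and the address schedule derived from Key 1 is one plausible scheme, not the paper's exact counter arrangement.

```python
import random

def expand(a, cr):
    """Repeat the 64-bit primitive sequence a (list of bits) cr times."""
    return a * cr

def scramble(a_exp, key1):
    """XOR cells of the expanded sequence at two key-derived addresses.
    The exact address schedule is not specified here; two counters
    starting from the two halves of key1 are an illustrative assumption."""
    buf = list(a_exp)
    n = len(buf)
    add1, add2 = key1 >> 16, key1 & 0xFFFF
    for _ in range(n):
        buf[add1 % n] ^= buf[add2 % n]
        add1, add2 = add1 + 1, add2 + 3
    return buf

def modulate(c, key2):
    """XOR with a keyed pseudo-random bit sequence (stand-in for the
    Gollmann-cascade F-FCSR RNG used in the hardware)."""
    rng = random.Random(key2)
    return [bit ^ rng.getrandbits(1) for bit in c]

def generate_watermark(a, key1, key2, frame_pixels=256 * 256):
    """Full generator chain: wi = modulate(scramble(expand(ai)))."""
    cr = frame_pixels // 64            # e.g. (256*256)/(8*8) = 1024
    return modulate(scramble(expand(a, cr), key1), key2)
```

As in the hardware, the output length matches the frame size (65536 bits for a 256 × 256 frame), the result is reproducible for a given key pair, and changing either key yields a different watermark pattern.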
Fig. 5. Dataflow of the proposed WM algorithm.

C. Watermark Embedding

Watermark embedding is done only in the I (intra) frames. This is understandable, since B and P frames are predicted from I frames. If all I, B, and P frames were watermarked, the watermarked data of the previous frame and that of the current frame might accumulate, resulting in visual artifacts (called "drift" or "error accumulation") during decoding procedures. To avoid such a major issue, within each GOP of the MPEG-2 video stream, only the I-frame is identified to be watermarked.

The watermarking algorithm should be hardware friendly in such a way that it can be implemented in hardware with high throughput. For this purpose, one concern for the algorithm development should be that it must support a pipelining architecture, so that two or more macro-blocks inside a single video frame, or more than one frame, can be watermarked simultaneously. This feature aids in increasing the processing speed of watermarking.

The watermark embedding approach used in this work was originally developed by Nelson et al. and Shoshan et al. This WM algorithm, capable of inserting a semi-fragile invisible watermark in a compressed image in the DCT frequency domain, was modified and then applied to watermarking of a video stream. In general, for each DCT block of a video frame, N cells need to be identified as "watermarkable" and modulated by the watermark sequence. The chosen cells contain non-zero DCT coefficient values and are found in the mid-frequency range. This algorithm is detailed by Shoshan et al. The proposed WM algorithm, along with the MPEG-2 video encoding standard, is presented as a flow chart in Fig. 5. It can be described briefly as follows:

1. Split the I frame and the watermark data into 8 × 8 blocks.
2. For each 8 × 8 block (both watermark data and I frame), perform DCT, quantization, and zig-zag scan to generate quantized DCT coefficients.
3. Identify N watermarkable cells for each block and calculate the modification value for each selected cell.
4. Modify the identified watermarkable DCT coefficients according to the modification values.
5. Perform inverse quantization and inverse DCT for each 8 × 8 block of watermarked coefficients to reconstruct the original I pixel values.
6. Buffer the reconstructed watermarked I frame.
7. Perform motion estimation for B/P frames to obtain the motion vector.
8. Using the motion vector and the reconstructed watermarked I frame, motion compensation is done.
9. The difference between the motion-compensated prediction frame and the watermarked reference frame I is the prediction error.
10. Perform DCT, quantization, and zig-zag scan on the prediction error.
11. Perform entropy coding for the blocks of the different frames.
12. Generate the compressed and watermark-embedded video stream.
13. To avoid heavy computationally demanding operations and to simplify the hardware implementation, watermarking can be done with an MJPEG-standard video compressing unit. Since the watermark is only embedded in I frames, the steps stated above remain the same for the MJPEG video standard, except for the motion estimation and motion compensation.

Fig. 6. Block diagram of the hardware system architecture.
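Steps 3-4 above, selecting N "watermarkable" cells and modifying them, can be illustrated in software. The exact selection and modification rules belong to Shoshan et al.'s algorithm, which this excerpt does not reproduce; the sketch below is a hedged stand-in that picks the first N non-zero quantized coefficients in an assumed mid-frequency span of the zig-zag order and forces each coefficient's parity to carry one watermark bit. Both the mid-band limits and the parity rule are illustrative assumptions, not the published scheme.

```python
import numpy as np

def zigzag_indices(n=8):
    """(row, col) pairs of an nxn block in zig-zag order, low to high frequency."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

MID_BAND = range(6, 28)   # assumed mid-frequency span in zig-zag order

def embed_block(q, wm_bits, n_cells=2):
    """Embed n_cells watermark bits into one 8x8 quantized DCT block.
    Returns the modified block, or None when the block is unmarkable
    (fewer than n_cells non-zero mid-frequency coefficients)."""
    order = zigzag_indices()
    cells = [order[k] for k in MID_BAND if q[order[k]] != 0][:n_cells]
    if len(cells) < n_cells:
        return None                      # homogeneous block: skipped, as in the paper
    out = q.copy()
    for (r, c), bit in zip(cells, wm_bits):
        v = out[r, c]
        if (abs(v) & 1) != bit:          # force coefficient parity to the bit
            v += 1 if v > 0 else -1      # move away from zero so the cell stays non-zero
        out[r, c] = v
    return out
```

The `None` return path mirrors the paper's handling of distinctively homogeneous blocks, which are left unmarked and disregarded by the detector.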
IV. HARDWARE ARCHITECTURE DESIGN

There exists a wide range of available techniques to implement the peripheral blocks of the proposed video WM system. Here, the focus is on simplifying the process as much as possible, thus making it fit easily within existing video processing circuitry. At the same time, the security level and video frame quality are kept high. An overall view of the hardware implementation of the video WM system is depicted in Fig. 6. Basically, the proposed system architecture includes six modules: video camera, video compressor, watermark generator, watermark embedder, control unit, and memory. The parts implemented on the FPGA are shown as shaded blocks.

The hardware implementation of the complete design is developed using Verilog HDL. As previously mentioned, all the processing in the implementation is assumed to be done on a block basis (such as 8 × 8 pixels). First, the captured video frame is temporarily stored in a memory buffer, and then each block of the frame data is continuously processed by the video compressor unit using the DCT and quantization cores. The watermark embedder block inserts an identifying message, generated using the watermark generator unit, into the selected block data within the video frame and sends it to memory for storage. The control unit is responsible for driving the operations of the modules and the data flow in the whole system.

Fig. 7. Hardware architecture of the MJPEG video compressor module with watermarking unit (Fori is the original frame, Fref is the reference frame).
Fig. 8. Hardware architecture of the watermark generator module.
Fig. 9. Hardware architecture of the watermark embedder module designed by Shoshan et al.

A. Video Compressor

For hardware implementation of the video compressor module, the Motion JPEG (MJPEG) video encoding technique rather than the MPEG-2 standard was chosen, because MJPEG offers several significant advantages over the MPEG-2
technique for hardware implementation. Even though MJPEG provides less compression than MPEG-2, the MJPEG format is easier to implement in hardware, with relatively low computational complexity and low power dissipation. Moreover, it is available for free use and modification, and it is now supported by many video players such as VLC Media Player and MPlayer. Furthermore, using the MJPEG video encoding technique has no effect on the watermark embedding procedure, since the intra-frames (chosen as watermarkable) have the same format in both compression standards. This is due to the same encoding process being performed on the raw data; the difference is motion-compensated prediction, which is used to encode the inter-frames. However, as described, only the intra-frames are identified to be watermarked, so the choice of standard does not affect the watermark embedding process.

Therefore, for our evaluation purposes, the MJPEG compression standard was found to be a better alternative than the MPEG-2 standard for the hardware implementation of the video compressor module. Fig. 7 depicts the hardware architecture of the MJPEG video compressor.

Depending on the type of the frames, raw video data is encoded either directly (intra-frame) or after the reference frame is subtracted (inter-frame). The encoded inter-frames (P) are passed to the decoder directly, while the intra-frames (I) are fed to the watermark embedder and, after WM embedding, are also passed to the decoder. The output of the decoder is combined with the reference frame data, which is delayed by the bypass buffer so that it arrives at the same time as the processed data, and the result is fed to the multiplexer. The multiplexer selects either the inverse DCT output (decoded intra-frames) or the sum of the inverse DCT output and the previous reference frame (decoded inter-frames). The multiplexer output is stored in the SRAM as reconstructed frames. Furthermore, the decoded inter-frames are stored in a memory buffer to be utilized as reference frames for the frames yet to be encoded. The second branch of the video data from the encoder block is fed to the watermark embedder module.

B. Watermark Generator

Fig. 8 describes the hardware architecture of the novel watermark generator. The expanding procedure is accomplished by providing consecutive clock signals, so that the expanded watermark sequence can be generated by reading out the primitive watermark sequence (ai) cr times. The expanded sequence (ai') is stored in a memory buffer. Scrambling is done using the secret digital key Key1, which has two parts. The two parts initialize two different counters. At each state of the counters, two readings (addressed by Add1 and Add2) are taken from the buffer and XORed with each other. Thus the scrambled watermark sequence ci is generated. Furthermore, different digital keys make the counters start from different states and generate different corresponding addresses, so that different patterns of ci can be obtained. A secure pseudo-random sequence pi, generated by the proposed Gollmann cascade Filtered Feedback with Carry Shift Register (F-FCSR) RNG and seeded with the secret key Key2, is used to modulate the expanded and scrambled watermark sequence ci. Finally, the generated secure watermark data wi is embedded into the video stream by the watermark embedder.

Fig. 10. State diagram of the controller for the WM system.

C. Watermark Embedder

A schematic view of the hardware architecture of the watermark embedder unit is presented in Fig. 9. As described by Shoshan et al., the watermark embedder works in two phases (calculation and embedding). When considering a cycle of embedding the watermark in one 8 × 8 block of pixels, each phase takes one block cycle. Two blocks are processed simultaneously in a pipelined manner, so that the embedding process only requires one block cycle. As the number of cells to be watermarked (N) in an 8 × 8 block increases, the security robustness of the algorithm also increases, but such an increase reduces the video frame quality because of the reduction in the originality of the video frame. Simulation results show that even for N as low as 2, performance measures such as the detection ratio or Peak Signal to Noise Ratio (PSNR) are satisfactory. A block that produces fewer than N cells is considered unmarked and is disregarded; only blocks that are distinctively homogeneous and have low values for the high-frequency coefficients are problematic. The details of the architecture of the watermark embedder module were presented by Shoshan et al.
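The watermark generator pipeline described above (expansion of the primitive sequence, counter-addressed scrambling, and modulation by a keyed pseudo-random sequence) can be sketched in software. This is an illustrative model, not the RTL: the 8-bit split of Key1, the counter behavior, and the use of Python's `random.Random` in place of the Gollmann cascade F-FCSR RNG are all assumptions.

```python
import random

def generate_watermark(primitive, cr, key1, key2):
    """Software sketch of the watermark generator:
    expansion -> scrambling -> modulation (all values are bits, 0/1)."""
    # Expansion: read out the primitive sequence a_i for cr times (a_i').
    expanded = primitive * cr
    n = len(expanded)

    # Scrambling: the two parts of Key1 seed two counters; at each step
    # two buffer readings (addresses Add1, Add2) are XORed to give c_i.
    add1 = key1 & 0xFF          # counter 1 start state (assumed 8-bit split)
    add2 = (key1 >> 8) & 0xFF   # counter 2 start state
    scrambled = []
    for _ in range(n):
        scrambled.append(expanded[add1 % n] ^ expanded[add2 % n])
        add1 += 1
        add2 += 1

    # Modulation: XOR with keyed pseudo-random bits p_i to obtain w_i.
    rng = random.Random(key2)   # stand-in for the F-FCSR RNG seeded with Key2
    return [c ^ rng.getrandbits(1) for c in scrambled]
```

With the same Key1 and Key2 the sequence is reproducible, which is what allows a software detector that shares the keys to regenerate the same wi.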
Fig. 11. FPGA-based implementation of the proposed video WM system.

D. Control Unit

The control unit generates control signals to synchronize the overall process in the whole system. As shown in Fig. 10, it is implemented as a finite state machine (FSM) with four main states.

• S0: In this initial state, all the modules stay idle until the signal Start is set high. Depending on the Select signal, different processing steps are applied to the video frames: when Select is 1, the control moves to state S1 for intra-frames; when Select is 0, it jumps directly to state S2 for inter-frames.

• S1: In this state, watermark embedding is performed. The intra-frame blocks read from memory and the generated watermark sequence are added together by activating the watermark embedder module. Once watermarking is completed for the block, the Blk signal is set to '1' and the FSM moves to state S2.

• S2: In state S2, the watermarked intra-frame data from the watermark embedder module or the unmodified inter-frame sequences from the memory are encoded. The signal Blk remains '0' until the encoding of the current block is completed. When finished, the encoded and watermarked blocks are fed to the next state, S3.

• S3: In this state, the watermarked and compressed video frame blocks are written back to the memory. When all the blocks of a frame have been encoded, the control signal Img changes to '1' and the FSM returns to state S0 to consider the next frame. If the encoding of all the blocks of the current frame is not finished, the system returns to state S1 or S2, depending on the type of the current frame (intra-frame or inter-frame).

E. FPGA-based Prototyping

Each module in the proposed digital video WM system, including the MJPEG compressor, the watermark generator and the watermark embedder, was implemented and tested individually, and the modules were then integrated to obtain the final system architecture. The proposed architecture was first modeled in Verilog HDL, and functional simulation of the HDL design was performed using Mentor's ModelSim tool. Finally, the design was synthesized to an Altera Cyclone EP1C20 FPGA device using the Altera Quartus II design software. The block diagram of the FPGA implementation of the whole system is shown in Fig. 11. The synthesis results provide the hardware resource usage of the units, shown in TABLE I.

V. EXPERIMENTAL RESULTS

A. Methodology for Verification

In order to evaluate the performance of the hardware-based video WM system properly, the algorithm was tested with two representative grey-scale video clips: a "dynamic" scene, which has significant high-frequency patterns, and a "static" scene, which has several homogeneous (low spatial frequency) areas. The video streams, at a rate of 25 frames/s and 256 × 256 pixels/frame, were captured by a surveillance video camera. For each video stream, a comparison was performed between two sets of experimental results: the original video stream vs. the MJPEG video stream, and the original video data vs. the watermarked video stream. The comparisons were quantified using the standard video quality metric, the Peak Signal to Noise Ratio (PSNR), a well-known quantitative measure in multimedia processing used to determine the fidelity of a video frame and the amount of distortion found in it, as suggested by Piva et al. and Strycker et al. The PSNR, measured in decibels (dB), is computed using Equation (1):

PSNR = 10 log10(255^2 / MSE)  (1)

MSE = (1/MN) * Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} [f(m,n) − k(m,n)]^2  (2)
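Equations (1) and (2) translate directly into code. The sketch below assumes 8-bit grey-scale frames given as nested lists; it is a plain reference implementation, not the measurement tool used in the experiments.

```python
import math

def mse(f, k):
    """Average mean-squared error between two M x N frames (Eq. 2)."""
    m, n = len(f), len(f[0])
    return sum((f[i][j] - k[i][j]) ** 2
               for i in range(m) for j in range(n)) / (m * n)

def psnr(f, k, peak=255):
    """Peak Signal to Noise Ratio in dB (Eq. 1); infinite for identical frames."""
    e = mse(f, k)
    return float('inf') if e == 0 else 10 * math.log10(peak ** 2 / e)
```

A constant error of 5 grey levels per pixel, for instance, gives MSE = 25 and a PSNR of roughly 34 dB, which puts the ~35 dB quality figures reported for the watermarked frames in context.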
Fig. 12. Examples of watermarked video streams. (a) Original video frame. (b) MJPEG video frame. (c) Watermarked video frame.

Fig. 13. PSNR comparisons. (a) Static scene. (b) Dynamic scene.

Here, 255 is the maximum pixel value in the grey-scale image and MSE is the average mean-squared error, as defined in Equation (2), where f and k are the two compared images, each of size M × N pixels (256 × 256 pixels in our experiment).

B. Analysis of Experimental Results

The results of the experiment performed with the implemented system are presented in Fig. 12, which contains the original sample frames, the compressed frames and the watermarked frames. The presented results are achieved for the case in which only two DCT coefficients in each block are changed. In terms of PSNR, the quality of the watermarked frames is maintained above ~35 dB, measured consistently, with no visually perceptible artifacts. Moreover, the quality of the video with the watermark is comparable to that generated by software-based algorithms.

Three sets of video groups of pictures (GOPs), each consisting of three frames (one I and two P frames), were tested for both the static and the dynamic scene. As described above, the inter-frames are encoded based on the intra-frames, and thus the PSNR values of the intra-frames are expected to be higher than those of the inter-frames, as demonstrated in Fig. 13(a) and Fig. 13(b). That is why the 'hill' behavior appears at the 1st, 4th and 7th frames, which are the I-frames. On the other hand, watermark embedding did not degrade the quality of the video streams compared to the compressed video without the watermark embedded (Fig. 13). Therefore, the proposed hardware implementation of the video watermarking system features minimum degradation of video quality.
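As an illustration of what changing only two DCT coefficients per block involves, the sketch below forces the parity of two mid-band quantized coefficients of an 8 × 8 block to carry two watermark bits. The cell positions and the parity rule are hypothetical stand-ins chosen for this example; the actual embedding rule of the system is the one described by Shoshan et al.

```python
def embed_bits(block, wm_bits, cells=((2, 1), (1, 2))):
    """Force the parity of N = 2 selected quantized DCT coefficients in an
    8x8 block to match the watermark bits (illustrative rule only)."""
    out = [row[:] for row in block]          # leave the input block untouched
    for (r, c), bit in zip(cells, wm_bits):
        if (out[r][c] & 1) != bit:
            out[r][c] += 1                   # a unit change flips the parity
    return out

def extract_bits(block, cells=((2, 1), (1, 2))):
    """Read the watermark bits back as the parities of the same cells."""
    return [block[r][c] & 1 for r, c in cells]
```

Because only two small mid-band coefficients move by at most one quantization step, the visual impact per block stays minimal, consistent with the PSNR results above.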
TABLE I
FPGA SYNTHESIS REPORT

Components | Logic Cells | Memory bits | Initial Latency (cycles) | Power consumption
MJPEG Video Compressor | 8822 | 13540 | 188 | 260 mW
Watermark Generator | 309 | 512 | 64 | 10 mW
Watermark Embedder | 132 | 744 | 64 | -
Overall Architecture | 9263 | 14796 | 320 | 270 mW

Fig. 14. Timing diagram of a 256 × 256 frame processing (I frame). MB stands for a macro-block of 8 × 8 pixels.

TABLE II
COMPARISONS WITH OTHER VIDEO WM CHIPS

Research Works | Design Type | Type of WM | Video Standard | Processing Domain/Method | Logic Cells (Kilo gates) | PSNR | Clock Frequency and Processing Speed
Strycker et al. | DSP board | Invisible-Robust | - | Spatial | N/A | N/A | 100 MHz
Mathai et al. | Custom IC (0.18 µm) | Invisible-Robust | - | Wavelet | N/A | 40 dB | 75 MHz @ 30 fps (320 × 320)
Tsai and Wu | Custom IC | Robust | Raw video | Spatial | N/A | N/A | N/A
Maes et al. | FPGA / Custom IC | Robust | MPEG | Spatial | 17 / 14 | N/A | N/A
Petitjean et al. | FPGA board / DSP board | Invisible-Robust | MPEG | Fractal | N/A | N/A | 50 MHz, 6 µs / 250 MHz, 118 µs
Mohanty et al. | FPGA | Visible | MPEG-4 | DCT | 28.3 | Around 30 dB | 100 MHz @ 43 fps (320 × 240)
This Work | FPGA | Semi-fragile | MJPEG | DCT | 9.263 | 44 dB | 40 MHz @ 607 fps (256 × 256); 40 MHz @ 130 fps (640 × 480)

C. Performance Analysis

The performance of the overall FPGA implementation was evaluated in terms of hardware cost, power consumption, processing speed and security.

1) Hardware Cost and Power Consumption

The hardware resources used by the different modules are given in TABLE I. The results clearly indicate that the addition of the watermark generator and embedder modules caused only a 4.99% increase in logic cell usage and a 9.28% increase in memory resources relative to the hardware of the video compressor. The combined system easily fits in the original FPGA device. In general, any device that is large enough for the implementation of the MJPEG video compressor would be able to accommodate the additional hardware required for the watermark generator and embedder blocks.

Power consumption is an important concern in the hardware implementation of a VLSI system and, as a system constraint, needs to be kept as low as possible. Using the built-in power analyzer of Altera Quartus II, the power consumption of our WM system was measured: the MJPEG video compressor with the watermarking unit consumes 270 mW, whereas the MJPEG video compressor alone consumes 260 mW. Thus, at a cost of only 10 mW, the video watermarking unit (watermark generator and watermark embedder) can be integrated with the MJPEG video compressor unit.

2) Processing Speed

The data of each frame is processed macro-block (8 × 8) wise. Each macro-block passes through the DCT module first, and then through the WM embedder and the inverse DCT module, respectively. From the timing diagram of Fig. 14 the processing time of a single I frame can be determined. The first 8 × 8 macro-block of a frame takes 320 clock cycles; each of the following 8 × 8 blocks takes 64 cycles (the throughput). The WM generator takes 64 clock cycles to generate the WM data, which is completed within the first 64 clock cycles of the frame processing period; the WM data is then stored in a WM buffer. Hence the WM generator does not contribute to the initial latency of the frame processing period. The pipelining architecture and the parallelism of the designed system helped in achieving this high throughput after the initial latency state. Processing a P frame requires less time, as P frames are not watermarked.
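The 320-cycle initial latency and 64-cycle per-block throughput quoted above give a simple closed-form cycle count per I frame, which can be checked numerically (a 40 MHz clock is assumed, as in the paper):

```python
def i_frame_cycles(width, height, latency=320, throughput=64):
    """Clock cycles to watermark one I frame: the first 8x8 macro-block
    costs `latency` cycles, each remaining macro-block `throughput` cycles."""
    blocks = (width * height) // (8 * 8)
    return latency + (blocks - 1) * throughput

def frame_time_s(width, height, clock_hz=40e6):
    """Per-frame processing time in seconds at the given clock frequency."""
    return i_frame_cycles(width, height) / clock_hz
```

For 256 × 256 frames this gives 65792 cycles, about 1.6 ms per frame at 40 MHz, and for 640 × 480 frames about 7.7 ms, consistent with the ~607 frames/s and ~130 frames/s figures reported in the paper.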
If we consider a video frame of N × M pixel resolution, the time required to watermark one frame of the video is given by Equation (3):

T = [Latency + (NM/(8 × 8) − 1) × Throughput] × 1/clock_frequency  (3)

For the present case, in which real-time video streams of 256 × 256 pixels/frame are used, watermarking one frame takes 65792 clock cycles. Thus, the processing time per frame is 1.6 ms (607 frames/s) at a clock frequency of 40 MHz. For video streams of 640 × 480 pixels/frame, the processing time would be 7 ms/frame (130 frames/s). This means that if this watermarking system is employed in practical applications (such as a surveillance video system) with an input video of 30 frames/s, the implemented watermark system can watermark the video stream in real time. One reason for this high frame rate is the pipelined architecture of the watermarking system together with the video encoder. Another reason is that the encoded video follows the MJPEG standard, which requires less processing time than the MPEG standard. If the proposed system were integrated with an MPEG encoder, the processing time would be higher only because of the MPEG encoding process, as the watermark embedding process would remain the same.

3) Robustness Analysis

It is important to understand that the proposed WM system embeds the watermark into MJPEG video frames and enables the detection of tampered video frames. The detection is done in software, using the same algorithm utilized for the watermark embedding process. As stated before, the WM system designed in this paper uses an invisible semi-fragile watermark, which withstands certain legitimate manipulations, i.e. lossy compression and mild geometric changes of images, but rejects malicious changes, i.e. cover-up attacks, cropping, segment removal, etc.

To prove the robustness of the watermark, two sample video sequences were watermarked according to the proposed algorithm. The cover-up attack was applied to the watermarked sample videos. The tampered video is analyzed by the watermark detector, which outputs a detection map indicating which blocks of the image are suspected of being inauthentic.

The results of the tamper detection on the video sequences are presented in Fig. 15 and Fig. 16; only one frame of each video sample is shown as an example. The following tampering was done in video sequence 1: a) the book in the hand of the person was removed by copying the contents of adjacent blocks onto the blocks where the book was in the original frame. To an innocent observer, the original existence of the book in the video frames is visually undetectable. b) The segment containing the electrical outlet on the wall under the whiteboard was also removed, so the electrical outlet is likewise undetectable in the video frames. In video sequence 2, the white horse was covered up by a black horse, so to an innocent observer it seems that there was no white horse in the frame. The detection map successfully indicates the blocks of the tampered video frame that are suspected of being inauthentic.

Fig. 15. Tamper detection of video sample 1: (a) Original video. (b) Watermarked video. (c) Tampered video. (d) Detected video.

Fig. 16. Tamper detection of video sample 2: (a) Original video. (b) Watermarked video. (c) Tampered video. (d) Detected video.
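The block-wise check that produces the detection map can be sketched as follows. This is an assumed reconstruction: the software detector shares the keys and algorithm with the embedder, so it can regenerate the expected watermark bits per block and flag every block whose extracted bits disagree. The flat per-block list layout and the tuple comparison are illustrative choices, not the paper's exact data layout.

```python
def detection_map(expected, extracted, blocks_per_row):
    """Build a 2-D map of suspect blocks: True where the watermark bits
    extracted from a block disagree with the regenerated (expected) bits."""
    flags = [e != x for e, x in zip(expected, extracted)]
    # reshape the flat per-block flags into rows of blocks
    return [flags[i:i + blocks_per_row]
            for i in range(0, len(flags), blocks_per_row)]
```

Overlaying the True cells on the frame yields a map like those in Fig. 15(d) and Fig. 16(d), localizing the tampered regions block by block.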
Both modifications are easily noticed using the detection map created by the watermark detector, presented in Fig. 15(d) and Fig. 16(d).

4) Security Issues

The security of the watermark depends mainly on the statistical properties of the pseudo-random sequence generated by the Random Number Generator (RNG) module, since the primitive watermark is modulated by this sequence. Hence the pseudo-random sequence should have statistical properties as close as possible to those of a true random sequence. A true random binary sequence has an equal distribution of 1's and 0's. Another indication of the randomness of a sequence is its auto-correlation values; this feature is crucial for the resistance of the sequence to correlation attacks.

In order to evaluate the statistical quality of the pseudo-random number sequence generated by the Gollmann cascade F-FCSR RNG module, two tests were carried out. First, the auto-correlation values of the proposed Gollmann cascade F-FCSR RNG were compared with those of an F-FCSR RNG of similar size, which is well studied and known to have good statistical properties. The second evaluation was done using the statistical test suite available from the National Institute of Standards and Technology (NIST). NIST provides a comprehensive tool to verify the statistical quality of a pseudo-random sequence through various tests, which check the properties of the sequence and compare the results with the expected values for a true random sequence. The tests were all successful and showed that the proposed RNG meets the standards set by other published RNGs, while providing a simple design methodology.

5) Comparison with Existing Research

The proposed hardware-based video WM authentication features minimum video frame quality degradation, with no visually perceptible artifacts and high PSNR values. Moreover, the results are comparable to those generated by software-based algorithms, while the complexity of the proposed algorithm is lower, since only the intra-frames are modified and the results shown are achieved with only two DCT coefficients in each 8 × 8 block being changed. Furthermore, the watermark embedding, which is designed to be performed directly on the compressed video streams, minimizes computationally demanding operations.

TABLE II presents a comparative perspective with other published hardware-based video WM systems. The proposed WM system is the first that employs an invisible semi-fragile watermark approach in the frequency domain (DCT) for video streams, with fewer logic gates and higher processing speed than other works.

VI. CONCLUSION AND FUTURE RESEARCH

A. Summary and Conclusions

A first design of the hardware architecture of a novel digital video watermarking system to authenticate video streams in real time is presented in this paper, and an FPGA-based prototype of the hardware architecture has been developed. The proposed system is suitable for implementation on an FPGA and can also be used as part of an ASIC; in the current implementation, an FPGA is the simple and available route to a proof of concept. The implementation enables integration with peripheral video devices (such as surveillance cameras) to achieve real-time image data protection. The aim of this work was to achieve three objectives: first, to propose a new hardware architecture of a digital watermarking system for video authentication and make it suitable for VLSI implementation; second, to ensure that the watermarking algorithm achieves a certain level of security to withstand certain potential threats; and third, to make the watermarking system suitable for real-time video and easily adaptable to commonly used digital video compression standards with minor video frame degradation.

In contrast to existing solutions, where robust WM algorithms are mainly used, a semi-fragile WM system for video authentication has been developed in this work. The proposed watermark system is capable of watermarking video streams in the DCT domain in real time. It was also demonstrated that the designed system achieves the required security level with minor video frame quality degradation.

B. Future Research

Future research should concentrate on applying the watermarking algorithm to other modern video compression standards such as MPEG-4/H.264, so that it can be utilized in various commercial applications as well. Embedding the watermark information in high-resolution video streams in real time is another challenge.

ACKNOWLEDGMENT

The authors would like to thank Dr. Efraim Pecht, from Technologies and Beyond and the University of Calgary, for his constructive suggestions and helpful advice.

REFERENCES

[1] V. M. Potdar, S. Han, and E. Chang, "A survey of digital image watermarking techniques," in Proc. IEEE Int. Conf. Industrial Informatics, Perth, Australia, Aug. 2005, pp. 709–716.
[2] A. D. Gwenael and J. L. Dugelay, "A guide tour of video watermarking," Signal Process. Image Commun., vol. 18, no. 4, pp. 263–282, Apr. 2003.
[3] A. Piva, F. Bartolini, and M. Barni, "Managing copyright in open networks," IEEE Internet Computing, vol. 6, no. 3, pp. 18–26, May/June 2002.
[4] Y. Shoshan, A. Fish, X. Li, G. A. Jullien, and O. Yadid-Pecht, "VLSI watermark implementations and applications," Int. J. Information Technologies and Knowledge, vol. 2, no. 4, pp. 379–386, June 2008.
[5] X. Li, Y. Shoshan, A. Fish, G. A. Jullien, and O. Yadid-Pecht, "Hardware implementations of video watermarking," International Book Series on Information Science and Computing, no. 5, pp. 9–16, June 2008.
[6] I. J. Cox, J. Kilian, F. T. Leighton, and T. Shamoon, "Secure spread spectrum watermarking for multimedia," IEEE Trans. Image Process., vol. 6, no. 12, pp. 1673–1687, Dec. 1997.
[7] S. P. Mohanty, "Digital watermarking: a tutorial review," Dept. of Computer Eng., Univ. South Florida, 1999. [Online]. Available: http://www.linkpdf.com/download/dl/digital-watermarking-a-tutorial-review-.pdf
[8] F. Hartung and B. Girod, "Watermarking of uncompressed and compressed video," IEEE Trans. Signal Process., vol. 66, no. 3, pp. 283–302, May 1998.
[9] T. L. Wu and S. F. Wu, "Selective encryption and watermarking of MPEG video," in Proc. Int. Conf. Image Science, Systems, and Technology (CISST'97), Las Vegas, NV, USA, Jun. 1997.
[10] A. Shan and E. Salari, "Real-time digital video watermarking," in Digest of Technical Papers, Int. Conf. Consumer Electronics, June 2002, pp. 12–13.
[11] L. Qiao and K. Nahrstedt, "Watermarking methods for MPEG encoded video: towards resolving rightful ownership," in Proc. IEEE Int. Conf. Multimedia Computing and Systems, June 1998, pp. 276–285.
[12] L. D. Strycker, P. Termont, J. Vandewege, J. Haitsma, A. Kalker, M. Maes, and G. Depovere, "Implementation of a real-time digital watermarking process for broadcast monitoring on a Trimedia VLIW processor," Proc. Inst. Elect. Eng. Vision, Image Signal Process., vol. 147, no. 4, pp. 371–376, Aug. 2000.
[13] N. J. Mathai, A. Sheikholeslami, and D. Kundur, "Hardware implementation perspectives of digital video watermarking algorithms," IEEE Trans. Signal Process., vol. 51, no. 4, pp. 925–938, Apr. 2003.
[14] N. J. Mathai, A. Sheikholeslami, and D. Kundur, "VLSI implementation of a real-time video watermark embedder and detector," in Proc. Int. Symp. Circuits and Systems, May 2003, vol. 2, pp. 772–775.
[15] T. H. Tsai and C. Y. Wu, "An implementation of configurable digital watermarking systems in MPEG video encoder," in Proc. Int. Conf. Consumer Electronics, Jun. 2003, pp. 216–217.
[16] M. Maes, T. Kalker, J. P. Linnartz, J. Talstra, G. Depovere, and J. Haitsma, "Digital watermarking for DVD video copy protection," IEEE Signal Process. Mag., vol. 17, no. 5, pp. 47–57, Sep. 2000.
[17] X. Wu, J. Hu, Z. Gu, and J. Huang, "A secure semi-fragile watermarking for image authentication based on integer wavelet transform with parameters," in Proc. Australian Workshop on Grid Computing and E-Research, Newcastle, NSW, Australia, 2005, vol. 44, pp. 75–80.
[18] ISO/IEC 13818-2:1996(E), "Information technology – generic coding of moving pictures and associated audio information," International Standard, 1996.
[19] K. Jack, Video Demystified: A Handbook for the Digital Engineer, 2nd ed. Eagle Rock, VA: LLH Technology Publishing, 2001.
[20] I. E. G. Richardson, H.264 and MPEG-4 Video Compression. England: John Wiley and Sons, 2003.
[21] F. Bartolini, M. Barni, A. Tefas, and I. Pitas, "Image authentication techniques for surveillance applications," Proc. IEEE, vol. 89, no. 10, pp. 1403–1418, Oct. 2001.
[22] J. Dittmann, T. Fiebig, R. Steinmetz, S. Fischer, and I. Rimac, "Combined video and audio watermarking: embedding content information in multimedia data," in Proc. SPIE Security and Watermarking of Multimedia Contents II, vol. 3971, Jan. 2000, pp. 455–464.
[23] X. Li, Y. Shoshan, A. Fish, and G. A. Jullien, "A simplified approach for designing secure random number generators in HW," in Proc. IEEE Int. Conf. Electronics, Circuits, and Systems, Aug. 2008, pp. 372–375.
[24] G. R. Nelson, G. A. Jullien, and O. Yadid-Pecht, "CMOS image sensor with watermarking capabilities," in Proc. IEEE Int. Symp. Circuits and Systems (ISCAS'05), Kobe, Japan, May 2005, vol. 5, pp. 5326–5329.
[25] Y. Shoshan, A. Fish, G. A. Jullien, and O. Yadid-Pecht, "Hardware implementation of a DCT watermark for CMOS image sensors," in Proc. IEEE Int. Conf. Electronics, Circuits, and Systems, Aug. 2008, pp. 368–371.
[26] Theora.org. [Online]. Available: http://www.theora.org/
[27] A. Filippov, "Encoding high-resolution Ogg/Theora video with reconfigurable FPGAs," Xcell Journal, Second Quarter 2005. [Online]. Available: http://www.xilinx.com/publications/xcellonline/xcell_53/xc_pdf/xc_video53.pdf
[28] VLC Media Player features. [Online]. Available: http://www.videolan.org/vlc/features.php?cat=video
[29] MPlayer. [Online]. Available: http://www.mplayerhq.hu/design7/info.html
[30] Theora software players. [Online]. Available: http://wiki.xiph.org/index.php/TheoraSoftwarePlayers
[31] B. Schneier, Applied Cryptography, 2nd ed. New York: Wiley, 1996.
[32] F. Petitcolas, R. J. Anderson, and M. G. Kuhn, "Information hiding – a survey," Proc. IEEE, vol. 87, no. 7, pp. 1062–1078, 1999.
[33] F. Arnault, T. Berger, and A. Necer, "A new class of stream ciphers combining LFSR and FCSR architectures," in Advances in Cryptology – INDOCRYPT 2002, Lecture Notes in Computer Science, no. 2551. Springer-Verlag, 2002, pp. 22–33.
[34] NIST, "A statistical test suite for the validation of random number generators and pseudo random number generators for cryptographic applications." [Online]. Available: http://csrc.nist.gov/groups/ST/toolkit/rng/documentation_software.html
[35] Digital watermarking. [Online]. Available: http://en.wikipedia.org/wiki/Digital_watermarking
[36] Motion JPEG. [Online]. Available: http://en.wikipedia.org/wiki/Motion_JPEG
[37] Y.-J. Jeong, K.-S. Moon, and J.-N. Kim, "Implementation of real time video watermark embedder based on Haar wavelet transform using FPGA," in Proc. Second Int. Conf. Future Generation Communication and Networking Symposia (FGCNS), 2008, pp. 63–66.
[38] G. Petitjean, J. L. Dugelay, S. Gabriele, C. Rey, and J. Nicolai, "Towards real-time video watermarking for systems-on-chip," in Proc. IEEE Int. Conf. Multimedia and Expo, 2002, vol. 1, pp. 597–600.
[39] S. P. Mohanty, "A secure digital camera architecture for integrated real-time digital rights management," Journal of Systems Architecture, vol. 55, no. 10–12, pp. 468–480, Oct.–Dec. 2009.
[40] S. P. Mohanty and E. Kougianos, "Real-time perceptual watermarking architectures for video broadcasting," Journal of Systems and Software, vol. 84, no. 5, pp. 724–738, May 2011.
[41] S. Saha, D. Bhattacharyya, and S. K. Bandyopadhyay, "Security on fragile and semi-fragile watermarks authentication," International Journal of Computer Applications, vol. 3, no. 4, pp. 23–27, June 2010.

BIOGRAPHIES

Sonjoy Deb Roy received the B.Sc. degree in Electrical and Electronic Engineering from Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, in 2009. He is currently pursuing his M.Sc. at the ISIS laboratory in the Department of Electrical and Computer Engineering, University of Calgary, Canada. Sonjoy's research involves the hardware implementation of a secured digital watermarking system for image and video authentication.

Xin Li received the B.Eng. degree in Electrical Engineering from Nantong University, China. He completed his M.Sc. in 2010 at the ISIS laboratory at the University of Calgary, Canada. Xin's research involved digital watermarking system design for image and video.

Yonatan Shoshan received the B.Sc. degree in Electrical Engineering from Ben-Gurion University, Beer-Sheva, Israel, in 2007. He completed his M.Sc. in 2009 at the ATIPS laboratory at the University of Calgary, Canada. Currently, he is with Texas Instruments Inc. in Raanana, Israel. Yonatan's research involved smart CMOS image sensors and watermarking.

Alexander Fish received the B.Sc. degree in Electrical Engineering from the Technion, Israel Institute of Technology, Haifa, Israel, in 1999. He completed his M.Sc. in 2002 and his Ph.D. (summa cum laude) in 2006 at Ben-Gurion University in Israel. He was a postdoctoral fellow in the ATIPS laboratory at the University of Calgary (Canada) from 2006 to 2008. In 2008 he joined Ben-Gurion University in Israel as a faculty member in the Electrical and Computer Engineering Department. Dr. Fish's research interests include low-voltage digital design, energy-efficient SRAM and Flash memory arrays, and low-power CMOS image sensors. He has authored over 60 scientific papers and patent applications, and has also published two book chapters. Dr. Fish
serves as an Editor-in-Chief of the MDPI Journal of Low Power Electronics and Applications (JLPEA) and as an Associate Editor of the IEEE Sensors Journal. Dr. Fish is an IEEE member.

Orly Yadid-Pecht received her B.Sc. from the Electrical Engineering Department at the Technion – Israel Institute of Technology. She completed her M.Sc. in 1990 and her D.Sc. in 1995, also at the Technion. She was a National Research Council (USA) research fellow from 1995 to 1997 in the area of advanced image sensors at the Jet Propulsion Laboratory (JPL)/California Institute of Technology (Caltech). In 1997 she joined Ben-Gurion University as a member of the Electrical and Electro-Optical Engineering departments, where she founded the VLSI Systems Center, specializing in CMOS image sensors. From 2003 to 2005 she was affiliated with the ATIPS laboratory at the University of Calgary, Canada, promoting the area of integrated sensors. Since 2009 she has been the iCORE Professor of Integrated Sensors, Intelligent Systems (ISIS) at the University of Calgary. Dr. Yadid-Pecht's main subjects of interest are integrated CMOS sensors, smart sensors, image processing hardware, and micro and biomedical system implementations. She has published over a hundred papers and patents and has led over a dozen research projects supported by government and industry. In addition, she has co-authored and co-edited the first book on CMOS image sensors, "CMOS Imaging: From Photo-transduction to Image Processing," published in 2004. She also serves as a director on the board of two companies. Dr. Yadid-Pecht has served on several IEEE Transactions editorial boards, was the general chair of the IEEE International Conference on Electronic Circuits and Systems (ICECS), and is a current member of the steering committee of that conference. Dr. Yadid-Pecht is an IEEE Fellow.