This paper addresses motion estimation in the H.264/AVC encoder. Compared with standards such as MPEG-2 and MPEG-4 Visual, H.264 can deliver better image quality at the same compressed bit rate, or the same quality at a lower bit rate. The increase in compression efficiency comes at the expense of increased complexity, which must be overcome. An efficient co-design methodology is required, in which the encoder software is highly optimized and structured in a modular, efficient manner, so that its most complex and time-consuming operations can be offloaded to dedicated hardware accelerators. The motion estimation algorithm, the most computationally intensive part of the encoder, is simulated using MATLAB. Hardware/software co-simulation is performed with the System Generator tool, and the design is implemented on a Xilinx Spartan-3E FPGA for different scanning methods.
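As a sketch of what the full-search motion estimation kernel computes, the following Python (not the paper's MATLAB code; the block size, search range, and raster scanning order are illustrative assumptions) matches each block of the current frame against candidate positions in the reference frame using the sum of absolute differences (SAD):

```python
import numpy as np

def full_search_me(ref, cur, block=8, search=4):
    """Exhaustive block-matching motion estimation with the SAD
    criterion. Returns one motion vector (dy, dx) per block of the
    current frame. Hypothetical helper, for illustration only."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate lies outside the frame
                    ref_blk = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(cur_blk - ref_blk).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors
```

The fast scanning methods the paper compares differ only in which candidate positions the inner loops visit; full search visits them all.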
Implementation of area efficient high speed EDDR architecture - Kumar Goud
Abstract-This project presents an EDDR (error detection and data recovery) design, based on the residue-and-quotient (RQ) code, to embed into motion estimation (ME) for video coding testing applications. Errors in processing elements (PEs), i.e., the key components of an ME, can be detected and recovered effectively by the EDDR design. The proposed EDDR design for ME testing can detect errors and recover data with an acceptable area overhead and timing penalty. Functional verification and synthesis are carried out with Xilinx ISE. Compared with the existing design, the implemented design reduces both area and timing.
Index Terms—Area overhead, data recovery, error detection, reliability, residue-and-quotient (RQ) code, Xilinx ISE
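The RQ-coding idea behind the EDDR design can be illustrated with a small numerical model. The modulus, the fault-injection mask, and predicting the expected code by recomputing |a - b| in software are simplifying assumptions; in hardware the expected RQ code is produced by separate modular circuitry:

```python
M = 2 ** 6 - 1   # modulus m = 63; a typical RQ-code choice (assumption)

def rq_encode(x):
    """Split x into its quotient and residue: x = q*M + r."""
    return divmod(x, M)

def pe_sad(a, b, fault_mask=0):
    """One processing element computing |a - b|; fault_mask flips
    output bits to emulate a hardware fault."""
    return abs(a - b) ^ fault_mask

def check_and_recover(a, b, fault_mask=0):
    """Compare the RQ code recomputed from the PE output against the
    code predicted from the inputs; on mismatch, rebuild the value
    as q*M + r."""
    exp_q, exp_r = rq_encode(abs(a - b))
    out = pe_sad(a, b, fault_mask)
    error = rq_encode(out) != (exp_q, exp_r)
    recovered = exp_q * M + exp_r if error else out
    return error, recovered
```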
This document describes a project to design an H.264 video decoder using Verilog. It implements the key decoding blocks like Context-Based Adaptive Binary Arithmetic Coding (CABAC), inverse quantization, and inverse discrete cosine transform. CABAC is the entropy decoding method used in H.264 that is computationally intensive. The project develops hardware modules for these blocks to accelerate decoding and enable real-time performance. It presents the designs of the individual modules and simulation results showing their functionality. The goal is to improve on software implementations by using dedicated hardware for the critical decoding stages.
The document discusses a built-in self-detection and correction (BISDC) architecture for motion estimation computing arrays (MECAs) used in video coding systems. It proposes using biresidue error correcting codes to detect and correct single bit errors in each processing element of the MECA with minimal area overhead. The BISDC architecture achieves online error detection and correction through test code generation, detection and selection circuits, syndrome decoding, and error correction circuits. Performance analysis shows the approach effectively detects and corrects errors while introducing minor additional circuitry.
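A toy software model of biresidue checking (the moduli 7 and 15 and the 12-bit word width are assumptions, and the bit-flip search stands in for the paper's syndrome-decoding circuit) shows how two residues suffice to locate and correct a single-bit error:

```python
M1, M2 = 2 ** 3 - 1, 2 ** 4 - 1   # biresidue moduli 7 and 15 (assumed)

def encode(n):
    """Store a word together with its two check residues."""
    return n, n % M1, n % M2

def detect_correct(word, r1, r2, width=12):
    """If the received word disagrees with its residues, search for
    the single-bit flip that restores both checks."""
    if word % M1 == r1 and word % M2 == r2:
        return word, False             # no error detected
    for i in range(width):
        cand = word ^ (1 << i)
        if cand % M1 == r1 and cand % M2 == r2:
            return cand, True          # corrected single-bit error
    raise ValueError("uncorrectable: more than one bit in error")
```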
PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN... - cscpconf
Lossy JPEG compression is a widely used compression technique. The JPEG technique normally uses two processes: quantization, which is lossy, and entropy encoding, which is considered lossless. In this paper, a new technique is proposed that combines the JPEG algorithm with a symbol-reduction Huffman technique to achieve a higher compression ratio. The symbol-reduction technique reduces the number of symbols by combining symbols to form a new symbol; as a result, the number of Huffman codes to be generated is also reduced. The results show that the performance of the standard JPEG method can be improved by the proposed method: the hybrid approach achieves about 20% more compression than standard JPEG.
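The effect of symbol reduction on the Huffman stage can be sketched as follows. Pairing adjacent characters stands in for the paper's symbol-combination rule, which is an assumption here; on repetitive data the combined alphabet stays small, so the total coded length drops:

```python
import heapq
from collections import Counter
from itertools import count

def huffman_lengths(freq):
    """Return {symbol: code length in bits} for a frequency table."""
    tie = count()                       # tie-breaker for the heap
    heap = [(f, next(tie), {s: 0}) for s, f in freq.items()]
    if len(heap) == 1:                  # degenerate one-symbol alphabet
        return {s: 1 for s in heap[0][2]}
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

def encoded_bits(seq):
    """Total bits needed to Huffman-code the sequence."""
    lengths = huffman_lengths(Counter(seq))
    return sum(lengths[s] for s in seq)

def pair_symbols(data):
    """Symbol reduction: combine adjacent characters into one new
    two-character symbol."""
    if len(data) % 2:
        data += data[-1]                # pad to an even length
    return [data[i] + data[i + 1] for i in range(0, len(data), 2)]
```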
Performance and Analysis of Video Compression Using Block Based Singular Valu... - IJMER
This document presents an analysis of low-complexity video compression using block-based singular value decomposition (SVD) algorithms. It begins with an introduction to video compression and its importance for reducing storage and transmission costs. Current video compression standards like MPEG and H.26x are computationally expensive, making them unsuitable for real-time applications. The document then discusses block SVD algorithms as an alternative that can provide higher quality compression at lower computational complexity. It analyzes reducing the time complexity of video compression using block SVD and compares it to other compression methods. The document outlines the SVD decomposition process and how a 2D version can be applied to groups of image blocks for more efficient compression than 1D SVD.
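The block-based rank truncation described above can be sketched in a few lines of NumPy. The tile size and rank are illustrative assumptions, and the image dimensions are assumed to be multiples of the block size:

```python
import numpy as np

def compress_block(block, k):
    """Rank-k SVD approximation of one image tile; storing U[:, :k],
    s[:k] and Vt[:k] in place of the tile is the compression."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def block_svd(image, block=8, k=2):
    """Apply the rank-k approximation independently to each
    block x block tile (the low-complexity variant)."""
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = compress_block(
                image[y:y + block, x:x + block].astype(float), k)
    return out
```

Working per tile keeps each SVD tiny, which is where the complexity saving over a whole-frame decomposition comes from.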
A REAL-TIME H.264/AVC ENCODER&DECODER WITH VERTICAL MODE FOR INTRA FRAME AND ... - csandit
Video coding standards are developed to satisfy the requirements of various applications: better picture quality, higher coding efficiency, and greater error robustness. The international video coding standard H.264/AVC aims at significant improvements in coding efficiency and error robustness compared with previous standards such as MPEG-2, H.261, and H.263. A video stream must pass through several steps to be encoded and decoded so that it is compressed efficiently with the limited hardware and software resources available. The advantages and disadvantages of the available algorithms should be understood in order to implement a codec that meets the final requirements. The purpose of this project is to implement all basic building blocks of the H.264 video encoder and decoder. The significance of the project is the inclusion of all components required to encode and decode a video in MATLAB.
Selection of intra prediction modes for intra frame coding in advanced video ... - eSAT Journals
Abstract: This paper proposes the selection of intra prediction modes for intra frame coding in the Advanced Video Coding (AVC) standard using MATLAB. There are nine prediction modes for intra frames in AVC, but not all of them are required for every application. Intra prediction is the first process in AVC: it predicts a macroblock by referring to previously coded macroblocks to reduce spatial redundancy, and applying all nine modes to every intra frame greatly increases the computational complexity at the encoder. In the proposed algorithm, all prediction modes (0-8) were applied, but only a few modes (mode 0, mode 1, mode 2, mode 4, and mode 6) give good PSNR, a high compression ratio, and a low bit rate; among these, mode 2 performs best. Modes 5, 7, and 8 give lower PSNR, a lower compression ratio, and a higher bit rate than modes 0, 1, 2, 4, and 6. Simulation results in MATLAB present the PSNR, compression ratio, and bit rate achieved for different quantization parameters on the Mother-Daughter and Foreman sequences. Keywords: AVC, PSNR, CAVLC, Macroblock, Prediction modes.
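The shape of these prediction modes, and the SAD-based choice between them, can be sketched for a 4x4 block. Only modes 0 (vertical), 1 (horizontal), and 2 (DC) are shown; the remaining directional modes follow the same pattern, and restricting the candidate set here is illustrative, not the paper's exact selection rule:

```python
import numpy as np

def intra_predict_4x4(above, left, mode):
    """Predict a 4x4 block from its reconstructed neighbours."""
    if mode == 0:                      # vertical: copy the row above
        return np.tile(above, (4, 1))
    if mode == 1:                      # horizontal: copy the left column
        return np.tile(left.reshape(4, 1), (1, 4))
    if mode == 2:                      # DC: rounded mean of neighbours
        dc = (above.sum() + left.sum() + 4) // 8
        return np.full((4, 4), dc)
    raise NotImplementedError(mode)

def best_mode(block, above, left, modes=(0, 1, 2)):
    """Pick the candidate mode with the smallest SAD, as the
    encoder's mode decision does."""
    sads = {m: np.abs(block - intra_predict_4x4(above, left, m)).sum()
            for m in modes}
    return min(sads, key=sads.get)
```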
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars, and Students of related fields of Engineering and Technology.
This document discusses video quality analysis for H.264 based on the human visual system. It proposes an improved video quality assessment method that adds color comparison to structural similarity measurement. The method separates similarity measurement into four comparisons: luminance, contrast, structure, and color. Experimental results on video sets with two distortion types show the proposed method's quality scores are more consistent with visual quality than classical methods. It also discusses the H.264 video coding standard and provides examples of encoding and decoding experimental results.
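The four-way separation can be sketched as follows. This is a global (whole-image) version for clarity; real SSIM-style metrics compute the terms over local windows, and the equal weighting of the structural and colour scores is an assumption, not the paper's calibration:

```python
import numpy as np

C1, C2 = 6.5025, 58.5225   # standard SSIM stabilising constants (8-bit)

def similarity(x, y):
    """Global luminance / contrast / structure comparison in the
    spirit of SSIM."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    lum = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
    con = (2 * np.sqrt(vx * vy) + C2) / (vx + vy + C2)
    stru = (cov + C2 / 2) / (np.sqrt(vx * vy) + C2 / 2)
    return lum * con * stru

def quality(rgb_a, rgb_b):
    """Combine the structural score on the grey image with a
    per-channel colour comparison (equal weighting assumed)."""
    s = similarity(rgb_a.mean(axis=2), rgb_b.mean(axis=2))
    color = np.mean([similarity(rgb_a[..., c], rgb_b[..., c])
                     for c in range(3)])
    return 0.5 * s + 0.5 * color
```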
This document describes an FPGA-based human detection system with an embedded platform. Key points:
- The system uses HOG features, SVM classification, and AdaBoost algorithms for human detection in images and video.
- FPGA circuits are designed to accelerate the computationally intensive HOG feature extraction, including modules for gradient calculation, histogram accumulation, and more.
- The full system is implemented on an embedded platform to achieve a real-time human detection system running at 15 frames per second.
- Experimental results show the FPGA-based system has similar detection accuracy to a PC-based software implementation but significantly faster speed, suitable for real-time embedded applications.
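The two stages the system accelerates in hardware, gradient calculation and histogram accumulation, can be sketched for one HOG cell (cell size, bin count, and the unsigned-orientation convention are common choices, assumed here):

```python
import numpy as np

def hog_cell_histogram(cell, bins=9):
    """Gradient computation and magnitude-weighted orientation
    histogram for one HOG cell."""
    cell = cell.astype(float)
    gx = np.zeros_like(cell)
    gy = np.zeros_like(cell)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]    # central differences
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]
    mag = np.hypot(gx, gy)                      # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    idx = (ang / (180 / bins)).astype(int) % bins
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())   # vote by magnitude
    return hist
```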
Optimization of latency of temporal key Integrity protocol (tkip) using graph... - ijcseit
The document discusses optimization of latency in the Temporal Key Integrity Protocol (TKIP) using hardware-software co-design and graph theory. It presents a mathematical model to partition TKIP algorithms between hardware and software blocks to minimize latency. Simulation results showed the proposed technique achieved lower latency than a hardware-only implementation, reducing latency from 10us to 8us. The technique models TKIP modules as a graph and uses algorithms to assign modules to hardware or software based on latency calculations.
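The partitioning step can be illustrated with a brute-force toy model: each module is assigned to hardware or software and the assignment with the lowest total latency wins. The module names and latency numbers are invented for illustration, and summing latencies ignores the inter-module communication costs the paper's graph model captures:

```python
from itertools import product

def best_partition(modules, hw_lat, sw_lat):
    """Exhaustively assign each module to HW ('H') or SW ('S') and
    keep the assignment with the lowest total latency."""
    best = (float("inf"), None)
    for choice in product("HS", repeat=len(modules)):
        total = sum(hw_lat[m] if c == "H" else sw_lat[m]
                    for m, c in zip(modules, choice))
        best = min(best, (total, choice))
    return best
```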
IOSR journal of VLSI and Signal Processing (IOSRJVSP) is an open access journal that publishes articles which contribute new results in all areas of VLSI Design & Signal Processing. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced VLSI Design & Signal Processing concepts and establishing new collaborations in these areas.
An Investigation towards Effectiveness in Image Enhancement Process in MPSoC - IJECEIAES
The document discusses image enhancement techniques for use in multiprocessor systems-on-chip (MPSoC). It reviews existing image enhancement methods implemented on FPGA and identifies limitations like low accuracy. The paper proposes using advanced bus architectures like AMBA/AXI in MPSoC to improve communication between memory and input medical images for enhancement, reducing heterogeneity effects. It summarizes various image enhancement algorithms that could be implemented on reconfigurable FPGA hardware for use in MPSoC, including brightness control, contrast stretching, histogram equalization, and edge detection techniques.
International Journal of Engineering Research and Development - IJERD Editor
The document summarizes an emerging VP8 video codec that is designed for mobile devices. It aims to significantly reduce computational complexity through several techniques while maintaining good video quality. The key techniques include a predictive algorithm for motion estimation that reduces computation by 18.5-20x compared to full search, using integer discrete cosine transform instead of floating point to achieve 2.6-3.5x speed improvement, and skipping DCT and quantization for some macroblocks to reduce computations. Experimental results on test sequences show negligible quality degradation of 0.2-0.5dB for integer DCT and 0.5dB on average for the full codec, while achieving real-time encoding rates on mobile devices. The proposed low-complexity
This document discusses two methods for diagnosing faulty logic blocks in FPGA fabrics: the algebraic logic method and vector-logical method. The algebraic logic method is more useful for processing sparse fault tables with fewer than 20% 1s values, as it reduces the fault table size and simplifies computations to generate sum-of-products expressions to diagnose issues. The method involves removing rows and columns with all 0s, then constructing product-of-sums terms for each 1 in the response vector and converting to sum-of-products form. The vector-logical method is better for dense fault tables with many 1s, as it can more easily analyze tables where 1s predominate over 0s. Both methods aim to localize
LEGENDRE TRANSFORM BASED COLOR IMAGE AUTHENTICATION (LTCIA) - cscpconf
In this paper, a fragile watermarking technique using the Legendre transform (LT) is proposed for color image authentication (LTCIA). An initial pixel adjustment ensures that values never exceed the range of a valid pixel component. The LT is applied to each pair of pixel components of the carrier image in row-major order. The transformed components of each pair are used to embed one, two, or three watermark bits starting from the first bit position (LSB-1), based on the perceptibility of the human eye to the red, green, and blue channels. An adjustment method keeps the embedded transformed components close to the original without disturbing the embedded watermark bits. The inverse Legendre transform (ILT) is applied to each adjusted pair as a post-embedding operation to regenerate the watermarked image in the spatial domain. At the receiving end, the whole watermark can be extracted by the reverse procedure, and authentication is performed through a message digest. Experimental results confirm that the proposed algorithm has higher payload and PSNR than Varsaki et al.'s method [1] and SDHTIWCIA [2].
“FIELD PROGRAMMABLE DSP ARRAYS” - A NOVEL RECONFIGURABLE ARCHITECTURE FOR EFF... - sipij
Digital signal processing functions are widely used in real-time, high-speed applications. These functions are generally implemented either on ASICs, with their inflexibility, or on FPGAs, with the bottlenecks of a relatively low utilization factor or lower speed compared with an ASIC. The proposed reconfigurable DSP processor resembles an FPGA, but with fixed basic Common Modules (CMs) (such as adders, subtractors, multipliers, scaling units, and shifters) instead of CLBs. This paper introduces the development of a reconfigurable DSP processor that integrates different filter and transform functions. Switching between DSP functions is achieved by reconfiguring the interconnections between CMs. The proposed reconfigurable architecture has been validated on a Virtex-5 FPGA and provides a good degree of flexibility, parallelism, and scalability.
Video Compression Algorithm Based on Frame Difference Approaches - ijsc
The huge use of digital multimedia over communication networks, wireless links, the Internet, intranets, and cellular mobile systems leads to relentless growth of the data flowing through these media, and researchers have gone deep into developing efficient techniques for compressing data, images, and video. Recently, video compression techniques and their applications in many areas (education, agriculture, medicine, and others) have made this one of the most active fields. The wavelet transform is an efficient method that can be used to build an effective compression technique. This work develops a video compression approach based on frame differences, concentrating on the calculation of the frame near distance (the difference between frames). The selection of meaningful frames depends on several factors, such as compression performance, frame detail, frame size, and the near distance between frames. Three different approaches are applied to remove the frames with the lowest difference. Many videos were tested to verify the efficiency of the technique, and good performance results were obtained.
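The near-distance idea can be sketched as follows; defining the distance as the mean absolute pixel difference, and greedily dropping the frame closest to its predecessor, are plausible assumptions rather than the paper's exact rules:

```python
import numpy as np

def near_distance(f1, f2):
    """Mean absolute difference between two frames, used as the
    'near distance' ranking."""
    return np.abs(f1.astype(float) - f2.astype(float)).mean()

def drop_closest_frames(frames, keep):
    """Repeatedly remove the frame whose distance to its predecessor
    is smallest, i.e. the frame carrying the least new information."""
    frames = list(frames)
    while len(frames) > keep:
        dists = [near_distance(frames[i - 1], frames[i])
                 for i in range(1, len(frames))]
        frames.pop(1 + int(np.argmin(dists)))
    return frames
```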
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
This article proposes a closer-to-metal approach to RTL inspection in microprocessor design for use in education, engineering, and research. Signals of interest are tapped throughout the microprocessor's hierarchical design, routed to the top-level entity, and finally displayed on a VGA monitor. The input clock signal can be fed as slowly as one wishes in order to trace or debug the microprocessor being designed. An FPGA development board, along with its accompanying software package, is used as the design and test platform. The use of the VHDL constructs 'type' and 'record' in the hierarchy provides the key ingredient of the overall design, since it allows simple, clean, and tractable code. The method is tested on a MIPS single-cycle microprocessor blueprint. The results show that the technique produces a more consistent display of the true contents of registers, ALU input/output signals, and other wires than the standard, widely used simulation method. This approach is expected to increase the confidence of students and designers, since the reported signal values are the true values. Its use is not limited to the development of microprocessors; every FPGA-based digital design can benefit from it.
Designing of telecommand system using system on chip soc for spacecraft contr... - IAEME Publication
Emerging developments in semiconductor technology have made it possible to design an entire system on a single chip, commonly known as a System-on-Chip (SoC). The demand for greater on-board data processing capability in space systems can be met by optimizing SoCs to provide cost-effective, high-performance, and reliable data handling. This is achieved by embedding pre-designed functions into a single SoC, which incorporates specialized reusable cores (IP cores) into a complex chip. This paper is concerned with the design of a telecommand system for the transfer of signals from a ground station to a space station, integrating SRAM (Static Random Access Memory), an ARM (Advanced RISC Machine) processor, an EDAC (Error Detection and Correction) unit, and a CCSDS (Consultative Committee for Space Data Systems) decoder. The telecommand SoC is designed in Verilog and implemented on the Xilinx FPGA platform, and the functionality of the system is verified by ModelSim simulation. The results are analyzed for a Spartan-3E device and an ARM board, with the two devices controlled via the signal transfer.
Exploring NISC Architectures for Matrix Application - IDES Editor
This document summarizes the results of exploring different NISC (No Instruction Set Computer) architectures for matrix applications. The authors implemented several matrix applications using a NISC toolset and analyzed the compilation and simulation results on different NISC architectures. They analyzed register counts and execution cycle counts and identified the best architectures. For register counts, architectures with pipelined datapaths and data forwarding performed best. For cycle counts, a custom architecture designed by the authors outperformed generic NISC architectures for matrix applications. The authors conducted a comparative analysis to determine the optimal architecture depending on the application requirements.
This document proposes power efficient sum of absolute difference (SAD) algorithms for video compression. It describes:
1. Developing low power 1-bit full adder architectures including a proposed design using NAND, AND, and OR gates that improves power over existing designs.
2. Implementing 4x4 and 8x8 SAD architectures using the proposed low power full adder, ripple carry adders, and carry save adders.
3. Synthesizing the SAD designs in a 180nm technology and finding the proposed 4x4 SAD improves total power by 61% compared to an existing design.
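The building blocks in steps 1 and 2 can be modelled behaviourally. The Boolean equations below are the standard full-adder logic, of which the paper's NAND/AND/OR structure is a power-optimised gate-level equivalent; the 8-bit word width is an assumption:

```python
def full_adder(a, b, cin):
    """Gate-level 1-bit full adder: sum and carry-out."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_carry_add(x, y, width=8):
    """Ripple-carry adder built from the 1-bit cell, as used in the
    SAD datapath; the final carry-out is dropped (modular result)."""
    carry, out = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out
```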
This document discusses post-processing and rate distortion algorithms for the VP8 video codec. It first provides background on the need for post-processing algorithms to reduce blocking artifacts in compressed video, and for rate control algorithms to regulate bitrates and achieve high video quality within bandwidth constraints. It then summarizes existing in-loop deblocking filters and post-processing algorithms. A novel optimal post-processing/in-loop filtering algorithm is described that can achieve better performance than H.264/AVC or VP8 by computing optimal filter coefficients. Finally, a proposed rate distortion optimization algorithm for VP8 is discussed to improve its rate control and coding efficiency.
Distributed Video Coding (DVC) has become increasingly popular among video coding researchers due to its attractive and promising features. In contrast to conventional video codecs, DVC shifts the complexity balance between the encoder and decoder. However, most reported DVC schemes have a high decoding delay, which hinders their practical application in real-time systems. This work focuses on speeding up the side information (SI) generation module in DVC, a major function of the DVC decoding algorithm and one of the most time-consuming stages at the decoder. By implementing it with the Compute Unified Device Architecture (CUDA) on a general-purpose graphics processing unit (GPGPU), the experimental results show that a considerable speedup can be obtained with the proposed parallelized SI generation algorithm.
Improved Error Detection and Data Recovery Architecture for Motion Estimation... - IJERA Editor
The document discusses an improved error detection and data recovery architecture for motion estimation testing applications. It presents a residue-and-quotient (RQ) code-based design to embed into motion estimation for detecting and recovering from errors in processing elements. The proposed design can effectively detect errors and recover data with acceptable area overhead and timing penalty. It performs satisfactorily in terms of throughput and reliability for motion estimation testing.
Design and Implementation of JPEG CODEC using NoC - IRJET Journal
This document describes the design and implementation of a JPEG codec using a Network-on-Chip (NoC) structure. It aims to speed up the image transfer process and provide shorter processing times. The key steps are:
1. The JPEG encoding process includes color space conversion, downsampling, block division, discrete cosine transform, quantization, and entropy coding to compress the image.
2. A NoC is used to transmit the compressed image data packets across the chip to reduce latency during transfer.
3. The JPEG decoding process reverses the encoding steps through entropy decoding, dequantization, inverse discrete cosine transform, and image reconstruction to decompress the image for viewing.
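The transform and quantization stages at the heart of step 1 can be sketched as follows. The naive O(N^4) DCT and the single scalar quantizer step are simplifying assumptions; a real JPEG codec uses fast DCT variants and an 8x8 quantization table:

```python
import math

N = 8

def dct2(block):
    """Naive 2-D DCT-II of an N x N block (O(N^4); real codecs use fast variants)."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, q=16):
    """Uniform quantization with one step size q (real JPEG uses an 8x8 table)."""
    return [[round(c / q) for c in row] for row in coeffs]

# A flat block concentrates all its energy in the DC coefficient.
flat = [[128] * N for _ in range(N)]
quant = quantize(dct2(flat))
assert quant[0][0] == 64   # DC = 128 * 8 = 1024, then 1024 / 16 = 64
assert all(quant[u][v] == 0 for u in range(N) for v in range(N) if (u, v) != (0, 0))
```

The many zeros produced by quantization are what makes the subsequent entropy coding stage effective.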
This document discusses video quality analysis for H.264 based on the human visual system. It proposes an improved video quality assessment method that adds color comparison to structural similarity measurement. The method separates similarity measurement into four comparisons: luminance, contrast, structure, and color. Experimental results on video sets with two distortion types show the proposed method's quality scores are more consistent with visual quality than classical methods. It also discusses the H.264 video coding standard and provides examples of encoding and decoding experimental results.
This document describes an FPGA-based human detection system with an embedded platform. Key points:
- The system uses HOG features, SVM classification, and AdaBoost algorithms for human detection in images and video.
- FPGA circuits are designed to accelerate the computationally intensive HOG feature extraction, including modules for gradient calculation, histogram accumulation, and more.
- The full system is implemented on an embedded platform to achieve a real-time human detection system running at 15 frames per second.
- Experimental results show the FPGA-based system has similar detection accuracy to a PC-based software implementation but significantly faster speed, suitable for real-time embedded applications.
Optimization of latency of temporal key Integrity protocol (tkip) using graph... - ijcseit
The document discusses optimization of latency in the Temporal Key Integrity Protocol (TKIP) using hardware-software co-design and graph theory. It presents a mathematical model to partition TKIP algorithms between hardware and software blocks to minimize latency. Simulation results showed the proposed technique achieved lower latency than a hardware-only implementation, reducing latency from 10us to 8us. The technique models TKIP modules as a graph and uses algorithms to assign modules to hardware or software based on latency calculations.
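The partitioning idea can be sketched as a toy chain model. The module names, latency numbers, and communication cost below are hypothetical, and the greedy rule stands in for the paper's graph-theoretic formulation:

```python
# Hypothetical per-module latencies (us) for a TKIP-like pipeline; the names
# and numbers are illustrative, not taken from the paper.
modules = [("key_mix1", {"hw": 1.0, "sw": 2.5}),
           ("key_mix2", {"hw": 1.2, "sw": 2.0}),
           ("michael_mic", {"hw": 0.8, "sw": 1.5}),
           ("rc4", {"hw": 1.5, "sw": 1.2})]
COMM = 0.3  # assumed cost of crossing the hw/sw boundary between adjacent modules

def partition(mods, comm=COMM):
    """Greedy chain partition: pick the faster side per module, then add a
    communication penalty for every hw<->sw transition along the chain."""
    side = [min(lat, key=lat.get) for _, lat in mods]
    total = sum(lat[s] for (_, lat), s in zip(mods, side))
    total += comm * sum(a != b for a, b in zip(side, side[1:]))
    return side, total

side, total = partition(modules)
assert side == ["hw", "hw", "hw", "sw"]
assert abs(total - 4.5) < 1e-9   # 1.0 + 1.2 + 0.8 + 1.2 plus one 0.3 crossing
```

With these numbers the mixed partition beats both a hardware-only and a software-only assignment, which mirrors the latency reduction the paper reports.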
An Investigation towards Effectiveness in Image Enhancement Process in MPSoC - IJECEIAES
The document discusses image enhancement techniques for use in multiprocessor systems-on-chip (MPSoC). It reviews existing image enhancement methods implemented on FPGA and identifies limitations like low accuracy. The paper proposes using advanced bus architectures like AMBA/AXI in MPSoC to improve communication between memory and input medical images for enhancement, reducing heterogeneity effects. It summarizes various image enhancement algorithms that could be implemented on reconfigurable FPGA hardware for use in MPSoC, including brightness control, contrast stretching, histogram equalization, and edge detection techniques.
International Journal of Engineering Research and Development - IJERD Editor
The document summarizes an emerging VP8 video codec that is designed for mobile devices. It aims to significantly reduce computational complexity through several techniques while maintaining good video quality. The key techniques include a predictive algorithm for motion estimation that reduces computation by 18.5-20x compared to full search, using integer discrete cosine transform instead of floating point to achieve 2.6-3.5x speed improvement, and skipping DCT and quantization for some macroblocks to reduce computations. Experimental results on test sequences show negligible quality degradation of 0.2-0.5dB for integer DCT and 0.5dB on average for the full codec, while achieving real-time encoding rates on mobile devices. The proposed low-complexity
This document discusses two methods for diagnosing faulty logic blocks in FPGA fabrics: the algebraic logic method and vector-logical method. The algebraic logic method is more useful for processing sparse fault tables with fewer than 20% 1s values, as it reduces the fault table size and simplifies computations to generate sum-of-products expressions to diagnose issues. The method involves removing rows and columns with all 0s, then constructing product-of-sums terms for each 1 in the response vector and converting to sum-of-products form. The vector-logical method is better for dense fault tables with many 1s, as it can more easily analyze tables where 1s predominate over 0s. Both methods aim to localize
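The algebraic-logic flow (AND the per-failing-test OR terms, then keep candidates consistent with the passing tests) can be sketched for the single-fault case. The 4x4 detection table below is hypothetical:

```python
# Hypothetical test -> detected-faults table (a sparse table, the case where
# the paper's algebraic logic method applies).
table = {
    "t1": {"f1", "f3"},
    "t2": {"f3"},
    "t3": {"f2"},
    "t4": {"f3", "f4"},
}

def diagnose(table, response):
    """Intersect the detection sets of all failing tests (the AND of per-test
    OR terms, restricted to single-fault products), then drop any fault that
    a passing test should also have detected."""
    failing = [t for t, bit in response.items() if bit]
    passing = [t for t, bit in response.items() if not bit]
    cand = set.intersection(*(table[t] for t in failing)) if failing else set()
    for t in passing:
        cand -= table[t]
    return cand

# Tests t1, t2, t4 fail and t3 passes: the only consistent single fault is f3.
response = {"t1": 1, "t2": 1, "t3": 0, "t4": 1}
assert diagnose(response=response, table=table) == {"f3"}
```

The full method also handles multi-fault product terms; this sketch keeps only the single-fault intersection step.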
LEGENDRE TRANSFORM BASED COLOR IMAGE AUTHENTICATION (LTCIA) - cscpconf
In this paper, a fragile watermarking technique using Legendre transform (LT) has been
proposed for color image authentication (LTCIA). An initial pixel adjustment is used to ensure
that it never exceeds the range of a valid pixel component. The Legendre transform (LT) is applied on each pair of pixel components of the carrier image in row major order. The
transformed components of each transformed pair are used to fabricate one/two/three watermark bits starting from the first bit position (LSB-1) based on the perceptibility of human
eye on red, green and blue channel. An adjustment method has been incorporated to keep the embedded transformed components closer to the original without hampering the fabricated watermark bits. The inverse Legendre transform (ILT) is applied on each adjusted pair as post
embedding operation to re-generate the watermarked image in spatial domain. At the receiving
end, the whole watermark can be extracted by the reverse procedure, and authentication is done through a message digest. Experimental results confirm that the proposed algorithm has enhanced payload and PSNR over Varsaki et al.'s method [1] and SDHTIWCIA [2].
“FIELD PROGRAMMABLE DSP ARRAYS” - A NOVEL RECONFIGURABLE ARCHITECTURE FOR EFF... - sipij
Digital signal processing functions are widely used in real-time, high-speed applications. These functions are generally implemented either on ASICs, at the cost of inflexibility, or on FPGAs, with the bottlenecks of a relatively lower utilization factor or lower speed compared to an ASIC. The proposed reconfigurable DSP processor is reminiscent of an FPGA, but with fixed basic Common Modules (CMs) (such as adders, subtractors, multipliers, scaling units, and shifters) instead of CLBs. This paper introduces the development of a reconfigurable DSP processor that integrates different filter and transform functions. Switching between DSP functions is achieved by reconfiguring the interconnections between CMs. Validation of the proposed reconfigurable architecture has been carried out on a Virtex5 FPGA. The architecture provides a sufficient degree of flexibility, parallelism, and scalability.
Video Compression Algorithm Based on Frame Difference Approaches - ijsc
The enormous use of digital multimedia over wired and wireless communications, the Internet, intranets, and cellular mobile networks leads to relentless growth of the data flowing through these media. Researchers have therefore gone deep into developing efficient techniques in these fields, such as compression of data, images, and video. Recently, video compression techniques and their applications in many areas (education, agriculture, medicine, ...) have made this one of the most active fields. The wavelet transform is an efficient method that can be used to build an effective compression technique. This work develops an efficient video compression approach based on frame-difference methods that concentrate on calculating the frame near distance (the difference between frames). The selection of the meaningful frame depends on many factors, such as compression performance, frame details, frame size, and the near distance between frames. Three different approaches are applied for removing the frames with the lowest difference. Many videos are tested to ensure the efficiency of this technique, and good performance results have been obtained.
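The frame near-distance selection described above can be sketched as follows. The mean-absolute-difference metric and the fixed threshold are simplifying assumptions standing in for the paper's three removal approaches:

```python
def frame_distance(a, b):
    """Mean absolute pixel difference between two frames (lists of rows)."""
    n = len(a) * len(a[0])
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb)) / n

def select_frames(frames, threshold):
    """Keep the first frame, then drop any frame whose near distance to the
    last kept frame falls below the threshold (it carries little new data)."""
    kept = [frames[0]]
    for f in frames[1:]:
        if frame_distance(kept[-1], f) >= threshold:
            kept.append(f)
    return kept

# Two near-identical frames followed by a scene change: only two survive.
f0 = [[10, 10], [10, 10]]
f1 = [[10, 11], [10, 10]]      # distance 0.25 to f0 -> dropped
f2 = [[200, 200], [200, 200]]  # large distance -> kept
assert select_frames([f0, f1, f2], threshold=1.0) == [f0, f2]
```

Dropped frames would be reconstructed at the decoder from their nearest kept neighbor, which is where the compression gain comes from.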
This article proposes a closer-to-metal approach to RTL inspection in microprocessor design for use in education, engineering, and research. Signals of interest are tapped throughout the microprocessor's hierarchical design, output to the top-level entity, and finally displayed on a VGA monitor. The input clock can be fed as slowly as one wishes to trace or debug the microprocessor being designed. An FPGA development board, along with its accompanying software package, is used as the design and test platform. The use of the VHDL constructs 'type' and 'record' in the hierarchy provides key ingredients in the overall design, since this allows simple, clean, and tractable code. The method is tested on a MIPS single-cycle microprocessor blueprint. The results show that the technique produces a more consistent display of the true contents of registers, ALU input/output signals, and other wires compared to the standard, widely used simulation method. This approach is expected to increase the confidence of students and designers, since the reported signal values are the true values. Its use is not limited to the development of microprocessors; every FPGA-based digital design can benefit from it.
Designing of telecommand system using system on chip soc for spacecraft contr... - IAEME Publication
Emerging developments in semiconductor technology have made it possible to design an entire system on a single chip, commonly known as a System-on-Chip (SoC). Growing demands on spacecraft on-board data processing can be met by optimized SoCs that provide cost-effective, high-performance, and reliable operation. This is achieved by embedding pre-designed functions into a single SoC, which integrates specialized reusable cores (IP cores) into a complex chip. This paper is concerned with the design of a telecommand system for the transfer of signals from a ground station to a space station by integrating SRAM (Static Random Access Memory), an ARM (Advanced RISC Machine) processor, an EDAC (Error Detection and Correction) unit, and a CCSDS (Consultative Committee for Space Data Systems) decoder. The telecommand SoC is designed in Verilog. The implementation has been done on a Xilinx FPGA platform, and the functionality of the system is verified using Modelsim simulation. The results are analyzed for a SPARTAN 3E device and an ARM board, and two devices are controlled via the transferred signals.
Exploring NISC Architectures for Matrix Application - IDES Editor
This document summarizes the results of exploring different NISC (No Instruction Set Computer) architectures for matrix applications. The authors implemented several matrix applications using a NISC toolset and analyzed the compilation and simulation results on different NISC architectures. They analyzed register counts and execution cycle counts and identified the best architectures. For register counts, architectures with pipelined datapaths and data forwarding performed best. For cycle counts, a custom architecture designed by the authors outperformed generic NISC architectures for matrix applications. The authors conducted a comparative analysis to determine the optimal architecture depending on the application requirements.
This document proposes power efficient sum of absolute difference (SAD) algorithms for video compression. It describes:
1. Developing low power 1-bit full adder architectures including a proposed design using NAND, AND, and OR gates that improves power over existing designs.
2. Implementing 4x4 and 8x8 SAD architectures using the proposed low power full adder, ripple carry adders, and carry save adders.
3. Synthesizing the SAD designs in a 180nm technology and finding the proposed 4x4 SAD improves total power by 61% compared to an existing design.
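The SAD metric that these low-power adder architectures accelerate is simple to state in software. This sketch shows only the arithmetic, not the full-adder structure the work contributes:

```python
def sad(cur, ref):
    """Sum of absolute differences between a current and a reference block;
    the distortion metric accumulated by the 4x4 and 8x8 SAD architectures."""
    return sum(abs(c - r) for rc, rr in zip(cur, ref) for c, r in zip(rc, rr))

cur = [[10, 12, 11, 13],
       [ 9, 10, 12, 11],
       [11, 13, 10, 12],
       [12, 11, 13, 10]]
ref = [row[:] for row in cur]
assert sad(cur, ref) == 0     # identical blocks give a perfect match
ref[0][0] += 5
assert sad(cur, ref) == 5     # one pixel off by 5 gives an SAD of 5
```

In hardware, the absolute-difference units feed an adder tree (ripple-carry or carry-save in the designs above), so the power of the 1-bit full adder dominates the total.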
This document provides an overview of a project that implemented image filtering using VHDL on an FPGA board. It discusses designing filters like average, Sobel, Gaussian, and Laplacian filters. Cache memory and a processing unit were developed to hold pixel values and apply filter kernels. Different methods for multiplication in the convolution process were evaluated. Results showed the output images after applying each filter both in software and on the FPGA board. In conclusion, FPGAs provide reconfigurable, accelerated processing for image applications like filtering compared to general purpose computers.
HOMOGENEOUS MULTISTAGE ARCHITECTURE FOR REAL-TIME IMAGE PROCESSING - cscpconf
The document describes a homogeneous multistage architecture for real-time image processing. It proposes a parallel architecture using multiple identical processing elements connected by different communication links. As an example application, it discusses a multi-hypothesis approach for road recognition, which uses multiple hypotheses to detect and track road edges in video in real-time. Experimental results using a FPGA demonstrate the architecture can detect roadsides in images within 60 milliseconds.
This document compares two video compression techniques: Embedded Zerotree Wavelet (EZW) algorithm and H.264 codec. It finds that H.264 provides higher compression ratios of 60-70% compared to EZW, which achieves lower compression. Frames compressed with H.264 show better visual quality than those compressed with EZW. The document concludes that while EZW achieves good compression performance, H.264 is currently the better technique for video compression applications.
Multi Processor Architecture for image processing - ideas2ignite
The document describes a multiprocessor architecture designed for image processing applications. It consists of an array of customized processing elements (PEs) interconnected via a network. Each PE contains an instruction set optimized for pixel/region-based operations. A test algorithm for image change detection was implemented and simulated across the PE network to demonstrate the concept. Future work involves designing a more efficient custom processor for the PEs.
An Overview on Multimedia Transcoding Techniques on Streaming Digital Contents - idescitation
The current IT infrastructure, as well as various commercial applications, is built directly on multimedia systems, e.g. in education, marketing, risk management, tele-medicine, and the military. One of the challenges in using such applications is delivering an uninterrupted video stream between multiple terminals, e.g. smartphones, PDAs, laptops, and IPTV. Research shows a clear need for novel bit-rate adjustment mechanisms and format-conversion policies so that a source stream plays well on diverse end devices with different processor, memory, and decoding configurations. This paper discusses key points from the literature that illuminate transcoding, the direct digital-to-digital conversion of one encoding to another. Although multimedia transcoding has been an area of research for more than a decade, there remains, unfortunately, a large trade-off between application, service, resource constraints, and hardware design that gives rise to QoS issues.
This document summarizes the development of real-time video processing IP cores in FPGA by NeST including a video scaler, sharpness enhancer, gamma correction, and picture quality enhancer modules. It describes the specifications, algorithms, and architectures of each module developed as reusable IP cores. The video scaler uses bilinear interpolation for scaling up and nearest neighbor for scaling down. The sharpness enhancer uses a Laplacian filter. Gamma correction uses programmable lookup tables. The picture quality enhancer contains brightness, contrast, and color adjustment modules. Together these cores form a video processing suite for applications like surveillance and medical imaging.
Efficient Architecture for Variable Block Size Motion Estimation in H.264/AVC - IDES Editor
This paper proposes an efficient VLSI architecture for the implementation of variable block size motion estimation (VBSME). VBSME lies on the critical path for improving video compression performance. The variable block size motion estimation feature introduced in H.264/AVC adds significant complexity to the design of an H.264/AVC video codec. In this paper we compare existing architectures for VBSME and propose an efficient architecture that improves the performance of spiral search for variable block size motion estimation in H.264/AVC. Among the architectures available for VBSME, spiral search provides a hardware-friendly data flow with efficient utilization of resources. The proposed implementation is verified using MATLAB on the foreman, coastguard, and train sequences. The proposed adaptive thresholding technique reduces the average number of computations significantly with negligible effect on video quality. The results are verified using a hardware implementation on a Xilinx Virtex 4, which achieves real-time video coding at 60 fps with a 95.56 MHz clock frequency.
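The spiral search order and the computation-saving effect of a threshold can be sketched in a few lines. The Chebyshev-ring ordering and the toy cost surface are illustrative assumptions, and the actual design uses an adaptive rather than fixed threshold:

```python
def spiral_order(radius):
    """Candidate motion vectors ordered by Chebyshev ring distance from (0,0),
    so the most likely (small) displacements are examined first."""
    pts = [(dx, dy) for dx in range(-radius, radius + 1)
                    for dy in range(-radius, radius + 1)]
    return sorted(pts, key=lambda p: (max(abs(p[0]), abs(p[1])), p))

def motion_search(cost, radius, threshold):
    """Scan in spiral order; stop as soon as a candidate's cost drops below
    the threshold -- the early-termination idea in miniature."""
    best, best_cost, checked = None, float("inf"), 0
    for p in spiral_order(radius):
        checked += 1
        c = cost(p)
        if c < best_cost:
            best, best_cost = p, c
        if best_cost < threshold:
            break
    return best, checked

# Toy SAD surface with its minimum at (1, 0): the spiral finds it on ring 1
# and skips the remaining candidates of the 5x5 window.
best, checked = motion_search(lambda p: abs(p[0] - 1) + abs(p[1]), 2, 0.5)
assert best == (1, 0)
assert checked < 25
```

Because real motion vectors cluster near (0, 0), scanning rings outward lets the threshold fire early for most blocks, which is where the reduction in average computations comes from.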
EFFICIENT ABSOLUTE DIFFERENCE CIRCUIT FOR SAD COMPUTATION ON FPGA - VLSICS Design
Video compression is essential to meet technological demands such as low power, low memory, and fast transfer rates across a wide range of devices and multimedia applications. Video compression is primarily achieved by the motion estimation (ME) process in any video encoder, which contributes significant compression gain. The sum of absolute differences (SAD) is used as the distortion metric in the ME process. In this paper, an efficient absolute difference (AD) circuit is proposed which uses a Brent-Kung adder (BKA) and a comparator based on a modified 1's complement principle and a conditional sum adder scheme. Results show that the proposed architecture reduces delay by 15% and the number of slice LUTs by 42% compared to the conventional architecture. Simulation and synthesis are done on Xilinx ISE 14.2 using a Virtex 7 FPGA.
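The 1's-complement principle behind such an AD circuit can be modeled behaviorally: one W-bit addition of a with the complement of b, where the carry-out selects between an end-around increment and a final inversion. This is a sketch of the arithmetic identity, not the BKA/conditional-sum hardware:

```python
W = 8                      # operand width in bits (8-bit pixels assumed)
MASK = (1 << W) - 1

def abs_diff(a, b):
    """|a - b| via the 1's-complement trick: add a to ~b in W bits; the
    carry-out tells which operand is larger, so one adder plus a conditional
    increment or inversion replaces a subtract-then-negate datapath."""
    s = a + (~b & MASK)            # a + (2^W - 1 - b)
    if s >> W:                     # carry out -> a > b; end-around carry adds 1
        return (s + 1) & MASK
    return ~s & MASK               # no carry -> b >= a; invert to get b - a

for a, b in [(200, 55), (55, 200), (7, 7), (0, 255)]:
    assert abs_diff(a, b) == abs(a - b)
```

Avoiding a second full subtraction for the negative case is what makes this formulation attractive for the many AD units inside an SAD tree.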
Surveillance systems are expected to record video 24/7, which obviously requires huge storage space. Even though hard disks are cheap today, the number of CCTV cameras is also rising sharply in order to boost security. Video compression is the only practical option for reducing the required storage space; however, existing video compression techniques are not adequate for modern digital surveillance monitoring, as they still produce huge video streams. In this paper, a novel video compression technique is presented with a critical analysis of the experimental results.
IBM VideoCharger and Digital Library MediaBase.doc - Videoguy
This document provides an overview of video streaming over the internet. It discusses video compression standards like H.261, H.263, MJPEG, MPEG1, MPEG2 and MPEG4. It also covers internet transport protocols like TCP and UDP, and challenges like firewall penetration. Both commercial streaming products and research projects aiming to improve streaming are reviewed, with limitations of current approaches outlined. The SuperNOVA research project is evaluated against other work seeking to make high quality video streaming over the internet practical.
IRJET- A Hybrid Image and Video Compression of DCT and DWT Techniques for H.2... - IRJET Journal
This document discusses a hybrid image and video compression technique using both discrete cosine transform (DCT) and discrete wavelet transform (DWT) for H.265/HEVC video compression. The proposed hybrid DWT-DCT method exploits the advantages of both techniques for improved compression performance compared to using them individually. It involves applying DWT-DCT transformations to video frames, entropy coding the compressed frames with Huffman coding, and transmitting the bitstreams to the decoder. The technique is evaluated based on compression ratio, peak signal-to-noise ratio, and mean square error.
Designing of telecommand system using system on chip soc for spacecraft contr... - IAEME Publication
This document describes the design of a telecommand system using a System on Chip (SoC) for spacecraft control applications. It involves integrating various components onto a single chip, including SRAM, an ARM processor, and an Error Detection and Correction (EDAC) unit. The telecommand data is received and stored in the SRAM. The EDAC unit uses a Hamming code to detect and correct any errors in the data before it is processed by the ARM processor. The processor then collects onboard data signals and produces the output result. Verilog code is used to design the SoC, which is implemented and tested on a Xilinx FPGA platform and ARM board. The SoC allows two devices to be controlled by
Similar to HARDWARE SOFTWARE CO-SIMULATION OF MOTION ESTIMATION IN H.264 ENCODER
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR - cscpconf
The progressive development of Synthetic Aperture Radar (SAR) systems has diversified the exploitation of the images generated by these systems in different geoscience applications. The detection and monitoring of surface deformations produced by various phenomena have benefited from this evolution and have been realized with interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelation of the interferometric pairs used strongly limits the precision of the results of these techniques. In this context, we propose a methodological approach to surface deformation detection and analysis using differential interferograms, in order to show the limits of this technique as a function of noise quality and level. The detectability model is generated from the deformation signatures by simulating a linear fault merged into image pairs from the ERS1/ERS2 sensors acquired over a region of southern Algeria.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFICATION - cscpconf
A novel trajectory-guided, concatenative approach for synthesizing high-quality video renderings from real image samples is proposed. The automated lip-reading system seeks the real image sample sequence in the library that is closest to the HMM-predicted trajectory for the given video data. The object trajectory is obtained by projecting the face patterns into a KDA feature space. The approach identifies a speaker's face by synthesizing the identity surface of the subject's face from a small set of patterns that sparsely cover the view sphere. A KDA algorithm is used for lip-reading image discrimination; the low-dimensional fundamental lip feature vector is then reduced using the 2D-DCT. The dimensionality of the mouth-area set is further reduced by PCA to obtain the eigen-lips approach proposed in [33]. The subjective performance results of the cost function under the automatic lip-reading model did not illustrate superior performance of the method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE... - cscpconf
Universities offer a software engineering capstone course to simulate a real-world working environment in which students work in a team for a fixed period to deliver a quality product. The objective of this paper is to report on our experience in moving from a Waterfall process to an Agile process in conducting the software engineering capstone project. We present the capstone course designs for both Waterfall-driven and Agile-driven methodologies, highlighting the structure, deliverables, and assessment plans. To evaluate the improvement, we surveyed two different sections taught by two different instructors about students' experience in moving from the traditional Waterfall model to an Agile-like process. Twenty-eight students filled in the survey, which consisted of eight multiple-choice questions and an open-ended question to collect feedback. The survey results show that students were able to gain hands-on experience that simulates a real-world working environment. The results also show that the Agile approach helped students produce an overall better design and avoid mistakes they had made in the initial design completed in the first phase of the capstone project. In addition, they were able to assess their team's capabilities and training needs and thus learn the required technologies earlier, which is reflected in the final product quality.
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES
This document discusses using social media technologies to promote student engagement in a software project management course. It describes the course and objectives of enhancing communication. It discusses using Facebook for 4 years, then switching to WhatsApp based on student feedback, and finally introducing Slack to enable personalized team communication. Surveys found students engaged and satisfied with all three tools, though less familiar with Slack. The conclusion is that social media promotes engagement but familiarity with the tool also impacts satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC
Using a computer to answer questions has been a human dream since the beginning of the digital era. Question-answering (QA) systems are referred to as intelligent systems that can provide responses to the questions asked by a user, based on facts or rules stored in a knowledge base, and can generate answers to questions asked in natural language. One of the first main ideas of fuzzy logic was to work on the problem of computer understanding of natural language. This survey paper therefore provides an overview of what question answering is, its system architecture, and its possible relationship with fuzzy logic, as well as previous related research with respect to the approaches that were followed. At the end, the survey provides an analytical discussion of the proposed QA models, alone or combined with fuzzy logic, and their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS
Human beings generate different speech waveforms when speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various pronunciations, which facilitates the preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses a dynamic programming technique for global alignment and shortest-distance measurement. The DPW algorithm can be used to enhance the pronunciation dictionaries of well-known languages such as English, or to build pronunciation dictionaries for lesser-known sparse languages. The precision measurement experiments show 88.9% accuracy.
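The global-alignment core of such an algorithm can be sketched with classic dynamic programming. The sketch below is illustrative only: the phone symbols and unit costs are assumptions, not the paper's actual DPW cost model.

```python
def phone_distance(a, b, sub_cost=1, indel_cost=1):
    """Global-alignment (Levenshtein-style) distance between two
    phone sequences, computed by dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost                 # delete all of a[:i]
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost                 # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j] + indel_cost,    # deletion
                          d[i][j - 1] + indel_cost,    # insertion
                          d[i - 1][j - 1] + cost)      # match / substitution
    return d[m][n]

# Two hypothetical pronunciations of the same word as phone lists:
print(phone_distance(["t", "ah", "m", "ey", "t", "ow"],
                     ["t", "ah", "m", "aa", "t", "ow"]))  # 1 (one substitution)
```

A small distance between two pronunciations suggests they are accent variants of the same dictionary entry, while a large distance suggests distinct words.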
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS
In education, the use of electronic (E) examination systems is not a novel idea, as E-examination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. This system assesses answers to subjective questions by finding a matching ratio for the keywords in instructor and student answers. The matching ratio is achieved based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessments.
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTIC
African Buffalo Optimization (ABO) is one of the most recent swarm-intelligence-based metaheuristics. The ABO algorithm is inspired by the buffalo's behavior and lifestyle. Unfortunately, the standard ABO algorithm is proposed only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. In the first version (called SBABO) they use the sigmoid function and a probability model to generate binary solutions. In the second version (called LBABO) they use logical operators to operate on the binary solutions. Computational results on two knapsack problem (KP and MKP) instances show the effectiveness of the proposed algorithms and their ability to achieve good and promising solutions.
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAIN
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure and ensure a persistent presence on a compromised host. Amongst the various DDNS techniques, the Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and the weighted scores of the domain names. The approach's feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmically generated domain names. Findings from this study show that domain names made up of English characters "a-z" achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one-million list of domain names, only 15% of the domain names were treated as non-human generated.
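A frequency-analysis score of this kind can be sketched as follows. The letter-frequency table and the scaling factor are assumptions chosen for illustration; the paper's actual weighting table is not reproduced here.

```python
# Approximate relative frequencies of letters in English text (percent);
# these values are a standard reference table, not the paper's weights.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.10, 'z': 0.07,
}

def weighted_score(domain):
    """Average letter-frequency weight of a domain label, scaled by 10
    (an arbitrary illustrative scale). Low scores indicate unusual
    character distributions typical of algorithmically generated names."""
    letters = [c for c in domain.lower() if c.isalpha()]
    if not letters:
        return 0.0
    return 10 * sum(ENGLISH_FREQ[c] for c in letters) / len(letters)

print(weighted_score("google"))   # common letters, higher score
print(weighted_score("xqzvkw"))   # rare letters, low score
```

With this toy scaling, a dictionary-like label such as "google" scores well above 45 while a random-looking label such as "xqzvkw" falls far below it, mirroring the thresholding idea described in the abstract.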
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...
The document proposes a blockchain-based digital currency and streaming platform called GoMAA to address issues of piracy in the online music streaming industry. Key points:
- GoMAA would use a digital token on the iMediaStreams blockchain to enable secure dissemination and tracking of streamed content. Content owners could control access and track consumption of released content.
- Original media files would be converted to a Secure Portable Streaming (SPS) format, embedding watermarks and smart contract data to indicate ownership and enable validation on the blockchain.
- A browser plugin would provide wallets for fans to collect GoMAA tokens as rewards for consuming content, incentivizing participation and addressing royalty discrepancies by recording
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEM
This document discusses the importance of verb suffix mapping in discourse translation from English to Telugu. It explains that after anaphora resolution, the verbs must be changed to agree with the gender, number, and person features of the subject or anaphoric pronoun. Verbs in Telugu inflect based on these features, while verbs in English only inflect based on number and person. Several examples are provided that demonstrate how the Telugu verb changes based on whether the subject or pronoun is masculine, feminine, neuter, singular or plural. Proper verb suffix mapping is essential for generating natural and coherent translations while preserving the context and meaning of the original discourse.
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...
In this paper, based on the definition of conformable fractional derivative, the functional
variable method (FVM) is proposed to seek the exact traveling wave solutions of two higherdimensional
space-time fractional KdV-type equations in mathematical physics, namely the
(3+1)-dimensional space–time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-
dimensional space–time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony
(GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which
contain kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave
solutions, have many potential applications in mathematical physics and engineering. The
simplicity and reliability of the proposed method is verified.
AUTOMATED PENETRATION TESTING: AN OVERVIEW
The document discusses automated penetration testing and provides an overview. It compares manual and automated penetration testing, noting that automated testing allows for faster, more standardized and repeatable tests but has limitations in developing new exploits. It also reviews some current automated penetration testing methodologies and tools, including those using HTTP/TCP/IP attacks, linking common scanning tools, a Python-based tool targeting databases, and one using POMDPs for multi-step penetration test planning under uncertainty. The document concludes that automated testing is more efficient than manual for known vulnerabilities but cannot replace manual testing for discovering new exploits.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORK
Since the mid-1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing attention from neuroscientists and computer scientists, since it opens a new window to explore the functional network of the human brain with relatively high resolution. The BOLD technique provides an almost accurate state of the brain. Past research proves that neurological diseases damage brain network interactions, protein-protein interactions and gene-gene interactions. A number of neurological research papers also analyse the relationships among the damaged parts. Computational methods, especially machine learning techniques, can produce such classifications. In this paper we used the OASIS fMRI dataset of patients affected by Alzheimer's disease together with a dataset of normal patients. After properly processing the fMRI data, we use the processed data to form classifier models using SVM (Support Vector Machine), KNN (K-nearest neighbour) and Naïve Bayes. We also compare the accuracy of our proposed method with existing methods. In future, we will try other combinations of methods for better accuracy.
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...
The document proposes a new validation method for fuzzy association rules based on three steps: (1) applying the EFAR-PN algorithm to extract a generic base of non-redundant fuzzy association rules using fuzzy formal concept analysis, (2) categorizing the extracted rules into groups, and (3) evaluating the relevance of the rules using structural equation modeling, specifically partial least squares. The method aims to address issues with existing fuzzy association rule extraction algorithms such as large numbers of extracted rules, redundancy, and difficulties with manual validation.
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATA
In many applications of data mining, class imbalance is noticed when examples in one class are overrepresented. Traditional classifiers result in poor accuracy on the minority class due to the class imbalance. Further, the presence of within-class imbalance, where classes are composed of multiple sub-concepts with different numbers of examples, also affects the performance of a classifier. In this paper, we propose an oversampling technique that handles between-class and within-class imbalance simultaneously and also takes into consideration the generalization ability in data space. The proposed method is based on two steps: performing Model Based Clustering with respect to classes to identify the sub-concepts, and then computing the separating hyperplane based on equal posterior probability between the classes. The proposed method is tested on 10 publicly available data sets and the results show that the proposed method is statistically superior to other existing oversampling methods.
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCH
Data collection is an essential, but manpower intensive procedure in ecological research. An
algorithm was developed by the author which incorporated two important computer vision
techniques to automate data cataloging for butterfly measurements. Optical Character
Recognition is used for character recognition and Contour Detection is used for image processing.
Proper pre-processing is first done on the images to improve accuracy. Although
there are limitations to Tesseract’s detection of certain fonts, overall, it can successfully identify
words of basic fonts. Contour detection is an advanced technique that can be utilized to
measure an image. Shapes and mathematical calculations are crucial in determining the precise
location of the points on which to draw the body and forewing lines of the butterfly. Overall,
92% accuracy was achieved by the program for the set of butterflies measured.
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...
Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of the city
services including energy, transportation, health, and much more. They generate massive
volumes of structured and unstructured data on a daily basis. Also, social networks, such as
Twitter, Facebook, and Google+, are becoming a new source of real-time information in smart
cities. Social network users are acting as social sensors. These datasets are so large and complex that they are difficult to manage with conventional data management tools and methods. To become
valuable, this massive amount of data, known as 'big data,' needs to be processed and
comprehended to hold the promise of supporting a broad range of urban and smart cities
functions, including among others transportation, water, and energy consumption, pollution
surveillance, and smart city governance. In this work, we investigate how social media analytics
help to analyze smart city data collected from various social media sources, such as Twitter and
Facebook, to detect various events taking place in a smart city and identify the importance of
events and concerns of citizens regarding some events. A case scenario analyses the opinions of users concerning the traffic in the three largest cities in the UAE.
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGE
The anonymity of social networks makes them attractive for hate speakers to mask their criminal activities online, posing a challenge to the world and in particular to Ethiopia. With this ever-increasing volume of social media data, hate speech identification becomes a challenge in aggravating conflict between citizens of nations. The high rate of production has made it difficult to collect, store and analyze such big data using traditional detection methods. This paper proposes the application of Apache Spark in hate speech detection to reduce these challenges. The authors developed an Apache Spark based model to classify Amharic Facebook posts and comments into hate and not hate. The authors employed Random Forest and Naïve Bayes for learning and Word2Vec and TF-IDF for feature selection. Tested by 10-fold cross-validation, the model based on Word2Vec embeddings performed best with 79.83% accuracy. The proposed method achieves a promising result with the unique features of Spark for big data.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXT
This article presents Part of Speech tagging for Nepali text using General Regression Neural
Network (GRNN). The corpus is divided into two parts viz. training and testing. The network is
trained and validated on both the training and testing data. It is observed that 96.13% of words are correctly tagged on the training set, whereas 74.38% of words are tagged correctly on the testing data set using GRNN. The result is compared with the traditional Viterbi algorithm based on the
Hidden Markov Model. The Viterbi algorithm yields 97.2% and 40% classification accuracies on the training and testing data sets respectively. The GRNN-based POS tagger is more consistent than the traditional Viterbi decoding technique.
LAND USE, LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UP (RAHUL)
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and concern. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.

Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels.
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur naturally.
How to Make a Field Mandatory in Odoo 17 (Celine George)
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
Leveraging Generative AI to Drive Nonprofit Innovation (TechSoup)
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Services experts provided customer-specific use cases and dove into low/no-code tools that are quick and easy to deploy through Amazon Web Services (AWS).
Philippine Edukasyong Pantahanan at Pangkabuhayan (EPP) Curriculum (MJDuyan)
(TLE 100) (Lesson 1) - Prelims
Discuss the EPP Curriculum in the Philippines:
- Understand the goals and objectives of the Edukasyong Pantahanan at Pangkabuhayan (EPP) curriculum, recognizing its importance in fostering practical life skills and values among students. Students will also be able to identify the key components and subjects covered, such as agriculture, home economics, industrial arts, and information and communication technology.
Explain the Nature and Scope of an Entrepreneur:
-Define entrepreneurship, distinguishing it from general business activities by emphasizing its focus on innovation, risk-taking, and value creation. Students will describe the characteristics and traits of successful entrepreneurs, including their roles and responsibilities, and discuss the broader economic and social impacts of entrepreneurial activities on both local and global scales.
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and... (PECB)
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
This presentation was provided by Racquel Jemison, Ph.D., Christina MacLaughlin, Ph.D., and Paulomi Majumder, Ph.D., all of the American Chemical Society, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two, "Expanding Pathways to Publishing Careers," was held June 13, 2024.
Temple of Asclepius in Thrace. Excavation Results (Krassimira Luka)
The temple and the sanctuary around were dedicated to Asklepios Zmidrenus. This name has been known since 1875 when an inscription dedicated to him was discovered in Rome. The inscription is dated in 227 AD and was left by soldiers originating from the city of Philippopolis (modern Plovdiv).
264 Computer Science & Information Technology (CS & IT)
increasing video resolutions and the increasing demand for real-time encoding require the use of
faster processors. However, power consumption should be kept to a minimum.
1.1. Motion Estimation
Motion estimation is part of the 'inter coding' technique. Inter coding refers to a mechanism of finding the correlation between two frames (still images) that are not far apart in order of occurrence, one called the reference frame and the other the current frame, and then encoding information that is a function of this correlation instead of the frame itself [4]. Motion estimation is the basis of inter coding, which exploits temporal redundancy between video frames to achieve massive compression of visual information, as shown in Fig. 1. Motion estimation is part of the prediction step. The vector between the position of the macroblock in the current frame and the position of the best-matching block in the previous frame is called the Motion Vector.
Motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another, usually between adjacent frames in a video sequence. The motion vectors may be represented by a translational model or by many other models that can approximate the motion of a real video camera, such as rotation and translation in all three dimensions and zoom.
Figure 1. Block diagram of H.264 encoder
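As a concrete illustration, the block-matching idea behind motion estimation can be sketched as an exhaustive full search that minimizes SAD over a search window. This is a minimal pure-Python sketch with a tiny made-up frame; real encoders use fast search patterns and hardware acceleration.

```python
def sad(block_a, block_b):
    """Sum of Absolute Differences between two equal-size blocks."""
    return sum(abs(p - q)
               for row_a, row_b in zip(block_a, block_b)
               for p, q in zip(row_a, row_b))

def full_search(cur_block, ref_frame, top, left, search_range):
    """Test every candidate position within +/-search_range of (top, left)
    in the reference frame; return the motion vector (dy, dx) of the
    best-matching block together with its SAD."""
    n = len(cur_block)
    h, w = len(ref_frame), len(ref_frame[0])
    best_mv, best_cost = None, float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= h - n and 0 <= x <= w - n:   # stay inside the frame
                cand = [row[x:x + n] for row in ref_frame[y:y + n]]
                cost = sad(cur_block, cand)
                if cost < best_cost:
                    best_mv, best_cost = (dy, dx), cost
    return best_mv, best_cost

# Toy 8x8 reference frame with a distinctive 2x2 patch at (3, 4):
ref = [[0] * 8 for _ in range(8)]
ref[3][4], ref[3][5], ref[4][4], ref[4][5] = 9, 8, 7, 6
print(full_search([[9, 8], [7, 6]], ref, top=2, left=3, search_range=2))
# -> ((1, 1), 0): the block is found one pixel down and one right
```

The returned (dy, dx) pair is exactly the motion vector defined above, and the SAD of the winning candidate is the residual cost that inter coding then encodes.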
1.2. Target platforms
The software required is MATLAB version 7.0. MATLAB is a high-level technical computing
language and interactive environment for algorithm development, data visualization, data analysis
and numeric computation. Using the MATLAB product, you can solve technical computing
problems faster than with traditional programming languages such as C, C++ and FORTRAN.
One special characteristic of MATLAB 7.0 is that it has a direct link to the FPGA through the Xilinx blockset in Xilinx 10.1, via System Generator by creating a JTAG connection.
The hardware required is the FPGA target platform Spartan-3E with Xilinx 10.1. The Spartan-3E
family of Field-Programmable Gate Arrays (FPGAs) is specifically designed to meet the needs of
high-volume, cost-sensitive consumer electronics applications. Because of their exceptionally low
cost, Spartan-3E FPGAs are ideally suited to a wide range of consumer electronics applications,
including broadband access, home networking, display/projection, and digital television
equipment. The Spartan-3E family is a superior alternative to mask programmed ASICs. Xilinx
System Generator is a MATLAB Simulink blockset that facilitates the design and targeting of
Xilinx FPGAs. Within MATLAB, designers can both target a Xilinx FPGA hardware platform
and verify the hardware output, making it easier for an algorithm developer to make the leap into
hardware and a firmware developer to better grasp the algorithm.
The rest of this paper is organized as follows: Section 2 gives a brief overview of the related work. Section 3 presents the motion estimation algorithm. Scanning methods are discussed in Section 4. Co-design and co-simulation approaches are the focus of Section 5. Implementation results are discussed in Section 6. Section 7 gives the conclusion and suggestions for future work.
2. RELATED WORK
Complex data flow control is a major challenge in high-definition integer motion estimation hardware implementation [6]. The proposed multi-resolution motion estimation algorithm
reached a good balance between complexity and performance with rate distortion optimized
variable block size motion estimation support [20, 22]. Jointly optimizing an integer motion estimation (IME) engine for performance and complexity is the biggest challenge in HD video encoder architecture. System throughput burden, memory bandwidth, and hardware cost are huge challenges in HD FSBM-based ME architecture due to the large search window size requirement
[15]. In modern video coding standards, for example H.264, fractional-pixel motion estimation
(ME) is implemented. Many fast integer-pixel ME algorithms have been developed to reduce the
computational complexity of integer-pixel ME [16]. A parallel motion estimation algorithm based
on stream processing is discussed in [19]. Many approaches are explored to enable high data
reuse efficiency and computation parallelism for GPUs or other programmable processors. In [5],
a novel prediction method is introduced which is based on multiple search centers, which can
efficiently improve the prediction accuracy. In a typical video coding system, the computational
complexity of ME is approximately 50% to 90% of the total. Fast algorithms based on statistical learning have been proposed to reduce the computational cost of the three main components of the H.264 encoder, i.e., inter-mode decision, multi-reference motion estimation (ME), and intra-mode prediction [7].
Since the execution speed is a major challenge for real time applications, several methods have
been proposed for fast intermode decision in H.264 recently [9]. In contrast to intra prediction,
the partition of a MB into several smaller-size blocks for better ME is called intermode decision.
In [1, 8], not only the rate distortion performance is considered but also its implementation
complexity using an efficient hardware/software Co-design.
Mode decision is a process such that for each block-size, bit-rate and distortion are calculated by
actually encoding and decoding the video. Therefore, the encoder can achieve the best Rate
Distortion (RD) performance, at the expense of calculation complexity. Intra prediction uses previously decoded, spatially local macroblocks to predict the next macroblock, and it works well
for low-detail images [1, 22]. The combination of using only one reference frame, 16×16 block
size, the de-blocking filter, and turning off the RDO option, achieves the best balance between
computation complexity and picture quality [22]. The tests mainly aimed at finding how rate-
distortion optimization and different block size partitions affect the compression performance of
the transcoded videos. H.264/AVC employs rate distortion optimization (RDO) technique to
determine the optimal mode from a set of coding modes with different block sizes. On the
contrary, selecting a smaller block size usually results in a smaller residual [14]. If RDO calculation is performed on each possible coding mode, its extremely high computational complexity makes it a bottleneck for a real-time implementation of the H.264/AVC encoder.
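The RDO trade-off described above amounts to minimizing a Lagrangian cost J = D + λR over the candidate coding modes. The toy sketch below uses made-up distortion/rate numbers for a single macroblock; the mode names and values are illustrative, not measured.

```python
# Hypothetical per-mode (distortion, rate-in-bits) measurements for
# one macroblock; these numbers are invented for illustration.
candidates = {
    "16x16": (120.0, 48),   # cheap in bits, higher distortion
    "8x8":   (70.0, 95),
    "4x4":   (52.0, 160),   # lowest distortion, most expensive in bits
}

def best_mode(measurements, lam):
    """Pick the coding mode minimizing the RD cost J = D + lambda * R."""
    return min(measurements,
               key=lambda m: measurements[m][0] + lam * measurements[m][1])

print(best_mode(candidates, lam=0.2))   # small lambda favors low distortion
print(best_mode(candidates, lam=2.0))   # large lambda favors low rate
```

The sketch shows why the decision is expensive in a real encoder: obtaining each (D, R) pair requires actually encoding and decoding the block in that mode, which is exactly the bottleneck the fast-mode-decision methods cited above try to avoid.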
3. MOTION ESTIMATION ALGORITHMS
The methods for finding motion vectors can be categorized into pixel-based (direct) methods and feature-based (indirect) methods. Evaluation metrics for direct methods are the Sum of Absolute Differences (SAD), the Sum of Squared Differences (SSD) and the Sum of Absolute Transformed Differences (SATD) [1]. Variable Block Size (VBS) ME allows different MVs for different sub-blocks and can achieve better matching for all sub-blocks and higher coding efficiency than Fixed
Block Size ME (FBSME). VBSME has good RD performance compared with FBSME, but it has
huge computational requirement and irregular memory access making it hard for efficient
hardware implementation. In [22], H.264 allows a 16 × 16 MB to be partitioned into seven kinds
of sub-blocks as shown in Fig 2.
Figure 2. Variable block size of H.264 encoder
4. SCANNING METHODS
Different scanning methods and search patterns are discussed in this section. 3SS yields a better speedup when compared to the FS and DS ME algorithms, taking a Leon3 uniprocessor video encoding system as the reference platform. The quality of fast ME algorithms has the following relation: DS ~ CDS ~ 4SS ~ HEXBS.
In Raster Scan, the search locations in the first row are scanned from left to right, followed by the
second row from left to right, and so on. Raster Scan, shown in Fig. 3(a), is effective at reusing
data horizontally with a relatively high data re-use ratio, but suffers from redundant loading. The
encoding consists of two phases: zigzag scanning, shown in Fig. 3(b), which converts the 2-D
image matrix into a 1-D vector, and share generation. The decoding likewise consists of two
phases: secret-image recovery, and inverse zigzag scanning to convert the 1-D vector back to a
2-D image matrix.
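The zigzag scanning and its inverse can be sketched as a generic anti-diagonal traversal of an N × N matrix; this is an illustration of the scan order only, not code from the cited scheme.

```python
# Zigzag scan: flatten an N x N matrix along anti-diagonals, alternating
# direction, plus the inverse operation restoring the matrix.

def _diagonal_rows(s, n):
    """Row indices on anti-diagonal s (rows r with r + c = s), in zigzag order."""
    lo, hi = max(0, s - n + 1), min(s, n - 1)
    return range(hi, lo - 1, -1) if s % 2 == 0 else range(lo, hi + 1)

def zigzag(mat):
    n = len(mat)
    return [mat[r][s - r] for s in range(2 * n - 1) for r in _diagonal_rows(s, n)]

def inverse_zigzag(vec, n):
    mat = [[0] * n for _ in range(n)]
    it = iter(vec)
    for s in range(2 * n - 1):
        for r in _diagonal_rows(s, n):
            mat[r][s - r] = next(it)
    return mat
```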
The data re-usability is improved slightly in some architectures by another scanning order called
Snake Scan, shown in Fig. 3(c). Snake Scan processes the first row from left to right, the
second row from right to left, the third row from left to right, and so on. A novel
scanning order called Smart Snake (SS) is proposed in [5], which achieves variable data re-use
ratios with minimum redundant data loading. The search window is divided into an array of non-
overlapping rectangular sub-regions that span the search window, as shown in Fig. 3(d). Within
each rectangular sub-region, Snake Scan is performed to achieve significantly higher data re-use.
After one sub-region has been searched, the scan moves into an adjacent region and Snake Scan
is applied again. In different sub-regions, Snake Scan may be performed from top to bottom (L1)
or from bottom to top (L2). It may start from the left and end at the right (L1, L2, L3), or start
from the right and end at the left (L4, L5, L6). It may be horizontal (L1, L2) or vertical (L3, L4).
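The difference between the Raster and Snake orders can be shown by simply enumerating search positions over an H × W area (a plain sketch; the data-reuse bookkeeping of the hardware architectures is not modelled):

```python
# Raster vs. Snake scan orders over an H x W search area.

def raster_order(h, w):
    """Every row left to right; each row change jumps back to column 0."""
    return [(x, y) for y in range(h) for x in range(w)]

def snake_order(h, w):
    """Alternate row direction so consecutive positions stay adjacent,
    which is what improves data re-use between neighbouring searches."""
    order = []
    for y in range(h):
        xs = range(w) if y % 2 == 0 else range(w - 1, -1, -1)
        order.extend((x, y) for x in xs)
    return order
```

Smart Snake would apply `snake_order` within each rectangular sub-region of the search window rather than across the whole window at once.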
(a) Raster Scan (b) Zigzag Scan
(c) Snake Scan (d) Smart Snake Scan
Figure 3. Scanning methods
5. CO-DESIGN AND CO-SIMULATION APPROACHES
The emphasis of co-design is on system specification, hardware/software partitioning,
architectural design, and the iteration between software and hardware as the design proceeds to
the next stage. A multi-core H.264 video encoder is proposed in [13, 19] by applying a novel
hardware/software co-design methodology suitable for implementing complex video coding
embedded systems. The hardware and software components of the system are designed together
to obtain the intended performance levels. At the hardware level, the designer must select the
system CPU, hardware accelerators, peripheral devices, memory, and the corresponding
interconnection structure. There are three methods for verifying such a system: represent all
subsystems in a single HDL; use a simulator that supports all the different HDLs used; or use a
different simulator for each subsystem and verify the integrated system using co-simulation.
Co-simulation is useful in HW/SW co-design. It can be done either by connecting two simulators
directly, known as direct coupling, or by using a co-simulation backplane. Co-simulation allows
designs to proceed in much shorter iterations while the functional behaviour is verified.
6. IMPLEMENTATION RESULTS
An important coding tool of H.264 is the variable block size matching algorithm for Motion
Estimation (ME). The motion estimator has two inputs: a macroblock from the current frame
and a search area from the previous frame. The residual of the current and reference frames is
simulated using the Xilinx blockset. The hardware/software co-simulation is implemented on a
Xilinx Spartan 3E FPGA by creating a JTAG co-simulation interface. The result is shown in Fig. 4.
Figure 4. Residual of Current & Reference frame with HW/SW Co-Simulation
Figure 5. Xilinx resource estimator for HW/SW Co-Simulation
The interface between the hardware and software environments is realized using System
Generator. The hardware utilization of the motion estimation is shown in Fig. 5. The current and
reference frames are matched using the Sum of Absolute Differences, an evaluation metric of the
direct-method motion estimation algorithms. The simulated result and the implemented
hardware/software co-simulation are shown in Fig. 6. The peak signal-to-noise ratio (PSNR)
between the two images, used as the quality measure, is 45.57 dB.
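The PSNR figure quoted above follows from PSNR = 10·log10(255²/MSE) for 8-bit images. The sketch below shows the formula only; the 45.57 dB value comes from the paper's own frames, not from this example.

```python
import math

# PSNR between two equally sized 8-bit grayscale images.

def psnr(img_a, img_b, peak=255.0):
    """10 * log10(peak^2 / MSE); infinite for identical images."""
    mse = sum((a - b) ** 2 for row_a, row_b in zip(img_a, img_b)
                           for a, b in zip(row_a, row_b))
    mse /= len(img_a) * len(img_a[0])
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)
```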
Figure 6. Residual of Current & reference frame using SAD with HW/SW Co-Simulation
7. CONCLUSION AND FUTURE WORK
Motion estimation in the H.264 encoder is simulated using MATLAB with the zigzag scanning
method. An ideal hardware/software partitioning tool automatically produces a set of high-quality
partitions in a short, predictable computation time while allowing the designer to interact. The
hardware/software co-simulation is implemented on the Xilinx Spartan 3E kit using System
Generator as the interface.
Development of the encoder is still a challenging issue, particularly for real-time applications. To
improve the speedup factor, power consumption, and rate distortion of the motion estimation
algorithm, HW/SW partitioning techniques can be applied. The software part of the H.264
encoder can be compiled with LabVIEW and executed in the LabVIEW Embedded Module on an
ARM processor to reduce execution time. The most computationally intensive part of the H.264
encoder is mapped to hardware, for which performance must be improved and power
consumption reduced. At the same time, the design can be extended to produce better
compressed image quality at a lower bit rate.
REFERENCES
[1] Zhuo Zhao and Ping Liang: “A Statistical Analysis of H.264/AVC FME Mode Reduction”. In: IEEE
Transactions on Circuits and Systems for Video Technology (2011).
[2] Huitao Gu, Shuming Chen, Shuwei Sun, Shenggang Chen, Xiaowen Chen: “Multiple Search Centers
Based Fast Motion Estimation Algorithm for H.264/AVC”. In: The 11th International Conference on
Parallel and Distributed Computing, Applications and Technologies (2011).
[3] Zhenyu Liu, Junwei Zhou, Dongsheng Wang, and Takeshi Ikenaga: “Register Length Analysis and
VLSI Optimization of VBS Hadamard Transform in H.264/AVC” (2011).
[4] Chen-Kuo Chiang, Wei-Hau Pan, Chiuan Hwang, Shin-Shan Zhuang, and Shang-Hong Lai: “Fast
H.264 Encoding Based on Statistical Learning” (2011).
[5] Xing Wen, Lu Fang and Jiali Li: “Novel RD-Optimized VBSME with Matching Highly Data
Re-Usable Hardware Architecture”. In: IEEE Transactions on Circuits and Systems for Video
Technology, Vol. 21, No. 2, pp. 206–219 (2011).
[6] Meihua Gu, Ningmei Yu, Lei Zhu, Wenhua: “High Throughput and Cost Efficient VLSI
Architecture of Integer Motion Estimation for H.264/AVC”. In: Journal of Computational
Information Systems, pp. 1310–1318 (2011).
[7] Xiaomin Wu, Weizhang Xu, Nanhao Zhu, Zhanxin Yang: “A Fast Motion Estimation Algorithm
for H.264”. In: International Conference on Signal Acquisition and Processing (2010).
[8] Haithem Chaouch, Salah Dhahri, Abdelkrim Zitouni and Rached Tourki: “Low Power Architecture
of Motion Estimation and Efficient Intra Prediction Based on Hardware Design for H.264”. In:
International Conference on Design & Technology of Integrated Systems in Nanoscale Era (2010).
[9] Zhi Liu, Liquan Shen, and Zhaoyang Zhang: “An Efficient Intermode Decision Algorithm Based on
Motion Homogeneity for H.264/AVC” (2010).
[10] Haibing Yin, Huizhu Jia, Honggang Qi, Xianghu Ji, Xiaodong Xie, and Wen Gao: “A Hardware-
Efficient Multi-Resolution Block Matching Algorithm and Its VLSI Architecture for High Definition
MPEG-Like Video Encoders” (2010).
[11] Chih-Hung Kuo, Li-Chuan Chang, Kuan-Wei Fan, and Bin-Da Liu: “Hardware/Software Codesign
of a Low-Cost Rate Control Scheme for H.264/AVC” (2010).
[12] Heng-Yao Lin, Kuan-Hsien Wu, Bin-Da Liu, and Jar-Ferr Yang: “An Efficient VLSI Architecture
for Transform-Based Intra Prediction in H.264/AVC” (2010).
[13] Tiago Dias, Nuno Roma, Leonel Sousa: “Hardware/Software Co-Design Of H.264/AVC Encoders
For Multi-Core Embedded Systems” (2010).
[14] Qiang Tang and Panos Nasiopoulos: “Efficient Motion Re-Estimation with Rate-Distortion
Optimization for MPEG-2 to H.264/AVC Transcoding” (2010).
[15] G. A. Ruiz and J. A. Michell: “An Efficient VLSI Architecture of Fractional Motion Estimation in
H.264 for HDTV.” In: J Sign Process Syst, Springer (2010).
[16] Ka-Ho Ng, Lai-Man Po, Shen-Yuan Li, Ka-Man Wong and Li-Ping Wang: “A Linear Prediction
Based Fractional-Pixel Motion Estimation Algorithm” (2009).
[17] Camille Mazataud, Benny Bing: “A Practical Survey of H.264 Capabilities”. In: Seventh Annual
Communication Networks and Services Research Conference, pp 25–32 (2009).
[18] L. Smit, G. Rauwerda, A. Molderink, P. Wolkotte, and G. Smit: “Implementation of a 2-D 8×8 IDCT
on the reconfigurable Montium core”. In: International Conference on Field Programmable Logic and
Applications, pp 562–566 (2007).
[19] C.-Y. Lee, Y.-C. Lin, C.-L. Wu, C.-H. Chang, Y.-M. Tsao, and S.-Y. Chien: “Multi Pass and Frame
Parallel algorithms of Motion Estimation in H.264/AVC for generic GPU”. In: International
Conference on Multimedia and Expo, pp 1603–1606 (2007).
[20] Y. Song, Z. Liu, T. Ikenaga, and S. Goto: “Ultra low complexity fast Variable Block Size Motion
Estimation algorithm in H.264/AVC”. In: IEEE International Conference on Multimedia and Expo,
pp. 376–379 (2007).
[21] Nukhet Ozbek, Turhan Tunali: “A Survey on the H.264/AVC Standard”. In: Turk J Elec Engin,
Vol. 13, No. 3 (2005).
[22] Li Zhang, Wen Gao: “Improved FFSBM Algorithm and its VLSI Architecture For Variable Block
Size Motion Estimation Of H.264”. In: Proceedings of 2005 International Symposium on Intelligent
Signal Processing and Communication Systems, (2005).
Authors
Valarmathi received the B.E. degree in Electronics and Communication Engineering from
Karpaga Vinayaga College of Engineering & Technology, Kanchipuram. She is currently
pursuing a Master's degree with a specialization in Embedded Systems at the same institution,
affiliated to Anna University of Technology, Chennai. She is currently working on a project
based on the Motion Estimation algorithm in the H.264/AVC encoder.
Vani received the B.E. degree in Electronics and Communication Engineering from Bharathiar
University, Coimbatore, and the M.E. degree in Communication Systems from Anna University,
Chennai. She is currently pursuing a Ph.D. at Anna University of Technology, Chennai, and is
also working as an Assistant Professor in the Department of Electronics and Communication
Engineering, Meenakshi College of Engineering, Chennai.
Sangeetha Marikkannan, born in 1975 in Tamilnadu, India, received her B.E. degree from
Bharathidasan University, Trichy, in 1996 and her M.E. degree from the University of Madras
in 1999. She is currently with the Department of Electronics and Communication Engineering
at Karpaga Vinayaga College of Engineering and Technology, and is pursuing a Ph.D. degree
at Anna University, Chennai, India. She is a member of the IEEE and ISTE. Her research
interests are Hardware/Software Co-design and High-Level Synthesis.