Sybian Technologies Pvt Ltd
Final Year Projects & Real Time live Projects
JAVA(All Domains)
DOTNET(All Domains)
ANDROID
EMBEDDED
VLSI
MATLAB
Project Support
Abstract, Diagrams, Review Details, Relevant Materials, Presentation,
Supporting Documents, Software E-Books,
Software Development Standards & Procedure
E-Book, Theory Classes, Lab Working Programs, Project Design & Implementation
24/7 lab session
Final Year Projects for BE, ME, B.Sc, M.Sc, B.Tech, BCA, MCA
PROJECT DOMAIN:
Cloud Computing
Networking
Network Security
Parallel and Distributed Systems
Data Mining
Mobile Computing
Service Computing
Software Engineering
Image Processing
Bio Medical / Medical Imaging
Contact Details:
Sybian Technologies Pvt Ltd,
No. 33/10, Meenakshi Sundaram Building,
Sivaji Street,
(Near T. Nagar Bus Terminus)
T. Nagar,
Chennai - 600 017
Ph: 044 42070551
Mobile No: 9790877889, 9003254624, 7708845605
Mail Id: sybianprojects@gmail.com, sunbeamvijay@yahoo.com
A new partial image encryption method for document images using variance base... (IJECEIAES)
The proposed method both partially and completely encrypts grayscale document images; complete encryption is performed as well so that performance can be compared with existing encryption methods. Partial encryption segments the image by quad-tree decomposition based on the variance of each image block: blocks with uniform pixel levels are treated as insignificant, the rest as significant. The pixels in the significant blocks are permuted with a 1D skew tent chaotic map, and the partially encrypted blocks are further permuted with a 2D Henon map to raise the security level before being fed into complete encryption. Complete encryption diffuses the partially encrypted image in two levels: the first modifies the pixels with a Bernoulli chaotic map, and the second establishes interdependency between the rows and columns of the first-level diffused image. Experiments on document images cover both partial and complete encryption, and the proposed scheme yields better results for both in terms of speed and resistance to statistical and dynamical attacks, ensuring better security than existing encryption schemes.
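As an illustrative sketch of the permutation step, a 1D skew tent map (x → x/p for x < p, (1−x)/(1−p) otherwise) can drive a pixel permutation by sorting its chaotic sequence. The parameters x0 and p below are hypothetical defaults, not the authors' values:

```python
def skew_tent_sequence(x0, p, n):
    """Iterate the skew tent map x -> x/p (x < p) else (1-x)/(1-p)."""
    seq, x = [], x0
    for _ in range(n):
        x = x / p if x < p else (1.0 - x) / (1.0 - p)
        seq.append(x)
    return seq

def permute(pixels, x0=0.37, p=0.61):
    """Permute a flat pixel list by the sort order of the chaotic sequence."""
    chaos = skew_tent_sequence(x0, p, len(pixels))
    order = sorted(range(len(pixels)), key=chaos.__getitem__)
    return [pixels[i] for i in order], order

def unpermute(scrambled, order):
    """Invert the permutation: put scrambled[j] back at position order[j]."""
    out = [0] * len(scrambled)
    for j, i in enumerate(order):
        out[i] = scrambled[j]
    return out
```

The same key (x0, p) regenerates the same order on the receiver side, so no permutation table needs to be transmitted.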
Corrosion Detection Using A.I: A Comparison of Standard Computer Vision Tech... (csandit)
In this paper we present a comparison between standard computer vision techniques and a Deep Learning approach for automatic metal corrosion (rust) detection. For the classic approach, a classification based on the number of pixels containing specific red components has been utilized; the code, written in Python, used OpenCV libraries to compute and categorize the images. For the Deep Learning approach, we chose Caffe, a powerful framework developed at the Berkeley Vision and Learning Center (BVLC). The test has been performed by classifying images and calculating the total accuracy for the two different approaches.
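A dependency-free sketch of the classic red-component rule described above (the paper uses OpenCV; the channel thresholds and rust fraction below are illustrative assumptions, not the paper's values):

```python
def rust_score(image):
    """Fraction of pixels whose red channel clearly dominates green and blue.
    `image` is a list of rows of (r, g, b) tuples. Thresholds are illustrative."""
    total = rusty = 0
    for row in image:
        for r, g, b in row:
            total += 1
            if r > 110 and r > 1.3 * g and r > 1.3 * b:
                rusty += 1
    return rusty / total if total else 0.0

def classify(image, threshold=0.05):
    """Label an image 'corroded' if enough pixels look rust-colored."""
    return "corroded" if rust_score(image) > threshold else "clean"
```

A real pipeline would work in a perceptual color space and filter noise, but the core decision is this pixel count against a threshold.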
Modified Skip Line Encoding for Binary Image Compression (idescitation)
Image compression is an important issue in Internet, mobile communication, digital library, digital photography, multimedia, teleconferencing and other applications. Application areas of image compression focus on the problem of optimizing storage space and transmission bandwidth. In this paper, a modified form of skip line encoding is proposed to further reduce the redundancy in the image. The performance is found to be better than plain skip-line encoding.
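The baseline skip-line idea that the paper modifies can be sketched as follows: a row identical to the previous one is replaced by a one-symbol skip flag (the "S" flag encoding here is an illustrative choice, not the paper's bitstream format):

```python
def skip_line_encode(rows):
    """Encode a binary image (list of rows): a row equal to the
    previous row is replaced by the skip symbol 'S'."""
    encoded, prev = [], None
    for row in rows:
        if row == prev:
            encoded.append("S")          # skip: identical to line above
        else:
            encoded.append(list(row))
        prev = row
    return encoded

def skip_line_decode(encoded):
    """Invert skip-line encoding by re-copying the previous row on 'S'."""
    rows, prev = [], None
    for item in encoded:
        row = prev if item == "S" else item
        rows.append(list(row))
        prev = row
    return rows
```

Documents and binary line art contain many repeated rows (blank lines, rules), which is why skipping whole lines already removes substantial redundancy.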
JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS... (ijma)
ABSTRACT
Image usage over the internet becomes more important each day: over 3 billion images are shared daily, raising concerns about how to protect image copyrights and how to improve the image-sharing experience. This paper proposes a new robust image watermarking algorithm based on compressed sensing (CS) and quantization index modulation (QIM) watermark embedding. The algorithm capitalizes on CS to compress and encrypt images jointly with entropy coding, the Arnold cat map, pseudo-random numbers and the Advanced Encryption Standard (AES), and works under the JPEG standard umbrella. Watermark embedding is done in three different locations inside the image using QIM, and those locations differ for each 8-by-8 image block. The combination of coefficients used for QIM watermark embedding is selected from a combinations table, generated together with the projection matrices using a 10-digit pseudo-random secret key SK1. After the quantization phase, the algorithm shuffles image blocks using Arnold's cat map with a 10-digit pseudo-random secret key SK2, followed by a unique method for splitting every 8x8 block into two unequal parts. Part one acts as the host for two QIM watermarks and then goes through an encoding phase using Run-Length Encoding (RLE) followed by Huffman encoding, while part two goes through sparse watermark embedding, a third QIM watermark embedding, a compression phase using CS, and finally Huffman encoding. The algorithm aims to combine image watermarking, compression and encryption in one algorithm while balancing how those capabilities work with each other to achieve significant improvement in all three respects.
Fifteen images commonly used in image-processing benchmarking were used to test the algorithm, and experiments show that it achieves robust watermarking jointly with encryption and compression under the JPEG standard framework.
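The QIM embedding used above can be sketched as quantizing a coefficient onto one of two interleaved lattices, one per watermark bit; the step size delta below is an assumed value, not the paper's:

```python
def qim_embed(c, bit, delta=8.0):
    """Quantize coefficient c onto the sub-lattice selected by `bit`
    (lattice points k*delta for bit 0, k*delta + delta/2 for bit 1)."""
    offset = bit * delta / 2.0
    return round((c - offset) / delta) * delta + offset

def qim_extract(c, delta=8.0):
    """Recover the bit by finding which sub-lattice is nearer."""
    d0 = abs(c - qim_embed(c, 0, delta))
    d1 = abs(c - qim_embed(c, 1, delta))
    return 0 if d0 <= d1 else 1
```

Larger delta makes the watermark more robust to noise at the cost of more distortion, which is the usual QIM trade-off.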
Image Compression Using Intra Prediction of H.264/AVC and Implement of Hiding... (ijsrd.com)
Current still-image compression techniques lack the required level of standardization and still leave room for improvement in compression rate, computation, and so on. This paper employs techniques from H.264/MPEG-4 Advanced Video Coding to improve still-image compression; the H.264/MPEG-4 standard promises much higher compression and quality than other existing standards such as MPEG-4 and H.263. The paper utilizes the intra-prediction approach of H.264/AVC and Huffman coding to improve the compression rate: each 4x4 block is predicted by choosing the best of the 9 prediction modes, selected by the SAE (Sum of Absolute Error) criterion. The paper also presents an image-hiding algorithm based on singular value decomposition, applied to the encoded bit-stream. Before embedding the secret image into the cover image, the residue of the cover image is first calculated and encoded using Huffman coding. At the decoder side, the secret image is extracted and the cover image is reconstructed with sufficient peak signal-to-noise ratio.
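The SAE-based mode selection can be sketched for a subset of the nine 4x4 modes (vertical, horizontal and DC shown here; the full standard defines nine, including diagonal modes):

```python
def predict(above, left, mode, n=4):
    """Predict an n x n block from the pixel row above and column to the
    left (three of the nine H.264 4x4 intra modes)."""
    if mode == "vertical":        # mode 0: copy the row above downwards
        return [list(above) for _ in range(n)]
    if mode == "horizontal":      # mode 1: copy the left column rightwards
        return [[left[r]] * n for r in range(n)]
    dc = round((sum(above) + sum(left)) / (2 * n))   # mode 2: DC average
    return [[dc] * n for _ in range(n)]

def sae(block, pred):
    """Sum of absolute error between a block and its prediction."""
    n = len(block)
    return sum(abs(block[r][c] - pred[r][c]) for r in range(n) for c in range(n))

def best_mode(block, above, left):
    """Pick the mode with the smallest SAE, as in the selection step above."""
    return min(("vertical", "horizontal", "dc"),
               key=lambda m: sae(block, predict(above, left, m)))
```

The encoder transmits only the winning mode index plus the (small) residual, which is where the compression gain comes from.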
Video Compression Algorithm Based on Frame Difference Approaches (ijsc)
The huge usage of digital multimedia via communications, wireless communications, Internet, Intranet and cellular mobile leads to uncontrollable growth of the data flowing through these media, and researchers have gone deep into developing efficient techniques in fields such as data, image and video compression. Video compression techniques and their applications in many areas (educational, agricultural, medical, ...) have recently made this one of the most active fields; the wavelet transform is an efficient basis for such compression. This work develops an efficient video compression approach based on frame-difference techniques, concentrating on the calculation of the near distance between frames (the frame difference). The selection of the meaningful frame depends on many factors such as compression performance, frame details, frame size and the near distance between frames. Three different approaches are applied for removing the frames with the lowest difference. Many videos are tested to ensure the efficiency of this technique, and good performance results have been obtained.
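The near-distance selection can be sketched as the mean absolute pixel difference between frames, keeping a frame only when it differs enough from the last kept frame (the threshold below is an assumed value, and the paper evaluates three removal strategies rather than this single one):

```python
def frame_distance(f1, f2):
    """Mean absolute pixel difference ('near distance') between two frames."""
    flat1 = [p for row in f1 for p in row]
    flat2 = [p for row in f2 for p in row]
    return sum(abs(a - b) for a, b in zip(flat1, flat2)) / len(flat1)

def select_frames(frames, threshold=2.0):
    """Drop frames too similar to the last kept frame; keep the rest."""
    kept = [frames[0]]
    for f in frames[1:]:
        if frame_distance(kept[-1], f) > threshold:
            kept.append(f)
    return kept
```

Dropped frames can be reconstructed at the decoder by repeating (or interpolating from) their kept neighbours, trading a small quality loss for a large bit-rate saving.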
EMPIRICAL STUDY OF ALGORITHMS AND TECHNIQUES IN VIDEO STEGANOGRAPHY (Journal For Research)
Steganography is the art and science of hiding important information inside graphics, text, cover files, etc. These techniques may be applied without fear of image destruction because they are integrated into the image itself, and the information can be text, audio or video. The purpose of steganography is covert communication: hiding a message from a third party or intruder. Steganography is often confused with cryptography because both are used to protect confidential information. Among the many types of steganography, video steganography is more reliable due to high-capacity images, greater data embedment, perceptual redundancy, etc. This paper surveys video steganography techniques and algorithms including the spatial domain, pseudo-random permutations, TPVD (tri-way pixel value differencing), motion vector techniques and video compression. Video compression, which uses modern coding techniques to reduce redundancy in video data, is also studied and analyzed: it operates on square groups of neighboring pixels, often called macroblocks, which are compared from one frame to the next so that only the differences within those blocks are sent. Generally, the motion field in video compression is assumed to be translational, with horizontal and vertical components denoted in vector form over the spatial variables of the underlying image, estimated with searches such as the three-step search. The study also discusses the evolution of video steganography techniques and algorithms over the years, their applications, and their merits and demerits. Further, an advanced video steganography algorithm (bit-exchange method) based on bit shifting and XOR operations on the secret message file has been studied and implemented.
The encrypted secret message is embedded in alternate bytes of the cover file, with the bits substituted into the LSB and LSB+3 bits. Finally, the simulation and evaluation of the above approach is performed using MATLAB tools.
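The LSB and LSB+3 substitution step can be sketched as follows; for simplicity this sketch writes to successive bytes rather than alternate ones, and omits the XOR pre-encryption the paper applies first:

```python
def embed_bits(cover, bits):
    """Write message bits into bit 0 (LSB) and bit 3 (LSB+3) of
    successive cover bytes; remaining bytes are left untouched."""
    out, it = list(cover), iter(bits)
    for i in range(len(out)):
        for pos in (0, 3):
            b = next(it, None)
            if b is None:
                return out
            out[i] = (out[i] & ~(1 << pos)) | (b << pos)
    return out

def extract_bits(stego, n):
    """Read n message bits back from bit 0 and bit 3 of each byte."""
    bits = []
    for byte in stego:
        for pos in (0, 3):
            if len(bits) == n:
                return bits
            bits.append((byte >> pos) & 1)
    return bits
```

Changing bits 0 and 3 perturbs each byte by at most 9, which is small enough to keep the cover perceptually unchanged in typical image data.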
Improved block based segmentation for JPEG compressed document images (eSAT Journals)
Abstract
Image compression aims to minimize the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. Compound image compression is normally based on three classification methods: object based, layer based and block based. This paper presents a block-based segmentation for visually lossless compression of scanned documents that contain not only photographic images but also text and graphics. In low bit-rate applications these images suffer from undesirable compression artifacts, especially document images. Existing methods can reduce these artifacts by post-processing, without changing the encoding process; some of these post-processing methods require classifying the encoded blocks into different categories.
Keywords: AC energy, Discrete Cosine Transform (DCT), JPEG, K-means clustering, Threshold value
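The AC-energy block classification suggested by the keywords can be sketched without an explicit DCT: for an orthonormal DCT, a block's AC energy equals the sum of squared deviations from the block mean (Parseval), so it measures how non-uniform the block is. The threshold below is an assumed value:

```python
def ac_energy(block):
    """AC energy of a block: total energy minus DC energy, which for an
    orthonormal DCT equals the sum of squared deviations from the mean."""
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat)

def classify_block(block, threshold=1000.0):
    """Separate busy blocks (text edges, detail) from smooth background."""
    return "active" if ac_energy(block) > threshold else "smooth"
```

In the paper's setting the per-block energies would be clustered (e.g. by k-means) rather than cut at one fixed threshold, but the feature itself is this.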
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in Engineering and Technology, bringing together scientists, academicians, field engineers, scholars and students of related fields.
Selection of intra prediction modes for intra frame coding in advanced video ... (eSAT Journals)
Abstract: This paper proposes the selection of intra-prediction modes for intra-frame coding in the Advanced Video Coding (AVC) standard using Matlab. There are nine prediction modes available for predicting an intra frame in AVC, but not all of them are required for every application. Intra prediction is the first process of the AVC standard: it predicts a macroblock by referring to its previous macroblocks to reduce spatial redundancy, but applying all the prediction modes greatly increases computational complexity at the encoder. In the proposed algorithm, all prediction modes (0-8) were applied to intra-frame prediction, but only a few (modes 0, 1, 2, 4 and 6) give good PSNR, high compression ratio and low bit rate. Of these, mode 2 gives the best PSNR, compression ratio and reduced bit rate, while modes 5, 7 and 8 give lower PSNR, lower compression ratio and increased bit rate compared to modes 0, 1, 2, 4 and 6. Simulation results are presented using Matlab: the PSNR, compression ratio and bit rate achieved for different quantization parameters of the mother-daughter and foreman frames are reported. Keywords: AVC, PSNR, CAVLC, Macroblock, Prediction modes.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Optimization of image compression and ciphering based on EZW techniques (TELKOMNIKA JOURNAL)
This paper presents the design and optimization of image compression and ciphering based on optimized embedded zerotree wavelet (EZW) techniques. Nowadays, image compression and ciphering have become particularly important for protected image storage and communication; the challenge is to apply compression and encryption together where image parameters such as quality and size are critical for secure transmission. A new technique for secure image storage and transmission is proposed: compression is achieved by remodeling the EZW scheme combined with the discrete cosine transform (DCT), and encryption XORs ten bits of the initial EZW threshold with random bits produced by a linear-feedback shift register (LFSR). The obtained results show that the suggested techniques provide an acceptable compression ratio, reduce the computational time for both compression and encryption, and give immunity against statistical and frequency attacks.
With the growth of digital media, modification and transfer of information has become very easy, so this work focuses on transferring data by hiding it in an image. A robust approach is achieved by using the skew tent map as the encryption/decryption algorithm at the sender and receiver sides. As an initial step the image is transformed into inverse S-order, creating some confusion for an intruder. The data hiding itself is performed with a modified histogram shifting method, so that the hidden information and the image can be recovered with no information loss. An investigation is performed on a genuine image dataset; the evaluated parameter values demonstrate that the proposed work maintains SNR, PSNR, throughput, data hiding execution time and extraction time while keeping the information highly secure.
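Baseline histogram shifting, which the work above modifies, can be sketched as follows: shift the histogram between the peak value and the next empty bin to free up one value, then encode one message bit at each peak-valued pixel. The sketch assumes an empty bin exists above the peak:

```python
def hist_shift_embed(pixels, bits):
    """Reversible embedding: shift values strictly between the peak and the
    next empty bin up by one, then mark each peak pixel with a message bit
    (peak -> bit 0, peak+1 -> bit 1)."""
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    peak = max(hist, key=hist.get)
    zero = min(v for v in range(256) if hist.get(v, 0) == 0 and v > peak)
    out, it = [], iter(bits)
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)            # make room next to the peak
        elif p == peak:
            out.append(p + next(it, 0))  # embed one bit
        else:
            out.append(p)
    return out, peak, zero

def hist_shift_extract(stego, peak, zero):
    """Recover the bits and losslessly restore the original pixels."""
    bits, pixels = [], []
    for p in stego:
        if p == peak:
            bits.append(0); pixels.append(peak)
        elif p == peak + 1:
            bits.append(1); pixels.append(peak)
        elif peak + 1 < p <= zero:
            pixels.append(p - 1)         # undo the shift
        else:
            pixels.append(p)
    return pixels, bits
```

Reversibility is the key property claimed in the abstract: both the message and the exact cover image are recovered.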
Rate Distortion Performance for Joint Source Channel Coding of JPEG image Ove... (CSCJournals)
This paper presents the rate-distortion behavior of a Joint Source Channel Coding (JSCC) scheme for still-image transmission. The focus is on DCT-based JPEG source coding and Rate Compatible Punctured Convolutional (RCPC) codes for transmission over an Additive White Gaussian Noise (AWGN) channel under the constraint of fixed transmission bandwidth. Information transmission involves a tradeoff between compression ratio and received image quality: the compressed stream is more susceptible to channel errors, so error control coding is used to minimize their effect, but there is a clear tradeoff between channel coding redundancy and source quality at a constant channel bit rate. This paper proposes a JSCC scheme based on Unequal Error Protection (UEP) for robust image transmission. Conventional error control schemes use Equal Error Protection (EEP), in which all information bits are equally protected; UEP schemes instead vary the amount of error protection according to the importance of the data, and the received image quality can be improved using UEP compared to EEP.
Secured Data Transmission Using Video Steganographic Scheme (IJERA Editor)
Steganography is the art of hiding information in ways that prevent the detection of hidden messages. Video steganography operates in the spatial and transform domains. Spatial-domain algorithms embed information directly in the cover image with no visual changes; they offer high steganographic capacity but weak robustness. Transform-domain algorithms embed the secret information in the transform space; they offer good stability but small capacity. Both kinds of algorithms are vulnerable to steganalysis. This paper proposes a new compressed-video steganographic scheme in which the data is hidden in the horizontal and vertical components of the motion vectors; the PSNR value is calculated to evaluate the quality of the video after the data hiding.
SECURE OMP BASED PATTERN RECOGNITION THAT SUPPORTS IMAGE COMPRESSION (sipij)
In this paper, we propose a secure Orthogonal Matching Pursuit (OMP) based pattern recognition scheme that also supports image compression. The secure OMP is a sparse coding algorithm that chooses atoms sequentially and calculates sparse coefficients from encrypted images; the encryption is carried out using a random unitary transform. The proposed scheme offers two prominent features. 1) It is capable of pattern recognition in the encrypted image domain: even if data leaks, privacy is maintained because the data remains encrypted. 2) It realizes Encryption-then-Compression (EtC) systems, where image encryption is conducted prior to compression. Pattern recognition can be carried out using a few sparse coefficients, and on the basis of the recognition results the scheme can compress selected images with high quality by estimating a sufficient number of sparse coefficients. We use the INRIA dataset to demonstrate its performance in detecting humans in images; the proposal is shown to realize human detection on encrypted images and to efficiently compress the images selected in the recognition stage.
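Plain OMP, which the secure variant runs on encrypted data, can be sketched for the special case of an orthonormal dictionary, where each step's coefficient is simply an inner product with no least-squares refit. A random unitary transform preserves inner products, which is why the same loop works in the encrypted domain:

```python
def omp_orthonormal(signal, dictionary, k):
    """OMP specialised to an orthonormal dictionary (rows are atoms):
    repeatedly pick the unused atom with the largest correlation to the
    residual, record the correlation as its coefficient, and subtract."""
    residual = list(signal)
    coeffs = {}
    for _ in range(k):
        best, best_c = None, 0.0
        for j, atom in enumerate(dictionary):
            if j in coeffs:
                continue
            c = sum(a * r for a, r in zip(atom, residual))
            if best is None or abs(c) > abs(best_c):
                best, best_c = j, c
        coeffs[best] = best_c
        residual = [r - best_c * a
                    for r, a in zip(residual, dictionary[best])]
    return coeffs
```

For a general (non-orthonormal) dictionary, OMP re-solves a least-squares problem over all chosen atoms at each step; the greedy atom choice is the same.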
FPGA Based Pattern Generation and Synchronization for High Speed Structured Li... (TELKOMNIKA JOURNAL)
Recently, structured light 3D imaging devices have gained keen attention due to their potential applications in robotics, industrial manufacturing and medical imaging. Most of these applications require high 3D precision yet high-speed image capturing for hard and/or soft real-time environments. This paper presents a method of high-speed image capturing for structured light 3D imaging sensors, with FPGA-based structured light pattern generation and projector-camera synchronization. The suggested setup reduces the time for pattern projection and camera triggering from the 100 ms required by conventional methods to 16 ms.
Uncompressed Image Steganography using BPCS: Survey and Analysis (IOSR Journals)
Abstract: Steganography is the art and science of hiding secret information in some carrier data without leaving any apparent evidence of data alteration. In the past, people used hidden tattoos, invisible ink or punched holes in paper to convey steganographic data; now, information is hidden in digital images, text, video and audio. This paper discusses existing BPCS (Bit Plane Complexity Segmentation) steganography techniques and presents some modifications. The BPCS technique exploits the characteristics of the human visual system: it allows a large capacity of embedded secret data and is highly customizable. The algorithm offers higher hiding capacity because it exploits the complex regions of each bit plane; BPCS has provided a much more effective method for obtaining around 50% capacity, since visual attacks do not suffice for its detection.
Keywords: BPCS, Data security, Information hiding, Steganography, Stego image
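The bit-plane complexity measure at the heart of BPCS can be sketched as the fraction of adjacent bit pairs that differ, relative to the maximum possible; the threshold alpha below is an assumed value (0.3 is commonly cited, but not taken from this survey):

```python
def complexity(block):
    """BPCS border complexity of a square binary block: the fraction of
    horizontally and vertically adjacent bit pairs that differ."""
    n = len(block)
    changes = 0
    for r in range(n):
        for c in range(n - 1):
            changes += block[r][c] ^ block[r][c + 1]   # horizontal pair
            changes += block[c][r] ^ block[c + 1][r]   # vertical pair
    return changes / (2 * n * (n - 1))

def is_embeddable(block, alpha=0.3):
    """Blocks above the threshold look noise-like and can carry data
    without visible change; smooth blocks must be left alone."""
    return complexity(block) >= alpha
```

Embedding replaces noise-like blocks of a bit plane with (suitably conjugated) secret-data blocks, which is why capacity scales with how much of the image is visually complex.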
Steganography is a good method for secretly communicating information during data transfer, and images are an appropriate carrier for protecting the hidden pieces. Several systems, such as color-image and grayscale-image steganography, store data in different ways; color images can hold very large amounts of secret data through their three main color modules. Such color modules include HSV (hue, saturation, value), RGB (red, green, blue), YCbCr (luminance and chrominance), YUV, YIQ, etc. This paper uses an unusual module to hide data: an adaptive procedure that can increase security when hiding a secret binary image in an RGB color image, implementing the steganography in the YCbCr module space. We performed Exclusive-OR (XOR) operations between the binary image and the RGB color image in the YCbCr space, so the byte stored in the 8-bit LSB is not the actual secret byte; rather, it is obtained by translating to another module space and applying the XOR procedure. The technique is applied to different groups of images, and the adaptive technique ensures good results: the peak signal-to-noise ratio (PSNR) and mean square error (MSE) values are good. When compared with our previous works and other existing techniques, it is shown to be the best in both error and message capacity; it is easy to model, simple to use, and provides strong security against unauthorized access.
Data Steganography for Optical Color Image Cryptosystems (CSCJournals)
In this paper, an optical color image cryptosystem with a data hiding scheme is proposed. In the proposed optical cryptosystem, a confidential color image is embedded into the host image of the same size. Then the stego-image is encrypted by using the double random phase encoding algorithm. The seeds to generate random phase data are hidden in the encrypted stego-image by a content-dependent and low distortion data embedding technique. The confidential image and secret data delivery is accomplished by hiding the image into the host image and embedding the data into the encrypted stego-image. Experimental results show that the proposed data steganographic cryptosystem provides large data hiding capacity and high reconstructed image quality.
Selective encryption is an attractive way to optimize time efficiency in the encryption process. This paper presents a novel selective encryption scheme based on the DCT transform with the AES algorithm. The basic idea of the DCT method is to decompose the image into 8×8 blocks and transform these blocks from the spatial domain to the frequency domain with the DCT. The DCT coefficients corresponding to the lower frequencies of each image block are then encrypted. The proposed cryptosystem is evaluated using various security and statistical analyses; the results show that the algorithm is strong against attacks and suitable for practical applications.
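The selective scheme above encrypts only the low-frequency DCT coefficients of each 8×8 block. A sketch of that structure, with two stated substitutions: a SHA-256 counter-mode keystream stands in for AES (which a crypto library would supply in the same role), and a simple i+j ordering approximates the zig-zag scan:

```python
import hashlib
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    m[0] /= np.sqrt(2)
    return m

def keystream(key, nbytes):
    """Stand-in keystream: SHA-256 in counter mode (the paper uses AES here)."""
    out, ctr = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:nbytes]

def mask_low_freq(coeffs, key, k=10):
    """XOR-mask the k lowest-frequency quantized coefficients (involutive)."""
    order = sorted(((i, j) for i in range(8) for j in range(8)),
                   key=lambda p: (p[0] + p[1], p[0]))   # low frequencies first
    ks = keystream(key, k)
    out = coeffs.copy()
    for n, (i, j) in enumerate(order[:k]):
        out[i, j] ^= ks[n]
    return out

def encrypt_block(block, key, k=10):
    """Transform an 8x8 block to the DCT domain, then mask its low frequencies."""
    D = dct_matrix()
    q = np.round(D @ block.astype(float) @ D.T).astype(np.int64)
    return mask_low_freq(q, key, k)
```

Since XOR with the same keystream is its own inverse, decryption applies `mask_low_freq` again before the inverse DCT.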
Harnessing the cloud for securely outsourcing large scale systems of linear e... (Muthu Samy)
Secret key extraction from wireless signal strength in real environments (Muthu Samy)
Achieving data privacy through secrecy views and null based virtual updates (Muthu Samy)
EMPIRICAL STUDY OF ALGORITHMS AND TECHNIQUES IN VIDEO STEGANOGRAPHY (Journal For Research)
Steganography is the art and science of hiding important information inside graphics, text, a cover file, and so on. These techniques can be applied without fear of image destruction because they are well integrated into the image. The information can be text, audio, or video. The purpose of steganography is covert communication: hiding a message from a third party or intruder. Steganography is often confused with cryptography because both are used to protect confidential information. Among the many types of steganography, video steganography is more reliable owing to the high capacity of the carrier, greater data embedment, perceptual redundancy, and other factors. This paper surveys various video steganography techniques and algorithms, including the spatial domain, pseudorandom permutations, TPVD (tri-way pixel value differencing), the motion vector technique, and video compression. Video compression, which uses modern coding techniques to reduce redundancy in video data, is also studied and analyzed. Video compression operates on square groups of neighboring pixels, often called macroblocks; these blocks are compared from one frame to the next, and the compression code sends only the differences within them. The motion field in video compression is generally assumed to be translational, with horizontal and vertical components denoted in vector form over the spatial variables of the underlying image, and it is estimated with methods such as the three-step search. The study also discusses the evolution of video steganography techniques and algorithms over the years, their applications, and their subsequent merits and demerits. Further, an advanced video steganography algorithm (bit exchange method) based on bit shifting and the XOR operation on the secret message file is studied and implemented.
The encrypted secret message is embedded in alternate bytes of the cover file, with the bits substituted into the LSB and LSB+3 bits. Finally, the above-mentioned approach is simulated and evaluated using MATLAB tools.
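The embedding rule described above (alternate bytes, bits placed at LSB and LSB+3) can be sketched directly; the pairing of two message bits per carrier byte is an assumption consistent with the abstract's wording:

```python
def embed_bits(cover, bits):
    """Place message bits in bit 0 (LSB) and bit 3 (LSB+3) of alternate cover bytes."""
    out = bytearray(cover)
    pairs = [bits[i:i + 2] for i in range(0, len(bits), 2)]
    for n, pair in enumerate(pairs):
        idx = 2 * n                      # alternate bytes: 0, 2, 4, ...
        b = out[idx] & ~0b00001001       # clear bit 0 and bit 3
        b |= pair[0]                     # first message bit -> LSB
        if len(pair) > 1:
            b |= pair[1] << 3            # second message bit -> LSB+3
        out[idx] = b
    return bytes(out)

def extract_bits(stego, n_bits):
    """Read the bits back from the same positions in the same byte order."""
    bits, idx = [], 0
    while len(bits) < n_bits:
        bits.append(stego[idx] & 1)
        if len(bits) < n_bits:
            bits.append((stego[idx] >> 3) & 1)
        idx += 2
    return bits
```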
Improved block based segmentation for JPEG compressed document images (eSAT Journals)
Abstract
Image compression aims to minimize the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. Compound image compression is normally based on three classification methods: object based, layer based, and block based. This paper presents a block-based segmentation for visually lossless compression of scanned documents that contain not only photographic images but also text and graphics. In low bit rate applications, such documents suffer from undesirable compression artifacts, especially document images. Existing methods can reduce these artifacts by applying post-processing without changing the encoding process; some of these post-processing methods require classifying the encoded blocks into different categories.
Keywords- AC energy, Discrete Cosine Transform (DCT), JPEG, K-means clustering, Threshold value
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Selection of intra prediction modes for intra frame coding in advanced video ... (eSAT Journals)
Abstract: This paper proposes the selection of intra prediction modes for intra frame coding in the Advanced Video Coding (AVC) standard using Matlab. The proposed algorithm selects prediction modes for intra frame coding. There are nine prediction modes available for predicting an intra frame in AVC, but not all of them are required for every application. Intra prediction is the first process of the AVC standard: it predicts a macroblock by referring to its previously coded neighbors to reduce spatial redundancy, and applying all prediction modes to predict an intra frame increases the computational complexity at the AVC encoder. In the proposed algorithm, all prediction modes (0-8) were applied to intra frame prediction, but only a few modes, namely mode 0, mode 1, mode 2, mode 4, and mode 6, give good PSNR, high compression ratio, and low bit rate. Of these, mode 2 gives the best PSNR, compression ratio, and reduced bit rate, whereas mode 5, mode 7, and mode 8 give lower PSNR, lower compression ratio, and increased bit rate compared with modes 0, 1, 2, 4, and 6. Simulation results obtained with Matlab are presented, including the PSNR, compression ratio, and bit rate achieved for different quantization parameters on the mother-daughter and foreman frames. Keywords: AVC, PSNR, CAVLC, Macroblock, Prediction modes.
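The mode-selection idea above reduces to predicting a block from its neighbors and picking the mode with the lowest cost. A sketch of three of the nine 4x4 intra modes (0 vertical, 1 horizontal, 2 DC, following the H.264 definitions) with a SAD-based mode decision; the remaining directional modes are omitted:

```python
import numpy as np

def predict_4x4(mode, top, left):
    """H.264-style 4x4 intra prediction for modes 0 (vertical), 1 (horizontal), 2 (DC)."""
    if mode == 0:                         # vertical: copy the row above downwards
        return np.tile(top, (4, 1))
    if mode == 1:                         # horizontal: copy the left column rightwards
        return np.tile(left[:, None], (1, 4))
    if mode == 2:                         # DC: rounded mean of all eight neighbours
        return np.full((4, 4), (top.sum() + left.sum() + 4) // 8)
    raise ValueError("only modes 0-2 are sketched here")

def best_mode(block, top, left):
    """Pick the mode with the smallest sum of absolute differences (SAD)."""
    sads = {m: np.abs(block - predict_4x4(m, top, left)).sum() for m in (0, 1, 2)}
    return min(sads, key=sads.get)
```

An encoder restricting itself to the "good" modes reported in the abstract would simply shrink the candidate set passed to the mode decision.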
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Optimization of image compression and ciphering based on EZW techniques (TELKOMNIKA JOURNAL)
This paper presents the design and optimization of image compression and ciphering based on optimized embedded zerotree wavelet (EZW) techniques. Compression and ciphering of images have become particularly important for protected image storage and communication. The challenge lies in applying both compression and encryption where image parameters such as quality and size are critical to secure transmission. A new technique for secure image storage and transmission is proposed: compression is achieved by remodeling the EZW scheme combined with the discrete cosine transform (DCT), and encryption XORs the ten bits of the initial EZW threshold with random bits produced by a linear-feedback shift register (LFSR). The results show that the suggested techniques provide an acceptable compression ratio, reduce the computational time for both compression and encryption, and offer immunity against statistical and frequency attacks.
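The encryption step above XORs the ten bits of the initial EZW threshold with LFSR output. A sketch of that step; the seed and tap positions are illustrative placeholders, not the paper's parameters:

```python
def lfsr_bits(seed, taps, n):
    """Fibonacci LFSR: the bits at 'taps' are XORed to form the feedback bit."""
    state, width, out = seed, max(taps) + 1, []
    for _ in range(n):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))
    return out

def mask_threshold(threshold, seed=0b1010011010, taps=(9, 6)):
    """XOR the 10-bit EZW initial threshold with 10 LFSR keystream bits (involutive)."""
    bits = lfsr_bits(seed, taps, 10)
    key = 0
    for i, b in enumerate(bits):
        key |= b << i
    return threshold ^ key
```

Applying `mask_threshold` twice with the same seed restores the original threshold, which is what lets the receiver undo the cipher.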
With the growth of digital media, the modification and transfer of information have become very easy, so this work focuses on transferring data hidden in an image. A robust approach is achieved by using the skew tent map as the encryption/decryption algorithm at the sender and receiver sides. As the initial step, the image is transformed into inverse S-order, creating some confusion for an intruder. The data hiding itself is done with a modified histogram shifting method. With this approach, the hidden information and the image can be recovered with no information loss. An investigation is performed on a genuine image dataset, and the assessment parameter values demonstrate that the proposed work maintains the SNR, PSNR, throughput, data hiding execution time, and extraction time while keeping the information highly secure.
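The skew tent map used above is a piecewise-linear chaotic map; one common way to turn it into an encryption primitive is to rank its trajectory to obtain a key-dependent permutation. A sketch under that assumption (the initial value and breakpoint stand in for the key; the paper's exact construction may differ):

```python
def skew_tent(x, p):
    """Skew tent map on (0,1): rises on [0,p), falls on [p,1]."""
    return x / p if x < p else (1 - x) / (1 - p)

def chaotic_permutation(n, x0=0.37, p=0.61, burn_in=100):
    """Rank a chaotic trajectory to get a key-dependent permutation of 0..n-1."""
    x = x0
    for _ in range(burn_in):              # discard the transient so the output
        x = skew_tent(x, p)               # depends strongly on the key (x0, p)
    seq = []
    for _ in range(n):
        x = skew_tent(x, p)
        seq.append(x)
    return sorted(range(n), key=lambda i: seq[i])

def apply_perm(data, perm):
    """Scramble: position i of the output takes element perm[i] of the input."""
    return [data[i] for i in perm]

def invert_perm(data, perm):
    """Unscramble: route each element back to its original position."""
    out = [None] * len(perm)
    for pos, i in enumerate(perm):
        out[i] = data[pos]
    return out
```

The receiver regenerates the same permutation from the shared key and inverts it, which is why recovery is lossless.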
Rate Distortion Performance for Joint Source Channel Coding of JPEG image Ove... (CSCJournals)
This paper presents the rate distortion behavior of a Joint Source Channel Coding (JSCC) scheme for still image transmission. The focus is on DCT-based JPEG source coding and Rate Compatible Punctured Convolutional (RCPC) codes for transmission over an Additive White Gaussian Noise (AWGN) channel under a fixed transmission bandwidth constraint. Information transmission involves a tradeoff between compression ratio and received image quality. The compressed stream is highly susceptible to channel errors, so error control coding is used alongside the image data to minimize their effect; however, at a constant channel bit rate there is a clear tradeoff between channel coding redundancy and source quality. This paper proposes a JSCC scheme based on Unequal Error Protection (UEP) for robust image transmission. Conventional error control schemes use Equal Error Protection (EEP), in which all information bits are equally protected; UEP schemes instead vary the amount of error protection according to the importance of the data, and the received image quality can be improved using UEP compared to EEP.
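The UEP principle above (stronger codes for more important bits) can be illustrated with the simplest possible channel code, a repetition code with majority-vote decoding; the paper uses RCPC codes, which this sketch does not attempt to reproduce:

```python
def protect(bits, important_upto, strong_rep=3):
    """Unequal error protection: repeat the important leading bits, leave the rest bare."""
    coded = []
    for i, b in enumerate(bits):
        coded.extend([b] * (strong_rep if i < important_upto else 1))
    return coded

def recover(coded, n_bits, important_upto, strong_rep=3):
    """Majority-vote decode each group; unprotected bits pass through as-is."""
    out, pos = [], 0
    for i in range(n_bits):
        rep = strong_rep if i < important_upto else 1
        chunk = coded[pos:pos + rep]
        out.append(1 if sum(chunk) * 2 > len(chunk) else 0)
        pos += rep
    return out
```

A single channel error inside a protected group is corrected, while the same error on an unprotected bit is not; that asymmetry is the whole point of UEP.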
Secured Data Transmission Using Video Steganographic Scheme (IJERA Editor)
Steganography is the art of hiding information in ways that prevent the detection of hidden messages. Video steganography operates in the spatial and transform domains. Spatial domain algorithms embed information directly in the cover image with no visible changes; they have the advantage of high steganographic capacity but the disadvantage of weak robustness. Transform domain algorithms embed the secret information in the transform space; they offer good stability but small capacity. Both kinds of algorithms are vulnerable to steganalysis. This paper proposes a new compressed-video steganographic scheme in which the data is hidden in the horizontal and vertical components of the motion vectors. The PSNR value is calculated to evaluate the quality of the video after data hiding.
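Hiding data in the horizontal and vertical motion-vector components, as the scheme above describes, can be sketched as LSB substitution on each component; the choice of the plain LSB as the carrier bit is an assumption, since the abstract does not fix the exact embedding rule:

```python
def embed_in_motion_vectors(mvs, bits):
    """Hide one bit in the LSB of each horizontal/vertical motion-vector component."""
    out, it = [], iter(bits)
    for mx, my in mvs:
        bx = next(it, None)
        if bx is not None:
            mx = (mx & ~1) | bx          # works for negative components too
        by = next(it, None)
        if by is not None:
            my = (my & ~1) | by
        out.append((mx, my))
    return out

def extract_from_motion_vectors(mvs, n_bits):
    """Read the LSBs back in the same component order."""
    bits = []
    for mx, my in mvs:
        bits.append(mx & 1)
        if len(bits) == n_bits:
            break
        bits.append(my & 1)
        if len(bits) == n_bits:
            break
    return bits[:n_bits]
```

Because Python integers use two's-complement semantics for `&` and `|`, the same code handles negative motion vectors without special cases.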
SECURE OMP BASED PATTERN RECOGNITION THAT SUPPORTS IMAGE COMPRESSION (sipij)
In this paper, we propose a secure Orthogonal Matching Pursuit (OMP) based pattern recognition scheme that also supports image compression. The secure OMP is a sparse coding algorithm that chooses atoms sequentially and calculates sparse coefficients from encrypted images; the encryption is carried out using a random unitary transform. The proposed scheme offers two prominent features. 1) It is capable of pattern recognition that works in the encrypted image domain: even if data leaks, privacy is maintained because the data remains encrypted. 2) It realizes Encryption-then-Compression (EtC) systems, where image encryption is conducted prior to compression. Pattern recognition can be carried out using a few sparse coefficients, and on the basis of the recognition results the scheme can compress selected images with high quality by estimating a sufficient number of sparse coefficients. We use the INRIA dataset to demonstrate its performance in detecting humans in images; the proposal is shown to detect humans in encrypted images and to efficiently compress the images selected in the recognition stage.
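The core of the scheme above is the OMP sparse coding step: greedily pick the dictionary atom most correlated with the residual, refit by least squares, and repeat. A plain (non-secure) OMP sketch; the random unitary encryption layer is omitted:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: pick k atoms of dictionary D to approximate y."""
    residual = y.astype(float)
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the chosen support, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

The "secure" variant in the paper runs the same iteration on unitarily transformed data; since unitary transforms preserve inner products, the atom selection is unchanged.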
FPGA Based Pattern Generation and Synchronization for High Speed Structured Li... (TELKOMNIKA JOURNAL)
Recently, structured light 3D imaging devices have gained keen attention due to their potential applications in robotics, industrial manufacturing, and medical imaging. Most of these applications require high 3D precision as well as high-speed image capture in hard and/or soft real-time environments. This paper presents a method of high-speed image capture for structured light 3D imaging sensors, with FPGA-based structured light pattern generation and projector-camera synchronization. The suggested setup reduces the time for pattern projection and camera triggering to 16 ms, from the 100 ms required by conventional methods.
Uncompressed Image Steganography using BPCS: Survey and Analysis (IOSR Journals)
Abstract: Steganography is the art and science of hiding secret information in some carrier data without leaving any apparent evidence of data alteration. In the past, people used hidden tattoos, invisible ink, or pin-pricked papers to convey steganographic data; now, information is hidden in digital images, text, video, and audio. This paper discusses existing BPCS (Bit Plane Complexity Segmentation) steganography techniques and presents some modifications. The BPCS technique exploits the characteristics of the human visual system: it allows a large capacity of embedded secret data and is highly customizable. The algorithm offers higher hiding capacity because it exploits the complex regions in each bit plane; BPCS has provided an effective method for obtaining roughly 50% capacity, since visual attacks do not suffice for detection.
Keywords: BPCS, Data security, Information hiding, Steganography, Stego image
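BPCS decides where to embed by measuring the "complexity" of each bit-plane block, conventionally defined as the fraction of 0/1 changes between adjacent bits out of the maximum possible. A sketch of that measure:

```python
import numpy as np

def bit_plane(img, k):
    """Extract bit plane k (0 = LSB) of a uint8 image as a 0/1 array."""
    return (img >> k) & 1

def complexity(plane):
    """BPCS complexity: adjacent bit changes divided by the maximum possible."""
    h, w = plane.shape
    changes = np.abs(np.diff(plane, axis=0)).sum() + np.abs(np.diff(plane, axis=1)).sum()
    max_changes = (h - 1) * w + h * (w - 1)
    return changes / max_changes

# a checkerboard is maximally complex, a constant block minimally so
checker = np.indices((8, 8)).sum(axis=0) % 2
```

Blocks whose complexity exceeds a threshold (0.3 is a commonly cited value) look noise-like to the eye and are replaced wholesale with secret data.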
Nymble: blocking misbehaving users in anonymizing networks (Muthu Samy)
Efficient Architecture for Variable Block Size Motion Estimation in H.264/AVC (IDES Editor)
This paper proposes an efficient VLSI architecture for the implementation of variable block size motion estimation (VBSME). VBSME lies on the critical path for improving video compression performance; the feature has been introduced into H.264/AVC and induces significant complexity into the design of the H.264/AVC video codec. This paper compares existing architectures for VBSME and proposes an efficient architecture that improves the performance of spiral search for variable block size motion estimation in H.264/AVC. Among the various architectures available for VBSME, spiral search provides a hardware-friendly data flow with efficient utilization of resources. The proposed implementation is verified in MATLAB on the foreman, coastguard, and train sequences. The proposed adaptive thresholding technique significantly reduces the average number of computations with negligible effect on video quality. The results are verified with a hardware implementation on a Xilinx Virtex-4, which achieves real-time video coding of 60 fps at a 95.56 MHz clock frequency.
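The spiral search with adaptive thresholding described above visits candidate positions in order of increasing distance from the center and stops early once the match cost falls below a threshold. A software sketch of that control flow (the hardware pipeline and the paper's exact threshold adaptation are not reproduced):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def spiral_offsets(radius):
    """Search positions ordered by distance from the centre, as a spiral search visits them."""
    offs = [(dy, dx) for dy in range(-radius, radius + 1)
                     for dx in range(-radius, radius + 1)]
    return sorted(offs, key=lambda o: (abs(o[0]) + abs(o[1]), o))

def motion_search(cur_block, ref, top, left, radius=4, threshold=0):
    """Best-match motion vector; an early-exit threshold mimics adaptive thresholding."""
    n = cur_block.shape[0]
    best, best_mv = None, (0, 0)
    for dy, dx in spiral_offsets(radius):
        y, x = top + dy, left + dx
        if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
            continue
        cost = sad(cur_block, ref[y:y + n, x:x + n])
        if best is None or cost < best:
            best, best_mv = cost, (dy, dx)
            if best <= threshold:         # early termination saves computations
                break
    return best_mv, best
```

Visiting near offsets first matters because real motion vectors cluster around zero, so the early exit fires sooner on average than it would for a raster scan.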
Machine learning-based energy consumption modeling and comparing of H.264 and... (IJECEIAES)
Advances in the prediction models used in a variety of fields are a result of machine learning approaches, and utilizing such modeling in feature engineering is extremely important. In this research, we show how to use machine learning to save time in research experiments: we save more than five thousand hours of measuring the energy consumption of encoding recordings. Since measuring energy consumption must be done by humans, and since more than eleven thousand experiments would be required to cover all combinations of video sequences, bit rates, and encoder settings, we model the energy consumption with linear regression. The VP8 codec was offered by Google as a free video encoder in an effort to replace the popular H.264 standard. This research models energy consumption and describes the major differences between the H.264/AVC and VP8 encoders in terms of energy consumption and performance, through machine learning-based modeling of the experiments. Twenty-nine uncompressed video segments from a standard dataset are used, with several sizes, levels of detail, and dynamics, and frame sizes ranging from QCIF (176x144) to 2160p (3840x2160). For fairness in the comparison analysis, we use seven settings for the VP8 encoder and fifteen tunings for H.264/AVC, covering various video qualities. The performance metrics include video quality, encoding time, and encoding energy consumption.
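The linear-regression modeling described above fits measured energy against encoding features and then predicts the unmeasured combinations. A minimal sketch; the feature columns (bitrate, frame pixels) and the synthetic training rows are hypothetical, not the paper's data:

```python
import numpy as np

def fit_energy_model(features, energy):
    """Ordinary least squares: energy ~ X @ w + intercept, via lstsq."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # intercept column
    w, *_ = np.linalg.lstsq(X, energy, rcond=None)
    return w

def predict(features, w):
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ w

# hypothetical rows: [bitrate_kbps, frame_pixels] -> joules, generated from a
# known linear law so the fit can be checked exactly
bitrate = np.array([500.0, 1000.0, 500.0, 2000.0])
pixels = np.array([25344.0, 25344.0, 101376.0, 101376.0])
X = np.column_stack([bitrate, pixels])
y = 0.01 * bitrate + 0.0005 * pixels + 1.0
w = fit_energy_model(X, y)
```

Once fitted on the measured subset, the same `predict` call covers the remaining settings combinations, which is where the claimed experiment-hour savings come from.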
HARDWARE SOFTWARE CO-SIMULATION OF MOTION ESTIMATION IN H.264 ENCODER (cscpconf)
This paper addresses motion estimation in the H.264/AVC encoder. Compared with standards
such as MPEG-2 and MPEG-4 Visual, H.264 can deliver better image quality at the same
compressed bit rate or at a lower bit rate. The increase in compression efficiency comes at the
expense of increase in complexity, which is a fact that must be overcome. An efficient Co-design
methodology is required, where the encoder software application is highly optimized and
structured in a very modular and efficient manner, so as to allow its most complex and time
consuming operations to be offloaded to dedicated hardware accelerators. The Motion
Estimation algorithm is the most computationally intensive part of the encoder which is simulated using MATLAB. The hardware/software co-simulation is done using system generator tool and implemented using Xilinx FPGA Spartan 3E for different scanning methods.
Efficient document compression using intra frame prediction technique (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
A Novel Approaches For Chromatic Squander Less Visceral Coding Techniques Usi... (IJERA Editor)
Recent advances in video capturing and display technologies, along with the exponentially increasing demand of
video services, challenge the video coding research community to design new algorithms able to significantly
improve the compression performance of the current H.264/AVC standard. This target is currently gaining
evidence with the standardization activities in the High Efficiency Video Coding (HEVC) project. The distortion
models used in HEVC are mean squared error (MSE) and sum of absolute difference (SAD). However, they are
widely criticized for not correlating well with perceptual image quality. The structural similarity (SSIM) index
has been found to be a good indicator of perceived image quality. Meanwhile, it is computationally simple
compared with other state-of-the-art perceptual quality measures and has a number of desirable mathematical
properties for optimization tasks. We propose a perceptual video coding method to improve upon the current
HEVC based on an SSIM-inspired divisive normalization scheme as an attempt to transform the DCT domain
frame prediction residuals to a perceptually uniform space before encoding.
Based on the residual divisive normalization process, we define a distortion model for mode selection and show
that such a divisive normalization strategy largely simplifies the subsequent perceptual rate-distortion
optimization procedure. We further adjust the divisive normalization factors based on local content of the video
frame. Experiments show that the scheme can achieve significant gain in terms of rate-SSIM performance and
better visual quality when compared with HEVC
ERROR RESILIENT FOR MULTIVIEW VIDEO TRANSMISSIONS WITH GOP ANALYSIS (ijma)
The work in this paper examines the effects of the group of pictures (GOP) size on the H.264 multiview video coding bitstream
over an erroneous network with different error rates. The study considers analyzing the bitrate
performance for different GOP and error rates to see the effects on the quality of the reconstructed
multiview video. However, by analyzing the multiview video content it is possible to identify an optimum
GOP size depending on the type of application used. In a comparison test, the H.264 data partitioning and
the multi-layer data partitioning technique with different error rates and GOP are evaluated in terms of
quality perception. The results of the simulation confirm that Multi-layer data partitioning technique shows
a better performance at higher error rates with different GOP. Further experiments in this work have
shown the effects of GOP in terms of visual quality and bitrate for different multiview video sequences.
Patch-Based Image Learned Codec using Overlapping (sipij)
End-to-end learned image and video codecs, based on the auto-encoder architecture, adapt naturally to image resolution thanks to their convolutional aspect. However, while coding high-resolution images, these codecs face hardware problems such as memory saturation. This paper proposes a patch-based image coding solution based on an end-to-end learned model, which aims to remedy the hardware limitation while maintaining the same quality as full-resolution image coding. Our method consists of coding overlapping patches of the image and reconstructing them into a decoded image using a weighting function. This approach manages to be on par with the performance of full-resolution image coding using an end-to-end learned model, and even slightly outperforms it, while being adaptable to different memory sizes. Moreover, this work undertakes a full study of the effect of the patch size on this solution's performance, and consequently determines the best patch resolution in terms of coding time and coding efficiency. Finally, the method introduced in this work is also compatible with any learned codec based
on a conv/deconvolutional autoencoder architecture without having to retrain the model.
A REAL-TIME H.264/AVC ENCODER & DECODER WITH VERTICAL MODE FOR INTRA FRAME AND ... (csandit)
The video coding standards are being developed to satisfy the requirements of applications for
various purposes, better picture quality, higher coding efficiency, and more error robustness.
The new international video coding standard H.264/AVC aims at significant
improvements in coding efficiency and error robustness in comparison with previous
standards such as MPEG-2, H.261, and H.263. A video stream needs to be processed
through several steps to encode and decode the video such that it is compressed efficiently with
available limited resources of hardware and software. All advantages and disadvantages of
available algorithms should be known to implement a codec to accomplish final requirement.
The purpose of this project is to implement all basic building blocks of H.264 video encoder and
decoder. The significance of the project is the inclusion of all components required to encode
and decode a video in MATLAB.
Optimal coding unit decision for early termination in high efficiency video c... (IJECEIAES)
Video compression is an emerging research topic in the field of block-based video encoders. Due to the growth of video coding technologies, high efficiency video coding (HEVC) delivers superior coding performance. At the cost of increased encoding complexity, HEVC enhances the rate-distortion (RD) performance. In video compression, over-sized coding units (CUs) have higher encoding complexity. Therefore, the computational encoding cost and complexity remain vital concerns, which need to be treated as an optimization task. In this manuscript, an enhanced whale optimization algorithm (EWOA) is implemented to reduce the computational time and complexity of the HEVC. In the EWOA, a cosine function is incorporated with the controlling parameter A, and two correlation factors are included in the WOA for controlling the position of whales and regulating the movement of the search mechanism during the optimization and search processes. The bit streams in the Luma coding tree block are selected using the EWOA, which defines the CU neighbors and is used in the HEVC. The results indicate that the EWOA achieves the best bit rate (BR), time saving, and peak signal-to-noise ratio (PSNR), showing 0.006-0.012 dB higher PSNR than existing models on real-time videos.
Scanned document compression using block based hybrid video codec
SUBMITTED TO IEEE TRANS. IMAGE PROCESSING, DEC. 2010
Scanned Document Compression Using a
Block-based Hybrid Video Codec
Alexandre Zaghetto, Member, IEEE, and Ricardo L. de Queiroz, Senior Member, IEEE
Abstract—This paper proposes a hybrid pattern
matching/transform-based compression method for scanned
documents. The idea is to use regular video interframe
prediction as a pattern matching algorithm that can be applied
to document coding. We show that this interpretation may
generate residual data that can be efficiently compressed by
a transform-based encoder. The efficiency of this approach
is demonstrated using H.264/AVC as a high quality single-
and multi-page document compressor. The proposed method,
called Advanced Document Coding (ADC), uses segments of
the originally independent scanned pages of a document to
create a video sequence, which is then encoded through regular
H.264/AVC. The encoding performance is unrivaled. Results
show that ADC outperforms AVC-I (H.264/AVC operating in
pure intra mode) and JPEG2000 by up to 2.7 dB and 6.2 dB,
respectively. Superior subjective quality is also achieved.
Index Terms—Scanned document compression, advanced document coding, pattern matching, H.264/AVC.
I. INTRODUCTION
COMPRESSION of scanned documents can be tricky. The
scanned document is either compressed as a continuous-
tone picture, or it is binarized before compression. The binary
document can then be compressed using any available two-
level lossless compression algorithm (such as JBIG [1] and
JBIG2 [2]), or it may undergo character recognition [3].
Binarization may cause strong degradation to object contours
and textures, such that, whenever possible, continuous-tone
compression is preferred [4]. In single/multi-page document
compression, each page may be individually encoded by
some continuous-tone image compression algorithm, such as
JPEG [5] or JPEG2000 [6], [7]. Multi-layer approaches such
as the mixed raster content (MRC) imaging model [8]–[12]
are also challenged by soft edges in scanned documents, often
requiring pre- and post-processing [13].
Natural text along a document typically presents repetitive
symbols such that dictionary-based compression methods be-
come very efficient. For continuous-tone imagery, the recur-
rence of similar patterns is illustrated in Fig. 1. Nevertheless,
an efficient dictionary-based encoder relying on continuous-
tone pattern matching is not that trivial. We propose an encoder
that explores such a recurrence through the use of pattern-
matching predictors and efficient transform encoding of the
residual data.
Copyright (c) 2013 IEEE. Personal use of this material is permitted.
However, permission to use this material for any other purposes must be
obtained from the IEEE by sending a request to pubs-permissions@ieee.org.
The authors are with the Department of Computer Science,
Universidade de Brasilia, Brazil, e-mail: alexandre@cic.unb.br,
queiroz@ieee.org.
Fig. 1. Digitized books usually present recurrent patterns across different
pages and across regions of the same page.
It is important to place our proposal within the proper
scenario. Three premises are assumed. Firstly, we want to
avoid complex multi-coder schemes such as MRC. Secondly,
the decoder should be as standard as possible. Since we are
dealing with scanned compound documents (mixed pictures
and text), natural image encoders, such as JPEG2000, are the
most adequate. Non-standard encoders, based on fractals [14]–
[16], texture prediction [17], [18], template matching [19]
or multiscale pattern recurrence [20], [21], are good options
out of the scope of what is being proposed. Thirdly, one
should provide high quality reconstructed versions of scanned
documents. This is especially important if rare books of
historical value must be digitally stored, thus discarding optical
character recognition (OCR) and token-based methods [2],
[10]. In summary, we want a standard single coder approach
that operates on natural images and delivers high-quality
reconstructed compound documents.
The proposed coder makes heavy use of the H.264/AVC
standard video coder [22]. H.264/AVC has been well explained
in the literature [23]–[27]. H.264/AVC leads to substantial
performance improvement when compared to other existing
standards [25], [28], such as MPEG-2 [29] and H.263 [30].
Among such improvements we can mention [22], [31]: in-
terframe variable block size prediction; arbitrary reference
frames; quarter-pel motion estimation; intraframe macroblock
prediction; context-adaptive binary arithmetic coding; and in-
loop deblocking filter. Results point to at least a factor of
two improvement over previous standards. The many cod-
ing advances brought into H.264/AVC not only set a new
benchmark for video compression, but they also make it a
This is the author's version of an article that has been published in this journal. Changes were made to this version by the publisher prior to publication.
The final version of record is available at http://dx.doi.org/10.1109/TIP.2013.2251641
Copyright (c) 2013 IEEE. Personal use is permitted. For any other purposes, permission must be obtained from the IEEE by emailing pubs-permissions@ieee.org.
formidable compressor for still images [32]. The intraframe
macroblock prediction, combined with the context-adaptive
binary arithmetic coding (CABAC) [33] turns the H.264/AVC
into a powerful still image compression method (i.e. working
on a video sequence composed of one single frame). We refer
to this coder as AVC-I. Gains of the AVC-I over JPEG2000
are typically in the order of 0.25 dB to 0.5 dB in PSNR
for pictorial images [31], [32], [34]. For compound images
(mixture of text and picture) [4], the PSNR gains are more
substantial, even surpassing the mark of 3 dB improvement
over JPEG2000, in some cases [31].
The hypothesis presented in this paper is that a scanned
document encoder that employs state-of-the-art video coding
techniques and generates an H.264/AVC-decodable bit-stream
yields the best rate-distortion performance compared to other
continuous-tone still image compressors.¹
II. THE PROPOSED METHOD AND ITS IMPLEMENTATION
USING AVC
The proposed document coder has a generic concept and an
implementation based on a stock H.264/AVC video coder. We
now describe the desired features and how one can implement
them using AVC. The generic description may help the reader
to adapt other video coders for that purpose or to develop
non-standard-based (proprietary) variations.
A. Block-based pattern matching
The encoder is based on pattern matching. The document
image is segmented into blocks of pixels. Each block is
matched to an existing pattern in a dictionary which is popu-
lated by the previous contents of the same document. In order
to do that, we partition the scanned document, which may be
made of one or more scanned pages of H × W pixels, into Np
sub-pages, or frames, of H/√Np × W/√Np pixels each. Hence, a
scanned book may be decomposed into many frames. Figure 2
illustrates the page pre-processing (partition) algorithm, while
Fig. 3 shows an example of a frame sequence built from a
3-page set.
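The page-partitioning step can be sketched as follows. This is our own minimal illustration, assuming a square √Np × √Np grid with H and W divisible by √Np; the function name and toy data are not from the paper.

```python
# Sketch of the page-partitioning step (Sec. II-A): split an H x W page
# into Np equal sub-pages ("frames"), scanned left-to-right, top-to-bottom.

def partition_page(page, np_frames):
    """Return np_frames sub-images from a 2D list of pixel rows."""
    side = int(np_frames ** 0.5)          # assumes Np is a perfect square
    h, w = len(page), len(page[0])
    fh, fw = h // side, w // side         # frame height and width
    frames = []
    for r in range(side):
        for c in range(side):
            frames.append([row[c * fw:(c + 1) * fw]
                           for row in page[r * fh:(r + 1) * fh]])
    return frames

# Toy 4x4 "page" split into Np = 4 frames of 2x2 pixels each.
page = [[r * 4 + c for c in range(4)] for r in range(4)]
frames = partition_page(page, 4)
print(frames[0])  # -> [[0, 1], [4, 5]]  (top-left quadrant)
```

Concatenating the frames of all pages in this scan order yields the "video sequence" that is fed to the H.264/AVC encoder.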
Blocks have, for example, 16×16 pixels, and each one is
matched to an existing pattern in a previous frame. In this way,
the previous frames make a dynamic dictionary of patterns to
look up when encoding the present frame, which is continuously
being updated as more frames are encoded. Once a match is
found, the matching pattern is used to predict the block and
the prediction error (residue) is encoded along with the frame
number and position where the match was found (reference
vector). A block can be partitioned into smaller blocks to ease
prediction at the cost of spending more bits to encode reference
vectors. Figure 4 illustrates the effect of using the pattern
matching prediction algorithm. Figures 4 (a) and (b) show
examples of a reference and a current text area, respectively.
Figures 4 (c), (e) and (g) represent the predictions of the
current text using 16×16, 8×8 and 4×4-pixel block partitions.
Figures 4 (d), (f) and (h) are the corresponding residual data.
¹Preliminary results of the proposed method over multi-page text-only documents have been presented at a conference [35].
Fig. 2. A document page is partitioned into segments (labeled in sequence).
Each one is considered a frame and can be sequentially encoded.
Fig. 3. Example of a frame sequence, built from a 3-page set, Np = 4
frames/page. Frames 1 to 4, 5 to 8, and 9 to 12 are built from pages 1, 2 and
3, respectively.
Notice that the 4×4-pixel prediction generates a lower-energy
residual when compared to the 16 × 16 and 8 × 8 predictions;
however, it requires encoding more reference vectors.
In this context, video coders often use motion estimation
techniques which are essentially the same as pattern matching.
The H.264/AVC is capable of partitioning macroblocks of 16×
16 pixels into any valid combination of blocks of 16 × 8,
8 × 16, 8 × 8, 4 × 8, 8 × 4, and 4 × 4 pixels. Our algorithm is
then to feed the document frames as video frames into AVC
since its motion estimation algorithm will take care of the
pattern matching search for us. However, motion estimation
algorithms always take advantage of the fact that video content
at the same frame position in neighbor frames are typically
very correlated. Since this is not our case, it is advisable to
make the search window cover as much as possible of the
reference frames, or the whole frame, in order to enrich the
dictionary and to remove the spatial dependency.
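The pattern-matching search that motion estimation performs here can be sketched as an exhaustive SAD search over the whole reference frame (the wide search window recommended above, since neighboring frames are not spatially correlated in this setting). The function names and toy frames are ours; the real encoder also searches sub-pel positions and multiple block partitions.

```python
# Sketch of full-search block matching with a SAD cost, over an entire
# reference frame rather than a small window around the block position.

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized 2D blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def best_match(ref, block, bs):
    """Return (SAD, y, x): the best-matching bs x bs block in ref."""
    h, w = len(ref), len(ref[0])
    best = None
    for y in range(h - bs + 1):
        for x in range(w - bs + 1):
            cand = [row[x:x + bs] for row in ref[y:y + bs]]
            cost = sad(cand, block)
            if best is None or cost < best[0]:
                best = (cost, y, x)
    return best

ref = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
block = [[9, 8],
         [7, 6]]
print(best_match(ref, block, 2))  # -> (0, 1, 1): exact match at row 1, col 1
```

The returned (y, x) pair plays the role of the reference vector, and the difference between the block and its match is the residue passed to the transform stage.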
Fig. 4. Illustration of approximate pattern matching using interframe
prediction: (a) reference text; (b) current text; (c), (e) and (g) predicted text
(block size: 16×16, 8×8 and 4×4 pixels, respectively); and (d), (f) and (h)
prediction residue (block size: 16 × 16, 8 × 8 and 4 × 4 pixels, respectively).
Each zoomed image patch has 178 × 178 pixels.
B. Inter- and intra-frame prediction
AVC also allows for intra-frame prediction, in which a block
(partitioned or not) can be predicted from neighboring blocks
by means of directional extrapolation of the border pixels.
The decision whether to use intra-frame prediction is typically
based on rate-distortion optimization (RDO) and we use RDO
in all our simulations. However, AVC does not allow for
in-frame motion vectors (IFMV), but many variations using
such a feature and other sophisticated methods of intra-frame
prediction do exist [19]. Apparently, HEVC will also support
IFMV [36]. Breaking up the pages into frames allows for some
intra-document prediction similar to IFMV, yet using a stock
video coder. Furthermore, the information derived from IFMV
is typically very small compared to all compressed data, such
that the advantage should not be very relevant. Because of
that, we do not use IFMV.
Another issue is the random access to different book pages.
In order to get to a book page, we are forced to decompress
all the frames it uses as reference. So, if random access is an
issue, we suggest periodically using no-reference frames, i.e.
frames in which inter-frame prediction is not allowed, relying
on pure intra-frame prediction/extrapolation.
In our encoder, using the AVC structure, each block-
partitioning combination and prediction mode is tested and the
best one is picked through RDO. With RDO within AVC, in
the k-th configuration test in a macroblock, AVC computes the
rate Rk (bits spent to encode the block) and distortion Dk (sum
of absolute differences - SAD) achieved by reconstructing the
block. One picks the block partition method that minimizes
Jk = Rk + λDk.
The process is then repeated for every macroblock. As usual,
λ controls compression ratios and is varied to find the RD
curves in our simulations.
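The mode decision above can be illustrated directly from the cost Jk = Rk + λDk. The candidate names, rates, and distortions below are invented for illustration only; a real encoder measures them by actually encoding and reconstructing each candidate.

```python
# Sketch of RD-optimized mode selection: evaluate J_k = R_k + lambda*D_k
# for each candidate partition/prediction mode and keep the minimizer.

def rdo_select(candidates, lam):
    """candidates: list of (name, rate_bits, distortion_sad) tuples."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])

modes = [
    ("inter 16x16", 40, 300),   # few reference vectors, coarse prediction
    ("inter 4x4",   160, 60),   # many vectors, low-energy residue
    ("intra",       90, 150),   # extrapolation from neighboring blocks
]

# With J = R + lambda*D, a small lambda weights rate and a large lambda
# weights distortion, tracing out the RD curve as lambda is varied.
print(rdo_select(modes, 0.1)[0])  # -> inter 16x16 (cheap in bits wins)
print(rdo_select(modes, 2.0)[0])  # -> inter 4x4 (accurate prediction wins)
```

Sweeping λ in this way is exactly how the RD curves in the experiments are produced.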
C. Residual coding
The residual macroblock, i.e. the prediction error, is trans-
formed using a 4×4 or 8×8-pixel discrete cosine transform (DCT)
or an integer approximation of it. The transformed blocks are
quantized and encoded using arithmetic coding. H.264/AVC
uses an integer transform with similar properties as the DCT
and the resulting transformed coefficients are quantized and
entropy encoded using CABAC.
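As a concrete sketch of that transform stage, the well-known H.264/AVC 4×4 forward "core" integer transform is Y = Cf X CfT. The flat divide-by-step quantizer below is a toy stand-in for the standard's scaling and quantization tables, which are omitted here for brevity.

```python
# H.264/AVC 4x4 forward core integer transform (integer DCT approximation)
# applied to a residual block, followed by a toy flat quantizer.

CF = [[1,  1,  1,  1],
      [2,  1, -1, -2],
      [1, -1, -1,  1],
      [1, -2,  2, -1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    return [list(col) for col in zip(*m)]

def forward_4x4(residual):
    """Y = Cf * X * Cf^T (core transform, without the scaling stage)."""
    return matmul(matmul(CF, residual), transpose(CF))

def quantize(coeffs, qstep):
    """Toy flat quantizer; the real standard uses per-position scaling."""
    return [[c // qstep for c in row] for row in coeffs]

# A flat residual block has only a DC coefficient after the transform.
flat = [[3] * 4 for _ in range(4)]
y = forward_4x4(flat)
print(y[0][0])  # -> 48 (16 * 3: the DC term sums all samples)
print(y[1][1])  # -> 0  (no AC energy in a flat block)
```

Good prediction concentrates residual energy into few low-frequency coefficients, which is what makes the subsequent quantization and CABAC stages effective.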
D. Compound documents and region classification
Compound document compression usually segments the
image into regions and classifies each one as containing text
and graphics or images (or halftones, for instance). Once
a region is classified, it can be encoded using a proper
algorithm. This approach is driven by objects such as text
characters so that regions of the image are labeled based on
our estimate of their contents. Our method, however, is driven
by the compression itself. Rather than only testing pattern-
matching-based prediction for every block partition, we also
test prediction by extrapolating neighboring blocks, as in
"intra-prediction" in H.264/AVC. The RD-optimized selection
of the best prediction assures that the best option is picked.
Text and graphics shall contain recurrent patterns and will
be often encoded using patterns from previous regions, while
pictorial regions may resort to intra-prediction. In this sense,
segmentation is embedded into the encoding process. In fact,
the block prediction and RDO may have the same effect of
Fig. 5. Configuration parameters that have greater influence on the encoder
performance: Rf (number of reference frames) and Sr (search range).
a segmentation map, even though benefiting the compression
process, and not the true identification of image contents.
E. Encoder and decoder summary
In our concept, the frames are fed into AVC in sequence
just like in a regular video coder. Because of that relation to
AVC, we refer to the proposed method as advanced document
coding, or, simply, ADC. In a nutshell, ADC operation can
be summarized as (i) break the book pages into frames; (ii)
feed all frames to H.264/AVC resulting in an AVC-compatible
stream; (iii) decode the bit stream; and (iv) assemble the
decoded frames into the final document book pages.
In order to work well, H.264/AVC should operate in the "High"
profile, following an IPPP... framework. The encoder should
periodically use no-reference I-frames in case random
access is desired. RDO should be turned on. Motion estimation
should be set to full search over a window that is as large
as possible. Note that other video coders such as HEVC
and MPEG-2 will also work, even though achieving different
performance levels due to their different sophistication levels.
III. EXPERIMENTAL RESULTS
In our tests, different page sets are compressed using
JPEG2000, AVC-I (H.264/AVC operating in pure intra mode)
and the proposed ADC. The reason we chose JPEG2000 and
AVC-I for comparison is that these are the most suitable
standards that would meet the three premises presented in
Section I.
Distortion metrics based on visual models such as Structural
Similarity (SSIM) [37] and Video Quality Metric (VQM) [38]
have been extensively tested for pictorial content. However,
they are unproven for text and graphics, which rely more on
resolution than on number of gray levels. Readability is very
important and some alternative metrics such as OCR efficiency
are considered. A good objective metric to reflect subjective
perception of text has not been well explored yet. Hence, we
opted to stick to the traditional PSNR as a distortion metric.
In JPEG2000 and AVC-I compression, the pages are sepa-
rately encoded. As for ADC, the first frame of the sequence
is encoded as an I-frame (only intraframe prediction modes
are used) and all the remaining frames are encoded as P-
frames (in addition to intraframe prediction, only past frames
[Fig. 6 plots: PSNR (dB) versus bitrate (bpp), 0.2 to 0.8 bpp, for sequence
"guita" with Np = 4. Panel (a): ADC with Sr = 32 and Rf = 1, 3, 5.
Panel (b): ADC with Rf = 5 and Sr = 8, 16, 32. Both panels also show
AVC-I and JPEG2000 curves.]
Fig. 6. Comparison between JPEG2000, AVC-I and the proposed ADC, for
different combinations of search ranges (Sr) and number of reference frames
(Rf ) for test sequence “guita”. The number of frames/page (Np) is 4.
are used as reference by the interframe prediction). We also
considered that each page may be segmented into Np = 4
frames, Np = 16 frames, or not segmented at all (Np = 1,
for multi-page documents only). Two configuration parameters
have greater influence on the encoder performance. One is
the number of reference frames, Rf , the other is the search
range, Sr, as illustrated in Fig. 5. Initially, we evaluated the
effect of choosing different values for Sr and Rf . Figure 6
shows PSNR plots comparing JPEG2000, AVC-I and ADC
(Np = 4 frames/page), for different combinations of Sr and
Rf . The PSNR was calculated using the global mean square
error (MSE). The higher the Sr and Rf values, the better the
rate-distortion performance. In particular, for Sr = 32 pixels
and Rf = 5 frames, ADC outperforms AVC-I by more than 2
dB and JPEG2000 by more than 5 dB, at 0.5 bit/pixel (bpp).
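The page-to-frame segmentation described above (Np frames per page) can be sketched as a simple square tiling; `split_page` is a hypothetical helper assuming Np is a perfect square and the page dimensions are divisible by the grid size:

```python
import math

def split_page(page, np_frames):
    """Split a page (a list of rows of pixels) into np_frames equally
    sized rectangular frames arranged in a sqrt(Np) x sqrt(Np) grid,
    e.g. Np = 4 yields a 2 x 2 grid of quadrants."""
    g = math.isqrt(np_frames)
    if g * g != np_frames:
        raise ValueError("Np must be a perfect square")
    h, w = len(page), len(page[0])
    th, tw = h // g, w // g  # tile height and width
    frames = []
    for r in range(g):
        for c in range(g):
            frames.append([row[c * tw:(c + 1) * tw]
                           for row in page[r * th:(r + 1) * th]])
    return frames
```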
Our test set is composed of 18 documents divided into the
This is the author's version of an article that has been published in this journal. Changes were made to this version by the publisher prior to publication.
The final version of record is available at http://dx.doi.org/10.1109/TIP.2013.2251641
Copyright (c) 2013 IEEE. Personal use is permitted. For any other purposes, permission must be obtained from the IEEE by emailing pubs-permissions@ieee.org.
SUBMITTED TO IEEE TRANS. IMAGE PROCESSING, DEC. 2010
TABLE I
AVERAGE OBJECTIVE IMPROVEMENT (PSNR, IN DB) OVER EXISTING
STANDARDS FOR THE 4 DOCUMENT TEST SETS.

Document Set    0      1      2      3
JPEG2000      6.26   5.61   4.47   2.41
AVC-I         2.75   2.58   1.56   0.89
following 4 classes²:
• Class 0: 6 multi-page text-only documents;
• Class 1: 6 single-page text-only documents;
• Class 2: 3 multi-page compound documents; and
• Class 3: 3 single-page compound documents.
Figure 7 illustrates one example of each class. Examples
of PSNR plots are shown in Fig. 8. Figure 9 shows average
PSNR improvement of ADC over JPEG2000 and AVC-I for
each of the document class test sets. In all cases, JPEG2000
and AVC-I are objectively outperformed, considering Sr = 32,
Rf = 5 and Np = 16. Figure 10 (a) shows a zoomed
part of the original “cerrado” sequence. Its encoded and
reconstructed versions using AVC-I, JPEG2000 and ADC,
at approximately 0.25 bits/pixel, are shown in Figs. 10 (b),
(c) and (d), respectively. ADC also yields superior subjective
quality. As a reference, in Table I we present the gains
of the proposed method over AVC-I and JPEG2000 using
Bjontegaard’s method [39] applied to the curves in Fig. 9. As
one can see from the table and from the graphs, the gains
are very substantial.
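Bjontegaard's method [39] averages the PSNR gap between two rate-distortion curves over their overlapping range of log-bitrates, after fitting each curve with a cubic polynomial in log10(rate). The sketch below is a hypothetical pure-Python rendition; note that with exactly four RD points the cubic "fit" is an exact interpolation:

```python
import math

def _cubic_fit(xs, ys):
    """Exact cubic through 4 points: solve the Vandermonde system
    [1, x, x^2, x^3] c = y by Gauss-Jordan elimination."""
    n = 4
    a = [[xs[i] ** j for j in range(n)] + [ys[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [a[r][k] - f * a[col][k] for k in range(n + 1)]
    return [a[i][n] / a[i][i] for i in range(n)]

def _poly_integral(c, lo, hi):
    """Integral of sum(c[k] * x**k) over [lo, hi]."""
    return sum(ck / (k + 1) * (hi ** (k + 1) - lo ** (k + 1))
               for k, ck in enumerate(c))

def bd_psnr(rates1, psnrs1, rates2, psnrs2):
    """Average PSNR difference (curve 2 minus curve 1) over the
    overlapping range of log10(bitrate), per Bjontegaard's method."""
    x1 = [math.log10(r) for r in rates1]
    x2 = [math.log10(r) for r in rates2]
    c1, c2 = _cubic_fit(x1, psnrs1), _cubic_fit(x2, psnrs2)
    lo, hi = max(min(x1), min(x2)), min(max(x1), max(x2))
    return (_poly_integral(c2, lo, hi) - _poly_integral(c1, lo, hi)) / (hi - lo)
```

As a sanity check, a curve shifted up by a constant number of dB yields exactly that constant as the BD-PSNR.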
Our software-based tests, using the popular and efficient
x264 implementation of the H.264/AVC standard and the
Kakadu implementation of JPEG 2000, indicate that ADC
(AVC running with 5 reference frames, slowest search, 32×32
window, in IPPP... mode) is nearly 10× slower than JPEG
2000. x264 on an Intel Core i7 platform has been shown to
encode RGB video at a rate of 3 to 30 million pixels per
second (Mps); the variation is due to the various x264 settings
that affect speed and quality. Of course, to reach those rates,
the system has to be dedicated to the task, as an appliance. A
scanned letter-sized (8.5 × 11 in) page at 600 pixels per inch
(ppi) yields about 33 million pixels. Hence, we can expect a
page compression speed roughly in the order of 5 to 50 pages
per minute (ppm). This page rate may be acceptable for many
on-the-fly applications and is definitely reasonable for off-line
compression of books and the like. A rigorous complexity study
of the encoding algorithms presented here is beyond the scope
of this paper.
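The 5 to 50 ppm estimate above is straightforward arithmetic, sketched below with a hypothetical helper name; the 3 to 30 Mps encoder speeds are the figures quoted in the text:

```python
def pages_per_minute(width_in, height_in, ppi, pixels_per_second):
    """Estimated page throughput from scan geometry and encoder speed."""
    pixels_per_page = (width_in * ppi) * (height_in * ppi)
    return pixels_per_second * 60.0 / pixels_per_page

# Letter-sized page at 600 ppi: 5100 x 6600 = 33.66 million pixels.
low = pages_per_minute(8.5, 11, 600, 3e6)    # slowest x264 settings
high = pages_per_minute(8.5, 11, 600, 30e6)  # fastest x264 settings
```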
IV. CONCLUSIONS
In this paper, we presented a pattern matching/transform-based
encoder for scanned documents, named ADC. We chose
H.264/AVC tools to implement the proposed method
because its interframe prediction scheme,
allied with RDO, yields an efficient pattern matching algorithm.
²The entire test set is available at http://image.unb.br/queiroz/testset
In addition, the intraframe prediction, the DCT-based transform
and the CABAC contribute to improving the encoding
efficiency.
In essence, our work can be summarized as splitting the
document into many pages, forming frames, and feeding the
frames to AVC. Despite the simplicity of the idea, the
performance for scanned documents is, to our knowledge, unrivaled.
Results show that ADC objectively outperforms AVC-I and
JPEG2000 by up to 2.7 dB and 6.2 dB, respectively, with more
significant gains observed for multi-page text-only documents.
Furthermore, the encoder outputs documents with superior
subjective quality. Replacing H.264/AVC with HEVC in ADC
would likely yield even larger gains.
REFERENCES
[1] JBIG, “Information Technology - Coded Representation of Picture and
Audio Information - Progressive Bi-level Image Compression. ITU-T
Recommendation T.82,” Mar. 1993.
[2] JBIG2, “Information Technology - Coded Representation of Picture and
Audio Information - Lossy/Lossless Coding of Bi-level Images. ITU-T
Recommendation T.88,” Mar. 2000.
[3] S. Mori, C. Y. Suen, and K. Yamamoto, “Historical review of OCR
research and development,” Proceedings of the IEEE, vol. 80, no. 7, pp.
1029–1058, Jul. 1992.
[4] R. L. de Queiroz, Compressing Compound Documents, in the Document
and Image Compression Handbook, by M. Barni, Marcel Dekker, USA,
2005.
[5] W. B. Pennebaker and J. L. Mitchell, JPEG Still Image Data Compres-
sion Standard, Chapman and Hall, 1993.
[6] JPEG, “Information Technology - JPEG2000 Image Coding System -
Part 1: Core Coding System. ISO/IEC 15444-1,” 2000.
[7] D. S. Taubman and M. W. Marcellin, JPEG 2000: Image Compression
Fundamentals, Standards and Practice, Kluwer Academic, USA, 2002.
[8] MRC, “Mixed Raster Content (MRC). ITU-T Recommendation T.44,”
1999.
[9] R. L. de Queiroz, R. Buckley, and M. Xu, “Mixed Raster Content
(MRC) model for compound image compression,” Proc. of SPIE Visual
Communications and Image Processing, vol. 3653, pp. 1106–1117, Jan.
1999.
[10] P. Haffner, P. G. Howard, P. Simard, Y. Bengio, and Y. Lecun, “High
quality document image compression with DjVu,” Journal of Electronic
Imaging, vol. 7, pp. 410–425, 1998.
[11] G. Feng and C. A. Bouman, “High-quality MRC document coding,”
IEEE Trans. on Image Processing, vol. 15, no. 10, pp. 3152–3169, Oct.
2006.
[12] A. Zaghetto, R. L de Queiroz, and D. Mukherjee, “MRC compression
of compound documents using threshold segmentation, iterative data-
filling and H.264/AVC-INTRA,” Proc. Indian Conference on Computer
Vision, Graphics and Image Processing, Dec. 2008.
[13] A. Zaghetto and R. L de Queiroz, “Improved layer processing for MRC
compression of scanned documents,” Proc. of IEEE Intl. Conference on
Image Processing, pp. 1993 – 1996, Nov. 2009.
[14] E. Walach and E. Karnin, “A fractal-based approach to image com-
pression,” in Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal
Processing, vol. 11, pp. 529 – 532, Apr. 1986.
[15] S.M. Kocsis, “Fractal-based image compression,” in Proc. 23rd.
Asilomar Conf. on Signals, Systems and Computers, vol. 1, pp. 177
–181, 1989.
[16] A. Wakatani, “Improvement of adaptive fractal image coding on GPUs,”
in Proc. IEEE Intl. Conf. on Consumer Electronics, pp. 255 –256, Jan.
2012.
[17] D.C. Garcia and R.L. de Queiroz, “Least-squares directional intra
prediction in h.264/avc,” IEEE Signal Processing Letters, vol. 17, no.
10, pp. 831–834, Oct. 2010.
[18] L. Liu, Y. Liu, and E. Delp, “Enhanced intra prediction using
context-adaptive linear prediction,” in Proc. of PCS 2007 - Picture Coding
Symp., Nov. 2007.
[19] C. Lan, J. Xu, F. Wu, and G. Shi, “Intra frame coding with template
matching prediction and adaptive transform,” in Proc. IEEE Intl. Conf.
Image Processing, pp. 1221–1224, Sep. 2010.
Fig. 7. Examples of documents used in our experiments: (a) class 0: multi-page text-only documents; (b) class 1: single-page text-only documents; (c) class
2: multi-page compound documents; and (d) class 3: single-page compound documents.
[20] E. B. Lima Filho, E. A. B. da Silva, M. B. de Carvalho, and F. S. Pinagé,
“Universal image compression using multiscale recurrent patterns with
adaptive probability model,” IEEE Trans. on Image Processing, vol. 17,
no. 4, pp. 512–527, Apr. 2008.
[21] N. Francisco, N. Rodrigues, E. da Silva, M. de Carvalho, S. de Faria, and
V. da Silva, “Scanned compound document encoding using multiscale
recurrent patterns,” IEEE Trans. on Image Processing, vol. 19, no. 10,
pp. 2712–2724, Apr. 2010.
[22] JVT, “Advanced Video Coding for Generic Audiovisual Services. ITU-T
Recommendation H.264,” Nov. 2007.
[23] I. E. G. Richardson, H.264 and MPEG-4 Video Compression, Wiley,
USA, 2003.
[24] T. Wiegand, G. J. Sullivan, G. Bjontegaard, and A. Luthra, “Overview
of the H.264/AVC video coding standard,” IEEE Trans. on Circuits and
Systems for Video Technology, vol. 13, no. 7, pp. 560–576, July 2003.
[25] T. Wiegand, H. Schwarz, A. Joch, F. Kossentini, and G. J. Sullivan,
“Rate-constrained coder control and comparison of video coding stan-
dards,” IEEE Trans. on Circuits and Systems for Video Technology, vol.
13, no. 7, pp. 688–703, July 2003.
[26] J. Ostermann, J. Bormans, P. List, D. Marpe, M. Narroschke, F. Pereira,
T. Stockhammer, and T. Wedi, “Video Coding with H.264/AVC: Tools,
Performance, and Complexity,” IEEE Circuits and Systems Magazine,
vol. 4, no. 1, pp. 7–28, Mar. 2004.
[27] G. J. Sullivan, P. Topiwala, and A. Luthra, “The H.264/AVC Advanced
Video Coding Standard: Overview and Introduction to the Fidelity Range
Extensions,” Proc. of SPIE Conference on Applications of Digital Image
Processing XXVII, Special Session on Advances in the New Emerging
Standard: H.264/AVC, vol. 5558, pp. 53–74, Aug. 2004.
[28] N. Kamaci and Y. Altunbasak, “Performance comparison of the
emerging H.264 video coding standard with the existing standards,”
Proc. Intl. Conf. on Multimedia and Expo, vol. 1, pp. 345–348, July
2003.
[29] B. G. Haskell, A. Puri, and A. N. Netravali, Digital Video: An
Introduction to MPEG-2, Chapman and Hall, USA, 1997.
[30] ITU-T, “Video Coding for Low Bit Rate Communication. ITU-T
Recommendation H.263,” Version 1: Nov. 1995, Version 2: Jan. 1998,
Version 3: Nov. 2000.
[31] R. L. de Queiroz, R. S. Ortis, A. Zaghetto, and T. A. Fonseca, “Fringe
benefits of the H.264/AVC,” Proc. of International Telecommunications
Symposium, pp. 208–212, Sep. 2006.
[32] D. Marpe, V. George, and T. Wiegand, “Performance comparison of
intra-only H.264/AVC and JPEG2000 for a set of monochrome ISO/IEC
test images,” Contribution JVT ISO/IEC MPEG and ITU-T VCEG, Doc.
JVT M-014, Oct. 2004.
[33] D. Marpe, H. Schwarz, and T. Wiegand, “Context-based adaptive binary
arithmetic coding in the H.264/AVC video compression standard,” IEEE
Trans. on Circuits and Systems for Video Technology, vol. 13, no. 7, pp.
620 – 636, July 2003.
[34] A. Al, B. P. Rao, S. S. Kudva, S. Babu, D. Sumam, and A. V.
Rao, “Quality and complexity comparison of H.264 intra mode with
JPEG2000 and JPEG,” Proc. of IEEE International Conference on Image
Processing, vol. 1, pp. 24–27, Oct. 2004.
[35] A. Zaghetto and R. L de Queiroz, “High-quality scanned book compres-
sion using pattern matching,” Proc. of IEEE International Conference
on Image Processing, pp. 26 – 29, Sep. 2010.
[36] K. Ugur et al., “High performance, low complexity video coding and
the emerging HEVC standard,” IEEE Trans. on Circuits and Systems
for Video Technology, vol. 20, no. 12, pp. 1688–1697, Dec. 2010.
[37] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image
quality assessment: From error visibility to structural similarity,” IEEE
Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, Apr. 2004.
[38] M. H. Pinson and S. Wolf, “A new standardized method for objectively
measuring video quality,” IEEE Transactions on Broadcasting, vol. 50,
no. 3, pp. 312–322, Sept. 2004.
[39] G. Bjontegaard, “Calculation of average PSNR differences between
RD-curves,” presented at the 13th VCEG Meeting, Austin, TX,
Apr. 2001, Doc. VCEG-M33.
Alexandre Zaghetto received the Engineer degree
in 2002 from the Federal University of Rio de
Janeiro, Rio de Janeiro, Brazil, the M.Sc. degree
in 2004 from the University of Brasilia, Brasilia,
Brazil, and the Ph.D. degree in 2009, also from
the University of Brasilia, all in Electrical Engineering.
In 2009, he became Associate Professor in
the Computer Science Department at the University of
Brasilia. His main research interests are image and
video processing, compound document coding, biometrics,
fuzzy logic and artificial neural networks.
Fig. 8. Examples of PSNR plots for: (a) class 0 (multi-page, text-only), document “guita” (number of pages: 2, size: 1568 × 1024 pixels); (b) class 1
(single-page, text-only), page 2 of document “guita” (1568 × 1024 pixels); (c) class 2 (multi-page, compound), document “paper” (number of pages: 4, size:
2304 × 1632 pixels); and (d) class 3 (single-page, compound), document “carta” (2152 × 1632 pixels). Search range and number of reference frames are
Sr = 32 and Rf = 5, respectively. Each panel plots PSNR (dB) against bitrate (bpp) for ADC (16 and 4 frames/page), AVC-I and JPEG2000.
Ricardo L. de Queiroz received the Engineer degree
from Universidade de Brasilia, Brazil, in 1987,
the M.Sc. degree from Universidade Estadual de
Campinas, Brazil, in 1990, and the Ph.D. degree
from The University of Texas at Arlington in 1994,
all in Electrical Engineering.
In 1990-1991, he was with the DSP research
group at Universidade de Brasilia as a research
associate. He joined Xerox Corp. in 1994, where
he was a member of the research staff until 2002.
In 2000-2001 he was also an Adjunct Faculty member at
the Rochester Institute of Technology. He joined the Electrical Engineering
Department at Universidade de Brasilia in 2003. In 2010, he became a Full
Professor in the Computer Science Department at Universidade de Brasilia.
Dr. de Queiroz has published over 150 articles in journals and conferences
and has contributed chapters to books as well. He also holds 46 issued patents.
He is an elected member of the IEEE Signal Processing Society’s Multimedia
Signal Processing (MMSP) Technical Committee and a former member of the
Image, Video and Multidimensional Signal Processing (IVMSP) Technical
Committee. He is a past editor of the EURASIP Journal on Image and
Video Processing, IEEE Signal Processing Letters, IEEE Transactions on
Image Processing, and IEEE Transactions on Circuits and Systems for Video
Technology. He was appointed an IEEE Signal Processing Society
Distinguished Lecturer for the 2011-2012 term.
Dr. de Queiroz has been actively involved with the Rochester chapter of
the IEEE Signal Processing Society, where he served as Chair and organized
the Western New York Image Processing Workshop from its inception until
2001. He is now helping organize IEEE SPS Chapters in Brazil and
has just founded the Brasilia IEEE SPS Chapter. He was the General Chair
of ISCAS’2011 and MMSP’2009, and is the General Chair of SBrT’2012.
He was also part of the organizing committee of ICIP’2002. His research
interests include image and video compression, multirate signal processing,
and color imaging. Dr. de Queiroz is a Senior Member of the IEEE, a member
of the Brazilian Telecommunications Society and of the Brazilian Society of
Television Engineers.
Fig. 9. Comparison of ADC (16 frames/page) against JPEG2000 and AVC-I in terms of average PSNR (dB) versus bitrate (bpp) for documents in: (a) class 0; (b) class 1; (c) class 2; and (d) class 3.
Fig. 10. Subjective comparison among coders: (a) zoomed part of “cerrado” sequence; reconstructed versions using (b) AVC-I, (c) JPEG2000 and (d) ADC,
at approximately 0.25 bits/pixel. ADC yields superior subjective quality.