International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A CONCERT EVALUATION OF EXEMPLAR BASED IMAGE INPAINTING ALGORITHMS FOR NATURA...cscpconf
Image inpainting derives from the restoration of artworks and has been applied to repair ancient art works. Inpainting is a technique for restoring a partially damaged or occluded image in an undetectable way. It fills the damaged part of an image using information from the undamaged part, according to rules that make the result look "reasonable" to human eyes. Digital image inpainting is a relatively new area of research, but numerous and varied approaches to the inpainting problem have been proposed since the concept was first introduced. This paper analyzes and compares the recent exemplar-based inpainting algorithms by Minqin Wang and Hao Guo et al. A number of examples on real images are presented, and the results of the algorithms are evaluated using Peak Signal-to-Noise Ratio (PSNR).
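For reference, the PSNR used for evaluation is computed from the mean squared error between the reference image and the inpainted result. A minimal NumPy sketch (function and variable names are illustrative, not from the paper):

    import numpy as np

    def psnr(reference: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
        """Peak Signal-to-Noise Ratio in dB between two same-sized images."""
        mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # images are identical
        return 10.0 * np.log10(peak ** 2 / mse)

Higher PSNR indicates the inpainted image is numerically closer to the original.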
Comparative Analysis of Lossless Image Compression Based On Row By Row Classi...IJERA Editor
Lossless image compression is needed in fields such as medical imaging, telemetry, geophysics, and remote sensing, which require an exact replica of the original image; loss of information is not tolerable. In this paper, a near-lossless image compression algorithm for color images is proposed, based on a row-by-row classifier combined with encoding schemes such as Lempel-Ziv-Welch (LZW), Huffman, and Run-Length Encoding (RLE). The algorithm divides the image into its R, G, and B components, applies row-by-row classification to each, and records the result of this classification in a mask image. After classification, the image data is decomposed into two sequences for each of R, G, and B, and the mask image is hidden in them. These sequences are encoded using the different encoding schemes (LZW, Huffman, and RLE). An exhaustive comparative analysis is performed to evaluate these techniques, which reveals that the pro
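As a hedged illustration of the simplest of the three coders named above (a generic RLE sketch, not the paper's implementation), run-length encoding replaces repeated bytes with (value, run length) pairs:

    def rle_encode(data: bytes) -> list[tuple[int, int]]:
        """Encode a byte string as (value, run_length) pairs."""
        runs = []
        i = 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1  # extend the current run
            runs.append((data[i], j - i))
            i = j
        return runs

    def rle_decode(runs: list[tuple[int, int]]) -> bytes:
        return b"".join(bytes([value]) * length for value, length in runs)

    assert rle_decode(rle_encode(b"\x00\x00\x00\xff\xff")) == b"\x00\x00\x00\xff\xff"

RLE pays off exactly when the classifier produces long uniform runs, which is why such schemes pair a classification step with the entropy coder.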
Query Image Searching With Integrated Textual and Visual Relevance Feedback f...IJERA Editor
Many researchers have studied relevance feedback in the content-based image retrieval (CBIR) literature, but no CBIR search engine supports it, because of scalability, effectiveness, and efficiency issues. In this work, we implemented integrated relevance feedback for the retrieval of web images. We concentrated on integrating both textual-feature (TF) and visual-feature (VF) based relevance feedback (RF), and we also tested each individually. The TF-based RF employs an effective search result clustering (SRC) algorithm to obtain salient phrases. A new user interface (UI) is then proposed to support RF. Experimental results show that the proposed algorithm is scalable, effective, and accurate.
AN ENHANCED EDGE ADAPTIVE STEGANOGRAPHY APPROACH USING THRESHOLD VALUE FOR RE...ijcsa
This paper attempts to improve the quality and the modification rate of a stego image. The input image used for estimating image quality and modification rate is a bitmap image. A threshold value is used as the parameter for selecting high-frequency pixels from the cover image. The data embedding process is performed, using LSBMR, on the pixels found with the help of the threshold value. Image quality is estimated by PSNR, and the modification rate by MSE. The proposed approach achieves about 0.2 to 0.6% improvement in image quality and about 4 to 10% improvement in modification rate compared with edge detection techniques such as Sobel and Canny.
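As a rough illustration of the embedding idea (a simplified LSB-matching sketch under assumed details; the paper's exact LSBMR pair-wise routine and edge measure are not reproduced), pixels whose local gradient exceeds the threshold are selected, and each carries one message bit via a +/-1 adjustment:

    import numpy as np

    def lsb_match_embed(cover: np.ndarray, bits: list, threshold: int, rng=None) -> np.ndarray:
        """Embed bits into pixels whose horizontal gradient exceeds `threshold`
        using +/-1 LSB matching. Simplified sketch, not the paper's LSBMR."""
        rng = rng or np.random.default_rng(0)
        stego = cover.astype(np.int16).copy()
        grad = np.abs(np.diff(cover.astype(np.int16), axis=1))  # crude edge-strength proxy
        ys, xs = np.where(grad >= threshold)
        for bit, (y, x) in zip(bits, zip(ys, xs)):
            if stego[y, x] % 2 != bit:        # LSB disagrees with the message bit
                if stego[y, x] == 0:
                    stego[y, x] += 1          # avoid underflow
                elif stego[y, x] == 255:
                    stego[y, x] -= 1          # avoid overflow
                else:
                    stego[y, x] += rng.choice((-1, 1))  # +/-1 both flip the LSB
        return stego.astype(np.uint8)

Restricting embedding to high-gradient pixels is what makes the scheme "edge adaptive": changes hide in busy regions where they are statistically less detectable.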
This is a paper I wrote for my seminar "Inverse Problems in Computer Vision" while pursuing my M.Sc. in Medical Engineering at FAU, Erlangen, Germany.
The paper details a state-of-the-art method for single image super-resolution using deep convolutional networks, along with possible extensions to the original approach that account for compression and noise artifacts.
Adaptive block-based pixel value differencing steganographyOsama Hosam
Steganography is the science of hiding secure data in digital carriers such as images and videos. Pixel value differencing (PVD) steganography algorithms embed data into images based on differences between neighboring pixels. We propose a PVD scheme for embedding secure data into digital images. The image is divided into non-overlapping 3×3 blocks. The block's median pixel is used as a reference for calculating pixel differences. The distance between the minimum and maximum differences is fine-tuned to spread the secure data over a wide range of image regions with high-intensity fluctuations. The embedding procedure embeds secure data into content regions with edges and intensity transitions. Texture images provide a higher embedding capacity than regular images. The results show that the proposed algorithm successfully avoids smooth regions during embedding. In addition, the proposed algorithm
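To illustrate the difference-based capacity rule that PVD schemes share (a generic sketch in the style of the classic Wu-Tsai ranges; the paper's 3×3 median-referenced, adaptive variant layers block-level logic on top of this):

    import math

    # Classic-style quantization ranges: a wider difference range carries more bits.
    RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

    def pvd_capacity(p1: int, p2: int) -> int:
        """Number of secret bits a pixel pair can hold, by its absolute difference."""
        d = abs(p1 - p2)
        for lo, hi in RANGES:
            if lo <= d <= hi:
                return int(math.log2(hi - lo + 1))  # e.g. 3 bits for [0, 7]
        raise ValueError("difference out of 8-bit range")

    print(pvd_capacity(100, 103))  # smooth pair -> 3 bits
    print(pvd_capacity(40, 200))   # high-contrast (edge) pair -> 7 bits

This is why edge and texture regions yield higher embedding capacity than smooth regions.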
Comparative Study and Analysis of Image Inpainting TechniquesIOSR Journals
Abstract: Image inpainting is a technique for filling in a missing region or reconstructing a damaged area of an image. It removes an undesirable object from an image in a visually plausible way, filling the affected part using information from the neighboring area. In this dissertation work, we present an exemplar-based method for filling in missing information in an image, which performs structure synthesis and texture synthesis together. The exemplar-based approach uses local information from the image for patch propagation. We have also implemented a non-local means approach for exemplar-based image inpainting, which finds multiple samples of the best exemplar patches for patch propagation and weights their contributions according to their similarity to the neighborhood under evaluation. We have further extended this algorithm with a collaborative filtering method to synthesize and propagate multiple samples of the best exemplar patches. We performed experiments on many images and found that our algorithm successfully inpaints the target region. We tested the accuracy of our algorithm using PSNR and compared the PSNR values of all three approaches.
Keywords: Texture Synthesis, Structure Synthesis, Patch Propagation, Image Inpainting, Non-local Approach, Collaborative Filtering.
EXTENDED WAVELET TRANSFORM BASED IMAGE INPAINTING ALGORITHM FOR NATURAL SCENE...cscpconf
This paper proposes exemplar-based image inpainting using an extended wavelet transform. Image inpainting modifies an image, in an undetectable way, using the information available outside the region to be inpainted. The extended wavelet transform is two-dimensional: a Laplacian pyramid is first used to capture point discontinuities, followed by a directional filter bank that links point discontinuities into linear structures. The proposed model effectively captures the edges and contours of natural scene images.
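The Laplacian pyramid stage can be sketched with standard smooth/downsample/upsample steps (a minimal sketch using SciPy's Gaussian filter; the directional filter bank stage and the paper's exact filters are not reproduced):

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def laplacian_pyramid(image: np.ndarray, levels: int = 3):
        """Each level stores the detail lost by smoothing and downsampling;
        these band-pass residuals carry the point discontinuities (edges)."""
        pyramid, current = [], image.astype(np.float64)
        for _ in range(levels):
            low = gaussian_filter(current, sigma=1.0)
            down = low[::2, ::2]
            up = zoom(down, 2.0, order=1)[: current.shape[0], : current.shape[1]]
            pyramid.append(current - up)   # band-pass detail at this scale
            current = down
        pyramid.append(current)            # coarsest low-pass residual
        return pyramid

The directional filter bank then groups these isolated discontinuities along dominant orientations, which is what lets the model follow contours rather than isolated points.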
An Efficient Block Matching Algorithm Using Logical ImageIJERA Editor
Motion estimation, which has been widely used in various image sequence coding schemes, plays a key role in the transmission and storage of video signals at reduced bit rates. There are two classes of motion estimation methods: block matching algorithms (BMA) and pel-recursive algorithms (PRA). Due to their implementation simplicity, block matching algorithms have been widely adopted by video coding standards such as CCITT H.261, ITU-T H.263, and MPEG. In BMA, the current image frame is partitioned into fixed-size rectangular blocks, and the motion vector for each block is estimated by finding the best-matching block of pixels within a search window in the previous frame, according to a matching criterion. The goal of this work is a fast method for motion estimation and motion segmentation using the proposed model. Present-day communication is facilitated by developments in wired and wireless networks, and transmitting large data files over limited-bandwidth channels remains a challenge. Block matching algorithms are very useful for achieving efficient, acceptable compression, and the choice of algorithm determines the total computation cost and the effective bit budget; different approaches to motion estimation can be followed, but these constraints must be kept in mind. This paper presents a novel method for block-based motion estimation that uses three-step and diamond search algorithms with a modified search pattern based on a logical image. The proposed algorithm improves PSNR while achieving better (faster) computation time than the original three-step search (3SS/TSS) method. Experimental results on a number of video sequences are presented to demonstrate the advantages of the proposed motion estimation technique.
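A minimal sketch of the classic three-step search that the paper modifies (sum-of-absolute-differences matching; names and the SAD criterion are illustrative assumptions, not taken from the paper):

    import numpy as np

    def sad(block, ref, y, x):
        """Sum of absolute differences between `block` and the candidate at (y, x)."""
        if y < 0 or x < 0:
            return np.inf  # candidate falls outside the frame
        h, w = block.shape
        patch = ref[y:y + h, x:x + w]
        if patch.shape != block.shape:
            return np.inf
        return np.abs(block.astype(np.int32) - patch.astype(np.int32)).sum()

    def three_step_search(block, ref, y0, x0, step=4):
        """Classic 3SS: probe a 3x3 grid of candidates, halve the step, repeat."""
        best_y, best_x = y0, x0
        while step >= 1:
            candidates = [(best_y + dy * step, best_x + dx * step)
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            best_y, best_x = min(candidates, key=lambda p: sad(block, ref, *p))
            step //= 2
        return best_y - y0, best_x - x0   # motion vector (dy, dx)

With an initial step of 4, 3SS evaluates at most 25 candidates per block versus 225 for an exhaustive +/-7 search, which is the source of its speed advantage.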
The development of multimedia technology has made content-based image retrieval (CBIR) one of the prominent areas for retrieving images from a large database. The feature vectors of the query image are compared with the feature vectors of the database images to find matching images. It is widely observed that no single algorithm is effective at extracting all the different kinds of natural images. Therefore, an intensive analysis of selected color, texture, and shape extraction techniques is carried out to identify an efficient CBIR technique suited to a particular type of image. Image extraction includes feature description and feature extraction. In this paper, we propose a feature extraction technique using the Color Layout Descriptor (CLD), the Gray-Level Co-occurrence Matrix (GLCM), and Marker-Controlled Watershed Segmentation, which retrieves the matching image based on the similarity of color, texture, and shape within the database. For performance analysis, the image retrieval timing results of the proposed technique are calculated and compared with each of the individual features.
WEB IMAGE RETRIEVAL USING CLUSTERING APPROACHEScscpconf
Image retrieval is an active area, and we propose a new approach to retrieve images from a large image database. We propose an algorithm that represents images using divisive and partition-based clustering approaches. The HSV color components and the Haar wavelet transform are used to extract image features. These features are used to segment an image into objects: a modified k-means clustering algorithm groups similar pixels into K groups with cluster centers. To modify k-means, we propose a divisive clustering algorithm that determines the number of clusters and passes it back to k-means to obtain significant object groups. In addition, we discuss a similarity distance measure using a threshold value and object uniqueness to quantify the results.
AN EFFICIENT M-ARY QIM DATA HIDING ALGORITHM FOR THE APPLICATION TO IMAGE ERR...IJNSA Journal
Methods such as edge-directed interpolation and projection onto convex sets (POCS), which are widely used for image error concealment because they produce better image quality, are complex and time-consuming. Moreover, they are not suitable for real-time error concealment, where the decoder may not have sufficient computational power or must operate online. In this paper, we propose a data-hiding scheme for error concealment in digital images. Edge direction information for each block is extracted in the encoder and embedded imperceptibly into the host media using quantization index modulation (QIM), which reduces the workload of the decoder. The system performance, in terms of fidelity and computational load, is improved using M-ary data modulation based on near-orthogonal QIM. The decoder extracts the embedded features (edge information), which are then used to recover the lost data. Experimental results support the effectiveness of the proposed scheme.
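The underlying QIM idea can be shown with a binary scalar quantizer (a minimal sketch, not the paper's M-ary near-orthogonal construction): each bit selects one of two interleaved quantization lattices, and the decoder recovers the bit by finding the nearer lattice.

    import numpy as np

    DELTA = 8.0  # quantization step; larger = more robust, lower fidelity

    def qim_embed(value: float, bit: int) -> float:
        """Quantize onto the lattice offset by bit * DELTA / 2."""
        offset = bit * DELTA / 2.0
        return np.round((value - offset) / DELTA) * DELTA + offset

    def qim_extract(value: float) -> int:
        """Decide which lattice the (possibly perturbed) value is closest to."""
        return min((0, 1), key=lambda b: abs(qim_embed(value, b) - value))

    x = 123.0
    for bit in (0, 1):
        y = qim_embed(x, bit)
        assert qim_extract(y + 1.5) == bit   # survives perturbation below DELTA / 4

Because extraction is a single nearest-lattice decision, the decoder stays cheap, which matches the paper's goal of shifting work to the encoder.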
Performance analysis on color image mosaicing techniques on FPGAIJECEIAES
Today, surveillance and other monitoring systems often capture image sequences that must be combined into a single frame: the captured images are merged to produce a mosaiced image or a combined image sequence. However, the captured images may suffer from quality issues such as brightness differences, alignment (correlation) problems, resolution limits, and manual image registration. An existing technique such as cross-correlation can offer good image mosaicing but suffers from brightness issues. This paper therefore introduces two methods for mosaicing on a Field Programmable Gate Array (FPGA): (a) Sliding Window Module (SWM) based Color Image Mosaicing (CIM) and (b) Discrete Cosine Transform (DCT) based CIM. The SWM-based CIM performs corner detection on two images and carries out automatic image registration, while the DCT-based CIM handles both local and global alignment of images using a phase correlation approach. Finally, the performance of the two methods is analyzed by comparing PSNR, MSE, device utilization, and execution time. The analysis concludes that DCT-based CIM offers significantly better results than SWM-based CIM.
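The global alignment step rests on phase correlation, which recovers a translation between two images from the normalized cross-power spectrum. A minimal NumPy sketch (a software illustration of the principle, not the paper's FPGA design):

    import numpy as np

    def phase_correlation(a: np.ndarray, b: np.ndarray):
        """Return (dy, dx) such that np.roll(b, (dy, dx), axis=(0, 1)) aligns b with a."""
        A = np.fft.fft2(a.astype(np.float64))
        B = np.fft.fft2(b.astype(np.float64))
        cross = A * np.conj(B)
        cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(corr.argmax(), corr.shape)
        # peaks beyond the midpoint wrap around to negative shifts
        if dy > a.shape[0] // 2: dy -= a.shape[0]
        if dx > a.shape[1] // 2: dx -= a.shape[1]
        return dy, dx

Discarding the magnitude is what makes the method robust to the brightness differences that defeat plain cross-correlation.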
Object gripping algorithm for robotic assistance by means of deep leaning IJECEIAES
This paper presents the use of recent state-of-the-art deep learning techniques, still little explored in robotic applications, in a new algorithm based on Faster R-CNN and CNN regression. Typical machine vision systems require multiple stages to locate an object so that a robot can grasp it, which increases system noise and processing time. Region-based convolutional networks solve this problem: two convolutional architectures are used, one for the classification and localization of three types of objects, and one to determine the grip angle for a robotic gripper. In the established virtual environment, the grip algorithm runs at up to 5 frames per second with 100% object classification. With the Faster R-CNN implementation, it attains 100% accuracy on the classifications of the test database and over 97% average precision in locating the generated boxes for each element, gripping the objects successfully.
Super-resolution (SR) is the process of obtaining a high-resolution (HR) image or a sequence of HR images from a set of low-resolution (LR) observations. Block matching algorithms are used for motion estimation to obtain motion vectors between frames in super-resolution. The implementation and comparison of two block matching algorithms, Exhaustive Search (ES) and Spiral Search (SS), are discussed. The advantages of each algorithm are given in terms of motion estimation computational complexity and Peak Signal-to-Noise Ratio (PSNR). The Spiral Search algorithm achieves PSNR close to that of Exhaustive Search at less computation time. The algorithms evaluated in this paper are widely used in video super-resolution and have also been used in implementing video standards such as H.263, MPEG-4, and H.264.
COLOR IMAGE ENCRYPTION BASED ON MULTIPLE CHAOTIC SYSTEMSIJNSA Journal
This paper proposes a novel color image encryption scheme based on multiple chaotic systems. The ergodicity of chaotic systems is used to perform the permutation process, and a substitution operation is applied to achieve the diffusion effect. In the permutation stage, the 3D color plain-image matrix is converted to a 2D image matrix; two generalized Arnold maps are then employed to generate hybrid chaotic sequences that depend on the plain image's content. The generated chaotic sequences are applied to perform the permutation. The encryption key streams depend not only on the cipher keys but also on the plain image, so the scheme can resist chosen-plaintext as well as known-plaintext attacks. In the diffusion stage, four pseudo-random gray-value sequences are generated by another generalized Arnold map. These sequences perform the diffusion by XOR-ing with the permuted image row by row or column by column, improving the encryption rate. Security and performance analyses have been performed, including key space, histogram, correlation, information entropy, key sensitivity, and differential analyses. The experimental results show that the proposed image encryption scheme is highly secure thanks to its large key space and efficient permutation-substitution operation, making it suitable for practical image and video encryption.
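As a hedged sketch of the permutation primitive (the generalized Arnold cat map; parameters a, b and the round count are illustrative, and the paper's plain-image-dependent key streams are not modeled), the map scrambles the pixel coordinates of an N x N image bijectively:

    import numpy as np

    def arnold_permute(img: np.ndarray, a: int = 1, b: int = 1, rounds: int = 5) -> np.ndarray:
        """Scramble an N x N image with the generalized Arnold map:
        (x, y) -> (x + a*y, b*x + (a*b + 1)*y) mod N. The map matrix has
        determinant 1, so it is invertible and decryption iterates the inverse."""
        n = img.shape[0]
        out = img.copy()
        for _ in range(rounds):
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            nx = (x + a * y) % n
            ny = (b * x + (a * b + 1) * y) % n
            scrambled = np.empty_like(out)
            scrambled[nx, ny] = out[x, y]   # bijection: every target written exactly once
            out = scrambled
        return out

Permutation alone leaves the histogram unchanged, which is why the scheme pairs it with an XOR-based diffusion stage.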
Super-Spatial Structure Prediction Compression of Medicalijeei-iaes
The demand to preserve raw image data for further processing has increased with the rapid growth of digital technology. In the medical industry, images generally come in highly correlated sequences. These images are very important, so a lossless compression technique is required to reduce the number of bits needed to store the sequences and the time needed to transmit them over a network. The proposed compression method combines super-spatial structure prediction with inter-frame coding, including motion estimation and motion compensation, to achieve a higher compression ratio. Motion estimation and compensation use a fast block-matching process, the inverse diamond search method. To enhance the compression ratio further, we propose a new scheme based on Bose-Chaudhuri-Hocquenghem (BCH) codes. Results are compared with prior art in terms of compression ratio and bits per pixel. Experimental results show that the proposed algorithm achieves 30% more reduction on medical image sequences than other state-of-the-art lossless image compression methods.
41 9147 quantization encoding algorithm based edit tyasIAESIJEECS
In the field of digital data, there is growing demand for bandwidth for the transmission of videos and images all over the world. To reduce storage space in image applications, an image compression process requiring less transmission bandwidth is needed. In this paper we propose a new technique for compressing satellite images using a Region of Interest (ROI) based lossy method, the quantization encoding algorithm. The performance of our method is evaluated by analyzing the PSNR values of the output images.
Modified Skip Line Encoding for Binary Image Compressionidescitation
Image compression is an important issue in Internet, mobile communication, digital library, digital photography, multimedia, teleconferencing, and other applications. Work on image compression focuses on the problem of optimizing storage space and transmission bandwidth. In this paper, a modified form of skip-line encoding is proposed to further reduce the redundancy in the image. Its performance is found to be better than that of standard skip-line encoding.
Empirical Coding for Curvature Based Linear Representation in Image Retrieval...iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Image Segmentation Using Pairwise Correlation ClusteringIJERA Editor
A pairwise hypergraph-based image segmentation framework is formulated in a supervised manner for various images. Image segmentation infers the edge labels over the pairwise hypergraph by maximizing the normalized cuts. Correlation clustering, a graph partitioning algorithm, has been shown to be effective in a number of applications such as identification, document clustering, and image segmentation. The partitioning result is derived from an algorithm that partitions a pairwise graph into disjoint groups of coherent nodes. In pairwise correlation clustering, the pairwise graph used in correlation clustering is generalized to a superpixel graph, where a node corresponds to a superpixel and a link between adjacent superpixels corresponds to an edge. Pairwise correlation clustering also considers a feature vector that extracts several visual cues from a superpixel, including brightness, color, texture, and shape. Significant progress in clustering has been achieved by algorithms based on pairwise affinities between data points. Experimental results are shown by calculating the typical cut and performing inference in an undirected graphical model on several datasets.
A Smart Camera Processing Pipeline for Image Applications Utilizing Marching ...sipij
Image processing in machine vision is a challenging task because these systems often have to meet real-time requirements. To accelerate the processing tasks in machine vision and to reduce data transfer latencies, new architectures for embedded systems in intelligent cameras are required, along with innovative processing approaches to realize these architectures efficiently. Marching Pixels are such a processing scheme, based on Organic Computing principles, and can be applied, for example, to determine object centroids in binary or gray-scale images. In this paper, we present a processing pipeline for smart camera systems utilizing such Marching Pixel algorithms. It consists of a buffering template for image pre-processing tasks in an FPGA to enhance captured images and an ASIC for the efficient realization of Marching Pixel approaches. The ASIC achieves a speedup of eight for Marching Pixel algorithms compared with a common medium-performance DSP platform.
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...IAEME Publication
In this paper, we implement a new image compression and decompression model that searches the target image based on robust image-block variance estimation. Many image compression methods have been proposed in the literature to minimize the error rate and the compression ratio. For encoding medium-type images, traditional models use a hierarchical scheme that enables the use of upper, left, and lower pixels for pixel prediction, whereas conventional raster-scan prediction methods use only upper and left pixels. In the proposed work, we use block estimation and the image distortion rate to optimize the compression ratio and minimize the error rate. Experimental results show that the proposed model gives a higher compression rate and a lower error rate than traditional models.
Strong Image Alignment for Meddling Recognision PurposeIJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Scene Text Detection of Curved Text Using Gradiant Vector Flow MethodIJTET Journal
Abstract--Text detection and recognition is a hot topic for researchers in image processing and multimedia. The content-based image retrieval (CBIR) community fills the semantic gap between low-level and high-level features. Several methods have been developed for text detection and extraction that achieve reasonable accuracy for multi-oriented text and natural scene text (camera images). Most methods use a classifier and a large number of training samples to improve text detection accuracy, and connected components are generally used to tackle the multi-orientation problem. Connected-component features with classifier training work well when images have high contrast. However, when the same methods are applied directly to text detection in video, they produce disconnections, loss of shape, and similar problems because of low contrast and complex backgrounds; in such cases, choosing geometric features for the components and the classifier is not easy. To overcome this problem, the proposed research uses a gradient vector flow (GVF) and grouping-based method for arbitrarily oriented scene text detection. The GVF of edge pixels in the Sobel edge map of the input frame is explored to identify the dominant edge pixels that represent text components. The method extracts the edge components corresponding to dominant pixels in the Sobel edge map, called the text candidates (TC) of the text lines. Experimental results on different datasets, including arbitrarily oriented, non-horizontal, and horizontal text data, Hua's data, and the ICDAR-03 (camera image) data, show that the proposed method outperforms existing methods.
Improving the iterative back projection estimation through Lorentzian sharp i...IJECEIAES
This study proposes an enhancement technique that improves the estimation in iterative back projection (IBP) by using the Lorentzian error function with a sharp infinite symmetrical filter (SISEF). IBP estimation is an iterative error correction that can significantly reduce reconstruction error. However, IBP suffers from jaggy and ringing artifacts, a consequence of the iterative reconstruction and the absence of edge guidance. Furthermore, because the IBP estimator tends to oscillate around the same solution, numerous iterations are required. This study therefore proposes edge enhancement for the estimator, combining IBP with the Lorentzian SISEF to produce a finer high-resolution output image. The SISEF improves the estimator by providing highly accurate edge detail for enhancing the edge image, while the Lorentzian error norm increases the robustness of the IBP algorithm against additive noise contamination and ringing artifacts.
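The core IBP loop the study builds on can be sketched as follows (a minimal sketch of classical iterative back projection with bilinear up/down-sampling as the assumed imaging model; the Lorentzian-SISEF refinement is not reproduced here):

    import numpy as np
    from scipy.ndimage import zoom

    def iterative_back_projection(lr: np.ndarray, scale: int = 2,
                                  iters: int = 30, beta: float = 0.5) -> np.ndarray:
        """Classical IBP: refine an HR estimate until its simulated LR
        version matches the observed LR image."""
        hr = zoom(lr.astype(np.float64), scale, order=1)      # initial HR guess
        for _ in range(iters):
            simulated_lr = zoom(hr, 1.0 / scale, order=1)     # forward imaging model (sketch)
            error = lr - simulated_lr                         # residual in LR space
            hr += beta * zoom(error, scale, order=1)          # back-project the error
        return np.clip(hr, 0.0, 255.0)

The study's contribution slots into the back-projection step: weighting the residual with a Lorentzian norm and guiding it with SISEF edge information suppresses the ringing this plain loop produces.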
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
Adaptive Super-Spatial Prediction Approach for Lossless Image Compression
Arpita C. Raut*, Dr. R. R. Sedamkar**
* Department of Electronics, VIVA College of Diploma Engg. & Tech., Virar (W), India
** Department of Computer Engg., Thakur College of Engg. & Tech., Kandivali (E), India
ABSTRACT
Existing prediction-based lossless image compression schemes predict image data from a spatial neighborhood, and therefore cannot predict high-frequency image structure components, such as edges, patterns, and textures, very well, which limits compression efficiency. To exploit these structure components, an adaptive super-spatial prediction approach is developed, adapted to compressing high-frequency structure components of grayscale images. The motivation behind the proposed prediction approach is taken from motion prediction in video coding: it attempts to find an optimal prediction of structure components within the previously encoded image regions. For image regions with significant structure components, this prediction approach is efficient in terms of compression ratio and bit rate compared with CALIC (context-based adaptive lossless image coding).
Keywords - Adaptive super-spatial prediction approach, bit rate, compression performance, context-based adaptive lossless image coding (CALIC), structure components.
I. INTRODUCTION
The importance of image compression grows daily with advancing communication technology, as many image applications require large storage space. The goal of image compression is to represent an image in compact form, using as few bits as possible without degrading image quality, thereby reducing the memory needed to store images and increasing transmission rates.
Image compression can be broadly classified into two categories:
1) Lossless image compression: the original image and the image after compression and decompression are exactly the same, because the compression and decompression algorithms are exact inverses of each other.
2) Lossy image compression: some information is discarded and cannot be recovered, so lossy compression does not allow the exact original data to be reconstructed from the compressed data.
Image compression algorithms are divided into two main categories according to the method used to remove spatial redundancy:
a) Transformation-based image compression
b) Prediction-based image compression
Transform-based coding, also known as block quantization, exploits the spatial frequency information contained in the image to achieve compression. An image is first transformed from its spatial-domain representation to a different representation using a well-known transform; the transformed values (coefficients) are then encoded, so that a large fraction of the image energy is compacted into relatively few transform coefficients, which are quantized independently.
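To make the energy-compaction idea concrete, here is a minimal sketch (illustrative only, not part of this paper's method) that applies an 8x8 two-dimensional DCT to a synthetic smooth block and measures how much of the block energy lands in the few largest coefficients:

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    # 2-D DCT via separable 1-D orthonormal DCTs along rows and columns.
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

# Synthetic smooth 8x8 block (a gentle gradient, typical of natural images).
x = np.arange(8)
block = 100 + 5 * x[None, :] + 3 * x[:, None]

coeffs = dct2(block.astype(float))
energy = coeffs ** 2
top4 = np.sort(energy.ravel())[-4:].sum()
# Fraction of the total energy held by the 4 largest of 64 coefficients.
print(f"energy in 4 of 64 coefficients: {100 * top4 / energy.sum():.1f}%")
```

For smooth content almost all of the energy ends up in the DC and first-order coefficients, which is what allows the remaining coefficients to be quantized or coded cheaply.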
Prediction-based coding predicts a pixel value from the values of its neighboring pixels and encodes the difference between the predicted and the actual data to obtain more efficient compression: the smaller the difference, the less information needs to be encoded. Prediction-based methods rely on prediction, context modeling, and entropy coding. The predictor removes a large amount of spatial redundancy by exploiting smooth areas in images. Context modeling further improves prediction by providing information about a pixel's context, such as horizontal or vertical edges. Entropy coding removes statistical redundancy and produces the final encoded bit stream.
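As a minimal, hedged illustration of prediction-based coding (using a simple left-neighbor predictor, not this paper's predictor), the sketch below shows that residuals are small in smooth areas and that the scheme is exactly invertible:

```python
import numpy as np

def predict_left(row):
    # Predict each pixel from its left neighbor; the first pixel is sent as-is.
    pred = np.empty_like(row)
    pred[0] = 0
    pred[1:] = row[:-1]
    return row.astype(int) - pred.astype(int)  # residuals for the entropy coder

def reconstruct(residuals):
    # Inverse of predict_left: cumulative sum recovers the row losslessly.
    return np.cumsum(residuals)

row = np.array([100, 101, 101, 103, 150, 152], dtype=np.uint8)
res = predict_left(row)
assert np.array_equal(reconstruct(res), row.astype(int))
print(res)  # [100, 1, 0, 2, 47, 2] - small everywhere except at the edge
```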
In image compression, the main task is to efficiently represent and encode high-frequency image structure components, such as edges, patterns, and textures. Existing lossless image compression schemes such as CALIC [1], [2] and LOCO-I [3] cannot predict pixel values well near edges, boundaries, or sharp transitions, which limits compression efficiency. Hence, there is a need for an efficient image prediction scheme that exploits these structure components in grayscale images.
This paper introduces a new prediction methodology that attempts to predict the high-frequency structure components of a grayscale
image using an adaptive super-spatial prediction approach for lossless image compression, which enhances prediction accuracy and compression efficiency. The motivation behind the proposed prediction approach is taken from motion prediction in video coding: it breaks the neighborhood constraint and finds an optimal prediction of structure components within the previously encoded image regions. Since the super-spatial prediction approach is adapted to compressing structure components, the image is first classified into structure and non-structure regions using multidirectional GAP (gradient-adjusted prediction) [4], which improves the speed of the prediction algorithm through parallel implementation. Structure regions (SRs) are encoded using the adaptive super-spatial prediction approach, while non-structure regions (NSRs) are encoded using CALIC. The proposed approach is designed to achieve good compression ratio compared with the existing lossless image compression algorithm CALIC, and the proposed prediction approach is implemented within CALIC.
The adaptive super-spatial prediction approach is novel because the best matches for structure components are simply searched within previously encoded image regions; therefore, no codebook is required in this compression scheme.
The structure of the paper is as follows. Section II presents the literature survey. Section III presents the adaptive super-spatial prediction approach. Section IV explains the residue encoding scheme. Section V gives the encoder and decoder block diagrams of the complete algorithm, followed by the simulation results, conclusion, and future scope.
II. LITERATURE SURVEY
In image compression schemes such as vector quantization and sequential data compression [5], better image prediction and coding efficiency are achieved by relaxing the neighborhood constraint. In sequential data compression, a substring of text is represented by a displacement/length reference to a substring previously seen in the text. In lossless image compression by vector quantization, an input image is processed as vectors of image pixels. The encoder takes in a vector and finds its best match in a stored codebook; the address of the best match and the residual between the original vector and its best match are then transmitted to the decoder. The decoder uses the address to access an identical codebook and obtains the reconstructed vector. The encoding performance of VQ-based methods largely depends on the codebook design. Extensions of the VQ method are visual pattern image coding (VPIC) [6] and visual pattern vector quantization (VPVQ) [7]. These methods suffer from poor coding efficiency and are therefore not competitive with state-of-the-art schemes such as context-based adaptive lossless image coding (CALIC) [1].
JPEG-LS is based on predictive coding; it is simple, easy to implement, consumes little memory, and is faster than JPEG 2000, although JPEG 2000 supports progressive transmission. JPEG-LS works efficiently on continuous-tone images. CALIC and LOCO-I are among the most efficient lossless image compression algorithms. LOCO-I uses principles similar to CALIC, but CALIC with arithmetic coding remains a benchmark against which the performance of other compression schemes is measured, giving a higher compression ratio than LOCO-I and JPEG-LS.
Super-spatial structure prediction [8], [9] is based on motion prediction in video coding; it breaks the neighborhood constraint and attempts to find an optimal prediction of structure components within the previously encoded image regions.
In still image compression using a texture and non-texture prediction model [10], an image is classified into texture and non-texture regions by an artificial neural network (ANN) classifier. The texture region is encoded with a similar block matching (SBM) encoder, and the non-texture region is encoded with SPIHT encoding.
III. ADAPTIVE SUPER-SPATIAL PREDICTION APPROACH
This section explains the basic idea of the adaptive super-spatial prediction approach for structure-region prediction. An image consists of many objects, and each object contains a significant amount of structure components. These structure components may repeat themselves at various locations and scales, and efficient image compression must exploit this type of image correlation. Figure 1(a) shows the Barbara image and Figure 1(b) shows four similar structure blocks extracted from it.
The idea of the adaptive super-spatial prediction approach is borrowed from motion prediction in video coding, where block matching is used. In motion prediction, as shown in Fig. 2, the best match of the current block is searched for in an area of the reference frame, based on some distortion metric. The chosen reference block becomes the predictor of the current block, and the prediction residual and the motion vector are then encoded and sent to the decoder.
Fig. 1 (a) Barbara image; (b) similar structure blocks extracted from Barbara.
Fig. 2 Motion prediction in video coding.
In the adaptive super-spatial prediction approach, as shown in Fig. 3, the prediction of the current image block is searched for within the previously encoded image region, and the chosen reference block is selected as the optimal prediction for the current block. To measure the block difference, the sum of absolute differences (SAD) is used: it requires no multiplication, which makes it computationally more attractive for real-time applications than MSE, which requires a squaring operation.
Fig. 3 Adaptive super-spatial prediction approach.
3.1 Block based image content separation
The 512 x 512 grayscale image is partitioned into 4x4 blocks. Multidirectional GAP (gradient-adjusted prediction) [4] is performed on the block-partitioned image, which reduces computational complexity through parallel implementation, and the prediction error is calculated. If the prediction error is greater than a threshold, the block is considered a structure block; otherwise it is a non-structure block. Based on the result of the multidirectional GAP prediction, a block classification map (BCM) is maintained. Structure blocks are encoded using the adaptive super-spatial prediction approach, whereas non-structure blocks are encoded using the conventional lossless image compression method CALIC. A sketch of this classification step is given below.
In multidirectional GAP, the image is divided into four equal parts, as illustrated in Fig. 4, and GAP is applied in four different directions. The interesting aspect of the multidirectional GAP technique is its parallel implementation.
Fig. 4 Four equal parts of the predictive template of multidirectional GAP.
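The following sketch illustrates the classification step under simplifying assumptions: a plain left-and-top-neighbor average stands in for the paper's multidirectional GAP, and the threshold T is hypothetical, since the paper determines its threshold experimentally (Section 3.2):

```python
import numpy as np

B = 4       # block size, as in the paper
T = 100.0   # hypothetical classification threshold (tuned experimentally in the paper)

def block_prediction_error(block):
    # Simplified stand-in for GAP: predict each pixel as the mean of its
    # left and top neighbors and sum the absolute prediction errors.
    b = block.astype(float)
    pred = np.zeros_like(b)
    pred[1:, 1:] = (b[1:, :-1] + b[:-1, 1:]) / 2.0
    pred[0, 1:] = b[0, :-1]
    pred[1:, 0] = b[:-1, 0]
    pred[0, 0] = b[0, 0]
    return np.abs(b - pred).sum()

def classify_blocks(img):
    # Returns the block classification map (BCM): True marks a structure block.
    h, w = img.shape
    bcm = np.zeros((h // B, w // B), dtype=bool)
    for i in range(0, h, B):
        for j in range(0, w, B):
            bcm[i // B, j // B] = block_prediction_error(img[i:i+B, j:j+B]) > T
    return bcm

img = np.random.randint(0, 256, (512, 512)).astype(np.uint8)
bcm = classify_blocks(img)
print(f"structure blocks: {bcm.sum()} of {bcm.size}")
```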
3.2 Threshold estimation
Since the adaptive super-spatial prediction approach for structure components is motivated by motion prediction in video coding, the sum of absolute differences (SAD) is used to compare the current block with previously encoded structure blocks, as shown in equation (1):
SAD = \sum_{i=1}^{L} \sum_{j=1}^{L} \left| I[i, j] - \hat{I}[i, j] \right|    (1)
where I[i, j] denotes the pixels of the original block and Î[i, j] those of the prediction. Most structure blocks can find their best match within the structure regions [8], which reduces computational complexity. The threshold value decides whether a candidate is a best match for a structure block; its value is determined by experimenting with different images and their compression results.
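A minimal sketch of the SAD-based search of equation (1), under the assumption that candidates are drawn from the causal (previously encoded) region by a brute-force scan; the paper additionally restricts the search to structure regions, which this sketch omits:

```python
import numpy as np

def sad(a, b):
    # Equation (1): sum of absolute differences between two L x L blocks.
    return np.abs(a.astype(int) - b.astype(int)).sum()

def best_match(img, top, left, L, threshold):
    # Brute-force scan of the causal region (all block positions whose rows
    # lie strictly above the current block) for the smallest-SAD reference.
    cur = img[top:top+L, left:left+L]
    best_pos, best_sad = None, np.inf
    for i in range(0, max(top - L + 1, 0)):
        for j in range(0, img.shape[1] - L + 1):
            d = sad(cur, img[i:i+L, j:j+L])
            if d < best_sad:
                best_pos, best_sad = (i, j), d
    # Accept the match only if its SAD is below the experimentally chosen
    # threshold; otherwise the block would fall back to CALIC coding.
    return (best_pos, best_sad) if best_sad <= threshold else (None, best_sad)
```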
3.3 Implementation in CALIC
CALIC(context based, adaptive lossless image
codec) is a spatial prediction based scheme, in which
GAP is used for image prediction which uses large
number of modeling contexts to condition a nonlinear
predictor and make it adaptive to varying source
statistics without suffering from context dilution
problem. CALIC only estimates expectation of
prediction errors conditioned on a large number of
contexts, rather than a large number of conditional error probabilities.
CALIC uses a two-step prediction/residual approach. It is a one-pass coding scheme that encodes and decodes in raster-scan order, using the previous two scan lines of coded pixels to form the prediction and the context. CALIC operates in two modes, binary and continuous-tone, and selects between them based on the local causal template, without using any side information; the compression methodologies for the two modes are different. Binary mode is selected when the current locality of the input image has no more than two distinct intensity values; a context-based adaptive ternary arithmetic coder is used to encode three symbols, including an escape symbol that triggers a return to continuous-tone mode.
Continuous-tone mode has four major components:
1) GAP: gradient-adjusted prediction uses the context gradient information to predict the intensity of the current pixel. This step is a linear prediction (see the sketch after this list).
2) Context selection and quantization: further removes the correlation between the GAP prediction errors by conditioning the errors on different context error energies. Quantization of the context error energies yields eight different error energy levels.
3) Context modeling of prediction errors: error modeling classifies the errors into different texture categories and then uses the corresponding sample means to obtain the final prediction, making the final prediction nonlinear.
4) Entropy coding of prediction errors using arithmetic or Huffman coding; arithmetic coding is generally used for better performance.
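For reference, a sketch of the GAP predictor as it is commonly described for CALIC. Neighbor names follow the usual compass convention (w = left, n = above, nw/ne = diagonals, ww/nn/nne = two steps away), and the fixed thresholds 80/32/8 are the values usually quoted for CALIC; treat this as an approximation of the codec's first, linear step rather than a definitive implementation:

```python
def gap_predict(w, n, nw, ne, ww, nn, nne):
    # Gradient-adjusted prediction (GAP), as commonly described for CALIC.
    # Horizontal and vertical gradient estimates from the causal neighbors:
    dh = abs(w - ww) + abs(n - nw) + abs(n - ne)
    dv = abs(w - nw) + abs(n - nn) + abs(ne - nne)
    if dv - dh > 80:          # sharp horizontal edge: copy the left neighbor
        return w
    if dh - dv > 80:          # sharp vertical edge: copy the top neighbor
        return n
    pred = (w + n) / 2.0 + (ne - nw) / 4.0
    if dv - dh > 32:          # horizontal edge
        pred = (pred + w) / 2.0
    elif dv - dh > 8:         # weak horizontal edge
        pred = (3 * pred + w) / 4.0
    elif dh - dv > 32:        # vertical edge
        pred = (pred + n) / 2.0
    elif dh - dv > 8:         # weak vertical edge
        pred = (3 * pred + n) / 4.0
    return pred
```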
IV. PREDICTION RESIDUE ENCODING
Arithmetic coding [11], [12] is a form of entropy encoding used in lossless data compression, especially suitable for small alphabets (e.g., binary sources) with highly skewed probabilities. Arithmetic codes are non-block codes: a one-to-one correspondence between source symbols and code words does not exist. Instead, an entire sequence of source symbols is assigned a single code word that defines an interval of real numbers between 0 and 1. As the number of symbols in the message increases, the interval used to represent it becomes smaller, and the number of bits needed to represent the interval becomes larger. Each symbol in the message reduces the size of the interval according to its probability of occurrence. Arithmetic coding typically achieves a better compression ratio than Huffman coding because it produces a single code word for the whole sequence rather than separate code words per symbol.
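A toy, floating-point sketch of this interval-narrowing idea; real coders use renormalized integer arithmetic and adaptive probability models, and the symbol set and probabilities below are made up for illustration:

```python
def arithmetic_encode(message, probs):
    # probs: symbol -> probability; build cumulative sub-intervals of [0, 1).
    intervals, c = {}, 0.0
    for s, p in probs.items():
        intervals[s] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        lo_s, hi_s = intervals[s]
        high = low + span * hi_s   # narrow the interval to the symbol's
        low = low + span * lo_s    # sub-range; likely symbols shrink it least
    return (low + high) / 2        # any value in [low, high) identifies the message

probs = {'a': 0.8, 'b': 0.15, 'c': 0.05}   # skewed source, where arithmetic coding shines
print(arithmetic_encode("aaab", probs))    # one real number for the whole message
```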
V. DESIGN OF PROPOSED WORK
5.1 Encoder
Fig. 5 Encoder of proposed work.
The image is segmented into 4x4 blocks, and multidirectional GAP is applied to them. If the prediction error of a block is greater than the threshold, the block is considered a structure block; otherwise it is a non-structure block. Structure blocks are encoded using the adaptive super-spatial prediction approach, and non-structure blocks are encoded using CALIC. This produces a block map consisting of the addresses of the reference blocks, together with prediction residues and prediction errors, which are passed to the arithmetic encoder. The encoded residues together with the block map form the compressed data, as shown in Fig. 5. The output bit stream of the proposed encoder consists of bits for the following major syntax components: bits for the non-structure regions, bits for the prediction residuals of the structure blocks, and the addresses of the reference blocks.
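Putting the pieces together, a schematic sketch of the encoder's control flow. The four components are passed in as callables and stand in for Sections 3.1-3.3 and IV; their interfaces and the block-map layout are assumptions, since the paper does not spell out the exact bitstream syntax:

```python
import numpy as np

B = 4  # block size used throughout the paper

def encode_image(img, classify_blocks, find_reference, calic_encode, arith_encode):
    # classify_blocks, find_reference, calic_encode, and arith_encode are
    # assumed stubs for the components sketched earlier, not the paper's code.
    bcm = classify_blocks(img)                       # 3.1: structure / non-structure map
    block_map, sr_residues, nsr_streams = [], [], []
    for (bi, bj), is_structure in np.ndenumerate(bcm):
        i, j = bi * B, bj * B
        block = img[i:i+B, j:j+B].astype(int)
        if is_structure:
            ri, rj = find_reference(img, i, j, B)    # 3.2: SAD search in causal region
            block_map.append((i, j, ri, rj))         # address of the reference block
            sr_residues.append(block - img[ri:ri+B, rj:rj+B].astype(int))
        else:
            nsr_streams.append(calic_encode(block))  # non-structure blocks go to CALIC
    # Encoded residues plus the block map form the compressed data (Fig. 5).
    return block_map, arith_encode(sr_residues), nsr_streams
```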
5.2 Decoder
The compressed data, consisting of the block map and the encoded residues, is given as input to the decoder. The encoded residues are passed to the arithmetic decoder to obtain the original residues, which are then combined with the block-map predictions to reconstruct the final lossless image, as shown in Fig. 6.
Fig. 6 Decoder of proposed work.
VI. SIMULATION RESULTS AND DISCUSSION
All simulations were performed in MATLAB 7.1 on standard 512x512, 8 bpp grayscale images (Lena, Barbara, Cameraman, Mandrill) in PGM format. A comparative analysis between the proposed work and CALIC was carried out, and results were obtained using the compression ratio and bit rate parameters on different images with different structure and non-structure regions.
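Both reported parameters are simple to compute from file sizes; a quick sketch, where the compressed size is a placeholder chosen so that the ratio reproduces the 2.844:1 figure reported below for Cameraman:

```python
def compression_metrics(original_bytes, compressed_bytes, width=512, height=512):
    ratio = original_bytes / compressed_bytes        # compression ratio, e.g. 2.844:1
    bpp = (compressed_bytes * 8) / (width * height)  # coding bit rate in bits per pixel
    return ratio, bpp

# A raw 512x512, 8-bpp image occupies 262144 bytes; 92160 is a placeholder.
ratio, bpp = compression_metrics(262144, 92160)
print(f"compression ratio = {ratio:.3f}:1, bit rate = {bpp:.3f} bpp")
```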
Table 1 - Compression performance comparison with CALIC.
The compression performance comparison with CALIC [1] is tabulated in Table 1. The Barbara image, which contains more structure components than the other images, is compressed by 20 KB more by the proposed work than by CALIC.
From the graph shown in Fig. 7, the Cameraman image shows the largest improvement in compression ratio over CALIC among the test images (2.844:1 for the proposed work versus 2.461:1 for CALIC).
Fig. 7 Comparison of compression ratio.
Table 2 - Coding bit rate comparison with CALIC.
Table 2 compares the coding bit rate of the proposed work, based on the new prediction approach, with that of CALIC [1], and reports the calculated bit rate savings.
Fig. 8 Comparison of coding bit rate.
The graph in Fig. 8 shows the performance of the proposed work against CALIC in terms of the bit rate parameter. The bit rate saving is greatest for the Barbara image, which contains more structure components repeated at various locations than the other images. The proposed prediction approach therefore compresses high-frequency structure components efficiently. Fig. 9 shows the original and the reconstructed lossless Lena image.
Fig. 9 (a) Original and (b) reconstructed Lena image.
VII. CONCLUSION AND FUTURE SCOPE
In this work, an efficient image prediction scheme called the adaptive super-spatial prediction approach was developed. Using CALIC as the base codec, the image was classified into structure and non-structure regions using multidirectional GAP with parallel implementation, and the regions were then encoded accordingly. The experimental results indicate that the proposed hybrid scheme of structure and non-structure region prediction is more efficient than the existing lossless image compression scheme CALIC, especially for images with significant structure components.
The adaptive super-spatial prediction approach outperformed the state-of-the-art algorithms. Since this approach deals only with a single image and does not exploit the correlation among frames in a sequence, and since there is considerable correlation among medical image sequences, this study can in future be extended to real-time video compression applications for medical images.
VIII. ACKNOWLEDGEMENTS
I would like to express my gratitude to my guide Dr. R. R. Sedamkar and to Prof. Vinitkumar Dongre for their supervision and continuous encouragement during my ME programme. Finally, my love and gratitude go to my family for their support and encouragement, especially to my husband.
REFERENCES
[1] X. Wu and N. Memon, "Context-based, adaptive, lossless image coding," IEEE Trans. Commun., vol. 45, no. 4, pp. 437-444, Apr. 1997.
[2] Hao Hu, "A study of CALIC," Department of Computer Science & Electrical Engineering, University of Maryland Baltimore County, Dec. 2004.
[3] M. J. Weinberger, G. Seroussi, and G. Sapiro, "The LOCO-I lossless image compression algorithm: Principles and standardization into JPEG-LS," IEEE Trans. Image Process., vol. 9, no. 8, pp. 1309-1324, Aug. 2000.
[4] V. Bajpai, D. Goyal, S. Debnath, and A. K. Tiwari, "Multidirectional gradient adjusted predictor," International Conference on Signal and Image Processing (ICSIP), pp. 349-352, Dec. 2010.
[5] J. Ziv and A. Lempel, "A universal algorithm for sequential data compression," IEEE Trans. Inform. Theory, vol. 23, no. 3, pp. 337-343, May 1977.
[6] D. Chen and A. Bovik, "Visual pattern image coding," IEEE Trans. Commun., vol. 38, no. 12, pp. 2137-2146, Dec. 1990.
[7] F. Wu and X. Sun, "Image compression by visual pattern vector quantization (VPVQ)," in Proc. IEEE Data Compression Conf., pp. 123-131, Dec. 2008.
[8] Xiwen Zhao and Zhihai He, "Local structure learning and prediction for efficient lossless image compression," IEEE, pp. 1286-1289, 2010.
[9] Xiwen Owen Zhao and Zhihai Henry He, "Lossless image compression using super-spatial structure prediction," IEEE Signal Processing Letters, vol. 17, pp. 383-386, Apr. 2010.
[10] G. Mohanbabu and P. Renuga, "Still image compression using texture and non-texture prediction model," American Journal of Applied Sciences, pp. 519-524, 2012.
[11] David Salomon, Data Compression: The Complete Reference, 3rd ed., Springer, 2007.
[12] Khalid Sayood, Introduction to Data Compression, 3rd ed., Elsevier Publications, 2006.