International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
EFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGES (ijcnac)
This project presents a new image compression technique for the coding of retinal and fingerprint images. Retinal images are used to detect diseases such as diabetes or hypertension, while fingerprint images are used for security purposes. In this work, the contourlet transform of the retinal or fingerprint image is taken first. The coefficients of the contourlet transform are then quantized using an adaptive multistage vector quantization scheme, in which the number of code vectors depends on the dynamic range of the input image.
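A minimal sketch of the adaptive codebook-sizing idea, assuming only that the number of code vectors grows with the image's dynamic range; the sizing rule, block size, and the use of k-means as the stage quantizer are illustrative stand-ins, not the paper's exact scheme:

```python
import numpy as np
from sklearn.cluster import KMeans

def adaptive_codebook_size(img, base=16, max_size=256):
    # Illustrative rule: more code vectors for a wider dynamic range.
    dynamic_range = int(img.max()) - int(img.min())
    return min(max_size, max(base, base * (dynamic_range // 64 + 1)))

def multistage_vq(blocks, n_stages=2, codebook_size=64):
    """Quantize blocks in stages; each stage codes the previous stage's residual."""
    residual = blocks.astype(np.float64)
    indices, codebooks = [], []
    for _ in range(n_stages):
        km = KMeans(n_clusters=codebook_size, n_init=4).fit(residual)
        indices.append(km.labels_)
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]
    return indices, codebooks

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
# Split into 4x4 blocks, flattened to 16-dimensional vectors.
blocks = img.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)
k = adaptive_codebook_size(img)
idx, cbs = multistage_vq(blocks, codebook_size=k)
```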
Comparative Analysis of Lossless Image Compression Based On Row By Row Classi... (IJERA Editor)
This document proposes and evaluates a near lossless image compression algorithm that divides color images into red, green, and blue channels. It classifies pixels in each channel row-by-row and records the results in mask images. The image data is then decomposed into sequences based on the classification and the mask images are hidden in the least significant bits of the sequences. Different encoding schemes like LZW, Huffman, and RLE are applied and compared. Experimental results on test images show the proposed algorithm achieves smaller bits per pixel than simple encoding schemes. PSNR values also indicate very little difference between original and reconstructed images.
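As a sketch of the simplest of the compared entropy coders, here is a run-length encoder/decoder of the kind that could be applied to the classified pixel sequences; LZW and Huffman would slot into the same position in the pipeline:

```python
def rle_encode(seq):
    """Run-length encode a sequence into (value, run_length) pairs, capping
    runs at 255 so each pair fits in two bytes."""
    out = []
    for v in seq:
        if out and out[-1][0] == v and out[-1][1] < 255:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]

row = [7, 7, 7, 0, 0, 255, 255, 255, 255]
assert rle_decode(rle_encode(row)) == row
```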
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR... (IAEME Publication)
This document presents a new optimized block estimation based image compression and decompression algorithm. The proposed method divides images into blocks and estimates each block from the previous frame using sum of absolute differences to determine the best matching block. It then compresses the luminance channel using JPEG-LS coding and predicts chrominance channels using hierarchical decomposition and directional prediction. Experimental results on test images show the proposed method achieves higher compression rates and lower distortion compared to traditional models that use hierarchical schemes and raster scan prediction.
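The sum-of-absolute-differences block search at the heart of the estimation step can be sketched as follows; the search window size and block shape are illustrative:

```python
import numpy as np

def best_match_sad(prev, block, top, left, search=8):
    """Find the position in `prev` minimizing the sum of absolute
    differences within +/- `search` pixels of (top, left)."""
    h, w = block.shape
    best, best_pos = np.inf, (top, left)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > prev.shape[0] or x + w > prev.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(prev[y:y+h, x:x+w].astype(int) - block.astype(int)).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos, best
```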
AN EFFICIENT M-ARY QIM DATA HIDING ALGORITHM FOR THE APPLICATION TO IMAGE ERR... (IJNSA Journal)
Methods such as edge-directed interpolation and projection onto convex sets (POCS), widely used for image error concealment to produce better image quality, are complex and time consuming. Moreover, such methods are not suitable for real-time error concealment, where the decoder may lack sufficient computational power or processing must be done online. In this paper, we propose a data-hiding scheme for error concealment of digital images. Edge direction information of a block is extracted in the encoder and embedded imperceptibly into the host media using quantization index modulation (QIM), thus reducing the workload of the decoder. The system performance in terms of fidelity and computational load is improved using M-ary data modulation based on near-orthogonal QIM. The decoder extracts the embedded features (edge information), which are then used to recover the lost data. Experimental results duly support the effectiveness of the proposed scheme.
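A minimal sketch of the binary QIM embedding/extraction such a scheme builds on; the paper's M-ary variant uses M offset quantization lattices instead of two, and the step size delta here is illustrative:

```python
import numpy as np

def qim_embed(x, bit, delta=8.0):
    """Quantize x onto one of two interleaved lattices, one per bit value."""
    offset = bit * delta / 2.0
    return delta * np.round((x - offset) / delta) + offset

def qim_extract(y, delta=8.0):
    """Decode by choosing the lattice whose quantized value is closest to y."""
    d0 = np.abs(y - qim_embed(y, 0, delta))
    d1 = np.abs(y - qim_embed(y, 1, delta))
    return int(d1 < d0)

y = qim_embed(123.4, 1)
assert qim_extract(y) == 1
```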
1) The document proposes using homogeneous motion discovery to generate additional reference frames for 4K video coding. Motion is estimated between reference frames and the current frame to generate affine motion models and associated masks (a least-squares affine fit is sketched after this list).
2) Experimental results on 3 video sequences show average bit rate savings of 3.78% over HEVC by using the additional reference frames generated from the affine motion models.
3) The approach provides a simpler computation method for high resolution video coding compared to motion hint estimation, which requires super-pixel segmentation that becomes impractical for resolutions like 4K.
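As flagged in point 1, a 6-parameter affine motion model can be fitted to point correspondences by least squares. This minimal numpy sketch assumes the correspondences come from some motion estimator and does not reproduce the paper's homogeneous-motion-discovery step:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 6-parameter affine model mapping src (N,2) -> dst (N,2)."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])      # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T                            # 2x3 matrix [[a, b, tx], [c, d, ty]]

def warp_points(M, pts):
    return pts @ M[:, :2].T + M[:, 2]

src = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
dst = src @ np.array([[1.02, 0.01], [-0.01, 0.99]]).T + np.array([3.0, -2.0])
M = fit_affine(src, dst)
assert np.allclose(warp_points(M, src), dst, atol=1e-6)
```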
Energy efficiency is one of the most critical issues in the design of a System on Chip. In Network-on-Chip (NoC) based systems, energy consumption is influenced dramatically by the mapping of Intellectual Property (IP) cores, which affects the performance of the system. In this paper we evaluate previously proposed algorithms and introduce a new energy-efficient mapping algorithm for 3D NoC architectures. In addition, a hybrid method has been implemented using a bio-inspired optimization technique (particle swarm optimization). The proposed algorithm has been implemented and evaluated on randomly generated benchmarks and real-life applications such as MMS, Telecom and VOPD. The algorithm has also been tested with the E3S benchmark and compared with the existing algorithms (spiral and crinkle), showing better reduction in communication energy consumption and improvement in system performance. Compared with spiral and crinkle, experimental results show that the average reduction in communication energy consumption is 19% and 17% respectively, the reduction in communication cost is 24% and 21%, and the reduction in latency is 24% and 22%. When our work and the existing methods are all optimized with the bio-inspired technique and compared, the average energy reduction is 18% and 24% respectively.
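A minimal random-key particle swarm optimization sketch of the mapping idea, assuming communication energy proportional to traffic volume times Manhattan hop distance on a 2D mesh; the cost model, swarm parameters, and traffic matrix are all illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cores, mesh = 9, 3                                # 9 IP cores on a 3x3 mesh
traffic = rng.integers(0, 50, (n_cores, n_cores))   # traffic volume per core pair

def decode(key):
    """Random-key decoding: the rank of key[i] is the mesh tile of core i."""
    tile = np.empty(n_cores, dtype=int)
    tile[np.argsort(key)] = np.arange(n_cores)
    return tile

def energy(key):
    """Communication energy ~ traffic volume x Manhattan hop distance."""
    tile = decode(key)
    y, x = tile // mesh, tile % mesh
    hops = np.abs(y[:, None] - y) + np.abs(x[:, None] - x)
    return float((traffic * hops).sum())

# Standard PSO over the continuous keys.
P, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = rng.random((P, n_cores))
vel = np.zeros_like(pos)
pbest, pval = pos.copy(), np.array([energy(p) for p in pos])
gbest = pbest[pval.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, P, n_cores))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([energy(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()].copy()
print("best mapping (core -> tile):", decode(gbest), "energy:", pval.min())
```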
This document proposes a method for remote sensing image retrieval using convolutional neural networks with weighted distance and result re-ranking. It has two stages: 1) An offline stage where a pre-trained CNN is fine-tuned on labeled images to extract features for the retrieval dataset. 2) An online stage where the fine-tuned CNN extracts features from a query image and calculates weighted distances to retrieved images, giving more preference to images from similar classes to the query. Experiments on two datasets show the method improves retrieval performance compared to state-of-the-art methods.
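A minimal sketch of the weighted-distance idea in the online stage, assuming a simple down-weighting of database images whose predicted class matches the query's; the actual weighting rule in the paper may differ:

```python
import numpy as np

def weighted_retrieval(q_feat, q_class, feats, classes, alpha=0.5):
    """Rank database images by Euclidean feature distance, down-weighting
    (alpha < 1 favours) images whose predicted class matches the query's.
    The weighting rule is an illustrative stand-in for the paper's."""
    d = np.linalg.norm(feats - q_feat, axis=1)
    w = np.where(classes == q_class, alpha, 1.0)
    return np.argsort(w * d)      # indices of database images, best first
```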
Object gripping algorithm for robotic assistance by means of deep learning (IJECEIAES)
This document presents a new algorithm using deep learning techniques for object gripping by a robot. It uses a Faster R-CNN to classify and locate three types of objects (cylinder, parallelepiped, toroid) with 100% accuracy. It also uses a CNN for regression to estimate the rotation angle of the parallelepiped with a mean error of 0.769 degrees. Testing in a virtual environment showed the algorithm could classify objects and grip them successfully at 5 frames per second using a three-finger gripper for increased stability over a two-finger gripper.
1) The document discusses robust adaptive beamforming techniques to improve robustness against uncertainties in array manifold, such as direction-of-arrival mismatch.
2) It proposes an alternative realization of robust linearly constrained minimum variance beamforming that uses an ellipsoidal uncertainty constraint on the steering vector.
3) A key contribution is integrating diagonal loading techniques by deriving an optimum variable loading level, providing "loading-on-demand" instead of fixed loading. This allows accurate computation of the diagonal loading level based on the uncertainty constraint.
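A minimal numpy sketch of minimum-variance beamforming with diagonal loading; the loading heuristic tied to the uncertainty bound eps is an illustrative stand-in, not the paper's derived optimum loading level:

```python
import numpy as np

def loaded_mvdr_weights(R, steering, loading):
    """Minimum-variance weights with diagonal loading:
    w = (R + loading*I)^(-1) a / (a^H (R + loading*I)^(-1) a)."""
    Rl = R + loading * np.eye(R.shape[0])
    Ria = np.linalg.solve(Rl, steering)
    return Ria / (steering.conj() @ Ria)

def heuristic_loading(R, eps):
    """Illustrative loading level scaled by an uncertainty bound eps on the
    steering vector -- a stand-in, not the paper's loading-on-demand rule."""
    return eps * np.trace(R).real / R.shape[0]

N = 8
a = np.exp(-1j * np.pi * np.arange(N) * np.sin(0.3))   # ULA steering vector
R = np.eye(N) + 10 * np.outer(a, a.conj())             # toy covariance matrix
w = loaded_mvdr_weights(R, a, heuristic_loading(R, eps=0.1))
```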
This document provides information on several remote sensing projects from IEEE 2015. It lists the titles, languages, and abstracts for 8 projects related to classification and analysis of hyperspectral and multispectral images. The projects focus on techniques such as sparse representation in tangent space, Gabor feature-based collaborative representation, level set evolutions for object extraction, and dimension reduction using spatial and spectral regularization.
This document discusses Gabor convolutional networks (GCNs), which incorporate Gabor filters into deep convolutional neural networks (DCNNs) to improve robustness to orientation and scale changes. GCNs modulate the learned convolution filters with Gabor filters, generating convolutional Gabor orientation filters (GoFs) that give the network the ability to capture spatial localization, orientation selectivity, and spatial frequency selectivity. This allows GCNs to learn more compact yet enhanced representations with improved generalization to image transformations, outperforming conventional DCNN architectures. The incorporation of Gabor filters into the convolution operation can be integrated into any deep learning model to enhance robustness to scale and rotation variations.
This document compares the performance of three mobile ad hoc network (MANET) routing protocols: AODV, FSR, and IERP. It uses the QualNet network simulator to evaluate these protocols based on various metrics like throughput, average jitter, average end-to-end delay, and packet delivery ratio. The protocols are evaluated under different node speeds on a grid topology network with 90 nodes over an area of 1500x1500 meters. Simulation results show that AODV generally performs best in terms of throughput and packet delivery ratio across varying node speeds, while FSR performs worst for these metrics. IERP shows the worst performance for average end-to-end delay and average jitter as node speed increases.
The proposed system implements an image watermarking technique that incorporates human visual system (HVS) models into watermark embedding. The watermarking is performed in the wavelet domain. The algorithm first calculates the coarseness of different subbands (HH, HL, LH) to select the subband with the highest coarseness for watermark embedding. It then embeds the watermark bits into the selected subband by modifying the least significant bits of coefficients based on their values. Experimental results on test images show the technique is robust, with average watermark extraction rates of 80-95% and high PSNR values, even after filtering.
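A sketch of the subband-selection step using PyWavelets, with subband variance standing in for the paper's coarseness measure; the wavelet choice and the HL/LH/HH labelling convention are assumptions:

```python
import numpy as np
import pywt

def select_coarsest_subband(img):
    """One-level DWT; pick the detail subband with the highest variance,
    variance being a simple stand-in for the paper's coarseness measure."""
    _, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'haar')
    bands = {'HL': cH, 'LH': cV, 'HH': cD}      # conventional labelling
    name = max(bands, key=lambda n: bands[n].var())
    return name, bands[name]
```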
Scene Text Detection of Curved Text Using Gradient Vector Flow Method (IJTET Journal)
Abstract--Text detection and recognition is a hot topic for researchers in the field of image processing and multimedia. The Content Based Image Retrieval (CBIR) community tries to fill the semantic gap between low-level and high-level features. Several methods have been developed for text detection and extraction that achieve reasonable accuracy for multi-oriented text and natural scene text (camera images). Most of these methods use a classifier and a large number of training samples to improve text detection accuracy, and connected components are generally used to tackle the multi-orientation problem. Connected-component-based features with classifier training work well when the images have high contrast. However, when the same methods are applied directly to text detection in video, they suffer from disconnections, loss of shape, and similar problems because of low contrast and complex backgrounds. In such cases, deciding the geometrical features of the components and the classifier is not easy. To overcome this problem, the proposed research uses a Gradient Vector Flow (GVF) and grouping based method for arbitrarily oriented scene text detection. The GVF of edge pixels in the Sobel edge map of the input frame is explored to identify the dominant edge pixels that represent text components. The method extracts the dominant pixels' edge components corresponding to the Sobel edge map, called Text Candidates (TC) of the text lines. Experimental results on different datasets, including arbitrarily oriented text data, non-horizontal and horizontal text data, Hua's data, and the ICDAR-03 data (camera images), show that the proposed method outperforms existing methods.
Reduced-reference Video Quality Metric Using Spatial Information in Salient R... (TELKOMNIKA JOURNAL)
In multimedia transmission, it is important to rely on an objective quality metric which accurately
represents the subjective quality of processed images and video sequences. Maintaining acceptable
Quality of Experience in video transmission requires the ability to measure the quality of the video seen at
the receiver end. Reduced-reference metrics make use of side-information that is transmitted to the
receiver for estimating the quality of the received sequence with low complexity. This attribute enables
real-time assessment and detection of visual degradation caused by transmission and compression errors. A novel reduced-reference video quality metric, known as the Spatial Information in Salient Regions Reduced Reference Metric, is proposed. The proposed approach makes use of spatial activity to estimate the distortion of the received sequence after concealment. The statistical elements analysed in this work are based on extracted edges and their luminance distributions. Results highlight that the proposed edge dissimilarity measure has a good correlation with DMOS scores from the LIVE Video Database.
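A minimal sketch of the kind of side information such a reduced-reference metric might use: a luminance histogram restricted to strong Sobel edges, compared between reference and received frames. The threshold, bin count, and dissimilarity formula are illustrative, not the published metric:

```python
import numpy as np
from scipy import ndimage

def edge_luminance_histogram(frame, bins=32, thresh=30.0):
    """Side information: luminance histogram restricted to strong Sobel edges."""
    f = frame.astype(float)
    mag = np.hypot(ndimage.sobel(f, axis=0), ndimage.sobel(f, axis=1))
    h, _ = np.histogram(frame[mag > thresh], bins=bins, range=(0, 255))
    return h / max(h.sum(), 1)          # normalize; guard against no edges

def edge_dissimilarity(h_ref, h_rec):
    """Illustrative L1 dissimilarity between reference and received histograms."""
    return float(np.abs(h_ref - h_rec).sum())
```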
ROBUST COLOUR IMAGE WATERMARKING SCHEME BASED ON FEATURE POINTS AND IMAGE NOR... (csandit)
Geometric attacks can desynchronize the location of the watermark and hence cause incorrect
watermark detection. This paper presents a robust colour image watermarking scheme based on
visually significant feature points and image normalization technique. The feature points are
used as synchronization marks between watermark embedding and detection. The watermark is
embedded into non-overlapping normalized circular regions in the luminance component or the blue component of a colour image. The embedding of the watermark is carried out by modifying the DCT coefficient values in selected blocks. The original unmarked image is not required for watermark extraction. Experimental results show that the proposed scheme
successfully makes the watermark perceptually invisible as well as robust to common signal
processing and geometric attacks.
Compression technique using DCT fractal compression (Alexander Decker)
This document summarizes and compares different image compression techniques, including DCT, fractal compression, and their applications in steganography. It discusses how DCT works by transforming image data into frequency domains, while fractal compression exploits self-similarity within images. The document reviews several existing studies on combining these techniques with steganography and encryption. Specifically, it examines approaches that use DCT and fractal compression to improve data hiding capacity and security. Overall, the document provides an overview of key compression algorithms and their applications in digital watermarking and steganography.
11. Compression technique using DCT fractal compression (Alexander Decker)
1) The document discusses and compares different image compression techniques, specifically DCT and fractal compression.
2) Fractal compression works by finding self-similar patterns within an image during encoding, but can have a long computation time. DCT transforms an image into frequency coefficients that can be quantized for compression.
3) The document reviews previous work combining DCT and fractal compression with steganography and encryption to improve hiding capacity, imperceptibility, and security against subterfuge attacks. However, prior methods had limitations like low data hiding amounts or lack of protection for compressed data.
Reversible Data Hiding Using Contrast Enhancement Approach (CSCJournals)
Reversible data hiding is a technique used to hide data about an object's details. It is used to ensure security and to protect the integrity of the object against intended or unintended modification. Digital watermarking is a key ingredient of multimedia protection; however, most existing techniques distort the original content as a side effect of image protection. As a way to overcome such distortion, reversible data embedding has recently been introduced and is growing rapidly. In reversible data embedding, the original content can be completely restored after the removal of the watermark. Therefore, it is very practical for protecting legal, medical, or other important imagery. In this paper a novel removable (lossless) data hiding technique is proposed. The technique is based on histogram modification to produce extra space for embedding, and the redundancy in digital images is exploited to achieve a very high embedding capacity. The method has been applied to various standard images. The experimental results demonstrate a promising outcome: the proposed technique achieved satisfactory and stable performance in both embedding capacity and visual quality, with a capacity of up to 129K bits and a PSNR between 42 and 45 dB. The performance is hence better than most existing reversible data hiding algorithms.
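A minimal sketch of histogram-shift embedding in the spirit described (the peak bin carries the payload, intermediate bins are shifted to make room); it assumes the chosen zero bin is truly empty and omits the bookkeeping a full reversible implementation needs:

```python
import numpy as np

def hs_embed(img, bits):
    """Histogram-shifting embed: shift the bins between the peak bin p and a
    zero bin z to free bin p+1, then embed one bit per peak-valued pixel."""
    h = np.bincount(img.ravel(), minlength=256)
    p = int(h.argmax())
    z = p + 1 + int(h[p + 1:].argmin())          # emptiest bin above the peak
    out = img.astype(np.int32).copy()
    out[(out > p) & (out < z)] += 1              # shift to free bin p+1
    carriers = np.flatnonzero((img == p).ravel())[:len(bits)]
    flat = out.ravel()
    flat[carriers] += np.asarray(bits, dtype=np.int32)   # p stays 0, p+1 is 1
    return flat.reshape(img.shape).astype(np.uint8), (p, z)

img = np.random.default_rng(1).integers(0, 200, (64, 64), dtype=np.uint8)
marked, key = hs_embed(img, [1, 0, 1, 1])
```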
IRJET- Semantic Segmentation using Deep Learning (IRJET Journal)
The document discusses semantic image segmentation using deep learning techniques. It summarizes several state-of-the-art semantic segmentation models like U-Net, Dilated U-Net, PSPNet, Fully Convolutional DenseNets, Global Convolutional Network (GCN), DeepLabV3, and proposes an optimized FRRN model. It implements these models on the CamVid dataset and evaluates their performance using the intersection-over-union score, finding that the optimized FRRN model achieves a score of 0.87.
Projected Barzilai-Borwein Methods Applied to Distributed Compressive Spectru... (Polytechnique Montreal)
Cognitive radio allows unlicensed (cognitive) users to use licensed frequency bands by exploiting spectrum sensing techniques to detect whether or not the licensed (primary) users are present. In this paper, we present a compressed sensing approach applied to spectrum-occupancy detection in wide-band applications. The analog signals collected from each cognitive radio (CR) receiver at a fusion center are transformed into discrete-time signals using an analog-to-information converter (AIC) and then used to calculate the autocorrelation. For signal reconstruction, we exploit a novel approach to solve the optimization problem of minimizing both a quadratic (l2) error term and an l1-regularization term. Specifically, we propose the basic gradient projection (GP) and projected Barzilai-Borwein (PBB) algorithms, which offer better performance in terms of the mean squared error of the power spectrum density estimate and the detection probability of licensed signal occupancy.
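A simplified proximal-gradient sketch of the l2 + l1 reconstruction with a Barzilai-Borwein step size; the paper's GP/PBB algorithms work on a projected reformulation of the problem, which this stand-in does not reproduce:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_bb(A, b, lam, iters=200):
    """Proximal gradient for 0.5*||Ax - b||^2 + lam*||x||_1
    with a Barzilai-Borwein (BB1) step size."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    step = 1e-2
    for _ in range(iters):
        x_new = soft(x - step * g, step * lam)
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:
            step = (s @ s) / (s @ y)     # BB1 step
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 30, 77]] = [1.0, -2.0, 1.5]
x_hat = ista_bb(A, A @ x_true, lam=0.05)
```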
This document describes a website and online games called Bomomo that help children develop spatial intelligence as well as mathematical and drawing skills. Children use the computer mouse to draw moving coloured figures and lines, improving their concentration, observation, and spatial problem solving. These games also help strengthen skills such as memory and learning, and can be practised at any time and place without the need…
http://inarocket.com
Learn BEM fundamentals as fast as possible. What is BEM (Block, element, modifier), BEM syntax, how it works with a real example, etc.
How to Build a Dynamic Social Media Plan (Post Planner)
Stop guessing and wasting your time on networks and strategies that don’t work!
Join Rebekah Radice and Katie Lance to learn how to optimize your social networks, the best kept secrets for hot content, top time management tools, and much more!
Watch the replay here: bit.ly/socialmedia-plan
The document discusses how personalization and dynamic content are becoming increasingly important on websites. It notes that 52% of marketers see content personalization as critical and 75% of consumers like it when brands personalize their content. However, personalization can create issues for search engine optimization as dynamic URLs and content are more difficult for search engines to index than static pages. The document provides tips for SEOs to help address these personalization and SEO challenges, such as using static URLs when possible and submitting accurate sitemaps.
Lightning Talk #9: How UX and Data Storytelling Can Shape Policy by Mika Aldaba (ux singapore)
How can we take UX and Data Storytelling out of the tech context and use them to change the way government behaves?
Showcasing the truth is the highest goal of data storytelling. Because the design of a chart can affect the interpretation of data in a major way, one must wield visual tools with care and deliberation. Using quantitative facts to evoke an emotional response is best achieved with the combination of UX and data storytelling.
This document summarizes a study of CEO succession events among the largest 100 U.S. corporations between 2005-2015. The study analyzed executives who were passed over for the CEO role ("succession losers") and their subsequent careers. It found that 74% of passed over executives left their companies, with 30% eventually becoming CEOs elsewhere. However, companies led by succession losers saw average stock price declines of 13% over 3 years, compared to gains for companies whose CEO selections remained unchanged. The findings suggest that boards generally identify the most qualified CEO candidates, though differences between internal and external hires complicate comparisons.
An Effect of Compressive Sensing on Image Steganalysis (IRJET Journal)
This document discusses using compressive sensing (CS) to recover secret signals that have been embedded in images through steganography. It proposes applying CS to extract features from stego-images using a block CS measurements matrix in order to detect differences between cover and stego-images. The reconstructed image from CS is then used to recover the embedded secret signal values. The method is compared to existing steganalysis techniques that cannot recover secret signals but can only detect their presence. It is concluded that applying CS in this domain could provide an improved way to detect and recover hidden secret messages from digital images.
Modified Skip Line Encoding for Binary Image Compression (idescitation)
This paper proposes a modified skip line encoding technique for lossless compression of binary images. Skip line encoding exploits correlation between successive scan lines by encoding only one line and skipping similar lines. The proposed technique improves upon existing skip line encoding by allowing a scan line to be skipped if a similar line exists anywhere in the image, rather than just successive lines. Experimental results on sample images show the modified technique achieves higher compression ratios than conventional skip line encoding.
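A minimal sketch of the modified idea: a scan line may reference any identical earlier line, not only its immediate predecessor. The record format is illustrative:

```python
import numpy as np

def skip_line_encode(img):
    """Encode each binary scan line (values 0/1) either literally or as a
    reference to any identical earlier line -- the 'modified' part, versus
    classic skip-line encoding which only checks the previous line."""
    seen, out = {}, []
    for row in img:
        key = row.tobytes()
        if key in seen:
            out.append(('skip', seen[key]))          # index of the earlier line
        else:
            seen[key] = len(out)
            out.append(('line', np.packbits(row)))   # literal, bit-packed
    return out
```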
A Novel Approaches For Chromatic Squander Less Visceral Coding Techniques Usi... (IJERA Editor)
Recent advances in video capturing and display technologies, along with the exponentially increasing demand of
video services, challenge the video coding research community to design new algorithms able to significantly
improve the compression performance of the current H.264/AVC standard. This target is currently gaining
evidence with the standardization activities in the High Efficiency Video Coding (HEVC) project. The distortion
models used in HEVC are mean squared error (MSE) and sum of absolute difference (SAD). However, they are
widely criticized for not correlating well with perceptual image quality. The structural similarity (SSIM) index
has been found to be a good indicator of perceived image quality. Meanwhile, it is computationally simple
compared with other state-of-the-art perceptual quality measures and has a number of desirable mathematical
properties for optimization tasks. We propose a perceptual video coding method to improve upon the current
HEVC based on an SSIM-inspired divisive normalization scheme as an attempt to transform the DCT domain
frame prediction residuals to a perceptually uniform space before encoding.
Based on the residual divisive normalization process, we define a distortion model for mode selection and show
that such a divisive normalization strategy largely simplifies the subsequent perceptual rate-distortion
optimization procedure. We further adjust the divisive normalization factors based on local content of the video
frame. Experiments show that the scheme can achieve significant gain in terms of rate-SSIM performance and
better visual quality when compared with HEVC.
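A minimal sketch of the divisive-normalization idea: scale a block's DCT residual by its local AC energy so that equal coded distortion is closer to perceptually uniform. The constant c and the exact factor are illustrative, not the scheme's derived values:

```python
import numpy as np
from scipy.fft import dctn

def divisive_normalize_block(residual_block, c=16.0):
    """SSIM-inspired divisive normalization of a DCT residual block:
    the encoder codes X / f; the decoder multiplies back by f."""
    X = dctn(residual_block.astype(float), norm='ortho')
    ac_energy = (X ** 2).sum() - X[0, 0] ** 2        # energy excluding DC
    f = np.sqrt((ac_energy + c) / (X.size - 1))      # normalization factor
    return X / f, f
```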
JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS... (ijma)
ABSTRACT
Image usage over the internet becomes more important each day: over 3 billion images are shared daily, which raises concerns about how to protect image copyrights and how to improve the image-sharing experience. This paper proposes a new robust image watermarking algorithm based on compressed sensing (CS) and quantization index modulation (QIM) watermark embedding. The algorithm capitalizes on CS to compress and encrypt images jointly with entropy coding, the Arnold cat map, pseudo-random numbers, and the Advanced Encryption Standard (AES). The proposed algorithm works under the JPEG standard umbrella. Watermark embedding is done in three different locations inside the image using QIM, and those locations differ for each 8-by-8 image block. The combination of coefficients used in QIM watermark embedding is selected from a combinations table, which is generated together with the projection matrices using a 10-digit pseudo-random number secret key SK1. After the quantization phase, the algorithm shuffles image blocks using Arnold's cat map with a 10-digit pseudo-random number secret key SK2, followed by a unique method for splitting every 8x8 block into two unequal parts. Part one acts as the host for two QIM watermarks and then goes through an encoding phase using Run-Length Encoding (RLE) followed by Huffman encoding, while part two goes through sparse watermark embedding followed by a third QIM watermark embedding and a CS compression phase, after which a Huffman encoder is used to encode this part. The algorithm aims to combine image watermarking, compression and encryption in one algorithm while balancing how those capabilities work with each other to achieve significant improvements in all three respects. Fifteen images commonly used in image processing benchmarking were used to test the algorithm, and experiments show that it achieves robust watermarking jointly with encryption and compression under the JPEG standard framework.
Survey paper on image compression techniques (IRJET Journal)
This document summarizes and compares several popular image compression techniques: wavelet compression, JPEG/DCT compression, vector quantization (VQ), fractal compression, and genetic algorithm compression. It finds that all techniques perform satisfactorily at 0.5 bits per pixel, but for very low bit rates like 0.25 bpp, wavelet compression techniques like EZW perform best in terms of compression ratio and quality. Specifically, EZW and JPEG are more practical than others at low bit rates. The document also notes advantages and disadvantages of each technique and concludes hybrid approaches may achieve even higher compression ratios while maintaining image quality.
33 8951 suseela g suseela paper8 (edit)new2 (IAESIJEECS)
The contemporary advancements in CMOS technology and multimedia systems have enabled image communication over resource-constrained Visual Sensor Networks (VSN). However, the size of image data is huge, so it is wise to send a compressed image over the bandwidth-limited network. Compression algorithms should also be energy efficient, have low complexity, and offer acceptable image quality. This work aims at an image coder offering a low bit rate of less than 0.5 bpp. In this paper, a low-memory and energy-efficient skipped zonal based Binary DCT is proposed that codes the transformed samples using Golomb-Rice codes. Simulation results offered a low bit rate of 0.39 bpp with a PSNR of 27.34 dB for the standard grayscale Lena test image of size 512x512 pixels.
34 8951 suseela g suseela paper8 (edit)new (IAESIJEECS)
The document proposes a low memory and energy efficient image compression technique called Skipped Zonal based Binary DCT (SZBinDCT) for transmitting images over resource constrained visual sensor networks. SZBinDCT first removes alternating rows and columns of pixels to reduce redundancy. It then applies a low complexity Binary DCT transform to the remaining pixels. Only the top left zone of transformed coefficients containing most energy are considered. These coefficients are quantized and entropy encoded using Golomb Rice coding. Simulation results show the technique compresses images to 0.39 bpp with 27.34 dB PSNR while consuming only 11.352 microjoules of energy per 8x8 block, significantly less than other techniques. The low complexity compression allows extended
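A minimal sketch of the Golomb-Rice coding applied to the transformed samples, with a standard zigzag map to make signed residuals non-negative; the Rice parameter k is illustrative:

```python
def zigzag_map(v):
    """Map signed values to non-negative ints: 0, -1, 1, -2, ... -> 0, 1, 2, 3, ..."""
    return 2 * v if v >= 0 else -2 * v - 1

def golomb_rice_encode(value, k=2):
    """Golomb-Rice code: unary quotient, a terminating 0, then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b')

# zigzag_map(-3) == 5; quotient 1, remainder 1 -> '1' + '0' + '01'
assert golomb_rice_encode(zigzag_map(-3), k=2) == '1001'
```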
Modern manufacturing methods permit the study and prediction of surface roughness, since signal acquisition and processing happen instantaneously. With the availability of better computing facilities and newer machine learning algorithms, online surface roughness prediction will lead to intelligent machines that alert the operator when the process crosses the specified roughness range. Prediction of surface roughness by multiple linear regression, regression tree, and M5P tree methods, using multivariable predictors and a single response variable Ra (surface roughness), is attempted. Vibration signals from the boring operation were acquired for the study, which predicts the surface roughness on the inner face of the workpiece. A machine learning approach was used to extract statistical features, which were analyzed in four different cases to achieve higher predictability, higher accuracy, low computing effort, and a reduced root mean square error. One of the cases applied feature reduction using Principal Component Analysis (PCA) to examine its effect.
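A minimal sklearn sketch of one of the described cases: PCA feature reduction feeding a multiple linear regression, evaluated by cross-validated RMSE. The feature matrix here is synthetic stand-in data, not the study's vibration features:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X stands in for statistical features of vibration signals; y for measured Ra.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 12))
y = 0.8 * X[:, 0] - 0.3 * X[:, 3] + rng.normal(0, 0.1, 120)

model = make_pipeline(PCA(n_components=5), LinearRegression())
rmse = -cross_val_score(model, X, y, cv=5,
                        scoring='neg_root_mean_squared_error').mean()
print(f"cross-validated RMSE: {rmse:.3f}")
```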
This document proposes a design method for efficient parallel processing in 3D standard-chip stacking systems using a standard bus. It presents a model for mapping parallel algorithms to a 3D-SCSS and describes a design flow. As an example, it maps the scale pyramid generation process of an image recognition algorithm across multiple processor chips in the 3D-SCSS. Analysis shows the independent resize approach reduces data transfer compared to iterative resize, though it requires synchronization. Estimated power consumption is a minimum of 691.2μW for data transfer at 10 frames per second.
IRJET- An Overview of Hiding Information in H.264/AVC Compressed Video (IRJET Journal)
This document provides an overview of hiding information in H.264/AVC compressed video. It discusses different information hiding techniques such as bit plane replacement, spread spectrum, histogram manipulation and matrix encryption. It identifies locations within the H.264 video compression process where information can be hidden, such as during prediction, transformation, quantization and entropy coding. It reviews related information hiding strategies for each location and compares strategies based on payload, overhead, video quality and complexity. The document aims to provide a better understanding of information hiding in compressed video and identify new opportunities.
This document presents a novel approach for jointly optimizing spatial prediction and transform coding in video compression. It aims to improve performance and reduce complexity compared to existing techniques. The proposed method uses singular value decomposition (SVD) to compress images. SVD decomposes an image matrix into three matrices, allowing the image to be approximated using only a few singular values. This achieves compression by removing redundant information. The document outlines the SVD approach for image compression and measures compression performance using compression ratio and mean squared error between the original and compressed images. It then discusses trends in image and video coding, including combining natural and synthetic content. Finally, it provides a block diagram of the proposed system and compares its compression performance to existing discrete cosine transform-
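The SVD compression step described here is straightforward to sketch: keep the k largest singular values and measure the storage ratio and reconstruction error:

```python
import numpy as np

def svd_compress(img, k):
    """Rank-k approximation of an image: keep the k largest singular values."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    approx = (U[:, :k] * s[:k]) @ Vt[:k]
    # Storage: m*n values originally vs k*(m + n + 1) for U_k, s_k, Vt_k.
    ratio = img.size / (k * (U.shape[0] + Vt.shape[1] + 1))
    mse = float(((img - approx) ** 2).mean())
    return approx, ratio, mse
```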
Deep Convolutional Neural Networks (CNNs) have achieved impressive performance in edge detection tasks, but their large number of parameters often leads to high memory and energy costs for implementation on lightweight devices. In this paper, we propose a new architecture, called Efficient Deep-learning Gradients Extraction Network (EDGE-Net), that integrates the advantages of Depthwise Separable Convolutions and deformable convolutional networks (DeformableConvNet) to address these inefficiencies. By carefully selecting proper components and utilizing network pruning techniques, our proposed EDGE-Net achieves state-of-the-art accuracy in edge detection while significantly reducing complexity. Experimental results on BSDS500 and NYUDv2 datasets demonstrate that EDGE-Net outperforms current lightweight edge detectors with only 500k parameters, without relying on pre-trained weights.
Efficient Image Compression Technique using Clustering and Random PermutationIJERA Editor
Multimedia data compression is challenging because of the possibility of data loss and the large amount of storage required. Minimizing storage and enabling proper transmission of such data requires compression. This dissertation proposes a block-based DWT image compression technique using a genetic algorithm and an HCC code matrix. The HCC code matrix is separated into two sets, redundant and non-redundant, which generate similar patterns of block coefficients. The similar block coefficients are generated by particle swarm optimization, which selects the optimal block of the DWT transform function. For experiments, standard images such as Lena, Barbara and cameraman were used, each at a resolution of 256×256. The images were sourced from Google.
International Journal of Engineering Research and DevelopmentIJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
COMPOSITE IMAGELET IDENTIFIER FOR ML PROCESSORSIRJET Journal
The document proposes a composite imagelet identifier technique for machine learning processors that uses seam carving to manipulate edges and pixelation in images for processing. It discusses using existing algorithms like SSIM and Dijkstra's algorithm to calculate image energy and identify optimal seam locations for manipulation. The technique is evaluated using test images in MATLAB and is presented as having potential applications in areas like forestry, animal husbandry and safety monitoring.
This document summarizes JPEG image compression techniques. It discusses how images are divided into blocks and transformed from the spatial domain to the frequency domain using the Discrete Cosine Transform (DCT). It then describes how the DCT coefficients are quantized and arranged in zigzag order before entropy encoding with Huffman coding. The goal of JPEG compression is to store image data using as little space as possible while maintaining enough visual detail. The techniques discussed aim to remove irrelevant and redundant image data through DCT, quantization, and entropy encoding.
Ankita Hundet et al., Int. Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, Vol. 4, Issue 4 (Version 7), April 2014, pp. 103-107
Survey for Image Representation Using Block Compressive Sensing for Compression Applications
Ankita Hundet, Dr. R.C. Jain, Vivek Sharma
Department of Information Technology, SATI Vidisha, India
Abstract
Compressive sensing theory has proved favourable for developing data compression techniques, although it was originally put forward to achieve dimension-reduced sampling and thereby save data sampling cost. In this paper, two sampling methods are explored for block CS (BCS) with discrete cosine transform (DCT) based image representation for compression applications: (a) coefficient random permutation (CRP) and (b) adaptive sampling (AS). The CRP method can balance the sparsity of the sampled vectors in the DCT domain of the image, thereby improving CS sampling efficiency. To attain AS, we design an adaptive measurement matrix for CS based on the energy distribution characteristics of the image in the DCT domain, which has a good impact in magnifying CS performance. Our experimental results reveal that the proposed methods are efficacious in reducing the dimension of the BCS-based image representation and/or improving the recovered image quality. The planned BCS-based image representation scheme could be an efficient alternative for applications of encrypted image compression and/or robust image compression.
Keywords—Block compressive sensing, coefficient random permutation, compressive sensing, random sampling, image representation.
I. INTRODUCTION
Recent years have seen significant interest in the paradigm of compressed sensing (CS), which permits, under certain conditions, signals to be sampled at sub-Nyquist rates via linear projection onto a random basis while still enabling exact reconstruction of the original signal. As applied to 2D images, though, CS faces some challenges, including a computationally expensive reconstruction process and the huge memory required to store the random sampling operator. In recent times, numerous fast algorithms have been developed for CS reconstruction, while the latter challenge was addressed in [1] using a block-based sampling operation. Additionally, projection-based Landweber iterations were proposed to achieve fast CS reconstruction while simultaneously imposing smoothing, with the goal of improving the reconstructed-image quality by eliminating blocking artifacts. Sparse representation and compressive sensing establish a rigorous mathematical framework for studying high-dimensional data and ways to discover its structure, giving rise to a large range of efficient algorithms. When transmitting data over insecure, bandwidth-limited channels, data compression and encryption are always needed. An encryption algorithm converts the data from a comprehensible to an incomprehensible form, thus making the encrypted data difficult to compress using any of the classical compression algorithms, which rely on intelligence embedded in the data. Hence, traditionally, encryption always follows compression. While such a scheme is suitable for most applications, some applications need encryption to be carried out before compression [2]. Consider, for example, an information owner and a network operator who do not trust each other. In such a case, to protect his content, the information owner encrypts his data before giving it to the network operator. Due to bandwidth limitations, the network operator is forced to compress this encrypted data stream [3]-[5].
II. PROBLEM FORMULATION
The idea of CS is to represent the signal by non-adaptive random projections so as to reduce the sampling rate, which is considered an advantage of CS. However, the main challenges of CS in practical applications include how to efficiently reduce the measurement rate while preserving good recovered image quality and decreasing the implementation complexity. To address these problems, many studies [6] have reported using prior knowledge of the sampled signal to enhance CS performance. Although most of them focused on decoder-based reconstruction optimization, encoder-based sampling optimization may be equally important for addressing these problems. In block-based sampling for fast CS of natural images, the original image is divided into small blocks and each block is sampled independently using the same measurement operator. The possibility of exploiting block CS is motivated by the great success of block DCT coding systems, which are widely used in the JPEG and MPEG standards [7]. The main advantages of our proposed system include: (a) the measurement operator can be easily stored and implemented through a randomly under-sampled filter bank, and block-based measurement is more advantageous for real-time applications as the encoder does not need to wait until the whole image is measured before sending the sampled data; (b) since each block is processed independently, the initial solution can be easily obtained and the reconstruction process can be substantially sped up. For natural images, our preliminary results show that block CS systems offer performance comparable to existing CS schemes with much lower implementation cost.
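As a concrete illustration of this block-based measurement, the following Python sketch divides an image into B×B blocks and samples every block with one shared random Gaussian operator. The block size, measurement rate, and choice of a Gaussian matrix are illustrative assumptions, not parameters fixed by the paper.

```python
import numpy as np

def block_cs_sample(image, block_size=8, rate=0.4, seed=0):
    """Sample each block of `image` with one shared Gaussian operator.

    Returns the measurement matrix Phi and a (num_blocks, m) array of
    per-block measurements. Block size and rate are illustrative choices.
    """
    h, w = image.shape
    assert h % block_size == 0 and w % block_size == 0
    n = block_size * block_size          # dimension of each vectorized block
    m = int(round(rate * n))             # measurements per block

    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, n)) / np.sqrt(m)  # shared operator

    measurements = []
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            x = image[i:i + block_size, j:j + block_size].reshape(n)
            measurements.append(phi @ x)  # y = Phi x for this block
    return phi, np.vstack(measurements)

# Example: sample a random 64x64 test "image" at measurement rate 0.4.
img = np.random.rand(64, 64)
phi, Y = block_cs_sample(img)
print(phi.shape, Y.shape)  # (26, 64) (64, 26)
```

Because the same small operator is reused for every block, only an m×n matrix needs to be stored, which is the storage advantage claimed above.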
III. IMAGE REPRESENTATION
After segmenting an image into regions, the objective is to represent and describe the resulting aggregate of segmented pixels in a form suitable for further computer processing. There are two choices for representing a region:
• External characteristics: its boundary.
• Internal characteristics: the pixels that comprise the region.
For example, a region may be represented by (a) its boundary, with the boundary described by features such as its length, (b) the orientation of the straight line joining its extreme points, and (c) the number of concavities in the boundary. An external representation is chosen when the primary focus is on shape characteristics. An internal representation is chosen when the primary focus is on reflectivity properties, such as color and texture. Segmentation techniques yield raw data in the form of pixels along a boundary or pixels contained in a region. Although these data are sometimes used directly to obtain descriptors (as in determining the texture of a region), standard practice is to use schemes that compact the data into representations that are considerably more useful in the computation of descriptors. One such basic scheme is to represent a boundary by a connected sequence of straight line segments of specified length and direction.
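A classical instance of such a scheme is the Freeman chain code, which encodes a boundary as a sequence of direction symbols between successive boundary pixels. The sketch below is a minimal illustration of this idea; the 8-direction encoding and the example boundary are our own assumptions, not taken from the paper.

```python
# Freeman 8-connected chain code: encode a boundary as direction symbols.
# Directions 0..7 correspond to (dx, dy) steps, counter-clockwise from East.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary):
    """Return the chain code of an ordered list of boundary pixels."""
    code = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
        code.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return code

# A small square boundary traversed counter-clockwise (illustrative data).
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
print(chain_code(square))  # [0, 0, 2, 2, 4, 4, 6, 6]
```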
IV. IMAGE REPRESENTATION TECHNIQUES
Over the last two decades, great improvements have been made in image and video compression techniques, driven by a growing demand for storage and transmission of visual information. Many image and video compression standards, such as JPEG, JPEG2000 and MPEG-4 AVC/H.264, have been proposed. However, these mainstream compression coding techniques may still not be very efficient in some special compression applications, for example robust image coding and encrypted image compression. Robust coding is very important in digital multimedia transmission over the internet and wireless networks, where the image codec needs not only excellent compression performance to reduce the data rate, but also high robustness to resist transmission errors due to channel noise and/or packet loss.
A) CRP Based Block Compressive Sensing
As mentioned above, the theoretical foundation of CS is that the sampled signal is sparse or compressible, and the minimal number of measurement dimensions required for perfect recovery is determined by the sparsity of the sampled signal. When an image is represented by a BCS scheme, it is inefficient to assign the same number of measurement dimensions to each sampled vector corresponding to a different image block, because image blocks with different spatial characteristics have significantly different sparsity. In general, image blocks located in smooth regions have stronger sparsity than those located in edge-rich or textured regions. The method of random bit permutation has been used in channel coding in communication systems for interleaver design; it can maximally scatter the burst errors generated during data channel transmission and thus plays a very important role in increasing the reliability of data transmission. Motivated by this fact, CRP in the DCT domain of the image is exploited in this section to equalize the sparsity of the sampled vectors in the CS encoding stage, thereby enhancing the measurement efficiency of BCS; a CRP-based BCS scheme follows. The proposed CRP can at the same time be used to encrypt the image in the transform domain at the information owner's side. In traditional BCS, the CS sampling does not exploit the inter-coefficient correlation within a sampled signal vector, so coefficient permutations across signal vectors do not adversely affect encoding performance. On the other hand, because the random permutations randomize and homogenize the distribution of coefficients, they make the sparsity of all the sampled vectors nearly identical, which improves reconstruction performance, since without CRP some less sparse vectors would likely not be reconstructed well [8].
Figure 1: The architecture of BCS scheme with CRP
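To make the CRP idea concrete, the following sketch applies a keyed random permutation across the block-DCT coefficients of an image before block-wise CS sampling, so that sparse and non-sparse coefficients are spread evenly over all sampled vectors; the permutation key doubles as the encryption secret mentioned above. The explicit DCT matrix and block handling are our own illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (n x n)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def crp_block_vectors(image, block_size=8, key=42):
    """Block DCT, then one keyed random permutation over ALL coefficients.

    The permuted coefficients are regrouped into block-sized vectors whose
    sparsity is roughly equalized; `key` also serves as the shared secret.
    """
    h, w = image.shape
    C = dct_matrix(block_size)
    coeffs = []
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            blk = image[i:i + block_size, j:j + block_size]
            coeffs.append((C @ blk @ C.T).ravel())   # 2D DCT of the block
    flat = np.concatenate(coeffs)

    perm = np.random.default_rng(key).permutation(flat.size)
    permuted = flat[perm]                            # scatter coefficients
    n = block_size * block_size
    return permuted.reshape(-1, n), perm             # vectors ready for CS

# The decoder inverts the permutation with the shared key before recovery:
#   flat = np.empty_like(permuted_flat); flat[perm] = permuted_flat
```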
B) AS Based Block Compressive Sensing
In the conventional CS scheme, a fixed measurement matrix is used to sample the signal non-adaptively. This scheme is simple but not highly efficient, because it samples all signals identically regardless of their features, and it extracts all information components of a signal indiscriminately, even though they may have different importance to the recovered signal quality [9]. In this section, a novel method of developing an adaptive measurement matrix is proposed to achieve adaptive sampling (AS) in our BCS in the DCT domain. As is well known, image signals are band-limited, and their energy is mostly distributed in the low-frequency components. In addition, the human eye has a certain masking effect on some parts of the frequency spectrum of image signals, acting like a low-pass filter that makes it more sensitive to low-frequency components than to high-frequency ones. So the low-frequency components, which occupy a large part of the energy of image signals, are more important to image quality than the high-frequency components, which occupy a smaller part. Considering these facts, we propose to develop an adaptive measurement matrix in the BCS framework based on the energy distribution characteristic of the sampled image signal in the DCT domain, to enhance the performance of BCS. The key idea is to adaptively extract more information from the important frequency components than from the relatively unimportant ones in the sampling stage, so as to reduce recovery error. This is implemented by adaptively scaling the coefficients of the measurement matrix according to the sampled image's characteristics, i.e. unequally measuring the different frequency components based on their importance to image quality.
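One simple way to realize this scaling, under our own assumption about the weighting rule (square-root of the relative per-frequency energy), is to scale each column of a random measurement matrix by the DCT-domain energy of the corresponding frequency component, so that energetic low-frequency coefficients are measured with larger weight:

```python
import numpy as np

def adaptive_measurement_matrix(dct_vectors, m, seed=0):
    """Scale the columns of a Gaussian operator by per-frequency energy.

    `dct_vectors` is a (num_blocks, n) array of block-DCT coefficient
    vectors; frequencies carrying more energy get proportionally larger
    measurement weight. The square-root energy weighting is an assumption.
    """
    num_blocks, n = dct_vectors.shape
    energy = np.mean(dct_vectors ** 2, axis=0)       # energy per frequency
    weights = np.sqrt(energy / energy.sum())         # emphasis profile
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, n)) / np.sqrt(m)
    return phi * weights[np.newaxis, :]              # column-wise scaling

# Usage with block-DCT vectors such as those from the CRP sketch above:
#   phi_a = adaptive_measurement_matrix(vectors, m=26)
#   Y = vectors @ phi_a.T     # adaptive measurements, one row per block
```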
C) Compressed Sensing
Compressed sensing is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. Taking advantage of a signal's sparseness or compressibility in a particular domain, we can design compressed data acquisition protocols that perform as if it were possible to directly acquire just the important information about the signal or image, allowing the entire signal to be determined from relatively few measurements; in effect, we avoid acquiring the part of the data that would eventually just be "thrown away" by lossy compression [10]. Moreover, the protocols are non-adaptive and parallelizable; they do not require knowledge of the signal/image to be acquired in advance, other than the knowledge that the data will be compressible, and they do not attempt any "understanding" of the underlying object to guide an active or adaptive sensing strategy. The measurements made in the compressed sensing protocol are holographic: they are not simple pixel samples and must be processed nonlinearly. In specific applications, this principle might enable dramatically reduced measurement time, dramatically reduced sampling rates, or reduced use of analog-to-digital converter resources [10].
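The nonlinear processing mentioned here is the recovery step: the decoder searches for the sparsest signal consistent with the measurements y = Φx. As a self-contained illustration (not the specific solver used in this work, which relies on basis pursuit style minimization), the sketch below recovers a sparse vector with orthogonal matching pursuit:

```python
import numpy as np

def omp(phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = phi @ x."""
    m, n = phi.shape
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(phi.T @ residual))))
        sub = phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# Demo: a 5-sparse length-64 signal from 26 random measurements.
rng = np.random.default_rng(1)
x_true = np.zeros(64)
x_true[rng.choice(64, 5, replace=False)] = rng.standard_normal(5)
phi = rng.standard_normal((26, 64)) / np.sqrt(26)
x_hat = omp(phi, phi @ x_true, k=5)
print(np.allclose(x_hat, x_true, atol=1e-8))  # exact recovery, w.h.p.
```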
Figure 2: Overall joint source-channel coding system block diagram [11]
Recently, an interactive and adaptive joint source-channel coding algorithm was developed in which the centre and size of the image are specified by the user after receiving the low-resolution background image.
D) Random Permutations Based Block Compressive Sensing
Compressive sensing (CS) is a novel theoretical framework for signal acquisition that has attracted wide attention in many application fields. We consider a new block CS scheme for image compression applications in which a method of coefficient random permutation (CRP) in the transform domain is exploited for optimal CS sampling, considering the fact that image blocks with different spatial characteristics have different sparsity or compressibility [12]. The proposed random permutation technique makes the sparsity of all the sampled coefficient vectors more even, so that approximately the same number of measurements is required for good restoration of each. Results show that the proposed scheme can efficiently enhance CS performance, increasing reconstructed image quality or reducing the measurement ratio.
Figure 3: Visual quality comparison at a measurement rate of 0.4 and a block size of 8×8. (a) Original image; (b) image reconstructed by the CRP-based CS method.
A novel image compressive sensing scheme for compression applications is proposed in this paper. By employing the coefficient random permutation technique in the sampling process, the sparsity of all sampled vectors can be balanced efficiently, which considerably increases the utilization of sensing resources. The simulation results illustrate that the proposed sampling scheme enhances the performance of compressive sensing efficiently, in both objective and subjective assessments, with little increase in complexity.
E) Reweighted Compressive Sampling for Image Compression
Shannon sampling theory tells us that to reconstruct a band-limited signal without distortion, we must sample the original signal at a rate at least twice its highest frequency. However, the newly emerged theory of Compressive Sampling (CS), also known as compressive sensing, proves that this is not always necessary: as long as the signal is sparse or compressible, we can exactly recover the original signal from only a few measurements. In other words, we can achieve data sampling and compression at the same time, enabling highly efficient data compression techniques just as the performance of existing ones seems to be reaching a bottleneck. Some initial exploration of applying CS theory to data compression has been made; however, the performance is limited. The main problem preventing the CS framework from being applied in real data compression systems is the drop in efficiency for complex signals, which are usually hard to represent perfectly sparsely. One major reason for this efficiency drop is that the conventional CS framework samples data randomly, without discriminating between different components; in other words, it does not take the characteristics of the signal into account. This may not be the best way to sample complex signals such as images. Many efforts have been made to improve the performance of CS signal recovery algorithms by adapting the behavior of the decoder to the signal characteristics. However, these methods are usually based on blind estimation of the signal magnitudes and an iterative refinement process at the decoding side. For instance, the authors of one such scheme propose an iterative reweighting method to enhance the sparsity of the reconstruction results: the solution of an unweighted minimization reconstruction is used as the initial guess, and the weighting coefficients are then refined iteratively until convergence is reached. However, the complexity of these iterative schemes is too high for the methods to be practical, and they are often guaranteed only to converge to a local (not necessarily global) optimum. A reweighted compressive sampling scheme for CS-based image compression instead introduces weighting into the encoding procedure. The method works on a block-based processing mechanism: the image is first divided into small blocks, then the weighting coefficients are determined by taking advantage of the statistical characteristics of natural images and sent together with the CS measurements to help the decoder achieve better reconstruction. The main advantage of this method is that it can fully exploit image characteristics to achieve considerable performance gain without notably increasing the complexity of the system structure or the computation procedures [13].
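A minimal sketch of this encoder-side weighting follows, under our own assumption that the weights follow a simple 1/(1+frequency) energy-decay model standing in for the statistics of natural images, and that the weights are transmitted alongside the measurements as the text describes:

```python
import numpy as np

def encode_reweighted(block_dct, phi):
    """Encoder-side reweighted CS: weight DCT coefficients, then measure.

    `block_dct` is one vectorized block of DCT coefficients; the weight
    model (energy decaying with frequency index) is an illustrative
    assumption, not the paper's exact statistical model.
    """
    n = block_dct.size
    weights = 1.0 / (1.0 + np.arange(n))      # assumed natural-image decay
    y = phi @ (weights * block_dct)           # measure the weighted signal
    return y, weights                         # weights travel with y

def decode_reweighted(y, weights, phi, solver):
    """Decoder undoes the weighting after sparse recovery."""
    x_weighted = solver(phi, y)               # e.g. the OMP sketch above
    return x_weighted / weights               # remove encoder weighting
```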
F) Robust Image Compression Based on Compressive Sensing
Existing image compression methods (e.g., JPEG2000) are vulnerable to bit loss, which is generally tackled by channel coding applied afterwards. However, source coding and channel coding have contradictory requirements. The problem can instead be dealt with under an alternative paradigm, and a novel compressive sensing (CS) based compression scheme is therefore proposed [14]. The discrete wavelet transform (DWT) is applied for sparse representation and, based on the properties of the 2-D DWT, a fast technique for capturing CS measurements is presented. Unlike the unequally important discrete wavelet coefficients, the resulting CS measurements each hold nearly the same amount of information, so bit loss has minimal effect. At the decoder side, one can simply reconstruct the image via minimization. Results show that the proposed CS-based image codec, without resorting to error protection, is more robust than existing CS techniques and relevant joint source-channel coding (JSCC) schemes.
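To illustrate how measurements can be taken from wavelet coefficients, the sketch below computes a single-level 2-D Haar DWT (a minimal stand-in for the transform used in [14]) and then mixes the detail subbands through a random operator, so that every measurement carries roughly equal information under bit loss; the subband handling and measurement rate are illustrative assumptions.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT; returns (LL, (LH, HL, HH)) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)   # lowpass over row pairs
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)   # highpass over row pairs
    LL = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    LH = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    HL = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    HH = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return LL, (LH, HL, HH)

# Keep LL (it carries most of the energy) and spread the detail subbands
# through a random operator so each measurement is equally important.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
LL, details = haar_dwt2(img)
detail_vec = np.concatenate([d.ravel() for d in details])
m = detail_vec.size // 2                              # illustrative rate
phi = rng.standard_normal((m, detail_vec.size)) / np.sqrt(m)
measurements = phi @ detail_vec                       # equally weighted data
```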
V. CONCLUSION
A progressive transmission system based on compressive sensing, using the recently developed HALS algorithm with nonlinear thresholding, has been introduced in this paper. The main contributions of this effort are (a) the selection and coding of a small number of samples (sampled below the Nyquist rate) and (b) the introduction of an adaptive thresholding technique for the selection and reconstruction of those samples. Compression of an encrypted image is not possible using any of the classical compression techniques. We proposed a system and showed that lossy compression of encrypted image data is indeed possible using compressive sensing techniques. The basis pursuit algorithm was appropriately modified to enable joint decompression and decryption. Simulation results were provided to demonstrate the compression performance of the proposed method.
REFERENCES
[1] S. Mun, J.E. Fowler, "Block compressed sensing of images using directional transforms," in Proc. Int. Conf. Image Processing (ICIP), 2009, pp. 3021-3024.
[2] M. Johnson, P. Ishwar, V. Prabhakaran, D. Schonberg and K. Ramachandran, "On compressing encrypted data," IEEE Trans. Signal Processing, vol. 52, pp. 2992-3006, Oct. 2004.
[3] D. Schonberg, S. Draper and K. Ramachandran, "On compression of encrypted images," in Proc. Int. Conf. Image Processing, Atlanta, GA, pp. 269-272, Oct. 2006.
[4] A. A. Kumar and A. Makur, "Distributed source coding based encryption and lossless compression of gray scale and color images," in Proc. IEEE 10th Workshop on Multimedia Signal Processing, Cairns, Australia, pp. 760-764, Oct. 2008.
[5] A. Mitra, Y. V. S. Rao and S. R. M. Prasanna, "A new image encryption approach using combinational permutation techniques," International J. of Comp. Sc., vol. 1, pp. 127-131, May 2006.
[6] L. Gan, "Block compressed sensing of natural images," in Proc. Int. Conf. Digital Signal Processing, 2007, pp. 403-406.
[7] R. Baraniuk, "Compressive sensing," IEEE Signal Processing Magazine, vol. 24, pp. 118-120, July 2007.
[8] L. Gan, "Block compressed sensing of natural images," in Proc. Int. Conf. Digital Signal Processing, 2007, pp. 403-406.
[9] E. J. Candes and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, pp. 21-30, Mar. 2008.
[10] D.L. Donoho, "Compressed sensing," IEEE Trans. Inform. Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[11] S. Sanei, A. H. Phan, J.-L. Lo, V. Abolghasemi and A. Cichocki, "A compressive sensing approach for progressive transmission of images," in Proc. Int. Conf. DSP 2009, pp. 1-5, 2009.
[12] Z. Gao, C. Xiong, C. Zhou, H. Wang, "Compressive sampling with coefficients random permutations for image compression," in Proc. Int. Conf. on Multimedia and Signal Processing, 2011, pp. 321-324.
[13] Y. Yang, O.C. Au, L. Fang, X. Wen, W. Tang, "Reweighted compressive sampling for image compression," in Proc. Picture Coding Symposium (PCS), 2009, pp. 89-92.
[14] C. Deng, W. Lin, B.-S. Lee, C.T. Lau, "Robust image compression based on compressive sensing," in Proc. Int. Conf. Multimedia & Expo (ICME), 2010, pp. 462-467.
[15] R.G. Baraniuk, V. Cevher, M. Duarte, C. Hegde, "Model-based compressive sensing," IEEE Trans. Inf. Theory, vol. 56, no. 4, pp. 1982-2001, 2010.