Synthetic aperture radar (SAR) data is a significant information resource for a wide variety of researchers, so there is strong interest in encoding and decoding algorithms that achieve higher compression ratios while keeping image quality at an acceptable level. In this work, the results of several wavelet-based and segmentation-based wavelet image compression methods are assessed through controlled experiments on synthetic SAR images. The effects of different wavelet functions and numbers of decomposition levels are examined in order to find the optimal family for SAR images. For segmentation-based wavelet compression, the coiflet family proves optimal for both the low-frequency and the high-frequency components. The results presented here are a useful reference for SAR application developers choosing wavelet families, and they show that the wavelet transform is a fast, robust, and reliable tool for SAR image compression. Numerical results confirm the effectiveness of this approach.
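The transform-plus-threshold pipeline the abstract describes can be sketched with a one-level Haar transform (the simplest wavelet; real SAR codecs use deeper decompositions and families such as the coiflets named above). The signal values and threshold are made up for illustration:

```python
# Minimal sketch of wavelet-based compression: one-level Haar transform,
# hard thresholding of detail coefficients, then reconstruction.
# Illustrative only; not the paper's actual codec.

def haar_forward(x):
    """One level of the Haar transform: pairwise averages and differences."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction from averages and differences."""
    x = []
    for a, d in zip(approx, detail):
        x.extend([a + d, a - d])
    return x

def compress(x, threshold):
    """Zero out small detail coefficients, then reconstruct."""
    approx, detail = haar_forward(x)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_inverse(approx, detail)

signal = [10.0, 12.0, 9.0, 9.5, 40.0, 8.0, 10.0, 10.5]
print(compress(signal, threshold=1.0))
```

With threshold 0 the reconstruction is exact; raising the threshold discards small details (compression) while the large edge at 40.0 survives.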
DESPECKLING OF SAR IMAGES BY OPTIMIZING AVERAGED POWER SPECTRAL VALUE IN CURVELET DOMAIN (ijistjournal)
The document describes a novel algorithm for despeckling synthetic aperture radar (SAR) images using particle swarm optimization (PSO) in the curvelet domain. The algorithm first identifies homogeneous regions in the speckled image using variance calculations. It then uses PSO to optimize the thresholding of curvelet coefficients, with the objective of minimizing the average power spectral value. This provides an optimized threshold to apply curvelet-based despeckling. The proposed method is tested on standard images and shown to outperform conventional filters like median and Lee filters in reducing speckle noise.
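The search component of the algorithm above can be illustrated with a tiny particle swarm optimizer. The objective here is a stand-in quadratic, not the paper's averaged power spectral value, and the PSO parameters are generic textbook defaults:

```python
import random

# Hedged sketch: a minimal PSO searching for a despeckling threshold that
# minimizes an objective function. The real objective (averaged power
# spectral value of curvelet coefficients) is replaced by a quadratic
# with its minimum at t = 3.0 so the search mechanics are visible.

def objective(t):
    return (t - 3.0) ** 2

def pso(obj, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)                        # deterministic for the demo
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = pos[:]                         # per-particle best positions
    gbest = min(best, key=obj)            # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (best[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if obj(pos[i]) < obj(best[i]):
                best[i] = pos[i]
        gbest = min(best, key=obj)
    return gbest

threshold = pso(objective, 0.0, 10.0)
print(round(threshold, 3))
```

In the paper's setting, `objective` would evaluate the power spectrum of the curvelet-despeckled homogeneous regions for a candidate threshold.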
Boosting CED Using Robust Orientation Estimation (ijma)
In this paper, Coherence Enhancement Diffusion (CED) is boosted by feeding it an external orientation obtained from a new, robust orientation estimator. In CED, proper scale selection is very important, since the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which the orientation is pre-calculated using local and integration scales. Experiments show that the proposed scheme performs much better in noisy environments than traditional Coherence Enhancement Diffusion.
This paper presents an approach for image restoration in the presence of blur and noise. The image is divided into independent regions modeled with a Gaussian prior. Wavelet-based methods are used for image denoising, while classical Wiener filtering is used for deblurring. The algorithm finds the maximum a posteriori estimate at the intersection of convex sets generated by Wiener filtering. It provides efficient image restoration without sacrificing the simplicity of filtering, and generates a better restored image compared to previous methods.
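The Wiener step in the restoration scheme above rests on a simple per-coefficient shrinkage rule: each observed coefficient is scaled by S/(S + N), the ratio of estimated signal power to total power. A hedged sketch with made-up power estimates:

```python
# Sketch of the classical Wiener shrinkage rule used in filtering-based
# restoration: gain = S / (S + N), applied per coefficient. The power
# values here are illustrative assumptions, not estimates from real data.

def wiener_gain(signal_power, noise_power):
    return signal_power / (signal_power + noise_power)

def wiener_filter(coeffs, signal_power, noise_power):
    g = wiener_gain(signal_power, noise_power)
    return [g * y for y in coeffs]

noisy = [4.0, -2.0, 0.5]
print(wiener_filter(noisy, signal_power=9.0, noise_power=1.0))
```

When the signal dominates (S >> N) the gain approaches 1 and the data passes through; when noise dominates the gain approaches 0, which is why Wiener filtering trades blur removal against noise amplification.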
Medical image fusion using curvelet transform (IAEME Publication)
The document summarizes a research paper on medical image fusion using curvelet transform. It discusses how the curvelet transform is better than wavelet transform at representing edges in images. It then presents an image fusion algorithm that uses both discrete wavelet transform and fast discrete curvelet transform. The algorithm first applies DWT, then applies FDCT wrapping to the frequency bands obtained from DWT. This extracts information from source images to create a fused image with improved clarity and ability to see both soft tissues and hard bones. Experimental results showed the method was effective at fusing images like MRI and CT scans.
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMS (csandit)
The document presents a study comparing fingerprint image compression using wavelets and wave atoms transforms. It finds that wave atoms transforms provide better performance than current wavelet-based standards like WSQ. Specifically:
- Wave atoms achieved higher PSNR values and compression ratios than wavelets when reconstructing images from a reduced number of coefficients.
- An algorithm was proposed using wave atom decomposition, non-uniform quantization, and entropy coding that achieved a compression ratio of 18 with a PSNR of 35.04 dB, outperforming the WSQ standard.
- Minutiae detection on original and reconstructed images showed that wave atoms better preserved local fingerprint structures. Therefore, wave atoms are concluded to be more suitable than wavelets for fingerprint compression.
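The PSNR figure quoted above (35.04 dB) is the standard log-scale distortion measure for 8-bit images, 10·log10(255²/MSE). For quick reference (the sample arrays below are made up):

```python
import math

# PSNR between an original and a reconstructed 8-bit image,
# flattened to plain lists for brevity.

def psnr(original, reconstructed, peak=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

orig = [100, 50, 200, 25]
recon = [110, 40, 210, 15]           # every pixel off by 10, so MSE = 100
print(round(psnr(orig, recon), 2))
```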
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document provides an overview of wavelet-based image fusion techniques. It discusses wavelet transform theory, including continuous and discrete wavelet transforms. For discrete wavelet transforms, it describes decimated, undecimated, and non-separated approaches. It explains how wavelet transforms can extract detail information from images at different resolutions, which can then be combined to create a fused image containing the best characteristics of the original images. While providing improved results over traditional fusion methods, wavelet-based approaches still have limitations such as artifact introduction that researchers continue working to address.
Introduction to Wavelet Transform and Two-Stage Image Denoising Using Principal Component Analysis (ijsrd.com)
Over the past two decades, a variety of techniques has been developed to support image processing applications, including medical, satellite, space, transmission and storage, radar, and sonar applications. Noise degrades all of them, so it is necessary to remove noise from images, and many methods exist for doing so. The wavelet transform (WT) has proved effective for noise removal, but it has some problems that a PCA-based method can overcome. This paper presents an efficient image de-noising scheme using principal component analysis (PCA) with local pixel grouping (LPG), which better preserves local image structures. In this method, a pixel and its nearest neighbors are modeled as a vector variable whose training samples are selected from the local window using block-matching-based LPG. In image de-noising, a compromise must be found between noise reduction and the preservation of significant image details. PCA is a statistical technique for simplifying a dataset by reducing it to lower dimensions, and it is commonly used for data reduction in statistical pattern recognition and signal processing. The procedure is iterated a second time to further improve de-noising performance, with the noise level adaptively adjusted in the second stage.
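The core PCA step can be sketched for 2-D training vectors: compute the covariance matrix of the samples and its principal eigenvector, along which most of the signal energy lies. The point set below is a made-up stand-in for the vectors LPG would gather:

```python
import math

# Hedged sketch of PCA on 2-D training vectors: closed-form principal
# eigenvector of the 2x2 covariance matrix. Denoising schemes like
# LPG-PCA keep energy along this direction and shrink the rest.

def principal_direction(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]], closed form.
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    if sxy != 0:
        v = (sxy, lam - sxx)         # eigenvector for lam
    elif sxx >= syy:
        v = (1.0, 0.0)               # diagonal covariance: pick larger axis
    else:
        v = (0.0, 1.0)
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm)

# Points scattered along y ~ x: the principal axis should be near (1,1)/sqrt(2).
pts = [(0, 0), (1, 1.1), (2, 1.9), (3, 3.05), (4, 4.0)]
print(principal_direction(pts))
```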
This paper presents a trifocal Rotman lens design approach. The effects of focal ratio and element spacing on the performance of the Rotman lens are described. A three-beam prototype feeding a 4-element antenna array working in L-band has been simulated using RLD v1.7 software. Simulated results show that the lens has a return loss of -12.4 dB at 1.8 GHz. Beam-to-array-port phase error variation with changes in focal ratio and element spacing has also been investigated.
This document discusses a study that used MATLAB curve fitting to determine the optical constants and thickness of tin oxide (SnO2) thin films. A transmission equation was customized in MATLAB and used to fit experimental transmittance data from SnO2 films. The curve fitting determined values for six parameters: refractive index, absorption coefficient, thickness, and constants related to dispersion models. The fitted curve closely matched the measured transmission curve, indicating the optical constants and thickness were accurately determined through this MATLAB curve fitting method.
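The curve-fitting idea above, minimizing the squared residual between a parametric transmission model and measured data, can be sketched for a single parameter. The toy interference-style model and all numbers below are assumptions for illustration; the actual study fits six parameters in MATLAB:

```python
import math

# Hedged sketch of least-squares curve fitting: recover a film "thickness"
# d by grid search over the sum of squared residuals. The model below is
# a toy interference-style transmittance, not the paper's equation.

def model(wavelength, d):
    return 0.5 + 0.4 * math.cos(2 * math.pi * d / wavelength)

def fit_thickness(wavelengths, measured, candidates):
    def sse(d):
        return sum((model(w, d) - t) ** 2 for w, t in zip(wavelengths, measured))
    return min(candidates, key=sse)

true_d = 300.0
wl = [400 + 10 * i for i in range(30)]          # 400 .. 690 nm
data = [model(w, true_d) for w in wl]           # noise-free synthetic data
grid = [200 + 0.5 * i for i in range(401)]      # candidate thicknesses
best = fit_thickness(wl, data, grid)
print(best)
```

Real fits replace the grid search with a nonlinear least-squares solver, but the objective, the sum of squared residuals over the measured spectrum, is the same.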
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral Images (IDES Editor)
Hyperspectral images can be efficiently compressed with a linear predictive model, such as the one used in the SLSQ algorithm. In this paper we exploit this predictive model on AVIRIS images by identifying, through an off-line approach, a common subset of bands that are not spectrally related to any other bands. These bands are not useful as prediction references for the SLSQ 3-D predictive model, so they must be encoded with other prediction strategies that consider only spatial correlation. We obtained this subset by clustering the AVIRIS bands via the clustering-by-compression approach. The main result of this paper is the list of bands, unrelated to the others, for AVIRIS images. The clustering trees obtained for AVIRIS, and the relationships among bands that they depict, are also an interesting starting point for future research.
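Clustering by compression rests on the normalized compression distance (NCD): two bands are considered related when their concatenation compresses almost as well as the better of the two alone. A sketch with `zlib` on byte strings standing in for band data (the sample "bands" are made up):

```python
import random
import zlib

# Normalized compression distance: NCD(x, y) =
#   (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
# where C(.) is compressed length. Values near 0 mean "related",
# values near 1 mean "unrelated".

def ncd(x, y):
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

band_a = bytes(range(256)) * 40                               # structured "band"
band_b = bytes(range(256)) * 40                               # identical band
rng = random.Random(0)
band_c = bytes(rng.getrandbits(8) for _ in range(10240))      # unrelated noise

print(ncd(band_a, band_b), ncd(band_a, band_c))
```

Hierarchical clustering on the pairwise NCD matrix then yields the band trees the paper describes; bands that land far from every cluster are the ones needing a spatial-only predictor.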
Sub-windowed laser speckle image velocimetry by fast Fourier transform technique
Abstract
In this work, laser speckle velocimetry, a unique optical method for measuring the velocity of fluid flow, is described. A laser sheet is developed and illuminated on microscopic seeded particles to produce a speckle pattern at the recording plane. Double-frame, single-exposure speckle images are captured such that the second speckle image is shifted exactly in a known direction. The auto-correlation method suffers from an ambiguity in the direction of flow; to resolve this, the spatial shift of the second image is predetermined. Cross-correlation of the sub-interrogation areas is obtained by the fast Fourier transform technique, and four sub-windows are processed to obtain precise velocity information with vector map analysis.
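The displacement estimate comes from the peak of the cross-correlation between interrogation windows of the two exposures. A direct (non-FFT) 1-D sketch with a made-up speckle profile; real PIV/LSV code does this in 2-D via FFTs:

```python
# Hedged sketch: find the displacement between two 1-D "speckle" profiles
# as the lag that maximizes their cross-correlation. Direct O(n*m)
# evaluation for clarity; production code uses the FFT.

def best_shift(frame1, frame2, max_shift):
    """Return the shift of frame2 relative to frame1 with maximal correlation."""
    def corr(shift):
        total = 0.0
        for i in range(len(frame1)):
            j = i + shift
            if 0 <= j < len(frame2):
                total += frame1[i] * frame2[j]
        return total
    return max(range(-max_shift, max_shift + 1), key=corr)

speckle = [0, 1, 5, 9, 5, 1, 0, 0, 2, 7, 2, 0]
shifted = [0, 0, 0] + speckle[:-3]       # second exposure moved by +3 pixels
print(best_shift(speckle, shifted, max_shift=5))
```

Because cross-correlation (unlike auto-correlation of a double-exposed image) distinguishes positive from negative lags, the sign of the recovered shift gives the flow direction, which is the ambiguity the premeditated shift resolves.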
This document summarizes research on reducing acoustic radiation from vibrating composite panels by attaching point masses. Analytical models are developed to predict the natural frequencies, mode shapes and response of rectangular composite panels with attached point masses. The receptance method is used to calculate the vibration characteristics of the combined panel-mass system from the individual components. By adjusting the size, number and location of point masses, the directivity of sound radiated by the panel can be controlled. The models show good agreement with numerical simulations and provide physical insight into achieving changes in sound directivity.
Detection and Classification in Hyperspectral Images using Rate Distortion an... (Pioneer Natural Resources)
This document summarizes an experiment that compares two methods of bit allocation for compressing hyperspectral imagery using JPEG2000: 1) the traditional high bit rate quantizer approach and 2) the rate distortion optimal (RDO) approach. The experiment shows that both methods perform well at relatively low bit rates, achieving over 96% classification accuracy. However, at very low bit rates, the RDO approach outperforms the high bit rate quantizer approach, achieving 90% accuracy at 0.0375 bpppb compared to less than 90% for the high bit rate method. The RDO approach also achieves lower mean squared error than the high bit rate quantizer approach.
Modified Adaptive Lifting Structure of CDF 9/7 Wavelet with SPIHT for Lossy I... (idescitation)
We present a modified structure of the 2-D CDF 9/7 wavelet transform, based on adaptive lifting, for image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, adaptive lifting performs lifting-based prediction in local windows along the direction of highest pixel correlation. It therefore adapts far better to the orientation features of local windows in the image. The predicting and updating signals of adaptive lifting can be derived even at fractional-pixel precision to achieve high resolution, while still maintaining perfect reconstruction. To enhance performance, the adaptive modified structure of the 2-D CDF 9/7 transform is coupled with the SPIHT coding algorithm, which addresses drawbacks of the plain wavelet transform. Experimental results show that the proposed modified coding scheme outperforms JPEG 2000 in both PSNR and visual quality, with improvements of up to 6.0 dB over the existing structure on images with rich orientation features.
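The predict/update structure that lifting refers to is easiest to see in the LeGall 5/3 integer wavelet (the CDF 9/7 used above needs four lifting steps plus a scaling; 5/3 shows the same mechanics in two). A hedged non-adaptive sketch with made-up sample values:

```python
# Hedged sketch of the lifting scheme: LeGall 5/3 integer wavelet as a
# predict step (details) and an update step (approximation), with perfect
# reconstruction. Boundaries are handled by clamping indices.

def lift_53_forward(x):
    even, odd = x[0::2], x[1::2]
    # Predict: detail = odd sample minus the average of its even neighbours.
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
         for i in range(len(odd))]
    # Update: approximation = even sample plus a correction from details.
    s = [even[i] + (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
         for i in range(len(even))]
    return s, d

def lift_53_inverse(s, d):
    # Undo update, then undo predict: exact, since the same rounded
    # quantities are subtracted that were added.
    even = [s[i] - (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
            for i in range(len(s))]
    odd = [d[i] + (even[i] + even[min(i + 1, len(even) - 1)]) // 2
           for i in range(len(d))]
    x = []
    for i in range(len(even)):
        x.append(even[i])
        if i < len(odd):
            x.append(odd[i])
    return x

sig = [12, 14, 13, 90, 88, 87, 20, 21]
s, d = lift_53_forward(sig)
print(lift_53_inverse(s, d) == sig)
```

The adaptive variant in the paper chooses the prediction direction per local window instead of always predicting from horizontal neighbours; the invertibility argument is the same.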
Investigation of repeated blasts at Aitik mine using waveform cross correlation (Ivan Kitov)
We present results of signal detection from repeated events at the Aitik and Kiruna mines in Sweden, based on waveform cross-correlation. Several advanced methods based on tensor singular value decomposition are applied to waveforms measured at the seismic array ARCES, which consists of three-component sensors.
ADAPTIVE, SCALABLE, TRANSFORM-DOMAIN GLOBAL MOTION ESTIMATION FOR VIDEO STABILIZATION (cscpconf)
Video stabilization, which is important for better analysis and user experience, is typically done through global motion estimation (GME) and compensation. GME can be done in the image domain using many techniques, or in the transform domain using the well-known phase correlation methods, which relate motion to phase shift in the spectrum. While image-domain methods are generally slower (due to dense vector field computations), they can perform global as well as local motion estimation. Transform-domain methods cannot normally handle local motion, but they are faster, more accurate on homogeneous images, and resilient even to rapid illumination changes and large motion. However, both approaches can become very time-consuming when more accuracy and smoothness are needed, because of the nature of the tradeoff. We show here that wavelet transforms can be used in a novel way to achieve very smooth stabilization along with a significant speedup of this Fourier-domain computation, without sacrificing accuracy. We do this by adaptively selecting and combining motion computed on a specific pair of sub-bands using the wavelet interpolation capability. Our approach yields a smooth, scalable, fast, and adaptive algorithm (based on the time budget and recent motion history) with significantly better accuracy than a single-level wavelet-decomposition-based approach.
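The phase-correlation principle that transform-domain GME relies on can be shown in 1-D: a circular shift appears as a linear phase in the spectrum, and the inverse transform of the normalized cross-spectrum peaks at the shift. A naive O(n²) DFT keeps the sketch dependency-free; production code uses 2-D FFTs:

```python
import cmath

# Hedged 1-D sketch of phase correlation for global shift estimation.

def dft(x, sign=-1):
    """Naive discrete Fourier transform; sign=+1 gives the (unscaled) inverse."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def phase_correlation_shift(a, b):
    fa, fb = dft(a), dft(b)
    # Normalized cross-spectrum: keep only the phase difference.
    cross = [p * q.conjugate() / max(abs(p * q.conjugate()), 1e-12)
             for p, q in zip(fb, fa)]
    corr = dft(cross, sign=+1)           # inverse transform (unscaled)
    return max(range(len(corr)), key=lambda i: corr[i].real)

frame = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
moved = frame[-2:] + frame[:-2]          # circular shift by +2 samples
print(phase_correlation_shift(frame, moved))
```

Because only the phase is kept, the estimate is insensitive to global brightness changes, which is exactly the illumination-robustness the abstract attributes to transform-domain methods.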
This document introduces the Continuous Crooklet Transform (CCrT) and Discrete Crooklet Transform (DCrT) as novel transforms that aim to overcome limitations of wavelet and curvelet transforms. The CCrT is defined as the convolution of an input signal with scaled and translated versions of a principal "crooklet" function. The DCrT decomposes a signal using a filter bank of low-pass and high-pass filters in a manner similar to curvelet transforms. The Crooklet Transform fits image properties better than wavelets and has less computational complexity than curvelets. Potential applications of the Crooklet Transform include image processing, feature extraction, image compression, and medical imaging.
The document discusses optimizing rate allocation for compressing hyperspectral images using JPEG2000. It proposes using the discrete wavelet transform instead of the Karhunen-Loeve transform for decorrelation due to lower computational complexity. A mixed model is used for rate distortion optimal bit allocation instead of experimentally obtained rate distortion curves. Comparisons show the mixed model approach results in lower mean squared error than traditional bit allocation schemes, while having lower implementation complexity than prior methods.
This document presents an image fusion algorithm based on wavelet transform and second generation curvelet transform. It begins with an introduction to image fusion and discusses limitations of wavelet transforms. It then introduces the second generation curvelet transform, which represents edges and singularities better than wavelets. The proposed algorithm uses both wavelet and curvelet transforms to decompose and fuse images. It applies the algorithm to experiments fusing multi-focus images and complementary CT and MRI images. Results show the curvelet-based fusion produces clearer images that better preserve useful information from the source images.
Lossless image compression using new biorthogonal wavelets (sipij)
Even though a large number of wavelets exist, new wavelets are needed for specific applications. One of the basic wavelet categories is orthogonal wavelets, but it is hard to find wavelets that are both orthogonal and symmetric, and symmetry is required for perfect reconstruction. Hence the need for wavelets that are both orthogonal and symmetric. The solution comes in the form of biorthogonal wavelets, which preserve the perfect-reconstruction condition. Although a number of biorthogonal wavelets have been proposed in the literature, this paper proposes four new biorthogonal wavelets that give better compression performance. The new wavelets are compared with traditional wavelets using the design metrics Peak Signal-to-Noise Ratio (PSNR) and Compression Ratio (CR). The Set Partitioning in Hierarchical Trees (SPIHT) coding algorithm was used to perform the image compression.
An Efficient Algorithm for the Segmentation of Astronomical Images (IOSR Journals)
This document proposes an efficient algorithm for segmenting celestial objects from astronomical images. The algorithm uses multiple preprocessing steps including removing bright point sources, stationary wavelet transform, total variation denoising, and adaptive histogram equalization. Level set segmentation is then used as the key technique for segmentation. Preprocessing helps overcome issues like noise, weak object edges, and low contrast. Level set segmentation can segment objects while retaining their texture and shape information for subsequent classification. The algorithm is tested on various celestial objects and shown to effectively segment them.
WAVELET DECOMPOSITION AND ALPHA STABLE FUSION (sipij)
This article gives a new method of fusing multifocal images that combines the Laplacian pyramid with wavelet decomposition, using the alpha-stable distance as the selection rule. We start by decomposing the multifocal images into several pyramid levels, then apply the wavelet decomposition to each level. The originality of this work is the use of the alpha-stable distance to fuse the wavelet images at each level of the pyramid. To obtain the final fused image, we reconstruct the combined image at each level of the pyramid. We compare our method to other existing methods in the literature and find that it performs better in most cases.
A Comparative Study of Wavelet and Curvelet Transform for Image Denoising (IOSR Journals)
Abstract: This paper compares the discriminating power of various multiresolution-based thresholding techniques, i.e., wavelet and curvelet, for image denoising. The curvelet transform offers exact reconstruction, stability against perturbation, ease of implementation, and low computational complexity. We propose to employ curvelets for facial feature extraction and perform a thorough comparison against the wavelet transform; in particular, the orientation of the curvelet is analysed. Experiments show that under expression changes, the small-scale coefficients of the curvelet transform are robust, though the large-scale coefficients of both transforms are affected. The advantage of curvelets lies in their sparse representation ability, which is critical for compression, for the estimation of denoised images, and for the associated inverse problems; thus the experiments and the theoretical analysis coincide. Keywords: curvelet transform, face recognition, feature extraction, sparse representation, thresholding rules, wavelet transform.
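The thresholding rules named in the keywords come in two standard flavors, which apply identically to wavelet and curvelet coefficients: hard thresholding zeroes small coefficients, and soft thresholding additionally shrinks the survivors toward zero. The coefficient values below are made up:

```python
# Hard vs. soft thresholding of transform coefficients, the denoising
# rules compared in multiresolution thresholding studies.

def hard_threshold(c, t):
    return c if abs(c) > t else 0.0

def soft_threshold(c, t):
    if abs(c) <= t:
        return 0.0
    return c - t if c > 0 else c + t      # shrink magnitude by t

coeffs = [5.0, -0.4, 1.2, -3.0, 0.9]
t = 1.0
print([hard_threshold(c, t) for c in coeffs])
print([soft_threshold(c, t) for c in coeffs])
```

Soft thresholding gives a continuous estimator (fewer ringing artifacts) at the cost of biasing large coefficients; hard thresholding preserves strong features exactly but is discontinuous at the threshold.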
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMS (cscpconf)
Fingerprint image compression based on geometric transforms is an important research topic; in recent years, many transforms have been proposed to best represent a particular type of image, the fingerprint image, including classical wavelets and wave atoms. In this paper we present a comparative study between these transforms for use in compression. The results show that for fingerprint images, wave atoms offer better performance than the current transform-based compression standard. The wave atom transform makes a considerable contribution to the compression of fingerprint images by achieving high compression ratios and PSNR values with a reduced number of coefficients. In addition, the proposed method is verified with objective and subjective testing.
This paper presents a trifocal Rotman Lens Design
approach. The effects of focal ratio and element spacing on
the performance of Rotman Lens are described. A three beam
prototype feeding 4 element antenna array working in L-band
has been simulated using RLD v1.7 software. Simulated
results show that the simulated lens has a return loss of –
12.4dB at 1.8GHz. Beam to array port phase error variation
with change in the focal ratio and element spacing has also
been investigated.
This document discusses a study that used MATLAB curve fitting to determine the optical constants and thickness of tin oxide (SnO2) thin films. A transmission equation was customized in MATLAB and used to fit experimental transmittance data from SnO2 films. The curve fitting determined values for six parameters: refractive index, absorption coefficient, thickness, and constants related to dispersion models. The fitted curve closely matched the measured transmission curve, indicating the optical constants and thickness were accurately determined through this MATLAB curve fitting method.
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral ImagesIDES Editor
Hyperspectral images can be efficiently compressed
through a linear predictive model, as for example the one
used in the SLSQ algorithm. In this paper we exploit this
predictive model on the AVIRIS images by individuating,
through an off-line approach, a common subset of bands, which
are not spectrally related with any other bands. These bands
are not useful as prediction reference for the SLSQ 3-D
predictive model and we need to encode them via other
prediction strategies which consider only spatial correlation.
We have obtained this subset by clustering the AVIRIS bands
via the clustering by compression approach. The main result
of this paper is the list of the bands, not related with the
others, for AVIRIS images. The clustering trees obtained for
AVIRIS and the relationship among bands they depict is also
an interesting starting point for future research.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Sub-windowed laser speckle image velocimetry by fast fourier transform technique
Abstract
In this work, laser speckle velocimetry, a unique optical method for velocity measurement of fluid flow, is described. A laser sheet is developed and illuminated on microscopic seeded particles to produce the speckle pattern at the recording plane. Double-frame, single-exposure speckle images are captured in such a way that the second speckle image is shifted exactly in a known direction. The auto-correlation method suffers an ambiguity in the direction of flow; to rectify this, the spatial shift of the second image has been predetermined. Cross-correlation of sub-interrogation areas is obtained by the Fast Fourier Transform technique. Four sub-windows are processed to obtain the velocity information precisely, with vector-map analysis.
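The displacement extraction described above can be sketched in one dimension: the second exposure is a shifted copy of the first, and the cross-correlation peak gives the shift. In the paper the correlation is evaluated through an FFT for speed; the direct sum below is the mathematically equivalent definition, kept plain so the sketch stays self-contained:

```python
def correlation_shift(a, b):
    """Return the lag that maximizes the cross-correlation of b against a."""
    n = len(a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        s = sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

# First exposure: a small intensity blob; second exposure: the same blob moved 3 pixels.
frame1 = [0.0] * 32
frame1[10], frame1[11], frame1[12] = 1.0, 2.0, 1.0
frame2 = [0.0] * 32
for i, v in enumerate(frame1):
    if v:
        frame2[i + 3] = v

assert correlation_shift(frame1, frame2) == 3
```

Because the second image's shift is predetermined, a correlation peak at a positive lag resolves the direction ambiguity the abstract mentions.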
This document summarizes research on reducing acoustic radiation from vibrating composite panels by attaching point masses. Analytical models are developed to predict the natural frequencies, mode shapes and response of rectangular composite panels with attached point masses. The receptance method is used to calculate the vibration characteristics of the combined panel-mass system from the individual components. By adjusting the size, number and location of point masses, the directivity of sound radiated by the panel can be controlled. The models show good agreement with numerical simulations and provide physical insight into achieving changes in sound directivity.
Detection and Classification in Hyperspectral Images using Rate Distortion an...Pioneer Natural Resources
This document summarizes an experiment that compares two methods of bit allocation for compressing hyperspectral imagery using JPEG2000: 1) the traditional high bit rate quantizer approach and 2) the rate distortion optimal (RDO) approach. The experiment shows that both methods perform well at relatively low bit rates, achieving over 96% classification accuracy. However, at very low bit rates, the RDO approach outperforms the high bit rate quantizer approach, achieving 90% accuracy at 0.0375 bpppb compared to less than 90% for the high bit rate method. The RDO approach also achieves lower mean squared error than the high bit rate quantizer approach.
Modified Adaptive Lifting Structure Of CDF 9/7 Wavelet With Spiht For Lossy I...idescitation
We present a modified structure of the 2-D CDF 9/7 wavelet transform based on adaptive lifting in image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, adaptive lifting performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The predicting and updating signals of adaptive lifting can be derived even at the fractional-pixel precision level to achieve high resolution, while still maintaining perfect reconstruction. To enhance performance, the adaptive modified structure of the 2-D CDF 9/7 transform is coupled with the SPIHT coding algorithm to mitigate the drawbacks of the wavelet transform. Experimental results show that the proposed modified image coding scheme outperforms JPEG 2000 in both PSNR and visual quality, with improvements of up to 6.0 dB over the existing structure on images with rich orientation features.
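For reference, the standard (non-adaptive) lifting factorization of the CDF 9/7 wavelet that such a scheme modifies can be sketched on a 1-D signal. The four lifting constants and the scaling factor below are the usual published values; boundaries are handled by edge replication, which is an assumption here (any rule applied identically in both directions preserves perfect reconstruction):

```python
A, B, G, D, K = -1.586134342, -0.05298011854, 0.8829110762, 0.4435068522, 1.149604398

def cdf97_forward(x):
    """One level of the CDF 9/7 lifting transform (x must have even length)."""
    s, d = list(x[0::2]), list(x[1::2])
    n = len(d)
    edge = lambda v, i: v[min(max(i, 0), len(v) - 1)]           # edge replication
    d = [d[i] + A * (s[i] + edge(s, i + 1)) for i in range(n)]  # predict 1
    s = [s[i] + B * (edge(d, i - 1) + d[i]) for i in range(n)]  # update 1
    d = [d[i] + G * (s[i] + edge(s, i + 1)) for i in range(n)]  # predict 2
    s = [s[i] + D * (edge(d, i - 1) + d[i]) for i in range(n)]  # update 2
    return [v * K for v in s], [v / K for v in d]               # scaling

def cdf97_inverse(approx, detail):
    """Undo each lifting step in reverse order for perfect reconstruction."""
    s, d = [v / K for v in approx], [v * K for v in detail]
    n = len(d)
    edge = lambda v, i: v[min(max(i, 0), len(v) - 1)]
    s = [s[i] - D * (edge(d, i - 1) + d[i]) for i in range(n)]
    d = [d[i] - G * (s[i] + edge(s, i + 1)) for i in range(n)]
    s = [s[i] - B * (edge(d, i - 1) + d[i]) for i in range(n)]
    d = [d[i] - A * (s[i] + edge(s, i + 1)) for i in range(n)]
    out = [0.0] * (2 * n)
    out[0::2], out[1::2] = s, d
    return out

x = [float((i * 7) % 13) for i in range(16)]
lo, hi = cdf97_forward(x)
y = cdf97_inverse(lo, hi)
assert max(abs(a - b) for a, b in zip(x, y)) < 1e-9
```

The adaptive scheme in the paper replaces the fixed horizontal/vertical ordering of these steps with direction-dependent prediction; the lifting constants themselves are unchanged.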
Investigation of repeated blasts at Aitik mine using waveform cross correlationIvan Kitov
We present results of signal detection from repeated events at the Aitik and Kiruna mines in Sweden, based on waveform cross-correlation. Several advanced methods based on tensor Singular Value Decomposition are applied to waveforms measured at the seismic array ARCES, which consists of three-component sensors.
ADAPTIVE, SCALABLE, TRANSFORMDOMAIN GLOBAL MOTION ESTIMATION FOR VIDEO STABIL...cscpconf
Video Stabilization, which is important for better analysis and user experience, is typically done through Global Motion Estimation (GME) and Compensation. GME can be done in image domain using many techniques or in Transform domain using the well-known Phase Correlation methods which relate motion to phase shift in the spectrum. While image domain methods are generally slower (due to dense vector field computations), they can do global as well as local motion estimation. Transform domain methods cannot normally do local motion, but are faster and more accurate on homogeneous images, and are resilient to even rapid illumination changes and large motion. However both these approaches can become very time consuming if one needs more accuracy and smoothness because of the nature of the tradeoff. We show here that wavelet transforms can be used in a novel way to achieve a very smooth stabilization along with a significant speedup in this Fourier domain computation without sacrificing accuracy. We
do this by adaptively selecting and combining motion computed on a specific pair of sub-bands using the wavelet interpolation capability. Our approach yields a smooth, scalable, fast and
adaptive algorithm (based on time requirement and recent motion history) to yield significantly better accuracy than a single level wavelet decomposition based approach.
This document introduces the Continuous Crooklet Transform (CCrT) and Discrete Crooklet Transform (DCrT) as novel transforms that aim to overcome limitations of wavelet and curvelet transforms. The CCrT is defined as the convolution of an input signal with scaled and translated versions of a principal "crooklet" function. The DCrT decomposes a signal using a filter bank of low-pass and high-pass filters in a manner similar to curvelet transforms. The Crooklet Transform fits image properties better than wavelets and has less computational complexity than curvelets. Potential applications of the Crooklet Transform include image processing, feature extraction, image compression, and medical imaging.
The document discusses optimizing rate allocation for compressing hyperspectral images using JPEG2000. It proposes using the discrete wavelet transform instead of the Karhunen-Loeve transform for decorrelation due to lower computational complexity. A mixed model is used for rate distortion optimal bit allocation instead of experimentally obtained rate distortion curves. Comparisons show the mixed model approach results in lower mean squared error than traditional bit allocation schemes, while having lower implementation complexity than prior methods.
This document presents an image fusion algorithm based on wavelet transform and second generation curvelet transform. It begins with an introduction to image fusion and discusses limitations of wavelet transforms. It then introduces the second generation curvelet transform, which represents edges and singularities better than wavelets. The proposed algorithm uses both wavelet and curvelet transforms to decompose and fuse images. It applies the algorithm to experiments fusing multi-focus images and complementary CT and MRI images. Results show the curvelet-based fusion produces clearer images that better preserve useful information from the source images.
Lossless image compression using new biorthogonal waveletssipij
Even though a large number of wavelets exist, new wavelets are needed for specific applications. One of the basic wavelet categories is orthogonal wavelets, but it is hard to find wavelets that are both orthogonal and symmetric, and symmetry is required for perfect reconstruction; hence the need for wavelets that are orthogonal and symmetric. The solution came in the form of biorthogonal wavelets, which preserve the perfect-reconstruction condition. Though a number of biorthogonal wavelets have been proposed in the literature, in this paper four new biorthogonal wavelets are proposed which give better compression performance. The new wavelets are compared with traditional wavelets using the design metrics Peak Signal-to-Noise Ratio (PSNR) and Compression Ratio (CR). The Set Partitioning in Hierarchical Trees (SPIHT) coding algorithm was utilized to perform the image compression.
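The two design metrics named above have standard definitions, which can be stated concretely; this sketch computes PSNR for 8-bit samples and the compression ratio from byte counts (definitions only — the proposed wavelet filters themselves are not reproduced here):

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-length sample lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """CR: how many times smaller the compressed representation is."""
    return original_bytes / compressed_bytes

# A reconstruction off by 1 gray level everywhere gives MSE = 1,
# so PSNR = 10*log10(255^2) ≈ 48.13 dB.
orig = [10, 20, 30, 40]
recon = [11, 21, 31, 41]
assert abs(psnr(orig, recon) - 10.0 * math.log10(255.0 ** 2)) < 1e-9
assert compression_ratio(1024, 128) == 8.0
```

Higher PSNR at the same CR (or higher CR at the same PSNR) is the sense in which one wavelet "gives better compression performance" than another.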
An Efficient Algorithm for the Segmentation of Astronomical ImagesIOSR Journals
This document proposes an efficient algorithm for segmenting celestial objects from astronomical images. The algorithm uses multiple preprocessing steps including removing bright point sources, stationary wavelet transform, total variation denoising, and adaptive histogram equalization. Level set segmentation is then used as the key technique for segmentation. Preprocessing helps overcome issues like noise, weak object edges, and low contrast. Level set segmentation can segment objects while retaining their texture and shape information for subsequent classification. The algorithm is tested on various celestial objects and shown to effectively segment them.
WAVELET DECOMPOSITION AND ALPHA STABLE FUSIONsipij
This article gives a new method of fusing multifocal images, combining the Laplacian pyramid and the wavelet decomposition and using the alpha-stable distance as a selection rule. We start by decomposing the multifocal images into several pyramid levels, then apply the wavelet decomposition to each level. The originality of this work is to use the alpha-stable distance to fuse the wavelet images at each level of the pyramid. To obtain the final fused image, we reconstruct the combined image at each level of the pyramid. We compare our method to other existing methods in the literature and find that it generally performs better.
This paper presents an approach for image restoration in the presence of blur and noise. The image is divided into independent regions modeled with a Gaussian prior. Wavelet based methods are used for image denoising, while classical Wiener filtering is used for deblurring. The algorithm finds the maximum a posteriori estimate at the intersection of convex sets generated by Wiener filtering. It provides efficient image restoration without sacrificing the simplicity of filtering, and generates a better restored image.
A Comparative Study of Wavelet and Curvelet Transform for Image DenoisingIOSR Journals
Abstract: This paper describes a comparison of the discriminating power of various multiresolution-based thresholding techniques, i.e., wavelet and curvelet, for image denoising. The curvelet transform offers exact reconstruction, stability against perturbation, ease of implementation and low computational complexity. We propose to employ curvelets for facial feature extraction and perform a thorough comparison against the wavelet transform; in particular, the orientation of the curvelet is analysed. Experiments show that under expression changes the small-scale coefficients of the curvelet transform are robust, though the large-scale coefficients of both transforms are likely influenced. The reason behind the advantages of the curvelet lies in its ability to give sparse representations, which is critical for compression, for the estimation of denoised images and for the associated inverse problems; thus the experiments and theoretical analysis coincide. Keywords: Curvelet transform, Face recognition, Feature extraction, Sparse representation, Thresholding rules, Wavelet transform.
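The thresholding rules that such wavelet/curvelet denoising comparisons rest on act coefficient-wise; a minimal sketch of the two standard rules (hard and soft thresholding), independent of which transform produced the coefficients:

```python
def hard_threshold(c, t):
    """Keep a coefficient unchanged only if its magnitude exceeds the threshold."""
    return c if abs(c) > t else 0.0

def soft_threshold(c, t):
    """Zero small coefficients and shrink the survivors toward zero by t."""
    if abs(c) <= t:
        return 0.0
    return c - t if c > 0 else c + t

coeffs = [-4.0, -0.5, 0.2, 3.0]
assert [hard_threshold(c, 1.0) for c in coeffs] == [-4.0, 0.0, 0.0, 3.0]
assert [soft_threshold(c, 1.0) for c in coeffs] == [-3.0, 0.0, 0.0, 2.0]
```

Hard thresholding preserves large-coefficient amplitudes (better for edges), while soft thresholding gives smoother results at the cost of some bias; the choice interacts with how sparsely each transform represents the image.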
Boosting CED Using Robust Orientation Estimationijma
In this paper, Coherence Enhancement Diffusion (CED) is boosted by feeding it an external orientation obtained from a new robust orientation estimation. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose a new scheme is proposed in which the orientation is pre-calculated using local and integration scales. The experiments show that the proposed scheme works much better in noisy environments than traditional Coherence Enhancement Diffusion.
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMScscpconf
Fingerprint image compression based on geometric transforms is an important research topic; in recent years many transforms have been proposed to best represent this particular type of image, such as classical wavelets and wave atoms. In this paper we present a comparative study between these transforms for use in compression. The results show that, for fingerprint images, wave atoms offer better performance than the current transform-based compression standard. The wave atom transform brings a considerable contribution to the compression of fingerprint images, achieving high compression ratios and PSNR values with a reduced number of coefficients. In addition, the proposed method is verified with objective and subjective testing.
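The "reduced number of coefficients" idea common to wavelet- and wave-atom-based compression can be sketched generically: after transforming, keep only the k largest-magnitude coefficients and zero the rest (the transform itself is abstracted away in this illustration):

```python
def keep_largest(coeffs, k):
    """Zero all but the k largest-magnitude coefficients."""
    if k >= len(coeffs):
        return list(coeffs)
    cutoff = sorted((abs(c) for c in coeffs), reverse=True)[k - 1]
    out, kept = [], 0
    for c in coeffs:
        if abs(c) >= cutoff and kept < k:   # kept counter breaks magnitude ties
            out.append(c)
            kept += 1
        else:
            out.append(0.0)
    return out

c = [0.1, -5.0, 0.3, 2.0, -0.2, 1.5]
assert keep_largest(c, 2) == [0.0, -5.0, 0.0, 2.0, 0.0, 0.0]
```

A transform that concentrates the image's energy into fewer coefficients (as wave atoms reportedly do for the oscillatory ridge patterns of fingerprints) loses less quality at the same k, which is what drives the PSNR-versus-compression-ratio comparison.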
ANALYSIS OF INTEREST POINTS OF CURVELET COEFFICIENTS CONTRIBUTIONS OF MICROS...sipij
This paper focuses on an improved edge model based on Curvelet coefficient analysis. The Curvelet transform is a powerful tool for the multiresolution representation of objects with anisotropic edges. The contributions of Curvelet coefficients have been analyzed using the Scale Invariant Feature Transform (SIFT), commonly used to study local structure in images. The permutation of Curvelet coefficients from the original image and the edge image obtained from a gradient operator is used to improve the original edges. Experimental results show that this method brings out details on edges as the decomposition scale increases.
Noise Removal in SAR Images using Orthonormal Ridgelet TransformIJERA Editor
Development in the field of image processing for reducing speckle noise in digital and satellite images is a challenging task. Many algorithms have previously been proposed to despeckle digital images. In this article we present experimental results on the despeckling of Synthetic Aperture Radar (SAR) images. SAR images have wide applications in remote sensing and in mapping the surfaces of planets. SAR can also be implemented as "inverse SAR" by observing a moving target over a substantial time with a stationary antenna. Hence, denoising of SAR images is an essential task for viewing the information. Here we introduce a transformation technique called the "ridgelet", an extension of the wavelet. Ridgelet analysis is carried out in the Radon domain, analogously to wavelet analysis, as it translates singularities along lines into point singularities at different frequencies. Simulation results are showcased to demonstrate that the proposed work is more reliable than other despeckling processes, and the quality of the despeckled image is measured in terms of Peak Signal-to-Noise Ratio and Mean Square Error.
Performance Evaluation of Quarter Shift Dual Tree Complex Wavelet Transform B...IJECEIAES
In this paper, multifocus image fusion using the quarter-shift dual-tree complex wavelet transform is proposed. Multifocus image fusion is a technique that combines the partially focused regions of multiple images of the same scene into a fully focused fused image. Directional selectivity and shift invariance are essential properties for producing a high-quality fused image; conventional wavelet-based fusion algorithms, however, introduce ringing artifacts into the fused image due to their lack of shift invariance and poor directionality. The quarter-shift dual-tree complex wavelet transform has proven to be an effective multi-resolution transform for image fusion with its directional and shift-invariant properties. Experimentation with this transform led to the conclusion that the proposed method not only produces sharp details (focused regions) in the fused image thanks to its good directionality, but also removes artifacts through its shift invariance, yielding a high-quality fused image. The proposed method's performance is compared with traditional fusion methods in terms of objective measures.
DESPECKLING OF SAR IMAGES BY OPTIMIZING AVERAGED POWER SPECTRAL VALUE IN CURV...ijistjournal
Synthetic Aperture Radar (SAR) images are inherently affected by multiplicative speckle noise, due to the coherent nature of scattering phenomena. In this paper, a novel algorithm capable of suppressing speckle noise using the Particle Swarm Optimization (PSO) technique is presented. The algorithm initially identifies a homogeneous region in the corrupted image and uses PSO to optimize the thresholding of curvelet coefficients to recover the original image. The Average Power Spectrum Value (APSV) is used as the objective function of the PSO. The proposed algorithm removes speckle noise effectively; its performance is tested and compared with the mean, median, Lee, statistic Lee, Kuan, frost and gamma filters, outperforming these conventional filtering methods.
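A minimal particle swarm optimization loop of the kind used to tune such a threshold can be sketched on a one-dimensional objective. The true objective in the paper is the averaged power spectral value of the despeckled image; the quadratic below is a stand-in so the sketch stays self-contained, and all parameter values (inertia, acceleration constants, swarm size) are ordinary textbook choices, not the paper's:

```python
import random

def pso_minimize(f, lo, hi, particles=20, iters=60, seed=1):
    """Minimize f over [lo, hi] with a basic global-best PSO."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(particles)]  # positions
    v = [0.0] * particles                                # velocities
    pbest = list(x)                                      # personal bests
    pval = [f(p) for p in x]
    g = pbest[pval.index(min(pval))]                     # global best
    w, c1, c2 = 0.7, 1.5, 1.5                            # inertia, cognitive, social
    for _ in range(iters):
        for i in range(particles):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (g - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)         # keep within bounds
            fx = f(x[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, x[i]
                if fx < f(g):
                    g = x[i]
    return g

# Stand-in objective with its minimum at threshold = 0.3.
best = pso_minimize(lambda t: (t - 0.3) ** 2, 0.0, 1.0)
assert abs(best - 0.3) < 1e-2
```

In the paper's setting, f(t) would apply curvelet thresholding at level t to the speckled image and return the APSV, so the swarm searches for the threshold that best suppresses speckle.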
Determining the Efficient Subband Coefficients of Biorthogonal Wavelet for Gr...CSCJournals
In this paper, we propose an invisible blind watermarking scheme for gray-level images. The cover image is decomposed using the Discrete Wavelet Transform with biorthogonal wavelet filters, and the watermark is embedded into significant coefficients of the transformation. The biorthogonal wavelet is used because it has the properties of perfect reconstruction and smoothness. The proposed scheme embeds a monochrome watermark into a gray-level image. In the embedding process we use a localized decomposition, meaning that the second-level decomposition is performed on the detail sub-bands resulting from the first-level decomposition: the image is decomposed at the first level, and for the second-level decomposition we consider the horizontal, vertical and diagonal sub-bands separately. From this second-level decomposition we take the respective horizontal, vertical and diagonal coefficients for embedding the watermark. The robustness of the scheme is tested against different types of image processing attacks, such as blurring, cropping, sharpening, Gaussian filtering and salt-and-pepper noise. The experimental results show that embedding the watermark into the diagonal sub-band coefficients is robust against different types of attacks.
An Application of Second Generation Wavelets for Image Denoising using Dual T...IDES Editor
The lifting scheme of the discrete wavelet transform (DWT) is now quite well established as an efficient technique for image denoising. The lifting-scheme factorization of biorthogonal filter banks is carried out with linear-adaptive, delay-free and faster decomposition arithmetic. This adaptive factorization aims to achieve a transparent, more generalized, complexity-free fast decomposition process while preserving the features that an ordinary wavelet decomposition offers. This work targets a considerable reduction in the computational complexity and power required for decomposition. The striking demerits of the DWT structure, viz. shift sensitivity and poor directionality, have already been shown to be eliminated by the dual-tree complex wavelet (DT-CWT) structure. The well-established features of the DT-CWT and the robust lifting scheme are suitably combined to achieve image denoising with a prolific rise in computational speed and directionality, together with a desirable drop in computation time, power and algorithmic complexity compared to other techniques.
Elements Space and Amplitude Perturbation Using Genetic Algorithm for Antenna...CSCJournals
A simple and fast genetic algorithm (GA) is developed to reduce the sidelobes in non-uniformly spaced linear antenna arrays. The proposed GA optimizes two vectors of variables to increase the main-lobe-to-sidelobe power ratio (M/S) of the array's radiation pattern. In the first phase the algorithm calculates the positions of the array elements, and in the second phase it manipulates the amplitudes of the excitation signals for each element. Simulations were performed for 16- and 24-element array structures. The results indicate that M/S improved in the first phase from 13.2 to over 22.2 dB, while the half-power beamwidth (HPBW) was left almost unchanged. After element replacement, amplitude tapering in the second phase achieved a further improvement up to 32 dB. The simulations also showed that, after element space perturbation, some antenna elements can be merged together without any performance degradation in the radiation pattern in terms of gain and sidelobe level.
Review on Medical Image Fusion using Shearlet TransformIRJET Journal
This document reviews medical image fusion using the shearlet transform. It discusses how medical image fusion combines information from multimodality images like CT, MRI, PET into a single image. The shearlet transform allows for more efficient encoding of anisotropic features compared to wavelets. The proposed algorithm involves decomposing registered input images using shearlet transforms, applying fusion rules to select coefficients, and reconstructing the fused image. Medical image fusion using shearlets can improve diagnosis by combining complementary anatomical and functional details from different imaging modalities.
Fusion Based Gaussian noise Removal in the Images using Curvelets and Wavelet...CSCJournals
This document presents a fusion-based method for removing Gaussian noise from images using curvelets and wavelets with a Gaussian filter. The proposed method aims to address artifacts that appear when using curvelets alone. It first applies Gaussian filtering, wavelet denoising, and curvelet denoising separately. It then fuses the results of these three approaches to obtain a better denoised image with fewer artifacts. The method is tested on various standard test images and medical images corrupted with white Gaussian noise. Results are evaluated using peak signal-to-noise ratio and weighted peak signal-to-noise ratio, which accounts for human visual sensitivity.
Novel design of a fractional wavelet and its application to image denoisingjournalBEEI
This paper proposes a new wavelet family based on fractional calculus. The Haar wavelet is extended to fractional order by a generalization of the associated conventional low-pass filter using the fractional delay operator Z^-D. The high-pass fractional filter is then designed by a simple modulation of the low-pass filter, while the scaling and wavelet functions are constructed using the cascade Daubechies algorithm. The regularity and orthogonality of the obtained wavelet basis are ensured by a good choice of the fractional filter coefficients. An application example is presented to illustrate the effectiveness of the proposed method. Thanks to the flexibility of the fractional filters, the proposed approach provides better performance in terms of image denoising.
APPLICATION OF PARTICLE SWARM OPTIMIZATION TO MICROWAVE TAPERED MICROSTRIP LINEScseij
This document discusses using Particle Swarm Optimization (PSO) to design a tapered microstrip transmission line to match an arbitrary load to a 50Ω line. PSO was used to optimize the impedances of a three section tapered line to minimize reflections. Simulations found impedances that gave good matching at 5GHz. PSO converged to solutions in under 1000 iterations. This demonstrates PSO's effectiveness in solving multi-objective microwave engineering optimization problems.
Application of particle swarm optimization to microwave tapered microstrip linescseij
The application of metaheuristic algorithms has been of continued interest in the field of electrical engineering because of their powerful features. In this work a special design is carried out for a tapered transmission line used for matching an arbitrary real load to a 50 Ω line. The problem at hand is to match this arbitrary load to the 50 Ω line using a three-section tapered transmission line with impedances in decreasing order from the load, so the problem becomes optimizing an equation with three unknowns under various conditions. The optimized values are obtained using Particle Swarm Optimization; it can easily be shown that PSO is very strong in solving this kind of multi-objective optimization problem.
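For context, a common closed-form starting point for a stepped taper is a geometric impedance progression between the feed and the load; this sketch computes three such section impedances (an illustration of the general idea only, not the PSO-optimized values from the paper):

```python
def geometric_taper(z0, zl, sections):
    """Section impedances stepped geometrically from feed impedance z0 toward load zl."""
    return [z0 * (zl / z0) ** (i / (sections + 1)) for i in range(1, sections + 1)]

# Feed 50 Ω, hypothetical load 100 Ω, three sections.
zs = geometric_taper(50.0, 100.0, 3)
assert all(a < b for a, b in zip(zs, zs[1:]))   # monotonic from feed toward the load
assert 50.0 < zs[0] and zs[-1] < 100.0
```

A PSO-based design replaces this fixed progression with three free impedances and searches them directly against the reflection-coefficient objective.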
Segmentation Based Multilevel Wide Band Compression for SAR Images Using Coiflet Wavelet
P.Subashini & M.Krishnaveni
International Journal of Image Processing (IJIP), Volume (5) : Issue (2) : 2011
Segmentation Based Multilevel Wide Band Compression for SAR
Images Using Coiflet Wavelet
Dr.P.Subashini mail.p.subashini@gmail.com
Associate Professor
Department of Computer Science
Avinashilingam Deemed University for Women
Coimbatore, 641 043, India
M.Krishnaveni krishnaveni.rd@gmail.com
Research Assistant
Department of Computer Science
Avinashilingam Deemed University for Women
Coimbatore, 641 043, India
Abstract
Synthetic aperture radar (SAR) data represents a significant resource of information for a large
variety of researchers. Thus, there is a strong interest in developing data encoding and decoding
algorithms which can obtain higher compression ratios while keeping image quality to an
acceptable level. In this work, results of different wavelet-based image compression and
segmentation-based wavelet image compression are assessed through controlled experiments
on synthetic SAR images. The effects of dissimilar wavelet functions and of the number of
decomposition levels are examined in order to find the optimal wavelet family for SAR images.
The optimal wavelet for segmentation-based wavelet image compression is the coiflet, for both
the low-frequency and the high-frequency components. The results presented here are a good
reference for SAR application developers choosing among wavelet families, and they show that
the wavelet transform is a rapid, robust and reliable tool for SAR image compression. Numerical
results confirm the potency of this approach.
Keywords: Image Processing, Compression, Segmentation, Wavelets, Quality Measures, SAR
Images
1. INTRODUCTION
The necessity of efficient and consistent compression techniques for remote sensing imagery is increasing rapidly with the growth of network transmission [2]. In the image processing domain, neglecting the peculiar characteristics of the data contributes greatly to poor recognition performance [9]. The usual approach to this problem is to resort to edge-preserving filters. Segmentation, compression and filtering should be carried out jointly for the extraction and better representation of the most relevant features. A fundamental approach to image filtering and compression therefore requires the prior segmentation of the image into homogeneous regions [5]. This work builds on the research results of Zhaohui Zeng and Ian Cumming on SAR image compression using the DWT [15]. Here, wavelet-based despeckling is used implicitly to distinguish between noise components (high and low) and their state boundaries [4][2]. This work aims at studying and quantifying the potential advantages provided by wavelets for segmentation based image compression. The investigation draws on several SAR image collections, such as ERS Synthetic Aperture Radar (SAR) imagery and ESA, JERS and RADARSAT data. The paper is organized as follows. Section 2 explains the compression schemes needed for SAR segmentation.
Section 3 deals with wavelet analysis and its metrics which comprises the performance measures
of wavelet families. Section 4 describes the compression coding schemes based on
segmentation. Section 5 presents and discusses the results of a number of experiments, and
Section 6 draws conclusions with possible scenario.
2. SAR IMAGE WAVELET COMPRESSION
In the scientific literature on SAR systems, the term compression is often used to indicate the data processing which focuses the received echoes. SAR domain compression [3][14] is carried out off-line at the base station, which can leverage more time and computational power. Image compression has thus become an operating instrument for scientific research [9]. In the basic literature, evaluation is done on quality metrics alone, neglecting the visual effects and the consequences for image interpretability. To meet the above conditions and the limitations of SAR, new compression techniques based on the wavelet transform are characterized by good time-frequency localization, multiresolution representation and low-complexity implementation through filter banks [5]. Since the wavelet transform is a bandpass filter with a known response function (the wavelet function), it is possible to reconstruct the original time series using either deconvolution or the inverse filter [15]. This is straightforward for the orthogonal wavelet transform (which has an orthogonal basis), but for the continuous wavelet transform it is complicated by the redundancy in time and scale. However, this redundancy also makes it possible to reconstruct the time series using a completely different wavelet function, the easiest of which is a delta (δ) function [1]. In this case, the reconstructed time series is just the sum of the real part of the wavelet transform over all scales, as stated in eqn (1):
x_n = \frac{\delta j \, \delta t^{1/2}}{C_\delta \, \psi_0(0)} \sum_{j=0}^{J} \frac{\Re\{ W_n(s_j) \}}{s_j^{1/2}}    ----(1)
The factor \psi_0(0) removes the energy scaling, while the s_j^{1/2} converts the wavelet transform to an energy density. The factor C_\delta comes from the reconstruction of a \delta function from its wavelet transform using the function \psi_0(\eta). This C_\delta is a constant for each wavelet function. Note that if the original time series were complex, then the sum of the complex W_n(s) would be used instead [3][5]. To derive C_\delta for a new wavelet function, first assume a time series with a \delta function at time n = 0, given by x_n = \delta_{n0}. This time series has a Fourier transform \hat{x}_k = N^{-1}, constant for all k. Substituting \hat{x}_k into the wavelet transform, at time n = 0 (the peak) it becomes
W_\delta(s) = \frac{1}{N} \sum_{k=0}^{N-1} \hat{\psi}^{*}(s\,\omega_k)    ---------------------- (2)
Substituting eqn (2) into the reconstruction eqn (1) then gives C_\delta. The C_\delta in eqn (1) is scale independent and is a constant for each wavelet function.
\sigma^2 = \frac{\delta j \, \delta t}{C_\delta \, N} \sum_{n=0}^{N-1} \sum_{j=0}^{J} \frac{|W_n(s_j)|^2}{s_j}    -------(3)
The total energy is conserved under the wavelet transform, and the equivalent of Parseval's theorem for wavelet analysis is given in eqn (3). Both eqn (1) and eqn (3) should be used to check wavelet routines for accuracy and to ensure that sufficiently small values of \delta j and \delta t have been chosen.
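As an illustration of this accuracy check, the sketch below numerically derives C_\delta for the Morlet wavelet (\omega_0 = 6) from eqn (2). The scale grid (s_0 = 2\delta t, \delta j = 0.125) and the series length are assumptions of this sketch, not values taken from the text; the computed constant lands somewhat below the commonly tabulated 0.776 for the Morlet because the discrete sum truncates energy at the Nyquist frequency.

```python
import numpy as np

# Derive C_delta for the Morlet wavelet (omega0 = 6) following eqns (1)-(2):
# take x_n = delta_{n0}, whose DFT is x_hat_k = 1/N, evaluate the wavelet
# transform at n = 0 via eqn (2), and sum over scales as in eqn (1).
# The grid parameters below are assumptions of this sketch.
N, dt = 4096, 1.0
omega0 = 6.0
s0, dj = 2 * dt, 0.125
J = int(7 / dj)                             # largest scale s0 * 2**(J*dj)
scales = s0 * 2.0 ** (dj * np.arange(J + 1))

k = np.arange(1, N // 2 + 1)                # positive frequencies only
omega_k = 2 * np.pi * k / (N * dt)

psi0_0 = np.pi ** -0.25                     # psi_0(0) for the Morlet wavelet

C = 0.0
for s in scales:
    # Fourier transform of the normalised Morlet daughter wavelet
    psi_hat = (2 * np.pi * s / dt) ** 0.5 * np.pi ** -0.25 * \
              np.exp(-0.5 * (s * omega_k - omega0) ** 2)
    W_delta = psi_hat.sum() / N             # eqn (2) at n = 0 (real here)
    C += W_delta / np.sqrt(s)               # accumulate Re{W}/sqrt(s) over j
C_delta = dj * np.sqrt(dt) / psi0_0 * C     # eqn (1) solved for C_delta
print(f"C_delta ~ {C_delta:.3f}")
```

Shrinking \delta j (and enlarging N) moves the estimate toward the converged value, which is exactly the kind of sensitivity the paragraph above says eqns (1) and (3) should be used to monitor.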
3. WAVELET ANALYSIS
This section describes the method of wavelet analysis, includes a discussion of different wavelet functions, and gives details for the analysis of the wavelet power spectrum. Results in this section are adapted to segmentation based wavelet compression from the continuous formulas given for coiflets. The wavelet bases applied to the SAR images are as follows. The Haar wavelet is the simplest of the wavelet transforms [10]. This transform cross-multiplies a function against the Haar wavelet with various shifts and stretches, much as the Fourier transform cross-multiplies a function against a sine wave with two phases and many stretches. Daubechies wavelets are a family of orthogonal wavelets defining a discrete wavelet transform and characterized by a maximal number of vanishing moments for a given support. Meyer's wavelet construction is fundamentally a method for solving the two-scale equation. The symlet wavelet is only nearly symmetric, not exactly so. Coiflets are discrete wavelets designed by Ingrid Daubechies to have scaling functions with vanishing moments. A biorthogonal wavelet is a wavelet whose associated wavelet transform is invertible but not necessarily orthogonal. The reverse biorthogonal family is built from spline wavelet filters.
To verify the validity of the wavelet families, the results are compared on the basis of PSNR and MSE. As an extension of this work, subjective evaluation is also carried out at each decomposition level. Compression is the main criterion for the evaluation task [8][12]. The DWT was used with coiflets: least-asymmetric, compactly supported wavelets with eight vanishing moments, over four scales. SAR image regions of 120 x 120 pixels are used for applying the wavelet families; they were contaminated with speckle noise [2][3].
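SAR speckle is conventionally modelled as multiplicative, unit-mean gamma-distributed noise. A minimal sketch of contaminating a test patch in that way follows; the flat patch and the number of looks are assumptions for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)
clean = np.full((120, 120), 100.0)      # flat stand-in for a 120x120 SAR patch
looks = 4                               # assumed number of looks

# Multiplicative speckle: unit-mean gamma noise with variance 1/looks,
# so the speckled intensity keeps the clean mean but gains std mean/sqrt(looks)
speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
noisy = clean * speckle
print(f"mean={noisy.mean():.1f}  std={noisy.std():.1f}")
```

Increasing `looks` reduces the relative speckle variance, which is why multi-look SAR products appear smoother before any wavelet processing.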
The parameters taken for the analysis of wavelets on SAR images are Mean Square Error, Peak Signal to Noise Ratio, Compression Score and Recovery Performance.
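A minimal harness for this kind of family comparison can be sketched with PyWavelets. The synthetic test image, retention fraction and decomposition depth below are illustrative assumptions, not the paper's SAR data, so the printed numbers will not reproduce Figure 2:

```python
import numpy as np
import pywt

def psnr(orig, recon, peak=255.0):
    mse = np.mean((orig - recon) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def compress_metrics(img, wavelet, level=3, keep=0.1):
    """Decompose, keep the largest `keep` fraction of coefficients, reconstruct."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    approx = arr[slices[0]].copy()
    thresh = np.quantile(np.abs(arr), 1 - keep)
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)   # hard thresholding
    arr[slices[0]] = approx                           # never discard approximation
    recon = pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
    recon = recon[:img.shape[0], :img.shape[1]]
    return psnr(img, recon), float(np.mean((img - recon) ** 2))

rng = np.random.default_rng(0)
img = rng.random((256, 256)) * 255                    # stand-in test image
results = {}
for w in ["haar", "db4", "sym8", "coif5", "bior4.4", "rbio4.4"]:
    p, m = compress_metrics(img, w)
    results[w] = (p, m)
    print(f"{w:8s} PSNR={p:6.2f} dB  MSE={m:9.2f}")
```

On real SAR patches the ranking of families is what the paper's Figure 2 reports; on pure noise like this stand-in, the families behave similarly, so the harness is only a template for plugging in actual imagery.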
FIGURE 1: (a) Haar wavelet (b) Daubechies wavelet (c) Meyer wavelet (d) Symlet wavelet (e) Coiflet wavelet (f) Biorthogonal wavelet (g) Reverse biorthogonal wavelet
FIGURE 2: (a) Peak Signal to Noise Ratio (b) Mean Square Error Rate (c) Recovery Performance (d)
Compression Score
A sample SAR image is assessed subjectively by the results in Figure 1 and objectively in Figure 2. From these results, the coiflet family outperforms the rest of the wavelet families.
4. SEGMENTATION BASED WAVELET COMPRESSION
The idea behind wavelet image compression, as with other transform compression techniques, is to remove some of the coefficient data from the transformed image [8]. Encoding may then be applied to the remaining coefficients. The compressed image is reconstructed by decoding the coefficients, if necessary, and applying the inverse transform to the result [17]. Little image information is lost in the process of removing transform coefficient data.
FIGURE 3: Segmentation based encoding schemes
This process of image compression through transformation and information removal provides a
useful framework for understanding wavelet analysis[8]. Mathematical references develop
multiresolution analysis through the definition of continuous scaling and wavelet functions [18]. In order to compare the performance of the wavelet and contourlet transforms in the proposed compression scheme, we compute the compression ratio (CR) for various quantization levels. CR is defined as the ratio between the uncompressed size and the compressed size of an image [13]. To compute the CR fairly, the original image is encoded using the resulting number of bits it requires. This compression eliminates almost half of the coefficients, yet no detectable deterioration of the image appears. The compression procedure consists of three steps: (i) decomposition; (ii) detail coefficient thresholding [16], where for each level from 1 to N a threshold is selected and hard thresholding is applied to the detail coefficients; and (iii) reconstruction.
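The three steps can be sketched as follows. This is a hedged illustration: the per-level quantile threshold and the coefficient-count proxy for CR are assumptions of this sketch, not the paper's exact coder.

```python
import numpy as np
import pywt

def compress(img, wavelet="coif5", level=3, frac=0.5):
    """(i) decompose, (ii) hard-threshold details per level, (iii) reconstruct.
    `frac` is the fraction of detail coefficients zeroed at each level."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    new_coeffs = [coeffs[0]]                  # approximation band kept untouched
    kept, total = coeffs[0].size, coeffs[0].size
    for (cH, cV, cD) in coeffs[1:]:           # one tuple of detail bands per level
        out = []
        for c in (cH, cV, cD):
            t = np.quantile(np.abs(c), frac)  # per-level threshold selection
            c2 = np.where(np.abs(c) >= t, c, 0.0)   # hard thresholding
            kept += np.count_nonzero(c2)
            total += c2.size
            out.append(c2)
        new_coeffs.append(tuple(out))
    recon = pywt.waverec2(new_coeffs, wavelet)[:img.shape[0], :img.shape[1]]
    cr = 100.0 * (1 - kept / total)           # % coefficients discarded (CR proxy)
    return recon, cr

rng = np.random.default_rng(0)
img = rng.random((256, 256))                  # stand-in for a 256x256 SAR image
recon, cr = compress(img)
print(f"discarded {cr:.1f}% of coefficients")
```

With `frac=0.5` this discards roughly half the coefficients, matching the paragraph's observation that about half can be eliminated with little visible deterioration; a real coder would follow the thresholding with entropy coding of the surviving coefficients.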
5. PERFORMANCE EVALUATION
Figures 4 and 5 show the PSNR and compression-ratio curves for various SAR images [3] under the two compression techniques. Figures 6 and 7 show the normalized cross correlation and normal absolute error rates for the sample SAR images under the two methods, wavelet based compression (WBC) and segmentation based wavelet compression (SBWC). The image size used is 256 x 256. These curves plot compression ratio versus image quality, as represented by PSNR. The most desirable region of such a graph is the upper right quadrant, where high compression ratios and good image quality coincide.
In Figures 6 and 7, the cross-correlations help identify variables which are leading indicators of other variables, or how much one variable is predicted to change in relation to another. The identified method is justified on all of the above parameters, which objectively demonstrate its superiority.
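The quality measures plotted in Figures 4-7 can be written down directly. The definitions below are the standard ones; the paper does not spell out its exact formulas, so treat them as assumed:

```python
import numpy as np

def mse(f, g):
    """Mean square error between reference f and reconstruction g."""
    return np.mean((f - g) ** 2)

def psnr(f, g, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given peak value."""
    m = mse(f, g)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

def ncc(f, g):
    """Normalized cross correlation: sum(f*g) / sum(f^2)."""
    return np.sum(f * g) / np.sum(f ** 2)

def nae(f, g):
    """Normal absolute error: sum(|f - g|) / sum(|f|)."""
    return np.sum(np.abs(f - g)) / np.sum(np.abs(f))

f = np.full((4, 4), 100.0)      # tiny reference patch
g = f.copy()
g[0, 0] = 110.0                 # one perturbed pixel
print(psnr(f, g), ncc(f, g), nae(f, g))
```

For a perfect reconstruction NCC is 1 and NAE is 0, which is why the SBWC curves in Figures 6 and 7 hovering near those values indicate good fidelity.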
[Plot: PSNR (dB) versus image number for nine 256 x 256 SAR images, comparing WBC and SBWC.]
FIGURE 4: Comparison of PSNR for WBC and SBWC
[Plot: compression ratio (%) versus image number for eight 256 x 256 SAR images, comparing WBC and SBWC.]
FIGURE 5: Comparison of CR for WBC and SBWC
[Plot: normalized cross correlation versus image number for eight 256 x 256 SAR images, comparing WBC and SBWC.]
FIGURE 6: Comparison of NCC for WBC and SBWC
[Plot: normal absolute error versus image number for nine 256 x 256 SAR images, comparing WBC and SBWC.]
FIGURE 7: Comparison of NAE for WBC and SBWC
6. CONCLUSION
The stated objective is clearly evaluated through segmentation based image compression performance, accounting for compression ratio, PSNR and time requirements. The work is oriented toward operational naval intelligent imagery systems. The images were evaluated quantitatively, and qualitative assessments were made using the metrics, in order to establish the impact of wavelets and the need for a segmentation based approach to image compression. The experimental analysis is mainly performed on SAR data from various satellite projections. The images were thus processed in two different ways, and the analysis shows that SBWC makes images smoother and conserves object edges without affecting the resolution. In conclusion, the segmentation based wavelet compression technique yields an expertly compressed image, with different operative importance, without changing the pixel accuracy, image resolution or size. The analysis presented here will prove useful in studies of nonstationarity in time series, and the addition of statistical significance tests will improve the quantitative nature of wavelet analysis.
7. REFERENCES
[1] B. B. Saevarsson, J. R. Sveinsson and J. A. Benediktsson, "Combined Wavelet and Curvelet Denoising of SAR Images", Proceedings of IEEE, 2004.
[2] A. Bruce and H. Gao, "Applied Wavelet Analysis with S-Plus", Springer-Verlag New York, Inc., 1996.
[3] S. G. Chang, B. Yu and M. Vetterli, "Adaptive Wavelet Thresholding for Image Denoising and Compression", IEEE Transactions on Image Processing, Vol. 9, No. 9, September 2000.
[4] A. Gersho and R. M. Gray, "Vector Quantization and Signal Compression", Kluwer Academic Publishers, Boston, 1992.
[5] M. Grgic, M. Ravnjak, and B. Zovko-Cihlar, "Filter comparison in wavelet transform of still images", in Proc. IEEE Int. Symp. Industrial Electronics, ISIE'99, Bled, Slovenia, pp. 105-110, 1999.
[6] G. Chen and X. Liu, "Wavelet-Based Despeckling of SAR Images Using Neighbouring Wavelet Coefficients", Proceedings of IEEE, 2005.
[7] G. Chen and X. Liu, "An Improved Wavelet-based Method for SAR Images Denoising Using Data Fusion Technique", Proceedings of IEEE, 2006.
[8] J. Lu, V. R. Algazi, and R. R. Estes, "Comparative study of wavelet image coders", Opt. Eng., vol. 35, pp. 2605-2619, Sept. 1996.
[9] M. Mastriani, "New Wavelet-based Superresolution Algorithm for Speckle Reduction in SAR Images", IJCS, volume 1, number 4, 2006.
[10] C. Mulcahy, "Image compression using the Haar wavelet transform", Spelman Science and Math Journal. Available at: http://www.spelman.edu/~colm/wav.html.
[11] M. Sonka, V. Hlavac and R. Boyle, "Image Processing, Analysis and Machine Vision", 2nd edition, Brooks/Cole Publishing Company.
[12] W. B. Pennebaker and J. L. Mitchell, "Still Image Data Compression Standard", Van Nostrand Reinhold, New York, 1992.
[13] H. Li, Y. He and W. Wang, "Improving Ship Detection with Polarimetric SAR based on Convolution between Co-polarization Channels", Sensors (www.mdpi.com/journal/sensors).
[14] Z. Zeng and I. Cumming, "SAR Image Compression Based on the Discrete Wavelet Transform", Fourth International Conference on Signal Processing, ICSP'98, Beijing, China, October 12-16, 1998.
[15] T. Chang and C. Kuo, "Texture Analysis and Classification with Tree-Structured Wavelet Transform", IEEE Trans. Image Processing, Vol. 2, No. 4, pp. 429-441, October 1993.
[16] S. G. Chang and B. Yu, "Spatially adaptive wavelet thresholding with context modeling for image denoising", IEEE Trans. Image Processing, 9(9):533-539, 2000.
[17] S. Arivazhagan and L. Ganesan, "Texture Segmentation Using Wavelet Transform", Pattern Recognition Letters, 24(16):3197-3203, December 2003.
[18] M. G. Mostafa, T. F. Gharib, et al., "Medical Image Segmentation Using a Wavelet-Based Multiresolution EM Algorithm", IEEE International Conference on Industrial Electronics, Technology & Automation, December 2001.