This document summarizes an experiment comparing two bit-allocation methods for compressing hyperspectral imagery with JPEG2000: 1) the traditional high bit rate quantizer approach and 2) the rate-distortion optimal (RDO) approach. Both methods perform well at moderately low bit rates, achieving over 96% classification accuracy. At very low bit rates, however, the RDO approach outperforms the high bit rate quantizer: it maintains 90% classification accuracy at 0.0375 bits per pixel per band (bpppb), where the high bit rate method falls below 90%. The RDO approach also achieves lower mean squared error than the high bit rate quantizer approach.
An efficient fusion based up sampling technique for restoration of spatially ... (ijitjournal)
The various up-sampling techniques available in the literature produce blurring artifacts in the upsampled, high resolution images. To overcome this problem, an image fusion based interpolation technique is proposed here to restore the high frequency information. The Discrete Cosine Transform (DCT) interpolation technique preserves low frequency information, whereas the Discrete Sine Transform (DST) preserves high frequency information. Therefore, by fusing the DCT and DST based up-sampled images, more of the relevant high frequency information of both up-sampled images can be preserved in the restored, fused image. Restoring the high frequency information lessens the degree of blurring in the fused image and hence improves its objective and subjective quality. Experimental results show that the proposed method achieves a Peak Signal to Noise Ratio (PSNR) improvement of up to 0.9947 dB over DCT interpolation and 2.8186 dB over bicubic interpolation at a 4:1 compression ratio.
Sub-windowed laser speckle image velocimetry by fast Fourier transform technique
Abstract
In this work, laser speckle velocimetry, a unique optical method for measuring fluid flow velocity, is described. A laser sheet is formed and illuminates microscopic seeded particles to produce a speckle pattern at the recording plane. Double-frame, single-exposure speckle images are captured in such a way that the second speckle image is shifted exactly in a known direction. The auto-correlation method suffers an ambiguity in the direction of flow; to resolve it, the spatial shift of the second image is predetermined. Cross-correlation of sub-interrogation areas is obtained by the fast Fourier transform technique. Four sub-windows are processed to obtain precise velocity information with vector map analysis.
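The core displacement estimate can be sketched as a circular cross-correlation of two frames computed via the FFT, with the peak of the correlation plane giving the shift. The frame size, the random stand-in for a speckle pattern, and the known shift below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))            # stand-in for a speckle pattern
shift = (3, 5)                           # known particle displacement (rows, cols)
frame2 = np.roll(frame1, shift, axis=(0, 1))

# Cross-correlation theorem: corr = IFFT( conj(FFT(frame1)) * FFT(frame2) )
f1 = np.fft.fft2(frame1)
f2 = np.fft.fft2(frame2)
corr = np.fft.ifft2(np.conj(f1) * f2).real

# The peak location of the correlation plane is the displacement vector.
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

Because the correlation is circular, real interrogation windows are usually zero-padded and the peak is refined to sub-pixel accuracy; both refinements are omitted here for brevity.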
Single image super resolution with improved wavelet interpolation and iterati... (iosrjce)
IOSR Journal of VLSI and Signal Processing (IOSRJVSP) is a double-blind peer-reviewed international journal that publishes articles contributing new results in all areas of VLSI Design & Signal Processing. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced VLSI Design & Signal Processing concepts and to establish new collaborations in these areas. Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chip and wafer fabrication, packaging, testing and systems applications. Generation of specifications, design and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor and process levels.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Introduction to Wavelet Transform and Two Stage Image De-noising Using Princi... (ijsrd.com)
In the past two decades, various techniques have been developed to support a variety of image processing applications, including medical, satellite, space, transmission and storage, radar and sonar applications. Noise in images affects all of these applications, so it is necessary to remove it, and various methods exist for doing so. The wavelet transform (WT) has proved effective in noise removal, but it has some problems that are overcome by the PCA method. This paper presents an efficient image de-noising scheme using principal component analysis (PCA) with local pixel grouping (LPG), which provides better preservation of local image structures. In this method, a pixel and its nearest neighbors are modeled as a vector variable whose training samples are selected from the local window by block-matching-based LPG. In image de-noising, a compromise has to be found between noise reduction and the preservation of significant image details. PCA is a standard statistical technique for simplifying a dataset by reducing it to lower dimensions, commonly used for data reduction in statistical pattern recognition and signal processing. The procedure is iterated a second time to further improve the de-noising performance, with the noise level adaptively adjusted in the second stage.
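A minimal sketch of PCA-domain denoising on grouped patches follows; here non-overlapping patches stand in for block-matching-based LPG, and the Wiener-like shrinkage rule is an illustrative choice rather than the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.outer(np.sin(np.linspace(0, np.pi, 64)), np.cos(np.linspace(0, np.pi, 64)))
sigma = 0.1
noisy = clean + sigma * rng.normal(size=clean.shape)

# Group 4x4 patches as sample vectors (a stand-in for block-matching LPG).
ps = 4
patches = np.array([noisy[i:i+ps, j:j+ps].ravel()
                    for i in range(0, 64 - ps + 1, ps)
                    for j in range(0, 64 - ps + 1, ps)])

mean = patches.mean(axis=0)
centered = patches - mean
cov = centered.T @ centered / len(centered)
evals, evecs = np.linalg.eigh(cov)

# Wiener-like shrinkage in the PCA domain: attenuate components dominated by noise.
signal_var = np.maximum(evals - sigma**2, 0.0)
gain = signal_var / (signal_var + sigma**2)
coeffs = centered @ evecs
denoised_patches = (coeffs * gain) @ evecs.T + mean

# Reassemble (patches are non-overlapping here for simplicity).
denoised = np.zeros_like(noisy)
idx = 0
for i in range(0, 64 - ps + 1, ps):
    for j in range(0, 64 - ps + 1, ps):
        denoised[i:i+ps, j:j+ps] = denoised_patches[idx].reshape(ps, ps)
        idx += 1

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

The second-stage iteration described in the abstract would repeat this on the first-stage output with a re-estimated, smaller noise level.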
Modified adaptive bilateral filter for image contrast enhancement (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
IMPROVEMENT OF BM3D ALGORITHM AND EMPLOYMENT TO SATELLITE AND CFA IMAGES DENO... (ijistjournal)
This paper proposes a new procedure to improve the performance of the block matching and 3-D filtering (BM3D) image denoising algorithm. It is demonstrated that better performance than that of the BM3D algorithm can be achieved at a variety of noise levels. The method changes the BM3D parameter values according to the noise level and removes the prefiltering used at high noise levels; as a result, Peak Signal-to-Noise Ratio (PSNR) and visual quality improve, and BM3D complexity and processing time are reduced. The improved BM3D algorithm is then extended to denoise satellite and color filter array (CFA) images. Results show improved performance compared with current methods of denoising satellite and CFA images. In this regard, the algorithm is compared, in terms of PSNR and visual quality, with the Adaptive PCA algorithm, which had led to superior performance for denoising CFA images. Processing time is also decreased significantly.
Satellite image enhancement using dual tree M-band wavelet transform (IAES IJEECS)
The loss of high frequency components degrades resolution enhancement. In this work, a wavelet-domain image resolution enhancement technique using the Dual Tree M-Band Wavelet Transform (DTMBWT) is proposed for satellite images. Input images are first decomposed using the DTMBWT. The inverse DTMBWT is then used to generate a new resolution-enhanced image from the interpolated high-frequency sub-band images and the input low-resolution image. An intermediate stage estimates the high-frequency sub-bands to achieve a sharper image. The technique has been tested on benchmark images from a public database. Peak Signal-to-Noise Ratio (PSNR) and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques.
NOVEL BAND-REJECT FILTER DESIGN USING MULTILAYER BRAGG MIRROR AT 1550 NM (cscpconf)
A novel band-reject filter is proposed using a multilayer Bragg mirror structure, by computing the reflection coefficient at the 1550 nm wavelength for optical communication. The dimensions of the different layers and the material composition are modified to study the effect on rejection bandwidth, and the number of layers is also varied to analyze the passband characteristics. A GaN/AlxGa1-xN composition is chosen for the simulations, which are carried out using the propagation matrix method. The refractive indices of the materials are treated as functions of bandgap, operating wavelength and material composition following Adachi's model. One interesting result from the computation is that the band-reject filter may be converted into a band-pass one by suitably varying the thickness ratio of the unit cell, or by varying the Al mole fraction. The simulated results can be utilised to design VCSEL mirrors for optical transmitters.
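A propagation (transfer) matrix calculation of a quarter-wave Bragg mirror's reflectance at normal incidence can be sketched as below. The refractive indices are placeholder constants, not the wavelength- and composition-dependent Adachi-model values used in the paper.

```python
import numpy as np

def mirror_reflectance(n_hi, n_lo, pairs, wavelength,
                       design_wl=1550e-9, n_in=1.0, n_sub=1.0):
    # Characteristic matrix of one quarter-wave layer probed at `wavelength`.
    def layer(n):
        d = design_wl / (4.0 * n)                  # quarter-wave thickness
        phi = 2.0 * np.pi * n * d / wavelength     # phase thickness
        return np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                         [1j * n * np.sin(phi), np.cos(phi)]])

    m = np.eye(2, dtype=complex)
    for _ in range(pairs):
        m = m @ layer(n_hi) @ layer(n_lo)

    # Reflection coefficient from the total characteristic matrix.
    num = n_in * m[0, 0] + n_in * n_sub * m[0, 1] - m[1, 0] - n_sub * m[1, 1]
    den = n_in * m[0, 0] + n_in * n_sub * m[0, 1] + m[1, 0] + n_sub * m[1, 1]
    return abs(num / den) ** 2

# Hypothetical GaN/AlGaN-like index pair; 30 periods.
R_center = mirror_reflectance(2.45, 2.20, 30, 1550e-9)   # inside the stop band
R_off = mirror_reflectance(2.45, 2.20, 30, 1300e-9)      # outside the stop band
```

Sweeping `wavelength` over a range reproduces the stop-band (band-reject) shape; varying the two thicknesses independently, as the paper does, changes where the phase condition holds and can move or invert the band.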
Noise resistance territorial intensity-based optical flow using inverse confi... (journalBEEI)
This paper presents the use of the inverse confidential technique on the bilateral function with territorial intensity-based optical flow, to prove its effectiveness in noisy environments. In general, an image's motion vectors are computed by the optical flow technique, in which a sequence of images is used to determine the motion. However, the accuracy of the motion vectors is reduced when the image sequence is corrupted by noise. This work shows that the inverse confidential technique on the bilateral function can increase the accuracy of motion vector determination by territorial intensity-based optical flow in noisy environments. We tested with several kinds of non-Gaussian noise on several standard image sequences, analyzing the resulting motion vectors in terms of the error vector magnitude (EVM) and comparing against several noise-resistant variants of the territorial intensity-based optical flow method.
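The EVM evaluation itself is straightforward to compute from an estimated and a reference flow field; the toy fields below are hypothetical.

```python
import numpy as np

def error_vector_magnitude(est, ref):
    # Mean magnitude of the error between estimated and reference flow fields.
    # est, ref: arrays of shape (H, W, 2) holding (u, v) motion components.
    diff = est - ref
    return np.mean(np.sqrt(np.sum(diff ** 2, axis=-1)))

ref = np.zeros((4, 4, 2))
ref[..., 0] = 1.0                       # true flow: one pixel to the right
est = ref.copy()
est[0, 0] = [0.0, 1.0]                  # one wrong vector
evm = error_vector_magnitude(est, ref)
```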
Satellite Image Enhancement Using Dual Tree Complex Wavelet Transform (journalBEEI)
The loss of high frequency components degrades resolution enhancement. In this work, a wavelet-domain image resolution enhancement technique using the Dual Tree Complex Wavelet Transform (DT-CWT) is proposed for satellite images. Input images are first decomposed using the DT-CWT. The inverse DT-CWT is then used to generate a new resolution-enhanced image from the interpolated high-frequency sub-band images and the input low-resolution image. An intermediate stage estimates the high-frequency sub-bands to achieve a sharper image. The technique has been tested on benchmark images from a public database. Peak Signal-to-Noise Ratio (PSNR) and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques.
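The decompose-interpolate-reconstruct recipe can be sketched with a plain one-level Haar transform standing in for the DT-CWT. The nearest-neighbour sub-band interpolation and the use of the low-resolution image itself as the low-pass band follow the general scheme, but the actual paper uses complex wavelets and a refined intermediate estimation stage.

```python
import numpy as np

def haar2(img):
    # One-level 2-D Haar decomposition (image dimensions must be even).
    a = (img[0::2] + img[1::2]) / 2.0
    d = (img[0::2] - img[1::2]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # Exact inverse of haar2.
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((ll.shape[0] * 2, ll.shape[1] * 2))
    out[0::2], out[1::2] = a + d, a - d
    return out

def upsample2(sub):
    # Nearest-neighbour 2x interpolation of a sub-band (kept simple on purpose).
    return np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)

lr = np.add.outer(np.arange(16.0), np.arange(16.0))   # toy low-resolution image
_, lh, hl, hh = haar2(lr)
# Use the LR image itself as the LL band and interpolated detail bands as HF input.
hr = ihaar2(lr, upsample2(lh), upsample2(hl), upsample2(hh))
```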
Image compression using Hybrid wavelet Transform and their Performance Compa... (IJMER)
Images may be worth a thousand words, but they generally occupy much more space on a hard disk, or bandwidth in a transmission system, than their proverbial counterpart. Compressing an image is significantly different from compressing raw binary data. Of course, general purpose compression programs can be used to compress images, but the result is less than optimal, because images have certain statistical properties that can be exploited by encoders specifically designed for them. Also, some of the finer details in the image can be sacrificed to save a little more bandwidth or storage space. Compression is the process of representing information in a compact form, and it is an essential method for creating image files with manageable and transmittable sizes. Data compression schemes can be divided into lossless and lossy compression. In lossless compression, the reconstructed image is exactly the same as the original image. In lossy compression, a high compression ratio is achieved at the cost of some error in the reconstructed image; lossy compression therefore generally provides much higher compression than lossless compression.
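The lossless/lossy distinction can be demonstrated in a few lines: a zlib round-trip restores the data bit-exactly, while coarse quantization introduces bounded error in exchange for a smaller compressed size. The random test image and the quantization step are arbitrary choices.

```python
import numpy as np
import zlib

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image data

# Lossless: a zlib round-trip reproduces the pixels exactly.
packed = zlib.compress(img.tobytes(), level=9)
restored = np.frombuffer(zlib.decompress(packed), dtype=np.uint8).reshape(img.shape)

# Lossy: coarse quantization bounds the error but shrinks the symbol alphabet,
# so the entropy coder has far less to encode.
step = 16
quantized = ((img // step) * step + step // 2).astype(np.uint8)
lossy_packed = zlib.compress(quantized.tobytes(), level=9)
```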
Multi Resolution features of Content Based Image Retrieval (IDES Editor)
Many content based retrieval systems have been proposed to manage and retrieve images on the basis of their content. In this paper we propose Color Histogram, Discrete Wavelet Transform and Complex Wavelet Transform techniques for efficient image retrieval from a large database. The Color Histogram technique is based on exact matching between the histograms of the query image and the database images. The Discrete Wavelet Transform technique retrieves images based on computation of the wavelet coefficients of sub-bands. The Complex Wavelet Transform technique computes real and imaginary parts to extract details from texture. The proposed method is tested on the COREL1000 database, and the retrieval results demonstrate a significant improvement in precision and recall.
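A minimal sketch of the colour-histogram retrieval step, using histogram intersection as the matching score; the bin count, image sizes, and random database are illustrative.

```python
import numpy as np

def color_histogram(img, bins=8):
    # Normalized per-channel histogram, concatenated into one feature vector.
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(feats).astype(float)
    return h / h.sum()

def intersection(h1, h2):
    # Histogram intersection similarity: 1.0 for identical histograms.
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(3)
database = [rng.integers(0, 256, (16, 16, 3)) for _ in range(5)]
query = database[2].copy()             # the query duplicates one database image

scores = [intersection(color_histogram(query), color_histogram(img))
          for img in database]
best = int(np.argmax(scores))
```

Because histograms discard spatial layout, two very different images can share a histogram; that is precisely the weakness the wavelet-based features above are meant to address.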
Spectroscopy, or hyperspectral imaging, consists of the acquisition, analysis, and extraction of spectral information measured on a specific region or object using an airborne or satellite device. Hyperspectral imaging has recently become an active field of research. One way of analysing such data is through clustering; however, due to the high dimensionality of the data and the small distances between the different material signatures, clustering such data is a challenging task. In this paper, we empirically compared five clustering techniques on different hyperspectral data sets: K-means, K-medoids, fuzzy C-means, hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN). Four data sets are used for this purpose: Botswana, Kennedy Space Center, Pavia, and Pavia University. Besides accuracy, we adopted four more similarity measures: the Rand statistic, Jaccard coefficient, Fowlkes-Mallows index, and Hubert index. According to accuracy, we found that fuzzy C-means clustering does better on the Botswana and Pavia data sets, K-means and K-medoids give better results on the Kennedy Space Center data set, and hierarchical clustering is better for Pavia University.
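Two of the pair-counting similarity measures mentioned above, the Rand statistic and the Jaccard coefficient, can be computed directly from pair agreements between two labelings; the toy labelings below are hypothetical.

```python
import numpy as np
from itertools import combinations

def pair_counts(labels_a, labels_b):
    # Count pairs that are together in both, only one, or neither clustering.
    ss = sd = ds = dd = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a and same_b:
            ss += 1
        elif same_a:
            sd += 1
        elif same_b:
            ds += 1
        else:
            dd += 1
    return ss, sd, ds, dd

def rand_index(a, b):
    ss, sd, ds, dd = pair_counts(a, b)
    return (ss + dd) / (ss + sd + ds + dd)

def jaccard_index(a, b):
    ss, sd, ds, _ = pair_counts(a, b)
    return ss / (ss + sd + ds)

truth = [0, 0, 0, 1, 1, 1]
pred = [0, 0, 1, 1, 1, 1]
ri = rand_index(truth, pred)
ji = jaccard_index(truth, pred)
```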
Abstract: Primarily due to progress in super resolution imagery, the methods of segment-based image analysis for generating and updating geographical information are becoming increasingly important. This work presents an image segmentation based on colour features with K-means clustering, divided into two stages. First, the colour separation of the satellite image is enhanced using decorrelation stretching; then the regions are grouped into a set of five classes using the K-means clustering algorithm. The spatial data centred around each pixel is gathered first, at which point two filtering procedures are added to suppress the effect of pseudo-edges. Furthermore, the spatial data weight is constructed and grouped with K-means clustering, and the regularization strength in each region is controlled by the cluster centre value. Experimental results, on both simulated and real datasets, demonstrate that the proposed approach can effectively reduce the pseudo-edges of total variation regularization.
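The K-means grouping step can be sketched on flattened colour vectors. The farthest-point initialization and the two-region toy image are illustrative simplifications; the paper groups into five classes after decorrelation stretching.

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    # Farthest-point initialization, then plain Lloyd iterations.
    centers = [pixels[0].astype(float)]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[np.argmax(d)].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# Toy "image": two flat colour regions plus mild noise.
rng = np.random.default_rng(4)
img = np.zeros((8, 8, 3))
img[:, :4] = [10, 10, 200]
img[:, 4:] = [200, 10, 10]
img += rng.normal(0, 5.0, img.shape)

labels, centers = kmeans(img.reshape(-1, 3), k=2)
seg = labels.reshape(8, 8)
```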
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open access journal, in both online and print versions, that provides rapid (monthly) publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published by a rapid process within 20 days after acceptance, and the peer review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
Particle Swarm Optimization for Nano-Particles Extraction from Supporting Mat...CSCJournals
Metallic and non-metallic nano-particles have attracted much interest concerning their wide applications. Transmission electron microscopy (TEM) is the state of the art method to characterize a nano-particle with respect to size, morphology, structure, or composition. This paper presents an efficient evolutionary computational method, particle swarm optimization (PSO), for automatic segmentation of nano-particles. A threshold-based segmentation technique is applied, where image entropy is attacked as a minimization problem to specify local and global thresholds. We are concerned with reducing wrong characterization of nano-particles due to concentration of liquid solutions or supporting material within the acquired image. The obtained results are compared with manual techniques and with previous researches in this area.
Improved nonlocal means based on pre classification and invariant block matchingIAEME Publication
One of the most popular image denoising methods based on self-similarity is called nonlocal
means (NLM). Though it can achieve remarkable performance, this method has a few shortcomings,
e.g., the computationally expensive calculation of the similarity measure, and the lack of reliable
candidates for some non repetitive patches. In this paper, we propose to improve NLM by integrating
Gaussian blur, clustering, and row image weighted averaging into the NLM framework.
Experimental results show that the proposed technique can perform denoising better than the original
NLM both quantitatively and visually, especially when the noise level is high.
SAR ICE IMAGE CLASSIFICATION USING PARALLELEPIPED CLASSIFIER BASED ON GRAM-SC...cscpconf
Synthetic Aperture Radar (SAR) is a special type of imaging radar that involves advanced technology and complex data processing to obtain detailed images from the lake surface. Lake ice typically reflects more of the radar energy emitted by the sensor than the surrounding area, which makes it easy to distinguish between the water and the ice surface. In this research work, SAR images are used for ice classification based on supervised and unsupervised classification
algorithms. In the pre-processing stage, Hue saturation value (HSV) and Gram–Schmidt spectral sharpening techniques are applied for sharpening and resampling to attain highresolution
pixel size. Based on the performance evaluation metrics it is proved that GramSchmidt spectral sharpening performs better than sharpening the HSV between the boundaries.
In classification stage, Gram–Schmidt spectral technique based sharpened SAR images are used as the input for classifying using parallelepiped and ISO data classifier. The performances of the classifiers are evaluated with overall accuracy and kappa coefficient. From the experimental results, ice from water is classified more accurately in the parallelepiped supervised classification algorithm.
Sar ice image classification using parallelepiped classifier based on gram sc...csandit
Synthetic Aperture Radar (SAR) is a special type of imaging radar that involves advanced
technology and complex data processing to obtain detailed images from the lake surface. Lake
ice typically reflects more of the radar energy emitted by the sensor than the surrounding area,
which makes it easy to distinguish between the water and the ice surface. In this research work,
SAR images are used for ice classification based on supervised and unsupervised classification
algorithms. In the pre-processing stage, Hue saturation value (HSV) and Gram–Schmidt
spectral sharpening techniques are applied for sharpening and resampling to attain highresolution
pixel size. Based on the performance evaluation metrics it is proved that Gram-
Schmidt spectral sharpening performs better than sharpening the HSV between the boundaries.
In classification stage, Gram–Schmidt spectral technique based sharpened SAR images are used
as the input for classifying using parallelepiped and ISO data classifier. The performances of
the classifiers are evaluated with overall accuracy and kappa coefficient. From the
experimental results, ice from water is classified more accurately in the parallelepiped
supervised classification algorithm.
Spectral Density Oriented Feature Coding For Pattern Recognition ApplicationIJERDJOURNAL
ABSTRACT:- The significant of multi spectral band resolution is explored towards selection of feature coefficients based on its energy density. Toward the feature representiaon in transformed domain, multi wavelet transformations were used for finer spectral representation. However, due to a large feature count these features are not optimal under low resource computing system. In the recognition units, running with low resources a new coding approach of feature selection, considering the band spectral density is developed. The effective selection of feature element, based on its spectral density achieve two objective of pattern recognition, the feature coefficient representiaon is minimized, hence leading to lower resource requirement, and dominant feature representation, resulting in higher retrieval performance.
IMPROVEMENT OF BM3D ALGORITHM AND EMPLOYMENT TO SATELLITE AND CFA IMAGES DENO...ijistjournal
This paper proposes a new procedure in order to improve the performance of block matching and 3-D filtering (BM3D) image denoising algorithm. It is demonstrated that it is possible to achieve a better performance than that of BM3D algorithm in a variety of noise levels. This method changes BM3D algorithm parameter values according to noise level, removes prefiltering, which is used in high noise level; therefore Peak Signal-to-Noise Ratio (PSNR) and visual quality get improved, and BM3D complexities and processing time are reduced. This improved BM3D algorithm is extended and used to denoise satellite and color filter array (CFA) images. Output results show that the performance has upgraded in comparison with current methods of denoising satellite and CFA images. In this regard this algorithm is compared with Adaptive PCA algorithm, that has led to superior performance for denoising CFA images, on the subject of PSNR and visual quality. Also the processing time has decreased significantly.
2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr...IOSR Journals
Abstract: Due to higher processing power to cost ratio, it is now possible to replace the manual detection methods used in the IC (Integrated Circuit) industry by Image-processing based automated methods, to detect a broken pin of an IC connected on a PCB during manufacturing, which will make the process faster, easier and cheaper. In this paper an accurate and fast automatic detection method is used where the top view camera shots of PCBs are processed using advanced methods of 2-dimensional discrete wavelet pre-processing before applying edge-detection. Comparison with conventional edge detection methods such as Sobel, Prewitt and Canny edge detection without 2-D DWT is also performed. Keywords :2-dimensional wavelets, Edge detection, Machine vision, Image processing, Canny.
Multispectral images are used for space Arial application, target detection and remote sensing application. MS images are very rich in spectral resolution but at a cost of spatial resolution. We propose a new method to increase a spatial resolution MS images. For spatial resolution enhancement of MS images we need to employ a super-resolution technique which uses a Principal Component Analysis (PCA) based approach by learning an edge details from database. Experiments have been carried out on both real multispectral (MS) data and MS data. This experiment is done with the usefulness for hyper spectral (HS) data as a future work.
Survey on Single image Super Resolution TechniquesIOSR Journals
Super-resolution is the process of recovering a high-resolution image from multiple lowresolutionimages
of the same scene. The key objective of super-resolution (SR) imaging is to reconstruct a
higher-resolution image based on a set of images, acquired from the same scene and denoted as ‘lowresolution’
images, to overcome the limitation and/or ill-posed conditions of the image acquisition process for
facilitating better content visualization and scene recognition. In this paper, we provide a comprehensive review
of existing super-resolution techniques and highlight the future research challenges. This includes the
formulation of an observation model and coverage of the dominant algorithm – Iterative back projection.We
critique these methods and identify areas which promise performance improvements. In this paper, future
directions for super-resolution algorithms are discussed. Finally results of available methods are given.
Survey on Single image Super Resolution TechniquesIOSR Journals
Abstract:Super-resolution is the process of recovering a high-resolution image from multiple low-resolutionimages of the same scene. The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image based on a set of images, acquired from the same scene and denoted as ‘low-resolution’ images, to overcome the limitation and/or ill-posed conditions of the image acquisition process for facilitating better content visualization and scene recognition. In this paper, we provide a comprehensive review of existing super-resolution techniques and highlight the future research challenges. This includes the formulation of an observation model and coverage of the dominant algorithm – Iterative back projection.We critique these methods and identify areas which promise performance improvements. In this paper, future directions for super-resolution algorithms are discussed. Finally results of available methods are given. Keywords: Super-resolution, POCS, IBP, Canny Edge Detection
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Density Driven Image Coding for Tumor Detection in mri ImageIOSRjournaljce
The significant of multi spectral band resolution is explored towards selection of feature coefficients based on its energy density. Toward the feature representiaon in transformed domain, multi wavelet transformations were used for finer spectral representation. However, due to a large feature count these features are not optimal under low resource computing system. In the recognition units, running with low resources a new coding approach of feature selection, considering the band spectral density is developed. The effective selection of feature element, based on its spectral density achieve two objective of pattern recognition, the feature coefficient representiaon is minimized, hence leading to lower resource requirement, and dominant feature representation, resulting in higher retrieval performance.
Similar to Detection and Classification in Hyperspectral Images using Rate Distortion and Optimization techniques using JPEG2000 (20)
Hydraulic fracturing stimulation designs are moving towards tighter spaced clusters, longer stage length, and more proppant volumes. However, effectively evaluating the hydraulic fracturing stimulation efficiency remains a challenge. Distributed fiber optic sensing, which includes DAS and DTS, can continuously monitor the hydraulic fracturing stimulation downhole and be compared with other monitoring technology such as microseismic.
The DAS and DTS data, when integrated with the microseismic, highlight processes relevant to the completion design and allow for a better understanding and interpretation of each dataset.
This paper outlines a workflow to improve processing and interpretation of DAS and DTS data. In addition,
an estimate of the slurry distribution can be made. These methods will be demonstrated for a horizontal
Wolfcamp well in the Permian Basin. Here we compare key aspects of the microseismic, DAS, and DTS
results in several fracture stages to understand the downhole geomechanical processes. In order to interpret
the DTS data a thermal model is developed (using DTS data) to simulate the temperature behavior after
pumping has ceased. A slurry distribution is obtained by matching the simulated temperature with the
measured temperature from DTS. In addition, the DAS data signal is studied in the frequency domain and
the dominant frequencies are identified that are mostly related to fluid flow and to reduce the background
noise. This time frequency analysis enhances the ability to monitor and optimize well treatments.
After reducing the background noise, the acoustic intensity is correlated to the slurry distribution. The fluid
distribution data from DAS and DTS are compared with the microseismic and near field strain to better
understand the completion processes. We utilized fiber optic microseismic to better understand and
compare it to conventional microseismic.
Finally, we highlight the dynamics of strain and microseismic signature as fluid moves from an offset well
completion into the prior stimulated fiber well to better understand the reservoir and far field effects of the
completion.
Interpretation Special-Section: Insights into digital oilfield data using ar...Pioneer Natural Resources
Invitation to contribute to our special issue titled “Insights to the Digital Oil Field data using Artificial Intelligence & Big Data Analytics”. The scope of this special issue is to further bridge the gap between the geophysical interpretation and well planning (drilling, completions and production).
The technology communities both in industry and academia are utilizing advanced signal processing / machine learning algorithms, high compute / Big Data architectures at scale to develop these practical solutions. Your contributions to advanced algorithms in signal processing/machine learning, subsurface imaging and interpretation will be a good fit for this issue.
We hope you will find this participation both rewarding and worthwhile for our industry and to the Interpretation community in general.
Dear Colleagues,
Call for papers for another Machine Learning special issue of SEG/AAPG Journal of Interpretation focusing on the Seismic Data Analysis has been announced.
We look forward to your contribution.
Vikram Jayaram
Special Section Editor
Interpretation
An approach to offer management: maximizing sales with fare products and anci...Pioneer Natural Resources
With the growth in ancillary sales, an area of increasing importance for airlines is the concept of offer management, which entails the creation of dynamic, custom, personalized offers consisting of a flight itinerary and ancillary products offered by an airline. This practice-oriented, overview paper provides an end-to-end, future-oriented framework for determining the composition of optimal base fare and ancillary bundles by customer trip purpose segment followed by 1:1 personalization to maximize total sales. Our focus in this paper is primarily on the proposed offer management framework and its sub-components.
During the past decade, the size of 3D seismic data volumes and the number of seismic attributes have increased
to the extent that it is difficult, if not impossible, for interpreters to examine every seismic line and time
slice. To address this problem, several seismic facies classification algorithms including k-means, self-organizing
maps, generative topographic mapping, support vector machines, Gaussian mixture models, and artificial neural
networks have been successfully used to extract features of geologic interest from multiple volumes. Although
well documented in the literature, the terminology and complexity of these algorithms may bewilder the average
seismic interpreter, and few papers have applied these competing methods to the same data volume. We have
reviewed six commonly used algorithms and applied them to a single 3D seismic data volume acquired over the
Canterbury Basin, offshore New Zealand, where one of the main objectives was to differentiate the architectural
elements of a turbidite system. Not surprisingly, the most important parameter in this analysis was the choice of
the correct input attributes, which in turn depended on careful pattern recognition by the interpreter. We found
that supervised learning methods provided accurate estimates of the desired seismic facies, whereas unsupervised
learning methods also highlighted features that might otherwise be overlooked.
New exploration challenges and current research demands
3D gravity modeling with 3D geology interpretations. In
the near future, multi-parameter and multi-dimensional
interpretations represent the observed and expected in situ
geology, geophysical, and petro-physical data that will be
used for join multi-parameter, multi-dimensional
inversions. We present an initial 3D gravity model of
Osage County in northeastern Oklahoma, where there is a
greater than 40 mGal, 100 km diameter semi-circular
gravity anomaly that cannot be effectively removed by
traditional gravity processing techniques.
OPTIMIZED RATE ALLOCATION OF HYPERSPECTRAL IMAGES IN COMPRESSED DOMAIN USING ...Pioneer Natural Resources
This paper studies the application of bit allocation using JPEG2000 for compressing multi-dimensional remote sensing data. Past experiments have shown that the Karhunen- Lo
`
e
ve transform (KLT) along with rate distortion optimal(RDO) bit allocation produces good compression perfor-mance. However, this model has the unavoidable disadvan-tage of paying a price in terms of implementation complex-ity. In this research we address this complexity problem byusing the discrete wavelet transform (DWT) instead of theKLT as the decorrelator. Further, we have incorporated amixed model (MM) to find the rate distortion curves instead of the prior method of using experimental rate distortioncurves for RDO bit allocation. We compared our results tothe traditional high bit rate quantizer bit allocation modelbased on the logarithm of variances among the bands. Our comparisons show that by using the MM-RDO bit rate al-location method result in lower mean squared error (MSE)compared to the traditional bit allocation scheme. Our ap- proach also has an additional advantage of using DWT asa computationally efficient decorrelator when compared tothe KLT
Receiver deghosting method to mitigate F-K transform artifacts: A non-windo...Pioneer Natural Resources
In this study, we implemented and tested a new processing- based broadband solution for mitigating F-K transform arti- facts for receiver deghosting in a marine environment. The F- K transform has traditionally been used for flat cable (constant depth) deghosting and often times tailored to meet the slanted (variable depth) cable criteria. Recently, the usage of τ − p do- main deterministic deghost operator has been more prominent with slant cable deghosting. Irrespective of the type of trans- form or deghost operator used, a windowed process is essential due to the time and offset varying character of the ghost. This use of a windowed process usually results in poor reconstruc- tion of deghosted signals and artifacts beyond the control of the transform(s) itself. The windowing in time and offset produces edgy effects which can be clearly seen in the difference plots. Our method, using a non-windowing approach, demonstrates a better representation of the deghosted signals without the arti- facts caused by the boundary of the windows. This method has also been well-tested for both the flat and slant cable receiver deghosting workflows in synthetic and field data examples.
Distance Metric Based Multi-Attribute Seismic Facies Classification to Identi...Pioneer Natural Resources
Conventional reservoirs benefit from a long scientific history that correlates successful plays to seismic measurements through depositional, tectonic, and digenetic models. Unconventional reservoirs are less well understood, however benefit from significantly denser well control. Thus, allowing us to establish statistical rather than model-based correlations between seismic data, geology, and successful completion strategies. One of the more commonly encountered correlation techniques is based on computer assisted pattern recognition. The pattern recognition techniques have found their niche in a plethora of applications ranging from flagging suspicious credit card purchase patterns to rewarding repeating online buying patterns. Classification of a given seismic response as having a “good” or “bad” pattern requires a “distance metric”. Distance metric “learning” uses past experiences (well performance) as training data to develop a distance metric. Alternative distance metrics have demonstrated significant value in the identification and classification of repeated or anomalous behaviors in public health, security, and marketing. In this paper we examine the value of three of these alternative distance metrics of 3D seismic attributes to the identification of sweet spots in a Barnett Shale play.
We illustrate unsupervised and supervised learning algorithms that accurately classify the lithological variations in the 3D seismic data. We demonstrate blind source separation techniques such as the principal components (PCA) and noise adjusted principal
components in conjunction with Kohonen Self organizing maps to produce superior unsupervised classification maps.
Further, we utilize the PCA space training in Maximum likelihood (ML) supervised classification. Results demonstrate that the ML supervised classification produces an improved classification of the facies in the 3D seismic dataset from the Anadarko basin in central Oklahoma.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
2. DESCRIPTION OF DATA
The Hyperion Hyperspectral Imager instrument provides a high resolution hyperspectral image capable of resolving 220 spectral bands (from 0.4 to 2.5 μm) with a 30 meter pixel resolution. The instrument can image a 7.5 km by 100 km land area per image and provide detailed spectral mapping across all 220 channels with high radiometric accuracy. The Hyperion instrument, along with the Advanced Land Imager (ALI) instrument (which provides accurate radiometric calibration on orbit using a precisely controlled amount of solar irradiance), is used as an integrated camera system on the Earth Observing-1 (EO-1) satellite [10].

The analysis of hyperspectral imagery by means of such spectrometers (Hyperion) exploits the fact that each material radiates a different amount of electromagnetic energy throughout all the spectra. This unique characteristic of the material is commonly known as a spectral signature, which can be read using such airborne or spaceborne detectors.
Fig. 1. (1) Published 1:100 000 scale geology [9] of the Mount Fitton test site. (2) Hyperion color composite image showing color-coded training fields. (3) Classified image using the minimum Euclidean distance algorithm.

As we are interested in evaluating the performance of background classification, we notice that there are two major applications that use the spectral signature to separate materials (in our experiments they are minerals): classification and target detection. Classification is focused on assigning each pixel in the hyperspectral image to land cover classes or themes. The central idea of target detection is to search pixels for the presence of a particular material predetermined as a target. Notice that the classification is performed by means of a set of training fields, which are circled in white as shown in Figure 1.
data used for experiments were collected by the Hy
perion over the semiarid Mount Fitton area which is loc ated
The
in the Northern Flinders Ranges of South Australia. Cen
tered at -29°55' S, 139°25' E,
and about 700 kms NW of
Ad elaide . This Hyperion image consists of 194 atmo sphe r
ically corrected contiguous spectral bands. Each band has
dimensions of 6702 x 256 pixels.
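For concreteness (this arithmetic example is ours, not from the paper), the storage budget implied by a rate expressed in bits per pixel per band (bpppb) for a cube of these dimensions can be computed as:

```python
# Illustrative arithmetic (not from the paper): the storage budget
# implied by a rate in bits per pixel per band (bpppb) for an image
# with the dimensions given above.

ROWS, COLS, BANDS = 6702, 256, 194  # Mount Fitton Hyperion cube

def compressed_bytes(bpppb: float) -> float:
    """Total compressed size in bytes at the given bpppb rate."""
    return ROWS * COLS * BANDS * bpppb / 8

# At 0.0625 bpppb the whole cube needs about 2.5 MiB, versus roughly
# 635 MiB uncompressed at 16 bits per sample.
print(round(compressed_bytes(0.0625) / 2**20, 2))  # -> 2.48
```

This is why the very low rates studied below (0.0375 to 2 bpppb) are attractive for such data volumes.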
3. THE RDO APPROACH TO BIT RATE ALLOCATION

In the RDO method ([6], [1]) designed to distribute bit rates optimally among the bands, we use an approach similar to the PCRD optimization used in JPEG2000 for the selection of the optimal truncation points for the bit streams of the code blocks [5]. We use the same concepts when we go from single images/bands to multiple images/bands.

In this section we briefly describe the PCRD approach. The JPEG2000 Part 1 baseline, or simply JPEG2000 [5], brings a new paradigm to image compression standards. It provides, among other advantages, superior low bit-rate performance, bit rate scalability, and progressive transmission by quality or resolution.

Quality scalability is achieved by dividing the wavelet transformed image into codeblocks. After each codeblock is encoded, a post-processing operation determines where each code-block's embedded stream should be truncated in order to achieve a pre-defined bit-rate or distortion bound for the whole image. This bitstream rescheduling module is referred to as the Tier 2. It establishes a multi-layered representation of the final bitstream, guaranteeing an optimal performance at several bitrates or resolutions.

The Tier 2 component optimizes the truncation process and tries to reach the desired bit-rate while minimizing the introduced distortion, utilizing Lagrangian rate allocation principles. The following procedure is known as PCRD optimization [5].

Assuming that the overall distortion metric is additive, i.e.,

    D = \sum_{i=1}^{N} D_i(n_i),    (1)

it is desired to find the optimal selection of bit stream truncation points n_i such that the overall distortion metric is minimized subject to a constraint

    R = \sum_{i=1}^{N} R_i(n_i) \le R^{max}.    (2)

To solve the problem, the Lagrange multiplier method is used. It leads to the unconstrained optimization problem

    Q = D(\lambda) + \lambda \cdot R(\lambda) \to \min.    (3)

The resulting objective function Q depends on N variables n_i, but can be represented as a sum of N terms

    Q = \sum_{i=1}^{N} Q_i, \qquad Q_i = D_i(n_i) + \lambda \cdot R_i(n_i).    (4)

Therefore, to minimize the sum, we must find, for each code-block i, a truncation point n_i^{\lambda} that minimizes the corresponding term Q_i. This fact is a particular case of the general result proven in [4].

The determination of the N optimal truncation points for any given \lambda is performed efficiently based on the experimental information about the rate distortion dependence collected during generation of each code-block's embedded bit stream. Basically, the algorithm finds the truncation points where each rate distortion slope is closest to the fixed \lambda.

Table 1. Results obtained using the high bit rate quantizer approach, 0.0625 bpppb. Overall accuracy 93.0415 %

MINERALS        Hits     Misses   False alarm
Al-Poor Mica    97.33     2.67     6.01
Vegetation      98.99     1.01     2.26
Tremolite       95.41     4.59    13.63
Chlorite        87.25    12.75    14.11
Talc            99.08     0.92     9.11
Dolomite        83.02    16.98     3.12
Kaolinite       84.03    15.97    12.23
Al-Rich Mica    97.14     2.86     3.92
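As a hedged illustration (our sketch, not the authors' implementation), the per-code-block step of PCRD optimization for a fixed λ reduces to an independent minimization of D_i(n) + λ·R_i(n) over each block's candidate truncation points:

```python
# Hypothetical sketch of the per-code-block PCRD step: for a fixed
# Lagrange multiplier lam, each code-block independently selects the
# truncation point n that minimizes Q_i = D_i(n) + lam * R_i(n).

def best_truncation(points, lam):
    """points: list of (rate, distortion) pairs, one per candidate
    truncation point of a single code-block."""
    return min(range(len(points)),
               key=lambda n: points[n][1] + lam * points[n][0])

def allocate(blocks, lam):
    """Run the independent minimization for every code-block."""
    return [best_truncation(pts, lam) for pts in blocks]

# Toy (rate, distortion) data for two code-blocks:
blocks = [
    [(0.0, 100.0), (1.0, 40.0), (2.0, 15.0), (3.0, 5.0)],
    [(0.0, 80.0), (1.0, 55.0), (2.0, 50.0), (3.0, 45.0)],
]
print(allocate(blocks, lam=20.0))  # -> [2, 1]
```

In the full PCRD procedure, λ itself is then adjusted (for example by bisection) until the total rate of the selected truncation points meets the target bit-rate.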
4. EXPERIMENTS AND RESULTS

This section shows the evaluation of the two bit rate allocations based on their MSE, as shown in Figure 2, and background classifications, as shown in Tables 1, 2 and Figure 3. The classification is performed using the Minimum Euclidean Distance algorithm.

The traditional approach (based on the high bit rate quantizer model) is used without the standard constraint which forces bit rates to be integers. Instead, any non-negative bit rate in an acceptable range can be achieved on each slice.

For the RDO approach we generate experimental data for a set of bit rates in an acceptable range (0.0375 bpppb to 2 bpppb) and compute the corresponding distortion (MSE) at each point using the JPEG2000 coder and decoder. Thus, we acquire pairs (R_z, MSE_z(R_z)) for each band z. Once the rate distortion curves are determined, the optimal bit rates are found by locating the points on the rate distortion curves with the same slope such that the corresponding average bit rate equals the desired target rate (similar to the PCRD approach described in Section 3). Thus, for the given average bit rate, RDO provides the bit rate allocation that minimizes the MSE. After allocating bits based on the two strategies, we perform the classification.
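A minimal sketch of this equal-slope search (our illustration; the band curves below are synthetic stand-ins for the measured (R_z, MSE_z(R_z)) pairs): bisect on the common slope λ until the mean of the per-band rates at the equal-slope points reaches the target.

```python
# Hypothetical sketch of the equal-slope RDO search described above.
# Each band's rate-distortion curve is modeled as D(R) = a * exp(-b*R);
# the point with slope -lam satisfies R = ln(a*b/lam) / b.

import math

def rate_at_slope(curve, lam):
    a, b = curve
    return max(0.0, math.log(a * b / lam) / b)

def allocate_rates(curves, target_avg, lo=1e-6, hi=1e6, iters=200):
    """Bisect (geometrically) on the common slope lam until the
    average of the equal-slope rates matches target_avg."""
    for _ in range(iters):
        lam = math.sqrt(lo * hi)
        avg = sum(rate_at_slope(c, lam) for c in curves) / len(curves)
        if avg > target_avg:
            lo = lam  # rates too high: steepen the common slope
        else:
            hi = lam
    return [rate_at_slope(c, lam) for c in curves]

# Two synthetic "bands": D(R) = 100*exp(-2R) and D(R) = 50*exp(-R).
rates = allocate_rates([(100.0, 2.0), (50.0, 1.0)], target_avg=1.0)
print(round(sum(rates) / len(rates), 4))  # -> 1.0
```

In the actual experiments the curves are empirical (JPEG2000 encode/decode at sampled rates), so the slope matching would use measured points rather than a closed-form model.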
Under the supervised classification task we employ training data in our experiments. The classification algorithm assigns each pixel in the image to one of the classes (determined by the spectral signature of the training fields) using the minimum Euclidean distance (between the reference spectra and the image spectra) as a threshold criterion. The training fields were created by selecting eight regions of interest [9] as shown in Figure 1.
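The minimum-Euclidean-distance assignment described above can be sketched as follows (a toy illustration with made-up spectra, not the paper's data):

```python
# Hedged sketch of minimum Euclidean distance classification: assign a
# pixel spectrum to the training class whose mean spectrum is nearest.

import math

def classify(pixel, class_means):
    """pixel: spectrum (list of band values); class_means: mapping of
    class name -> mean training spectrum. Returns the nearest class."""
    def dist(mean):
        return math.sqrt(sum((p - m) ** 2 for p, m in zip(pixel, mean)))
    return min(class_means, key=lambda name: dist(class_means[name]))

# Made-up 3-band reference spectra for two of the mapped classes:
means = {"Talc": [0.8, 0.6, 0.2], "Kaolinite": [0.3, 0.7, 0.5]}
print(classify([0.75, 0.55, 0.25], means))  # -> Talc
```

The experiments additionally use the distance as a threshold criterion for in-class membership; that step is omitted here for brevity.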
Classification was performed using the Minimum Distance classifier, wherein the algorithm classifies pixels using a training class. A pixel is considered in-class if the Euclidean distance from the pixel vector to the nearest class mean vector is the smallest.

Table 2. Results obtained using the RDO approach, 0.0625 bpppb. Overall accuracy 93.4861 %

MINERALS        Hits     Misses   False alarm
Al-Poor Mica    97.39     2.61     6.64
Vegetation      97.56     2.44     1.40
Tremolite       91.41     8.59    14.94
Chlorite        90.00    10.00    10.71
Talc            96.28     3.72     1.15
Dolomite        82.63    17.37     3.46
Kaolinite       89.95    10.05    11.01
Al-Rich Mica    97.37     2.63     5.85
Figure 2 compares the MSE results for the two rate allocation methods. Notice that at lower bit rates the difference in the MSEs keeps increasing, with RDO outperforming the traditional approach.

Figure 3 compares the classification performance of the two rate allocation methods. Observe that at lower bit rates RDO shows comparatively better classification performance than the traditional approach.

Table 1 shows the Target Hits, Misses and False Alarms for the various minerals detected in the region at 0.0625 bpppb using the Log of Variances rate allocation method. Similarly, Table 2 shows the Target Hits, Misses and False Alarms for the same minerals detected in the region at 0.0625 bpppb using the Rate Distortion Optimal technique. Notice that the overall accuracy of correctly classified pixels using RDO is better than with the traditional rate allocation strategy.
Fig. 2. MSE comparison of RDO vs. high bit rate quantization approach (rate in bpppb on a log scale).

Fig. 3. Detection performance of RDO vs. high bit rate quantization approach (rate in bpppb on a log scale).

5. CONCLUSION

The results of this paper show that at higher bit rates both rate allocation strategies produce similar detection results, while at lower bit rates the Rate Distortion Optimal approach has better detection performance. It is also observed that the RDO approach has the added advantage of lower MSE over the traditional high bit rate quantizer model approach. This confirms the results from [6] that the high bit rate model approach is not adequate for assigning bit allocation at very low bit rates. The RDO approach allows for the use of the more adequate model, with the unavoidable disadvantage of paying a price in terms of implementation complexity.

Acknowledgment

We would like to thank Mary Williams (EOIR) for performing atmospheric corrections on our data and Raed Aldouri (PACES) for providing us ENVI.

6. REFERENCES

[1] Olga M. Kosheleva, Bryan E. Usevitch, Sergio D. Cabrera, and Edward Vidal, Jr., "Rate distortion optimal bit allocation methods for volumetric data using JPEG2000," submitted to IEEE Transactions on Image Processing, January 2004.

[2] "ITU-T Recommendation T.801 | ISO/IEC 15444-2, Information technology - JPEG2000 image coding system - Part 2: Extensions", 2000.

[3] M. D. Pal, C. M. Brislawn, and S. P. Brumby, "Feature Extraction from Hyperspectral Images Compressed using the JPEG-2000 Standard", Proceedings of the Fifth IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI '02), Santa Fe, NM, pp. 168-172, April 2002.

[4] H. Everett III, "Generalized Lagrange multiplier method for solving problems of optimum allocation of resources", Operations Research, vol. 11, pp. 399-417, 1963.

[5] D. Taubman and M. Marcellin, "JPEG2000 Image Compression Fundamentals, Standards and Practice", Boston, Dordrecht, London: Kluwer, 2002.

[6] Olga M. Kosheleva, "Task-Specific Metrics and Optimized Rate Allocation Applied to Part 2 of JPEG2000 and 3-D Meteorological Data", Ph.D. Dissertation, University of Texas at El Paso, August 2003.

[7] N. S. Jayant and P. Noll, "Digital Coding of Waveforms", Upper Saddle River: Prentice Hall, 1984.

[8] A. Gersho and R. M. Gray, "Vector Quantization and Signal Compression", Boston: Kluwer, 1992.

[9] T. J. Cudahy, R. Hewson, J. F. Huntington, M. A. Quigley, and P. S. Barry, "The performance of the satellite-borne Hyperion Hyperspectral VNIR-SWIR imaging system for mineral mapping at Mount Fitton, South Australia," Proceedings of the IEEE International Geoscience and Remote Sensing Symposium IGARSS 2001, Sydney, July 2001.

[10] http://eo1.gsfc.nasa.gov/