Thresholding operators have been used successfully for denoising signals, mostly in the wavelet domain.
These operators transform a noisy coefficient into a denoised coefficient with a mapping that depends on
signal statistics and the value of the noisy coefficient itself. This paper demonstrates that a polynomial
threshold mapping can be used for enhanced denoising of Principal Component Analysis (PCA) transform
coefficients. In particular, two polynomial threshold operators are used here to map the coefficients
obtained with the popular local pixel grouping method (LPG-PCA), which eventually improves the
denoising power of LPG-PCA. The method also reduces the computational burden of LPG-PCA by eliminating
the need for a second iteration in most cases. Quality metrics and visual assessment show the improvement.
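As a rough illustration of the idea (not the exact operators or signal statistics used in the paper), the sketch below applies a simple odd polynomial shrinkage to the PCA-domain coefficients of noisy patches. The patch dimension, the threshold rule t = 2*sigma and the particular polynomial form are assumptions made for the example only.

```python
import numpy as np

def polynomial_threshold(c, t, k=3):
    """Odd polynomial shrinkage: coefficients well below the threshold t are
    attenuated, large coefficients are left almost untouched.
    (Illustrative form only; the paper's operators are defined differently.)"""
    a = np.abs(c)
    gain = (a / (a + t)) ** (k - 1)           # smooth 0..1 attenuation factor
    return c * gain

def pca_denoise_patches(patches, noise_sigma):
    """Denoise a stack of vectorised patches (n_patches x dim) in the PCA domain."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / len(patches)
    eigvals, eigvecs = np.linalg.eigh(cov)    # PCA basis of the noisy patches
    coeffs = centered @ eigvecs               # PCA transform coefficients
    t = 2.0 * noise_sigma                     # assumed threshold rule
    return polynomial_threshold(coeffs, t) @ eigvecs.T + mean

# toy usage: 500 random 6x6 patches with additive Gaussian noise
rng = np.random.default_rng(0)
clean = rng.normal(size=(500, 36)) @ rng.normal(size=(36, 36)) * 0.1
noisy = clean + rng.normal(scale=0.5, size=clean.shape)
denoised = pca_denoise_patches(noisy, noise_sigma=0.5)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```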
Fractal Image Compression of Satellite Color Imageries Using Variable Size of...CSCJournals
Fractal image compression of the color standard Lena image and of satellite imageries has been carried out with the variable-size range block method. The image is partitioned by considering maximum and minimum range block sizes, and the RGB color image is transformed into YUV form. Affine transformation and entropy coding are applied to achieve fractal compression. A Matlab simulation has been carried out for three different cases of variable range block sizes, and the image is reconstructed using iterated functions and inverse transforms. The results indicate that both the color Lena and the satellite imageries with R max = 16 and R min = 8 show a higher compression ratio (CR) and good peak signal-to-noise ratio (PSNR): for the color standard Lena image the achievable CR is ~13.9 with PSNR ~25.9 dB, for the rural satellite image CR ~16 with PSNR ~23 dB, and for the urban satellite image CR ~16.4 with PSNR ~16.5 dB. The analysis demonstrates that the fractal compression scheme with the variable range block method, applied to both color and gray-scale Lena and satellite imageries, yields higher CR and PSNR values than a fixed range block size of 4 with 4 iterations. The results are presented and discussed in the paper.
IMPROVEMENT OF BM3D ALGORITHM AND EMPLOYMENT TO SATELLITE AND CFA IMAGES DENO...ijistjournal
This paper proposes a new procedure for improving the performance of the block matching and 3-D filtering (BM3D) image denoising algorithm. It is demonstrated that a better performance than that of the original BM3D algorithm can be achieved across a variety of noise levels. The method changes BM3D parameter values according to the noise level and removes the prefiltering used at high noise levels; as a result, peak signal-to-noise ratio (PSNR) and visual quality improve while BM3D complexity and processing time are reduced. The improved BM3D algorithm is then extended to denoise satellite and color filter array (CFA) images. Results show improved performance compared with current methods for denoising satellite and CFA images. In this regard the algorithm is compared, in terms of PSNR and visual quality, with the adaptive PCA algorithm, which had led to superior performance for denoising CFA images; the processing time also decreases significantly.
In this paper, we analyze and compare the performance of fusion methods based on four different
transforms: i) wavelet transform, ii) curvelet transform, iii) contourlet transform and iv) nonsubsampled
contourlet transform. Fusion framework and scheme are explained in detail, and two different sets of
images are used in our experiments. Furthermore, eight different performance metrics are adopted to
comparatively analyze the fusion results. The comparison results show that the nonsubsampled contourlet
transform method performs better than the other three methods, both spatially and spectrally. We also
observed from additional experiments that the decomposition level of 3 offered the best fusion performance,
and decomposition levels beyond level-3 did not significantly improve the fusion results.
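For readers who want to experiment with the simplest of the four transforms, the sketch below fuses two co-registered grayscale images in the wavelet domain with PyWavelets, averaging approximation coefficients and keeping the maximum-magnitude detail coefficient at each location. The fusion rules, the 'db2' wavelet and the 3-level decomposition are illustrative assumptions, not the exact scheme compared in the paper.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    """Fuse two co-registered grayscale images of identical size."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]            # average the approximation band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        # keep the detail coefficient with the larger magnitude (stronger edge)
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

# toy usage with two random images standing in for the source pair
rng = np.random.default_rng(1)
a = rng.random((128, 128))
b = rng.random((128, 128))
print(wavelet_fuse(a, b).shape)
```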
Local Binary Fitting Segmentation by Cooperative Quantum Particle OptimizationTELKOMNIKA JOURNAL
Recently, sophisticated segmentation techniques such as the level set method, which uses numerical schemes to evolve a curve by solving linear or nonlinear elliptic equations and thereby partition the image, have become more popular and effective. In the Local Binary Fitting (LBF) algorithm, a simple contour is initialized in an image and the steepest-descent algorithm is then employed to evolve it so as to minimize the fitting energy functional. Consequently, an initial contour position that leads to good final performance is difficult or impossible to choose. To overcome this drawback, this work treats the energy fitting problem as a meta-heuristic optimization problem and introduces a variant of particle swarm optimization (PSO) into the inner optimization process. Experimental segmentation results on medical images show that the proposed method is effective on both simple and complex medical images under adequate stochastic effects, while also achieving good accuracy and high efficiency.
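The abstract does not give the update rules of the PSO variant, but the standard global-best PSO below, minimizing an arbitrary energy function, shows the kind of inner optimizer that can replace steepest descent when the energy landscape has poor local minima. The inertia and acceleration constants are common textbook defaults, assumed here for illustration.

```python
import numpy as np

def pso_minimize(energy, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Standard global-best particle swarm optimisation of `energy`."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(energy, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(energy, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

# toy usage: minimise a shifted quadratic "energy"
best, best_e = pso_minimize(lambda z: float(np.sum((z - 1.5) ** 2)), dim=4)
print(best.round(3), best_e)
```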
A PROGRESSIVE MESH METHOD FOR PHYSICAL SIMULATIONS USING LATTICE BOLTZMANN ME...ijdpsjournal
In this paper, a new progressive mesh algorithm is introduced to perform fast physical simulations with a lattice Boltzmann method (LBM) on a single-node multi-GPU architecture. The algorithm meshes the simulation domain automatically according to the propagation of the fluids, and it can also be useful for several other types of physical simulations. Here we combine the algorithm with a multiphase and multicomponent lattice Boltzmann model (MPMC-LBM), because this model can perform various types of simulations on complex geometries. Combined with the massive parallelism of GPUs [5], the algorithm obtains very good performance compared with the static-mesh method used in the literature. Several simulations are shown in order to evaluate the algorithm.
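To make the underlying solver concrete, here is a minimal CPU NumPy sketch of one D2Q9 BGK lattice Boltzmann step with periodic streaming. It is a single-phase toy, not the progressive-mesh multi-GPU MPMC-LBM described above; the relaxation time is an assumed value and the lattice velocities and weights are the standard textbook ones.

```python
import numpy as np

# D2Q9 lattice velocities and weights (standard values)
E = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
TAU = 0.6                                   # BGK relaxation time (assumed)

def equilibrium(rho, ux, uy):
    """Equilibrium distributions for the D2Q9 lattice."""
    cu = np.einsum('qi,ixy->qxy', E, np.stack([ux, uy]))
    usq = ux**2 + uy**2
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f):
    """One collide-and-stream step with periodic boundaries."""
    rho = f.sum(axis=0)
    ux = np.einsum('q,qxy->xy', E[:, 0], f) / rho
    uy = np.einsum('q,qxy->xy', E[:, 1], f) / rho
    f += (equilibrium(rho, ux, uy) - f) / TAU          # BGK collision
    for q, (ex, ey) in enumerate(E):                   # streaming
        f[q] = np.roll(np.roll(f[q], ex, axis=0), ey, axis=1)
    return f

# toy usage: 64x64 domain initialised at rest with a small density bump
nx = ny = 64
rho0 = np.ones((nx, ny)); rho0[28:36, 28:36] = 1.05
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
for _ in range(100):
    f = lbm_step(f)
print(f.sum())   # total mass is conserved by collision and streaming
```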
This document discusses Gabor convolutional networks (GCNs), which incorporate Gabor filters into deep convolutional neural networks (DCNNs) to improve robustness to orientation and scale changes. GCNs modulate the learned convolution filters with Gabor filters, generating convolutional Gabor orientation filters (GoFs) that endow the network with the ability to capture spatial localization, orientation selectivity, and spatial frequency selectivity. This allows GCNs to learn more compact yet enhanced representations with improved generalization to image transformations, outperforming conventional DCNN architectures. The incorporation of Gabor filters into the convolution operation can be integrated into any deep learning model to enhance robustness to scale and rotation variations.
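The core GoF idea of modulating a learned filter with a bank of oriented Gabor kernels can be illustrated in a few lines. The kernel size, number of orientations and Gabor parameters below are arbitrary choices, and real GCNs fold this modulation into the convolution layers of a deep network rather than computing it separately as done here.

```python
import numpy as np

def gabor_kernel(size, theta, sigma=2.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
           * np.cos(2 * np.pi * xr / lam)

def gabor_orientation_filters(learned_filter, n_orientations=4):
    """Modulate one learned filter with Gabor kernels at several orientations,
    producing a small bank of orientation-selective filters (the GoF idea)."""
    size = learned_filter.shape[0]
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    return np.stack([learned_filter * gabor_kernel(size, t) for t in thetas])

# toy usage: a randomly "learned" 5x5 filter expanded into 4 oriented filters
rng = np.random.default_rng(2)
w = rng.normal(size=(5, 5))
print(gabor_orientation_filters(w).shape)   # (4, 5, 5)
```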
This project report summarizes research on using deep learning methods for super resolution of document images to improve optical character recognition (OCR) accuracy. Two approaches are studied: SRCNN and incorporating OCR accuracy into the loss function. Experiments include modifying the dataset to focus on text regions and varying the SRCNN architecture. Results show the 2-layer SRCNN trained on the modified dataset achieves the best OCR accuracy, exceeding the ground truth in some cases.
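The report itself is not reproduced here, but the standard three-layer SRCNN it builds on is small enough to write down. The 9-1-5 kernel sizes and 64/32 channel widths are the original SRCNN defaults; the 33x33 training patches are an assumption, and the OCR-aware loss mentioned above would simply replace or augment the MSE loss in this sketch.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer SRCNN: feature extraction -> non-linear mapping -> reconstruction."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                  nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):                      # x: bicubically upscaled low-res image
        return self.net(x)

# toy usage: one training step on random grayscale document crops
model = SRCNN()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
lowres_up = torch.rand(8, 1, 33, 33)           # assumed 33x33 training patches
highres   = torch.rand(8, 1, 33, 33)
loss = nn.functional.mse_loss(model(lowres_up), highres)
loss.backward(); optim.step()
print(float(loss))
```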
Introduction to Wavelet Transform and Two Stage Image De-noising Using Princi...ijsrd.com
In the past two decades, various techniques have been developed to support a variety of image processing applications, including medical, satellite, space, transmission and storage, radar and sonar. Noise in an image affects all of these applications, so it is necessary to remove noise from the image, and many methods and techniques exist for doing so. The wavelet transform (WT) has proved effective for noise removal, but it has some problems that are overcome by a PCA-based method. This paper presents an efficient image de-noising scheme using principal component analysis (PCA) with local pixel grouping (LPG), which provides better preservation of local image structures. In this method, a pixel and its nearest neighbors are modeled as a vector variable whose training samples are selected from a local window using block-matching-based LPG. In image de-noising, a compromise must be found between noise reduction and the preservation of significant image detail. PCA is a statistical technique for simplifying a dataset by reducing it to lower dimensions; it is a standard technique commonly used for data reduction in statistical pattern recognition and signal processing. The proposed de-noising procedure is iterated a second time to further improve performance, with the noise level adaptively adjusted in the second stage.
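As a rough single-pixel illustration of the LPG step (not the full two-stage algorithm), the sketch below gathers the patches most similar to a reference patch inside a local training window, moves them into the PCA domain and applies a Wiener-like shrinkage using a known noise standard deviation. The window size, patch size, number of grouped samples and the exact shrinkage rule are assumed values for the example.

```python
import numpy as np

def lpg_pca_denoise_patch(image, cy, cx, noise_sigma,
                          patch=5, window=15, n_samples=40):
    """Denoise the patch centred at (cy, cx) using local pixel grouping + PCA."""
    r, half = window // 2, patch // 2
    ref = image[cy-half:cy+half+1, cx-half:cx+half+1].ravel()
    # collect candidate patches from the local training window
    cands, dists = [], []
    for y in range(cy - r + half, cy + r - half + 1):
        for x in range(cx - r + half, cx + r - half + 1):
            p = image[y-half:y+half+1, x-half:x+half+1].ravel()
            cands.append(p); dists.append(np.sum((p - ref) ** 2))
    order = np.argsort(dists)[:n_samples]              # block-matching based LPG
    group = np.array(cands)[order]
    mean = group.mean(axis=0)
    cov = (group - mean).T @ (group - mean) / len(group)
    eigval, eigvec = np.linalg.eigh(cov)
    coeff = (ref - mean) @ eigvec
    signal_var = np.maximum(eigval - noise_sigma ** 2, 0)
    shrink = signal_var / (signal_var + noise_sigma ** 2)   # Wiener-style shrinkage
    return (coeff * shrink) @ eigvec.T + mean               # denoised reference patch

# toy usage on a synthetic noisy gradient image
rng = np.random.default_rng(3)
img = np.tile(np.linspace(0, 1, 64), (64, 1)) + rng.normal(scale=0.1, size=(64, 64))
print(lpg_pca_denoise_patch(img, 32, 32, noise_sigma=0.1).reshape(5, 5).round(2))
```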
Modified Adaptive Lifting Structure Of CDF 9/7 Wavelet With Spiht For Lossy I...idescitation
We present a modified structure of the 2-D CDF 9/7 wavelet transform based on adaptive lifting for image coding. Instead of alternately applying horizontal and vertical lifting, as in current practice, adaptive lifting performs lifting-based prediction in local windows along the direction of highest pixel correlation. Hence, it adapts far better to the orientation features of the image in local windows. The predicting and updating signals of adaptive lifting can be derived even at fractional-pixel precision to achieve high resolution while still maintaining perfect reconstruction. To further enhance performance, the adaptive modified structure of the 2-D CDF 9/7 transform is coupled with the SPIHT coding algorithm to mitigate the drawbacks of the wavelet transform. Experimental results show that the proposed image coding scheme outperforms JPEG 2000 in both PSNR and visual quality, with improvements of up to 6.0 dB over the existing structure on images with rich orientation features.
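The lifting idea behind the adaptive CDF 9/7 structure is easiest to see on the simpler integer LeGall 5/3 wavelet used here as a stand-in: one predict step and one update step split a 1-D signal into detail and approximation parts and reconstruct it exactly, which is the perfect-reconstruction property the adaptive scheme preserves when it chooses the prediction direction per window. The border handling (edge replication) and even signal length are simplifying assumptions.

```python
import numpy as np

def legall53_forward(x):
    """One level of the integer LeGall 5/3 lifting transform on a 1-D signal
    of even length (edge replication at the borders)."""
    even, odd = x[0::2].astype(int).copy(), x[1::2].astype(int).copy()
    even_r = np.append(even[1:], even[-1])                 # right neighbours
    d = odd - ((even + even_r) // 2)                       # predict step -> detail
    d_l = np.insert(d[:-1], 0, d[0])                       # left neighbours
    s = even + ((d_l + d + 2) // 4)                        # update step -> approximation
    return s, d

def legall53_inverse(s, d):
    d_l = np.insert(d[:-1], 0, d[0])
    even = s - ((d_l + d + 2) // 4)                        # undo update
    even_r = np.append(even[1:], even[-1])
    odd = d + ((even + even_r) // 2)                       # undo predict
    x = np.empty(2 * len(s), dtype=int)
    x[0::2], x[1::2] = even, odd
    return x

# toy usage: perfect integer-to-integer reconstruction
sig = np.array([3, 7, 1, 8, 2, 9, 4, 6])
s, d = legall53_forward(sig)
print(s, d, np.array_equal(legall53_inverse(s, d), sig))
```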
Unconstrained Optimization Method to Design Two Channel Quadrature Mirror Fil...CSCJournals
This paper proposes an efficient method for the design of a two-channel quadrature mirror filter (QMF) bank for subband image coding. The choice of filter bank is important, as it affects image quality as well as system design complexity. The design problem is formulated as a weighted sum of the reconstruction error in the time domain and the passband and stopband energy of the low-pass analysis filter of the bank. The objective function is minimized directly using a nonlinear unconstrained optimization method. Experimental results on images show that the performance of the proposed method is better than that of existing methods. The impact of filter characteristics such as stopband attenuation, stopband edge, and filter length on the quality of the reconstructed images is also investigated.
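A toy version of the stated formulation (reconstruction error plus weighted stopband energy of the low-pass analysis filter, minimized with an unconstrained optimizer) can be set up as follows. The filter length, band edge, weight and starting point are arbitrary example values, a symmetric (linear-phase) filter is assumed, and this is not the paper's exact objective or normalization.

```python
import numpy as np
from scipy.optimize import minimize

N, WS, ALPHA = 20, 0.6 * np.pi, 10.0           # filter length, stopband edge, weight
GRID = np.linspace(0, np.pi, 256)

def freq_response(h, w):
    n = np.arange(len(h))
    return np.exp(-1j * np.outer(w, n)) @ h

def objective(half):
    h = np.concatenate([half, half[::-1]])      # linear-phase low-pass prototype
    H = np.abs(freq_response(h, GRID)) ** 2
    Hm = np.abs(freq_response(h, np.pi - GRID)) ** 2
    recon_err = np.mean((H + Hm - 1.0) ** 2)    # QMF reconstruction residual
    stop_energy = np.mean(H[GRID >= WS])        # stopband energy of the analysis filter
    return recon_err + ALPHA * stop_energy

# start from a windowed-sinc half-band guess and refine it
n = np.arange(N) - (N - 1) / 2
init = 0.5 * np.sinc(n / 2) * np.hamming(N)
res = minimize(objective, init[:N // 2], method="BFGS")
print(objective(init[:N // 2]), res.fun)        # objective before and after refinement
```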
Weighted Performance comparison of DWT and LWT with PCA for Face Image Retrie...cscpconf
This paper compares the performance of a face image retrieval system based on the discrete wavelet transform and on the lifting wavelet transform, each combined with principal component analysis (PCA). These techniques are implemented and their performance is investigated using frontal facial images from the ORL database. Although the discrete wavelet transform is effective at representing image features and is suitable for face image retrieval, it still encounters problems in implementation, e.g. floating-point operations and decomposition speed. We use the advantages of the lifting scheme, a spatial approach for constructing wavelet filters, which provides a feasible alternative to the problems facing its classical counterpart. The lifting scheme has such attractive properties as convenient construction, simple structure, integer-to-integer transforms, low computational complexity and flexible adaptivity, revealing its potential in face image retrieval. Compared with PCA alone and with DWT plus PCA, the lifting wavelet transform with PCA requires less computation, while DWT-PCA gives the higher retrieval rate. In particular, the 'sym2' wavelet outperforms all other wavelets.
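A minimal version of this retrieval pipeline (wavelet approximation coefficients as features, PCA for dimensionality reduction, nearest-neighbour matching) might look like the sketch below. The two-level 'sym2' decomposition follows the abstract, while the component count, the random stand-in gallery and the ORL-like 112x92 image size are assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def dwt_features(img, wavelet="sym2", level=2):
    """Use the coarsest approximation sub-band as the feature vector."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    return coeffs[0].ravel()

# toy gallery of 40 "face" images (ORL-sized 112x92) and one noisy probe
rng = np.random.default_rng(4)
gallery = rng.random((40, 112, 92))
probe = gallery[7] + rng.normal(scale=0.05, size=(112, 92))

feats = np.array([dwt_features(g) for g in gallery])
pca = PCA(n_components=20).fit(feats)           # assumed number of components
gallery_proj = pca.transform(feats)
probe_proj = pca.transform(dwt_features(probe)[None, :])

dists = np.linalg.norm(gallery_proj - probe_proj, axis=1)
print("best match:", int(np.argmin(dists)))     # expected to be index 7
```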
The document discusses optimizing rate allocation for compressing hyperspectral images using JPEG2000. It proposes using the discrete wavelet transform instead of the Karhunen-Loeve transform for decorrelation due to lower computational complexity. A mixed model is used for rate distortion optimal bit allocation instead of experimentally obtained rate distortion curves. Comparisons show the mixed model approach results in lower mean squared error than traditional bit allocation schemes, while having lower implementation complexity than prior methods.
This document describes two distributed-memory parallelization schemes for efficiently parallelizing an explicit time-domain volume integral equation solver on the IBM Blue Gene/P supercomputer. The first scheme distributes the computationally intensive tested field computations among processors while storing the source field time histories on each processor, requiring all-to-all global communications. The second scheme distributes both the source fields and tested field computations, requiring sequential global communications. Numerical results show that both schemes scale well on Blue Gene/P, and the second more memory-efficient scheme allows solving problems with up to 3 million unknowns without acceleration. The parallel solver is demonstrated on the problem of light scattering from a red blood cell.
This paper proposes a new image compression approach that uses adaptive DCT-domain downsampling to reduce high frequency information in images for compression, and then uses learning-based mapping to compensate for the removed high frequencies during decompression. Specifically, it adaptively selects regions for downsampling in the DCT domain based on rate-distortion optimization. It then uses a database of visual patterns to map blurred image patches to corresponding high-quality patches during decompression, recovering lost high frequencies. Experimental results show it outperforms standards like H.264 and JPEG2000 especially at low bit rates.
This document proposes an Adaptive Resolution Enhancement Algorithm (AREA) using an Edge Targeted Filter (ETF) and Dual Tree Complex Wavelet Transform (DTCWT) to enhance the resolution and image quality of low-quality, low-resolution images. The ETF first detects edges in an input image and generates a mask. It then focuses on improving edge reconstruction. The output is input to DTCWT, which analyzes subband coefficients to detect flaws and interpolates coefficients to enhance spectral resolution. The inverse DTCWT produces a spatially enhanced output image with improved resolution up to 17% compared to existing techniques. Simulation results demonstrate the effectiveness of the proposed algorithm.
IRJET- Crowd Density Estimation using Novel Feature DescriptorIRJET Journal
This document proposes a new texture feature-based approach for crowd density estimation using Completed Local Binary Pattern (CLBP). The approach divides images into blocks and further divides blocks into cells to compute CLBP features for each cell. A multi-class Support Vector Machine (SVM) classifier is trained to classify each block into one of four crowd density categories (Very Low, Low, Medium, High). Experiments on the PETS 2009 dataset show the proposed CLBP descriptor achieves 95% accuracy and outperforms other texture descriptors for crowd density estimation.
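A reduced version of this pipeline can be put together with the plain (not completed) local binary pattern available in scikit-image: compute per-cell LBP histograms inside each block and feed the concatenated histograms to a multi-class SVM. The block and cell sizes, LBP parameters and the synthetic texture labels below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def block_lbp_histogram(block, P=8, R=1, cells=2):
    """Concatenate uniform-LBP histograms of the cells inside one image block."""
    lbp = local_binary_pattern(block, P, R, method="uniform")
    n_bins = P + 2
    h, w = block.shape
    hists = []
    for i in range(cells):
        for j in range(cells):
            cell = lbp[i*h//cells:(i+1)*h//cells, j*w//cells:(j+1)*w//cells]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)

# toy training set: blocks of different texture coarseness as density stand-ins
rng = np.random.default_rng(5)
X, y = [], []
for label, scale in enumerate([1, 3, 6, 12]):      # Very Low .. High crowd density
    for _ in range(30):
        block = rng.random((32, 32)).repeat(scale, 0)[:32].repeat(scale, 1)[:, :32]
        X.append(block_lbp_histogram(block)); y.append(label)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```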
This document proposes a novel framework called smooth sparse coding for learning sparse representations of data. It incorporates feature similarity or temporal information present in data sets via non-parametric kernel smoothing. The approach constructs codes that represent neighborhoods of samples rather than individual samples, leading to lower reconstruction error. It also proposes using marginal regression rather than lasso for obtaining sparse codes, providing a dramatic speedup of up to two orders of magnitude without sacrificing accuracy. The document contributes a framework for incorporating domain information into sparse coding, sample complexity results for dictionary learning using smooth sparse coding, an efficient marginal regression training procedure, and successful application to classification tasks with improved accuracy and speed.
This document summarizes a research paper that proposes two approaches called LLORMA for constructing local low-rank matrix approximations. The approaches approximate an observed matrix M as a weighted sum of several low-rank matrices, where each low-rank matrix is accurate in a local region of M. This relaxes the assumption that M has global low-rank structure. The paper analyzes the accuracy of the local low-rank modeling approaches and shows they improve prediction accuracy over classical low-rank approximation methods on recommendation tasks.
This paper proposes a new method called Local Collaborative Ranking (LCR) for recommender systems. LCR assumes the user-item rating matrix is locally low-rank, meaning the matrix is low-rank within neighborhoods defined by a distance metric on user-item pairs. LCR combines a recent local low-rank approximation approach with empirical risk minimization for ranking losses. Experiments show LCR outperforms other state-of-the-art recommendation methods. LCR is also easily parallelizable, making it suitable for large-scale industrial applications.
Design and Implementation of Efficient Analysis and Synthesis QMF Bank for Mu...TELKOMNIKA JOURNAL
This work presents a new technique for designing an efficient two-channel quadrature mirror filter bank with constant phase across frequency. To achieve the perfect reconstruction condition in the filter bank, the low-pass prototype filter is designed from its impulse response and from its frequency response in three regions, namely the passband, the stopband and the transition band. Using the reconstruction error and the stopband attenuation observed in the prototype filter response, the performance of the introduced filter can be evaluated from the filter coefficients generated in the design examples, which determine the quality of the filter bank design under near perfect reconstruction conditions.
PIPELINED ARCHITECTURE OF 2D-DCT, QUANTIZATION AND ZIGZAG PROCESS FOR JPEG IM...VLSICS Design
This document presents a pipelined architecture for 2D discrete cosine transform (DCT), quantization, and zigzag processing for JPEG image compression implemented on an FPGA. The 2D-DCT is computed using separability into two 1D-DCTs separated by a transpose buffer. A quantizer divides each DCT coefficient by a value from a quantization table stored in ROM. A zigzag buffer reorders the coefficients in zigzag format. The design was synthesized to Xilinx Spartan-3E FPGA and achieved a maximum frequency of 101.35MHz, processing an 8x8 block in 6604ns with a pipeline latency of 140 cycles.
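A software reference for the three stages of this pipeline (separable 2-D DCT, quantization against a table, zigzag reordering) is shown below; it mirrors what the hardware computes for one 8x8 block, using the standard JPEG luminance quantization table. The FPGA design itself of course works with fixed-point arithmetic rather than the floating-point used here.

```python
import numpy as np
from scipy.fftpack import dct

# standard JPEG luminance quantization table
Q = np.array([
    [16,11,10,16, 24, 40, 51, 61], [12,12,14,19, 26, 58, 60, 55],
    [14,13,16,24, 40, 57, 69, 56], [14,17,22,29, 51, 87, 80, 62],
    [18,22,37,56, 68,109,103, 77], [24,35,55,64, 81,104,113, 92],
    [49,64,78,87,103,121,120,101], [72,92,95,98,112,100,103, 99]])

def dct2(block):
    """Separable 2-D DCT: 1-D DCT on columns, then on rows (as in the pipeline)."""
    return dct(dct(block, norm="ortho", axis=0), norm="ortho", axis=1)

def zigzag(block8x8):
    """Reorder an 8x8 block into the JPEG zigzag sequence."""
    idx = sorted(((x + y, x if (x + y) % 2 else y, x, y)
                  for x in range(8) for y in range(8)))
    return np.array([block8x8[x, y] for _, _, x, y in idx])

block = (np.arange(64).reshape(8, 8) % 17).astype(float) - 128   # toy 8x8 pixel block
coeffs = dct2(block)
quantized = np.round(coeffs / Q).astype(int)
print(zigzag(quantized)[:16])
```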
A DIGITAL COLOR IMAGE WATERMARKING SYSTEM USING BLIND SOURCE SEPARATIONcsandit
An attempt is made to implement a digital color image-adaptive watermarking scheme in the spatial domain and in a hybrid domain, i.e. the host image in the wavelet domain and the watermark in the spatial domain. Blind source separation (BSS) is used to extract the watermark. The novelty of the presented scheme lies in determining the mixing matrix for the BSS model using the BFGS (Broyden-Fletcher-Goldfarb-Shanno) optimization technique. The method is based on the smooth and textured portions of the image: texture analysis is carried out based on the energy content of the image (using the GLCM), which makes the method image-adaptive when embedding the color watermark. The performance evaluation is carried out in the hybrid domain for various color spaces such as YIQ, HSI and YCbCr, and the feasibility of the optimization algorithm for finding the mixing matrix is also checked for these color spaces. Three ICA (independent component analysis)/BSS algorithms are used in the extraction procedure, through which the watermark can be retrieved efficiently. An effort is made to find the color space best suited for embedding the watermark that satisfies the conditions of imperceptibility and robustness against various attacks.
The document compares three image fusion techniques: wavelet transform, IHS (Intensity-Hue-Saturation), and PCA (Principal Component Analysis). For each technique, it describes the methodology, syntax used, and features. It then applies each technique to sample images to produce fused images. The RGB values of the fused images are recorded and compared in a table. The wavelet technique uses max area selection and consistency verification for feature selection. IHS transforms RGB to IHS values and replaces intensity with a panchromatic image. PCA replaces the first principal component with a high-resolution panchromatic image. The document concludes no single technique is best and the quality depends on the application.
Image compression using Hybrid wavelet Transform and their Performance Compa...IJMER
Images may be worth a thousand words, but they generally occupy much more space on a hard disk, or bandwidth in a transmission system, than their proverbial counterpart. Compressing an image is significantly different from compressing raw binary data. Of course, general-purpose compression programs can be used to compress images, but the result is less than optimal, because images have certain statistical properties that can be exploited by encoders specifically designed for them. Also, some of the finer details in the image can be sacrificed for the sake of saving a little more bandwidth or storage space. Compression is the process of representing information in a compact form, and it is a necessary and essential method for creating image files with manageable and transmittable sizes. Data compression schemes can be divided into lossless and lossy compression. In lossless compression, the reconstructed image is exactly the same as the original image. In lossy image compression, a high compression ratio is achieved at the cost of some error in the reconstructed image; lossy compression therefore generally provides much higher compression than lossless compression.
This document proposes a blind color image watermarking scheme based on fan beam transform (FBT) and QR decomposition. It embeds a color image watermark into the b* component of the host image's L*a*b* color space. After applying FBT to the b* component and dividing it into blocks, each block undergoes QR decomposition. The watermark is embedded by modifying an element in the R matrix. Experimental results show the proposed method achieves better imperceptibility than previous methods, as measured by PSNR and SSIM, while maintaining robustness against various attacks. The use of the b* component and properties of FBT and QR decomposition contribute to the scheme's imperceptibility and robustness
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION cscpconf
This document compares different techniques for texture classification, including wavelet transforms and co-occurrence matrices. It finds that the Haar wavelet technique is the most efficient in terms of time complexity and classification accuracy, except when images are rotated. The co-occurrence matrix method has higher time requirements but excellent classification results, except for rotated images where accuracy is greatly reduced due to its dependence on pixel values. Overall, the Haar wavelet proves to be the best method for texture classification based on the performance assessment parameters of time complexity and classification accuracy.
The document discusses cooperativism. It explains that a cooperative is a voluntary association of people who form a democratic organization that manages the economic, social and cultural needs of its members. Cooperatives can specialize in different areas such as labor, consumption, marketing, education or credit. The management of a cooperative is carried out by the president, the governing board and the general assembly of members.
This document discusses the importance of love and assistance to the poor. It argues that we receive talents and riches from God in order to help those in need, not only for ourselves. It also mentions that we should embrace poverty as a virtue and combat it by donating to help the poor instead of spending on unnecessary luxuries. Finally, it emphasizes that our goods do not belong to us; God gave them to us to share with those most in need.
This certificate of completion certifies that Veronica Chiaravalli completed the online course "Rigging a Cartoon Character in Maya" in October 2016. The course took 2 hours and 34 minutes to complete and was updated in November 2015. The certificate number is F67FC94825F74031B34F07101F8DBE25.
McDonald's offers food and services to customers and invites people to visit one of their locations soon.
The document describes various web 2.0 tools used by the Escola EB 2,3 Padre Alberto Neto school library. It discusses the library's web portal Byblos which catalogs over 1500 web resources. It also describes the library blog that receives 2000 hits per week. The library uses the social network hi5 and has 150 members. It shares reading suggestions, photos of activities, virtual exhibitions, and literacy resources on hi5. The library also uses the social bookmarking site Diigo to bookmark and annotate over 400 curriculum-related web resources. Students can get web search help from the library team via instant message or email. The school's intranet provides access to a 2800 item digital library collection. A teacher
The paper presents a nature-inspired algorithm modeled on the big bang theory of evolution. The algorithm is simple with respect to its number of parameters. Embedded systems are powered by batteries, and extending battery operating time by reducing power consumption is vital; embedded systems consume power while accessing memory during their operation. An efficient method for power management is proposed in this work. The proposed method reduces the energy consumption in memories by 76% to 98% compared with other methods reported in the literature.
The document presents a nature-inspired Big Bang-Big Crunch (BB-BC) algorithm for minimizing power consumption in embedded systems. The BB-BC algorithm mimics the big bang theory of evolution, with an initial random solution-generation phase followed by iterative convergence toward the center of mass. The algorithm is shown to reduce energy consumption in system memory by 76-98% compared to Tabu Search, with better accuracy and convergence. It provides an effective method for power management in memory-intensive embedded systems.
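The two phases described above (a random "big bang" scatter followed by a "big crunch" collapse to the fitness-weighted centre of mass, with a scatter radius that shrinks over the iterations) translate directly into a few lines of NumPy. The population size, iteration count and shrink schedule are assumed values, and the demo objective is only a stand-in for a real memory-energy cost model.

```python
import numpy as np

def big_bang_big_crunch(cost, dim, lo, hi, pop=50, iters=100, seed=0):
    """Big Bang-Big Crunch optimisation of `cost` over the box [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(lo, hi, size=(pop, dim))             # initial big bang
    best, best_cost = None, np.inf
    for k in range(1, iters + 1):
        costs = np.apply_along_axis(cost, 1, pts)
        if costs.min() < best_cost:
            best_cost, best = costs.min(), pts[costs.argmin()].copy()
        # big crunch: fitness-weighted centre of mass (lower cost => larger weight)
        w = 1.0 / (costs + 1e-12)
        centre = (w[:, None] * pts).sum(axis=0) / w.sum()
        # next big bang: scatter new points around the centre, radius shrinking with k
        sigma = (hi - lo) * rng.standard_normal((pop, dim)) / k
        pts = np.clip(centre + sigma, lo, hi)
    return best, best_cost

# toy usage: minimise a quadratic "energy" surrogate
best, e = big_bang_big_crunch(lambda z: float(np.sum((z - 0.3) ** 2)), dim=5, lo=-1, hi=1)
print(best.round(3), e)
```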
Survey on Various Image Denoising TechniquesIRJET Journal
This document summarizes several techniques for image denoising. It begins by defining image noise and explaining how noise degrades image quality. It then reviews 7 different published techniques for image denoising, summarizing the key aspects of each technique. These include methods using local spectral component decomposition, SVD-based denoising, patch-based near-optimal denoising, LPG-PCA denoising, trivariate shrinkage filtering, SURE-LET denoising, and 3D transform-domain collaborative filtering. The document concludes that LSCD provides better denoising results according to PSNR analysis and provides an overview of the state-of-the-art in image denoising techniques.
Detection and Classification in Hyperspectral Images using Rate Distortion an...Pioneer Natural Resources
This document summarizes an experiment that compares two methods of bit allocation for compressing hyperspectral imagery using JPEG2000: 1) the traditional high bit rate quantizer approach and 2) the rate distortion optimal (RDO) approach. The experiment shows that both methods perform well at relatively low bit rates, achieving over 96% classification accuracy. However, at very low bit rates, the RDO approach outperforms the high bit rate quantizer approach, achieving 90% accuracy at 0.0375 bpppb compared to less than 90% for the high bit rate method. The RDO approach also achieves lower mean squared error than the high bit rate quantizer approach.
This document proposes a partial prediction algorithm to reduce computational complexity in H.264/AVC 4x4 intra-prediction mode decision. It exploits the inherent symmetry in spatial prediction modes to quickly determine a subset of candidate modes for each 4x4 block. Unlike other fast algorithms, it does not assume a dominant edge is present in each block. By considering pixel-to-pixel correlation and developing simple cost measures from the symmetries, it can search 3 or fewer modes instead of 9 to select the best prediction mode, reducing complexity with minimal impact on quality.
EFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGESijcnac
This project presents a new image compression technique for the coding of retinal and
fingerprint images. Retinal images are used to detect diseases like diabetes or
hypertension. Fingerprint images are used for security purposes. In this work, the contourlet transform of the retinal and fingerprint image is taken first. The coefficients of the contourlet transform are quantized using an adaptive multistage vector quantization
scheme. The number of code vectors in the adaptive vector quantization scheme depends
on the dynamic range of the input image.
This document presents a probabilistic approach to rate control for optimal color image compression and video transmission. It summarizes a rate-distortion model for color image compression that was previously introduced. Based on this model, the document proposes an improved color image compression algorithm that uses the discrete cosine transform and models subband coefficients with a Laplacian distribution. It shows how the rate-distortion model can be used for rate control of compression, allowing target rates or bitrates to be achieved for still images and video sequences. Simulation results demonstrate that the new algorithm outperforms other available compression systems such as JPEG in terms of peak signal-to-noise ratio and a perceptual quality metric.
Pipelined Architecture of 2D-DCT, Quantization and ZigZag Process for JPEG Im...VLSICS Design
This paper presents the architecture and VHDL design of a Two Dimensional Discrete Cosine Transform (2D-DCT) with Quantization and zigzag arrangement. This architecture is used as the core and path in JPEG image compression hardware. The 2D-DCT calculation is made using the 2D-DCT Separability property, such that the whole architecture is divided into two 1D-DCT calculations by using a transpose buffer. Architecture for the Quantization and zigzag process is also described in this paper. The quantization process is done using a division operation. This design is aimed to be implemented in a Spartan-3E XC3S500 FPGA. The 2D-DCT architecture uses 1891 slices, 51 I/O pins, and 8 multipliers of one Xilinx Spartan-3E XC3S500E FPGA and reaches an operating frequency of 101.35 MHz. One input block with 8 x 8 elements of 8 bits each is processed in 6604 ns and the pipeline latency is 140 clock cycles.
Towards better performance: phase congruency based face recognitionTELKOMNIKA JOURNAL
Phase congruency is an edge detector and a measurement of significant features in an image. It is a robust method against contrast and illumination variation. In this paper, two novel techniques are introduced for developing a low-cost human identification system based on face recognition. Firstly, the valuable phase congruency features, the gradient edges and their associated angles, are utilized separately for classifying 130 subjects taken from three face databases, with the motivation of eliminating the feature extraction phase. By doing this, the complexity can be significantly reduced. Secondly, the training process is modified with a new technique, called averaging-vectors, developed to accelerate the training process and minimize the matching time. For further comparison and accurate evaluation, three competitive classifiers are considered in this work: Euclidean distance (ED), cosine distance (CD), and Manhattan distance (MD). The system performance is very competitive and acceptable, where the experimental results show promising recognition rates with a reasonable matching time.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document provides information on several remote sensing projects from IEEE 2015. It lists the titles, languages, and abstracts for 8 projects related to classification and analysis of hyperspectral and multispectral images. The projects focus on techniques such as sparse representation in tangent space, Gabor feature-based collaborative representation, level set evolutions for object extraction, and dimension reduction using spatial and spectral regularization.
The document describes a study investigating the distributed computing implementation of the EGSnrc Monte Carlo system using a computer cluster. A linear accelerator model was simulated using BEAMnrc and dose calculations were performed in water phantoms and patient geometries using DOSXYZnrc. The computational performance was tested for various scenarios using different numbers of computers. The results showed almost linear scaling of performance with the number of computers. Statistical uncertainties were the same for all scenarios, demonstrating the distributed approach provides an efficient method for speeding up Monte Carlo simulations for radiotherapy applications.
PROBABILISTIC DIFFUSION IN RANDOM NETWORK G...ijfcstjournal
In this paper, we consider a random network such that there could be a link between any two nodes in the network with a certain probability (plink). Diffusion is the phenomenon of spreading information throughout the network, starting from one or more initial set of nodes (called the early adopters). Information spreads along the links with a certain probability (pdiff). Diffusion happens in rounds with the first round involving the early adopters. The nodes that receive the information for the first time are said to be covered and
become candidates for diffusion in the subsequent round. Diffusion continues until all the nodes in the network have received the information (successful diffusion) or there are no more candidate nodes to spread the information but one or more nodes are yet to receive the information (diffusion failure). On the basis of exhaustive simulations conducted in this paper, we observe that for a given plink and pdiff values, the fraction of successful diffusion attempts does not appreciably change with increase in the number of early
adopters; whereas, the average number of rounds per successful diffusion attempt decreases with increase
in the number of early adopters. The invariant nature of the fraction of successful diffusion attempts with increase in the number of early adopters for a random network (for fixed plink and pdiff values) is an interesting and noteworthy observation (for further research) and it has not been hitherto reported in the literature.
This document proposes an algorithm for efficiently computing 2D spatial convolution through image partitioning and short convolution. The algorithm partitions an input image into overlapping 6x6 blocks, which are then further partitioned into non-overlapping 3x3 sub-images. Convolution is computed for each sub-image independently using a variable-length filter, reducing computational complexity compared to FFT-based techniques. The outputs from each sub-image convolution are combined to reconstruct the original block. Simulation results demonstrate the effectiveness of the algorithm for tasks like edge detection and noise reduction through local image filtering.
Improved noisy gradient descent bit-flipping algorithm over Rayleigh fading ...IJECEIAES
Gradient descent bit flipping (GDBF) and its many variants have offered remarkable improvements over legacy, or modified, bit flipping decoding techniques for decoding low density parity check (LDPC) codes. The GDBF method and its many variants, such as noisy gradient descent bit flipping (NGDBF), have been extensively studied and their performance has been assessed over multiple channels such as the binary symmetric channel (BSC), the binary erasure channel (BEC) and the additive white Gaussian noise (AWGN) channel. However, the performance of the said decoders in more realistic channels or channel conditions has not been equally studied. An improved noisy gradient descent bit flipping algorithm is proposed in this paper that optimally decodes LDPC encoded codewords over the Rayleigh fading channel and under various fade rates. Compared to the NGDBF method, our proposed decoder provides substantial improvements in both the error performance of the code and the number of iterations required to achieve that error performance. It subsequently reduces the end-to-end latency in applications with low or ultra-low latency requirements.
DUAL POLYNOMIAL THRESHOLDING FOR TRANSFORM DENOISING IN APPLICATION TO LOCAL PIXEL GROUPING METHOD
The International Journal of Multimedia & Its Applications (IJMA) Vol.8, No.1, February 2016
DOI : 10.5121/ijma.2016.8101
DUAL POLYNOMIAL THRESHOLDING FOR
TRANSFORM DENOISING IN APPLICATION TO
LOCAL PIXEL GROUPING METHOD
Jafet Morales, David Akopian and Sos Agaian
Department of Electrical and Computer Engineering, University of Texas at San Antonio,
San Antonio, Texas, USA
ABSTRACT
Thresholding operators have been used successfully for denoising signals, mostly in the wavelet domain.
These operators transform a noisy coefficient into a denoised coefficient with a mapping that depends on
signal statistics and the value of the noisy coefficient itself. This paper demonstrates that a polynomial
threshold mapping can be used for enhanced denoising of Principal Component Analysis (PCA) transform
coefficients. In particular, two polynomial threshold operators are used here to map the coefficients
obtained with the popular local pixel grouping method (LPG-PCA), which eventually improves the
denoising power of LPG-PCA. The method reduces the computational burden of LPG-PCA, by eliminating
the need for a second iteration in most cases. Quality metrics and visual assessment show the improvement.
KEYWORDS
Principal Components, Denoising, Shrinkage, Threshold Operators
1. INTRODUCTION
Noise in an image can be due to a variety of reasons and can be introduced during the acquisition,
transmission, or processing stages. But for data to be analyzed for a meaningful purpose either by
the human eye or a computer, noise must be reduced or eliminated. The problem of noise removal
has been studied extensively, and many denoising approaches have been proposed. These include, but are
not limited to, mean filters, the nonlinear median filter, simple adaptive filters, the Wiener filter, and
bandpass and band-reject filters [1].
Modern algorithms may act on more complicated domains, such as dictionary domains [2] or the
wavelet domain [3].
When performing a PCA transformation, the principal component will point in the direction for
which the data has the highest variability (i.e. variance) and subsequent components will be
ordered from highest to lowest variability, always having the maximum variability possible while
being orthogonal to preceding components. In the context of image denoising, this means that
information for distinguishing features of an image, such as edges, will be concentrated in the
first components, whereas information about the noise will spread more evenly throughout the
rest of the components [5]. In the simplest scenario, denoising in the PCA domain can be performed
by setting to zero some of the coefficients with the lowest variability.
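As an illustration of this simplest scenario, the following minimal NumPy sketch stacks a set of similar noisy blocks as columns of a matrix, computes the PCA transform from their covariance, and zeroes all but the highest-variance components. The function name, the number of retained components, and the random test data are illustrative assumptions, not part of the method described in this paper.

```python
import numpy as np

def pca_zero_denoise(blocks, keep):
    """Denoise a set of similar blocks (m x n, one block per column) by
    transforming to the PCA domain and zeroing all but the `keep`
    highest-variance components."""
    mean = blocks.mean(axis=1, keepdims=True)      # average block
    X = blocks - mean                              # centralized matrix
    cov = (X @ X.T) / X.shape[1]                   # covariance over pixel positions
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]              # highest variance first
    P = eigvecs[:, order].T                        # PCA transform (rows = components)
    Y = P @ X                                      # coefficients in the PCA domain
    Y[keep:, :] = 0.0                              # drop the low-variance components
    return P.T @ Y + mean                          # back to the pixel domain

# Example: 25 noisy 4x4 blocks flattened to 16-dimensional columns, keep 6 components
rng = np.random.default_rng(0)
noisy = rng.normal(size=(16, 25))
denoised = pca_zero_denoise(noisy, keep=6)
```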
Algorithms that denoise in a PCA domain can differ in many aspects when applied to image
denoising. First, there are many ways to decompose the image into signals that will be denoised
in their PCA domain. An obvious way to do this is to decompose the image into blocks of the
same size. To denoise each one of these signals, several samples of each of those signals must be
gathered in order to calculate the PCA transform for each one of them. Therefore, a grouping of
signals similar to the target signal needs to be performed. There exist several ways to perform
this grouping. Second, once in the PCA domain, the signals can be denoised in different ways.
The simplest way to do this is to set some of the last components in each PCA transformed signal
to zero. Finally, the denoised signals must somehow be aggregated into a denoised image. For
example, if there is an overlap between the windows to be denoised, then corresponding
components in the original domain, or pixel locations, can be averaged. Some algorithms do not
denoise signals directly in the PCA domain but rather use the domain to obtain better statistics
that can be used in the denoising process. The principal neighborhood dictionaries for non-local
means (PND-NLM) approach proposed in [4], for example, makes use of PCA to calculate more
accurate weights for pixels to be weight-averaged into a denoised version of the target pixel. This
method is an improvement over the popular NLM denoising approach, which calculates the weights
based on metrics obtained from the original domain. For PND-NLM, the results show that
denoising accuracy peaks when calculating the weights using only a low number of dimensions
compared to the number of dimensions in the original domain.
A very competitive denoising solution is proposed in [5] that uses local pixel grouping and
principal component analysis (LPG-PCA). Although originally designed for grayscale and
color images, the method has also been used successfully in the denoising of color filter array
(CFA) images from single-sensor digital cameras [6]. One of the reasons LPG-PCA is attractive
is because it is directly applicable to CFA images, whereas other state-of-the-art algorithms need
the CFA images to be demosaicked or interpolated before application, as CFA images have a
mosaic structure.
One disadvantage of LPG-PCA is its computational burden. The LPG-PCA algorithm, as
described in [5], must be applied twice to obtain good denoising results. This is because the PCA
transform and the grouping may be biased. The transform and the neighborhood grouping can be
biased due to the noise and the lack of enough samples. Besides reducing the amount of noise, the
first application is used to set up the ground for better grouping and reduced bias of the PCA
transform in the second stage. In this paper, an improvement to the first stage is proposed, which
reduces the residual noise due to biasing of the transform and biasing of the grouping. The
modification drastically improves the results obtained with a single application of LPG-PCA and
it also reduces the artifacts obtained in a second application. The proposed improvement can also be
extended to [6][7] and applied to other state-of-the-art methods such as the block-matching
and 3-D filtering (BM3D) proposed in [8].
The authors of BM3D proposed a denoising method that stacks similar neighborhoods into a 3D
array and performs a 3D transform. Their method has obtained good results when compared to
other state-of-the-art methods, such as Portilla’s Scale mixtures [9] and sparse and redundant
representations over trained dictionaries (K-SVD) [10]. Their method is also composed of two
stages. The purpose of the first stage is to set up the ground for improved grouping of blocks in the
second stage.
Section 2 of this paper presents an overview of the LPG-PCA method for image denoising. In
Section 3, polynomial thresholding is discussed. Section 4 describes how polynomial thresholding
can be used to denoise the PCA coefficients obtained by LPG-PCA. Denoising experiments and
results are presented in Section 5 and Section 6, respectively. Finally, remarks about performance
and future outlook are discussed in Section 7.
2. LPG-PCA DENOISING
This section is an overview of the LPG-PCA method proposed by Zhang, Dong, Zhang, and Shi
[5]. In [5], a variable block of pixels and a wider training block centered on the variable block are
defined. In this paper the variable block is referred to as the target block, and the training block as
the training window. The target block is the one to be denoised and it is selected with a sliding
window that slides across the entire image. Blocks that are similar to the target block are found in
the training window in order to calculate the target block’s PCA transform. Selection of similar
neighborhoods inside the training window is performed based on block matching. Once similar
blocks are found, they are used to define and apply the PCA transform. In the PCA domain, the linear
minimum mean square-error (LMMSE) technique is used to denoise the target block. The goal of
LMMSE is to find the parameters for a linear function that can map a noisy coefficient into a
clean coefficient. A different linear function is used for each component. After mapping, a
“clean” version of the target block is obtained by transforming the target block back to image
domain. Finally, redundant pixels from the sliding window are averaged.
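As a rough illustration of the grouping step, the sketch below searches a training window for the blocks most similar to the target block (smallest mean squared difference) and stacks them as columns, ready for the PCA computation. The block size, window size, similarity measure, and function names are illustrative assumptions rather than the exact procedure of [5].

```python
import numpy as np

def group_similar_blocks(image, top_left, block=3, window=10, n_keep=20):
    """Collect the blocks inside the training window that are most similar to
    the target block (smallest mean squared difference), target block first."""
    i0, j0 = top_left
    target = image[i0:i0 + block, j0:j0 + block]
    h, w = image.shape
    candidates = []
    for i in range(max(0, i0 - window), min(h - block, i0 + window) + 1):
        for j in range(max(0, j0 - window), min(w - block, j0 + window) + 1):
            if (i, j) == (i0, j0):
                continue
            cand = image[i:i + block, j:j + block]
            mse = np.mean((cand - target) ** 2)
            candidates.append((mse, cand.ravel()))
    candidates.sort(key=lambda t: t[0])            # most similar blocks first
    picked = [target.ravel()] + [c for _, c in candidates[:n_keep - 1]]
    return np.stack(picked, axis=1)                # m x n matrix, one block per column
```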
The target block is denoted by
$$\mathbf{x} = [x_1 \;\; x_2 \;\; \cdots \;\; x_m] \qquad (2.1)$$
where $m$ is the total number of pixels in the block. The training window is used as the search
space for blocks that are similar to the target block. Selected blocks, including the target block, are
aggregated into a matrix as follows
$$\mathbf{X} = \begin{bmatrix} x_1^1 & x_1^2 & \cdots & x_1^n \\ x_2^1 & x_2^2 & \cdots & x_2^n \\ \vdots & \vdots & \ddots & \vdots \\ x_m^1 & x_m^2 & \cdots & x_m^n \end{bmatrix} \qquad (2.2)$$
Every column in $\mathbf{X}$ represents a sample of the signal to be denoised, and every row represents a
pixel position in the blocks. The average signal or block is obtained by averaging the matrix in
(2.2) horizontally. The centralized matrix of $\mathbf{X}$ is a matrix from which the average block has been
subtracted from all the columns, and it is expressed as
$$\bar{\mathbf{X}} = \begin{bmatrix} \bar{x}_1^1 & \bar{x}_1^2 & \cdots & \bar{x}_1^n \\ \bar{x}_2^1 & \bar{x}_2^2 & \cdots & \bar{x}_2^n \\ \vdots & \vdots & \ddots & \vdots \\ \bar{x}_m^1 & \bar{x}_m^2 & \cdots & \bar{x}_m^n \end{bmatrix} \qquad (2.3)$$
The PCA transform is derived for the set of columns in (2.3), denoted further as the orthonormal
transform $\mathbf{P}_{\bar{\mathbf{X}}}$, which can decorrelate the dimensions of $\bar{\mathbf{X}}$. It is found by first computing the
covariance matrix of $\bar{\mathbf{X}}$
$$\mathbf{\Omega}_{\bar{\mathbf{X}}} = \frac{1}{n}\,\bar{\mathbf{X}}\bar{\mathbf{X}}^{T} \qquad (2.4)$$
where $n$ is the number of the block samples for covariance matrix estimation. Since $\mathbf{\Omega}_{\bar{\mathbf{X}}}$ is
symmetric, it can be diagonalized with
$$\mathbf{\Omega}_{\bar{\mathbf{X}}} = \mathbf{\Phi}\,\mathbf{\Lambda}\,\mathbf{\Phi}^{T} \qquad (2.5)$$
where $\mathbf{\Phi}$ is the orthonormal eigenvector matrix of $\mathbf{\Omega}_{\bar{\mathbf{X}}}$ and $\mathbf{\Lambda}$ is the diagonal eigenvalue matrix.
The PCA transform of $\bar{\mathbf{X}}$ is then defined as
$$\mathbf{P}_{\bar{\mathbf{X}}} = \mathbf{\Phi}^{T} \qquad (2.6)$$
and the PCA linear transformation is applied as
$$\bar{\mathbf{Y}} = \mathbf{P}_{\bar{\mathbf{X}}}\,\bar{\mathbf{X}} \qquad (2.7)$$
In other words, $\mathbf{P}_{\bar{\mathbf{X}}}$ is found by diagonalizing the covariance matrix of $\bar{\mathbf{X}}$. While $\mathbf{\Omega}_{\bar{\mathbf{X}}}$ is not
diagonal, the covariance matrix of $\bar{\mathbf{Y}}$, denoted as $\mathbf{\Omega}_{\bar{\mathbf{Y}}}$, should be diagonal so that the components of
$\bar{\mathbf{Y}}$ are decorrelated.
When clean samples are not available, the PCA transform can be calculated from a matrix of
noisy samples
$$\bar{\mathbf{X}}_{v} = \bar{\mathbf{X}} + \mathbf{V} \qquad (2.8)$$
It is justified in [5] that the PCA transform of $\bar{\mathbf{X}}_{v}$ approximates that of $\bar{\mathbf{X}}$ if the noise is assumed to
be additive white with zero mean, and uncorrelated to the image. The noisy dataset in the PCA
domain can be written as
$$\bar{\mathbf{Y}}_{v} = \mathbf{P}_{\bar{\mathbf{X}}}\,\bar{\mathbf{X}}_{v} = \mathbf{P}_{\bar{\mathbf{X}}}(\bar{\mathbf{X}} + \mathbf{V}) = \mathbf{P}_{\bar{\mathbf{X}}}\,\bar{\mathbf{X}} + \mathbf{P}_{\bar{\mathbf{X}}}\,\mathbf{V} = \bar{\mathbf{Y}} + \mathbf{V}_{y} \qquad (2.9)$$
where $\mathbf{V}_{y}$ is the noise transformed with $\mathbf{P}_{\bar{\mathbf{X}}}$.
There are several ways to remove noise from the target block $\bar{\mathbf{X}}_{v}$ in the PCA transform domain. A
simple way would be to set some of the last rows in $\bar{\mathbf{Y}}_{v}$ to zero. In the methods described in
[6][7], the target block is denoised with the linear minimum mean square error (LMMSE)
technique. With LMMSE, each row in $\bar{\mathbf{Y}}$ is estimated with
$$\hat{Y}_k = w_k\,\bar{Y}_{v,k} \qquad (2.10)$$
where the subscript $k$ has been used to denote the row number and $w_k$ is the weight for row (i.e.,
PCA component) $k$. The optimum weights are calculated as
$$w_k = \frac{\mathbf{\Omega}_{\bar{\mathbf{Y}}}(k,k)}{\mathbf{\Omega}_{\bar{\mathbf{Y}}_{v}}(k,k)} \qquad (2.11)$$
The covariance matrix of the clean dataset in the PCA domain, $\mathbf{\Omega}_{\bar{\mathbf{Y}}}$, is not directly available.
However, assuming $\bar{\mathbf{Y}}$ and $\mathbf{V}_{y}$ are uncorrelated, we can write $\mathbf{\Omega}_{\bar{\mathbf{Y}}_{v}}$ as
$$\mathbf{\Omega}_{\bar{\mathbf{Y}}_{v}} = \frac{1}{n}\,\bar{\mathbf{Y}}_{v}\bar{\mathbf{Y}}_{v}^{T} = \frac{1}{n}\left(\bar{\mathbf{Y}}\bar{\mathbf{Y}}^{T} + \bar{\mathbf{Y}}\mathbf{V}_{y}^{T} + \mathbf{V}_{y}\bar{\mathbf{Y}}^{T} + \mathbf{V}_{y}\mathbf{V}_{y}^{T}\right) \approx \frac{1}{n}\left(\bar{\mathbf{Y}}\bar{\mathbf{Y}}^{T} + \mathbf{V}_{y}\mathbf{V}_{y}^{T}\right) = \mathbf{\Omega}_{\bar{\mathbf{Y}}} + \mathbf{\Omega}_{v_y} \qquad (2.12)$$
Because the noise is uncorrelated, $\mathbf{\Omega}_{v_y}$ is a diagonal matrix $\mathbf{\Omega}_{v_y} = \sigma_v^2\,\mathbf{I}$, where $\sigma_v^2$ is the variance
of the noise. And
$$\mathbf{\Omega}_{\bar{\mathbf{Y}}} = \mathbf{\Omega}_{\bar{\mathbf{Y}}_{v}} - \mathbf{\Omega}_{v_y} = \mathbf{\Omega}_{\bar{\mathbf{Y}}_{v}} - \sigma_v^2\,\mathbf{I} \qquad (2.13)$$
The estimate of the centralized target block is then obtained with
$$\hat{\mathbf{X}} = \mathbf{P}_{\bar{\mathbf{X}}}^{T}\,\hat{\mathbf{Y}} \qquad (2.14)$$
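A compact NumPy sketch of the chain (2.4)-(2.14), assuming the similar noisy blocks are stacked as columns with the target block in the first column and that the noise standard deviation is known, might look as follows; it is a simplified illustration, not the authors' implementation.

```python
import numpy as np

def lpg_pca_lmmse(noisy_blocks, sigma):
    """Denoise the first column (the target block) of a stack of similar noisy
    blocks using the PCA transform and LMMSE shrinkage of eqs. (2.4)-(2.14)."""
    mean = noisy_blocks.mean(axis=1, keepdims=True)
    Xv = noisy_blocks - mean                        # centralized noisy matrix, eq. (2.8)
    n = Xv.shape[1]
    cov_v = (Xv @ Xv.T) / n                         # covariance estimate, eq. (2.4)
    _, eigvecs = np.linalg.eigh(cov_v)
    P = eigvecs[:, ::-1].T                          # PCA transform, eq. (2.6)
    Yv = P @ Xv                                     # noisy PCA coefficients, eq. (2.7)
    var_v = Yv.var(axis=1)                          # diagonal of Omega_Yv
    var_clean = np.maximum(var_v - sigma ** 2, 0)   # eq. (2.13)
    w = var_clean / np.maximum(var_v, 1e-12)        # LMMSE weights, eq. (2.11)
    Y_hat = w[:, None] * Yv                         # eq. (2.10)
    X_hat = P.T @ Y_hat + mean                      # back to the pixel domain, eq. (2.14)
    return X_hat[:, 0]                              # denoised target block
```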
3. POLYNOMIAL THRESHOLD OPERATORS
Thresholding operators have been used extensively in the wavelet domain with the purpose of
image denoising. Their effectiveness in the wavelet domain has been shown in
[9][10][11][12][13]. A thresholding operator is a function that is used to map a noisy coefficient
value into a denoised coefficient value. Two very popular thresholding operators are the hard and
soft thresholding operators proposed in [14]. In the case of the hard thresholding operator,
coefficients that have a value less than a threshold are set to zero. In the case of the soft threshold
operator, coefficients that are lower than the threshold are set to zero and coefficients that exceed
the threshold are diminished in magnitude by the value of the threshold. There are many
approaches for finding the appropriate value for the threshold. The Universal threshold was
proposed in [14]. This threshold value is the expected maximum value of $N$ independent samples
from a normal distribution $N(0, \sigma)$ [15], calculated as
$$\lambda_u = \sigma\sqrt{2\log(N)} \qquad (3.1)$$
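For reference, a short sketch of the hard and soft thresholding operators of [14] together with the universal threshold of (3.1) is given below; the array contents and the noise level are illustrative.

```python
import numpy as np

def universal_threshold(sigma, n):
    """Universal threshold of eq. (3.1): expected maximum of n N(0, sigma) samples."""
    return sigma * np.sqrt(2.0 * np.log(n))

def hard_threshold(c, lam):
    """Zero every coefficient whose magnitude does not exceed the threshold."""
    return np.where(np.abs(c) > lam, c, 0.0)

def soft_threshold(c, lam):
    """Zero small coefficients and shrink the rest towards zero by lam."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

coeffs = np.array([-3.1, -0.4, 0.2, 1.7, 5.0])
lam = universal_threshold(sigma=1.0, n=coeffs.size)
print(hard_threshold(coeffs, lam), soft_threshold(coeffs, lam))
```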
A class of polynomial threshold operators (PTOs) has been proposed in [11]. These operators
have a linear response for coefficients higher than the threshold and an odd polynomial response
for coefficients lower than the threshold. Several degrees of freedom allow for the slope and
intercept of the linear part to be adjusted as well as the coefficients of the polynomial section of
the operator. In the literature, these types of operators have been optimized by using a linear least
squares (LLS) optimization approach, which makes use of matrix inversions. An adaptive least
mean squares (LMS) optimization technique for denoising filter adaptation was used in [16]. The
same adaptive technique has been used in [12] in the context of wavelet thresholding.
Suppose we have a corrupted signal $\mathbf{s}_{v}$ and we wish to calculate $\hat{\mathbf{s}}$, which is an estimate of the
clean signal $\mathbf{s}$. The polynomial threshold operator is defined as
$$\hat{s}_i = T\!\left(s_v^i\right) = \begin{cases} a_{N-1}\,s_v^i - a_N\,\operatorname{sign}\!\left(s_v^i\right)\lambda, & \left|s_v^i\right| > \lambda \\[4pt] \sum_{r=1}^{N-2} a_r\left(s_v^i\right)^{2r+1}, & \left|s_v^i\right| \le \lambda \end{cases} \qquad (3.2)$$
where $\hat{s}_i$ is the estimate of the $i$th sample of the clean signal, $s_v^i$ is the $i$th sample of the noisy
signal, $\mathbf{a}$ is a vector with coefficients for the polynomial and linear parts of the operator, and $\lambda$ is
the threshold.
For denoising of an entire signal, we group the samples in a vector and rewrite equation (3.2) in
vector form
$$\hat{\mathbf{s}} = T(\mathbf{s}_{v}) = \mathbf{F}(\mathbf{s}_{v})\,\mathbf{a} \qquad (3.3)$$
where $\mathbf{F}(\mathbf{s}_{v})$ is a matrix for which the $i$th row is
$$\mathbf{F}_i(\mathbf{s}_{v}) = \begin{cases} \left[\,0 \;\; 0 \;\; \cdots \;\; 0 \;\; s_v^i \;\; -\operatorname{sgn}\!\left(s_v^i\right)\lambda\,\right], & \left|s_v^i\right| > \lambda \\[4pt] \left[\,\left(s_v^i\right)^{3} \;\; \left(s_v^i\right)^{5} \;\; \cdots \;\; \left(s_v^i\right)^{2N-3} \;\; 0 \;\; 0\,\right], & \left|s_v^i\right| \le \lambda \end{cases} \qquad (3.4)$$
The optimal coefficients of $\mathbf{a}$, which minimize the mean square error (MSE) between the clean
signal and the denoised signal, can be found by solving
$$\mathbf{a}_{\mathrm{opt}} = \arg\min_{\mathbf{a}} \left\|\mathbf{s} - \mathbf{F}(\mathbf{s}_{v})\,\mathbf{a}\right\|^2 \qquad (3.5)$$
Even though a linear system of equations $\mathbf{B}\mathbf{c} = \mathbf{d}$, where $\mathbf{B}$ is a matrix and $\mathbf{c}$ and $\mathbf{d}$ are vectors, may
have no solution, the solution $\hat{\mathbf{c}}$ of $\mathbf{B}^{T}\mathbf{B}\hat{\mathbf{c}} = \mathbf{B}^{T}\mathbf{d}$ is the one that minimizes $\left\|\mathbf{B}\mathbf{c} - \mathbf{d}\right\|^2$.
Therefore, the linear least squares solution of (3.5) can be obtained with
$$\mathbf{F}^{T}(\mathbf{s}_{v})\,\mathbf{F}(\mathbf{s}_{v})\,\mathbf{a}_{\mathrm{opt}} = \mathbf{F}^{T}(\mathbf{s}_{v})\,\mathbf{s}$$
$$\mathbf{a}_{\mathrm{opt}} = \left(\mathbf{F}^{T}(\mathbf{s}_{v})\,\mathbf{F}(\mathbf{s}_{v})\right)^{-1}\mathbf{F}^{T}(\mathbf{s}_{v})\,\mathbf{s} \qquad (3.6)$$
The coefficients of $\mathbf{a}_{\mathrm{opt}}$ can then be used to denoise the signal $\mathbf{s}_{v}$.
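The fitting procedure of (3.3)-(3.6) can be sketched as follows: the design-matrix rows follow (3.4), and the normal equations are solved with a least squares routine instead of an explicit matrix inversion. The function names, the default N = 4, and the use of np.linalg.lstsq are illustrative assumptions.

```python
import numpy as np

def build_F(s_noisy, lam, N=4):
    """Row-wise design matrix of eq. (3.4): odd polynomial terms below the
    threshold, a linear term and a sign(x)*lambda term above it."""
    F = np.zeros((s_noisy.size, N))
    for i, s in enumerate(s_noisy):
        if abs(s) > lam:
            F[i, N - 2] = s                       # multiplies a_{N-1}
            F[i, N - 1] = -np.sign(s) * lam       # multiplies a_N
        else:
            for r in range(1, N - 1):             # r = 1 .. N-2
                F[i, r - 1] = s ** (2 * r + 1)    # multiplies a_r
    return F

def fit_pto(s_clean, s_noisy, lam, N=4):
    """Least squares fit of eqs. (3.5)/(3.6): a_opt minimizes ||s - F(s_v) a||."""
    F = build_F(s_noisy, lam, N)
    a_opt, *_ = np.linalg.lstsq(F, s_clean, rcond=None)
    return a_opt

def apply_pto(s_noisy, a, lam, N=4):
    """Apply the trained polynomial threshold operator of eq. (3.2)."""
    return build_F(s_noisy, lam, N) @ a
```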
4. DUAL POLYNOMIAL THRESHOLDING OF PCA COEFFICIENTS
In this section we explain how the denoised coefficients obtained with (2.10) can be further
denoised by using PTOs. The coefficients to be further denoised are the LMMSE-denoised
coefficients of the target block in its PCA domain, and not the coefficients from other blocks in
the training window, since those blocks are only used to calculate the PCA transform and the
weights in (2.11). The PTOs are applied to the column of $\hat{\mathbf{Y}}$ that corresponds to the target block
(the block to be denoised), before transforming the column back to the image domain and assembling
it into the denoised version of the image. We will refer to the column of each $\hat{\mathbf{Y}}$ that corresponds to
the target block as $\hat{\mathbf{y}}$.
When denoising each sample of $\hat{\mathbf{y}}$, one PTO is used from the two that are available for each PCA
component. In other words, a total of $m \times 2$ PTOs are used to denoise the entire image, where $m$
is the number of pixels in the target block. Selection of the right PTO from the two that are
available for each PCA component is done with a simple binary classification scheme. If the
sample $\hat{y}_k$ of $\hat{\mathbf{y}}$ is estimated to be composed mainly of noise, a highly suppressive PTO should be
used to denoise it. On the other hand, if the sample is estimated to be mostly clean, a less
suppressive PTO is to be used.
The weights calculated in (2.11) can be used to estimate the level of noise on a given sample $\hat{y}_k$.
The diagonals of the covariance matrices from which the weights are calculated can also be
written as
$$\mathbf{\Omega}_{\bar{\mathbf{Y}}} = \begin{bmatrix} \sigma_{\bar{Y}_1}^2 & c_{12} & \cdots & c_{1m} \\ c_{21} & \sigma_{\bar{Y}_2}^2 & \cdots & c_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ c_{m1} & c_{m2} & \cdots & \sigma_{\bar{Y}_m}^2 \end{bmatrix} \qquad (4.1)$$
$$\mathbf{\Omega}_{\bar{\mathbf{Y}}_v} = \begin{bmatrix} \sigma_{\bar{Y}_{v,1}}^2 & c_{12} & \cdots & c_{1m} \\ c_{21} & \sigma_{\bar{Y}_{v,2}}^2 & \cdots & c_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ c_{m1} & c_{m2} & \cdots & \sigma_{\bar{Y}_{v,m}}^2 \end{bmatrix} \qquad (4.2)$$
where $\sigma_{\bar{Y}_k}^2$ and $\sigma_{\bar{Y}_{v,k}}^2$ are the variances of the $k$th row of $\bar{\mathbf{Y}}$ and $\bar{\mathbf{Y}}_{v}$, respectively. These are the
variances of each PCA component for the given training window. The weight for the $k$th PCA
component in (2.11) can therefore be rewritten as
$$w_k = \frac{\sigma_{\bar{Y}_k}^2}{\sigma_{\bar{Y}_{v,k}}^2} \qquad (4.3)$$
A dual polynomial threshold operator (BPTO) for further denoising of $\hat{\mathbf{y}}$ can then be defined as
$$T_k(\hat{y}_k) = \begin{cases} T_{k1}(\hat{y}_k), & w_k \ge \delta_k \\ T_{k2}(\hat{y}_k), & w_k < \delta_k \end{cases} \qquad (4.4)$$
where $T_{k1}$ corresponds to PTO 1 of the $k$th PCA component of the block, $T_{k2}$ corresponds to
PTO 2 of the $k$th PCA component of the block, and $\delta_k$ is a threshold for the $k$th weight, which can
be heuristically adjusted for different types of images.
Training of the operator $T$, which can be used to denoise the entire image, consists of finding the
optimum polynomial coefficient sets $\mathbf{a}_{k1}$ and $\mathbf{a}_{k2}$ for each pair of PTOs. The total number of
coefficients will be $N \times m \times 2$, given that the total number of coefficients per polynomial is $N$
and the number of pixels in each target block is $m$. The thresholds $\lambda_{k1}$ and $\lambda_{k2}$ in (3.2) can be
calculated with (3.1), unless a different type of threshold is chosen. To estimate $\mathbf{a}_{k1}$, $\mathbf{a}_{k2}$, and the
optimum weight threshold $\delta_k$ for each component $k$, we can calculate $\mathbf{a}_{k1}$ and $\mathbf{a}_{k2}$ using (3.5) for
discrete values of $\delta_k$, selecting the $\delta_k$, $\mathbf{a}_{k1}$, and $\mathbf{a}_{k2}$ that minimize the MSE between the clean
training image and the denoised version of its noisy counterpart.
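A sketch of the per-component training sweep and of the selection rule (4.4), reusing the fit_pto and apply_pto helpers from the previous sketch, is shown below; the names, the subgroup-size guard, and the grid of candidate thresholds are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def train_dual_pto(y_clean, y_lmmse, weights, lam, deltas, N=4):
    """For one PCA component, sweep candidate weight thresholds delta, fit a
    PTO on each of the two subgroups, and keep the triple (delta, a1, a2)
    with the smallest reconstruction MSE (Section 4)."""
    best = None
    for delta in deltas:
        high = weights >= delta                     # mostly-clean coefficients
        low = ~high                                 # mostly-noise coefficients
        if high.sum() < 2 or low.sum() < 2:
            continue                                # need samples in both subgroups
        a1 = fit_pto(y_clean[high], y_lmmse[high], lam, N)
        a2 = fit_pto(y_clean[low], y_lmmse[low], lam, N)
        est = np.empty_like(y_lmmse)
        est[high] = apply_pto(y_lmmse[high], a1, lam, N)
        est[low] = apply_pto(y_lmmse[low], a2, lam, N)
        mse = np.mean((y_clean - est) ** 2)
        if best is None or mse < best[0]:
            best = (mse, delta, a1, a2)
    if best is None:
        raise ValueError("not enough coefficients to train both PTOs")
    return best[1:]                                 # (delta, a1, a2)

def dual_pto(y_lmmse, weight, delta, a1, a2, lam, N=4):
    """Eq. (4.4): pick the less suppressive PTO when the LMMSE weight is high."""
    a = a1 if weight >= delta else a2
    return apply_pto(np.atleast_1d(y_lmmse), a, lam, N)[0]
```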
5. CASE STUDY
Figure 1 shows a noisy brain MRI image, and the image that results after denoising with LPG-
PCA. The original image was artificially corrupted with Gaussian noise (σ = 20).
Figure 1. Brain MRI image corrupted with Gaussian noise (σ = 20) (left); denoised with LPG-PCA (right)
Fig. 2 shows a plot of the clean vs. noisy LPG-PCA coefficients of the corrupted MRI image.
Only the second PCA component is shown in Fig. 2. The original LPG-PCA method uses
LMMSE to clean the noisy coefficients. The LMMSE denoised coefficients vs. noisy coefficients
are plotted in Fig. 3. Ideally, the plots in Fig. 2 and Fig. 3 would be very similar.
Figure 2. Clean (y-axis) vs. noisy (x-axis) LPG-PCA coefficients for the 2nd component of the corrupted brain MRI image
Figure 3. LMMSE denoised (y-axis) vs. noisy (x-axis) LPG-PCA coefficients for the 2nd component of the corrupted brain MRI image
It is clear from the plots in Fig. 2-3 that the correction obtained by applying only LMMSE to the
LPG-PCA coefficients can be further improved. In particular, the coefficients obtained with
LMMSE denoising do not reflect the sparsity of the original signal, shown in Fig. 2, since there
are not many denoised coefficients clustered around zero. The proposed adaptive polynomial
technique can be used to map the LMMSE denoised coefficients to better denoised coefficients
by training the polynomial with similar images that have been corrupted with a similar noise
distribution. Two polynomials have been used per component instead of one in order to reduce
biasing of nonzero coefficients by the coefficients that are supposed to be zero. Thus, the
coefficients for a particular component are classified into two subgroups. Each group is denoised
with a separate polynomial. Classification into each group is done by thresholding the weights
calculated from the LMMSE denoising procedure as in (4.4). The correct threshold is found by
minimizing the MSE of the reconstructed coefficients separately for every component during
training.
Fig. 4 shows the clean and noisy images that were used to train the PTOs used to denoise the
image in Fig. 1. The original training image is the concatenation of 4 brain MRI images and its
noisy counterpart has been corrupted with Gaussian noise (σ = 20). The training images are twice
the size of the target image in every dimension.
Figure 4. Original training image (left); corrupted with Gaussian noise (σ = 20) (right)
After training, the PTOs can be used to further denoise the LMMSE denoised LPG-PCA
coefficients of a target image. In the case of the image in Fig. 1, the denoised coefficients vs.
noisy coefficients for the 2nd component are shown in Fig. 5. The plot shows many more
coefficients closer to zero than those in Fig. 3, which were calculated by applying only LMMSE.
These are the coefficients for which the weight in (4.3) is much lower, which means they were
mostly made of noise. Coefficients with higher weights are almost left untouched.
Figure 5. LMMSE polynomial denoised coefficients (y-axis) vs. noisy coefficients (x-axis) for the 2nd component of the MRI image in Fig. 1
The image that results by using LMMSE and the image that results when post-processing with the
two polynomials per component are shown together for comparison in Fig. 6.
Figure 6. Brain MRI image denoised with LMMSE without polynomial thresholding (left); denoised with LMMSE and polynomial thresholding (right)
6. PERFORMANCE ASSESSMENT
The proposed modification to LPG-PCA was tested on the denoising of three datasets of five
128×128 8-bit grayscale images each. The first dataset contains slices from a 3D brain MRI
image; the second dataset contains slices from a microscopic 3D femur bone image; and the third
dataset contains slices from a 3D neuron cells image. Slices from 3D images were used because
of the similarity between the images, meaning that a single trained set of polynomials could be
used to denoise each type of image. The datasets were corrupted by Gaussian noise with σ = 10, 20, 30, and 40.
The fast version of LPG-PCA was used in the experiments. This version of the algorithm
performs a dimensionality reduction so that only 40 percent of the PCA coefficients are used in
the grouping phase of the algorithm. All of the coefficients are then used in the PCA
decomposition and denoising phases of the algorithm.
In the implementation of polynomial thresholding, a total of N = 4 coefficients were used for
each PTO. The threshold for the polynomial operator was set to be the universal threshold. While
training the PTOs for each component, the threshold for the LMMSE weights was varied between
0 and 3 at intervals of 0.02. At each one of these thresholds the optimum polynomials were
calculated. The LMMSE weight threshold and the corresponding two polynomials that minimized
the MSE for each component were chosen as the optimum parameters for denoising that component
in a particular set of images. Polynomial thresholding was not used, however, for PCA components
in which the number of coefficients above the threshold was less than 20. In these cases the LMMSE
denoised coefficients were used as the final denoised coefficients. This was done in order to avoid
overfitting of the polynomial to very few coefficients.
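Under these settings, the per-component decision of whether to apply the trained dual PTO or to keep the LMMSE result unchanged could be sketched as follows, reusing train_dual_pto and apply_pto from the earlier sketches; the variable names and the exact guard are illustrative assumptions.

```python
import numpy as np

# Illustrative settings from this section: N = 4 PTO coefficients and LMMSE
# weight thresholds swept from 0 to 3 in steps of 0.02.
N = 4
DELTAS = np.arange(0.0, 3.0 + 1e-9, 0.02)

def train_component(y_clean, y_lmmse, weights, lam, min_above=20):
    """For one PCA component of the training image, either skip polynomial
    thresholding (too few coefficients exceed the threshold, to avoid
    overfitting) or train the dual PTO and return its parameters together
    with the denoised training coefficients."""
    if np.sum(np.abs(y_lmmse) > lam) < min_above:
        return None, y_lmmse                      # keep the LMMSE result as-is
    delta, a1, a2 = train_dual_pto(y_clean, y_lmmse, weights, lam, DELTAS, N)
    high = weights >= delta
    out = np.empty_like(y_lmmse)
    out[high] = apply_pto(y_lmmse[high], a1, lam, N)
    out[~high] = apply_pto(y_lmmse[~high], a2, lam, N)
    return (delta, a1, a2), out
```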
The performance of the modified LPG-PCA algorithm was compared to that of the original LPG-
PCA algorithm for one and two iterations. The amount of noise to be reduced by the second
iteration is considerably smaller, so a different set of polynomials was trained for the
second iteration. This means that two sets of polynomials were generated during training.
The peak signal to noise ratio (PSNR) results in Table 1 show that a single iteration of the
modified LPG-PCA method performs much better than a single iteration of the original LPG-
PCA. In fact, a single iteration of the modified LPG-PCA performs better than two iterations of
the original method for most of the experiments when using PSNR as a measure of performance.
When using the structural similarity index (SSIM) as the performance metric, the results of a
single iteration of the modified LPG-PCA are very close to those obtained with two iterations of
the original LPG-PCA. In the second iteration of the modified LPG-PCA there is an additional
improvement to both metrics in most cases. In both iterations the improvement to the original
method is larger for high levels of noise. The visual results in Fig. 7-9 show that the
modification reduces not only the residual noise but also the artifacts created by the
original method.
7. CONCLUSION
An enhanced LPG-PCA method has been proposed in this paper. The linear mapping of PCA
coefficients by LMMSE in the original implementation of LPG-PCA has been improved by
adding an optimal nonlinear mapping stage. The additional mapping of coefficients is done by
separating coefficients for every PCA component into two subgroups and using a separate
polynomial for each group. The polynomials are trained to denoise specific types of images.
Image degradation metrics and visual appearance show that the proposed stage is an improvement
to the original LPG-PCA method. Possible future directions would be to explore whether the
method described can improve denoising in other domains and to assess its performance for other
types of noise.
Table 1. PSNR and SSIM results for denoising brain MRI, neuron cells, and femur bone images with LPG-
PCA and LPG-PCA with polynomial thresholding ("1 it." = LPG-PCA one iteration, "Mod. 1 it." = modified
LPG-PCA one iteration, "2 it." = LPG-PCA two iterations, "Mod. 2 it." = modified LPG-PCA two iterations)

            Noise             PSNR                                   SSIM
Dataset     level     1 it.  Mod. 1 it.  2 it.  Mod. 2 it.   1 it.  Mod. 1 it.  2 it.  Mod. 2 it.
Brain MRI   σ = 10    36.4   37.3        37.4   36.0         0.92   0.94        0.95   0.95
            σ = 20    31.7   33.2        33.1   33.8         0.78   0.86        0.87   0.89
            σ = 30    29.0   31.0        30.8   31.6         0.66   0.79        0.80   0.84
            σ = 40    27.0   29.4        29.3   29.7         0.55   0.73        0.73   0.79
Neurons     σ = 10    33.5   33.9        33.9   34.0         0.92   0.95        0.95   0.95
            σ = 20    28.9   29.6        29.5   29.8         0.78   0.88        0.88   0.91
            σ = 30    26.3   27.2        27.1   27.5         0.65   0.82        0.82   0.86
            σ = 40    24.4   25.5        25.5   25.9         0.54   0.75        0.74   0.83
Femur       σ = 10    34.2   34.8        34.6   35.2         0.88   0.90        0.91   0.92
            σ = 20    29.3   30.0        29.7   30.5         0.68   0.75        0.75   0.80
            σ = 30    26.6   27.4        27.2   27.9         0.50   0.61        0.61   0.67
            σ = 40    24.9   26.0        25.8   26.3         0.38   0.51        0.51   0.57
Figure 7. Brain MRI training image (top left); original image for testing (top right); first iteration
denoising with LPG-PCA (middle left); first iteration denoising with LPG-PCA and polynomial
thresholding (middle right); second iteration denoising with LPG-PCA (bottom left); second
iteration denoising with LPG-PCA and polynomial thresholding (bottom right)
Figure 8. Femur bone training image (top left); original image for testing (top right); first iteration
denoising with LPG-PCA (middle left); first iteration denoising with LPG-PCA and polynomial
thresholding (middle right); second iteration denoising with LPG-PCA (bottom left); second
iteration denoising with LPG-PCA and polynomial thresholding (bottom right)
Figure 9. Neuron cells training image (top left); original image for testing (top right); first iteration
denoising with LPG-PCA (middle left); first iteration denoising with LPG-PCA and polynomial
thresholding (middle right); second iteration denoising with LPG-PCA (bottom left); second
iteration denoising with LPG-PCA and polynomial thresholding (bottom right)
REFERENCES
[1] R.C. Gonzalez, R.E. Woods, (2002), Digital Image Processing, Second Ed., Prentice-Hall, Englewood
Cliffs, NJ
[2] M. Elad, M. Aharon, (2006), "Image Denoising Via Sparse and Redundant Representations Over
Learned Dictionaries," IEEE Transactions on Image Processing, Vol. 15, No. 12, pp. 3736-3745
[3] J.S. Walker, (2008), A Primer on Wavelets and Their Scientific Applications, Second Ed., Chapman
and Hall/CRC
[4] T. Tasdizen, (2009), "Principal Neighborhood Dictionaries for Nonlocal Means Image Denoising,"
IEEE Transactions on Image Processing, Vol. 18, No. 12, pp. 2649-2660
[5] L. Zhang, W. Dong, D. Zhang, G. Shi, (2010), "Two-stage image denoising by principal component
analysis with local pixel grouping," Pattern Recognition, Vol. 43, No. 4, pp. 1531-1549
[6] L. Zhang, R. Lukac, X. Wu, D. Zhang, (2009), "PCA-Based Spatially Adaptive Denoising of CFA
Images for Single-Sensor Digital Cameras," IEEE Transactions on Image Processing, Vol. 18, No. 4,
pp. 797-812
[7] D.D. Muresan, T.W. Parks, (2003), "Adaptive principal components and image denoising,"
Proceedings of the 2003 International Conference on Image Processing, Vol. 1, pp. I-101-4
[8] K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, (2007), "Image Denoising by Sparse 3-D Transform-
Domain Collaborative Filtering," IEEE Transactions on Image Processing, Vol. 16, No. 8, pp. 2080-
2095
[9] J. Portilla, V. Strela, M.J. Wainwright, E.P. Simoncelli, (2003), "Image denoising using scale
mixtures of Gaussians in the wavelet domain," IEEE Transactions on Image Processing, Vol. 12,
No. 11, pp. 1338-1351
[9] I.K. Fodor, C. Kamath, (2003), "Denoising through wavelet shrinkage: An empirical study," J.
Electron. Imag., Vol. 12, No. 1, pp. 151-160
[10] E. Hostalkova, O. Vysata, A. Prochazka, (2007), "Multi-Dimensional Biomedical Image De-Noising
using Haar Transform," 15th International Conference on Digital Signal Processing, pp. 175-178
[11] C.B. Smith, S. Agaian, D. Akopian, (2008), "A Wavelet-Denoising Approach Using Polynomial
Threshold Operators," IEEE Signal Processing Letters, Vol. 15, pp. 906-909
[12] S. Sathyanarayana, D. Akopian, S. Agaian, (2011), "An adaptive LMS technique for wavelet
polynomial threshold denoising," SPIE Mobile Multimedia/Image Processing, Security, and
Applications, Vol. 8063, pp. 806308:1-10
[13] M. Chan, S. Sathyanarayana, D. Akopian, S. Agaian, "Application of wavelet polynomial threshold
for interpolation and denoising in bioimaging," SPIE Mobile Multimedia/Image Processing, Security,
and Applications, Vol. 8063, pp. 80630Z:1-12
[14] D.L. Donoho, I.M. Johnstone, (1994), "Ideal spatial adaptation via wavelet shrinkage," Biometrika,
Vol. 81, pp. 425-455
[15] D.G. Goring, V.I. Nikora, (2003), "Discussion of 'Despiking Acoustic Doppler Velocimeter Data',"
J. Hydraul. Eng., Vol. 129, No. 6, pp. 484-487
[16] D. Akopian, J. Astola, (2005), "An optimal nonlinear extension of linear filters based on distributed
arithmetic," IEEE Transactions on Image Processing, Vol. 14, No. 5, pp. 616-623
AUTHORS
Jafet Morales is an electrical engineering Ph.D. candidate at the University of Texas
at San Antonio (UTSA) in the area of digital signal processing. He received his M.S.
degree in computer engineering from St. Mary’s University at San Antonio, Texas,
where he wrote a thesis on fundus image quality assessment. He has seven
publications in signal processing, computer science, and physics. His current
interests include WLAN-based indoor positioning, digital image noise removal
techniques, human activity recognition from motion-tracking device signals, and automated messaging
systems.
David Akopian is a professor at the University of Texas at San Antonio (UTSA).
Prior to joining UTSA, he was a Senior Research Engineer and Specialist with
Nokia Corporation from 1999 to 2003. From 1993 to 1999 he was a member of the
teaching and research staff of Tampere University of Technology, Finland, where he
also received a Ph.D. degree in electrical engineering. Dr. Akopian’s current research
interests include digital signal processing algorithms for communication, positioning
and navigation algorithms, and dedicated hardware architectures and platforms for
healthcare applications. He authored and co-authored more than 35 patents and 140 publications. He served
in organizing and program committees of many IEEE conferences and co-chairs SPIE’s annual Multimedia
on Mobile Devices conference. His research has been supported by the National Science Foundation,
National Institutes of Health, U.S. Air Force, U.S. Navy, and the Texas Higher Education Coordinating Board.
Sos Agaian is the Peter T. Flawn Professor of Electrical and Computer Engineering
at the University of Texas at San Antonio and Professor at the University of Texas
Health Science Center, San Antonio. Dr. Agaian received the M.S. degree (summa
cum laude) in mathematics and mechanics from Yerevan State University, Armenia,
the Ph.D. degree in math and physics from the Steklov Institute of Mathematics,
Russian Academy of Sciences, and the Doctor of Engineering Sciences degree from
the Institute of Control Systems, Russian Academy of Sciences. He has authored
more than 500 scientific papers and 7 books, and holds 14 patents. He is a Fellow of
SPIE, the American Association for the Advancement of Science (AAAS), and Imaging Sciences and
Technology (IS&T). He also serves as a foreign member of the Armenian National Academy. He is the
recipient of the MAEStro Educator of the Year award, sponsored by the Society of Mexican American
Engineers. Various technologies he has invented have been adopted by the U.S. government and across
multiple disciplines, and have been commercialized by industry. He is an Editorial Board Member of the
Journal of Pattern Recognition and Image Analysis and an Associate Editor for several journals, including
the Journal of Electronic Imaging (SPIE, IS&T) and IEEE Systems Journal. His research interests are in
multimedia processing, imaging systems, information security, artificial intelligence, computer vision, 3D
imaging sensors, image fusion, and biomedical and health informatics.