Adaptive Median Filters
Elements of visual perception
Representing Digital Images
Spatial and Intensity Resolution
Cones and Rods
Brightness Adaptation
An adaptive method for noise removal from real world images (IAEME Publication)
The document summarizes an adaptive method for noise removal from real world images. It proposes modifying the bilateral filter, which considers both spatial and intensity distances between pixels. The modified filter adapts its strength based on the local noise level in the image. It estimates the smoothing parameter by analyzing noise strength factors within blocks of different sizes. This helps determine the appropriate block size to use for a given image region. The filter aims to remove Gaussian noise while preserving edges and details to enhance image quality. Experimental results show it performs well across different images for a wide range of noise levels.
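A minimal NumPy sketch of the plain bilateral filter described above; the function name and parameters (sigma_s, sigma_r) are illustrative, and the paper's adaptive variant would tune sigma_r from a local, block-based noise estimate rather than fixing it:

```python
import numpy as np

def bilateral(img, radius=3, sigma_s=2.0, sigma_r=20.0):
    """Plain bilateral filter: each weight is the product of a spatial
    Gaussian (distance between pixels) and a range Gaussian (difference
    between intensities), so strong edges are left intact."""
    H, W = img.shape
    out = np.empty((H, W))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    p = np.pad(img.astype(np.float64), radius, mode='reflect')
    for i in range(H):
        for j in range(W):
            win = p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # The adaptive variant would set sigma_r from a local noise
            # estimate; here it is a fixed placeholder.
            rng = np.exp(-(win - p[i + radius, j + radius]) ** 2
                         / (2 * sigma_r ** 2))
            w = spatial * rng
            out[i, j] = (w * win).sum() / w.sum()
    return out
```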
This document discusses optimizing image convolution operations for GPUs using CUDA. It describes how to implement a separable convolution filter in two passes, one for rows and one for columns. This reduces redundant data loads compared to a naive single-pass implementation. The document also discusses techniques like loading multiple pixels per thread and padding thread blocks to achieve coalesced global memory accesses and avoid idle threads when processing boundary pixels. Overall, the key optimizations are using a separable filter, loading multiple pixels per thread, and padding for coalesced memory access.
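A NumPy sketch of the row-pass/column-pass idea, to make the cost saving concrete; the GPU-specific parts (shared memory, coalesced loads, block padding) have no CPU analogue here:

```python
import numpy as np

def convolve_separable(img, k1d):
    """Two-pass separable convolution: filter all rows, then all columns.
    An N-tap separable kernel costs 2N multiply-adds per pixel instead of
    the N*N of a naive single-pass 2-D convolution."""
    n, pad = len(k1d), len(k1d) // 2
    # Row pass
    t = np.pad(img.astype(np.float64), ((0, 0), (pad, pad)), mode='edge')
    rows = sum(k1d[i] * t[:, i:i + img.shape[1]] for i in range(n))
    # Column pass on the row-filtered intermediate
    t = np.pad(rows, ((pad, pad), (0, 0)), mode='edge')
    return sum(k1d[i] * t[i:i + img.shape[0], :] for i in range(n))

# Example: normalized 5-tap binomial (Gaussian-like) kernel
g = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
```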
This document discusses image compression using the discrete cosine transform (DCT). It develops simple Mathematica functions to compute the 1D and 2D DCT. The 1D DCT transforms a list of real numbers into elementary frequency components. It is computed via matrix multiplication or using the discrete Fourier transform with twiddle factors. The 2D DCT applies the 1D DCT to rows and then columns, making it separable. These functions illustrate how Mathematica can be used to prototype image processing algorithms.
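The separability property is easy to show with SciPy's 1-D DCT applied along each axis in turn, mirroring the Mathematica prototype described above:

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    # 2-D DCT via separability: 1-D DCT over columns, then over rows
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

block = np.random.rand(8, 8)
assert np.allclose(idct2(dct2(block)), block)  # round trip recovers the block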
The document discusses image sampling and quantization. It defines a digital image as a discrete 2D array containing intensity values of finite bits. A digital image is formed by sampling a continuous image, which involves multiplying it by a comb function of discrete delta pulses, yielding discrete image values. Quantization further discretizes the intensity values into a finite set of values. For accurate image reconstruction, the sampling frequency must be greater than twice the maximum image frequency, as stated by the sampling theorem.
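A minimal uniform quantizer illustrating the intensity-discretization step; the 8-bit input range and mid-point reconstruction are conventional assumptions, not taken from the document:

```python
import numpy as np

def quantize(img, bits):
    """Uniform quantization of intensities in [0, 255] to 2**bits levels."""
    levels = 2 ** bits
    step = 256.0 / levels
    idx = np.floor(img / step)                    # which quantization cell
    return ((idx + 0.5) * step).astype(np.uint8)  # mid-point reconstruction
```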
The document proposes a hybrid method called Wavelet Embedded Anisotropic Diffusion (WEAD) for image denoising. WEAD is a two-stage filter that first applies anisotropic diffusion to reduce noise, followed by wavelet-based Bayesian shrinkage. This reduces the convergence time of anisotropic diffusion, allowing the image to be denoised with less blurring compared to anisotropic diffusion or wavelet methods alone. Experimental results on various images demonstrate that WEAD achieves better denoising performance than anisotropic diffusion or Bayesian shrinkage methods, as measured by higher PSNR and SSIM scores and fewer required iterations.
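A sketch of one Perona-Malik diffusion iteration, i.e. only the first stage of the WEAD pipeline described above; kappa and lam are placeholder parameters, and the wavelet Bayesian-shrinkage stage is not shown:

```python
import numpy as np

def perona_malik_step(u, kappa=15.0, lam=0.2):
    """One anisotropic-diffusion iteration: smoothing is driven by the
    four neighbour differences, damped where gradients are strong."""
    dN = np.roll(u, 1, axis=0) - u   # np.roll wraps at borders; a
    dS = np.roll(u, -1, axis=0) - u  # reflected padding is the usual
    dE = np.roll(u, -1, axis=1) - u  # refinement
    dW = np.roll(u, 1, axis=1) - u
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    return u + lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
```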
InTech: multiple-wavelength_holographic_interferometry_with_tunable_laser_diodes (Meyli Valin Fernández)
Multiple-wavelength holographic interferometry uses tunable laser diodes to measure large step heights with high accuracy. Holograms are recorded at different wavelengths, generating phase differences with synthetic wavelengths from 0.4637 mm to 129.1 mm. The 129.1 mm wavelength allows measuring a 32 mm step height, while the 0.463 mm wavelength provides 0.01 mm measurement accuracy. Recursive calculations using phase differences from multiple wavelengths eliminate 2π ambiguities, enabling measurement of the 32 mm step with 0.01 mm accuracy. Precise knowledge of the recording wavelengths is required for correct phase unwrapping.
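The synthetic wavelength behind these numbers comes from two recording wavelengths as Lambda = lambda1 * lambda2 / |lambda1 - lambda2|; the diode wavelengths below are placeholders, not the paper's values:

```python
# Synthetic wavelength of two recording wavelengths.
l1, l2 = 670.0e-9, 671.0e-9          # hypothetical laser-diode wavelengths (m)
synthetic = l1 * l2 / abs(l1 - l2)   # about 0.45 mm for this pair
print(f"synthetic wavelength = {synthetic * 1e3:.3f} mm")
```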
This document provides an overview of digital image fundamentals and operations. It defines what a digital image is, how it is represented as a matrix, and common image types like RGB, grayscale, and binary. Pixels, resolution, neighborhoods, and basic relationships between pixels are discussed. The document also covers different types of image operations including point, local, and global operations as well as examples like arithmetic, logical, and geometric transformations. Finally, it introduces concepts of linear and nonlinear operations and announces the topic of the next lecture on image enhancement in the spatial domain.
This document discusses and compares different thresholding techniques for image denoising using wavelet transforms. It introduces the concept of image denoising using wavelet transforms, which involves applying a forward wavelet transform, estimating clean coefficients using thresholding, and applying the inverse transform. It then describes several common thresholding methods - hard, soft, universal, improved, Bayes shrink, and neigh shrink. Simulation results on test images corrupted with additive white Gaussian noise show that the proposed improved thresholding technique achieves lower MSE and higher PSNR than the universal hard thresholding method, demonstrating better noise removal performance while preserving image details.
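A minimal PyWavelets sketch of the transform/threshold/inverse pipeline described above, using soft thresholding with the universal threshold and the standard MAD noise estimate; the improved threshold the document proposes is not reproduced here:

```python
import numpy as np
import pywt

def denoise_soft(img, wavelet='db4', level=2):
    """Forward DWT -> soft-threshold detail coefficients -> inverse DWT."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # MAD noise estimate
    t = sigma * np.sqrt(2 * np.log(img.size))            # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, t, mode='soft') for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```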
Robust Super-Resolution by minimizing a Gaussian-weighted L2 error norm (Tuan Q. Pham)
1. The document proposes a robust super-resolution algorithm that minimizes a Gaussian-weighted L2 error norm. This suppresses the influence of intensity outliers without requiring additional regularization.
2. The algorithm is based on maximum likelihood estimation but uses a Gaussian error norm instead of a quadratic norm. This makes the algorithm robust against outliers by reducing their influence to zero (see the sketch after this list).
3. The effectiveness of the proposed algorithm is demonstrated on real infrared image sequences with severe aliasing and intensity outliers, where it outperforms other methods in handling outliers and noise.
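As referenced in item 2, a minimal illustration of the Gaussian error norm's influence function, the derivative of rho(e) = 1 - exp(-e^2 / (2 s^2)); unlike the quadratic norm's unbounded influence, it decays to zero for large errors:

```python
import numpy as np

def gaussian_influence(e, s=1.0):
    # d/de [1 - exp(-e^2 / (2 s^2))] = (e / s^2) * exp(-e^2 / (2 s^2))
    return (e / s ** 2) * np.exp(-e ** 2 / (2 * s ** 2))

e = np.array([0.5, 1.0, 5.0, 50.0])
print(gaussian_influence(e))   # large errors contribute almost nothing
```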
This document discusses digital image processing concepts including:
- Image acquisition and representation, including sampling and quantization of images. CCD arrays are commonly used in digital cameras to capture images as arrays of pixels.
- A simple image formation model where the intensity of a pixel is a function of illumination and reflectance at that point. Typical ranges of illumination and reflectance are provided.
- Image interpolation techniques like nearest neighbor, bilinear, and bicubic interpolation which are used to increase or decrease the number of pixels in a digital image. Examples of applying these techniques are shown.
- Basic relationships between pixels including adjacency, paths, regions, boundaries, and distance measures like Euclidean, city block, and chessboard distance (a short sketch of these distance measures follows below).
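A small sketch of the three distance measures, with a worked pair of points:

```python
def d_euclidean(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d_city_block(p, q):   # D4 distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_chessboard(p, q):   # D8 distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q), d_city_block(p, q), d_chessboard(p, q))  # 5.0 7 4
```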
Labview with dwt for denoising the blurred biometric images (ijcsa)
This paper presents and investigates the denoising of blurred biometric (fingerprint) images corrupted with Gaussian noise using LabVIEW. The proposed algorithm uses a discrete wavelet transform (DWT) to divide the image into parts, which increases the processing speed for large biometric images. The work comprises two tasks: the first designs the LabVIEW system that calculates and presents the approximation coefficients, by which the image's blur factor is reduced to a minimum value according to the proposed algorithm; the second removes the image's noise by calculating the regression coefficients according to the Bayesian-shrinkage estimation method.
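A minimal sketch of the single-level DWT split the paper relies on, using PyWavelets; the stand-in image and the wavelet name ('sym4') are illustrative assumptions, and the Bayesian-shrinkage stage is not shown:

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)             # stand-in for a fingerprint image
cA, (cH, cV, cD) = pywt.dwt2(img, 'sym4')  # approximation + detail subbands
# cA is roughly quarter-size: processing it instead of the full image is
# what speeds up handling of large biometric images.
print(cA.shape)
```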
An efficient approach to wavelet image denoising (ijcsit)
This document proposes an efficient approach to wavelet image denoising based on minimizing mean squared error. It uses Stein's unbiased risk estimate (SURE), which provides an accurate estimate of mean squared error without needing the original noiseless image. The key idea is to express the thresholding function as a linear combination of thresholds, allowing the minimization problem to be solved via a simple linear system rather than a nonlinear optimization. Experimental results show the proposed method achieves superior image quality compared to other techniques like BayesShrink and VisuShrink.
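For context, a sketch of the classical SURE-based threshold selection (assuming unit noise variance); the paper's contribution replaces this brute-force search with a simple linear system over a combination of thresholds, which is not reproduced here:

```python
import numpy as np

def sure_soft(x, t):
    """Stein's unbiased risk estimate of the MSE of soft thresholding at
    threshold t, for coefficients x with unit noise variance."""
    n = x.size
    return n - 2 * np.sum(np.abs(x) <= t) + np.sum(np.minimum(np.abs(x), t) ** 2)

def sure_threshold(x):
    # Evaluate SURE at the sorted |x| values and keep the minimizer
    cands = np.sort(np.abs(x))
    risks = [sure_soft(x, t) for t in cands]
    return cands[int(np.argmin(risks))]
```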
Otsu thresholding is an effective thresholding method for images with low signal-to-noise ratios and low contrast. It assumes a bimodal histogram with two peaks, foreground and background, and finds a threshold that minimizes intra-class variance. 2D Otsu thresholding uses a joint 2D histogram of pixel values and local neighborhood averages to find an optimal threshold vector, improving segmentation especially for noisy images. The algorithm calculates the 2D histogram, finds probabilities and mean values, and selects the threshold pair that maximizes between-class variance. On a noisy test image, 2D Otsu thresholding produces a clean binary segmentation with the threshold pair (171, 171).
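A sketch of the classical 1-D Otsu computation described above; the 2-D variant replaces this grey-level histogram with the joint (pixel value, neighbourhood mean) histogram and searches for a threshold pair, but the criterion is the same:

```python
import numpy as np

def otsu_threshold(img):
    """Pick the grey level maximizing between-class variance
    (equivalently minimizing intra-class variance)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # probability of class 0
    mu = np.cumsum(p * np.arange(256))   # cumulative means
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b2))
```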
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
1) The document discusses wavelet transforms as a recent algorithm for image compression. Wavelet transforms can capture variations at different scales in an image, making them well-suited for reducing spatial redundancy.
2) A typical lossy image compression system uses four main components - source encoding, thresholding, quantization, and entropy encoding - to achieve compression by removing different types of redundancy in images.
3) Experimental results on the Lena test image showed that soft thresholding followed by quantization achieved higher peak signal-to-noise ratios than hard thresholding and quantization, demonstrating the effectiveness of wavelet transforms for image compression.
Lecture 3: image sampling and quantization (VARUN KUMAR)
This document discusses image sampling and quantization. It begins by covering 2D sampling of images, including the spectrum of sampled images and the Nyquist criteria for proper reconstruction. It then covers quantization, describing how continuous variables are mapped to discrete levels. The document focuses on Lloyd-Max quantization, which minimizes mean square error for a given number of quantization levels. It provides equations for calculating optimal decision levels and reconstruction levels to design an optimum quantizer based on the probability density function of the signal. Common probability densities used for image data, such as Gaussian, Laplacian, and uniform, are also covered.
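A small empirical sketch of the Lloyd-Max conditions quoted above (decision levels are midpoints of neighbouring reconstruction levels; each reconstruction level is the centroid of its region); the initialisation and iteration count are arbitrary choices:

```python
import numpy as np

def lloyd_max(samples, L=4, iters=50):
    """Design an L-level Lloyd-Max quantizer from sample data."""
    r = np.quantile(samples, np.linspace(0.1, 0.9, L))   # initial levels
    for _ in range(iters):
        t = (r[:-1] + r[1:]) / 2                         # decision levels
        idx = np.searchsorted(t, samples)                # assign regions
        r = np.array([samples[idx == k].mean() if np.any(idx == k) else r[k]
                      for k in range(L)])                # region centroids
    return t, r
```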
Object Shape Representation by Kernel Density Feature Points Estimator (cscpconf)
This paper introduces an object shape representation using the Kernel Density Feature Points Estimator (KDFPE). In this method, the density of feature points is obtained within defined rings around the centroid of the image, and the KDFPE is then applied to the resulting feature vector. KDFPE is invariant to translation, scale, and rotation. The representation shows an improved retrieval rate compared to the Density Histogram of Feature Points (DHFP) method; an analytic comparison with DHFP is given to demonstrate its robustness.
Random Valued Impulse Noise Removal in Colour Images using Adaptive Threshold... (IDES Editor)
To remove random-valued impulse noise from colour images, an efficient impulse detection and filtering scheme is presented. The locally adaptive threshold for impulse detection is derived from the pixels of the filtering window. The restoration of the noisy pixel is done on the basis of brightness and chromaticity information obtained from the neighbouring pixels in the filtering window. Experimental results demonstrate that the proposed scheme yields much superior performance in comparison with other colour image filtering methods.
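A much-simplified grayscale sketch of the detect-then-filter idea described above; the fixed threshold and window radius are placeholder choices, whereas the paper derives the threshold adaptively from the filtering window and restores colour pixels from brightness and chromaticity:

```python
import numpy as np

def remove_impulses(img, thresh=40, radius=1):
    """Switching median filter: flag a pixel as an impulse if it deviates
    from the window median by more than a threshold, and replace only
    flagged pixels (noise-free pixels pass through unchanged)."""
    H, W = img.shape
    out = img.copy()
    p = np.pad(img, radius, mode='reflect')
    for i in range(H):
        for j in range(W):
            win = p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            med = np.median(win)
            if abs(float(img[i, j]) - med) > thresh:   # impulse detected
                out[i, j] = med
    return out
```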
An image can be seen as a matrix I, where I(x, y) is the brightness of the pixel located at coordinates (x, y). In a convolutional neural network, the kernel is simply a filter used to extract features from the images.
The document discusses various concepts related to digital image processing including:
1) The relationships between pixels in an image including 4-neighbors, 8-neighbors, and m-neighbors of a pixel.
2) The concepts of adjacency and connectivity between pixels based on their intensity values and whether they are neighbors.
3) Computing the shortest path between two pixels using 4-, 8-, or m-adjacency, with examples calculating these paths (a BFS-based sketch follows below).
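A small sketch of path computation under 4-adjacency, as referenced above; V is the set of intensity values over which adjacency is defined, and swapping in the eight neighbour offsets gives the 8-path (m-adjacency additionally requires the mixed-adjacency test):

```python
from collections import deque

def shortest_4_path(grid, V, start, goal):
    """BFS shortest 4-path between two pixels, stepping only through
    pixels whose intensity is in the allowed set V."""
    H, W = len(grid), len(grid[0])
    dist = {start: 0}
    q = deque([start])
    while q:
        x, y = q.popleft()
        if (x, y) == goal:
            return dist[(x, y)]
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):  # 4-neighbours
            if 0 <= nx < H and 0 <= ny < W and (nx, ny) not in dist \
                    and grid[nx][ny] in V:
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return None  # no 4-path exists

# Example with V = {1} on a small binary grid: length-4 path
print(shortest_4_path([[1, 1, 0], [0, 1, 0], [0, 1, 1]], {1}, (0, 0), (2, 2)))
```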
Performance Improvement of Vector Quantization with Bit-parallelism Hardware (CSCJournals)
Vector quantization is an elementary technique for image compression; however, searching for the nearest codeword in a codebook is time-consuming. In this work, we propose a hardware-based scheme by adopting bit-parallelism to prune unnecessary codewords. The new scheme uses a “Bit-mapped Look-up Table” to represent the positional information of the codewords. The lookup procedure can simply refer to the bitmaps to find the candidate codewords. Our simulation results further confirm the effectiveness of the proposed scheme.
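A baseline exhaustive nearest-codeword search, which is exactly the step the proposed bit-parallel look-up table prunes; the codebook size and block dimensions below are arbitrary:

```python
import numpy as np

def nearest_codeword(block, codebook):
    """Return the index of the codeword with minimum Euclidean distance
    to the input block (the time-consuming step in VQ encoding)."""
    d2 = ((codebook - block) ** 2).sum(axis=1)   # squared distances
    return int(np.argmin(d2))

codebook = np.random.rand(256, 16)   # 256 codewords for 4x4 blocks
block = np.random.rand(16)
print(nearest_codeword(block, codebook))
```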
This document provides an introduction to digital image processing. It discusses key topics like image representation as matrices, image digitization which involves sampling and quantization, and the basic steps in digital image processing such as image acquisition, preprocessing, segmentation, feature extraction, recognition and interpretation. Importance of image processing is highlighted for applications like remote sensing, machine vision, and medical imaging. Common techniques like noise filtering, contrast enhancement, compression and their importance are also summarized.
MULTIFOCUS IMAGE FUSION USING MULTIRESOLUTION APPROACH WITH BILATERAL GRADIEN... (cscpconf)
Fusion of two or more images is required when images are captured using different sensors, different modalities, or different camera settings, in order to produce an image more suitable for computer processing and human visual perception. Because the optical lenses in cameras have a limited depth of focus, it is not possible to acquire a single image in which all objects are in focus. Multifocus image fusion creates a single all-in-focus image by combining the relevant information from two or more images. Since sharp regions carry more information than blurred ones, image sharpness is taken as the relevant information when framing the fusion rule. Many existing algorithms use contrast or high local energy as the measure of local sharpness; in practice, particularly in multimodal image fusion, this assumption does not hold. This paper proposes a method that combines a multiresolution transform with a local phase coherence measure to estimate sharpness. The fusion performance is evaluated with mutual information, edge association, and spatial frequency as quality metrics, and compared with the Laplacian pyramid, DWT (Discrete Wavelet Transform), and bilateral-gradient-based sharpness criterion methods. The results show that the proposed algorithm performs better than the existing ones.
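A toy fusion rule in the spirit described above, with local variance standing in for the sharpness measure; the paper instead measures sharpness with a multiresolution transform and local phase coherence:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_multifocus(a, b, win=7):
    """Choose, per pixel, the source image with higher local sharpness,
    approximated here by local intensity variance."""
    def local_var(x):
        x = x.astype(np.float64)
        m = uniform_filter(x, win)
        return uniform_filter(x ** 2, win) - m ** 2
    return np.where(local_var(a) >= local_var(b), a, b)
```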
Research Inventy: International Journal of Engineering and Science is publis... (researchinventy)
The document summarizes an improved impulse noise detection and filtering scheme based on an adaptive weighted median filter. The proposed scheme uses an improved impulse noise detector that applies a normalized absolute difference within a filtering window and then removes detected impulse noise using an adaptive switching median filter. Extensive simulation results on standard test images show that the proposed scheme significantly outperforms other median filters in terms of PSNR and MAE for random-valued impulse noise removal. The proposed detection scheme distinguishes noisy and noise-free pixels more efficiently compared to other methods.
This document discusses image transforms and processing techniques in computer vision. It introduces important 2D linear transforms like the discrete cosine transform. It explains how transforms allow recovering the original image using the inverse transform. Examples are provided on image denoising using DCT. Probabilistic methods for analyzing images like calculating mean, variance and moments are also described. The document contrasts processing images in the spatial vs transform domains. Several basic intensity transformation functions are illustrated including negatives, log transforms, and piecewise linear transformations. Histogram processing techniques like equalization and matching are explained in detail with examples.
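Of the techniques listed, histogram equalization is compact enough to sketch; this assumes an 8-bit grayscale image:

```python
import numpy as np

def equalize(img):
    """Histogram equalization: map each grey level through the scaled
    cumulative distribution function of the image histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]
```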
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES (ijcseit)
Grey Level Co-occurrence Matrices (GLCM) are one of the earliest techniques used for image texture analysis. This paper defines a new feature, the trace, extracted from the GLCM, and discusses its implications for texture analysis in the context of Content-Based Image Retrieval (CBIR). The theoretical extension of the GLCM to n-dimensional grey-scale images is also discussed. The results indicate that trace features outperform Haralick features when applied to CBIR.
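A sketch of extracting a trace feature from a GLCM with scikit-image; reading "trace" as the diagonal sum of the normalised co-occurrence matrix is my assumption, and the offset and level count are arbitrary:

```python
import numpy as np
from skimage.feature import graycomatrix

img = np.random.randint(0, 8, (64, 64), dtype=np.uint8)  # 8 grey levels
# Co-occurrence at distance 1, angle 0 (horizontal neighbours)
glcm = graycomatrix(img, distances=[1], angles=[0], levels=8, normed=True)
trace = np.trace(glcm[:, :, 0, 0])   # mass on the diagonal: equal-level pairs
print(trace)
```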
chAPTER1CV.pptx is about computer vision in artificial intelligence (shesnasuneer)
This document provides an overview of digital image processing and computer vision. It discusses:
1. Low-level image processing techniques like pre-processing, segmentation, and object description that use little domain knowledge.
2. High-level image understanding techniques based on knowledge, goals, and plans that aim to imitate human cognition.
3. Fundamental concepts in digital image processing including image functions, sampling, quantization, and properties. Mathematical tools from linear systems theory, transforms, and statistics are used.
computervision1.pptx is about computer vision (shesnasuneer)
This document provides an overview of digital image processing and computer vision. It discusses:
1. Low-level image processing techniques like pre-processing, segmentation, and object description that use limited domain knowledge.
2. High-level image understanding techniques based on knowledge, goals, and plans that aim to imitate human cognition through artificial intelligence methods.
3. Fundamental concepts in digital image processing including image functions, sampling, quantization, and properties like histograms and noise that are introduced and will be used throughout the course.
The document describes techniques for image texture analysis and segmentation. It proposes a methodology using constraint satisfaction neural networks (CSNN) to integrate region-based and edge-based texture segmentation. The methodology initializes the CSNN using fuzzy c-means clustering, then iteratively updates the neuron probabilities and edge maps to refine the segmentation. Experimental results demonstrate improved segmentation from combining region and edge information.
This document discusses different techniques for image denoising using wavelet thresholding. It begins with an introduction to image denoising and the wavelet transform approach. Then it describes various thresholding methods used in wavelet-based image denoising, including hard, soft, universal, improved, Bayes shrink, and neigh shrink thresholding. It also reviews prior literature comparing these different techniques. Finally, it presents simulated results on test images comparing the performance of universal hard thresholding and improved thresholding based on mean squared error and peak signal-to-noise ratio metrics under varying levels of additive white Gaussian noise. The improved thresholding method achieved better denoising performance according to the quantitative metrics.
Mixed Spectra for Stable Signals from Discrete Observations (sipij)
This paper concerns continuous-time symmetric alpha-stable processes, which are inevitable in the modeling of certain signals with indefinitely increasing variance, particularly the case where the spectral measure is mixed: the sum of a continuous measure and a discrete measure. The goal is to estimate the spectral density of the continuous part from discrete observations of the signal. To that end, a method is proposed that samples the signal at periodic instants. Jackson's polynomial kernel is used to build a periodogram, which is then smoothed by two spectral windows that account for the width of the interval on which the spectral density is non-zero. This bypasses the aliasing phenomenon often encountered when estimating from discrete observations of a continuous-time process.
MIXED SPECTRA FOR STABLE SIGNALS FROM DISCRETE OBSERVATIONS (sipij)
This paper proposes a method to estimate the spectral density of a continuous-time stable alpha symmetric process from discrete observations of the process. Specifically, it considers when the spectral measurement is a mixture of a continuous component and discrete jumps. It samples the process at periodic times to create a periodogram, which is shown to be an asymptotically unbiased but inconsistent estimator. The periodogram is then smoothed using two spectral windows to account for the bandwidth of the spectral density, providing a consistent estimator of the spectral density at the jump points.
New Approach of Preprocessing For Numeral Recognition (IJERA Editor)
This paper proposes a new preprocessing approach for handwritten, printed, and isolated numeral characters. The approach reduces the size of the input image of each numeral by discarding redundant information, and it also reduces the number of features in the attribute vector produced by the feature extraction method. Numeral recognition is carried out using k-nearest-neighbour and multilayer perceptron techniques. The simulations obtained a good recognition rate in less running time.
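A generic k-nearest-neighbour numeral-recognition baseline on scikit-learn's 8x8 digits, shown only to make the classification step concrete; it does not include the paper's preprocessing or size-reduction stage:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)          # 8x8 numeral images, flattened
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(Xtr, ytr)
print(f"k-NN accuracy: {knn.score(Xte, yte):.3f}")
```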
ALEXANDER FRACTIONAL INTEGRAL FILTERING OF WAVELET COEFFICIENTS FOR IMAGE DEN... (sipij)
This paper proposes an efficient denoising algorithm that works well for images corrupted with Gaussian and speckle noise. The algorithm uses the Alexander fractional integral filter, constructed from fractional mask windows computed using the Alexander polynomial. Before the designed filter is applied, the corrupted image is decomposed using the symlet wavelet, and only the horizontal, vertical, and diagonal components are denoised with the Alexander integral filter. A significant increase in reconstruction quality was observed when the approach was applied to the wavelet-decomposed image rather than directly to the noisy image. Quantitatively, the results are evaluated using the peak signal-to-noise ratio (PSNR), which averaged 30.8059 for images corrupted with Gaussian noise and 36.52 for images corrupted with speckle noise, clearly outperforming existing methods.
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
The document proposes a new algorithm to reduce blocking artifacts in compressed images using a combination of the SAWS technique, Fuzzy Impulse Artifact Detection and Reduction Method (FIDRM), and Noise Adaptive Fuzzy Switching Median Filter (NAFSM). FIDRM uses fuzzy rules to detect noisy pixels, while NAFSM uses a median filter to correct pixels based on local information. Experimental results on test images show the proposed approach achieves better PSNR than other deblocking methods.
This document discusses image reconstruction from projections. It begins by introducing the image reconstruction problem and describes how taking projections from multiple angles can be used to reconstruct an image. It then covers the principles of computed tomography (CT), the Radon transform, and the Fourier-slice theorem. The key idea is that the Fourier transform of a projection is a slice of the 2D Fourier transform of the image. Finally, it describes how filtered back-projection can be used to reconstruct an image by filtering each projection with a ramp filter and back-projecting. Window functions are used to filter the ramp filter to reduce ringing artifacts.
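A short filtered back-projection sketch with scikit-image that matches the pipeline described (projections, ramp filtering, back-projection); the phantom and angle count are arbitrary choices:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)     # one projection per angle
# Filtered back-projection with a ramp filter; windowed variants such as
# 'hann' reduce the ringing artifacts the document mentions.
recon = iradon(sinogram, theta=theta, filter_name='ramp')
```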
INVERSION OF MAGNETIC ANOMALIES DUE TO 2-D CYLINDRICAL STRUCTURES – BY AN ARTIF... (ijsc)
The application of an Artificial Neural Network Committee Machine (ANNCM) to the inversion of magnetic anomalies caused by a long 2-D horizontal circular cylinder is presented. Although subsurface targets are of arbitrary shape, they are assumed to have regular geometrical shapes for convenience of mathematical analysis. The ANNCM inversion extracts the parameters of the causative subsurface targets, including the depth to the centre of the cylinder (Z), the inclination of the magnetic vector (Ɵ), and the constant term (A) comprising the radius (R) and the intensity of the magnetic field (I). The inversion method is demonstrated on a theoretical model with and without random noise, in order to study the effect of noise on the technique, and is then extended to real field data. The method ensures fairly accurate results even in the presence of noise. ANNCM analysis of a vertical magnetic anomaly near Karimnagar, Telangana, India, has shown satisfactory results in comparison with other inversion techniques in vogue. The statistics of the predicted parameters relative to the measured data show a low sum error (<9.58%) and a high correlation coefficient (R>91%), indicating good matching between the measured and predicted parameters.
The document discusses a method for 3D object recognition from 2D images using centroidal representation. It involves several steps: filtering and binarizing the image, detecting edges, calculating the object center point, extracting features around the centroid, and creating mathematical models using wavelet transforms and autoregression. Centroidal samples represent distances from the center to the boundary every 45 degrees. Wavelet transforms and autoregression are used to create scale and position invariant representations of the object for recognition.
This document describes algorithms for X-ray and maximum intensity projection (MIP) volume rendering. It provides pseudocode for the X-ray and MIP rendering algorithms, which involve casting rays through a volumetric dataset and accumulating or finding the maximum intensity values along each ray. Key steps include intersecting rays with the volume bounding box, trilinear interpolation of sample values, and scaling and rotating the rendered image. Code fragments are presented that implement functions for X-ray and MIP rendering based on these algorithms.
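For axis-aligned rays the two renderers collapse to simple reductions along the ray axis, sketched below; a general view direction would resample the volume with trilinear interpolation along each cast ray, as in the document's pseudocode:

```python
import numpy as np

volume = np.random.rand(64, 64, 64)   # stand-in volumetric dataset

mip = volume.max(axis=0)                      # maximum intensity projection
xray = volume.sum(axis=0) / volume.shape[0]   # averaged X-ray projection
```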
Similar to Optimal nonlocal means algorithm for denoising ultrasound image
Abnormalities of hormones and inflammatory cytokines in women affected with p... (Alexander Decker)
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for B2C e-commerce websites (Alexander Decker)
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in Nigerian banks (Alexander Decker)
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorem in generalized D*-metric spaces (Alexander Decker)
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
Trends of Salmonella and antibiotic resistance (Alexander Decker)
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifham (Alexander Decker)
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in Namibia (Alexander Decker)
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school children - Alexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banks - Alexander Decker
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in punjab - Alexander Decker
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of taylor’s and fayol’s management approaches for managing market... - Alexander Decker
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incremental - Alexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
A survey on live virtual machine migrations and its techniques - Alexander Decker
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in hadoop and mongo db - Alexander Decker
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Introduction of Cybersecurity with OSS at Code Europe 2024 - Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Driving Business Innovation: Latest Generative AI Advancements & Success Story - Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
UiPath Test Automation using UiPath Test Suite series, part 6 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
OpenID AuthZEN Interop Read Out - Authorization - David Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Digital Marketing Trends in 2024 | Guide for Staying Ahead - Wask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Skybuffer SAM4U tool for SAP license adoption - Tatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
TrustArc Webinar - 2024 Global Privacy Survey - TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Best 20 SEO Techniques To Improve Website Visibility In SERP - Pixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Generating privacy-protected synthetic data using Secludy and Milvus - Zilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Building Production Ready Search Pipelines with Spark and Milvus - Zilliz
Spark is a widely used ETL tool for processing, indexing and ingesting data to the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to the Milvus vector database for search serving.
Main news related to the CCS TSI 2023 (2023/1695) - Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
20240609 QFM020 Irresponsible AI Reading List May 2024
Optimal nonlocal means algorithm for denoising ultrasound image
Computer Engineering and Intelligent Systems www.iiste.org
ISSN 2222-1719 (Paper) ISSN 2222-2863 (Online)
Vol 3, No.3, 2012
Optimal Nonlocal Means Algorithm for Denoising Ultrasound Image
Md. Motiur Rahman¹, Md. Gauhar Arefin¹*, Mithun Kumar PK¹, Dr. Md. Shorif Uddin²
1. Dept. of Computer Science & Engineering, Mawlana Bhashani Science and Technology University, Santosh, Tangail-1902, Bangladesh
2. Dept. of Computer Science & Engineering, Jahangirnagar University, Savar, Dhaka-1342, Bangladesh
* E-mail: garefin005@gmail.com
Abstract
We propose a new measure for denoising an image by calculating the mean distance of all pixels in the non-local means (NL-means) algorithm. We compute and analyze the original NL-means algorithm, which totals all the patch distances, whereas our proposed algorithm calculates the mean of all patch distances and then takes their sum. The proposed algorithm exhibits better results than the existing NL-means algorithm.
Keywords: NL-means, Patches, Mean Value, Measurement Matrix.
1. Introduction
The non-local means algorithm systematically uses all the self-predictions that an image can provide [1], something local filters and frequency-domain filters cannot do. The non-local means (NL-means) approach was introduced by Buades et al. to denoise 2D natural images corrupted by additive white Gaussian noise [2]. The NL-means filter normally calculates the patch distances over the whole image, computes a weighted average of all the pixels in the image, and thereby denoises it [1][3]. We propose a method that denoises the image by calculating the mean of all patch distances, and does so better than the previous filter.
The aim is to recover the original image from a noisy measurement,

    v(i) = u(i) + n(i)    ……………(1)

where v(i) is the observed value, u(i) is the “original” value and n(i) is the noise perturbation at a pixel i. A standard way to model the effect of noise on a digital image is to add Gaussian white noise; in that case the n(i) are i.i.d. Gaussian values with zero mean and variance σ² [2].

Ideally, a denoising method should not change the original image; in practice, such methods accept some loss of detail in exchange for reducing the noise in the image [4]. Human vision relies largely on recognizing the intensities of the pixel values of an image [5][6]. That is why the proposed method calculates mean patch distances instead of totalling the raw patch distances.
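As a concrete illustration of the degradation model of equation (1), the following is a minimal Python/NumPy sketch; it is not part of the original paper, and the function name, the noise level and the 8-bit intensity range are assumptions.

import numpy as np

def degrade(u, sigma=20.0, seed=0):
    """Simulate the measurement model of Eq. (1), v(i) = u(i) + n(i),
    with n(i) i.i.d. zero-mean Gaussian noise of variance sigma**2."""
    rng = np.random.default_rng(seed)
    v = u.astype(np.float64) + rng.normal(0.0, sigma, size=u.shape)
    return np.clip(v, 0.0, 255.0)  # keep values in the 8-bit display range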
Section 2 introduces the NL-means algorithm. Section 3 discusses the NL-means algorithm with mean-distance calculation over pixel neighborhoods [7]. Section 4 compares the performance of the original NL-means algorithm and the proposed NL-means algorithm.
2. Non-Local Means Algorithm
2.1. Non local means
Recently, a new patch-based non-local recovery paradigm was proposed by Buades et al. [2]. This paradigm replaces the local comparison of pixels with the non-local comparison of patches. The weight given to a pixel depends neither on its spatial distance nor on its intensity distance alone; the NL-means filter analyzes the patterns around the pixels.
2.2 Algorithm
In the original NL-means algorithm, the restored intensity NL(u)(x_i) of a pixel x_i ∈ Ω^dim is the weighted average of all the pixel intensities u(x_j) in the image Ω^dim (a bounded domain Ω ⊆ R^dim):

    NL(u)(x_i) = \sum_{x_j \in \Omega^{dim}} w(x_i, x_j) \, u(x_j)    ……………(2)
where the family of weights {w(x_i, x_j)}_j depends on the similarity between the pixels x_i and x_j and satisfies the usual conditions 0 ≤ w(x_i, x_j) ≤ 1 and Σ_j w(x_i, x_j) = 1. The weight evaluates the similarity between the intensities of the local neighborhoods (patches) N_i and N_j centered on pixels x_i and x_j.

For each pixel x_j in Δ_i, the Gaussian-weighted Euclidean distance ‖·‖²_{2,a} is computed between the two patches u(N_j) and u(N_i) as explained in [8]. This distance is the traditional L2-norm convolved with a Gaussian kernel of standard deviation a. The kernel assigns spatial weights to the patch elements: the central pixels in the patch contribute more to the distance than the pixels surrounding the central pixel.
The weights w(x_i, x_j) are then computed as follows:

    w(x_i, x_j) = \frac{1}{Z_i} \exp\left( - \frac{\lVert u(N_i) - u(N_j) \rVert_{2,a}^{2}}{h^{2}} \right)    …………..(3)
where Zi is the normalizing constant and h acts as a filtering parameter controlling the decay of the
exponential function.
    Z_i = \sum_{x_j} \exp\left( - \frac{\lVert u(N_i) - u(N_j) \rVert_{2,a}^{2}}{h^{2}} \right)    ……….....(4)
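The following Python/NumPy sketch shows one way to compute the Gaussian-weighted patch distance and the weight of equations (3) and (4); it is illustrative rather than the authors' code, and the helper names gaussian_kernel and nlm_weight are our own.

import numpy as np

def gaussian_kernel(size, a):
    """2D Gaussian kernel of standard deviation `a`; it supplies the spatial
    weights inside a patch, so central pixels count more (cf. Eq. (3))."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * a**2))
    return k / k.sum()

def nlm_weight(patch_i, patch_j, kernel, h):
    """Unnormalized weight of Eq. (3): exp(-||u(N_i)-u(N_j)||^2_{2,a} / h^2).
    Dividing by Z_i, the sum of these weights over the search region
    (Eq. (4)), yields w(x_i, x_j)."""
    d2 = np.sum(kernel * (patch_i - patch_j) ** 2)
    return np.exp(-d2 / h**2)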
NL-means not only compares the gray level at a single point but also the geometrical configuration of the whole neighborhood [4]. Fig. 1 illustrates this fact: the pixel q3 has the same gray level as pixel p, but the neighborhoods are very different, and therefore the weight w(p, q3) is nearly zero [9][10].
3. NL-means algorithm with mean distance calculation
The previous section described the original NL-means algorithm: in equation (2) the estimated value NL(u)(x_i) for a pixel x_i is computed as a weighted average of all the pixels in the image. In the proposed NL-means algorithm we instead determine NL(u)(x_i) from the weighted mean distance of all the pixels in the image: the algorithm computes the mean distances of the neighborhoods, totals the distances, and then averages the neighborhood weights.

As in NL-means, the weight of the current pixel depends neither on spatial distance nor on intensity distance alone; the filter analyzes the patterns around the pixels. The similarity between two pixels x_i and x_j depends on the similarity of the intensity gray-level vectors u(N_i) and u(N_j), where N_k denotes a square neighborhood of fixed size centered at a pixel k [3]. This similarity is determined as a decreasing function of the weighted Euclidean distance of equation (3), where a > 0 is the standard deviation of the Gaussian kernel. In the distance calculation we compute the mean distance of all neighborhoods and then calculate the total of all distances.
Figure 1: Similar neighborhood pixels give large weights w(p,q1) and w(p,q2), while very different neighborhoods give a small weight w(p,q3).
The proposed weight is

    w(x_i, x_j) = \frac{1}{Z_i} \exp\left( - \frac{\mathrm{mean}\left( \lVert u(N_i) - u(N_j) \rVert_{2,a}^{2} \right) \cdot \mathrm{size}(patch)}{h^{2}} \right)
After calculating the mean distance between the intensities of the local neighborhoods (patches) N_i and N_j centered on pixels x_i and x_j, the mean must be multiplied by the size of the local neighborhood in order to recover the actual distances of all neighborhoods; a sketch of this weight is given below.
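The sketch below transcribes the printed formula literally; since the printed formula is ambiguous, the exact placement of the mean and of the size(patch) factor should be treated as an assumption. Note that an element-wise mean multiplied by the element count reduces to the plain sum, so the practical difference presumably lies in how the mean is taken (for instance, over the distances in the search window).

import numpy as np

def proposed_weight(patch_i, patch_j, kernel, h):
    """One possible reading of the paper's modified weight (Section 3):
    average the Gaussian-weighted squared differences over the patch,
    then rescale by size(patch) to recover the actual neighborhood
    distance. Reconstructed from the text; treat as an assumption."""
    sq = kernel * (patch_i - patch_j) ** 2
    # Mean(||u(N_i)-u(N_j)||^2_{2,a}) * size(patch), as printed.
    d2 = sq.mean() * patch_i.size
    return np.exp(-d2 / h**2)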
From Figure 2 we can see that the pixel q4 has the same gray level as pixel p, but its neighborhood makes the weight w(p,q4) small. Here the proposed NL-means algorithm turns the q4 pixel intensity down and the q3 pixel intensity up [11]. As a result the image is visually more readable and the noise is removed.
The original NL-means algorithm denoises an image by smoothing, calculating the total distances of the neighborhoods [4]; it improves the visibility of an image compared with local filters. The proposed algorithm instead computes the mean distance of all neighborhoods and then the total, which makes the image more visible and its edges more easily detectable [10].
4. Performance and analysis
In this section we compare the NL-means algorithm and the proposed algorithm under three well-defined criteria: noise removal, the visual quality of the restored image, and the mean square error, that is, the Euclidean difference between the restored and original images [5][12].
For programming and computation purposes, the NL-means search for similar windows is restricted to a larger “search window” of size S×S pixels [13]. In all the experiments we fixed a similarity square neighborhood N_i of 5×5 pixels and a search window of 11×11 pixels. If N² is the number of pixels of the image, the final complexity of the algorithm is about 25 × 121 × N² [3]. Large Euclidean distances lead to nearly zero weights, acting as an automatic threshold because of the fast decay of the exponential kernel; a windowed implementation along these lines is sketched below.
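A straightforward windowed implementation might look as follows (illustrative Python/NumPy, not the authors' code); it uses the 5×5 patch, 11×11 search window and exponential weight described above, and the boundary handling by reflection is an assumption.

import numpy as np

def nl_means(v, patch=5, search=11, h=10.0, a=1.0):
    """Windowed NL-means sketch: 5x5 similarity patches and an 11x11 search
    window, so each output pixel costs about 121 patch comparisons of 25
    pixels each, matching the 25 * 121 * N^2 complexity quoted above."""
    pr, sr = patch // 2, search // 2
    # Gaussian spatial kernel of std `a` for the weighted patch distance.
    ax = np.arange(patch) - pr
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * a**2))
    k /= k.sum()
    pad = pr + sr
    vp = np.pad(v.astype(np.float64), pad, mode="reflect")
    out = np.empty(v.shape, dtype=np.float64)
    H, W = v.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            p_i = vp[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            num = Z = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    p_j = vp[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    # Gaussian-weighted squared patch distance, Eq. (3).
                    d2 = np.sum(k * (p_i - p_j) ** 2)
                    w = np.exp(-d2 / h**2)
                    num += w * vp[ni, nj]
                    Z += w
            out[i, j] = num / Z  # normalization by Z_i, Eq. (4)
    return out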
These formulas are corroborated by the visual experiments of Figure 3, which displays the visual difference between the methods for the standard Lena image (512 × 512). In this figure we can see that the NL-means filter reduces the noise but blurs the image, while the proposed filter reduces the noise [4], blurs the image, and detects some edges of the image, which increases the image quality and makes it more suitable for human eyes.
Figure 2: Similar neighborhood pixels give large weights w(p,q1) and w(p,q2), while very different neighborhoods give small weights w(p,q3) and w(p,q4).
Table 1 shows the improvement in signal-to-noise ratio (SNR), root mean square error (RMSE) and peak signal-to-noise ratio (PSNR) for two noisy ultrasound images. The signal-to-noise ratio compares the level of the desired signal to the level of background noise; the higher the ratio, the less obtrusive the background noise. We consider the improvement on an ultrasound phantom image (256×256) and on a normal ultrasound image.
Figure 3: (a) The Lena image (512×512) with 0.02 speckle noise added; (b) original NL-means filtered image (left) and proposed filtered image (right), degree of filter h = 10; (c) original NL-means filtered image (left) and proposed filtered image (right), degree of filter h = 2.5.
Figure 4: (a) The ultrasound phantom image (256×256); (b) original NL-means filtered image (left) and proposed filtered image (right), degree of filter h = 10; (c) original NL-means filtered image (left) and proposed filtered image (right), degree of filter h = 1.
Figure 5: (a) Normal ultrasound image; (b) NL-means filtered image, degree of filter h = 10; (c) proposed filtered image, degree of filter h = 1.
The signal-to-noise ratio is computed as

    SNR = 10 \log_{10} \frac{ \sum_{i=1}^{M} \sum_{j=1}^{N} \left( x_{i,j}^{2} + y_{i,j}^{2} \right) }{ \sum_{i=1}^{M} \sum_{j=1}^{N} \left( x_{i,j} - y_{i,j} \right)^{2} }    ……………(5)
where M and N are the width and height of the image. Larger SNR values correspond to a better-quality image.
The root mean square error (RMSE) is given by

    RMSE = \sqrt{ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( x_{i,j} - y_{i,j} \right)^{2} }    …………….(6)
The peak signal-to-noise ratio (PSNR) is computed using

    PSNR = 20 \log_{10} \left( \frac{ g_{\max} }{ \mathrm{RMSE} } \right)    ……...……...(7)

where g_max is the maximum intensity in the unfiltered image. The PSNR is higher for a better transformed image.
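For reference, equations (5)-(7) as reconstructed above translate directly into the following Python/NumPy sketch; the helper names are our own, and the convention that x is the unfiltered image and y the filtered one is assumed from the text.

import numpy as np

def snr(x, y):
    """Eq. (5): 10*log10( sum(x^2 + y^2) / sum((x - y)^2) )."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    return 10.0 * np.log10(np.sum(x**2 + y**2) / np.sum((x - y) ** 2))

def rmse(x, y):
    """Eq. (6): square root of the mean squared difference."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(x, y):
    """Eq. (7): 20*log10(g_max / RMSE), with g_max the maximum intensity
    of the unfiltered image x."""
    return 20.0 * np.log10(np.max(x) / rmse(x, y))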
Table 1: Measurement Matrix

Image name            Degree of filter   Filter      SNR     RMSE    PSNR
Phantom (Figure 4)    10                 NL-means    8.31    15.74   24.23
                      10                 Proposed    8.55    15.35   24.44
                      1                  NL-means    8.35    15.67   24.26
                      1                  Proposed    9.64    13.58   25.51
Normal ultrasound     10                 NL-means    9.91    19.61   22.32
(Figure 5)            10                 Proposed   11.16    17.24   23.43
                      1                  NL-means   10.37    18.71   22.73
                      1                  Proposed   13.30    14.00   25.24
Since the measurements in Figure 4 and Figure 5 do not rely on any visual interpretation, this numerical measurement is the most objective one. A small root mean square error does not by itself assure high visual quality, whereas a high SNR does. From the above discussion we conclude that NL-means with mean-distance calculation is the better method for denoising images.
5. Conclusions
Human vision is very sensitive to high-frequency information. Image details (e.g., corners and lines) have high-frequency content and carry very important information for visual perception. Accordingly, the purpose of this study was to determine the preferred variant of the NL-means algorithm for image enhancement in a clinical soft-copy display setting and to establish a promising set of algorithms for use with various ultrasound images.
References
Pierrick Coupé, Pierre Hellier, Charles Kervrann and Christian Barillot (2009), “Nonlocal Means-based Speckle Filtering for Ultrasound Images”, IEEE Transactions on Image Processing, 18(10):2221-2229. DOI: 10.1109/TIP.2009.2024064.
A. Buades, B. Coll, and J. M. Morel (2005), “A review of image denoising algorithms, with a new one”, Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 490–530.
B. Coll and J.-M. Morel (2004), “A non-local algorithm for image denoising”, SIAM J. Multiscale Model. Simul., vol. 4, pp. 490.
A. Buades, B. Coll, and J. Morel (2004), “On image denoising methods”, Technical Report 2004-15, CMLA.
Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004), “Image quality assessment: From error visibility to structural similarity”, IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612.
R. C. Gonzalez and R. E. Woods (2002), “Digital Image Processing”, Pearson Education, India.
A. Buades, B. Coll, and J. Morel (2005), “Neighborhood filters and PDE's”, Technical Report 2005-04, CMLA.
H. Q. Luong, A. Ledda, and W. Philips (2006), “Non-local image interpolation”, in IEEE International Conference on Image Processing, pp. 693–696.
D. Donoho (1995), “De-noising by soft-thresholding”, IEEE Transactions on Information Theory, 41:613–627.
S. Sudha, G. R. Suresh, R. Sukanesh (2009), “Speckle Noise Reduction in Ultrasound Images Using Context-based Adaptive Wavelet Thresholding”, IETE Journal of Research, vol. 55, no. 3, pp. 135–143.
S. W. Smith and H. Lopez (1982), “A contrast-detail analysis of diagnostic ultrasound imaging”, Med. Phys., vol. 9, pp. 4–12.
J. S. Lee (1980), “Digital image enhancement and noise filtering by use of local statistics”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, pp. 165–168. [Online]. Available: http://adsabs.harvard.edu/cgi-bin/nph-bibquery?bibcode=1980ITPAM...2..165L
P. C. Tay, S. T. Acton, J. A. Hossack (2006), “Ultrasound despeckling using an adaptive window stochastic approach”, in IEEE International Conference on Image Processing, pp. 2549–2552. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4107088
Authors
Md. Motiur Rahman received the B.Sc. Engg. and M.S. degrees in Computer Science & Engineering from Jahangirnagar University, Dhaka, Bangladesh, in 1995 and 2001, where he is currently pursuing the Ph.D. degree. His research interests include digital image processing, medical image processing, computer vision and digital electronics.
Md. Gauhar Arefin was born in Nilphamari, Bangladesh in 1990. He is currently a student in the Department of Computer Science & Engineering at Mawlana Bhashani Science & Technology University, Santosh, Tangail, Bangladesh. His research interests include image analysis, image processing, medical image processing and 3D visualization.
Mithun Kumar PK was born in Rajshahi, Bangladesh in 1989. He is currently a student in the Department of Computer Science & Engineering at Mawlana Bhashani Science & Technology University, Santosh, Tangail, Bangladesh. His research interests include image analysis, image processing, medical image processing, 3D visualization, segmentation and filter optimization.
Dr. Mohammad Shorif Uddin is currently working in the Department of Computer Science and Engineering, Jahangirnagar University, Dhaka, Bangladesh. His research is focused on bioimaging and image analysis, computer vision, pattern recognition, blind navigation, medical diagnosis, and disaster prevention. He has published many papers in renowned journals of IEEE, Elsevier, IET, the Optical Society of America, etc.