IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Improved nonlocal means based on pre classification and invariant block matching (IAEME Publication)
One of the most popular image denoising methods based on self-similarity is nonlocal means (NLM). Although it achieves remarkable performance, the method has a few shortcomings, e.g., the computationally expensive calculation of the similarity measure and the lack of reliable candidates for some non-repetitive patches. In this paper, we propose to improve NLM by integrating Gaussian blur, clustering, and raw-image weighted averaging into the NLM framework. Experimental results show that the proposed technique denoises better than the original NLM both quantitatively and visually, especially when the noise level is high.
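The core NLM update, in which each sample is replaced by a similarity-weighted average of samples whose surrounding patches look alike, can be sketched in a toy 1D form. The patch size, search radius, and smoothing constant h below are illustrative choices, not values from the paper, and this is the plain NLM baseline rather than the pre-classified variant:

```python
import math

def nlm_denoise_1d(signal, patch=1, search=5, h=0.5):
    """Toy 1D non-local means: weighted average over samples whose
    neighbourhood patches are similar to the current one."""
    n = len(signal)
    out = []
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # squared distance between the patches centred at i and j
            d = 0.0
            for k in range(-patch, patch + 1):
                a = signal[min(max(i + k, 0), n - 1)]
                b = signal[min(max(j + k, 0), n - 1)]
                d += (a - b) ** 2
            w = math.exp(-d / (h * h))   # similarity weight
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# two flat regions with small noise: NLM averages within, not across, them
noisy = [1.0, 1.1, 0.9, 1.05, 5.0, 5.1, 4.9, 5.05]
print(nlm_denoise_1d(noisy))
```

Because dissimilar patches get near-zero weight, samples near the step at index 4 are not blurred across the edge, which is the property that distinguishes NLM from a plain Gaussian filter.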
IMAGE AUTHENTICATION THROUGH Z-TRANSFORM WITH LOW ENERGY AND BANDWIDTH (IAZT) (IJNSA Journal)
In this paper a Z-transform based image authentication technique, termed IAZT, is proposed to authenticate gray-scale images. The technique uses energy-efficient, low-bandwidth invisible data embedding with minimal computational complexity. Roughly half the bandwidth is required compared to the traditional Z-transform when transmitting multimedia content such as images with an authenticating message over a network. The technique may be used for copyright protection or ownership verification. Experimental results are computed and compared with existing authentication techniques such as Li's method [11], SCDFT [13], and the region-based method [14], using Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Image Fidelity (IF), Universal Quality Index (UQI) and the Structural Similarity Index Measure (SSIM); the comparison shows better performance for IAZT.
This document compares the performance of various thresholding algorithms for segmenting biomedical images. It begins by introducing thresholding as a common segmentation technique and describes several thresholding methods: global thresholding, variable thresholding, thresholding with background, Otsu's method, and Sauvola thresholding. It then applies these algorithms to segment a chest radiograph and analyzes the results. The algorithms either over-segment or under-segment the image when using a single global threshold. Variable thresholding and Sauvola thresholding perform better by adapting locally. Subdividing the image and thresholding segments independently gives the best segmentation.
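Of the thresholding methods listed, Otsu's method is the most algorithmic: it picks the global threshold that maximises the between-class variance of the two resulting classes. A minimal sketch over a toy gray-level histogram:

```python
def otsu_threshold(hist):
    """Otsu's method over a histogram given as a list of counts:
    return the level t maximising the between-class variance of
    the classes {levels <= t} and {levels > t}."""
    total = sum(hist)
    sum_all = sum(i * c for i, c in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t, count in enumerate(hist):
        w0 += count                      # pixels at or below t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * count
        w1 = total - w0
        mu0 = sum0 / w0                  # class means
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# bimodal toy histogram: dark peak at level 2, bright peak at level 6
hist = [0, 5, 20, 5, 0, 5, 20, 5]
print(otsu_threshold(hist))  # → 3
```

The chosen threshold lands in the valley between the two peaks, which is exactly where a single global threshold fails on images whose illumination varies locally; that failure is what motivates the variable and Sauvola variants described above.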
A modified pso based graph cut algorithm for the selection of optimal regular... (IAEME Publication)
1) A modified PSO-based graph cut algorithm is proposed to select the optimal regularizing parameter for image segmentation. The algorithm uses a modified PSO to optimize the smallest size of area and smallest threshold cut value parameters for the graph cut algorithm.
2) In the proposed method, images are first preprocessed using Gaussian filtering. Then, a modified PSO optimizes the regularizing parameters for graph cut. Segmentation is performed using the graph cut algorithm with the optimized parameters.
3) The method is implemented in MATLAB and evaluated on various images. Evaluation metrics like Jaccard similarity, Dice coefficient, and accuracy show the proposed method achieves better performance than conventional PSO and graph cut approaches.
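The PSO half of this pipeline can be illustrated with a generic one-dimensional particle swarm minimiser. The inertia and acceleration constants below are textbook defaults rather than the paper's modified update rule, and the quadratic objective merely stands in for its segmentation-quality objective:

```python
import random

def pso_minimize(f, lo, hi, particles=20, iters=60, w=0.7, c1=1.4, c2=1.4):
    """Minimal 1D particle swarm optimiser: each particle is pulled toward
    its personal best and the swarm-wide best position."""
    random.seed(0)                       # deterministic for the example
    xs = [random.uniform(lo, hi) for _ in range(particles)]
    vs = [0.0] * particles
    pbest = xs[:]                        # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]      # swarm-wide best
    for _ in range(iters):
        for i in range(particles):
            r1, r2 = random.random(), random.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # clamp to bounds
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest

best = pso_minimize(lambda x: (x - 3.0) ** 2, 0.0, 10.0)
print(best)
```

In the paper's setting the scalar x would be replaced by the two regularising parameters (smallest area and smallest threshold cut value) and f by a segmentation-quality score computed from the graph cut result.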
Performance Evaluation of 2D Adaptive Bilateral Filter For Removal of Noise F... (CSCJournals)
In this paper, we present a performance analysis of the adaptive bilateral filter using peak signal-to-noise ratio and mean square error. The filter was evaluated by changing its parameters, the half-width values and the standard deviations. In the adaptive bilateral filter, the edge slope is enhanced by transforming the histogram via a range filter with adaptive offset and width; the variance of the range filter can also be adaptive. The filter is applied to improve the sharpness of gray-level and color images by increasing the slope of the edges without producing overshoot or undershoot. The related graphs were plotted and the best filter parameters obtained.
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ... (CSCJournals)
In this paper we present a comparative study on the fusion of visual and thermal images using different wavelet transforms. Coefficients of the discrete wavelet transform are computed separately for the visual and thermal images and then combined. Next, the inverse discrete wavelet transform is applied to obtain the fused face image. Both Haar and Daubechies (db2) wavelets have been used to compare recognition results. The IRIS Thermal/Visual Face Database was used for the experiments. Results show that the approach achieves a maximum success rate of 100% in many cases.
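A single-level 1D Haar version of this fuse-in-the-wavelet-domain idea can be written in a few lines. Averaging the approximation coefficients and keeping the larger-magnitude detail coefficient is one common fusion rule; the paper's exact combination rule may differ:

```python
def haar_fwd(x):
    """Single-level orthonormal 1D Haar DWT: approximation + detail."""
    s = 2 ** 0.5
    a = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return a, d

def haar_inv(a, d):
    """Inverse of haar_fwd (perfect reconstruction)."""
    s = 2 ** 0.5
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / s, (ai - di) / s]
    return out

def fuse(x, y):
    """Fuse two signals in the Haar domain: average the approximations,
    keep the larger-magnitude detail coefficient."""
    ax, dx = haar_fwd(x)
    ay, dy = haar_fwd(y)
    a = [(p + q) / 2 for p, q in zip(ax, ay)]
    d = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]
    return haar_inv(a, d)

print(fuse([1.0, 1.0, 2.0, 2.0], [3.0, 3.0, 4.0, 4.0]))
```

For 2D face images the same transform is applied along rows and then columns, and a db2 filter pair replaces the two-tap Haar averages and differences.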
Brain tumor segmentation using asymmetry based histogram thresholding and k m... (eSAT Publishing House)
This document presents a method for segmenting brain tumors from MRI images using asymmetry-based histogram thresholding and k-means clustering. The method proceeds in stages: 1) preprocessing the MRI image with sharpening and median filters, 2) computing histograms of the left and right halves of the image, 3) calculating a threshold value from the difference between the left and right histograms, 4) applying thresholding and morphological operations to extract the tumor region, and 5) applying k-means clustering and using the cluster centroids to refine the segmentation. The method is tested on 30 MRI images, and the results show the tumor region is accurately segmented. The segmented tumors can then be used for quantification, classification, and computer-assisted diagnosis of brain tumors.
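The clustering stage can be sketched as plain 1D k-means on pixel intensities. The quantile seeding below is a simple heuristic for illustration, not the paper's asymmetry-based initialisation:

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain 1D k-means: assign each intensity to its nearest centroid,
    then move each centroid to the mean of its cluster."""
    vs = sorted(values)
    # seed centroids at evenly spaced order statistics (illustrative)
    centroids = [vs[(len(vs) - 1) * i // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[j].append(v)
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids

# dark background intensities vs. a bright tumor-like cluster
print(kmeans_1d([10, 12, 11, 200, 210, 205], k=2))
```

The resulting centroids give the representative intensity of each region; pixels are then relabelled by their nearest centroid, which is how the clustering refines the raw thresholded mask.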
Image Denoising Using Earth Mover's Distance and Local Histograms (CSCJournals)
In this paper an adaptive range and domain filtering method is presented. Local histograms are computed to tune the range and domain extents of the bilateral filter. A noise histogram is estimated to measure the noise level at each pixel of the noisy image, and the extents of the range and domain filters are determined from that per-pixel noise level. Experimental results show that the proposed method effectively removes noise while preserving detail. It performs better than the bilateral filter, and the restored test images have higher PSNR than those obtained with the popular BayesShrink wavelet denoising method.
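The BayesShrink baseline mentioned here rests on soft thresholding of wavelet detail coefficients, an operation short enough to show in full. The threshold t would come from an estimator such as BayesShrink's noise-to-signal ratio, which is omitted in this sketch:

```python
def soft_threshold(coeffs, t):
    """Soft thresholding, the core of wavelet-shrinkage denoising:
    shrink each detail coefficient toward zero by t, zeroing the
    small (noise-dominated) ones entirely."""
    out = []
    for c in coeffs:
        if c > t:
            out.append(c - t)
        elif c < -t:
            out.append(c + t)
        else:
            out.append(0.0)
    return out

print(soft_threshold([3.0, -0.5, -2.0, 0.2], 1.0))  # → [2.0, 0.0, -1.0, 0.0]
```

Large coefficients, which carry edges, survive with a small bias, while small ones, which are mostly noise, are discarded; applying this to the detail bands and inverting the transform yields the denoised image.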
This document reviews different techniques for thinning images, including the Zhang and Suen algorithm and neural networks. It provides an overview of existing thinning approaches, such as iterative algorithms, and proposes a new approach using neural networks. The proposed approach aims to perform thinning invariant to rotations while being less sensitive to noise than existing methods. It evaluates techniques based on execution time, thinning rate, and other performance measures. The document concludes that neural networks may provide better results than existing techniques in terms of metrics like PSNR and MSE, while also reducing execution time for skeletonization.
This document summarizes JPEG image compression techniques. It discusses how images are divided into blocks and transformed from the spatial domain to the frequency domain using the Discrete Cosine Transform (DCT). It then describes how the DCT coefficients are quantized and arranged in zigzag order before entropy encoding with Huffman coding. The goal of JPEG compression is to store image data using as little space as possible while maintaining enough visual detail. The techniques discussed aim to remove irrelevant and redundant image data through DCT, quantization, and entropy encoding.
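The two JPEG building blocks named above, the 2D DCT of a block and the zigzag ordering of its coefficients, can be sketched naively. Real encoders use fast factorisations and 8x8 blocks; a small n is used here only to keep the example readable:

```python
import math

def dct_2d(block):
    """Naive orthonormal 2D DCT-II of an n x n block, as applied
    per block in JPEG before quantization."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def zigzag(n):
    """Zigzag scan order of an n x n block: coefficients sorted by
    diagonal (low frequencies first), alternating direction."""
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

print(dct_2d([[1.0, 1.0], [1.0, 1.0]]))  # energy collapses into the DC term
print(zigzag(3))
```

A flat block compacts all its energy into the single DC coefficient, which is why quantization followed by zigzag ordering produces the long zero runs that Huffman coding then compresses so well.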
A novel approach for efficient skull stripping using morphological reconstruc... (eSAT Journals)
This document presents a novel two-step approach for skull stripping of MRI brain images. The first step uses morphological reconstruction operations including erosion, opening by reconstruction, dilation, and opening-closing by reconstruction to generate a primary segmentation mask. The second step applies thresholding to the primary mask to extract the final skull-stripped brain image. The method is tested on axial PD and FLAIR MRI images and achieves high Jaccard and Dice similarity scores compared to manually stripped images, demonstrating its effectiveness at skull stripping.
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
IRJET- Implementation of Histogram based Tsallis Entropic Thresholding Segmen... (IRJET Journal)
This document discusses image segmentation techniques for plasma detection in visible images of tokamaks. It compares Gray Level Local Variance (GLLV), Gray Level Local Entropy (GLLE), and Gray Level Spatial Correlation (GLSC) based 2D histogram segmentation methods using Tsallis entropy thresholding. These methods construct 2D histograms using pixel gray levels combined with local variance, entropy, or spatial correlation features. The document implements these methods on visible tokamak images and evaluates the results using an unsupervised uniformity value metric. It finds that the GLSC method provides better segmentation in terms of uniformity value compared to the other techniques.
Digital image processing short question answers (Ateeq Zada)
This document discusses several techniques for 2D spatial image filtering and background subtraction in digital image processing. It covers linear filtering, Gaussian filtering, frame differencing, running averages, and mixtures of Gaussian models. The key techniques are linear filtering using a kernel or mask, Gaussian filtering to smooth images, and running averages or mixtures of Gaussians to model the background pixels over time while adapting to changes in illumination, motion, or scene geometry.
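The running-average background model described above reduces to two one-liners per pixel; the learning rate α and the difference threshold below are illustrative values:

```python
def update_background(bg, frame, alpha=0.05):
    """Running average: bg <- (1 - alpha) * bg + alpha * frame, so the
    model slowly adapts to illumination and scene changes."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25.0):
    """Frame differencing against the model: flag pixels that deviate
    from the background by more than the threshold."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

bg = [100.0, 100.0]          # one grey-scale row of the background model
frame = [100.0, 200.0]       # a moving object brightens the second pixel
print(foreground_mask(bg, frame))
print(update_background(bg, frame))
```

A mixture-of-Gaussians model generalises this by keeping several weighted mean/variance pairs per pixel instead of a single running mean, which lets it represent multi-modal backgrounds such as swaying foliage.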
1. The document proposes a Modified Fuzzy C-Means (MFCM) algorithm to segment brain tumors in noisy MRI images.
2. The conventional Fuzzy C-Means algorithm is sensitive to noise, so the MFCM adds an adaptive filtering step during segmentation.
3. The MFCM incorporates neighboring pixel membership values to reduce each pixel's resistance to being clustered, improving segmentation in noisy images.
STUDY ANALYSIS ON TEETH SEGMENTATION USING LEVEL SET METHODaciijournal
This document summarizes a study that used level set methods to segment teeth from CBCT scans. It describes segmenting both anterior and posterior teeth in 3D for use in dental treatment planning. The level set method uses five energy functions including edge, region, shape prior, and dentine wall thickness energies. It was tested on 12 patient scans containing around 600 slices total. Results showed the method could accurately segment individual tooth crowns and roots in 3D with average segmentation time of 228 seconds per tooth. The segmented teeth could then be reconstructed in 3D for use by dentists in treatment simulation and planning.
ANALYSIS OF INTEREST POINTS OF CURVELET COEFFICIENTS CONTRIBUTIONS OF MICROS... (sipij)
This paper focuses on an improved edge model based on Curvelet coefficient analysis. The Curvelet transform is a powerful tool for multiresolution representation of objects with anisotropic edges. Curvelet coefficient contributions are analyzed using the Scale Invariant Feature Transform (SIFT), commonly used to study local structure in images. The permutation of Curvelet coefficients from the original image and an edge image obtained with a gradient operator is used to improve the original edges. Experimental results show that this method brings out edge details as the decomposition scale increases.
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES (csitconf)
Segmentation of Synthetic Aperture Radar (SAR) images is of great use in observing the global environment and in target detection and recognition. However, segmentation of SAR images is known to be a very complex task due to the presence of speckle noise. Therefore, in this paper we present a fast SAR image segmentation method based on between-class variance (BCV). We chose the BCV method because it is one of the most effective thresholding techniques for most real-world images with regard to uniformity and shape measures. Our experiments test which technique is effective at thresholding (extracting) the oil spill in numerous SAR images; in the future these thresholding techniques could be very useful for detecting objects in other SAR images.
Image Enhancement: introduction to spatial filters, low-pass filters and high-pass filters. Discusses image smoothing and image sharpening, including Gaussian filters.
Denoising Process Based on Arbitrarily Shaped Windows (CSCJournals)
Many factors, such as moving objects, introduce noise into digital images, and the presence of noise degrades image quality. The denoising process works on reconstructing a noiseless image and improving its quality; when an image contains additive white Gaussian noise (AWGN), denoising becomes a challenging process. In this research, we present an improved algorithm for image denoising in the wavelet domain. Homogeneous regions of the input image are estimated using a region-merging algorithm, and the local variance and a wavelet shrinkage algorithm are applied to denoise each image patch. Experimental results based on peak signal-to-noise ratio (PSNR) measurements showed that our algorithm provides better results than a denoising algorithm based on a minimum mean square error (MMSE) estimator.
This document presents a method for recovering text from degraded document images. It involves several steps:
1. Constructing a contrast image to distinguish text from background by calculating local image contrast and gradient.
2. Detecting text stroke edges in the contrast image using Otsu's thresholding and Canny edge detection.
3. Estimating a local threshold for binarization based on mean and standard deviation of detected edge pixel intensities.
4. Converting the image to binary format above the threshold.
5. Post-processing to remove unwanted background pixels.
The method is tested on several degraded documents and shows good performance in recovering the text content in a short time.
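Steps 3 and 4, deriving a local threshold from the statistics of detected edge pixels and then binarizing, can be sketched as follows; the weight k is an illustrative choice, not a value from the paper:

```python
def edge_based_threshold(edge_intensities, k=0.5):
    """Threshold from the statistics of detected stroke-edge pixels:
    T = mean + k * std (k is an illustrative weight)."""
    n = len(edge_intensities)
    mean = sum(edge_intensities) / n
    var = sum((v - mean) ** 2 for v in edge_intensities) / n
    return mean + k * var ** 0.5

def binarize(pixels, t):
    """Pixels above the threshold become background (1), darker
    pixels become text (0)."""
    return [1 if p > t else 0 for p in pixels]

t = edge_based_threshold([40.0, 60.0, 50.0, 50.0])
print(binarize([20.0, 200.0, 45.0, 180.0], t))
```

Computing the threshold only from pixels near detected stroke edges, rather than from the whole image, is what makes the method robust to the uneven backgrounds typical of degraded documents.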
This summarizes a research paper that proposes a new approach to noise estimation in images. It uses the wavelet transform because of its sparsity, then applies a Bayesian approach by imposing a Gaussian distribution on the transformed pixels. Image quality is checked before noise estimation using a maximum-likelihood decision criterion. A new bound-based estimation process is then designed using ideas from the Cramer-Rao lower bound for signals in additive white Gaussian noise. The experimental results show visually better output after reconstructing the original image.
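A standard robust baseline for this noise-estimation task is the median-absolute-deviation (MAD) estimator applied to the finest-scale wavelet detail coefficients; it is shown here for contrast with the paper's bound-based scheme, not as its implementation:

```python
import statistics

def mad_sigma(detail_coeffs):
    """Classic MAD noise estimate from finest-scale wavelet details:
    sigma ~ median(|d|) / 0.6745, where 0.6745 is the MAD of a
    standard normal distribution."""
    med = statistics.median(abs(d) for d in detail_coeffs)
    return med / 0.6745
```

Because the finest detail band of a natural image is sparse, its median magnitude is dominated by the noise rather than the signal, which is why this one-liner is so widely used to set wavelet shrinkage thresholds.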
Analysis of collaborative learning methods for image contrast enhancement (IAEME Publication)
The document describes collaborative learning methods for image contrast enhancement. It begins with background on image enhancement techniques like histogram equalization. It then summarizes an existing collaborative learning method that determines pixel values from multiple randomly sampled windows. The document proposes a modified method that combines collaborative learning with block-based histogram equalization using randomly sized sliding windows. It is evaluated on medical and underwater images and is found to provide better results than the original collaborative learning method. Quality metrics are used to measure enhancement.
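The block-based building block here is ordinary histogram equalization; a global version via the cumulative distribution function looks like this (the collaborative method applies the same mapping per randomly sized window):

```python
def equalize(pixels, levels=256):
    """Global histogram equalization: map each grey level through the
    normalized CDF so the output histogram is roughly uniform."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, run = [], 0
    for c in hist:
        run += c
        cdf.append(run)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)   # smallest non-zero CDF value
    lut = [max(0, round((c - cdf_min) / max(1, n - cdf_min) * (levels - 1)))
           for c in cdf]
    return [lut[p] for p in pixels]

# a low-contrast image occupying two adjacent grey levels is stretched
print(equalize([100, 100, 101, 101]))  # → [0, 0, 255, 255]
```

Stretching the occupied levels to span the full range is what boosts contrast; the drawback, over-amplification of noise in flat regions, is one reason windowed and collaborative variants are preferred for medical and underwater images.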
- The document discusses cyclones that impact the coastal areas of Andhra Pradesh state in India. It notes that cyclones commonly occur in May, October and November and cross the coastal areas, causing significant damage to houses, crops and infrastructure. The 1977 cyclone that crossed Krishna and Guntur districts was particularly severe, causing widespread destruction and estimated deaths of over 10,000 people. The document analyzes cyclones that have impacted specific districts and the damage caused. It recommends measures like constructing cyclone-resistant housing to mitigate future cyclone impacts.
Load balancing with switching mechanism in cloud computing environment (eSAT Publishing House)
Image Denoising Using Earth Mover's Distance and Local HistogramsCSCJournals
In this paper an adaptive range and domain filtering is presented. In the proposed method local histograms are computed to tune the range and domain extensions of bilateral filter. Noise histogram is estimated to measure the noise level at each pixel in the noisy image. The extensions of range and domain filters are determined based on pixel noise level. Experimental results show that the proposed method effectively removes the noise while preserves the details. The proposed method performs better than bilateral filter and restored test images have higher PSNR than those obtained by applying popular Bayesshrink wavelet denoising method.
This document reviews different techniques for thinning images, including the Zhang and Suen algorithm and neural networks. It provides an overview of existing thinning approaches, such as iterative algorithms, and proposes a new approach using neural networks. The proposed approach aims to perform thinning invariant to rotations while being less sensitive to noise than existing methods. It evaluates techniques based on execution time, thinning rate, and other performance measures. The document concludes that neural networks may provide better results than existing techniques in terms of metrics like PSNR and MSE, while also reducing execution time for skeletonization.
This document summarizes JPEG image compression techniques. It discusses how images are divided into blocks and transformed from the spatial domain to the frequency domain using the Discrete Cosine Transform (DCT). It then describes how the DCT coefficients are quantized and arranged in zigzag order before entropy encoding with Huffman coding. The goal of JPEG compression is to store image data using as little space as possible while maintaining enough visual detail. The techniques discussed aim to remove irrelevant and redundant image data through DCT, quantization, and entropy encoding.
A novel approach for efficient skull stripping using morphological reconstruc...eSAT Journals
This document presents a novel two-step approach for skull stripping of MRI brain images. The first step uses morphological reconstruction operations including erosion, opening by reconstruction, dilation, and opening-closing by reconstruction to generate a primary segmentation mask. The second step applies thresholding to the primary mask to extract the final skull-stripped brain image. The method is tested on axial PD and FLAIR MRI images and achieves high Jaccard and Dice similarity scores compared to manually stripped images, demonstrating its effectiveness at skull stripping.
This document presents a novel two-step approach for skull stripping MRI brain images. The first step uses morphological reconstruction operations to generate a mask of the brain. The second step applies thresholding to the mask to extract the brain. The method was tested on axial PD and FLAIR MRI images. Results found Jaccard and Dice similarity scores above 0.8 and 0.9 respectively, indicating the method efficiently extracts the brain from the skull.
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity
IRJET- Implementation of Histogram based Tsallis Entropic Thresholding Segmen...IRJET Journal
This document discusses image segmentation techniques for plasma detection in visible images of tokamaks. It compares Gray Level Local Variance (GLLV), Gray Level Local Entropy (GLLE), and Gray Level Spatial Correlation (GLSC) based 2D histogram segmentation methods using Tsallis entropy thresholding. These methods construct 2D histograms using pixel gray levels combined with local variance, entropy, or spatial correlation features. The document implements these methods on visible tokamak images and evaluates the results using an unsupervised uniformity value metric. It finds that the GLSC method provides better segmentation in terms of uniformity value compared to the other techniques.
Digital image processing short quesstion answersAteeq Zada
This document discusses several techniques for 2D spatial image filtering and background subtraction in digital image processing. It covers linear filtering, Gaussian filtering, frame differencing, running averages, and mixtures of Gaussian models. The key techniques are linear filtering using a kernel or mask, Gaussian filtering to smooth images, and running averages or mixtures of Gaussians to model the background pixels over time while adapting to changes in illumination, motion, or scene geometry.
1. The document proposes a Modified Fuzzy C-Means (MFCM) algorithm to segment brain tumors in noisy MRI images.
2. The conventional Fuzzy C-Means algorithm is sensitive to noise, so the MFCM adds an adaptive filtering step during segmentation.
3. The MFCM incorporates the membership values of neighboring pixels when clustering each pixel, improving segmentation in noisy images.
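For context, the standard fuzzy c-means membership update that MFCM modifies looks like this (a sketch of conventional FCM only; the paper's neighborhood smoothing and adaptive filtering steps are omitted, and the function name is ours):

```python
import numpy as np

def fcm_memberships(x, centers, m=2.0):
    """Standard fuzzy c-means membership update:
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
    MFCM would additionally average memberships over neighbouring
    pixels; that smoothing is not reproduced here."""
    d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # (N, C) distances
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)
```

Each row of the result sums to 1, and a pixel sitting exactly on a cluster center gets membership close to 1 for that cluster.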
STUDY ANALYSIS ON TEETH SEGMENTATION USING LEVEL SET METHODaciijournal
This document summarizes a study that used level set methods to segment teeth from CBCT scans. It describes segmenting both anterior and posterior teeth in 3D for use in dental treatment planning. The level set method uses five energy functions including edge, region, shape prior, and dentine wall thickness energies. It was tested on 12 patient scans containing around 600 slices total. Results showed the method could accurately segment individual tooth crowns and roots in 3D with average segmentation time of 228 seconds per tooth. The segmented teeth could then be reconstructed in 3D for use by dentists in treatment simulation and planning.
ANALYSIS OF INTEREST POINTS OF CURVELET COEFFICIENTS CONTRIBUTIONS OF MICROS...sipij
This paper focuses on an improved edge model based on Curvelet coefficient analysis. The Curvelet transform is a powerful tool for multiresolution representation of objects with anisotropic edges. Curvelet coefficient contributions have been analyzed using the Scale Invariant Feature Transform (SIFT), commonly used to study local structure in images. The permutation of Curvelet coefficients from the original image and the edge image obtained from a gradient operator is used to improve the original edges. Experimental results show that this method brings out details on edges as the decomposition scale increases.
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGEScsitconf
Segmentation of Synthetic Aperture Radar (SAR) images is of great use in observing the global environment and in target detection and recognition. However, segmentation of SAR images is known to be a very complex task due to the presence of speckle noise. In this paper we therefore present a fast SAR image segmentation method based on between-class variance (BCV), chosen because it is one of the most effective thresholding techniques for most real-world images with regard to uniformity and shape measures. Our experiments test which technique is most effective for thresholding (extracting) oil spills in numerous SAR images; in the future these thresholding techniques could also be very useful for detecting objects in other SAR images.
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGEScscpconf
Image Enhancement: Introduction to Spatial Filters, Low Pass Filters and High Pass Filters. Discussed here are image smoothing, image sharpening, and Gaussian filters.
Denoising Process Based on Arbitrarily Shaped WindowsCSCJournals
Many factors, such as moving objects, introduce noise into digital images. The presence of noise degrades image quality. The image denoising process works on reconstructing a noiseless image and improving its quality. When an image contains additive white Gaussian noise (AWGN), denoising becomes a challenging process. In our research, we present an improved algorithm for image denoising in the wavelet domain. Homogeneous regions of an input image are estimated using a region-merging algorithm. The local variance and a wavelet shrinkage algorithm are applied to denoise each image patch. Experimental results based on peak signal-to-noise ratio (PSNR) measurements showed that our algorithm provides better results than a denoising algorithm based on a minimum mean square error (MMSE) estimator.
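A minimal sketch of wavelet shrinkage, the core operation of such algorithms, using a single-level 1D Haar transform (illustrative only; the paper's region merging and local-variance estimation are not reproduced, and all function names are ours):

```python
import numpy as np

def haar_1d(x):
    """Single-level 1D Haar transform: (approximation, detail)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def ihaar_1d(a, d):
    """Inverse of haar_1d."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(c, t):
    """Soft shrinkage: pull coefficient magnitudes toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x, t):
    """Threshold only the detail band, then reconstruct."""
    a, d = haar_1d(x)
    return ihaar_1d(a, soft(d, t))
```

Small detail coefficients (mostly noise) are zeroed; large ones (mostly signal structure) survive slightly shrunken.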
This document presents a method for recovering text from degraded document images. It involves several steps:
1. Constructing a contrast image to distinguish text from background by calculating local image contrast and gradient.
2. Detecting text stroke edges in the contrast image using Otsu's thresholding and Canny edge detection.
3. Estimating a local threshold for binarization based on mean and standard deviation of detected edge pixel intensities.
4. Converting the image to binary format above the threshold.
5. Post-processing to remove unwanted background pixels.
The method is tested on several degraded documents and shows good performance in recovering text content in a short time.
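Step 3's mean/standard-deviation threshold can be illustrated with a Niblack-style local binarization (a generic sketch; the paper estimates its statistics from detected edge pixels rather than from raw windows, and the window size and k value here are our assumptions):

```python
import numpy as np

def niblack_binarize(img, w=3, k=-0.2):
    """Per-pixel threshold T = local_mean + k * local_std over a w x w
    window. Returns True where the pixel is background (at or above T),
    False where it is dark text."""
    pad = w // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + w, j:j + w]
            out[i, j] = img[i, j] >= win.mean() + k * win.std()
    return out
```

On a bright page with a dark text stroke, background pixels stay True while stroke pixels fall below their local threshold.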
This summarizes a research paper that proposes a new approach for noise estimation on images. It uses wavelet transform because of its sparse nature, then applies a Bayesian approach by imposing a Gaussian distribution on transformed pixels. It checks image quality before noise estimation using maximum likelihood decision criteria. Then designs a new bound-based estimation process using ideas from Cramer-Rao lower bound for signals in additive white Gaussian noise. The experimental results show visually better output after reconstructing the original image.
Analysis of collaborative learning methods for image contrast enhancementIAEME Publication
The document describes collaborative learning methods for image contrast enhancement. It begins with background on image enhancement techniques like histogram equalization. It then summarizes an existing collaborative learning method that determines pixel values from multiple randomly sampled windows. The document proposes a modified method that combines collaborative learning with block-based histogram equalization using randomly sized sliding windows. It is evaluated on medical and underwater images and is found to provide better results than the original collaborative learning method. Quality metrics are used to measure enhancement.
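The histogram equalization that the modified method builds on can be sketched in its plain global form (a minimal illustration, not the proposed sliding-window variant; the function name is ours):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalisation: map each gray level through the
    normalised cumulative histogram to spread the dynamic range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size
    lut = np.round((levels - 1) * cdf).astype(img.dtype)
    return lut[img]
```

A low-contrast image whose values span only a narrow band is stretched toward the full 0-255 range.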
- The document discusses cyclones that impact the coastal areas of Andhra Pradesh state in India. It notes that cyclones commonly occur in May, October and November and cross the coastal areas, causing significant damage to houses, crops and infrastructure. The 1977 cyclone that crossed Krishna and Guntur districts was particularly severe, causing widespread destruction and estimated deaths of over 10,000 people. The document analyzes cyclones that have impacted specific districts and the damage caused. It recommends measures like constructing cyclone-resistant housing to mitigate future cyclone impacts.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Load balancing with switching mechanism in cloud computing environmenteSAT Publishing House
A software framework for dynamic modeling of dc motors at robot jointseSAT Publishing House
This document discusses geo-distributed parallelization of MapReduce jobs across multiple datacenters. It introduces GEO-PACT, a Hadoop-based framework that can efficiently process sequences of parallelization contracts jobs on geo-distributed input data. GEO-PACT uses a group manager to determine optimal execution paths and job managers at each datacenter to execute tasks locally using Hadoop. It employs copy managers to transfer data between datacenters and aggregation managers to combine results. The goal is to optimize execution time by leveraging data locality across geographically distributed data sources.
New optimization scheme for cooperative spectrum sensing taking different snr...eSAT Publishing House
A study on the importance of image processing and its apllicationseSAT Publishing House
Scheduling for interference mitigation using enhanced intercell interference ...eSAT Publishing House
This document summarizes research on scheduling algorithms for interference mitigation using enhanced inter-cell interference coordination (eICIC) in heterogeneous networks. It discusses how deploying low-power base stations (picos) within existing macro cells can improve data rates but also causes interference. eICIC and range expansion techniques are used to mitigate this interference and improve throughput. The document analyzes the performance of round robin scheduling for eICIC and finds that it maintains fairness between users while improving throughput, especially for cell-edge users under varying traffic loads.
A study of localized algorithm for self organized wireless sensor network and...eSAT Publishing House
Process design features of a 5 tonnes/day multi-stage, intermittent drainage...eSAT Publishing House
This document summarizes the process design features of a 5 tonnes-per-day vegetable oil solvent extraction plant using n-hexane. It describes the 7-stage continuous full-immersion extraction process. Material and energy balances were performed, and the plant efficiency was determined to be 0.70. Process calculations found the energy requirement to be 23.17 kJ per kg of extracted oil, and the diffusivity of oils in the solvent averaged 4.0 × 10⁻⁹ m²/s. The mass transfer coefficient was calculated to be 3.2 × 10⁻⁵ kmol/m²·s.
Automatic collision detection for an autonomous robot using proximity sensing...eSAT Publishing House
This document summarizes a study on the mechanical properties of polyester mortar cured at ambient temperature and 80°C. Thermally cured specimens exhibited higher compressive, flexural, and split tensile strengths compared to ambient cured specimens. However, ambient cured samples had a greater modulus of elasticity. The compressive strength, density, and water absorption of the mortar increased as the percentage of polyester resin increased. Thermal curing improved the mechanical properties but reduced the modulus of elasticity and ductility compared to ambient curing. The polyester mortar was found to exhibit properties suitable for construction applications such as earthquake-resistant structures and repair works.
Image Denoising of various images Using Wavelet Transform and Thresholding Te...IRJET Journal
The document discusses image denoising using wavelet transforms and thresholding techniques. It first provides background on image denoising and wavelet transforms. It then reviews several existing studies that used wavelet transforms like Haar, db4, and sym4 along with thresholding to denoise images corrupted with Gaussian and salt-and-pepper noise. Next, it describes the proposed denoising algorithm which involves adding noise to test images, decomposing the noisy images using different wavelet transforms, applying thresholding, and calculating metrics like PSNR to evaluate performance. The algorithm aims to eliminate noise in the wavelet domain using soft and hard thresholding followed by reconstruction.
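The PSNR metric and hard thresholding used in such evaluations can be sketched as follows (generic textbook forms, not the paper's code; soft thresholding would additionally shrink the surviving coefficients by the threshold):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def hard_threshold(c, t):
    """Hard thresholding: zero coefficients with magnitude below t,
    keep the rest unchanged."""
    return np.where(np.abs(c) >= t, c, 0.0)
```

With an MSE of 1 on 8-bit data, PSNR is 10*log10(255^2) ≈ 48.13 dB.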
The document reviews techniques for reducing speckle noise in synthetic aperture radar (SAR) data. It begins by describing the characteristics of speckle noise and its multiplicative nature. It then discusses common spatial domain filtering techniques for SAR data denoising, including Lee filtering, Frost filtering, and Kuan filtering. These are adaptive filters that estimate pixel values based on statistics within a moving window. The document also reviews wavelet-based denoising techniques and their advantages over spatial domain filters, including better preservation of edges. Finally, it provides an overview of future research opportunities in developing new speckle reduction methods.
FusIon - On-Field Security and Privacy Preservation for IoT Edge Devices: Concurrent Defense Against Multiple types of Hardware Trojan Attacksjamesinniss
Efficient fingerprint image enhancement algorithm based on gabor filtereSAT Publishing House
Improved nonlocal means based on pre classification and invariant block matchingIAEME Publication
This document summarizes an article that proposes improvements to the nonlocal means (NLM) image denoising algorithm. The proposed method first applies Gaussian blurring to pre-process the noisy image. It then extracts features from image patches using Hu's moment invariants. Next, it performs k-means clustering on the feature vectors to group similar patches. Finally, it applies row image weighted averaging to reconstruct the image. The experimental results showed this method can perform better denoising than the original NLM, especially at higher noise levels, by providing more reliable candidate patches for the weighted averaging.
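The classical NLM weighting that the paper improves on can be sketched per pixel as follows (illustrative only; the proposed pre-classification, Hu-moment features, clustering, and row-image averaging are not shown, and the parameters are our assumptions):

```python
import numpy as np

def nlm_pixel(img, i, j, patch=1, search=2, h=10.0):
    """Nonlocal-means estimate of one pixel: weighted average of pixels
    whose surrounding patches resemble the patch around (i, j).
    Weights are exp(-||P_i - P_j||^2 / h^2)."""
    H, W = img.shape
    p = img[i - patch:i + patch + 1, j - patch:j + patch + 1].astype(float)
    num, den = 0.0, 0.0
    for r in range(max(patch, i - search), min(H - patch, i + search + 1)):
        for c in range(max(patch, j - search), min(W - patch, j + search + 1)):
            q = img[r - patch:r + patch + 1, c - patch:c + patch + 1].astype(float)
            w = np.exp(-np.sum((p - q) ** 2) / h ** 2)
            num += w * img[r, c]
            den += w
    return num / den
```

The quadratic cost of comparing every patch pair is exactly the expense the paper's clustering step is designed to cut down.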
The document proposes techniques to detect and remove Gaussian, impulse, and mixed noise from MR brain images. It presents an architecture that uses Extreme Learning Machine for noise detection and separate filters for Gaussian and impulse noise removal. Experimental results show that the proposed filtering technique outperforms existing methods like mean, bilateral, and non-local mean filters in terms of metrics like PSNR, MSE, and SSIM for denoising images with different noise levels and types.
IMAGE DENOISING BY MEDIAN FILTER IN WAVELET DOMAINijma
The details of an image with noise may be restored by removing the noise through a suitable image de-noising method. In this research, a new method of image de-noising based on applying a median filter (MF) in the wavelet domain is proposed and tested. Various wavelet transform filters are used in conjunction with the median filter in order to obtain better de-noising results and, consequently, to select the best-suited filter. The wavelet transform, working on the frequency sub-bands split from an image, is a powerful method for image analysis. In this experimental work, the proposed method gives better results than using either the wavelet transform or the median filter alone. MSE and PSNR values are used to measure the improvement in the de-noised images.
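The median filter at the core of the method can be sketched as a plain 2D spatial filter (the wavelet-domain step, i.e. applying it to sub-band coefficients, is omitted here; the function name is ours):

```python
import numpy as np

def median_filter(img, w=3):
    """w x w median filter with edge replication; very effective on
    impulse (salt-and-pepper) noise."""
    pad = w // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + w, j:j + w])
    return out
```

A single "salt" pixel is the minority value in every 3x3 window that contains it, so the median removes it completely.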
An adaptive method for noise removal from real world imagesIAEME Publication
The document summarizes an adaptive method for noise removal from real world images. It proposes modifying the bilateral filter, which considers both spatial and intensity distances between pixels. The modified filter adapts its strength based on the local noise level in the image. It estimates the smoothing parameter by analyzing noise strength factors within blocks of different sizes. This helps determine the appropriate block size to use for a given image region. The filter aims to remove Gaussian noise while preserving edges and details to enhance image quality. Experimental results show it performs well across different images for a wide range of noise levels.
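The bilateral filter that the paper modifies weights neighbours by both spatial and intensity distance; a single-pixel sketch (the parameter values are illustrative, and the paper's contribution is adapting the range parameter to the local noise level, which is not reproduced):

```python
import numpy as np

def bilateral_pixel(img, i, j, w=2, sigma_s=2.0, sigma_r=25.0):
    """Bilateral estimate of one pixel: weights combine spatial distance
    (sigma_s) and intensity difference (sigma_r), so pixels across a
    strong edge contribute almost nothing and the edge is preserved."""
    H, W = img.shape
    num, den = 0.0, 0.0
    for r in range(max(0, i - w), min(H, i + w + 1)):
        for c in range(max(0, j - w), min(W, j + w + 1)):
            ds = (r - i) ** 2 + (c - j) ** 2
            dr = (float(img[r, c]) - float(img[i, j])) ** 2
            wt = np.exp(-ds / (2 * sigma_s ** 2) - dr / (2 * sigma_r ** 2))
            num += wt * img[r, c]
            den += wt
    return num / den
```

Near a sharp step edge, the dark side stays dark because bright neighbours receive near-zero range weight.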
IMAGE AUTHENTICATION THROUGH ZTRANSFORM WITH LOW ENERGY AND BANDWIDTH (IAZT)IJNSA Journal
In this paper a Z-transform based image authentication technique termed as IAZT has been proposed to authenticate gray scale images. The technique uses energy efficient and low bandwidth based invisible data embedding with a minimal computational complexity. Near about half of the bandwidth is required compared to the traditional Z–transform while transmitting the multimedia contents such as images with authenticating message through network. This authenticating technique may be used for copyright protection or ownership verification. Experimental results are computed and compared with the existing authentication techniques like Li’s method [11], SCDFT [13], Region-Based method [14] and many more based on Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Image Fidelity (IF), Universal Quality Image (UQI) and Structural Similarity Index Measurement (SSIM) which shows better performance in IAZT.
Investigations on the role of analysis window shape parameter in speech enhan...karthik annam
This document summarizes a research paper that investigates the role of analysis window shape parameter in speech enhancement using a hybrid method. The paper analyzes how different window shapes (e.g. Hamming, Gaussian, Kaiser) impact speech enhancement when used to segment noisy speech frames. Six objective measures are used to evaluate enhanced speech quality for different window shapes. Experimental results show that window shape plays an important role in the enhancement process. An optimum shape constant is proposed to achieve better speech quality. Two new hybrid thresholding schemes combining different thresholding methods are also proposed and evaluated.
ALEXANDER FRACTIONAL INTEGRAL FILTERING OF WAVELET COEFFICIENTS FOR IMAGE DEN...sipij
The present paper proposes an efficient denoising algorithm which works well for images corrupted with Gaussian and speckle noise. The denoising algorithm utilizes the Alexander fractional integral filter, which works by constructing fractional mask windows computed using the Alexander polynomial. Prior to applying the designed filter, the corrupted image is decomposed using the symlet wavelet, from which only the horizontal, vertical and diagonal components are denoised using the Alexander integral filter. A significant increase in reconstruction quality was noticed when the approach was applied to the wavelet-decomposed image rather than directly to the noisy image. Quantitatively, the results are evaluated using the peak signal-to-noise ratio (PSNR), which averaged 30.81 for images corrupted with Gaussian noise and 36.52 for images corrupted with speckle noise, clearly outperforming the existing methods.
Performance Analysis of Acoustic Echo Cancellation TechniquesIJERA Editor
Adaptive filters are mainly implemented in the time domain, which works efficiently in most applications. In many applications, however, the impulse response becomes too large, which increases the complexity of the adaptive filter beyond the level at which it can still be implemented efficiently in the time domain; acoustic echo cancellation (AEC) is one example of where this can happen. The alternative solution is to implement the filters in the frequency domain. AEC has many applications in a wide variety of problems in industrial operations, manufacturing and consumer products. This paper presents a comparative analysis of different acoustic echo cancellation techniques: the frequency-domain adaptive filter (FDAF), least mean squares (LMS), normalized least mean squares (NLMS) and sign-error (SE) algorithms. The results are compared for different step sizes, and the performance of these techniques is measured in terms of echo return loss enhancement (ERLE), mean square error (MSE) and peak signal-to-noise ratio (PSNR).
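The NLMS update compared in the paper can be sketched as a system-identification loop (illustrative only; the tap count, step size, and test signals are our choices, not the paper's setup):

```python
import numpy as np

def nlms(x, d, order=4, mu=0.5, eps=1e-8):
    """Normalised LMS: w <- w + mu * e * u / (eps + ||u||^2),
    where u holds the current input taps and e = d - w.u is the
    residual echo after cancellation."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]     # [x[n], x[n-1], ...]
        e[n] = d[n] - w @ u
        w += mu * e[n] * u / (eps + u @ u)
    return w, e

# identify a known 4-tap echo path from its input/output signals
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, h)[:len(x)]
w, e = nlms(x, d)
```

On noiseless data the estimated taps `w` converge to the true echo path `h`, and the residual error decays toward zero.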
Translation Invariance (TI) based Novel Approach for better De-noising of Dig...IRJET Journal
1. The document discusses a novel Translation Invariance (TI) approach for improving the performance of various digital image processing filters for image denoising.
2. It describes applying filters like convolution, wiener, gaussian etc. both without TI (directly on noisy image) and with TI (by shifting the image and averaging results) to denoise images.
3. The results found that using the TI approach, where the filters are applied after shifting the image and averaging the outputs, produced better performance and noise removal compared to directly applying the filters without translation invariance. This was also verified using edge detection tests.
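The shift-filter-unshift-average idea (cycle spinning) can be sketched in 1D, with a simple median filter standing in for the various filters the paper tests (function names and the shift count are our assumptions):

```python
import numpy as np

def median3(x):
    """1D 3-tap median filter with edge replication."""
    p = np.concatenate([x[:1], x, x[-1:]])
    return np.median(np.stack([p[:-2], p[1:-1], p[2:]]), axis=0)

def ti_filter(x, filt, shifts=4):
    """Cycle spinning: shift, filter, unshift, and average the results,
    which suppresses artefacts tied to the sampling grid."""
    acc = np.zeros_like(x, dtype=float)
    for s in range(shifts):
        acc += np.roll(filt(np.roll(x, s)), -s)
    return acc / shifts
```

A lone impulse is removed at every shift, while a constant signal passes through unchanged.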
A vlsi architecture for efficient removal of noises and enhancement of imagesIAEME Publication
IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
Volume: 03 Special Issue: 03 | May-2014 | NCRIET-2014, Available @ http://www.ijret.org | 113
SPECKLE NOISE REDUCTION USING HYBRID TMAV BASED FUZZY FILTER

Nagashettappa Biradar¹, M.L. Dewal², ManojKumar Rohit³
¹Research Scholar, Electrical Engineering Department, Indian Institute of Technology Roorkee, Roorkee, India
²Professor, Electrical Engineering Department, Indian Institute of Technology Roorkee, Roorkee, India
³Professor, Cardiology Department, PGIMER, Chandigarh, Punjab, India
Abstract
The multiplicative nature of the speckle noise present in imaging modalities such as echocardiography complicates the despeckling procedure, since noise must be removed while edges are well preserved. A novel speckle reduction technique based on the integration of a moving average filter using a fuzzy triangular membership function with moving average center (TMAV) with a Wiener filter is proposed and analyzed in this paper. The fuzzy TMAV filter is applied for speckle noise reduction in the homomorphic domain, and its denoising behavior is fine-tuned by sequentially cascading it with a Wiener filter. The hybrid TMAV filter enhances edges while achieving a higher degree of noise reduction. The performance of the proposed filter is compared with ten state-of-the-art denoising techniques. The figure of merit (FOM) and structural similarity (SSIM) index, along with traditional parameters, are superior for the hybrid fuzzy filter in comparison to methods such as probability patch based (PPB), non-local means (NLM), and posterior sampling based Bayesian estimation (PSBE) filters.
Keywords: Speckle reduction, TMAV based fuzzy filter, Wiener filter, Hybrid fuzzy filter, Edge preservation
-----------------------------------------------------------------------***-----------------------------------------------------------------------
1. INTRODUCTION
Multiplicative noise in coherent imaging modalities such as synthetic aperture radar (SAR), laser imaging, remote sensing, optical coherence tomography (OCT), and ultrasound (US) poses many difficulties, such as masking of finer details and hampering human interpretation due to low contrast and low visibility. It is therefore necessary to incorporate post-processing steps that remove noise while preserving edges and enhance the contrast of the images [1-5].
The omnipresence of noise has led to the development of various types of filters based on principles such as anisotropic diffusion (AD) [2, 6], wavelets [1, 3, 5, 7], adaptive filters such as enhanced Lee, enhanced Frost, and Wiener [1, 2, 4, 5, 7], homomorphic filtering [1, 7], Bayesian estimation [3], and non-local (NL) means [8]. Each filter behaves differently on different types of images, offering its own advantages and drawbacks, compelling researchers to fine-tune each of its variants.
Basic noise reduction techniques such as the median filter, adaptive weighted median filter (AWMF), and moving average (MAV) filter are very popular for additive noise removal, but their application to ultrasound images is less researched [1]. Poor noise removal capacity, loss of finer image details, and selection of appropriate window size and shape are the basic issues that need to be resolved in these techniques [1, 3, 5, 9]. The denoising characteristics of the spatially adaptive Wiener filter for additive Gaussian noise are highly acceptable, and it is therefore applied to speckle noise reduction in a homomorphic scheme. The homomorphic Wiener filter is used by many researchers as a baseline for comparing the despeckling results obtained by their respective methods [5, 7, 10, 11].
The noise reduction capability of diffusion-based despeckling is questionable when noise contamination is high, as in the case of OCT [3]. An extension of the NLM filter was proposed by Deledalle et al. [12], which incorporates a noise distribution model instead of computing the Euclidean distance for pixel similarity calculations. Fuzzy filters incorporating the concepts of moving average and median have been tested and proven effective in reducing various types of additive noise [13, 14], but they have not been extensively studied for multiplicative noise reduction. The performance of fuzzy filters has been reported only in terms of MSE and equivalent number of looks (ENL) [13, 14], but in medical imaging applications it is also necessary to preserve edges [1].
To address the issue of speckle noise reduction in general, and to fine-tune the denoising characteristics of the fuzzy filter in particular, an integrated despeckling technique based on the sequential combination of a TMAV based fuzzy filter with an adaptive Wiener filter in the homomorphic domain is proposed and analyzed in this paper. The performance of the proposed method is expressed in terms of seven performance parameters along with visual quality assessment. Importance is given to edge preservation, overall
quality of the denoised image, and maintenance of structural integrity.
2. MODELING EMPLOYED FOR DENOISING
The multiplicative speckle noise is modeled as

f(i, j) = g(i, j) · n(i, j)    (1)

where g(i, j) is the noise-free image, f(i, j) is the acquired image, n(i, j) is the multiplicative noise, and i and j are variables indicating the spatial locations [1, 15].
The multiplicative noise is converted to approximately additive noise by projecting the image into logarithmic space [1]:

log[f(i, j)] = log[g(i, j) · n(i, j)]
             = log[g(i, j)] + log[n(i, j)]    (2)

Writing f_ij = log[f(i, j)], g_ij = log[g(i, j)], and n_ij = log[n(i, j)], eq. (2) is rewritten as

f_ij = g_ij + n_ij    (3)
Eq. (3) allows methods developed for additive white Gaussian noise to be tested and analyzed on images corrupted by multiplicative noise. In these methods the input is the logarithmically transformed image log(f(i, j)), and the output is obtained by taking the exponential of the denoised image:

ĝ(i, j) = exp(MX(log(f(i, j))))    (4)

where MX represents the filter being used.
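The homomorphic scheme of eqs. (2)-(4) can be sketched in a few lines. This is an illustrative Python/NumPy translation, not the paper's MATLAB code; the `+1` offset follows the Step 2 convention given later, and the simple moving-average filter stands in for an arbitrary additive-noise filter MX.

```python
import numpy as np

def homomorphic_denoise(f, spatial_filter):
    """Homomorphic scheme of eqs. (2)-(4): log transform, apply an
    additive-noise filter MX, then exponentiate back."""
    f_log = np.log(f.astype(np.float64) + 1.0)  # +1 avoids log(0)
    y = spatial_filter(f_log)                   # MX: any additive-noise filter
    return np.exp(y) - 1.0                      # back to the intensity domain

def moving_average(img, k=3):
    # simple k x k mean filter as an illustrative choice for MX
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for r in range(k):
        for s in range(k):
            out += p[r:r + img.shape[0], s:s + img.shape[1]]
    return out / (k * k)

# usage on a random "speckled" image
rng = np.random.default_rng(0)
noisy = rng.random((32, 32)) * 255.0
den = homomorphic_denoise(noisy, moving_average)
```

Any filter designed for additive Gaussian noise can be dropped in as `spatial_filter`; the fuzzy and Wiener stages of the proposed method both operate inside this log-transformed domain.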
2.1 Fuzzy Filters
The median filter effectively suppresses speckle noise, but edges are not well preserved [13, 14]. Fuzzy filters with a moving average center preserve image sharpness, yet they too do not preserve edges well. To address this issue, it is proposed to integrate the noise reduction capabilities of the Wiener filter with the fuzzy filter.
Fig-1: Proposed hybrid TMAV based fuzzy filter
2.2 Proposed Hybrid TMAV Based Fuzzy Filtering Algorithm
The block diagram of the proposed hybrid TMAV based fuzzy filter is shown in Fig. 1, and each step of the implementation is described below:
Step 1: Consider a standard noise-free image, resize it to 512x512, convert it to gray scale, and embed each image with synthetic speckle noise.
Step 2: Project the noisy image into logarithmic space according to eq. (2). The output is of the form f=log(double(f)+1); where f is the noisy image.
Step 3: Membership values are calculated using the fuzzy triangular membership function with moving average center (TMAV), defined by eq. (5) and eq. (6), with different window and padding sizes.
F[f(i+r, j+s)] = 1 − |f(i+r, j+s) − f_mav(i, j)| / f_mv(i, j),
                     for |f(i+r, j+s) − f_mav(i, j)| ≤ f_mv(i, j)
               = 1,  for f_mv(i, j) = 0    (5)
f_mv(i, j) = max[ f_max(i, j) − f_mav(i, j),
                  f_mav(i, j) − f_min(i, j) ]    (6)
The maximum, minimum, and moving average values are represented by f_max(i, j), f_min(i, j), and f_mav(i, j) respectively, with (r, s) ∈ A, the window at indices (i, j).
Step 4: The output of the fuzzy TMAV filter is estimated using eq. (7) given below:

y(i, j) = Σ_{(r,s)∈A} F[f(i+r, j+s)] · f(i+r, j+s) / Σ_{(r,s)∈A} F[f(i+r, j+s)]    (7)

where F[f(i, j)] and A are the window function and window area, respectively.
Step 5: The output of the fuzzy filter is passed through an adaptive Wiener filter with different window sizes.
Step 6: The output of the Wiener filter is projected back to the non-logarithmic space using the exponential operation, represented by Ydenoised = exp(y) - 1.
Step 7: Performance parameters are computed and results analyzed using eq.(8) to eq.(14), along with visual quality assessment.
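Steps 3 and 4 above can be sketched in NumPy as follows. This is a minimal illustration of eqs.(5)-(7), not the authors' MATLAB code; the symmetric border padding is an assumption about how window edges are handled.

```python
import numpy as np

def tmav_fuzzy_filter(f, win=3):
    """Fuzzy filter with triangular membership over the moving average
    (TMAV): a per-window weighted average following eqs.(5)-(7)."""
    f = np.asarray(f, dtype=np.float64)
    pad = win // 2
    fp = np.pad(f, pad, mode='symmetric')     # border padding (assumed)
    out = np.empty_like(f)
    M, N = f.shape
    for i in range(M):
        for j in range(N):
            w = fp[i:i + win, j:j + win]      # window A at (i, j)
            f_mav = w.mean()                  # moving average center
            f_mv = max(w.max() - f_mav, f_mav - w.min())   # eq.(6)
            if f_mv == 0:                     # flat window: F = 1, eq.(5)
                out[i, j] = f_mav
                continue
            F = 1.0 - np.abs(w - f_mav) / f_mv   # triangular membership, eq.(5)
            out[i, j] = (F * w).sum() / F.sum()  # weighted average, eq.(7)
    return out
```

Since $f_{mv}$ is the largest deviation in the window, every membership value lies in $[0, 1]$, so the weighted average in eq.(7) is always well defined for a non-flat window.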
The above steps are repeated for different levels of noise artificially added to the noise-free images, and for different window sizes of the fuzzy and Wiener filters, varying over 3x3, 5x5, 7x7 and 9x9.
All experiments are performed using seven standard 512x512 test images: Lena, Mandrill, Cameraman, Barbara, Monarch, Woman dark hair, and House [12]. Synthetic noise is added to each of these images using the MATLAB built-in function imnoise, with variance varying from 0.01 to 0.5. The MATLAB built-in function wiener2 is employed for Wiener filtering, and experiments are performed with different combinations of window sizes for both the fuzzy and Wiener filters. All experiments are performed using MATLAB R2010a.
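The speckle model applied by MATLAB's imnoise, J = I + n.*I with zero-mean uniform noise n of a given variance, can be approximated in NumPy as below; treating n as uniform (rather than, e.g., Gaussian) is an assumption about imnoise's internals, and the function name is illustrative.

```python
import numpy as np

def add_speckle(img, var=0.1, rng=None):
    """Approximate imnoise(I, 'speckle', var): multiplicative noise
    J = I + n*I, with n zero-mean uniform of the requested variance."""
    rng = np.random.default_rng(rng)
    a = np.sqrt(3.0 * var)                     # uniform on [-a, a] has variance a^2/3
    n = rng.uniform(-a, a, size=np.shape(img))
    return img + n * img
```

A rough stand-in for wiener2 in this pipeline would be scipy.signal.wiener, though its local-statistics estimation differs in detail from MATLAB's implementation.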
3. RESULTS AND DISCUSSIONS
The denoising capabilities of the fuzzy filter, Wiener filter, and the proposed filtering technique are evaluated using peak signal to noise ratio (PSNR), mean square error (MSE), correlation coefficient (ρ), and signal to noise ratio (SNR), computed from the original image $f_{ij}$ and denoised image $g_{ij}$ [1, 2]. Edge preservation and image distortion are measured using the figure of merit (FOM), beta metric (β), and structural similarity (SSIM) index [2, 16]. The parameters are defined as follows:
\mathrm{PSNR} = 20 \log_{10}\!\left(\frac{255}{\sqrt{\mathrm{MSE}(f_{ij}, g_{ij})}}\right) \quad (8)

\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[f(i,j) - g(i,j)\right]^{2} \quad (9)
\mathrm{SNR} = 10 \log_{10}\!\left(\frac{\mathrm{var}(f_{ij})}{\mathrm{MSE}(f_{ij}, g_{ij})}\right) \quad (10)
\mathrm{FOM} = \frac{1}{\max(n_d, n_r)}\sum_{j=1}^{n_d}\frac{1}{1 + \gamma\, d_j^{2}} \quad (11)
\rho = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} g_{ij}\, f_{ij}}{\sqrt{\left(\sum_{i=1}^{M}\sum_{j=1}^{N} g_{ij}^{2}\right)\left(\sum_{i=1}^{M}\sum_{j=1}^{N} f_{ij}^{2}\right)}} \quad (12)
\beta = \frac{D(\hat{g} - \bar{g},\, \hat{f} - \bar{f})}{\sqrt{D(\hat{g} - \bar{g},\, \hat{g} - \bar{g})\; D(\hat{f} - \bar{f},\, \hat{f} - \bar{f})}} \quad (13)
\mathrm{SSIM} = \frac{(2\, l_f\, l_g + c_1)(2\, \sigma_{fg} + c_2)}{(l_f^{2} + l_g^{2} + c_1)(\sigma_f^{2} + \sigma_g^{2} + c_2)} \quad (14)
where γ is the scalar multiplier utilized as a penalization factor with typical value 1/9; $n_d$ and $n_r$ are the numbers of edge pixels in the original and processed images, respectively; $d_j$ is the Euclidean distance between corresponding edge pixels; $\hat{g}$ and $\hat{f}$ represent the filtered versions of the original and processed images; the mean pixel intensities in the regions $\hat{g}$ and $\hat{f}$ are represented by $\bar{g}$ and $\bar{f}$ respectively; and $c_1 = (K_1 L)^2$, $c_2 = (K_2 L)^2$.
Table -1: Comparison of performance parameters obtained using the proposed hybrid TMAV based fuzzy filter

Metric | Image | Without filter | Wiener Filter | Fuzzy TMAV Filter | Proposed Filter
SSIM   | Lena  | 0.5733  | 0.7394 | 0.8356 | 0.8576
SSIM   | Monar | 0.6742  | 0.8192 | 0.8855 | 0.8862
SSIM   | House | 0.4569  | 0.6512 | 0.8238 | 0.8445
SSIM   | DHair | 0.6173  | 0.7978 | 0.9126 | 0.9206
FOM    | Lena  | 0.3833  | 0.4913 | 0.8219 | 0.8338
FOM    | Monar | 0.4440  | 0.5517 | 0.8536 | 0.8547
FOM    | House | 0.3393  | 0.3896 | 0.5101 | 0.5727
FOM    | DHair | 0.3792  | 0.4285 | 0.6259 | 0.7481
ρ      | Lena  | 0.9767  | 0.9963 | 0.9963 | 0.9965
ρ      | Monar | 0.9768  | 0.9965 | 0.9955 | 0.9953
ρ      | House | 0.9765  | 0.9968 | 0.9986 | 0.9988
ρ      | DHair | 0.9784  | 0.9967 | 0.9983 | 0.9984
SNR    | Lena  | 26.370  | 41.987 | 39.202 | 39.334
SNR    | Monar | 26.456  | 42.405 | 37.247 | 36.688
SNR    | House | 26.293  | 43.261 | 44.829 | 44.963
SNR    | DHair | 27.247  | 42.342 | 41.634 | 41.638
MSE    | Lena  | 849.082 | 140.62 | 193.78 | 190.86
MSE    | Monar | 738.898 | 117.79 | 213.31 | 227.513
MSE    | House | 1024.85 | 145.30 | 121.29 | 119.437
MSE    | DHair | 669.937 | 117.83 | 127.84 | 127.793
IQI    | Lena  | 0.2781  | 0.4286 | 0.4610 | 0.4839
IQI    | Monar | 0.3232  | 0.5438 | 0.6151 | 0.6309
IQI    | House | 0.1491  | 0.3365 | 0.3892 | 0.4002
IQI    | DHair | 0.1667  | 0.4000 | 0.5320 | 0.5589
PSNR   | Lena  | 18.841  | 26.650 | 25.258 | 25.324
PSNR   | Monar | 19.445  | 27.419 | 24.841 | 24.561
PSNR   | House | 19.149  | 24.033 | 21.920 | 22.082
PSNR   | DHair | 19.870  | 27.418 | 27.064 | 27.066
Table -2: Comparison of IQI, PSNR, FOM, SSIM

Ref. | Filter Name  | IQI    | PSNR  | FOM    | SSIM
[17] | Geometric    | 0.2630 | 17.94 | 0.3507 | 0.5202
[11] | Wiener       | 0.3702 | 23.45 | 0.3888 | 0.6263
[18] | OWT          | 0.3042 | 19.78 | 0.3604 | 0.6096
[19] | BayesShrink  | 0.4470 | 26.97 | 0.5193 | 0.7454
[20] | Curvelet     | 0.3025 | 19.73 | 0.3651 | 0.6070
[3]  | PSBE         | 0.3032 | 19.76 | 0.3813 | 0.6101
[8]  | NLM          | 0.4230 | 23.34 | 0.5931 | 0.7884
[12] | PPB          | 0.3880 | 24.94 | 0.6517 | 0.7949
[21] | PMAD         | 0.3111 | 19.95 | 0.3848 | 0.6138
[22] | CED          | 0.4231 | 23.46 | 0.4166 | 0.6843
     | Proposed     | 0.4290 | 22.16 | 0.6911 | 0.8032
The performance parameters of the proposed hybrid TMAV based fuzzy filter are compared with the fuzzy filter and Wiener filter (WF) in Table 1. Analysis of the results tabulated in Table 1 and Table 2 reveals that the adaptive Wiener filter in the homomorphic domain is superior to the fuzzy TMAV filter in terms of traditional parameters such as SNR and PSNR.
Fig-3: Comparison of visual quality of Lena image at σ=0.1
for Fuzzy TMAV filter, WF and proposed filter
But the performance of the fuzzy filter is superior in terms of IQI, SSIM, and FOM. It is also observed that the performance of the proposed hybrid algorithm is superior to both the fuzzy and Wiener filters in terms of both edge preservation and noise reduction, for all noise levels and images. The values of SSIM, FOM, IQI, and ρ are enhanced by the integration of the fuzzy filter with the Wiener filter. Denoising results obtained for noise variance equal to 0.1 are compared in Table 1; results were also obtained for noise variance values ranging from 0.01 to 0.5. Improvements with the proposed denoising technique are more pronounced at higher values of noise variance. It is also observed that at lower noise levels the embedding of the Wiener filter results in over-smoothing, although the edges and structure of the images are well preserved. The FOM and IQI obtained using the proposed method are almost double those of the noisy image. The value of the correlation coefficient, ρ ≥ 0.99 for all images, shows that the input and output are highly correlated.
Based on the analysis of the results in Table 1, it can be concluded that by embedding the Wiener filter in the fuzzy TMAV filter, edge preservation and structural similarity are enhanced. The visual quality of the denoised Lena image using the fuzzy filter and the proposed filter is compared in Fig.3 for noise variance equal to 0.1: a large amount of noise is retained by the fuzzy filter, while noise reduction is clearly more pronounced with the proposed filter. The performance of the proposed hybrid TMAV based fuzzy filter is compared with 10 state-of-the-art denoising techniques in Table 2.
Fig-4: Visual quality comparison between proposed and other
denoising techniques
The MATLAB functions provided by the authors of NLM [8], PPB [12], orthogonal wavelet thresholding (OWT) [18], and Bayes shrinkage (BayesShrink) [19] are used for comparing the results. The visual quality of the denoised images using the proposed method and state-of-the-art denoising techniques is compared in Fig.4. The PSNR of the proposed method is higher than that of the geometric filter operated with four iterations, OWT, curvelet, and logarithmic PSBE, and lower than that of BayesShrink, NLM, PPB, and CED based denoising techniques. The IQI of the proposed method is superior to that of all methods except BayesShrink based denoising. The SSIM and FOM obtained for the hybrid TMAV based filter are superior to those of all other methods tabulated in Table 2.
4. CONCLUSIONS
The edge preservation capabilities of the TMAV based fuzzy filter are enhanced by the integration of the Wiener filter. Not only are edges and structures well preserved, but a higher amount of speckle noise is also removed by the proposed integration technique. The proposed denoising scheme would be useful for edge-preserving denoising of images acquired from coherent imaging modalities, albeit with fractionally higher computation time. Comparison of performance parameters and visual quality assessment reveals that the proposed scheme is a refined version of the fuzzy filter in terms of noise reduction and edge preservation. The improvement in performance is demonstrated by enhanced IQI, FOM, and SSIM values along with visual quality.
CONFLICT OF INTEREST
Nagashettappa Biradar, M. L. Dewal, and Manoj Kumar Rohit declare that they have no conflict of interest.
ACKNOWLEDGEMENTS
The authors are thankful to BKIT, Bhalki for sponsoring the first author to pursue a PhD with financial assistance.
REFERENCES
[1] Mateo, J.L., and Fernández-Caballero, A.: ‘Finding out
general tendencies in speckle noise reduction in
ultrasound images’, Expert Systems with Applications,
2009, 36, (4), pp. 7786-7797
[2] Finn, S., Glavin, M., and Jones, E.: ‘Echocardiographic
speckle reduction comparison’, Ultrasonics,
Ferroelectrics and Frequency Control, IEEE
Transactions on, 2011, 58, (1), pp. 82-101
[3] Wong, A., Mishra, A., Bizheva, K., and Clausi, D.A.:
‘General Bayesian estimation for speckle noise
reduction in optical coherence tomography retinal
imagery’, Opt. Express, 2010, 18, (8), pp. 8338-8352
[4] Qiu, F., Berglund, J., Jensen, J.R., Thakkar, P., and
Ren, D.: ‘Speckle noise reduction in SAR imagery
using a local adaptive median filter', GIScience & Remote Sensing, 2004, 41, (3), pp. 244-266
[5] Ozcan, A., Bilenca, A., Desjardins, A.E., Bouma, B.E.,
and Tearney, G.J.: ‘Speckle reduction in optical
coherence tomography images using digital filtering’,
JOSA A, 2007, 24, (7), pp. 1901-1910
[6] Yu, Y., and Acton, S.T.: ‘Speckle reducing anisotropic
diffusion’, Image Processing, IEEE Transactions on,
2002, 11, (11), pp. 1260-1270
[7] Nicolae, M.C., Moraru, L., and Onose, L.:
‘Comparative approach for speckle reduction in
medical ultrasound images’, Romanian j. biophys,
2010, 20, (1), pp. 13-21
[8] Coupé, P., Hellier, P., Kervrann, C., and Barillot, C.:
‘Nonlocal means-based speckle filtering for ultrasound
images’, Image Processing, IEEE Transactions on,
2009, 18, (10), pp. 2221-2229
[9] Czerwinski, R.N., Jones, D.L., and O'Brien Jr, W.D.: 'Ultrasound speckle reduction by directional median filtering', Conf. Proc. IEEE Image Processing, 1995, pp. 358-361
[10] Gupta, S., Chauhan, R., and Sexana, S.: ‘Wavelet-
based statistical approach for speckle reduction in
medical ultrasound images’, Medical and Biological
Engineering and computing, 2004, 42, (2), pp. 189-192
[11] Pizurica, A., Philips, W., Lemahieu, I., and Acheroy,
M.: ‘A versatile wavelet domain noise filtration
technique for medical imaging’, Medical Imaging,
IEEE Transactions on, 2003, 22, (3), pp. 323-331
[12] Deledalle, C.-A., Denis, L., and Tupin, F.: ‘Iterative
weighted maximum likelihood denoising with
probabilistic patch-based weights’, Image Processing,
IEEE Transactions on, 2009, 18, (12), pp. 2661-2672
[13] Kwan, H., and Cai, Y.: 'Fuzzy filters for image filtering', Conf. Proc. IEEE Circuits and Systems, 2002, 3, pp. III-672-675
[14] Kwan, H.K.: 'Fuzzy filters for noise reduction in images', in Fuzzy Filters for Image Processing, Springer Berlin Heidelberg, 2003, pp. 25-53
[15] Zong, X., Geiser, E.A., Laine, A.F., and Wilson, D.C.: 'Homomorphic wavelet shrinkage and feature emphasis for speckle reduction and enhancement of echocardiographic images', Conf. Proc. SPIE, 1996, pp. 658-667
[16] Mittal, D., Kumar, V., Saxena, S.C., Khandelwal, N.,
and Kalra, N.: ‘Enhancement of the ultrasound images
by modified anisotropic diffusion method’, Medical &
biological engineering & computing, 2010, 48, (12),
pp. 1281-1291
[17] Luisier, F., Blu, T., and Unser, M.: ‘A New SURE
Approach to Image Denoising: Interscale Orthonormal
Wavelet Thresholding’, Image Processing, IEEE
Transactions on, 2007, 16, (3), pp. 593-606
[18] Chang, S.G., Yu, B., and Vetterli, M.: ‘Adaptive
wavelet thresholding for image denoising and
compression’, Image Processing, IEEE Transactions
on, 2000, 9, (9), pp. 1532-1546
[19] Crimmins, T.R.: ‘Geometric filter for speckle
reduction’, Applied optics, 1985, 24, (10), pp. 1438-
1443
[20] Starck, J.-L., Candès, E.J., and Donoho, D.L.: ‘The
curvelet transform for image denoising’, Image
Processing, IEEE Transactions on, 2002, 11, (6), pp.
670-684
[21] Perona, P., and Malik, J.: ‘Scale-space and edge
detection using anisotropic diffusion’, Pattern Analysis
and Machine Intelligence, IEEE Transactions on, 1990,
12, (7), pp. 629-639
[22] Weickert, J.: ‘Coherence-enhancing diffusion filtering’,
International Journal of Computer Vision, 1999, 31, (2-
3), pp. 111-127