The main challenge faced by present face recognition techniques and smoothing filters is handling illumination. The variations in face images caused by illumination are often larger than the inter-person differences used to distinguish identities. Yet face recognition under varying illumination matters in many applications dealing with uncooperative subjects, where the full potential of face recognition as a non-intrusive biometric can be realized. A great deal of work has gone into research on illumination and face recognition in recent years, and many important methods have been introduced. Nevertheless, several concerns with face recognition under illumination require further attention, including an incomplete understanding of the subspaces spanned by images under varying illumination, the intractability of face modelling, and the complicated mechanisms of face surface reflection.
A New Approach of Medical Image Fusion using Discrete Wavelet Transform (IDES Editor)
MRI-PET medical image fusion has important clinical significance. Medical image fusion is the key step after registration, providing an integrative display of two images. The PET image shows brain function at low spatial resolution, while the MRI image shows brain tissue anatomy and contains no functional information. Hence, an ideal fused image should contain both the functional information and the spatial characteristics, with no spatial or colour distortion. The DWT coefficients of the MRI and PET intensity values are fused based on the even-degree method and the cross-correlation method. The performance of the proposed image fusion scheme is evaluated with PSNR and RMSE and compared with existing techniques.
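The PSNR and RMSE metrics used for evaluation above follow standard definitions. As a minimal sketch (function names are ours, not the paper's), they can be computed against a reference image like this:

```python
import numpy as np

def rmse(reference, fused):
    """Root-mean-square error between a reference and a fused image."""
    ref = reference.astype(np.float64)
    fus = fused.astype(np.float64)
    return np.sqrt(np.mean((ref - fus) ** 2))

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the fused
    image is closer to the reference."""
    err = rmse(reference, fused)
    if err == 0:
        return float("inf")
    return 20.0 * np.log10(peak / err)
```

For example, an 8-bit image that differs from its reference by a constant 10 grey levels has RMSE 10 and PSNR of about 28.1 dB.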
This document proposes a new method for multifocus image fusion that operates based on categorizing image energy levels. It calculates the energy of gradient for input images to identify focused vs. blurred regions. The images are divided into low, mid, and high energy regions using thresholds on the average energy map. Pixels are then selected from the input images for each region using different fusion rules. Experimental results on book, clock, leaf, and wafer images show the proposed method produces clearer fused images without artifacts compared to other spatial and transform domain fusion methods.
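The energy-of-gradient focus measure that drives this multifocus method can be sketched as follows. This is an illustrative two-input, winner-take-all simplification, not the paper's three-region thresholding scheme; all names are ours:

```python
import numpy as np

def energy_of_gradient(img):
    """Per-pixel energy of gradient: squared forward differences in
    x and y. Sharper (in-focus) regions give higher energy."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1, append=img[:, -1:])  # horizontal gradient
    gy = np.diff(img, axis=0, append=img[-1:, :])  # vertical gradient
    return gx ** 2 + gy ** 2

def select_focused(img_a, img_b, win=5):
    """Toy fusion rule: at each pixel take the input whose energy of
    gradient, summed over a win x win window, is larger."""
    def box_sum(e):
        # sum over a win x win neighbourhood via shifted additions
        pad = win // 2
        ep = np.pad(e, pad, mode="edge")
        out = np.zeros_like(e)
        for dy in range(win):
            for dx in range(win):
                out += ep[dy:dy + e.shape[0], dx:dx + e.shape[1]]
        return out

    ea = box_sum(energy_of_gradient(img_a))
    eb = box_sum(energy_of_gradient(img_b))
    return np.where(ea >= eb, img_a, img_b)
```

With a sharp input and a flat (defocused) one, the rule selects the sharp input everywhere, which is the intended behaviour of a focus-driven fusion rule.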
Survey on Single Image Super Resolution Techniques (IOSR Journals)
Super-resolution is the process of recovering a high-resolution image from multiple low-resolution images of the same scene. The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image from a set of images acquired from the same scene, denoted as 'low-resolution' images, to overcome the limitations and/or ill-posed conditions of the image acquisition process and to facilitate better content visualization and scene recognition. In this paper, we provide a comprehensive review of existing super-resolution techniques and highlight the future research challenges. This includes the formulation of an observation model and coverage of the dominant algorithm, iterative back projection. We critique these methods and identify areas that promise performance improvements. Future directions for super-resolution algorithms are discussed, and finally the results of available methods are given. Keywords: Super-resolution, POCS, IBP, Canny Edge Detection
There exists a plethora of algorithms for image segmentation, and there are several issues related to their execution time. Image segmentation can be viewed as a label relabeling problem under a probabilistic framework. To estimate the label configuration, an iterative optimization scheme is implemented that alternately carries out the maximum a posteriori (MAP) estimation and the maximum likelihood (ML) estimation. In this paper the technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with those of existing algorithms, while executing faster and giving automatic segmentation without any human intervention. Its results match image edges closely, in line with human perception.
Different Image Fusion Techniques – A Critical Review (IJMER)
This document reviews and compares different image fusion techniques, including spatial domain and transform domain methods. Spatial domain techniques like simple averaging and maximum selection are disadvantageous because they can produce spatial distortions and reduce contrast in the fused image. Transform domain methods like discrete wavelet transform (DWT) and principal component analysis (PCA) perform better by preserving more spatial and spectral information. DWT fusion in particular minimizes spectral distortion and improves the signal-to-noise ratio over pixel-based approaches, though it results in lower spatial resolution. Tables in the document provide quantitative comparisons of different techniques using performance measures like peak signal-to-noise ratio, entropy, and normalized cross-correlation.
The document describes an image fusion approach that uses adaptive fuzzy logic modeling for global processing followed by Markov random field modeling for local processing.
It begins by introducing image fusion and its applications. It then discusses existing fusion approaches and their limitations. The proposed approach first uses an adaptive fuzzy logic model to minimize redundant information globally. It then applies Markov random field modeling locally for fusion. Experimental results showed the proposed approach improved the universal image quality index by 30-35% compared to fusion with Markov random field modeling alone.
This document reviews techniques for multi-image morphing. It discusses early cross-dissolve morphing methods and their limitations. Mesh warping and field morphing are introduced as improved techniques that use control points and line mappings to better align images during transition. The document also summarizes point distribution, critical point filters, and other common morphing methods. It concludes by noting that effective morphing requires mechanisms for feature specification, warp generation, and transition control.
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ... (CSCJournals)
In this paper we present a comparative study on fusion of visual and thermal images using different wavelet transformations. Here, coefficients of discrete wavelet transforms from both visual and thermal images are computed separately and combined. Next, inverse discrete wavelet transformation is taken in order to obtain fused face image. Both Haar and Daubechies (db2) wavelet transforms have been used to compare recognition results. For experiments IRIS Thermal/Visual Face Database was used. Experimental results using Haar and Daubechies wavelets show that the performance of the approach presented here achieves maximum success rate of 100% in many cases.
This document discusses image fusion techniques at different levels of abstraction: pixel level, feature level, and decision level. It describes various fusion methods including numerical (e.g. multiplicative, Brovey), color related (e.g. IHS), statistical (e.g. PCA, Gram Schmidt), and feature level (e.g. Ehlers) techniques. Both qualitative (visual) and quantitative (statistical measures like RMSE, correlation coefficient, entropy) methods to assess fusion quality are outlined. Image fusion has applications in improving classification and displaying sharper resolution images.
Image Denoising Using Earth Mover's Distance and Local Histograms (CSCJournals)
In this paper an adaptive range and domain filtering method is presented. In the proposed method, local histograms are computed to tune the range and domain extents of the bilateral filter. A noise histogram is estimated to measure the noise level at each pixel of the noisy image, and the extents of the range and domain filters are determined based on that pixel noise level. Experimental results show that the proposed method effectively removes noise while preserving details. The proposed method performs better than the bilateral filter, and the restored test images have higher PSNR than those obtained with the popular BayesShrink wavelet denoising method.
This document discusses GPU-based implementations of bilateral filtering for images. Bilateral filtering smooths images while preserving edges by combining pixel values based on both geometric closeness and photometric similarity. It can be applied to color images in a way that is tuned to human color perception. A naïve bilateral filtering implementation iterates over all pixels, but it is well-suited for parallel GPU implementations due to its iterative and local nature. The document provides mathematical definitions of domain filtering, range filtering, and bilateral filtering, and notes that bilateral filtering combines the benefits of both by enforcing both geometric and photometric locality. It describes using Gaussian functions to implement the filters and discusses parameters for controlling the degree of blurring and edge preservation.
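The domain/range combination described above can be made concrete with a naive (non-GPU) sketch of the bilateral filter, assuming Gaussian kernels for both terms; the function name and parameter defaults are ours:

```python
import numpy as np

def bilateral_filter(img, sigma_d=2.0, sigma_r=25.0, radius=3):
    """Naive bilateral filter for a 2-D greyscale image.
    sigma_d controls geometric closeness (domain kernel);
    sigma_r controls photometric similarity (range kernel)."""
    img = img.astype(np.float64)
    h, w = img.shape
    # precomputed spatial (domain) kernel over the window offsets
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_d ** 2))
    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel: similarity of intensities to the centre pixel
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

Because the range kernel suppresses contributions from pixels with very different intensities, a sharp step edge survives the smoothing: pixels on each side of the edge are averaged almost exclusively with their own side. The independent per-pixel loop body is exactly what makes the filter amenable to the parallel GPU mapping the document describes.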
Comparative study on image fusion methods in spatial domain (IAEME Publication)
This document provides a comparative study of various image fusion methods in the spatial domain. It begins by introducing image fusion and its applications. Section 2 then describes several common fusion algorithms in the spatial domain, including average, select maximum/minimum, Brovey transform, intensity hue saturation (IHS), and principal component analysis (PCA). Section 3 defines image fusion quality measures like entropy, mean squared error, and normalized cross correlation. Section 4 provides a comparative analysis of the spatial domain fusion techniques based on parameters like simplicity, type of resources, and disadvantages. It finds that spatial domain methods provide high spatial resolution but have issues like image blurring and producing less informative outputs. The document concludes that while the best algorithm depends on the problem, spatial
Performance and analysis of improved unsharp masking algorithm for image (IAEME Publication)
This document presents a study on improving an unsharp masking algorithm for image enhancement. It proposes using an exploratory data analysis model that decomposes an image into a model component and a residual component. The proposed algorithm then individually processes these components to increase contrast and sharpness while reducing halo effects and out-of-range issues. It defines new log-ratio operations for a generalized linear system using concepts from vector spaces and Bregman divergence to provide a theoretical basis for the algorithm. Experimental results showed the proposed algorithm enhanced contrast and sharpness better than previous methods.
This document describes a method for pixel-level image fusion using principal component analysis (PCA). PCA is used to transform correlated image pixels into a set of uncorrelated principal components. The first principal component accounts for the most variance in the pixel values. To fuse images, the pixels of the input images are arranged into vectors and subtracted from their mean. PCA is applied to get the eigenvectors corresponding to the largest eigenvalues. The normalized eigenvectors are used to compute a fused image as a weighted sum of the input images. Performance is evaluated using metrics like standard deviation, entropy, cross-entropy, and fusion mutual information, with higher values of these metrics indicating better quality of the fused image.
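The PCA weighting step described above can be sketched for two input images as follows. This is a minimal two-image illustration (names ours); it assumes the fused image is the weighted sum with weights taken from the normalized principal eigenvector, as the summary states:

```python
import numpy as np

def pca_fusion(img1, img2):
    """Pixel-level fusion by PCA: treat each image as one variable,
    take the eigenvector of their 2x2 covariance with the largest
    eigenvalue, and use its normalised components as fusion weights."""
    v1 = img1.astype(np.float64).ravel()
    v2 = img2.astype(np.float64).ravel()
    data = np.vstack([v1 - v1.mean(), v2 - v2.mean()])
    cov = np.cov(data)                      # 2 x 2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    principal = eigvecs[:, -1]              # eigenvector of largest eigenvalue
    w = principal / principal.sum()         # normalise weights to sum to 1
    fused = w[0] * img1.astype(np.float64) + w[1] * img2.astype(np.float64)
    return fused, w
```

For two identical inputs the covariance is symmetric in both variables, so the weights come out as 0.5 each and the fused image equals the input, which is a useful sanity check.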
This document discusses wavelet-based image fusion techniques. Image fusion combines information from multiple images of the same scene to create a fused image that is more informative than any single input image. The wavelet transform decomposes images into different frequency bands, and image fusion algorithms merge the corresponding bands from input images. Common fusion rules include choosing the maximum, minimum, mean, or a value from one image at each band location. The inverse wavelet transform then reconstructs the fused image. Wavelet-based fusion can integrate high spatial and high spectral information from images like panchromatic and multispectral satellite data.
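The decompose/merge/reconstruct pipeline above can be sketched with a one-level 2-D Haar transform implemented directly in NumPy (so no wavelet library is needed). The fusion rules shown, mean for the approximation band and maximum-absolute for the detail bands, are two of the common rules the document lists; all function names are ours:

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) sub-bands."""
    a = img.astype(np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise difference
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (perfect reconstruction for even-sized images)."""
    h, w = ll.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2, :] = ll + lh; lo[1::2, :] = ll - lh
    hi[0::2, :] = hl + hh; hi[1::2, :] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2] = lo + hi
    out[:, 1::2] = lo - hi
    return out

def wavelet_fuse(img1, img2):
    """Fuse two images: mean rule for the approximation band,
    maximum-absolute rule for each detail band."""
    b1, b2 = haar2(img1), haar2(img2)
    fused = [(b1[0] + b2[0]) / 2.0]         # LL: mean
    for d1, d2 in zip(b1[1:], b2[1:]):      # LH, HL, HH: max-abs
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return ihaar2(*fused)
```

The max-abs rule keeps the strongest detail coefficient from either input at each location, which is why wavelet fusion can combine, say, the sharp edges of a panchromatic image with the spectral content of a multispectral one.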
This document provides an introduction to image segmentation. It discusses how image segmentation partitions an image into meaningful regions based on measurements like greyscale, color, texture, depth, or motion. Segmentation is often an initial step in image understanding and has applications in identifying objects, guiding robots, and video compression. The document describes thresholding and clustering as two common segmentation techniques and provides examples of segmentation based on greyscale, texture, motion, depth, and optical flow. It also discusses region-growing, edge-based, and active contour model approaches to segmentation.
Marker Controlled Segmentation Technique for Medical application (Rushin Shah)
Medical image segmentation is a very important field in medical science. In medical images, edge detection is important for recognizing human organs such as the brain, heart, or kidney, and it is an essential pre-processing step in medical image segmentation.
Medical images such as CT, MRI, or X-ray visualize various information about internal organs, which is very important for doctors' diagnoses as well as for medical teaching, learning, and research.
It is a tough job to locate the internal organs when images contain noise or the rough structure of human body organs.
Face Image Restoration based on sample patterns using MLP Neural Network (IOSR Journals)
This document presents a face image restoration method using MLP neural networks. Low resolution face images are generated from a high resolution image using an observation model. Patches are extracted from the high and low resolution images and used to train an MLP network. After training, the model can be used to restore low resolution images. The method is tested on images from the ORL database. Results show the proposed method has better performance than other methods in terms of statistical metrics and visual quality, especially when there are only geometric changes between images. When noise levels are varied, performance decreases.
Comparison of Segmentation Algorithms and Estimation of Optimal Segmentation ... (Pinaki Ranjan Sarkar)
Recent advancement in sensor technology allows very high spatial resolution along with multiple spectral bands. Many studies highlight that Object Based Image Analysis (OBIA) is more accurate than pixel-based classification for high-resolution (< 2 m) imagery. Image segmentation is a crucial step for OBIA, and it is a formidable task to estimate optimal parameters for segmentation as it does not have any unique solution. In this paper, we have studied different segmentation algorithms (both mono-scale and multi-scale) for different terrain categories and showed how the segmented output depends upon various parameters. Later, we have introduced a novel method to estimate optimal segmentation parameters. The main objectives of this study are to highlight the effectiveness of presently available segmentation techniques on very high-resolution satellite data and to automate the segmentation process. Pre-estimation of segmentation parameters is more practical and efficient in OBIA. Assessment of segmentation algorithms and estimation of segmentation parameters are examined based on very high-resolution multi-spectral WorldView-3 (0.3 m, PAN-sharpened) data.
The document discusses different data structures used to represent image data at various levels of abstraction, from low-level pixel matrices to higher-level relational models. It describes iconic and segmented image representations, as well as geometric, relational, and hierarchical structures like matrices, chains, graphs, run-length coding, pyramids, and quadtrees. These data structures organize image information to facilitate image analysis algorithms.
Modified adaptive bilateral filter for image contrast enhancement (eSAT Publishing House)
This paper proposes a two-phase method for removing noise and enhancing image resolution using an adaptive bilateral filter. In the first phase, total variation regularization is used to identify pixels likely contaminated by noise. In the second phase, the image is reconstructed by applying an adaptive technique only to candidate pixels identified for enhancement. Experimental results show the proposed method performs better than median-based filters or edge-preserving regularization methods at preserving texture, details and edges even at high noise levels.
Super-resolution (SR) is the process of obtaining a high-resolution (HR) image or a sequence of HR images from a set of low-resolution (LR) observations. Block matching algorithms are used for motion estimation to obtain motion vectors between the frames in super-resolution. The implementation and comparison of two different block matching algorithms, Exhaustive Search (ES) and Spiral Search (SS), are discussed. The advantages of each algorithm are given in terms of motion estimation computational complexity and Peak Signal to Noise Ratio (PSNR). The Spiral Search algorithm achieves PSNR close to that of Exhaustive Search at less computation time. The algorithms evaluated in this paper are widely used in video super-resolution and have also been used in implementing various video standards such as H.263, MPEG-4, and H.264.
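The Exhaustive Search baseline compared above can be sketched as follows, using the sum of absolute differences (SAD) as the matching criterion; the function name, block size, and search range are illustrative choices, not values from the paper:

```python
import numpy as np

def exhaustive_search(ref, cur, block=8, search=4):
    """Exhaustive-search block matching: for each block of the current
    frame, scan every displacement within +/-search pixels in the
    reference frame and keep the one with minimum SAD."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.float64)
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the frame
                    cand = ref[y:y + block, x:x + block].astype(np.float64)
                    sad = np.sum(np.abs(target - cand))  # sum of absolute differences
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors
```

Exhaustive Search visits all (2·search+1)² candidate displacements per block, which is what makes it the accuracy reference; Spiral Search reaches a near-identical minimum by visiting candidates outward from the predicted centre and terminating early.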
This is about image segmentation. We will be using fuzzy logic and wavelet transformation for segmenting. Fuzzy logic shall be used because of the inconsistencies that may occur during segmentation or
An evaluation of two popular segmentation algorithms: the mean shift-based segmentation algorithm and a graph-based segmentation scheme. We also consider a hybrid method that combines the two.
A Novel Approach for Edge Detection using Modified ACIES Filtering (idescitation)
The most significant degrading attribute of an image is noise, which may appear during image capture, communication, or processing, and may be either dependent on or independent of the image. In order to enhance the high-frequency components, this paper proposes a new class of filter, called the ACIES filter, which uses a spatial filter shape with a high positive component at the centre. While noise suppression alone can only be achieved by smoothing the image, sharpening with the ACIES filter highlights the fine details of an image and enhances its clarity at the boundaries. Experimental results show that the performance of the proposed ACIES filter is acceptable and that the quality of the resulting images is remarkably good even under strongly noise-corrupted conditions. If the edges in an image can be identified precisely, then all the objects in the image can be located and basic properties such as region, edge, and contour can be measured. The performance of the proposed approach is measured with image quality metrics such as PSNR, MSE, NAE and NK.
This document discusses a reflectance perception model based face recognition algorithm that is robust to illumination variations. It begins with an introduction to the challenges of face recognition across different lighting conditions. It then reviews related work on illumination compensation techniques. The document proposes a reflectance perception model that transforms face images into an illumination-insensitive representation by estimating an illumination gain factor. It also describes applying principal component analysis (PCA) to extract facial features from the preprocessed images in a lower dimensional space, removing unwanted vectors. Finally, it discusses fusing matching scores from multiple classifiers using a weighted sum to improve recognition accuracy across variations in lighting.
This document presents a reflectance perception model based face recognition approach that is robust to illumination variations. It proposes a preprocessing algorithm based on the reflectance perception model to generate illumination insensitive images. It then applies principal component analysis (PCA) for feature extraction to reduce the image dimension and remove unwanted vectors. Multiple classifiers are used to extract features from different Fourier domains and frequencies, and scores from these classifiers are combined using a weighted sum fusion method based on equal error rate weights. Experimental results on standard databases show the proposed approach delivers large performance improvements over other face recognition algorithms in handling illumination variations.
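The weighted-sum score fusion step can be illustrated with a small sketch. The paper derives its weights from the classifiers' equal error rates; the exact formula is not given in the summary, so this assumes a simple inverse-EER weighting (a lower EER, meaning a more accurate classifier, gets a larger weight), and the function name is ours:

```python
import numpy as np

def eer_weighted_fusion(scores, eers):
    """Combine matching scores from several classifiers with weights
    inversely proportional to each classifier's equal error rate
    (assumed scheme: weight_i = (1/eer_i) / sum_j (1/eer_j))."""
    scores = np.asarray(scores, dtype=np.float64)
    eers = np.asarray(eers, dtype=np.float64)
    inv = 1.0 / eers                 # more accurate -> larger raw weight
    weights = inv / inv.sum()        # normalise weights to sum to 1
    return float(np.dot(weights, scores))
```

For instance, with two classifiers at 5% and 10% EER, the first receives twice the weight of the second, so its matching score dominates the fused decision.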
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ...CSCJournals
In this paper we present a comparative study on fusion of visual and thermal images using different wavelet transformations. Here, coefficients of discrete wavelet transforms from both visual and thermal images are computed separately and combined. Next, inverse discrete wavelet transformation is taken in order to obtain fused face image. Both Haar and Daubechies (db2) wavelet transforms have been used to compare recognition results. For experiments IRIS Thermal/Visual Face Database was used. Experimental results using Haar and Daubechies wavelets show that the performance of the approach presented here achieves maximum success rate of 100% in many cases.
This document discusses image fusion techniques at different levels of abstraction: pixel level, feature level, and decision level. It describes various fusion methods including numerical (e.g. multiplicative, Brovey), color related (e.g. IHS), statistical (e.g. PCA, Gram Schmidt), and feature level (e.g. Ehlers) techniques. Both qualitative (visual) and quantitative (statistical measures like RMSE, correlation coefficient, entropy) methods to assess fusion quality are outlined. Image fusion has applications in improving classification and displaying sharper resolution images.
Image Denoising Using Earth Mover's Distance and Local HistogramsCSCJournals
In this paper an adaptive range and domain filtering is presented. In the proposed method local histograms are computed to tune the range and domain extensions of bilateral filter. Noise histogram is estimated to measure the noise level at each pixel in the noisy image. The extensions of range and domain filters are determined based on pixel noise level. Experimental results show that the proposed method effectively removes the noise while preserves the details. The proposed method performs better than bilateral filter and restored test images have higher PSNR than those obtained by applying popular Bayesshrink wavelet denoising method.
This document discusses GPU-based implementations of bilateral filtering for images. Bilateral filtering smooths images while preserving edges by combining pixel values based on both geometric closeness and photometric similarity. It can be applied to color images in a way that is tuned to human color perception. A naïve bilateral filtering implementation iterates over all pixels, but it is well-suited for parallel GPU implementations due to its iterative and local nature. The document provides mathematical definitions of domain filtering, range filtering, and bilateral filtering, and notes that bilateral filtering combines the benefits of both by enforcing both geometric and photometric locality. It describes using Gaussian functions to implement the filters and discusses parameters for controlling the degree of blurring and edge preservation.
Comparative study on image fusion methods in spatial domainIAEME Publication
This document provides a comparative study of various image fusion methods in the spatial domain. It begins by introducing image fusion and its applications. Section 2 then describes several common fusion algorithms in the spatial domain, including average, select maximum/minimum, Brovey transform, intensity hue saturation (IHS), and principal component analysis (PCA). Section 3 defines image fusion quality measures like entropy, mean squared error, and normalized cross correlation. Section 4 provides a comparative analysis of the spatial domain fusion techniques based on parameters like simplicity, type of resources, and disadvantages. It finds that spatial domain methods provide high spatial resolution but have issues like image blurring and producing less informative outputs. The document concludes that while the best algorithm depends on the problem, spatial
Performance and analysis of improved unsharp masking algorithm for imageIAEME Publication
This document presents a study on improving an unsharp masking algorithm for image enhancement. It proposes using an exploratory data analysis model that decomposes an image into a model component and a residual component. The proposed algorithm then individually processes these components to increase contrast and sharpness while reducing halo effects and out-of-range issues. It defines new log-ratio operations for a generalized linear system using concepts from vector spaces and Bregman divergence to provide a theoretical basis for the algorithm. Experimental results showed the proposed algorithm enhanced contrast and sharpness better than previous methods.
This document describes a method for pixel-level image fusion using principal component analysis (PCA). PCA is used to transform correlated image pixels into a set of uncorrelated principal components. The first principal component accounts for the most variance in the pixel values. To fuse images, the pixels of the input images are arranged into vectors and subtracted from their mean. PCA is applied to get the eigenvectors corresponding to the largest eigenvalues. The normalized eigenvectors are used to compute a fused image as a weighted sum of the input images. Performance is evaluated using metrics like standard deviation, entropy, cross-entropy, and fusion mutual information, with higher values of these metrics indicating better quality of the fused image.
This document discusses wavelet-based image fusion techniques. Image fusion combines information from multiple images of the same scene to create a fused image that is more informative than any single input image. The wavelet transform decomposes images into different frequency bands, and image fusion algorithms merge the corresponding bands from input images. Common fusion rules include choosing the maximum, minimum, mean, or a value from one image at each band location. The inverse wavelet transform then reconstructs the fused image. Wavelet-based fusion can integrate high spatial and high spectral information from images like panchromatic and multispectral satellite data.
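As a concrete sketch of the band-wise fusion rules mentioned above (our illustration, with a hand-rolled single-level Haar transform and even-sized images assumed; real systems would typically use a wavelet library): the approximation band is averaged and each detail coefficient is chosen by maximum magnitude.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar transform (image dimensions must be even)."""
    x = x.astype(float)
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row-wise lowpass
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row-wise highpass
    ll = (lo[0::2] + lo[1::2]) / 2.0       # then column-wise on each
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    lo = np.zeros((2 * h, w)); hi = np.zeros((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    x = np.zeros((2 * h, 2 * w))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def wavelet_fuse(a, b):
    """Fuse: average the approximation band, take max-magnitude details."""
    A, B = haar_dwt2(a), haar_dwt2(b)
    ll = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(A[1:], B[1:])]
    return haar_idwt2(ll, *details)
```

Multi-level schemes repeat the decomposition on the LL band before fusing.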
This document provides an introduction to image segmentation. It discusses how image segmentation partitions an image into meaningful regions based on measurements like greyscale, color, texture, depth, or motion. Segmentation is often an initial step in image understanding and has applications in identifying objects, guiding robots, and video compression. The document describes thresholding and clustering as two common segmentation techniques and provides examples of segmentation based on greyscale, texture, motion, depth, and optical flow. It also discusses region-growing, edge-based, and active contour model approaches to segmentation.
Marker Controlled Segmentation Technique for Medical application - Rushin Shah
Medical image segmentation is an important field in medical science. In medical images, edge detection is a key task for recognizing human organs such as the brain, heart, or kidneys, and it is an essential pre-processing step in medical image segmentation.
Medical images such as CT, MRI, or X-ray visualize information about internal organs that is very important for doctors' diagnoses as well as for medical teaching, learning, and research.
It is a tough job to locate the internal organs if the images contain noise or the organs have a rough structure.
Face Image Restoration based on sample patterns using MLP Neural Network - IOSR Journals
This document presents a face image restoration method using MLP neural networks. Low resolution face images are generated from a high resolution image using an observation model. Patches are extracted from the high and low resolution images and used to train an MLP network. After training, the model can be used to restore low resolution images. The method is tested on images from the ORL database. Results show the proposed method has better performance than other methods in terms of statistical metrics and visual quality, especially when there are only geometric changes between images. When noise levels are varied, performance decreases.
Comparison of Segmentation Algorithms and Estimation of Optimal Segmentation ... - Pinaki Ranjan Sarkar
Recent advancements in sensor technology allow very high spatial resolution along with multiple spectral bands. Many studies highlight that Object Based Image Analysis (OBIA) is more accurate than pixel-based classification for high-resolution (< 2 m) imagery. Image segmentation is a crucial step for OBIA, and it is a formidable task to estimate optimal parameters for segmentation, as it does not have any unique solution. In this paper, we study different segmentation algorithms (both mono-scale and multi-scale) for different terrain categories and show how the segmented output depends upon various parameters. We then introduce a novel method to estimate optimal segmentation parameters. The main objectives of this study are to highlight the effectiveness of presently available segmentation techniques on very high-resolution satellite data and to automate the segmentation process. Pre-estimation of segmentation parameters is more practical and efficient in OBIA. The assessment of segmentation algorithms and the estimation of segmentation parameters are examined on very high-resolution multi-spectral WorldView-3 (0.3 m, PAN-sharpened) data.
The document discusses different data structures used to represent image data at various levels of abstraction, from low-level pixel matrices to higher-level relational models. It describes iconic and segmented image representations, as well as geometric, relational, and hierarchical structures like matrices, chains, graphs, run-length coding, pyramids, and quadtrees. These data structures organize image information to facilitate image analysis algorithms.
Modified adaptive bilateral filter for image contrast enhancement - eSAT Publishing House
This paper proposes a two-phase method for removing noise and enhancing image resolution using an adaptive bilateral filter. In the first phase, total variation regularization is used to identify pixels likely contaminated by noise. In the second phase, the image is reconstructed by applying an adaptive technique only to candidate pixels identified for enhancement. Experimental results show the proposed method performs better than median-based filters or edge-preserving regularization methods at preserving texture, details and edges even at high noise levels.
Super-resolution (SR) is the process of obtaining a high-resolution (HR) image or a sequence of HR images from a set of low-resolution (LR) observations. Block matching algorithms are used for motion estimation, obtaining the motion vectors between frames in super-resolution. The implementation and comparison of two types of block matching algorithms, Exhaustive Search (ES) and Spiral Search (SS), are discussed. The advantages of each algorithm are given in terms of motion-estimation computational complexity and Peak Signal to Noise Ratio (PSNR). The Spiral Search algorithm achieves a PSNR close to that of Exhaustive Search at less computation time. The algorithms evaluated in this paper are widely used in video super-resolution and have also been used in implementing video standards such as H.263, MPEG-4, and H.264.
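The Exhaustive Search baseline compared above can be sketched as follows (our illustration, not the paper's code): every candidate offset within the search range is scored with the sum of absolute differences (SAD), and the offset with minimum SAD becomes the motion vector. Spiral Search visits the same candidates in spiral order from the centre so it can terminate early, which is where its speed advantage comes from.

```python
import numpy as np

def exhaustive_search(ref_block, frame, top, left, search_range):
    """Exhaustive-search block matching: scan every candidate offset within
    +/- search_range of (top, left) and return the motion vector (dy, dx)
    with minimum sum of absolute differences (SAD)."""
    bh, bw = ref_block.shape
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > frame.shape[0] or x + bw > frame.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(frame[y:y+bh, x:x+bw].astype(float) - ref_block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best
```

For a B×B block and range ±p this evaluates (2p+1)² candidates, which is the computational cost Spiral Search reduces in practice.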
This is about image segmentation. We will be using fuzzy logic and wavelet transforms for segmenting images. Fuzzy logic is used because of the inconsistencies that may occur during segmentation or
An evaluation of two popular segmentation algorithms, the mean shift-based segmentation algorithm and a graph-based segmentation scheme. We also consider a hybrid method which combines the other two methods.
A Novel Approach for Edge Detection using Modified ACIES Filtering - idescitation
The most degrading attribute of an image is noise, which may be introduced during image capture, transmission, or processing, and may be dependent on or independent of the image content. In order to enhance the high-frequency components, this paper proposes a new class of filter called the ACIES filter, a spatial filter whose shape has a high positive component at the centre. Suppression of noise can only be achieved by smoothing the image; sharpening with the ACIES filter highlights the fine details of an image and enhances its clarity at the boundaries. Experimental results show that the performance of the proposed ACIES filter is acceptable and that the quality of the resulting images remains remarkably good even under strongly noise-corrupted conditions. If the edges in an image can be recognized precisely, then all the objects in the image can be located and basic properties such as region, edge, and contour can be measured. The performance of the proposed approach is measured with image quality metrics such as PSNR, MSE, NAE and NK.
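The quality metrics listed above have standard definitions that are easy to state in code. The sketch below uses the usual formulas (PSNR over MSE, normalized absolute error, normalized cross-correlation); the paper's exact variants may differ slightly, so treat these as the common textbook forms:

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between reference and test image."""
    return float(np.mean((ref.astype(float) - img.astype(float)) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(ref, img)
    return float('inf') if m == 0 else 10 * np.log10(peak ** 2 / m)

def nae(ref, img):
    """Normalized absolute error: sum |ref - img| / sum |ref|."""
    ref = ref.astype(float)
    return float(np.abs(ref - img).sum() / np.abs(ref).sum())

def nk(ref, img):
    """Normalized cross-correlation: sum(ref*img) / sum(ref^2)."""
    ref = ref.astype(float); img = img.astype(float)
    return float((ref * img).sum() / (ref ** 2).sum())
```

For identical images these return MSE = 0, PSNR = ∞, NAE = 0, and NK = 1, which is the "perfect quality" corner the paper's comparisons are measured against.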
This document discusses a reflectance perception model based face recognition algorithm that is robust to illumination variations. It begins with an introduction to the challenges of face recognition across different lighting conditions. It then reviews related work on illumination compensation techniques. The document proposes a reflectance perception model that transforms face images into an illumination-insensitive representation by estimating an illumination gain factor. It also describes applying principal component analysis (PCA) to extract facial features from the preprocessed images in a lower dimensional space, removing unwanted vectors. Finally, it discusses fusing matching scores from multiple classifiers using a weighted sum to improve recognition accuracy across variations in lighting.
This document presents a reflectance perception model based face recognition approach that is robust to illumination variations. It proposes a preprocessing algorithm based on the reflectance perception model to generate illumination insensitive images. It then applies principal component analysis (PCA) for feature extraction to reduce the image dimension and remove unwanted vectors. Multiple classifiers are used to extract features from different Fourier domains and frequencies, and scores from these classifiers are combined using a weighted sum fusion method based on equal error rate weights. Experimental results on standard databases show the proposed approach delivers large performance improvements over other face recognition algorithms in handling illumination variations.
Survey on Single image Super Resolution Techniques - IOSR Journals
Abstract: Super-resolution is the process of recovering a high-resolution image from multiple low-resolution images of the same scene. The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image based on a set of images acquired from the same scene, denoted as 'low-resolution' images, to overcome the limitations and/or ill-posed conditions of the image acquisition process and facilitate better content visualization and scene recognition. In this paper, we provide a comprehensive review of existing super-resolution techniques and highlight future research challenges. This includes the formulation of an observation model and coverage of the dominant algorithm, Iterative Back Projection. We critique these methods and identify areas that promise performance improvements, discuss future directions for super-resolution algorithms, and finally give results of the available methods. Keywords: Super-resolution, POCS, IBP, Canny Edge Detection
Most face recognition algorithms can generally achieve a high level of accuracy when the image is acquired under well-controlled conditions. The face should be still during the acquisition process; otherwise, the resulting image will be blurred and hard to recognize. Forcing people to stand still during the process is impractical, so it is highly likely that recognition must be performed on a blurred image. It is therefore important to understand the relation between image blur and recognition accuracy. The ORL Database was used in the study. All images were in PGM format of 92 × 112 pixels from forty different persons, ten images per person. Those images were randomly divided into training and testing datasets with a 50-50 ratio. Singular value decomposition was used to extract the features. The images in the testing datasets were artificially blurred to represent linear motion, and recognition was performed. The blurred images were also filtered using various methods. The accuracy levels of recognition on the basis of the blurred faces and the filtered faces were compared. The numerical study suggests that, at best, the image improvement processes can improve the recognition accuracy level by less than five percent.
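One common way to build SVD features for recognition, broadly consistent with the setup above, is to use the leading singular values of the face matrix as a compact descriptor and match by nearest neighbour. The sketch below is our illustration of that idea; the study's exact feature definition may differ:

```python
import numpy as np

def svd_features(image, k=8):
    """Use the k largest singular values of the image matrix as a
    compact feature vector (a common SVD-based face descriptor)."""
    s = np.linalg.svd(image.astype(float), compute_uv=False)
    return s[:k]

def match(probe, gallery, k=8):
    """Nearest-neighbour identification by Euclidean distance
    between singular-value feature vectors."""
    f = svd_features(probe, k)
    dists = [np.linalg.norm(f - svd_features(g, k)) for g in gallery]
    return int(np.argmin(dists))
```

Because singular values are fairly stable under small perturbations, such features degrade gracefully as blur increases, which is consistent with the modest accuracy gains the filtering step was able to recover.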
This is a paper I wrote as part of my seminar "Inverse Problems in Computer Vision" while pursuing my M.Sc Medical Engineering at FAU, Erlangen, Germany.
The paper details a state-of-the-art method used for Single Image Super Resolution using Deep Convolutional Networks and the possible extensions to the original approach by considering compression and noise artifacts.
An Efficient Approach for Image Enhancement Based on Image Fusion with Retine... - ijsrd.com
Aiming at the problems of poor contrast and blurred edges in degraded images, a novel enhancement algorithm is proposed in the present research. Image fusion refers to a technique that combines the information from two or more images of a scene into a single fused image. The algorithm uses Retinex theory and gamma correction to perform better enhancement of images, efficiently combining the advantages of Retinex and gamma correction to improve both the color constancy and the intensity of the image.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
SINGLE IMAGE SUPER RESOLUTION: A COMPARATIVE STUDY - csandit
The majority of applications require high-resolution images to derive and analyze data accurately and easily, and image super-resolution plays an effective role in those applications. Image super-resolution is the process of producing a high-resolution image from a low-resolution image. In this paper, we study various image super-resolution techniques with respect to the quality of results and processing time. This comparative study compares four single-image super-resolution algorithms. For a fair comparison, the algorithms are tested on the same dataset and the same platform to show the major advantages of one over the others.
Atmospheric Correction of Remotely Sensed Images in Spatial and Transform Domain - CSCJournals
Remotely sensed data is an effective source of information for monitoring changes in land use and land cover. However remotely sensed images are often degraded due to atmospheric effects or physical limitations. Atmospheric correction minimizes or removes the atmospheric influences that are added to the pure signal of target and to extract more accurate information. The atmospheric correction is often considered critical pre-processing step to achieve full spectral information from every pixel especially with hyperspectral and multispectral data. In this paper, multispectral atmospheric correction approaches that require no ancillary data are presented in spatial domain and transform domain. We propose atmospheric correction using linear regression model based on the wavelet transform and Fourier transform. They are tested on Landsat image consisting of 7 multispectral bands and their performance is evaluated using visual and statistical measures. The application of the atmospheric correction methods for vegetation analyses using Normalized Difference Vegetation Index is also presented in this paper.
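The Normalized Difference Vegetation Index mentioned above has a standard closed form, NDVI = (NIR − RED)/(NIR + RED), computed per pixel from the near-infrared and red bands. A minimal sketch (our illustration; the small epsilon guarding against division by zero is our addition):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index, per pixel:
    (NIR - RED) / (NIR + RED). Values near +1 indicate dense
    vegetation; values near 0 or below indicate bare soil or water."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)
```

Since NDVI is a ratio of band differences to band sums, multiplicative atmospheric effects partially cancel, which is why it is a useful check on the correction methods compared in the paper.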
Remotely sensed image segmentation using multiphase level set acm - Kriti Bajpai
Satellite imagery is used in various research domains. These images often contain major quality issues; however, they can be improved by image enhancement algorithms in terms of contrast, brightness, reduction of noise content, etc. These algorithms are employed to focus, sharpen, or smooth an image in order to exhibit and examine its attributes. Hence, the objective of image enhancement depends on the precise application. The objective of this paper is to provide brief information about the image enhancement techniques that fetch progressive and optimum results for remote-sensing satellite imagery.
Image restoration model with wavelet based fusion - Alexander Decker
1. The document discusses various techniques for image restoration, which aims to recover a sharp original image from a degraded one using mathematical models of degradation and restoration.
2. It analyzes techniques like deconvolution using Lucy Richardson algorithm, Wiener filter, regularized filter, and blind image deconvolution on different image formats based on metrics like PSNR, MSE, and RMSE.
3. Previous studies have applied techniques like Wiener filtering, wavelet-based fusion, and iterative blind deconvolution for motion blur restoration and compared their performance.
Optimal Coefficient Selection For Medical Image Fusion - IJERA Editor
Medical image fusion is one of the major research fields in image processing. Medical imaging has become a vital component in major clinical applications such as detection, diagnosis, and treatment. Joint analysis of medical data collected from the same patient using different modalities is required in many clinical applications. This paper introduces an optimal fusion technique for multiscale-decomposition-based fusion of medical images and measures its performance against existing fusion techniques. The approach incorporates a genetic algorithm for optimal coefficient selection and employs various multiscale filters for noise removal. Experiments demonstrate that the proposed fusion technique generates better results than existing rules, and the performance of the proposed system is found to be superior to the existing schemes used in this literature.
Matching algorithm performance analysis for autocalibration method of stereo ... - TELKOMNIKA JOURNAL
Stereo vision is one of the interesting research topics in the computer vision field. Two cameras are used to generate a disparity map, resulting in the depth estimation. Camera calibration is the most important step in stereo vision. The calibration step is used to generate an intrinsic parameter of each camera to get a better disparity map. In general, the calibration process is done manually by using a chessboard pattern, but this process is an exhausting task. Self-calibration is an important ability required to overcome this problem. Self-calibration required a robust and good matching algorithm to find the key feature between images as reference. The purpose of this paper is to analyze the performance of three matching algorithms for the autocalibration process. The matching algorithms used in this research are SIFT, SURF, and ORB. The result shows that SIFT performs better than other methods.
A NOVEL APPROACH FOR SEGMENTATION OF SECTOR SCAN SONAR IMAGES USING ADAPTIVE ... - ijistjournal
SAR and SAS images are perturbed by a multiplicative noise called speckle, due to the coherent nature of the scattering phenomenon. If the background of an image is uneven, a fixed thresholding technique is not suitable, and the image must be segmented using an adaptive thresholding method. In this paper a new adaptive thresholding method is proposed to reduce speckle noise while preserving the structural features and textural information of Sector Scan SONAR (Sound Navigation and Ranging) images. Due to the massive proliferation of SONAR images, the proposed method is very appealing for underwater environment applications; in fact, it is a pre-treatment required in any SONAR image analysis system. The results obtained from the proposed method were compared quantitatively and qualitatively with the results obtained from other speckle reduction techniques and demonstrate its higher performance for speckle reduction in SONAR images.
Boosting CED using robust orientation estimation - ijma
In this paper, Coherence Enhancement Diffusion (CED) is boosted by feeding it an external orientation obtained with a new robust orientation estimation. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose a new scheme is proposed in which the orientation is pre-calculated using local and integration scales. The experiments show that the proposed scheme works much better in noisy environments than traditional Coherence Enhancement Diffusion.
Image fusion of brain images using discrete wavelet transform - IAETSD
1) The document discusses using discrete wavelet transform to fuse MRI and CT brain images. This allows physicians to view soft tissue details from MRI and bone details from CT in a single fused image.
2) Discrete wavelet transform decomposes images into different frequency bands, allowing salient features like edges to be separated. It is proposed to fuse MRI and CT brain images using discrete wavelet transform to reduce noise and computational load compared to other methods.
3) Fusing the images provides advantages for physicians by having both soft tissue and bone details in a single image, reducing storage costs compared to viewing images separately.
A Novel Framework For Preprocessing Of Breast Ultra Sound Images By Combining... - IRJET Journal
The document presents a novel framework for preprocessing breast ultrasound images that combines non-local means filtering and morphological operations. Non-local means filtering is used to reduce speckle noise, which is a significant issue for ultrasound images. Then morphological techniques are applied to enhance the noise-reduced images. The framework achieves peak signal-to-noise ratios of 60-80 decibels when tested on real breast ultrasound images. It provides an effective method for preprocessing ultrasound images to reduce noise and improve image quality.
Similar to Filtering Based Illumination Normalization Techniques for Face Recognition (20)
Dynamic RWX ACM Model Optimizing the Risk on Real Time Unix File System - Radita Apriana
Preventive control is one of the most advanced recent security controls for protecting data and services from uncertainty, since the growing importance of business and communication technologies, together with growing external risk, is a very common phenomenon nowadays. System security risks push management to focus on the IT infrastructure (OS). Top management has to decide whether to accept expected losses or to invest in technical security mechanisms in order to minimize the frequency of attacks, thefts, and uncertainty. This work contributes to the development of an optimization model that aims to determine the optimal cost to be invested in security mechanisms, deciding on the measured component of the UFS attribute. The model is designed so that Read, Write & Execute are automatically Protected, Detected, and Corrected on an RTOS. System attacks and downtime are optimized by implementing the RWX ACM mechanism based on a semi-group structure, meanwhile improving the throughput of Business, Resources & Technology.
Fault Node Recovery Algorithm for a Wireless Sensor Network - Radita Apriana
This paper proposes a fault node recovery algorithm to enhance the lifetime of a wireless sensor network when some of the sensor nodes shut down. The algorithm is based on the grade diffusion algorithm combined with the genetic algorithm. The algorithm results in fewer replacements of sensor nodes and more reused routing paths. In our simulation, the proposed algorithm increases the number of active nodes up to 8.7 times, reduces the rate of data loss by approximately 98.8%, and reduces the rate of energy consumption by approximately 31.1%.
ESL Reading Research Based on Eye Tracking Techniques - Radita Apriana
This document discusses eye tracking research on ESL reading. It begins with an introduction to eye tracking techniques and how they can provide insights into the cognitive processes involved in reading. It then describes key eye movement patterns observed in reading, including fixations, saccades, regressive eye movements, and return sweeps. Finally, it discusses the perceptual span in reading and the different types of information that can be obtained within the perceptual span, such as word identification and visual features.
A New Approach to Linear Estimation Problem in Multiuser Massive MIMO Systems - Radita Apriana
A novel approach for solving the linear estimation problem in multi-user massive MIMO systems is proposed. In this approach, the difficulty of matrix inversion is attributed to the incomplete definition of the dot product. The general definition of the dot product implies that the columns of the channel matrix are always orthogonal whereas, in practice, they may not be. If the latter information can be incorporated into the dot product, then the unknowns can be computed directly from projections without inverting the channel matrix. By doing so, the proposed method is able to achieve an exact solution with a 25% reduction in computational complexity as compared to the QR method. The proposed method is stable, offers the extra flexibility of computing any single unknown, and can be implemented in just twelve lines of code.
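The projection idea described above is easy to see in the special case the abstract starts from: when the columns a_i of A are mutually orthogonal, each unknown of A x = b follows directly as x_i = (a_i · b) / (a_i · a_i), with no inversion. The sketch below shows only this orthogonal-columns case, not the paper's generalized dot product:

```python
import numpy as np

def solve_by_projection(A, b):
    """Solve A x = b without matrix inversion, assuming the columns of A
    are mutually orthogonal: x_i = (a_i . b) / (a_i . a_i)."""
    return np.array([(A[:, i] @ b) / (A[:, i] @ A[:, i])
                     for i in range(A.shape[1])])
```

Note also that computing a single x_i needs only the i-th column, which matches the "flexibility of computing any single unknown" claimed for the method.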
Internet Access Using Ethernet over PDH Technology for Remote Area - Radita Apriana
There is still a gap in access to information between people living in cities and in remote areas, especially those living in the eastern part of Indonesia. People in such remote areas are usually isolated from towns by natural conditions such as rivers, valleys, and hills. Therefore, building telecommunication infrastructure for remote areas using copper was not an effective and efficient approach, and the issue was how information and communication technology could penetrate such areas. This research aimed to propose technology that could be implemented to overcome these difficulties. Ethernet over Plesiochronous Digital Hierarchy (EoPDH) is one of many techniques that provide Ethernet connectivity over non-Ethernet networks: a standardized methodology for transporting native Ethernet frames over the existing established PDH transport technology. To provide the last mile for the local people, a mesh Wireless Local Area Network was deployed and connected to an internet gateway via an Ethernet-over-PDH based microwave radio link. The test showed that the Ethernet frames were successfully transported to the remote area with good quality of service in terms of throughput, response time, and transaction rate.
- A new ozone concentration regulator was designed based on the theory of continuity equation for gas flow in parallel pipes.
- The regulator allows an ozone generator to produce variable ozone concentrations at different flow rates, expanding its capabilities.
- Experimental results showed that at an initial oxygen flow rate of 33.33 cm3/s, the regulator could vary ozone concentration between 429.30 ppm to 3826.60 ppm, and at 25 cm3/s between 387.30 ppm to 4435.20 ppm.
- When setting the concentration to around 1000 ppm, the concentration decreased with increasing flow rate as expected based on gas flow theory.
- The regulator was effective in
Identifying of the Cielab Space Color for the Balinese Papyrus Characters - Radita Apriana
Balinese papyrus is one of the media used by minstrels in ancient times to write down their ideas. Currently, many ancient literatures written on papyrus are very difficult to identify because the writings have begun to rot or fade with age. This study takes the Balinese papyrus characters as its object. Improving the image quality in the image processing of the characters here refers to improving the quality of the image before its further processing, and there is a process of identifying the color space to spot the shape of the papyrus characters. The differences between the colors are so small that the CIELab space is needed to separate the characters from their background and noise.
Analysis and Estimation of Harmonics Using Wavelet Technique - Radita Apriana
The paper develops an approach based on the wavelet technique for the evaluation and estimation of the harmonic content of power system waveforms. The proposed algorithm decomposes the signal waveforms into uniform frequency sub-bands corresponding to the odd harmonic components of the signal. The proposed implementation determines the frequency bands of the harmonics, retaining both the time and frequency relationships of the original waveforms, and uses a method to suppress those harmonics. The wavelet algorithm is selected to obtain output bands compatible with the harmonic groups defined in the standards for power-supply systems. A comparative analysis is performed between the input and the results obtained from the wavelet transform (WT) for different measuring conditions, and simulation results are given.
A Novel Method for Sensing Obscene Videos using Scene Change Detection - Radita Apriana
Video scene change detection is of great importance for managing and analyzing large amounts of video. Traditionally this technique is used for indexing, segmenting, and categorizing different types of videos, but very few works have addressed classifying obscene content using scene change detection. In this research we propose a simple approach for sensing objectionable videos by observing scene changes across different video genres. Video scenes are grouped into sets of key frames. After analyzing the duration of each scene and counting the number of key frames of the designated scene, it is shown that obscene videos have an infrequent scene-changing nature, while sports, dramas, music, and action films have a large number of scene changes. We used six types of video genres, and the decision was made by setting a threshold based on the extracted key frames. Experimental results show that the accuracy is 83.33% and the false positive rate is 16.67%.
Robust SINS/GNSS Integration Method for High Dynamic Applications - Radita Apriana
As high dynamic movement is always accompanied by colored noise that lacks a mathematical model, traditional Kalman filtering, based on the assumption of white Gaussian noise, often faces serious divergence. To enhance performance in high dynamic environments with uncertain colored noise, a robust filter based on H-infinity technology is developed. The state model of the algorithm is derived from SINS error propagation, and both position and velocity errors are used as the measurements. A simulation system that includes a tri-axial turntable and a GNSS signal simulator is used to verify the integration design under a high dynamic environment. Simulation results prove that both the accuracy and the robustness of the integration design have been improved significantly.
Study on Adaptive PID Control Algorithm Based on RBF Neural Network
The traditional PID controller has certain limitations and often cannot achieve satisfactory control
performance, while an RBF neural network alone has difficulty meeting the requirements of a real-time
control system. To overcome this, an adaptive PID control strategy based on a radial basis function (RBF)
neural network is proposed in this paper. The results show that the proposed controller is practical and
effective, owing to its adaptability, strong robustness and satisfactory control performance. Simulation
results also reveal that the proposed control algorithm is valid for DC motor control and provides a
theoretical and experimental basis.
Solving Method of H-Infinity Model Matching Based on the Theory of the Model ...
High-order H-infinity model matching problems are traditionally solved using H-infinity control
theory, which is very difficult. In this paper, model reduction theory is used to solve the high-order
H-infinity model matching problem: a new method for solving H-infinity model matching based on the
theory of model reduction is proposed. The simulation results show that the method has better
applicability and achieves the expected performance.
A Review to AC Modeling and Transfer Function of DCDC Converters
In this paper, AC modeling and small-signal transfer functions for DC-DC converters are
presented, and the fundamentals governing the formulas are reviewed. In DC-DC converters, the
output voltage must be kept constant regardless of changes in the input voltage or in the effective load
resistance. The transfer function is the knowledge necessary to design a proper feedback control, such as
PID control, to regulate the output voltage, as linear PID and PI controllers are usually designed for DC-DC
converters using standard frequency-response techniques based on the small-signal model of the
converter.
DTC Method for Vector Control of 3-Phase Induction Motor under Open-Phase Fault
Three-phase Induction Motor (IM) drives are widely used in industrial equipment. One of the
essential problems of 3-phase IM drives is their high speed and torque pulsations under fault conditions.
This paper presents a Direct Torque Control (DTC) strategy for vector control of a 3-phase IM under an
open-circuit fault. The objective is to implement a solution for vector control of 3-phase IM drives which
can also be used under an open-phase fault. MATLAB simulations were carried out and a performance
analysis is presented.
Impact Analysis of Midpoint Connected STATCOM on Distance Relay Performance
This paper presents the impact of the Static Synchronous Compensator (STATCOM) on the
performance of distance protection of EHV transmission lines. A 400kV transmission system having
midpoint connected STATCOM with its control circuit is modeled using MATLAB/SIMULINK software. The
impact of STATCOM on distance relay for different fault conditions and different fault locations is analyzed.
Simulation results indicate that the presence of the STATCOM in the transmission system significantly
changes the line impedance seen by the distance relay, making it lower or higher than the actual line
impedance. As a result, the performance of the distance relay changes: it either overreaches or
underreaches.
Optimal Placement and Sizing of Distributed Generation Units Using Co-Evoluti...
Today, with the increasing penetration of distributed generation sources in power systems, their
optimal placement is important. Determining the number, location, size and type of distributed generation
(DG) units in a power system reduces losses and improves the reliability of the system. In this paper, a
co-evolutionary particle swarm optimization (CPSO) algorithm is used to determine the optimal values of
these parameters. The results, obtained through simulations in MATLAB, are presented in the form of
figures and tables, which show how system losses and reliability change with parameters such as the
location, size, number and type of DG. Finally, the results of this method are compared with those of the
Genetic Algorithm (GA) method, to determine the performance of each of these methods.
Similarity and Variance of Color Difference Based Demosaicing
The aim of the project is to find the missing color samples at each pixel location by combining a
similarity algorithm with the variance of colour difference algorithm. Many demosaicing algorithms find
edges in the horizontal and vertical directions only, which is not suitable for other directions. Hence, using
the similarity algorithm, edges are found in different directions. However, in the similarity algorithm the
horizontal and vertical directions are sometimes misled; this problem can be rectified using the variance of
colour difference algorithm. It is proved experimentally that this new demosaicing algorithm based on
similarity and the variance of colour difference has a better colour peak signal-to-noise ratio (CPSNR), and
better objective and subjective performance. It is an analysis study of both the similarity and colour
variance algorithms.
Energy Efficient RF Remote Control for Dimming the Household Appliances
In recent years there has been a strong trend towards radio frequency (RF) remote controls, as
they deliver more comfort to users and increased usability with highly robust RF links. Lower power
consumption, new features and security make RF remote control systems competitive with the widely
used IR remote control systems. In this paper we propose an RF-module-based real-time system, an
integrated system designed to control the dimming or speed of appliances. A ZigBee module is used as
the RF module to establish a wireless link between the remote and appliance sections, with a 9600 bps
baud rate and a range of 100 m. A dimming control circuit, part of the appliance section, controls the
appliances according to the signals generated by the remote section. This system provides an
energy-efficient solution for household use.
A Novel and Advanced Data Mining Model Based Hybrid Intrusion Detection Frame...
The document proposes a hybrid intrusion detection framework that uses two classifiers: Tree Augmented Naive Bayes (TAN) as the base classifier and Reduced Error Pruning (REP) as the meta classifier. The TAN classifier performs initial classification on the KDD Cup 99 dataset and the results are then used as input for the REP meta classifier, which reclassifies the instances to improve overall classification performance. The framework is evaluated using a testing dataset, with the results analyzed to assess the performance of the hybrid approach.
For some kinds of products, such as cars, aircraft and government acquisitions, consumers have
strict requirements on product reliability. The manufacturer is then inclined to provide a two-dimensional
preventive maintenance policy that takes the usage degree of the product into account. As a result, the
two-dimensional preventive maintenance policy in the warranty period has recently received increasing
attention from manufacturers and consumers. In this paper, we focus on the optimization of the base
warranty cost and propose a new expected-cost model of the two-dimensional imperfect preventive
maintenance policy from the perspective of the manufacturer. The optimal two-dimensional preventive
maintenance policy is obtained by minimizing the base warranty cost, and an asymmetric copula function
is applied to model the failure function of the product. Finally, numerical examples are given to illustrate
the proposed models, and the results prove the model effective and valid.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEM
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
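The mux/demux cycle described above can be sketched in a few lines. This is a minimal illustrative model of synchronous TDM, not a real transmission system: the function names and the use of Python lists as "streams" are assumptions for the example.

```python
# Minimal synchronous TDM sketch: the mux interleaves one sample from each
# input stream per frame (one time slot per stream); the demux recovers the
# original streams from the composite signal by slot position.

def tdm_mux(streams):
    """Interleave equal-length streams into a frame-ordered composite signal."""
    assert len({len(s) for s in streams}) == 1, "streams must be equal length"
    frames = []
    for samples in zip(*streams):  # one frame = one slot per stream
        frames.extend(samples)
    return frames

def tdm_demux(frames, n_streams):
    """Split the composite signal back into individual streams by slot index."""
    return [frames[i::n_streams] for i in range(n_streams)]

a = [1, 2, 3]
b = [10, 20, 30]
c = [100, 200, 300]
composite = tdm_mux([a, b, c])
# composite == [1, 10, 100, 2, 20, 200, 3, 30, 300]
assert tdm_demux(composite, 3) == [a, b, c]
```

In a real system the frame would also carry synchronization bits so the receiver can align slot boundaries; here alignment is implicit in the list indices.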
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM allows multiple signals to share a single communication channel, making efficient use of the available bandwidth.
Batteries: Introduction – types of batteries – discharging and charging of a battery – characteristics of a battery – battery rating – various tests on a battery – primary battery: silver button cell – secondary battery: Ni-Cd battery – modern battery: lithium-ion battery – maintenance of batteries – choice of batteries for electric vehicle applications.
Fuel Cells: Introduction – importance and classification of fuel cells – description, principle, components and applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Literature Review Basics and Understanding Reference Management.pptx
A three-day training on academic research, focusing on analytical tools, at United Technical College, supported by the University Grants Commission, Nepal, 24-26 May 2024.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
International Conference on NLP, Artificial Intelligence, Machine Learning an...
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Introduction – e-waste: definition – sources of e-waste – hazardous substances in e-waste – effects of e-waste on environment and human health – need for e-waste management – e-waste handling rules – waste minimization techniques for managing e-waste – recycling of e-waste – disposal and treatment methods of e-waste – mechanism of extraction of precious metals from leaching solution – global scenario of e-waste – e-waste in India – case studies.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Embedded machine learning-based road conditions and driving behavior monitoring
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
TELKOMNIKA ISSN: 2302-4046
Filtering Based Illumination Normalization Techniques for Face… (Sasan Karamizadeh)
F(x, y) = K e^(-(x^2 + y^2)/c^2), where K is selected so that ∬ F(x, y) dx dy = 1. (2)
Figure 1. Original images in the upper rows; images processed by SSR (the reflectance
functions) in the lower rows
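As a rough illustration (not the authors' implementation), the SSR step of Equation (2) can be sketched in Python: the reflectance is the difference between the log image and the log of its Gaussian-surround blur. The helper names, the separable-convolution blur, and the scale value are illustrative assumptions.

```python
import numpy as np

def _gaussian_kernel(sigma):
    """1-D Gaussian, normalized so it sums to 1 (the role of K in Eq. (2))."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def _surround(img, sigma):
    """Apply the 2-D Gaussian surround F(x, y) as two separable 1-D passes.
    The kernel must be shorter than each image dimension."""
    k = _gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, out)

def ssr(img, sigma=15.0, eps=1e-6):
    """Single Scale Retinex: R(x, y) = log I(x, y) - log (F * I)(x, y)."""
    img = img.astype(float) + eps
    return np.log(img) - np.log(_surround(img, sigma) + eps)
```

On a uniformly lit flat region the blurred image equals the image, so the SSR output is near zero away from the borders, which is what makes the result largely independent of the illumination level.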
2.2. Multi Scale Retinex (MSR) Algorithm
The Multi Scale Retinex (MSR) algorithm is extended from the SSR algorithm, as suggested
in [6]. Retinex theory assumes that the perception of color depends strictly on the neural
structure of the human visual system. References [7, 8] introduced the retinex model for
lightness computation. In order to enhance images captured under complex lighting conditions,
retinex theory, which is derived from the human visual system, has been utilized to improve
image contrast. References [7, 9] first introduced retinex theory, and Land's theory was later
used to create the SSR [8] and the MSR [6]. In general, the MSR is successful in improving local
contrast and in compressing a wide dynamic range. In recent times, researchers have introduced
methods to enhance the MSR based on color correction [10], natural impressions [11], and the
halo effect [12].
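Since the MSR is a weighted sum of SSR outputs at several scales, it can be sketched directly on top of the SSR formulation; this is a minimal sketch under that assumption, with illustrative scale and weight values rather than the settings used in any cited work.

```python
import numpy as np

def _blur(img, sigma):
    """Separable Gaussian surround with a kernel normalized to sum to 1.
    The kernel must be shorter than each image dimension."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, out)

def msr(img, sigmas=(15.0, 80.0, 250.0), weights=None, eps=1e-6):
    """Multi Scale Retinex: a weighted sum of SSR outputs at several scales."""
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    img = img.astype(float) + eps
    log_i = np.log(img)
    return sum(w * (log_i - np.log(_blur(img, s) + eps))
               for w, s in zip(weights, sigmas))
```

Small scales favor local contrast and large scales favor tonal rendition; combining them is what lets the MSR improve local contrast while compressing a wide dynamic range.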
Figure 2. Original images in the upper rows; images processed by MSR (the reflectance
functions) in the lower rows
2.3. Homomorphic Filtering-based Normalization (HOMO) Algorithm
Homomorphic filtering (HOMO) is a popular normalization technique whereby the input
image is first transformed into the logarithm domain and then into the frequency domain [20].
The high-frequency components are then emphasized and the low-frequency components are
reduced. Finally, the image is transformed back into the spatial domain using the inverse
Fourier transform, and the exponential is taken to retrieve the output [21].
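The log, frequency-domain emphasis, and inverse steps described above can be sketched as follows. This is a generic textbook-style homomorphic filter, not the specific filter of any cited reference; the Gaussian-shaped high-emphasis transfer function and all parameter values (gamma_l, gamma_h, c, d0) are illustrative assumptions.

```python
import numpy as np

def homomorphic(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=30.0, eps=1e-6):
    """Homomorphic filtering: log -> FFT -> high-emphasis filter -> IFFT -> exp.
    gamma_l < 1 attenuates low frequencies (illumination); gamma_h > 1
    boosts high frequencies (reflectance)."""
    rows, cols = img.shape
    log_i = np.log(img.astype(float) + eps)
    F = np.fft.fftshift(np.fft.fft2(log_i))          # DC moved to the centre
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2           # squared distance from DC
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-c * d2 / d0 ** 2)) + gamma_l
    out = np.fft.ifft2(np.fft.ifftshift(H * F)).real
    return np.exp(out)                               # back from the log domain
```

Because the transfer function equals gamma_l at DC, a uniform illumination level is compressed (raised to the power gamma_l after the exponential), while fine reflectance detail is amplified by up to gamma_h.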
ISSN: 2302-4046
TELKOMNIKA Vol. 13, No. 2, February 2015: 314–320
Figure 3. Original images in the upper rows; images processed by HOMO (the reflectance
functions) in the lower rows
The illumination component is located close to the centre of the two-dimensional Fourier
transform of the image's logarithm, whereas the reflectance component is situated further out in
the two-dimensional Fourier spectrum. In general, the illumination and reflectance components
are not directly separable in Fourier space [14]. An appropriate separation of the components
depends on suitable parameters of the homomorphic filter. Normally, the parameters are
adjusted according to the experience of the researcher. Adelmann [14] utilized homomorphic
filtering with various parameter settings, and the most suitable setting was chosen for further
processing. Reference [15] chose a fixed set of values for the homomorphic parameters
according to their experience in applying the algorithm. According to [16], there are various
values for this kind of filter; the authors tested the parameters and selected the most suitable
ones. In all of these techniques, the parameter-selection method depends on the dataset.
Nevertheless, [17] suggested that the homomorphic parameters should be selected according
to the global contrast factor (GCF) [18], which does not rely on any database.
2.4. Isotropic Diffusion-based Normalization Algorithm
The isotropic diffusion-based normalization (IS) algorithm utilizes isotropic smoothing of
the image to estimate the luminance function. It represents a simplified variant of the
anisotropic diffusion-based normalization technique suggested by Brajovic and Gross [19].
The anisotropic diffusion (AD) algorithm is popular for extracting the illumination-invariant
features of a face image. Its behaviour depends on the conduction function and a measure of
discontinuity [20]. Nevertheless, in the conventional anisotropic diffusion algorithm, the
discontinuity measure is normally taken as the inhomogeneity or spatial gradient [21, 35].
In the Lambertian convex surface model, the face image is expressed as shown below:
I(x, y) = R(x, y) L(x, y). (3)
In (3), R(x, y) is the reflectance of the scene and L(x, y) represents the illumination.
Thus, illumination normalization for face verification can be achieved by estimating the
illumination L(x, y). However, L(x, y) cannot be computed directly from I(x, y) because the
problem is ill-posed. It is commonly assumed that R(x, y) varies more quickly than L(x, y).
Therefore, in most approaches, R(x, y) is retrieved as the difference between the logarithm of
the image I(x, y) and a smoothed version of it, which approximates L(x, y). The logarithmic
function suppresses noise and makes the computation easier. This class of methods is known
as the generic quotient image. The basic diagram of the generic quotient image method is
shown in Figure 4.
Figure 4. Basic diagram of generic quotient image method
The main highlight of the generic quotient image is the method used to estimate the
illumination L(x, y) from the image I(x, y).
Figure 5. Original images in the upper rows; images processed by isotropic diffusion
(the reflectance functions) in the lower rows
2.5. Adaptive Non-Local Means Based Technique for Normalization
The Adaptive Non-Local Means based normalization (ANL) technique was introduced
by Štruc and Pavešić [22, 34]. This technique utilizes the adaptive non-local means de-noising
algorithm to estimate the luminance function and then to compute the reflectance. The adaptive
non-local means filter is similar to NLM filtering; however, the smoothing parameter is adapted
locally as expressed in:
h2(i) = min( d(Ri, Rj) ), j ≠ i, (4)
where the distance d is measured on a volume R obtained by subtracting the low-pass filtered
volume from the original noisy volume u. It was discovered experimentally that the minimal
distance required in this situation is approximately σ², because of the removal of the
low-frequency information and the application of the minimum operator [23, 24].
This approach is simple and has two critical advantages. It allows patches with a similar
structure but a different mean level to be found, compensating for the intensity inhomogeneity
that normally exists in the data [25, 26]. Moreover, the overestimation of the noise variance will
be minimal in situations with unique patches in the search region [34]. Therefore, the adaptive
filter sets the parameter h2 equal to the minimal distance, as in Equation (4).
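The adaptive choice of h² in Equation (4) can be illustrated with a toy one-dimensional non-local means filter. This is a deliberately simplified sketch, not the cited 3-D MRI implementation: the function name, patch and search-window sizes, and the reflect padding are assumptions for the example.

```python
import numpy as np

def adaptive_nlm_1d(signal, patch=3, search=10):
    """Toy 1-D adaptive non-local means: for each sample, the smoothing
    parameter h^2 is set to the minimal patch distance within the search
    window (cf. Eq. (4)), instead of a global noise-variance-based value."""
    n = len(signal)
    pad = np.pad(signal.astype(float), patch, mode="reflect")
    out = np.empty(n)
    for i in range(n):
        p_i = pad[i:i + 2 * patch + 1]            # patch centred on sample i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        dists, vals = [], []
        for j in range(lo, hi):
            if j == i:
                continue
            p_j = pad[j:j + 2 * patch + 1]
            dists.append(np.mean((p_i - p_j) ** 2))
            vals.append(signal[j])
        h2 = max(min(dists), 1e-12)               # adaptive h^2 = min distance
        w = np.exp(-np.array(dists) / h2)         # NLM similarity weights
        out[i] = (np.dot(w, vals) + signal[i]) / (w.sum() + 1.0)
    return out
```

Setting h² to the minimal observed patch distance means the weights adapt to the local patch statistics, so regions with many similar patches are averaged strongly while unique patches are left largely untouched.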
Figure 6. Original images in the upper rows; images processed by ANL (the reflectance
functions) in the lower rows
3. Comparisons
Table 1 summarizes the pros and cons of the illumination filters. In Table 1, the different
techniques are compared in terms of performance and accuracy.
Table 1. Advantages and disadvantages of illumination filters in face recognition

| Algorithm | Advantages | Disadvantages | Reference |
|---|---|---|---|
| SSR | As the scale of an SSR increases, its color-reliability features improve. | 1. Includes halo artifacts. 2. SSRs have different dynamic-range-compression characteristics according to the scale. 3. The SSR does not give good tonal rendition. | [5, 27] |
| MSR | Works effectively with grayscale images. | Histogram equalization is utilized to improve the color of the images; this may change the color scale, causing artifacts and an imbalance in the color of the images. | [28, 28] |
| HOMO | The filter links the low frequencies of the image with illumination and the high frequencies with reflectance. | Discrete features in attractive aspects between contiguous discrete scales may not be found at the output. | [16, 31] |
| Isotropic | Is able to preserve image edges and reduce noise at the same time. | It is insensitive to orientation and symmetric, resulting in blurred edges. | [21, 32] |
| Adaptive non-local | Applications that include segmentation, relaxometry or tractography might make use of the improved data created after applying the proposed filtering. | The technique does not significantly reduce the complexity of the algorithm, while only slightly decreasing the filtering accuracy. | [26, 33] |
4. Conclusion
Because illumination is a critical challenge in face recognition, researchers have studied
illumination variations extensively in pattern recognition and computer vision. Several
techniques have been portrayed that tolerate and/or compensate for the image variations
caused by illumination changes. Nevertheless, handling illumination in face recognition is still a
huge challenge that needs continuous effort and attention. This study has carried out an
extensive survey, including techniques that have been introduced recently.
References
[1] Mansoor, Nafees, AKM Muzahidul Islam, and M. AbdurRazzak. Multiple description discrete cosine
transform-based image coding using DC coefficient relocation and AC coefficient interpolation. Journal
of Electronic Imaging. 2013; 22(2): 023030-023030.
[2] X Tan, B Triggs. Enhanced local texture feature sets for face recognition under difficult lighting
conditions. Image Processing, IEEE Transactions on. 2010; 19(6): 1635-1650.
[3] S Karamizadeh, SM Abdullah, M Zamani. An Overview of Holistic Face Recognition. IJRCCT.
2013; 2(9): 738-741.
[4] ZU Rahman, DJ Jobson, GA Woodell. Multi-scale retinex for color image enhancement. 1003-1006.
[5] CY Jang, J Hyun, S Cho, HS Kim, YH Kim. Adaptive Selection of Weights in Multi-scale Retinex using
Illumination and Object Edges. IPCV. 2012.
[6] DJ Jobson, ZU Rahman, GA Woodell. A multiscaleretinex for bridging the gap between color images
and the human observation of scenes. Image Processing, IEEE Transactions on. 1997; 6(7): 965-976.
[7] JA Frankle, JJ McCann. Method and apparatus for lightness imaging. Google Patents. 1983.
[8] F Ciurea, B Funt. Tuning retinex parameters. Journal of Electronic Imaging. 2004; 13(1): 58-64.
[9] IR Terol-Villalobos. Multiscale image enhancement and segmentation based on morphological
connected contrast mappings. MICAI 2004: Advances in Artificial Intelligence. Springer. 2004; 662-
671.
[10] DH Choi, IH Jang, MH Kim, NC Kim. Color image enhancement based on single-scale retinex with a
JND-based nonlinear filter. Illumination. 2007; 1: 1.
[11] M Herscovitz, O Yadid-Pecht. A modified Multi Scale Retinex algorithm with an improved global
impressionof brightness for wide dynamic range pictures. Machine Vision and Applications. 2004;
15(4): 220-228.
[12] B Sun, W Chen, H Li, W Tao, J Li. Modified luminance based adaptive MSR. 116-120.
[13] HG Adelmann. Butterworth equations for homomorphic filtering of images. Computers in Biology and
Medicine. 1998; 28(2): 169-181.
[14] K Delac, M Grgic, T Kos. Sub-image homomorphic filtering technique for improving facial identification
under difficult illumination conditions. 21-23.
[15] CN Fan, FY Zhang. Homomorphic filtering based illumination normalization method for face
recognition. Pattern Recognition Letters. 2011; 32(10): 1468-1479.
[16] K Baek, Y Chang, D Kim, Y Kim, B Lee, H Chung, Y Han, H Hahn. Face region detection using DCT
and homomorphic filter. 7-12.
[17] K Matković, L Neumann, A Neumann, T Psik, W Purgathofer. Global contrast factor-a new approach
to image contrast. 159-167.
[18] R Gross, V Brajovic. An image preprocessing algorithm for illumination invariant face recognition. 10-
18.
[19] H Wang, SZ Li, Y Wang. Face recognition under varying lighting conditions using self quotient image.
819-824.
[20] W Li, T Kuang, W Gong. Anisotropic diffusion algorithm based on weber local descriptor for
illumination invariant face verification. Machine Vision and Applications. 2014; 25(4): 997-1006.
[21] V Štruc, N Pavešić. Illumination invariant face recognition by non-local smoothing: Springer. 2009.
[22] Karamizadeh, Sasan, Shahidan M Abdullah, Mazdak Zamani, AtabakKherikhah. Pattern Recognition
Techniques: Studies on Appropriate Classifications. Advanced Computer and Communication
Engineering Technology. Springer International Publishing. 2015; 791-799.
[23] JV Manjón, P Coupé, L Martí‐Bonmatí, DL Collins, M Robles. Adaptive non‐ local means denoising of
MR images with spatially varying noise levels. Journal of Magnetic Resonance Imaging. 2010; 31(1):
192-203.
[24] N Wiest-Daesslé, S Prima, P Coupé, SP Morrissey, C Barillot. Rician noise removal by non-local
means filtering for low signal-to-noise ratio MRI: applications to DT-MRI. Medical Image Computing
and Computer-Assisted Intervention–MICCAI. Springer. 2008: 171-179.
[25] L Yang, R Parton, G Ball, Z Qiu, AH Greenaway, I Davis, W Lu. An adaptive non-local means filter for
denoising live-cell images and improving particle detection. Journal of structural biology. 2010; 172(3):
233-243.
[26] L Wu, P Zhou, X Xu. An Illumination Invariant Face Recognition Scheme to Combining Normalized
Structural Descriptor with Single Scale Retinex. Biometric Recognition. Springer. 2013; 34-42.
[27] Q Meng, D Bian, M Guo, F Lu, D Liu. Improved multi-scale retinex algorithm for medical image
enhancement. Information Engineering and Applications. Springer. 2012; 930-937.
[28] L Yong, YP Xian. Multi-dimensional multi-scale image enhancement algorithm. V3-165-V3-168.
[29] Mansoor, Nafees, FatouNdiaye, Shahin Akter Chowdhury, M Abdur Razzak. Multiple description
image coding: A low complexity approach for lossy networks. TENCON 2008-2008 IEEE Region 10
Conference. IEEE. 2008; 1-5.
[30] AK Sao, B Yegnanarayana. On the use of phase of the Fourier transform for face recognition under
variations in illumination. Signal, image and video processing. 2010; 4(3): 353-358
[31] W GE, Gj Li, Yq CHENG, C XUE, M ZHU. Face image illumination processing based on improved
Retinex. Opt. Precision Eng. 2010; 18(4): 1011-1020.
[32] T Tasdizen. Principal components for non-local means image denoising. 1728-1731.
[33] Karamizadeh, Sasan, Shahidan M Abdullah, Mehran Halimi, Jafar Shayan, Mohammad javadRajabi.
Advantage and Drawback of Support Vector Machine Functionality.
[34] Qiang, Song, Zhang Hai-Feng. Research on Application of Sintering Basicity of Based on Various
Intelligent Algorithms. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2014; 12(11):
7728-7737.
[35] Adhani, Gita, AgusBuono, AkhmadFaqih. Optimization of Support Vector Regression using Genetic
Algorithm and Particle Swarm Optimization for Rainfall Prediction in Dry Season. TELKOMNIKA
Indonesian Journal of Electrical Engineering. 2014; 12(11): 7912-7919.