This document summarizes a study that uses Particle Swarm Optimization (PSO) for automatic segmentation of nano-particles in Transmission Electron Microscopy (TEM) images. PSO is applied to determine local and global segmentation thresholds by treating image entropy as a minimization problem. Results show that the PSO method improves on previous techniques, reducing incorrect characterization of nano-particles in images affected by liquid concentrations or supporting materials by up to 27%. Compared to manual characterization, PSO provides comparable particle counts with higher computational efficiency, making it suitable for real-time analysis.
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION (IJCI Journal)
The availability of imaging sensors operating in multiple spectral bands has created a need for image fusion algorithms that combine the images from these sensors efficiently into a single image that is more informative and more perceptible to the human eye. Multispectral image fusion is the process of combining optically acquired images from different spectral bands. In this paper, we use pixel-level image fusion based on principal component analysis (PCA) to combine satellite images of the same scene from seven different spectral bands. PCA is chosen because it performs well for grayscale image fusion: its aim is to reduce a large set of variables to a small set that still contains most of the information present in the large set. The paper compares several parameters, namely entropy, standard deviation, and correlation coefficient, as the number of fused images varies from two to seven. Finally, the paper shows that the information content of the fused image saturates after four images are fused.
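The PCA weighting described above can be sketched as follows. This is an illustrative numpy sketch, not the paper's code: it assumes the standard PCA-fusion recipe in which per-band weights come from the leading eigenvector of the band covariance matrix.

```python
import numpy as np

def pca_fuse(images):
    """Fuse co-registered grayscale images by PCA weighting.

    Each image becomes one column of a (num_pixels, num_bands) matrix;
    the leading eigenvector of the band covariance gives the weights.
    """
    X = np.stack([img.ravel().astype(float) for img in images], axis=1)
    Xc = X - X.mean(axis=0)                 # centre each band
    cov = np.cov(Xc, rowvar=False)          # band-by-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    w = eigvecs[:, -1]                      # leading eigenvector
    w = np.abs(w) / np.abs(w).sum()         # normalised, non-negative weights
    fused = X @ w                           # weighted sum of the bands
    return fused.reshape(images[0].shape)
```

Because the weights are non-negative and sum to one, each fused pixel is a convex combination of the corresponding band values.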
To address low and unacceptable accuracy in image recognition, a feature extraction method for optical images based on wavelet-space feature spectrum entropy has recently been studied. The method uses the principle that energy is conserved across the wavelet transform to construct wavelet energy pattern matrices, and extracts the feature spectrum entropy of the singular values, obtained by singular value decomposition of each matrix, as the image features. A BP (back-propagation) neural network is then applied for recognition. Experimental results show that high recognition accuracy can be achieved with the proposed feature extraction method, which demonstrates its validity.
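A minimal sketch of the feature-spectrum-entropy step, assuming the wavelet energy pattern matrix has already been built (the wavelet decomposition itself is omitted):

```python
import numpy as np

def feature_spectrum_entropy(energy_matrix):
    """Shannon entropy of the normalised singular-value spectrum.

    `energy_matrix` stands in for the wavelet energy pattern matrix
    described in the abstract; only its singular values are used.
    """
    s = np.linalg.svd(energy_matrix, compute_uv=False)
    p = s / s.sum()              # normalise to a probability distribution
    p = p[p > 0]                 # drop (numerically) zero singular values
    return float(-np.sum(p * np.log(p)))
```

A matrix whose energy is spread evenly over all singular values gives maximal entropy; a near-rank-one matrix gives entropy close to zero.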
APPLYING DYNAMIC MODEL FOR MULTIPLE MANOEUVRING TARGET TRACKING USING PARTICL... (IJITCA Journal)
In this paper, we apply a dynamic model for manoeuvring targets within the SIR particle filter algorithm to improve tracking accuracy for multiple manoeuvring targets. In the proposed approach, a colour distribution model is used to detect changes in the target model, and the approach monitors the model's deformation: if the deformation exceeds a predetermined threshold, the model is updated. The Global Nearest Neighbor (GNN) algorithm is used for data association. We name the proposed method the Deformation Detection Particle Filter (DDPF) and compare it with the basic SIR-PF algorithm on real airshow videos. The comparisons show that the basic SIR-PF cannot track manoeuvring targets when the target model undergoes rotation or scaling, whereas DDPF updates the target model in those cases and therefore tracks the manoeuvring targets more efficiently and accurately.
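The threshold-based model update can be sketched as below. The histogram appearance model, the Bhattacharyya-distance deformation measure, and the 0.3 threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def color_model(patch, bins=16):
    """Normalised intensity histogram used as a simple appearance model."""
    h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def maybe_update_model(model, patch, threshold=0.3, bins=16):
    """Update the target model when its deformation exceeds a threshold.

    Deformation is measured with the Bhattacharyya distance between the
    stored histogram and the current candidate patch.
    """
    cand = color_model(patch, bins)
    bc = np.sum(np.sqrt(model * cand))          # Bhattacharyya coefficient
    deformation = np.sqrt(max(1.0 - bc, 0.0))   # Bhattacharyya distance
    if deformation > threshold:
        return cand, True      # model replaced
    return model, False        # model kept
```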
MR image compression based on selection of mother wavelet and lifting based w... (ijma)
Magnetic Resonance (MR) imaging is a medical imaging technique that requires enormous amounts of data to be stored and transmitted for high-quality diagnostic applications, and various algorithms have been proposed to improve the performance of compression schemes. In this paper we extend commonly used algorithms to image compression and compare their performance. For the compression stage, we link different wavelet techniques, using traditional mother wavelets and the lifting-based Cohen-Daubechies-Feauveau wavelet with low-pass filters of lengths 9 and 7 (CDF 9/7), with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. A novel image quality index that highlights the shape of the histogram of the target image is introduced to assess compression quality. The index is intended to replace the traditional Universal Image Quality Index (UIQI) "in one go": compared with UIQI, it provides extra information about the distortion between an original image and its compressed version. The proposed index models image compression as a combination of four major factors: loss of correlation, luminance distortion, contrast distortion, and shape distortion. It is easy to calculate and applicable in various image processing applications. One of our contributions is to demonstrate, using the proposed quality indexes, that the choice of mother wavelet is very important for achieving superior wavelet compression performance. Experimental results show that the proposed index plays a significant role in evaluating compression quality on the open "BrainWeb: Simulated Brain Database (SBD)".
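One plausible way to combine the four factors is sketched below: the first three terms follow the classical UIQI decomposition, while the shape term is a histogram intersection used as a stand-in for the paper's histogram-shape factor (an assumption, since the exact definition is not given in the abstract):

```python
import numpy as np

def four_factor_index(x, y, bins=32):
    """Illustrative quality index: UIQI's three factors times a shape term.

    x is the original image, y the compressed one; values assumed in [0, 1].
    """
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = np.mean((x - mx) * (y - my))
    corr = sxy / (sx * sy)                       # loss of correlation
    lum = 2 * mx * my / (mx ** 2 + my ** 2)      # luminance distortion
    con = 2 * sx * sy / (sx ** 2 + sy ** 2)      # contrast distortion
    hx, _ = np.histogram(x, bins=bins, range=(0, 1))
    hy, _ = np.histogram(y, bins=bins, range=(0, 1))
    shape = np.minimum(hx / hx.sum(), hy / hy.sum()).sum()  # histogram overlap
    return corr * lum * con * shape
```

For identical images all four factors equal 1, so the index is 1; any of the four distortions pulls it below 1.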
Object Recognition Based on Undecimated Wavelet Transform (IJCOAiir)
Object Recognition (OR) is the task, in computer vision, of finding a specified object in an image or video sequence. An efficient method for recognizing an object in an image based on the Undecimated Wavelet Transform (UWT) is proposed. In this system, the given image is decomposed using the UWT and all the undecimated coefficients are taken as features for the classification process. The method is applied to all training images, and the features extracted from an unknown object are fed to a K-Nearest Neighbor (K-NN) classifier to recognize the object. The system is evaluated on the Columbia Object Image Library (COIL-100) database.
ADOPTION AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION (ijistjournal)
A different image fusion algorithm based on the self-organizing feature map, aimed at producing quality images, is proposed in this paper. Image fusion integrates complementary and redundant information from multiple images of the same scene into a single composite image that contains all the important features of the originals; the fused image is thus more suitable for human and machine perception and for further image processing tasks. Existing fusion techniques, based either on direct operation on pixels or on segments, often fail to produce fused images of the required quality and are mostly application-specific, and existing segmentation algorithms become complicated and time consuming when multiple images are to be fused. A new method of segmenting and fusing grayscale images using Self-Organizing Feature Maps (SOM) is therefore proposed. The SOM is used to produce multiple slices of the source and reference images based on various combinations of gray levels, which can be fused dynamically depending on the application. The proposed technique is applied and analyzed for fusion of multiple images. It is robust in the sense that no information is lost, owing to the properties of the SOM; noise in the source images is removed during the processing stage; and fusion of multiple images is performed dynamically to obtain the desired results. Experimental results demonstrate that, for quality multifocus image fusion, the proposed method performs better than several popular fusion methods in both subjective and objective quality.
Shift Invariant and Eigen Feature Based Image Fusion (ijcisjournal)
Image fusion is a technique for fusing multiple images to obtain more information and a more accurate image than the inputs. It has applications in biomedical imaging, remote sensing, pattern recognition, multi-focus image integration, and the modern military. The proposed methodology uses the benefits of the Stationary Wavelet Transform (SWT) and Principal Component Analysis (PCA) to fuse two images. The results are compared with existing methodologies and show robustness in terms of entropy, Peak Signal-to-Noise Ratio (PSNR), and standard deviation.
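The comparison parameters named above can be computed routinely; a sketch for 8-bit images:

```python
import numpy as np

def fusion_metrics(fused, reference):
    """Entropy, standard deviation, and PSNR for comparing fused images.

    Images are assumed to be 8-bit arrays; PSNR uses the usual 255 peak.
    """
    hist, _ = np.histogram(fused, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))       # bits per pixel
    std = float(fused.std())
    mse = np.mean((fused.astype(float) - reference.astype(float)) ** 2)
    psnr = float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
    return entropy, std, psnr
```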
Multi objective predictive control: a solution using metaheuristics (ijcsit)
The application of multi-objective model predictive control is significantly limited by the computation time of the underlying optimization algorithms. Metaheuristics are general-purpose heuristics that have been used successfully to solve difficult optimization problems in reasonable computation time. In this work, we use and compare two multi-objective metaheuristics, Multi-Objective Particle Swarm Optimization (MOPSO) and the Multi-Objective Gravitational Search Algorithm (MOGSA), to generate a set of approximately Pareto-optimal solutions in a single run. Two examples are studied: a nonlinear system consisting of two mobile robots tracking trajectories while avoiding obstacles, and a linear multivariable system. The computation times and the solution quality, measured by the smoothness of the control signals and the tracking precision, show that MOPSO can be an alternative for real-time applications.
We present a technique for moving-object extraction. Among the several approaches to moving-object extraction, clustering is one with a strong theoretical foundation, used in many applications, and the extraction process demands high performance. We compare the K-Means and Self-Organizing Map (SOM) methods for extracting moving objects, measuring performance with MSE and PSNR. The experimental results show that the MSE of K-Means is smaller than that of the SOM, and its PSNR is higher. This suggests that K-Means is a promising method for clustering pixels in moving-object extraction.
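The K-Means clustering step can be sketched on 1-D pixel intensities as below (a minimal illustration, not the authors' implementation):

```python
import numpy as np

def kmeans_pixels(values, k=2, iters=20, seed=0):
    """Plain K-Means on 1-D pixel intensities: alternate nearest-centre
    assignment and centre re-estimation until the partition stabilises."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest centre.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):            # leave empty clusters fixed
                centers[j] = values[labels == j].mean()
    return labels, centers
```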
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a... (IJERD Editor)
This paper presents a blind steganalysis technique to effectively attack JPEG steganographic schemes, i.e. Jsteg, F5, Outguess, and a DWT-based scheme. The proposed method exploits correlations between block-DCT coefficients, both intra-block and inter-block, and selects the statistical moments of the characteristic functions of the test image as features. The features are extracted from the 2-D array of BDCT JPEG coefficients, and a Support Vector Machine with cross-validation is used for classification. The proposed scheme gives improved results in attacking these schemes.
QUALITY ASSESSMENT OF PIXEL-LEVEL IMAGE FUSION USING FUZZY LOGIC (ijsc)
Image fusion aims to reduce uncertainty and minimize redundancy while maximizing the relevant information from two or more images of a scene, combining them into a single composite image that is more informative and better suited to visual perception and to processing tasks such as medical imaging, remote sensing, concealed weapon detection, weather forecasting, and biometrics. Image fusion combines registered images to produce a high-quality fused image with both spatial and spectral information; a fused image with more information improves the performance of the image analysis algorithms used in different applications. In this paper, we propose a fuzzy logic method to fuse images from different sensors in order to enhance quality, and compare the proposed method with two others, image fusion using the wavelet transform and weighted-average discrete wavelet transform based image fusion using a genetic algorithm (abbreviated GA hereafter), using the quality evaluation parameters image quality index (IQI), mutual information measure (MIM), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), fusion factor (FF), fusion symmetry (FS), fusion index (FI), and entropy. The results show that the proposed fuzzy-based fusion approach improves the quality of the fused image compared to the earlier reported methods, wavelet transform based fusion and weighted-average discrete wavelet transform based fusion using a genetic algorithm.
EDGE DETECTION IN SEGMENTED IMAGES THROUGH MEAN SHIFT ITERATIVE GRADIENT USIN... (ijscmcj)
In this paper, we propose a new method for edge detection in images obtained from the mean shift iterative algorithm. Comparable, proportional, and symmetrical images are defined, and the relevance of ring theory is explained; an equivalence relation among proportional images is defined, grouping images into equivalence classes. The length of the mean shift vector is used to quantify the homogeneity of the neighbourhoods, giving a measure of how uniform the regions composing the image are. Edge detection is carried out using the mean shift gradient based on symmetrical images: the differences among gray-level values are accentuated, or decreased, to enhance the contours of the region of interest. The images chosen for the experiments were standard images and real images (cerebral hemorrhage images). The results were compared with the Canny detector, and our method showed good performance in terms of edge continuity.
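The homogeneity measure based on the mean shift vector length can be sketched for a 1-D intensity neighbourhood as below; the flat (uniform) kernel is an illustrative assumption:

```python
import numpy as np

def mean_shift_length(values, center, bandwidth=1.0):
    """Length of the mean shift vector |m(x) - x| for a 1-D neighbourhood.

    A small length means the window mean sits close to the window centre,
    i.e. the region is homogeneous; a large length suggests an edge.
    """
    window = values[np.abs(values - center) <= bandwidth]
    if window.size == 0:
        return 0.0
    return float(abs(window.mean() - center))
```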
Optimized Neural Network for Classification of Multispectral Images (IDES Editor)
The proposed work involves multiobjective-PSO-based optimization of an artificial neural network structure for the classification of multispectral satellite images. The neural network classifies each image pixel into land cover types such as vegetation, waterways, man-made structures, and road networks; it is a per-pixel supervised classification using the spectral bands (the original feature space). Using a neural network for classification requires selecting the most discriminative spectral bands and determining the optimal number of hidden-layer nodes. We propose a new methodology based on multiobjective particle swarm optimization (MOPSO) to determine the discriminative spectral bands and the number of hidden-layer nodes simultaneously. The results obtained with the optimized network are compared with those of traditional classifiers such as MLC and the Euclidean classifier. The performance of all classifiers is evaluated quantitatively using the Xie-Beni and β indexes, and the results show the superiority of the proposed method.
Adaptive threshold for moving object detection using Gaussian mixture model (TELKOMNIKA Journal)
Moving object detection is an important task in video surveillance systems, and automatically defining the threshold that separates a moving object from the background of a video is challenging. This study proposes the Gaussian mixture model (GMM) as a thresholding strategy for moving object detection. The performance of the proposed method is compared against the Otsu algorithm and a gray threshold as baseline methods, using mean square error (MSE) and peak signal-to-noise ratio (PSNR), on a human video dataset. The average MSE of GMM is 257.18, versus 595.36 for Otsu and 645.39 for the gray threshold, so GMM's MSE is lower than both baselines; the average PSNR of GMM is 24.71, versus 20.66 for Otsu and 19.35 for the gray threshold, so GMM's PSNR is higher. The proposed method therefore outperforms the baselines in terms of detection error.
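The thresholding idea can be sketched as follows: fit a two-component 1-D Gaussian mixture to the pixel intensities by EM and take the crossing point of the two weighted component densities as the threshold. The initialisation and the grid-based crossing search are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def gmm_threshold(pixels, iters=60):
    """Two-component 1-D GMM fitted by EM; returns the density crossing."""
    x = pixels.astype(float).ravel()
    mu = np.array([x.min(), x.max()])            # crude initialisation
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per pixel.
        d = (x[:, None] - mu[None, :]) ** 2
        pdf = w * np.exp(-d / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        n = r.sum(axis=0)
        w = n / n.sum()
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n + 1e-6
    # Threshold: grid point between the means where the weighted
    # component densities are closest (their crossing point).
    grid = np.linspace(mu.min(), mu.max(), 512)
    dens = w * np.exp(-(grid[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return float(grid[np.argmin(np.abs(dens[:, 0] - dens[:, 1]))])
```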
A NOVEL APPROACH FOR SEGMENTATION OF SECTOR SCAN SONAR IMAGES USING ADAPTIVE ... (ijistjournal)
SAR and SAS images are perturbed by a multiplicative noise called speckle, due to the coherent nature of the scattering phenomenon. When the background of an image is uneven, a fixed threshold is unsuitable for segmentation, so an adaptive thresholding method must be used. In this paper a new adaptive thresholding method is proposed that reduces speckle noise while preserving the structural features and textural information of sector scan SONAR (Sound Navigation and Ranging) images. Given the massive proliferation of SONAR images, the proposed method is very appealing for underwater applications; in fact, it is a pre-treatment required in any SONAR image analysis system. The results obtained with the proposed method were compared quantitatively and qualitatively with other speckle reduction techniques and demonstrate its higher performance for speckle reduction in SONAR images.
Fast Motion Estimation for Quad-Tree Based Video Coder Using Normalized Cross... (CSCJournals)
Motion estimation is the most challenging and time-consuming stage in a block-based video codec, and many fast motion estimation algorithms have been proposed and implemented to reduce its computation time. This paper proposes a quad-tree based Normalized Cross Correlation (NCC) measure for estimating inter-frame motion. The measure operates in the frequency domain, using the FFT to evaluate the similarity over an exhaustive full search in the region of interest. NCC is a more suitable similarity measure than the Sum of Absolute Differences (SAD) for reducing temporal redundancy in video compression, since it yields a flatter residual after motion compensation. The degree of homogeneity and stationarity of regions is determined by selecting a suitable fixed initial threshold for block partitioning. Experimental results show that the proposed method produces significantly fewer motion vectors than existing methods, with only a marginal effect on the quality of the reconstructed frame, and gives a higher speed-up ratio for both fixed-block and quad-tree based motion estimation.
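The NCC similarity and the exhaustive full search can be sketched as below. For clarity this evaluates NCC directly in the spatial domain, whereas the paper computes it in the frequency domain via the FFT:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equal-size blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def best_motion_vector(block, frame, top, left, search=4):
    """Exhaustive full search in a small region of interest: return the
    displacement whose candidate block maximises NCC with `block`."""
    h, w = block.shape
    best, best_dv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue                      # candidate falls off the frame
            score = ncc(block, frame[y:y + h, x:x + w])
            if score > best:
                best, best_dv = score, (dy, dx)
    return best_dv, best
```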
An efficient image compression algorithm using DCT biorthogonal wavelet trans... (eSAT Journals)
Abstract
Recently, digital imaging applications have increased significantly, creating a need for effective image compression techniques. Image compression removes redundant information from an image, so that only the necessary information is stored, which reduces the transmission bandwidth, transmission time, and storage size. This paper proposes a new image compression technique using a DCT-Biorthogonal Wavelet Transform with arithmetic coding to improve the visual quality of the image; it is a simple technique for obtaining better compression results. In the new algorithm, a biorthogonal wavelet transform is applied first, then a 2-D DCT is applied to each block of the low-frequency sub-band; finally, the values from each transformed block are split and arithmetic coding is applied for compression.
Keywords: Arithmetic coding, Biorthogonal wavelet Transform, DCT, Image Compression.
Ant Colony Optimization (ACO) based Data Hiding in Image Complex Region (IJECEIAES)
This paper presents an Ant Colony Optimization (ACO) based data hiding technique. ACO is used to detect the complex regions of a cover image, and least significant bit (LSB) substitution is then used to hide secret information in the detected complex-region pixels. ACO is an algorithm inspired by the innate behaviour of ant species: ants leave pheromone on the ground while searching for food. The proposed method builds an array of pheromone, called the pheromone matrix, which represents the complexity at each pixel position of the cover image; the matrix is developed according to the movements of the ants, which are determined by local differences in the image intensity. The least significant bits of the complex-region pixels are then substituted with message bits to hide the secret information. The experimental results provided show the strong performance of the proposed method.
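The embedding step can be sketched as below, taking the pheromone matrix as given (the ACO construction itself is omitted, and the 0.5 complexity threshold is an illustrative assumption):

```python
import numpy as np

def embed_in_complex_pixels(cover, pheromone, bits, thresh=0.5):
    """Hide message bits in the LSBs of pixels the pheromone matrix marks
    as complex (pheromone above `thresh`)."""
    stego = cover.copy()
    ys, xs = np.nonzero(pheromone > thresh)    # complex-region pixel coords
    assert len(bits) <= len(ys), "not enough complex pixels for payload"
    for bit, y, x in zip(bits, ys, xs):
        # Clear the LSB, then set it to the message bit.
        stego[y, x] = (stego[y, x] & ~np.uint8(1)) | np.uint8(bit)
    return stego

def extract_bits(stego, pheromone, n, thresh=0.5):
    """Recover the first n embedded bits from the same complex pixels."""
    ys, xs = np.nonzero(pheromone > thresh)
    return [int(stego[y, x] & 1) for y, x in zip(ys[:n], xs[:n])]
```

Since only LSBs change, no pixel moves by more than one gray level.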
Image segmentation is useful in many applications: it can identify the regions of interest in a scene or annotate the data. Existing segmentation algorithms can be categorized into region-based segmentation, data clustering, and edge-based segmentation; region-based segmentation includes the seeded and unseeded region growing algorithms, JSEG, and the fast scanning algorithm. Due to the presence of speckle noise, segmentation of Synthetic Aperture Radar (SAR) images remains a challenging problem. We propose a fast SAR image segmentation method based on the hybrid Particle Swarm Optimization-Gravitational Search Algorithm (PSO-GSA). In this method, threshold estimation is treated as a search for an appropriate value in a continuous grayscale interval, and the PSO-GSA algorithm is employed to find the optimal threshold. Experimental results indicate that our method is superior to GA-based, AFS-based, and ABC-based methods in terms of segmentation accuracy, segmentation time, and thresholding.
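The search formulation can be sketched with plain PSO over a single continuous threshold, using an Otsu-style between-class-variance objective as a stand-in for the paper's criterion. Both the objective and the PSO constants are illustrative assumptions, and the paper hybridises PSO with GSA rather than using PSO alone:

```python
import numpy as np

def between_class_variance(pixels, t):
    """Otsu-style objective for a candidate threshold t (maximised)."""
    lo, hi = pixels[pixels <= t], pixels[pixels > t]
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w0, w1 = lo.size / pixels.size, hi.size / pixels.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

def pso_threshold(pixels, n_particles=20, iters=40, seed=0):
    """Plain PSO over one continuous threshold in [min, max] gray level."""
    rng = np.random.default_rng(seed)
    lo, hi = float(pixels.min()), float(pixels.max())
    pos = rng.uniform(lo, hi, n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pscore = np.array([between_class_variance(pixels, t) for t in pos])
    gbest = pbest[np.argmax(pscore)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # Standard velocity update: inertia + cognitive + social terms.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        score = np.array([between_class_variance(pixels, t) for t in pos])
        improved = score > pscore
        pbest[improved], pscore[improved] = pos[improved], score[improved]
        gbest = pbest[np.argmax(pscore)]
    return float(gbest)
```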
TERRAIN IDENTIFICATION USING CO-CLUSTERED MODEL OF THE SWARM INTELLIGENCE & S... (cscpconf)
A digital image is nothing more than data: numbers indicating variations of red, green, and blue at particular locations on a grid of pixels. Clustering is the process of assigning data objects to a set of disjoint groups, called clusters, so that objects within a cluster are more similar to each other than to objects in different clusters. Clustering techniques are applied in many areas, such as pattern recognition, data mining, and machine learning, and clustering algorithms can be broadly classified as hard, fuzzy, possibilistic, or probabilistic. K-means is one of the most popular hard clustering algorithms; it partitions data objects into k clusters, where the number of clusters k is decided in advance according to the application. This model is inappropriate for real data sets in which there are no definite boundaries between clusters. After Lotfi Zadeh introduced fuzzy theory, researchers brought it into clustering: fuzzy algorithms can assign a data object partially to multiple clusters, with the degree of membership in each fuzzy cluster depending on the closeness of the data object to the cluster centers. The most popular fuzzy clustering algorithm is fuzzy c-means (FCM), introduced by Bezdek in 1974 and now widely used. FCM is an effective algorithm, but its random selection of center points makes the iterative process prone to falling into local optima. To solve this problem, evolutionary algorithms such as the genetic algorithm (GA), simulated annealing (SA), ant colony optimization (ACO), and particle swarm optimization (PSO) have recently been applied with success.
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Application of optimization algorithms for classification problemIJECEIAES
The work presented in this paper investigates the use of metaheuristic optimization algorithms for the face recognition problem. In the first setup, a face recognition system is implemented using particle swarm optimization (PSO) and firefly optimization algorithms, separately. PSO and firefly are used for forming the feature vectors in the feature selection stage. These feature vectors serve as the new representation for the face images that will be fed to the classifier. In the second setup, selected features from both PSO and firefly algorithms are fused to form one single feature vector for each face image before the classification stage. Extensive simulations are conducted using Poznan University of Technology (PUT) and face recognition technology (FERET) face databases. Optimal values for population size and maximum iterations number were selected before conducting the experiments. The effect of using different numbers of selected features on the performance is investigated for feature selection using PSO, firefly, and feature fusion of both.
Shift Invariant and Eigen Feature Based Image Fusion ijcisjournal
Image fusion is a technique of fusing multiple images to obtain a more informative and accurate image than the input images. Image fusion has applications in biomedical imaging, remote sensing, pattern recognition, multi-focus image integration, and modern military systems. The proposed methodology uses the benefits of the Stationary Wavelet Transform (SWT) and Principal Component Analysis (PCA) to fuse two images. The obtained results are compared with existing methodologies and show robustness in terms of entropy, Peak Signal to Noise Ratio (PSNR) and standard deviation.
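The PCA weighting behind such fusion schemes can be sketched in a few lines: the two source images are treated as two variables, and the components of the leading eigenvector of their 2x2 covariance matrix become the fusion weights. This is a generic illustration of pixel-level PCA fusion, not the authors' SWT+PCA pipeline.

```python
import math

def pca_fuse(img1, img2):
    """Fuse two equally sized grayscale images (flat lists of floats)
    by weighting each with the leading-eigenvector components of their
    2x2 covariance matrix, as in PCA-based pixel-level fusion."""
    n = len(img1)
    m1 = sum(img1) / n
    m2 = sum(img2) / n
    # 2x2 covariance matrix [[a, b], [b, c]]
    a = sum((x - m1) ** 2 for x in img1) / n
    c = sum((x - m2) ** 2 for x in img2) / n
    b = sum((x - m1) * (y - m2) for x, y in zip(img1, img2)) / n
    # Leading eigenvalue of a symmetric 2x2 matrix
    lam = 0.5 * ((a + c) + math.sqrt((a - c) ** 2 + 4 * b * b))
    # Corresponding eigenvector (b, lam - a); fall back to equal weights
    v1, v2 = (b, lam - a) if abs(b) > 1e-12 else (1.0, 1.0)
    s = v1 + v2
    w1, w2 = v1 / s, v2 / s
    return [w1 * x + w2 * y for x, y in zip(img1, img2)]
```

Identical inputs yield equal weights of 0.5, so the fused image reproduces the input, which is a quick sanity check for any fusion rule.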
Multi objective predictive control a solution using metaheuristicsijcsit
The application of multi objective model predictive control approaches is significantly limited with
computation time associated with optimization algorithms. Metaheuristics are general purpose heuristics
that have been successfully used in solving difficult optimization problems in a reasonable computation
time. In this work, we use and compare two multi objective metaheuristics, Multi-Objective Particle Swarm Optimization (MOPSO) and Multi-Objective Gravitational Search Algorithm (MOGSA), to generate
a set of approximately Pareto-optimal solutions in a single run. Two examples are studied, a nonlinear
system consisting of two mobile robots tracking trajectories and avoiding obstacles and a linear multi
variable system. The computation times and the quality of the solution in terms of the smoothness of the
control signals and precision of tracking show that MOPSO can be an alternative for real time
applications.
We present a technique for moving object extraction. Among the several approaches to moving object extraction, clustering is a method with a strong theoretical foundation used in many applications, and high performance is needed in the extraction process. We compare the K-Means and Self-Organizing Map methods for extracting moving objects, measuring extraction performance with MSE and PSNR. Experimental results show that the MSE value of K-Means is smaller than that of the Self-Organizing Map, and that the PSNR of K-Means is higher than that of the Self-Organizing Map algorithm. These results suggest that K-Means is a promising method for clustering pixels in moving object extraction.
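The two quality measures used for the comparison are standard; a minimal sketch for 8-bit grayscale images given as flat pixel lists:

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized images (flat lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(peak * peak / e)
```

A lower MSE and a higher PSNR both indicate that the extracted object mask is closer to the reference, which is the criterion applied in the comparison above.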
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...IJERD Editor
This paper presents a blind steganalysis technique to effectively attack the JPEG steganographic schemes Jsteg, F5, Outguess and DWT-based. The proposed method exploits the correlations between block-DCT coefficients from intra-block and inter-block relations, and the statistical moments of characteristic functions of the test image are selected as features. The features are extracted from the BDCT JPEG 2-array. A Support Vector Machine with cross-validation is implemented for the classification. The proposed scheme gives improved outcomes in attacking these schemes.
QUALITY ASSESSMENT OF PIXEL-LEVEL IMAGE FUSION USING FUZZY LOGICijsc
Image fusion aims to reduce uncertainty and minimize redundancy in the output while maximizing relevant information from two or more images of a scene, producing a single composite image that is more informative and more suitable for visual perception or for processing tasks such as medical imaging, remote sensing, concealed weapon detection, weather forecasting and biometrics. Image fusion combines registered images to produce a high quality fused image with spatial and spectral information. A fused image with more information will improve the performance of image analysis algorithms used in different applications. In this paper, we propose a fuzzy logic method to fuse images from different sensors in order to enhance the quality, and compare the proposed method with two other methods, image fusion using the wavelet transform and weighted average discrete wavelet transform based image fusion using a genetic algorithm (abbreviated as GA), along with the quality evaluation parameters image quality index (IQI), mutual information measure (MIM), root mean square error (RMSE), peak signal to noise ratio (PSNR), fusion factor (FF), fusion symmetry (FS), fusion index (FI) and entropy. The results show that the proposed fuzzy-based image fusion approach improves the quality of the fused image compared to the earlier reported methods, wavelet transform based image fusion and weighted average discrete wavelet transform based image fusion using a genetic algorithm.
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSIONijistjournal
A new image fusion algorithm based on self-organizing feature maps is proposed in this paper, aiming to produce quality images. Image fusion integrates complementary and redundant information from multiple images of the same scene to create a single composite image that contains all the important features of the original images. The resulting fused image is thus more suitable for human and machine perception or for further image processing tasks. Existing fusion techniques based on direct operation on pixels or segments fail to produce fused images of the required quality and are mostly application specific, and existing segmentation algorithms become complicated and time consuming when multiple images are to be fused. A new method of segmenting and fusing gray scale images adopting Self-Organizing Feature Maps (SOM) is proposed in this paper. The SOM is used to produce multiple slices of the source and reference images based on various combinations of gray scale, which can be dynamically fused depending on the application. The proposed technique is adopted and analyzed for the fusion of multiple images. The technique is robust in the sense that there is no loss of information, due to the properties of Self-Organizing Feature Maps; noise removal in the source images is done during the processing stage, and the fusion of multiple images is performed dynamically to get the desired results. Experimental results demonstrate that, for quality multifocus image fusion, the proposed method performs better than some popular image fusion methods in both subjective and objective quality.
EDGE DETECTION IN SEGMENTED IMAGES THROUGH MEAN SHIFT ITERATIVE GRADIENT USIN...ijscmcj
In this paper, we propose a new method for edge detection in images obtained from the Mean Shift iterative algorithm. Comparable, proportional and symmetrical images are defined, and the importance of Ring Theory is explained. A relation of equivalence among proportional images is defined, grouping images into equivalence classes. The length of the mean shift vector is used to quantify the homogeneity of the neighborhoods, which gives a measure of how uniform the regions that compose the image are. Edge detection is carried out using the mean shift gradient based on symmetrical images. The differences among the gray-level values are accentuated or decreased to enhance the contours of the regions of interest. The images chosen for the experiments were standard images and real images (cerebral hemorrhage images). The obtained results were compared with the Canny detector, and our results showed good performance in terms of edge continuity.
Optimized Neural Network for Classification of Multispectral ImagesIDES Editor
The proposed work involves multiobjective PSO based optimization of an artificial neural network structure for the classification of multispectral satellite images. The neural network is used to classify each image pixel into various land cover types such as vegetation, waterways, man-made structures and road networks. It is a per-pixel supervised classification using spectral bands (the original feature space). Using a neural network for classification requires selection of the most discriminative spectral bands and determination of the optimal number of nodes in the hidden layer. We propose a new methodology based on multiobjective particle swarm optimization (MOPSO) to determine the discriminative spectral bands and the number of hidden layer nodes simultaneously. The results obtained using such an optimized neural network are compared with those of traditional classifiers like the MLC and Euclidean classifiers. The performance of all classifiers is evaluated quantitatively using the Xie-Beni and β indexes. The results show the superiority of the proposed method.
Adaptive threshold for moving objects detection using gaussian mixture modelTELKOMNIKA JOURNAL
Moving object detection is an important task in video surveillance systems. Defining the threshold automatically to differentiate a moving object from the background within a video is challenging. This study proposes a Gaussian mixture model (GMM) as a threshold strategy in moving object detection. The performance of the proposed method is compared to the Otsu algorithm and the gray threshold as baseline methods using mean square error (MSE) and peak signal to noise ratio (PSNR), evaluated on a human video dataset. The average MSE of GMM is 257.18, versus 595.36 for Otsu and 645.39 for the gray threshold, so the MSE of GMM is lower than that of both baselines. The average PSNR of GMM is 24.71, versus 20.66 for Otsu and 19.35 for the gray threshold, so the PSNR of GMM is higher. The proposed method thus outperforms the baseline methods in terms of detection error.
A NOVEL APPROACH FOR SEGMENTATION OF SECTOR SCAN SONAR IMAGES USING ADAPTIVE ...ijistjournal
SAR and SAS images are perturbed by a multiplicative noise called speckle, due to the coherent nature of the scattering phenomenon. If the background of an image is uneven, a fixed thresholding technique is not suitable for segmentation; an adaptive thresholding method is required. In this paper a new adaptive thresholding method is proposed to reduce the speckle noise while preserving the structural features and textural information of Sector Scan SONAR (Sound Navigation and Ranging) images. Due to the massive proliferation of SONAR images, the proposed method is very appealing in underwater environment applications; in fact, it is a pre-treatment required in any SONAR image analysis system. The results obtained from the proposed method were compared quantitatively and qualitatively with those obtained from other speckle reduction techniques and demonstrate its higher performance for speckle reduction in SONAR images.
Fast Motion Estimation for Quad-Tree Based Video Coder Using Normalized Cross...CSCJournals
Motion estimation is the most challenging and time consuming stage in a block based video codec. To reduce the computation time, many fast motion estimation algorithms have been proposed and implemented. This paper proposes a quad-tree based Normalized Cross Correlation (NCC) measure for obtaining estimates of inter-frame motion. The measure operates in the frequency domain using the FFT algorithm as the similarity measure, with an exhaustive full search in the region of interest. NCC is a more suitable similarity measure than the Sum of Absolute Differences (SAD) for reducing the temporal redundancy in video compression, since we can attain a flatter residual after motion compensation. The degrees of homogeneity and stationarity of regions are determined by selecting a suitable initial fixed threshold for block partitioning. Experimental results show that the actual number of motion vectors is significantly less than with existing methods, with marginal effect on the quality of the reconstructed frame. The method also gives a higher speed-up ratio for both fixed block and quad-tree based motion estimation.
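The NCC similarity measure at the heart of the method is standard; a direct spatial-domain sketch for two blocks follows (the paper evaluates it in the frequency domain via the FFT, which is not reproduced here):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equally sized blocks
    (flat lists); ranges from -1 to 1, with 1 for a perfect match."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0
```

Because NCC subtracts the block means and normalizes by the variances, it is insensitive to brightness and contrast changes between frames, which is what makes the motion-compensated residual flatter than with SAD.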
An efficient image compression algorithm using dct biorthogonal wavelet trans...eSAT Journals
Abstract
Recently, digital imaging applications have been increasing significantly, which leads to the requirement of effective image compression techniques. Image compression removes the redundant information from an image; by using it, we are able to store only the necessary information, which helps to reduce the transmission bandwidth, transmission time and storage size of the image. This paper proposes a new image compression technique using the DCT-Biorthogonal Wavelet Transform with arithmetic coding to improve the visual quality of an image. It is a simple technique for getting better compression results. In this new algorithm, the Biorthogonal wavelet transform is applied first, and then the 2D DCT is applied on each block of the low frequency sub-band. Finally, all values from each transformed block are split and arithmetic coding is applied for image compression.
Keywords: Arithmetic coding, Biorthogonal wavelet Transform, DCT, Image Compression.
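The 2D DCT applied per block is the standard DCT-II; a naive O(n^4) reference sketch follows (a real codec would use a fast factorization, and the wavelet and arithmetic-coding stages are independent of this step):

```python
import math

def dct2(block):
    """Naive orthonormal 2D DCT-II of a square block (list of lists)."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out
```

For a smooth (here, constant) block, all the energy concentrates in the DC coefficient, which is why entropy coding of the transformed blocks compresses well.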
Ant Colony Optimization (ACO) based Data Hiding in Image Complex Region IJECEIAES
This paper presents an Ant Colony Optimization (ACO) based data hiding technique. ACO is used to detect the complex regions of a cover image, and afterward least significant bit (LSB) substitution is used to hide secret information in the detected complex regions' pixels. ACO is an algorithm inspired by the innate behavior of ant species: ants deposit pheromone on the ground while searching for food. The proposed ACO-based data hiding in complex regions establishes an array of pheromone, also called a pheromone matrix, which represents the complex-region information at each pixel position of the cover image. The pheromone matrix is developed according to the movements of ants, determined by local differences of image intensity. The least significant bits of the complex-region pixels are substituted with message bits to hide the secret information. The experimental results provided show the significant performance of the proposed method.
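The embedding step itself is plain LSB substitution; a minimal sketch that omits the ACO pheromone-matrix region selection and simply writes message bits into the first pixels of a list:

```python
def lsb_embed(pixels, bits):
    """Replace the least significant bit of each selected pixel with a
    message bit (plain LSB substitution; region selection omitted)."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def lsb_extract(pixels, n_bits):
    """Read the hidden message back out of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]
```

Each embedded bit changes a pixel value by at most 1, which is why restricting the substitution to visually complex regions, as the ACO step does, keeps the distortion imperceptible.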
Image segmentation is useful in many applications: it can identify the regions of interest in a scene or annotate the data. Existing segmentation algorithms can be categorized into region-based segmentation, data clustering, and edge-based segmentation. Region-based segmentation includes the seeded and unseeded region growing algorithms, JSEG, and the fast scanning algorithm. Due to the presence of speckle noise, segmentation of Synthetic Aperture Radar (SAR) images is still a challenging problem. We propose a fast SAR image segmentation method based on the Particle Swarm Optimization-Gravitational Search Algorithm (PSO-GSA). In this method, threshold estimation is regarded as a search procedure that looks for an appropriate value in a continuous grayscale interval; hence, the PSO-GSA algorithm is employed to search for the optimal threshold. Experimental results indicate that our method is superior to GA based, AFS based and ABC based methods in terms of segmentation accuracy, segmentation time, and thresholding.
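The idea of treating thresholding as a continuous search can be illustrated with a plain PSO (not the authors' PSO-GSA hybrid) maximizing Otsu's between-class variance over [0, 255]; the inertia and acceleration constants below are common textbook choices, not values from the paper:

```python
import random

def between_class_variance(pixels, t):
    """Otsu's between-class variance for threshold t (higher is better)."""
    lo = [p for p in pixels if p <= t]
    hi = [p for p in pixels if p > t]
    if not lo or not hi:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
    return w0 * w1 * (m0 - m1) ** 2

def pso_threshold(pixels, n_particles=10, iters=40, seed=0):
    """Plain PSO search for a grayscale threshold in [0, 255]."""
    rng = random.Random(seed)
    pos = [rng.uniform(0, 255) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_f = [between_class_variance(pixels, p) for p in pos]
    g = pbest[max(range(n_particles), key=lambda i: pbest_f[i])]
    for _ in range(iters):
        for i in range(n_particles):
            # Standard velocity update: inertia + cognitive + social terms
            vel[i] = (0.7 * vel[i]
                      + 1.5 * rng.random() * (pbest[i] - pos[i])
                      + 1.5 * rng.random() * (g - pos[i]))
            pos[i] = min(255.0, max(0.0, pos[i] + vel[i]))
            f = between_class_variance(pixels, pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
        g = pbest[max(range(n_particles), key=lambda i: pbest_f[i])]
    return g
```

Because the particles move in a continuous interval, no exhaustive scan of all gray levels is needed, which is the source of the speed advantage such swarm-based thresholding methods claim.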
TERRAIN IDENTIFICATION USING CO-CLUSTERED MODEL OF THE SWARM INTELLIGENCE & S...cscpconf
A digital image is nothing more than data -- numbers indicating variations of red, green, and blue at a particular location on a grid of pixels. Clustering is the process of assigning data objects into a set of disjoint groups called clusters, so that objects in each cluster are more similar to each other than to objects from different clusters. Clustering techniques are applied in many application areas such as pattern recognition, data mining, and machine learning. Clustering algorithms can be broadly classified as hard, fuzzy, possibilistic, and probabilistic. K-means is one of the most popular hard clustering algorithms; it partitions data objects into k clusters, where the number of clusters, k, is decided in advance according to application purposes. This model is inappropriate for real data sets in which there are no definite boundaries between the clusters. After the fuzzy theory introduced by Lotfi Zadeh, researchers incorporated it into clustering. Fuzzy algorithms can assign data objects partially to multiple clusters, with the degree of membership in each fuzzy cluster depending on the closeness of the data object to the cluster centers. The most popular fuzzy clustering algorithm is fuzzy c-means (FCM), which was introduced by Bezdek in 1974 and is now widely used. Fuzzy c-means clustering is an effective algorithm, but the random selection of initial center points makes the iterative process prone to falling into local optima. To solve this problem, evolutionary algorithms such as the genetic algorithm (GA), simulated annealing (SA), ant colony optimization (ACO), and particle swarm optimization (PSO) have recently been applied successfully.
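For reference, the FCM updates mentioned above, the inverse-distance membership rule and the membership-weighted center update, can be sketched for 1-D data as follows; deterministic initialization at the data extremes is used here purely for illustration, since random initialization is exactly the weakness the cited evolutionary methods address:

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=50):
    """Fuzzy c-means on 1-D data with fuzzifier m; returns cluster centers."""
    centers = [min(data), max(data)] if c == 2 else data[:c]
    for _ in range(iters):
        # Membership of point x in cluster i: inverse-distance ratio rule
        u = []
        for x in data:
            d = [abs(x - ctr) or 1e-12 for ctr in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
                      for i in range(c)])
        # Centers: membership-weighted means of the data
        centers = [sum(u[k][i] ** m * data[k] for k in range(len(data)))
                   / sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return centers
```

On well-separated data the centers converge near the cluster means; with unlucky random starts, the same iteration can stall in a local optimum, which motivates wrapping it in GA, SA, ACO or PSO.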
Application of optimization algorithms for classification problemIJECEIAES
The work presented in this paper investigates the use of metaheuristic optimization algorithms for the face recognition problem. In the first setup, a face recognition system is implemented using particle swarm optimization (PSO) and firefly optimization algorithms, separately. PSO and firefly are used for forming the feature vectors in the feature selection stage. These feature vectors serve as the new representation for the face images that will be fed to the classifier. In the second setup, selected features from both PSO and firefly algorithms are fused to form one single feature vector for each face image before the classification stage. Extensive simulations are conducted using Poznan University of Technology (PUT) and face recognition technology (FERET) face databases. Optimal values for population size and maximum iterations number were selected before conducting the experiments. The effect of using different numbers of selected features on the performance is investigated for feature selection using PSO, firefly, and feature fusion of both.
BEHAVIOR STUDY OF ENTROPY IN A DIGITAL IMAGE THROUGH AN ITERATIVE ALGORITHM O...ijscmcj
Image segmentation is a critical step in computer vision tasks, constituting an essential issue for pattern recognition and visual interpretation. In this paper, we study the behavior of entropy in digital images through an iterative algorithm of mean shift filtering. The order of a digital image in gray levels is defined. The behavior of the Shannon entropy is analyzed and then compared, taking into account the number of iterations of our algorithm, with the maximum entropy that could be achieved under the same order. The induced equivalence classes allow us to interpret entropy as a hyper-surface in real m-dimensional space. The difference between the maximum entropy of order n and the entropy of the image is used to group the iterations, in order to characterize the performance of the algorithm.
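The Shannon entropy of a gray-level image, the quantity tracked across the filtering iterations, is computed from the normalized histogram; a minimal sketch:

```python
import math

def shannon_entropy(pixels, levels=256):
    """Shannon entropy (bits) of a grayscale image given as a flat list
    of integer intensities in [0, levels)."""
    n = len(pixels)
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    return -sum((c / n) * math.log2(c / n) for c in counts if c)
```

The entropy is 0 for a constant image and reaches its maximum of log2(levels) bits for a uniform histogram, which is the "maximum entropy under the same order" that the iterations are compared against.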
CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ...ijitcs
The aim of this paper is to introduce a comparison between cell tracking using the active shape model (ASM) and active appearance model (AAM) algorithms, comparing the cell tracking quality of the two methods in tracking the mobility of living cells, where a sensitive and accurate cell tracking system is essential to cell motility studies. The active shape model (ASM) and active appearance model (AAM) algorithms have proved to be successful methods for matching statistical models. The experimental results indicate the ability of the (AAM) meth
Usage of Shape From Focus Method For 3D Shape Recovery And Identification of ...CSCJournals
Shape from focus is a method of 3D shape and depth estimation of an object from a sequence of pictures with changing focus settings. In this paper we propose a novel method of shape recovery, which was originally created for shape and position identification of a glass pipette in a medical hybrid robot. In the proposed algorithm, the Sum of Modified Laplacian is used as the focus operator. Each step of the algorithm is tested in order to pick the operators with the best results. Reconstruction allows not only determining the shape but also precisely defining the position of the object. The results of the proposed method, obtained on real objects, show the efficiency of this scheme.
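The Sum of Modified Laplacian focus operator used above is easy to state; a sketch of its global form over a small grayscale image (the method applies it within local windows per pixel to build a depth map):

```python
def sum_modified_laplacian(img):
    """Sum of Modified Laplacian focus measure over the interior pixels
    of a 2-D grayscale image (list of lists); higher means sharper."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total += (abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1])
                      + abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x]))
    return total
```

Taking absolute values of the two second derivatives separately, rather than summing them as in the plain Laplacian, prevents opposite-signed x and y components from cancelling, which is what makes the measure reliable for ranking focus settings.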
This article aims at a new algorithm for tracking moving objects in the long term. We have tried to overcome some potential difficulties, first by a comparative study of the measuring methods of the difference and the similarity between the template and the source image. In the second part, an improvement of the best method allows us to follow the target in a robust way. This method also allows us to effectively overcome the problems of geometric deformation, partial occlusion and recovery after the target leaves the field of vision. The originality of our algorithm is based on a new model, which does not depend on a probabilistic process and does not require a data based detection in advance. Experimental results on several difficult video sequences have proven performance advantages over many recent trackers. The developed algorithm can be employed in several applications such as video surveillance, active vision or industrial visual servoing.
GRAY SCALE IMAGE SEGMENTATION USING OTSU THRESHOLDING OPTIMAL APPROACHJournal For Research
Image segmentation is often used to distinguish the foreground from the background, and it is one of the difficult research problems in machine vision and pattern recognition. Thresholding is a simple but effective method to separate objects from the background. A commonly used method, the Otsu method, noticeably improves the image segmentation effect. It can be implemented by two different approaches: an iteration approach and a custom approach. In this paper both approaches have been implemented in MATLAB and compared; both give almost the same threshold value for segmenting an image, but the custom approach requires fewer computations. So if this method is to be implemented on hardware in an optimized way, the custom approach is the best option.
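The iteration approach to Otsu's method amounts to an exhaustive scan over the 256 candidate thresholds using running histogram sums; a minimal sketch (the MATLAB "custom approach" mentioned above reorganizes these computations and is not reproduced here):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method by exhaustive search: pick the threshold that
    maximizes the between-class variance of foreground/background."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total_mean = sum(i * h for i, h in enumerate(hist)) / n
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(levels - 1):
        w0 += hist[t] / n          # background weight up to t
        cum += t * hist[t] / n     # background intensity mass up to t
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = cum / w0
        m1 = (total_mean - cum) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Keeping running sums of the histogram makes each candidate threshold O(1) to evaluate, so the whole scan costs O(levels) after the histogram pass.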
Multiple Ant Colony Optimizations for Stereo MatchingCSCJournals
The stereo matching problem, which obtains the correspondence between right and left images, can be cast as a search problem. The matching of all candidates in the same line forms a 2D optimization task, and two-dimensional (2D) optimization is an NP-hard problem. There are two characteristics of stereo matching: first, the local optimization process along each scan-line can be done concurrently; second, there are relationships among adjacent scan-lines that can be exploited to promote matching correctness. Although many methods, such as GCPs and GGCPs, have been proposed, these so-called ground control points may not actually be reliable. The relationship among adjacent scan-lines is a posteriori, that is to say, it can only be discovered after each optimization is finished. Multiple Ant Colony Optimization (MACO) is efficient for solving large scale problems. It is a proper way to tackle the stereo matching task with the constructed MACO, in which the master layer evaluates the sub-solutions and propagates the reliability after every local optimization finishes. Besides, whether the ordering and uniqueness constraints should be considered during the optimization is discussed, and the proposed algorithm is proved to converge to the optimal matched pairs.
Comparison of Neural Network Training Functions for Hematoma Classification i...IOSR Journals
Classification is one of the most important tasks in application areas of artificial neural networks (ANN). Training neural networks is a complex task in the supervised learning field of research. The main difficulty in adopting ANNs is finding the most appropriate combination of learning, transfer and training functions for the classification task. We compared the performance of three types of training algorithms in feed forward neural networks for brain hematoma classification. In this work we selected Gradient Descent based backpropagation, Gradient Descent with momentum, and Resilient backpropagation algorithms; under conjugate based algorithms, Scaled Conjugate backpropagation, Conjugate Gradient backpropagation with Polak-Ribiere updates (CGP) and Conjugate Gradient backpropagation with Fletcher-Reeves updates (CGF); the last category is Quasi-Newton based algorithms, under which the BFGS and Levenberg-Marquardt algorithms are selected. The proposed work compares the training algorithms on the basis of mean square error, accuracy, rate of convergence and correctness of the classification. Our conclusions about the training functions are based on the simulation results.
REMOVING OCCLUSION IN IMAGES USING SPARSE PROCESSING AND TEXTURE SYNTHESISIJCSEA Journal
We provide a solution to the problem of occlusion in images by removing the occluding region and filling in the gap left behind. Inpainting algorithms fail to fill occlusions when the occluding region is large, since there is loss of both structure and texture. We decompose the image into structure and texture images using a decomposition method based on the sparseness of the image. The sparse reconstruction of the decomposed images results in an inpainted image with all the structures left intact. Texture synthesis is performed on the texture-only image. Finally, the structure and texture images are combined to get an image in which the occlusion is filled. The performance of our algorithm in terms of visual effectiveness is compared with other algorithms used for inpainting.
Super-resolution (SR) is the process of obtaining a high resolution (HR) image or a sequence of HR images from a set of low resolution (LR) observations. Block matching algorithms are used for motion estimation to obtain motion vectors between the frames in super-resolution. The implementation and comparison of two different types of block matching algorithms, Exhaustive Search (ES) and Spiral Search (SS), are discussed. The advantages of each algorithm are given in terms of motion estimation computational complexity and Peak Signal to Noise Ratio (PSNR). The Spiral Search algorithm achieves PSNR close to that of Exhaustive Search at less computation time. The algorithms evaluated in this paper are widely used in video super-resolution and have also been used in implementing various video standards like H.263, MPEG4 and H.264.
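Exhaustive Search can be sketched as follows, using a SAD matching cost as is typical in block based codecs (an assumption here, since the abstract does not name the cost); Spiral Search visits the same candidates in a center-outward order and can stop early, which is where its speed advantage comes from:

```python
def sad(frame, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences between a bs x bs block of `frame`
    at (bx, by) and the block of `ref` displaced by (dx, dy)."""
    return sum(abs(frame[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(bs) for i in range(bs))

def exhaustive_search(frame, ref, bx, by, bs, r):
    """Full search over a +-r window; returns the motion vector (dx, dy)
    with minimum SAD."""
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if not (0 <= by + dy and by + dy + bs <= len(ref)
                    and 0 <= bx + dx and bx + dx + bs <= len(ref[0])):
                continue  # skip candidates that fall outside the reference
            cost = sad(frame, ref, bx, by, dx, dy, bs)
            if cost < best:
                best, best_mv = cost, (dx, dy)
    return best_mv
```

Full search guarantees the global minimum inside the window at (2r+1)^2 cost evaluations per block, which is the baseline that fast methods like Spiral Search trade against.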
Image Steganography Using Wavelet Transform And Genetic AlgorithmAM Publications
This paper presents the application of the Wavelet Transform and a Genetic Algorithm in a novel steganography scheme. We employ a genetic algorithm based mapping function to embed data in Discrete Wavelet Transform coefficients in 4x4 blocks of the cover image. The optimal pixel adjustment process is applied after embedding the message. We utilize the frequency domain to improve the robustness of the steganography, and we implement the Genetic Algorithm and Optimal Pixel Adjustment Process to obtain an optimal mapping function that reduces the difference error between the cover and the stego-image, thereby improving the hiding capacity with low distortion. Our simulation results reveal that the novel scheme outperforms adaptive steganography techniques based on the wavelet transform in terms of peak signal to noise ratio and capacity, 39.94 dB and 50% respectively.
An enhanced fireworks algorithm to generate prime key for multiple users in f...journalBEEI
This work presents a new method to enhance the performance of the fireworks algorithm to generate a prime key for multiple users. A thresholding technique from image segmentation is used as one of the major steps in processing the digital image; algorithms and methods for dividing and sharing an image, including measuring and recognizing, are common. In this research, we propose a hybrid technique of fireworks and camel herd algorithms (HFCA), where the fireworks are based on 3-dimensional (3D) logistic chaotic maps. Both the Otsu method and the convolution technique are used in pre-processing the image for further analysis: the Otsu method is employed to segment the image and find the threshold for each image, and convolution is used to extract the features of the used images. The sample of images consists of two fingerprint images taken from the Biometric System Lab (University of Bologna). The performance of the proposed method is evaluated using the FVC2004 dataset. In the enhanced algorithm, a quick response code (QR code) is used to generate a stream key from random text or numbers, a class of symmetric-key algorithm that operates on individual bits or bytes.
A CONCERT EVALUATION OF EXEMPLAR BASED IMAGE INPAINTING ALGORITHMS FOR NATURA...cscpconf
Image inpainting derives from restoration of art works, and has been applied to repair ancient
art works. Inpainting is a technique of restoring a partially damaged or occluded image in an
undetectable way. It fills the damaged part of an image by employing information of the
undamaged part according to some rules to make it look “reasonable” to human eyes. Digital
image inpainting is relatively new area of research, but numerous and different approaches to
tackle the inpainting problem have been proposed since the concept was first introduced. This
paper analyzes and compares the recent exemplar based inpainting algorithms by Minqin Wang
and Hao Guo et al. A number of examples on real images are demonstrated to evaluate the
results of algorithms using Peak Signal to Noise Ratio (PSNR)
OPTIMAL GLOBAL THRESHOLD ESTIMATION USING STATISTICAL CHANGE-POINT DETECTIONsipij
Aim of this paper is reformulation of global image thresholding problem as a well-founded statistical
method known as change-point detection (CPD) problem. Our proposed CPD thresholding algorithm does
not assume any prior statistical distribution of background and object grey levels. Further, this method is
less influenced by an outlier due to our judicious derivation of a robust criterion function depending on
Kullback-Leibler (KL) divergence measure. Experimental result shows efficacy of proposed method
compared to other popular methods available for global image thresholding. In this paper we also propose
a performance criterion for comparison of thresholding algorithms. This performance criteria does not
depend on any ground truth image. We have used this performance criterion to compare the results of
proposed thresholding algorithm with most cited global thresholding algorithms in the literature.
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...Levi Shapiro
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic
harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or
unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied
students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
Honest Reviews of Tim Han LMA Course Program.pptxtimhan337
Personal development courses are widely available today, with each one promising life-changing outcomes. Tim Han’s Life Mastery Achievers (LMA) Course has drawn a lot of interest. In addition to offering my frank assessment of Success Insider’s LMA Course, this piece examines the course’s effects via a variety of Tim Han LMA course reviews and Success Insider comments.
Francesca Gottschalk - How can education support child empowerment.pptxEduSkills OECD
Francesca Gottschalk from the OECD’s Centre for Educational Research and Innovation presents at the Ask an Expert Webinar: How can education support child empowerment?
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
Instructions for Submissions thorugh G- Classroom.pptx
Particle Swarm Optimization for Nano-Particles Extraction from Supporting Materials
1. M.A.Abdou
International Journal of Image Processing (IJIP), Volume (5) : Issue (3) : 2011 361
Particle Swarm Optimization for Nano-Particles Extraction from
Supporting Materials
M.A.Abdou m.abdou@pua.edu.eg
Assistant Professor, Informatics Research Institute
City for Scientific Research and Technology Applications
P.O.Box 21934, Alexandria, Egypt
Abstract
Evolutionary computation for image processing is an encouraging research area. Transmission electron
microscopy (TEM) images, when used to characterize metallic and non-metallic nano-particles (size,
morphology, structure, or composition), need such advanced image processing algorithms. This paper
presents an efficient evolutionary computational method, particle swarm optimization (PSO), for automatic
segmentation of nano-particles. A threshold-based segmentation technique is applied, where image
entropy is treated as a minimization problem to specify local and global thresholds. We are concerned
with reducing the wrong characterization of nano-particles caused by concentrations of liquid solutions or
supporting material within the acquired image. The obtained results are compared with manual
techniques and with previous research in this area.
Keywords: Particle Swarm Optimization, TEM Image Scanning, Threshold Segmentation, Nano-Particles.
1. INTRODUCTION
Evolutionary computation algorithms (genetic algorithms, genetic programming, and particle swarm
optimization) and data mining tools (neural networks and decision trees) have led to significant results
since their introduction into image processing [1-2-3]. In medical data classification, neural networks [4-5]
and linear programming models [6] obtained high classification accuracy rates; however, their decision
processes were poor. Better results were achieved using hybrid computation techniques, i.e. fuzzy-
genetic or neuro-fuzzy. In [7], a hybrid model was developed by introducing PSO for medical data
classification.
Particle Swarm Optimization (PSO) is a relatively recent optimization method based on the idea of birds
swarming. PSO is similar to the Genetic Algorithm (GA) in the sense that both are population-based
search approaches, and both depend on information sharing among their population members to
enhance their search processes. GA converges towards high-quality solutions, but only after many
iterations. PSO is easy to implement, has few parameters to adjust, and is more computationally
efficient. This superior computational efficiency makes PSO well suited to future image processing
optimization problems [8].
From an image processing point of view, TEM image characterization can be either manual or automatic.
Existing techniques face different problems: illumination changes across the image, intensity variation of
similar nano-particles due to diffraction contrast and/or liquids, and a weak signal-to-noise ratio.
Analysis of TEM image contents usually involves binary or multi-class classification [9]. Thresholding is an
efficient image classification tool, especially when real-time processing is needed. Threshold selection can
be either global or local. A global thresholding technique is one that segments the entire image with a
single threshold value, whereas a local thresholding technique partitions a given image into subimages
and determines a threshold for each subimage [10]. Global techniques are further classified into
point-dependent and region-dependent methods. When the threshold value is determined from a pixel's
gray tone independently of the gray tones of its neighboring pixels, the thresholding method is
point-dependent. On the other hand, a method is called region-dependent if the threshold value is
determined from the local properties within a neighborhood of a pixel.
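The global/local distinction can be sketched as follows. This is an illustrative example, not an implementation from the paper: the function names, the toy 4x4 image, and the per-block mean rule for the local variant are all assumptions made for demonstration.

```python
import numpy as np

def global_threshold(img, t):
    """Segment the entire image with a single threshold value."""
    return (img > t).astype(np.uint8)

def local_threshold(img, block=2):
    """Partition the image into sub-images and determine a threshold for
    each sub-image (here simply the sub-image mean, for illustration)."""
    out = np.zeros(img.shape, dtype=np.uint8)
    h, w = img.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            sub = img[r:r + block, c:c + block]
            out[r:r + block, c:c + block] = (sub > sub.mean()).astype(np.uint8)
    return out

# Toy 4x4 "image": a bright patch top-right, mid-gray bottom half
img = np.array([[10, 12, 200, 210],
                [11, 13, 205, 220],
                [90, 95, 100, 105],
                [92, 94, 101, 106]], dtype=float)
g = global_threshold(img, 128)   # one threshold for the whole image
l = local_threshold(img)         # one threshold per 2x2 sub-image
```

Note how the global variant only separates the bright patch, while the local variant responds to contrast within each sub-image.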
1.1 Particle Swarm Optimization
PSO was originally introduced by Eberhart and Kennedy in 1995 [11]. In the PSO algorithm, the birds in a
flock are our particles, and these particles are treated as simple agents moving through a problem space
(food). PSO can be considered an optimization technique. The algorithm starts by initializing a group of
randomly distributed particles, which fly freely across the space of concern. During its flight, each particle
updates its own velocity and position based on its own experience and that of the entire population. A
particle's instantaneous location in the multi-dimensional problem space represents one of the candidate
solutions to the problem; when the particles move to new locations, new solutions are generated. The
velocity and direction of each particle are recorded and then altered in each generation. In any iteration,
the particle's new location is computed by adding the particle's current velocity to its location. A fitness
function is calculated for each solution and is minimized via further populations of the swarm [8]. The PSO
algorithm is summarized by means of a descriptive block diagram in the next section.
1.2 Nano-Particle Characterization Techniques in TEM Images
In [12], we introduced a new fast Transmission Electron Microscopy (TEM) image clustering technique, in
which the analysis of particle sizes and shapes from two-dimensional TEM images was addressed. The
hybrid method consisted of automatic segmentation and nano-particle counting. The automatic
segmentation relied on what we called an "Automatic Threshold Generator" (ATG) towards a highly
efficient multiple-region segmentation technique. The ATG generated a vector of bi-threshold values used
for segmenting the electron microscopic input image, and it gave good results when compared with
existing algorithms [13]. However, TEM images in which concentrations of liquid solutions or supporting
material affect image intensities failed to be counted correctly via the ATG, and the results were not
comparable to manual results for these images. This can be attributed to the wrong classification of
supporting materials as nano-particles.
2. The Proposed Nano-particle Segmentation Algorithm
In this paper we treat the threshold-segmentation problem as a minimization problem, where both
local and global minima are to be detected via an evolutionary computational method (PSO). The
objective is to search for a better classifier that differentiates between nano-particles and their supporting
materials.
2.1 PSO Algorithm
Figure (1) presents a summary of the PSO steps. Equation (1) refers to a number 'rand', a generated
positive random number less than unity. As we are dealing with a computer-based simulation, the variable
∆t generally refers to the iteration span, which can be taken as unity; thus X_i^0 is limited between a
minimum and a maximum value. The second step is the particle velocity update through the iterative
procedure. In equation (2), the particle velocities are updated (V_{k+1}^i) based on the initial velocity
(V_k^i) and on influences emerging from the particle itself (p_i) and the swarm (p_k^g), where p_i denotes
the best position obtained from the fitness function for the particle itself and p_k^g is the best position
obtained globally by any particle within the whole swarm towards the desired minimum. The values of w,
c1, and c2 are all knowledge-based constants whose values are set from experience. Equation (3) gives
the position update. Equation (4) deals with the memory update of the fittest position and velocity. Finally,
equation (5) concerns the stopping criteria. The algorithm stops under one of two conditions: either the
error obtained is less than the required error, or the algorithm undergoes successive iterations without
any improvement in the error.
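The steps just described can be sketched in Python as follows. This is a minimal one-dimensional illustration, not the paper's implementation: the values of w, c1, and c2, the stall-based stopping rule, and the stand-in quadratic goal function (the paper minimizes image entropy instead) are all assumptions, and ∆t is taken as unity.

```python
import random

def pso_minimize(f, x_min, x_max, n_particles=20, w=0.7, c1=1.5, c2=1.5,
                 max_iter=200, eps=1e-8, stall=10):
    """Minimize f over [x_min, x_max] following the steps of figure (1),
    with the iteration span (delta t) taken as unity."""
    # Eq. (1): randomly distributed initial particles
    X = [x_min + random.random() * (x_max - x_min) for _ in range(n_particles)]
    V = list(X)
    p_best = list(X)                 # fittest position seen by each particle
    p_val = [f(x) for x in X]
    g_best = min(p_best, key=f)      # fittest position seen by the whole swarm
    history, stalled = f(g_best), 0
    for _ in range(max_iter):
        for i in range(n_particles):
            # Eq. (2): pull towards the particle's own best and the swarm's best
            V[i] = (w * V[i]
                    + c1 * random.random() * (p_best[i] - X[i])
                    + c2 * random.random() * (g_best - X[i]))
            # Eq. (3): new location = current location + velocity
            X[i] = min(max(X[i] + V[i], x_min), x_max)
            # Eq. (4): memory update of the fittest position
            if f(X[i]) < p_val[i]:
                p_best[i], p_val[i] = X[i], f(X[i])
        g_best = p_best[p_val.index(min(p_val))]
        # Eq. (5): stop after `stall` iterations without improvement beyond eps
        if abs(history - f(g_best)) <= eps:
            stalled += 1
            if stalled >= stall:
                break
        else:
            history, stalled = f(g_best), 0
    return g_best

# Stand-in goal function with a known minimum at x = 3
# (in the paper the goal function is the image entropy instead).
best = pso_minimize(lambda x: (x - 3.0) ** 2, 0.0, 10.0)
```

The returned position converges towards the minimizer of the goal function; in the segmentation setting the search space would be the space of threshold values.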
2.2 Entropy- Based Segmentation
The application of the maximum entropy method to threshold selection dates back to 1989 [14]. The
maximum entropy principle states that, for a given amount of information, the probability distribution which
best describes our knowledge is the one that maximizes the Shannon entropy subject to a set of
constraints [15]. The entropy mathematical model is summarized in figure (2). This model first assumes
the number of classes required for the nano-particle image. We have selected two thresholds in order to
compare the obtained results with the ATG previously presented in [12]. Selecting two thresholds means
that we have three classes: [0, t1], in which we define an entropy H0; [t1, t2], in which entropy H1 is
defined; and finally [t2, L], in which H2 is calculated. 'L' represents the maximum grayscale value obtained
during histogram analysis.
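The three class entropies H0, H1, and H2 can be computed from a normalized gray-level histogram as in the following sketch. This is illustrative only, not code from the paper; the toy histogram is an assumption, and the within-class form −Σ(P_i/w)·log(P_i/w) used below is algebraically equal to the form −(1/w)·Σ P_i·log(P_i) + log(w) of figure (2).

```python
import numpy as np

def class_entropy(hist, lo, hi):
    """Entropy of the gray-level class [lo, hi) of a normalized histogram."""
    p = hist[lo:hi]
    w = p.sum()                    # probability mass of the class
    if w == 0:
        return 0.0
    p = p[p > 0]                   # skip empty bins (0 * log 0 -> 0)
    return -np.sum((p / w) * np.log(p / w))

def goal_function(hist, t1, t2):
    """f(t1, t2) = H0 + H1 + H2 for classes [0, t1), [t1, t2), [t2, L]."""
    L = len(hist)
    return (class_entropy(hist, 0, t1)
            + class_entropy(hist, t1, t2)
            + class_entropy(hist, t2, L))

# Toy 8-level histogram (pixel counts), normalized to probabilities P_i = h(i)/N
counts = np.array([10, 30, 5, 0, 0, 8, 25, 22], dtype=float)
hist = counts / counts.sum()
f_val = goal_function(hist, 3, 5)
```

A threshold search (here via PSO) would evaluate this goal function over candidate (t1, t2) pairs.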
2.3 Thresholds Generation
The previously discussed PSO algorithm is embedded within a segmentation procedure, as shown in
figure (3). The procedure starts with a histogram generator, followed by a PSO-implemented algorithm in
which the goal function is the entropy and the output is the threshold vector. Images are then segmented
to separate the nano-particles from the TEM image. Through swarming, the PSO technique generates a
vector of bi-threshold values used for segmenting the electron microscopic input image.
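Applying a generated bi-threshold vector then amounts to a three-class labeling of the image, as sketched below (illustrative only; the toy image and the threshold values are assumptions, standing in for the PSO output):

```python
import numpy as np

def segment_with_thresholds(img, t1, t2):
    """Label each pixel by the class its gray level falls into:
    0 for [0, t1), 1 for [t1, t2), 2 for [t2, L]."""
    labels = np.zeros(img.shape, dtype=np.uint8)
    labels[(img >= t1) & (img < t2)] = 1
    labels[img >= t2] = 2
    return labels

# Toy gray-level image and a hypothetical bi-threshold vector (50, 100)
img = np.array([[5, 40, 120],
                [60, 200, 30],
                [150, 10, 90]], dtype=float)
labels = segment_with_thresholds(img, 50, 100)
```

One of the three label classes would then be retained as the nano-particle regions.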
FIGURE 1: PSO Segmentation Algorithm.

Generating Particles:
X_i^0 = X_min + rand · [X_max − X_min],  V_i^0 = X_i^0 / ∆t   (1)

Velocity Update:
V_{k+1}^i = w · V_k^i + c1 · rand · [p_i − X_k^i] / ∆t + c2 · rand · [p_k^g − X_k^i] / ∆t   (2)

Position Update:
X_{k+1}^i = X_k^i + V_{k+1}^i · ∆t   (3)

Memory Update:
p_{i,best} = p_i   if f(p_i) > f(p_{i,best})
g_best = g_i   if f(g_i) > f(g_best)   (4)

Stopping Criteria:
| f(p_g^k) − f(p_g^{k−q}) | ≤ ε,  q = 1, 2, …, S   (5)
The advantage of PSO over other existing techniques lies in its dimensional flexibility: PSO can generate
a multi-threshold vector of n dimensions according to the type of images to be segmented. The vector
length is related to the entropy equation applied, which in turn is related to the degree of the image
classifier.
FIGURE 2: Entropy Calculations of TEM Image.

f(t) = H0 + H1 + H2   (6)

P_i = h(i) / N
h(i): the number of pixels in gray level (i)
N: the total number of pixels in the TEM image

H0 = −(1/w0) Σ_{i=0}^{t1−1} P_i log(P_i) + log(w0)
H1 = −(1/w1) Σ_{i=t1}^{t2−1} P_i log(P_i) + log(w1)
H2 = −(1/w2) Σ_{i=t2}^{L} P_i log(P_i) + log(w2)   (7)

w0 = Σ_{i=0}^{t1−1} P_i,  w1 = Σ_{i=t1}^{t2−1} P_i,  w2 = Σ_{i=t2}^{L} P_i   (8)
FIGURE 3: The Segmentation Technique: The PSO uses entropy as a goal function.
(Block diagram: TEM image → Histogram Generator → PSO Threshold Generator, with entropy as the goal function → Threshold-Based Segmentation → Nano-particles.)
3. Results and Discussion
The introduced algorithm is applied to a wide TEM data set. The comparison starts with the cases
described in [12] to evaluate the improvement gained. Table (1) shows a comparison between this
PSO-based segmentation and the previously presented ATG method [12] for four different selected
cases: nano1, nano2, nano3, and nano4. Table (2) shows the ratio of nano-particles characterized by
means of the two different algorithms. The improvement achieved by the PSO method is remarkable in
cases nano3 and nano4. Figures (4 and 5) show a comparison for two easily segmented cases, in which
PSO gives results approximately equal to those obtained in [12].
Case    ATG [12]                Proposed PSO
Nano1   T1 = 0, T2 = 73.5       T1 = 0.0441, T2 = 70.1350
Nano2   T1 = 0, T2 = 83.5       T1 = 1.0643, T2 = 70.449
Nano3   T1 = 0, T2 = 127        T1 = 1.1497, T2 = 50.1398
Nano4   T1 = 0, T2 = 64         T1 = 0.1290, T2 = 30.1241
TABLE 1: Comparison between the two entropy-based segmentation techniques.
Case    Ratio extracted (ATG [12])    Ratio extracted (Proposed PSO)    Improvement
Nano1   0.0401                        0.0356                            0.4532%
Nano2   0.1015                        0.0771                            2.4475%
Nano3   0.2527                        0.0358                            21.6934%
Nano4   0.2843                        0.0133                            27.1027%
TABLE 2: Nano-particles extraction effectiveness of the proposed method.
FIGURE 4: Nano1 case: Original image - Histogram - ATG results - PSO results.
FIGURE 5: Nano2 case: Original image - Histogram - ATG results - PSO results.
Figures (6 and 7) show how the proposed PSO segmentation succeeds in segmenting the smaller regions
corresponding to nano-particle areas; with the ATG, false areas are characterized as nano-particles due
to fluids within the specimen sample. Figure (8) shows the false areas characterized as nano-particles
using the ATG [12]; none of these areas are the targeted particles. The histogram presented in figure (9)
compares one manual and two automatic counting methods. The PSO presented in this work is close to
the manual method, especially for images that are difficult to segment.
FIGURE 8: PSO improvement: false areas characterized as nano-particles using the ATG (nano4 and nano3 cases).
FIGURE 9: Nano-particles counting: Manual- ATG [12]- Proposed method.
CONCLUSION
The presented PSO automatic segmentation specifies a threshold vector that clusters the nano-particles
existing within the TEM image, with the image entropy measure as the goal function. The obtained
results show that the proposed segmentation method reduces the wrong characterization of nano-particles
in images where concentrations of liquid solutions or supporting material affect image intensities; the PSO
segmentation surpasses the ATG presented in [12] by up to about 27%. Furthermore, while the counted
particles are comparable to manual results, the presented PSO technique shows high computational
efficiency, which makes it suitable for real-time characterization.
REFERENCES
[1] C.A. Pena-Reyes, M. Sipper. “Evolving fuzzy rules for breast cancer diagnosis”. In Proceedings of
1998 International Symposium on Nonlinear Theory and Applications (NOLTA’98), 2: 369–372,
1998.
[2] C.A. Pena-Reyes, M. Sipper. “A fuzzy-genetic approach to breast cancer diagnosis”. Artificial
Intelligence in Medicine, 17: 131–155, 1999.
[3] X. Chang, J.H. Lilly. “Evolutionary design of a fuzzy classifier from data”. IEEE Transactions on
Systems, Man, and Cybernetics, 34: 1894–1906, 2004.
[4] R. Setiono. “Extracting rules from pruned neural networks for breast cancer diagnosis”. Artificial
Intelligence in Medicine, 8: 37–51, 1996.
[5] R. Setiono, H. Liu. “Symbolic representation of neural networks”. Computer, 29: 71–77, 1996.
[6] K.P. Bennett, O.L. Mangasarian. “Neural network training via linear programming”. Advances in
Optimization and Parallel Computing, 56–67, 1992.
[7] Pei-Chann Chang, Jyun-Jie Lin, Chen-Hao Liu. Computer Methods and Programs in Biomedicine, in
press, 2011.
[8] L. Zhang, T. Mei, Y. Liu, D. Tao, H. Zhou. "Visual search reranking via adaptive particle swarm
optimization". Pattern Recognition, 44: 1811–1820, 2011.
[9] C.C. Bojarczuk, H.S. Lopes, A.A. Freitas. “Genetic programming for knowledge discovery in chest-
pain diagnosis: exploring a promising data mining approach”. IEEE Engineering in Medicine and
Biology Magazine 19: 38–44, 2000.
[10] Du Feng, Shi Wenkang, Chen Liangzhou, Deng Yong, Zhu Zhenfu. “Infrared image segmentation
with 2-D maximum entropy method based on particle swarm optimization (PSO)”. Pattern
Recognition Letters, 26: 597–603, 2005.
[11] J. Kennedy, R. Eberhart. "Particle Swarm Optimization". In Proceedings of the IEEE International
Conference on Neural Networks, 1942–1948, 1995.
[12] M.A. Abdou, Bayumy B.A. Youssef, W.M. Sheta. “Nano-particle Characterization Using a Fast
Hybrid Clustering Technique for TEM Images”. (IJCSIS) International Journal of Computer Science
and Information Security, 8 (9): 101-110, 2010.
[13] H. Woehrle, E. Hutchison, S. Ozkar, G. Finke. "Analysis of Nanoparticle Transmission Electron
Microscopy Data Using a Public-Domain Image-Processing Program". Turkish Journal of Chemistry
(TUBITAK), 30: 1–13, 2006.
[14] N.R. Pal, S.K. Pal. "Object-background segmentation using new definitions of entropy". Proc. Inst.
Elec. Eng., 136: 284–295, 1989.
[15] G.J. Klir, T. Folger. "Fuzzy Sets, Uncertainty and Information". Prentice Hall, Englewood Cliffs, NJ,
1988.