In this paper a new approach to invariant recognition of broken rectangular biscuits is proposed using fuzzy membership-distance products, called fuzzy moment descriptors. Existing methods for recognizing flawed rectangular biscuits are mostly based on the Hough transform; however, these methods are prone to error due to noise and/or variation in illumination. Fuzzy moment descriptors are less sensitive to noise, making the approach invariant to such stray external disturbances. Further, normalization and sorting of the moment vectors make the recognition process size- and rotation-invariant. In earlier studies, fuzzy moment descriptors have been applied successfully to the image matching problem. In this paper the algorithm is applied to the recognition of flawed and non-flawed rectangular biscuits. In general, the proposed algorithm has potential applications in industrial quality control.
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposure
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of electronics and communication engineering and its applications. The journal welcomes high quality papers on theoretical developments and practical applications in electronics and communication engineering. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publication.
Frequency Domain Blockiness and Blurriness Meter for Image Quality Assessment
Image and video compression introduces distortions (artefacts) into the coded image. The most prominent artefacts are blockiness and blurriness. Many existing quality meters are distortion-specific. This paper proposes an objective quality meter that quantifies the combined blockiness and blurriness distortions in the frequency domain. The model first applies edge detection and cancellation, then spatial masking, to mimic the characteristics of the human visual system. Blockiness is then estimated by transforming the image into the frequency domain and finding the ratio of the harmonics to the other AC components. Blurriness is determined by comparing the high-frequency coefficients of the reference and coded images, since blurring reduces these coefficients. The two distortions are then combined into a single quality metric. The meter is tested on blocky and blurred images from the LIVE image database, achieving a correlation coefficient of 95-96%.
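The blockiness estimate described above (harmonic energy relative to the other AC components) can be sketched as follows. This is a simplified, hypothetical reading of the meter, not the paper's implementation: it works on a 1-D gradient profile rather than the full 2-D transform, and the edge-cancellation and masking stages are omitted.

```python
import numpy as np

def blockiness_ratio(img, block=8):
    """Blockiness as the ratio of spectral energy at the block-grid
    harmonics to the total AC energy (simplified sketch)."""
    # Row-averaged profile of horizontal gradients: blocking artefacts
    # put periodic spikes every `block` pixels into this profile.
    profile = np.mean(np.abs(np.diff(img.astype(float), axis=1)), axis=0)
    spec = np.abs(np.fft.rfft(profile))
    n = len(profile)
    # bins closest to the harmonics of the block frequency
    harmonics = [round(k * n / block) for k in range(1, block // 2)]
    harmonics = [h for h in harmonics if 0 < h < len(spec)]
    ac = spec[1:].sum()
    return float(spec[harmonics].sum() / ac) if ac > 0 else 0.0

# An image with sharp 8x8 block steps scores higher than a smooth one.
blocky = np.kron(np.arange(8.0).reshape(1, 8), np.ones((32, 8))) * 30
smooth = np.tile(100 + 50 * np.sin(np.linspace(0, np.pi, 64)), (32, 1))
assert blockiness_ratio(blocky) > blockiness_ratio(smooth)
```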
MAGNETIC RESONANCE BRAIN IMAGE SEGMENTATION
Segmentation of tissues and structures from medical images is the first step in many image analysis applications developed for medical diagnosis. With the growing research on medical image segmentation, it is essential to categorize the research outcomes and provide researchers with an overview of the existing segmentation techniques for medical images. In this paper, different image segmentation methods applied to magnetic resonance brain images are reviewed. The selected methods are drawn from image processing journals, conferences, books, dissertations, and theses. The conceptual details of the methods are explained, and mathematical details are avoided for simplicity. Both broad and detailed categorizations of the reviewed segmentation techniques are provided. The state-of-the-art research is presented with emphasis on the developed techniques and the image properties they use. The methods described are not always mutually independent, so their interrelationships are also stated. Finally, conclusions are drawn summarizing commonly used techniques and their complexities in application.
Comparative analysis and implementation of structured edge active contour
This paper proposes a modified Chan-Vese model for image segmentation. The approach uses the linear structure tensor (LST) as input to the variational model. The structure tensor is a matrix representation of partial-derivative information. In the proposed model, the original image is used as the information channel for computing the structure tensor. Difference of Gaussians (DoG) provides feature enhancement, yielding a less blurred image than the original. In this paper the LST is modified by adding intensity information to enhance the orientation information. Finally, an Active Contour Model (ACM) is used to segment the images. The proposed algorithm is tested on various images, including some with intensity inhomogeneity, and the results are shown. The results are also compared with other algorithms such as Chan-Vese, Bhattacharyya, Gabor-based Chan-Vese, and a novel structure tensor based model. The proposed model is verified to be the most accurate; its biggest advantage is clear edge enhancement.
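The linear structure tensor mentioned above (the smoothed outer product of the image gradient) can be sketched as follows. This is a minimal illustration, not the paper's model: the intensity and DoG channels the paper adds are omitted, and a box window stands in for the usual Gaussian smoothing.

```python
import numpy as np

def box_smooth(a, r=2):
    """Simple box smoothing, standing in for a Gaussian window."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(k) for j in range(k)) / k**2

def linear_structure_tensor(img, r=2):
    """LST components J_xx, J_xy, J_yy: smoothed products of the
    partial derivatives of the image."""
    iy, ix = np.gradient(img.astype(float))
    return box_smooth(ix * ix, r), box_smooth(ix * iy, r), box_smooth(iy * iy, r)

# Coherence from the tensor eigenvalues is near 1 for oriented texture.
img = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64)), (64, 1))  # stripes
jxx, jxy, jyy = linear_structure_tensor(img)
tr = jxx + jyy
disc = np.sqrt(np.maximum((jxx - jyy) ** 2 / 4 + jxy ** 2, 0))
coherence = 2 * disc / (tr + 1e-12)
assert coherence[32, 32] > 0.9
```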
International Journal of Engineering Research and Applications (IJERA) is an open access online peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES
Grey Level Co-occurrence Matrices (GLCM) are one of the earliest techniques used for image texture analysis. In this paper we define a new feature, called trace, extracted from the GLCM, and discuss its implications for texture analysis in the context of Content Based Image Retrieval (CBIR). The theoretical extension of the GLCM to n-dimensional grey scale images is also discussed. The results indicate that trace features outperform Haralick features when applied to CBIR.
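A GLCM and a diagonal "trace" feature of the kind described above can be sketched as follows. The reading of trace as the sum of the GLCM diagonal is our assumption; the paper's exact definition may differ.

```python
import numpy as np

def glcm(img, levels=8):
    """Normalised grey level co-occurrence matrix for the horizontal
    displacement (1, 0)."""
    q = (img.astype(int) * levels) // (int(img.max()) + 1)  # quantise
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def trace_feature(img, levels=8):
    """Sum of the GLCM diagonal: the probability that horizontally
    adjacent pixels fall in the same grey level bin."""
    return float(np.trace(glcm(img, levels)))

flat = np.full((8, 8), 5)              # perfectly homogeneous texture
assert trace_feature(flat) == 1.0
rng = np.random.default_rng(0)
assert trace_feature(rng.integers(0, 256, (8, 8))) < 1.0  # noisy texture
```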
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
It is a challenging task to build a CBIR system that works primarily on texture values, as their meaning and semantics need special care to be mapped to human language. We consider highly textured images with texture properties (entropy, homogeneity, contrast, cluster shade, autocorrelation) and map them using a fuzzy min-max scale with respect to their degree (high, medium, low) and technical interpretation. The developed system performs well in terms of precision and recall, showing that the semantic gap has been reduced for CBIR based on highly textured images.
Texture Unit based Approach to Discriminate Manmade Scenes from Natural Scenes
In this paper a method is proposed to discriminate natural and manmade scenes of similar depth. An increase in image depth leads to an increase in roughness in manmade scenes; on the contrary, natural scenes exhibit smooth behavior at higher image depth. This particular arrangement of pixels in the scene structure can be well explained by the local texture information at a pixel and its neighborhood. Our proposed method analyses the local texture information of a scene image using the texture unit matrix. For the final classification we use unsupervised learning with a Self Organizing Map (SOM). The technique's very low computational complexity makes it useful for online classification.
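The texture unit representation used above can be sketched as follows: each pixel's eight neighbours are coded 0/1/2 as below/equal/above the centre and read as a base-3 number (the classic He & Wang texture spectrum scheme, which we assume is what the texture unit matrix builds on).

```python
import numpy as np

def texture_unit_numbers(img, tol=0):
    """Texture unit number of each interior pixel: the eight neighbours
    are coded 0/1/2 relative to the centre, then combined as a base-3
    number in the range 0..6560."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]                       # interior (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    tu = np.zeros_like(c)
    for k, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code = np.where(nb > c + tol, 2, np.where(nb < c - tol, 0, 1))
        tu += code * 3 ** k
    return tu

# A flat patch: every neighbour equals the centre, so every code is 1.
flat = np.full((5, 5), 7)
assert np.all(texture_unit_numbers(flat) == sum(3 ** k for k in range(8)))
```

The histogram of these numbers over an image (the texture spectrum) is what a classifier such as a SOM can then consume.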
SYMMETRICAL WEIGHTED SUBSPACE HOLISTIC APPROACH FOR EXPRESSION RECOGNITION
Facial expression is a cognitive activity used to convey opinions to others. This paper mainly evaluates the performance of appearance-based holistic subspace methods built on Principal Component Analysis (PCA). In this work, texture features are extracted from face images using a Gabor filter. The extracted texture feature vector space is high-dimensional and contains much redundant content; hence training, testing, and classification times increase and the expression recognition accuracy is reduced. To overcome this problem, the Symmetrical Weighted 2DPCA (SW2DPCA) subspace method is introduced. The extracted feature vector space is projected into a subspace using SW2DPCA, which is formed by applying weighting principles to the odd and even symmetrical decomposition spaces of the training sample sets. Conventional PCA and 2DPCA yield lower recognition rates due to large variations in expression and lighting and the many redundant variants in the feature space. The proposed SW2DPCA method addresses this by reducing redundant content and discarding unequal variants. In this work the well-known JAFFE database is used for the experiments and tested with the proposed SW2DPCA algorithm. The experimental results show that the facial expression recognition accuracy of the GF+SW2DPCA feature-fusion subspace method increases to 95.24% compared with the 2DPCA method.
Colour-Texture Image Segmentation using Hypercomplex Gabor Analysis
Texture analysis, including segmentation and classification, plays a vital role in computer vision and pattern recognition and is widely applied in areas such as industrial automation, biomedical image processing, and remote sensing. In this paper, we first extend the well-known Gabor filters to color images using a specific form of hypercomplex numbers known as quaternions. These filters are constructed as windowed basis functions of the quaternion Fourier transform, also known as the hypercomplex Fourier transform. Based on this extension, the paper presents the use of these new quaternionic Gabor filters in colour texture image segmentation. Experimental results on two colour texture images are presented. We tested the robustness of the technique by adding Gaussian noise to the texture images; the results indicate that the proposed method gives better segmentation even in the presence of strong noise.
We develop an efficient MRI denoising algorithm based on sparse representation and the curvelet transform within a variance-stabilizing transformation framework. Using sparse representation, an MR image is decomposed into a sparse coefficient matrix with a large number of zeros. The curvelet transform is directional in nature and preserves the important edge and texture details of MR images. To obtain both sparsity and texture preservation, we post-process the denoising result of the sparse-based method through the curvelet transform. To apply the proposed sparse-based curvelet transform denoising method to Rician noise in MR images, we use forward and inverse variance-stabilizing transformations. Experimental results reveal the efficacy of our approach in removing Rician noise while preserving image details. The proposed method shows improved performance over existing denoising methods in terms of PSNR and SSIM for T1- and T2-weighted MR images.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...
In this paper a method is proposed to discriminate real-world scenes into natural and manmade scenes of similar depth. The global roughness of a scene image varies as a function of image depth: an increase in image depth leads to an increase in roughness in manmade scenes, while on the contrary natural scenes exhibit smooth behavior at higher image depth. This particular arrangement of pixels in the scene structure can be well explained by the local texture information at a pixel and its neighborhood. Our proposed method analyses the local texture information of a scene image using the texture unit matrix. For the final classification we use both supervised and unsupervised learning, with a K-Nearest Neighbor classifier (KNN) and a Self Organizing Map (SOM) respectively. The technique's very low computational complexity makes it useful for online classification.
FUZZY SET THEORETIC APPROACH TO IMAGE THRESHOLDING
Thresholding is a fast, popular, and computationally inexpensive segmentation technique that is critical and decisive in some image processing applications. The result of image thresholding is not always satisfactory because of noise, vagueness, and ambiguity among the classes. Since the theory of fuzzy sets is a generalization of classical set theory, it has greater flexibility to capture faithfully the various aspects of incomplete or imperfect information about a situation. To overcome this problem, we propose a two-stage fuzzy set theoretic approach to image thresholding that uses a measure of fuzziness to evaluate the fuzziness of an image and determine an adequate threshold value. First, images are preprocessed by fuzzy rule-based filtering to reduce noise without loss of image detail; in the final stage a suitable threshold is determined using a fuzziness measure as the criterion function. Experimental results on test images demonstrate the effectiveness of the method.
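The second stage described above (choosing the threshold that minimises a fuzziness measure) can be sketched as follows. The Huang-and-Wang-style membership function and the linear index of fuzziness used here are our assumptions; the paper's fuzziness measure may be defined differently.

```python
import numpy as np

def fuzzy_threshold(img):
    """Pick the threshold minimising the linear index of fuzziness.
    Memberships are computed from each pixel's distance to its class
    mean (Huang & Wang style)."""
    x = img.astype(float).ravel()
    c = x.max() - x.min() or 1.0          # normalising constant
    best_t, best_f = None, np.inf
    for t in range(int(x.min()) + 1, int(x.max())):
        m0, m1 = x[x <= t].mean(), x[x > t].mean()
        mu = np.where(x <= t, 1 / (1 + np.abs(x - m0) / c),
                              1 / (1 + np.abs(x - m1) / c))
        f = 2 * np.minimum(mu, 1 - mu).mean()   # linear index of fuzziness
        if f < best_f:
            best_t, best_f = t, f
    return best_t

# Two well-separated intensity clusters: the threshold falls in between.
img = np.array([[10, 12, 11, 200], [198, 11, 201, 199]])
assert 12 <= fuzzy_threshold(img) < 198
```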
COMPOSITE TEXTURE SHAPE CLASSIFICATION BASED ON MORPHOLOGICAL SKELETON AND RE...
After several decades of research, the development of an effective feature extraction method for texture classification is still an ongoing effort, and several techniques have been proposed to resolve such problems. In this paper a novel composite texture classification method based on innovative pre-processing techniques, skeletonization, and regional moments (RM) is proposed. The proposed texture classification approach takes into account the ambiguity introduced by noise and by different capture and digitization processes. To achieve a better classification rate, innovative pre-processing methods are first applied to the various texture images. The pre-processing stage covers various methods of converting a grey level image into a binary image with minimal consideration of the noise model. Shape features are then evaluated using RM on the proposed Morphological Skeleton (MS) method, with suitable numerical characterization measures for precise classification. This texture classification study using MS and RM gave good performance; a good classification result is achieved from a single regional moment, RM10, where others failed.
Fast Complex Gabor Wavelet Based Palmprint Authentication
A biometric system is a pattern recognition system that recognizes a person on the basis of physiological or behavioural characteristics. There is increasing interest among researchers in developing fast and accurate personal recognition systems. In this paper, a sliding window method is used to speed up the system by reducing the matching time; the reduction in computation time in turn reduces the overall comparison time. A 2-D complex Gabor wavelet is used to extract features from the palmprint. The extracted features are stored in a feature vector and matched by Hamming distance similarity measurement using the sliding window approach. Reductions of 74.12% and 90.32% in comparison time are achieved using the sliding window methods. The improvement in time and accuracy indicated by the experimental results makes the system rapid and accurate.
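Hamming distance matching over small shifts, as used above, can be sketched as follows. The window size and the binary feature maps are illustrative assumptions; the paper's Gabor feature extraction is not reproduced here.

```python
import numpy as np

def hamming_score(a, b, win=2):
    """Minimum normalised Hamming distance between two binary feature
    maps over small vertical/horizontal shifts, so slight translations
    of the palm do not inflate the distance."""
    best = 1.0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            s = np.roll(np.roll(a, dy, axis=0), dx, axis=1)
            best = min(best, float(np.mean(s != b)))
    return best

rng = np.random.default_rng(0)
code = rng.integers(0, 2, (32, 32))
shifted = np.roll(code, 1, axis=1)        # same palm, slightly translated
assert hamming_score(code, shifted) == 0.0
assert hamming_score(code, rng.integers(0, 2, (32, 32))) > 0.3  # impostor
```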
A Fuzzy Set Approach for Edge Detection
Image segmentation is one of the most studied problems in image analysis, computer vision, and pattern recognition. Edge detection is a discontinuity-based approach used for image segmentation. In this paper, an edge detection method using fuzzy sets is proposed, in which an image is considered a fuzzy set and pixels are taken as its elements. The fuzzy approach converts the color image into a partially segmented image; finally, an edge detector is convolved over the partially segmented image to obtain an edge image. The approach is implemented using MATLAB 7.11 (R2010b). For qualitative and quantitative comparison, BSD (Berkeley Segmentation Database) images are used for experimentation. The performance parameters are PSNR (dB) and the performance ratio (PR) of true to false edges. The proposed approach is shown to perform better than Canny's edge detection algorithm in almost all scenarios, reducing false edge detection and double edges.
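The pipeline above (image as fuzzy set, then an edge detector convolved over the membership image) can be sketched as follows. The min-max membership function, Sobel detector, and threshold are our own parameter choices, not the paper's.

```python
import numpy as np

def fuzzy_edges(img, thresh=0.15):
    """Map intensities to memberships in [0, 1], then convolve a Sobel
    edge detector over the membership image."""
    mu = (img - img.min()) / (img.max() - img.min() + 1e-12)  # membership
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    ky = kx.T
    gx, gy = np.zeros_like(mu), np.zeros_like(mu)
    for i in range(1, mu.shape[0] - 1):
        for j in range(1, mu.shape[1] - 1):
            patch = mu[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy) > thresh

img = np.zeros((8, 8)); img[:, 4:] = 255.0   # vertical step edge
edges = fuzzy_edges(img)
assert edges[3, 3] and edges[3, 4] and not edges[3, 1]
```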
3D Face Recognition Method Using 2DPCA and Euclidean Distance Classification
In this paper, we present a 3D face recognition method that is robust to changes in facial expression. Instead of locating many feature points, we only need to locate the nose tip as a reference point. After finding this reference point, images are converted to a standard size. Two-dimensional principal component analysis (2DPCA) is employed to obtain feature matrix vectors. Finally, the Euclidean distance method is employed for classification and comparison of the features. Experimental results on the CASIA 3D face database, which includes 123 individuals in total, demonstrate that our proposed method achieves up to 98% recognition accuracy with respect to pose variation.
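The 2DPCA projection and Euclidean distance classification described above can be sketched as follows. This is a minimal illustration on random matrices; the nose-tip normalisation and depth-image preprocessing are omitted.

```python
import numpy as np

def twodpca_projection(images, d=2):
    """2DPCA: eigenvectors of the image covariance matrix
    G = mean((A - mean)^T (A - mean)); each image projects as A @ X."""
    mean = images.mean(axis=0)
    G = np.mean([(a - mean).T @ (a - mean) for a in images], axis=0)
    _, vecs = np.linalg.eigh(G)          # eigenvalues ascending
    X = vecs[:, ::-1][:, :d]             # top-d eigenvectors
    return [a @ X for a in images]

def nearest(features, probe):
    """Euclidean (Frobenius) distance classifier over feature matrices."""
    return int(np.argmin([np.linalg.norm(f - probe) for f in features]))

rng = np.random.default_rng(1)
gallery = rng.normal(size=(5, 16, 16))
feats = twodpca_projection(gallery)
probe = twodpca_projection(gallery)[3]   # re-projected gallery image 3
assert nearest(feats, probe) == 3
```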
A Hybrid Technique for the Automated Segmentation of Corpus Callosum in Midsa...
The corpus callosum (CC) is the largest white-matter structure in the human brain. In this paper, we apply two techniques to the segmentation of the corpus callosum and compare the results: the first combines the mean shift algorithm with morphological operations, and the second uses k-means clustering. The procedure has three steps. The first step finds the corpus callosum area using the adaptive mean shift algorithm or k-means clustering. In the second step, the boundary of the detected CC area is used as the initial contour in the Geometric Active Contour (GAC) model; in the final step, unknown noise is removed using morphological operations and the contour is evolved to obtain the final segmentation result. The experimental results demonstrate that both the mean shift algorithm and k-means clustering provide reliable segmentation performance.
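The k-means step above can be sketched as plain k-means on pixel intensities. This is only the localisation stage under our own initialisation choice; the mean shift alternative and the GAC refinement are not reproduced.

```python
import numpy as np

def kmeans_intensity(img, k=2, iters=20):
    """Plain k-means on pixel intensities, used to separate a bright
    structure (such as the CC) from the background."""
    x = img.astype(float).ravel()
    centres = np.quantile(x, np.linspace(0, 1, k))  # spread initial centres
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = x[labels == j].mean()
    return labels.reshape(img.shape), centres

img = np.zeros((10, 10)); img[3:6, 2:8] = 200.0   # bright structure
labels, centres = kmeans_intensity(img)
bright = int(np.argmax(centres))
assert np.array_equal(labels == bright, img == 200.0)
```

The boundary of the `labels == bright` region would then seed the active contour stage.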
Hierarchical Approach for Total Variation Digital Image Inpainting
The art of recovering an image from damage in an undetectable way is known as inpainting. Manual inpainting is most often a very time consuming process; digitizing the technique makes it automatic and faster. In this paper, after the user selects the regions to be reconstructed, the algorithm automatically reconstructs the lost regions with the help of the information surrounding them. Existing methods perform very well when the region to be reconstructed is very small, but fail to reconstruct properly as the area increases. This paper describes a hierarchical method by which the area to be inpainted is reduced over multiple levels, with the Total Variation (TV) method used to inpaint at each level. The algorithm performs better than existing algorithms such as nearest neighbor interpolation, inpainting through blurring, and Sobolev inpainting.
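Filling a region from its surroundings can be sketched with harmonic (diffusion) inpainting, a deliberate simplification: the paper replaces this smoothness term with Total Variation, which preserves edges better, and nests the solve in a coarse-to-fine hierarchy.

```python
import numpy as np

def inpaint_diffusion(img, mask, iters=500):
    """Fill masked pixels by repeatedly averaging their 4-neighbours
    (harmonic inpainting), keeping the undamaged pixels fixed."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()          # rough initial guess
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4
        out[mask] = avg[mask]              # update damaged pixels only
    return out

img = np.tile(np.linspace(0, 100, 11), (11, 1))   # smooth intensity ramp
mask = np.zeros_like(img, bool); mask[4:7, 4:7] = True
damaged = img.copy(); damaged[mask] = 0
restored = inpaint_diffusion(damaged, mask)
assert np.max(np.abs(restored[mask] - img[mask])) < 1.0  # hole recovered
```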
Performance Evaluation of 2D Adaptive Bilateral Filter For Removal of Noise F...
In this paper, we present a performance analysis of the adaptive bilateral filter in terms of peak signal-to-noise ratio and mean square error, evaluated while varying the filter's half-width and standard deviation parameters. In the adaptive bilateral filter, the edge slope is enhanced by transforming the histogram via a range filter with adaptive offset and width; the variance of the range filter can also be adaptive. The filter is applied to improve the sharpness of grey level and color images by increasing the slope of the edges without producing overshoot or undershoot. The related graphs were plotted and the best filter parameters obtained.
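The non-adaptive bilateral filter underlying the study above can be sketched as follows. The half-width and standard deviations here are illustrative; the paper's adaptive version additionally shifts and narrows the range kernel per pixel.

```python
import numpy as np

def bilateral(img, half_width=2, sigma_s=2.0, sigma_r=25.0):
    """Basic bilateral filter: each output pixel is a spatial- and
    range-weighted average of its (2*half_width+1)^2 neighbourhood."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, half_width, mode='edge')
    out = np.zeros_like(img)
    ax = np.arange(-half_width, half_width + 1)
    spatial = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * half_width + 1, j:j + 2 * half_width + 1]
            rng_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

step = np.zeros((8, 8)); step[:, 4:] = 100.0
sm = bilateral(step)
assert abs(sm[4, 1]) < 5 and abs(sm[4, 6] - 100) < 5  # edge preserved
```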
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open access journal, available online as well as in print, that provides rapid monthly publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic, and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published by a rapid process within 20 days after acceptance, and the peer review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
LOCAL DISTANCE AND DEMPSTER-SHAFER FOR MULTI-FOCUS IMAGE FUSION
This work proposes a new image fusion method using Dempster-Shafer theory and local variability (DST-LV). The method takes into account the behaviour of each pixel with respect to its neighbours: it computes the quadratic distance between the value of the pixel I(x, y) at each point and the values of all the neighbouring pixels. The local variability is used to determine the mass function defined in Dempster-Shafer theory. The two Dempster-Shafer classes studied are the fuzzy part and the focused part. The results of the proposed method are significantly better than those of other methods.
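The local variability measure above can be sketched as follows. The per-pixel winner-take-all fusion used here is a hard-decision stand-in for the paper's Dempster-Shafer mass combination of "focused" versus "fuzzy" evidence, which is not reproduced.

```python
import numpy as np

def local_variability(img, r=1):
    """Quadratic distance between each pixel and its neighbours: sum of
    squared differences over the (2r+1)^2 window."""
    img = img.astype(float)
    pad = np.pad(img, r, mode='edge')
    lv = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy or dx:
                nb = pad[r + dy:pad.shape[0] - r + dy,
                         r + dx:pad.shape[1] - r + dx]
                lv += (nb - img) ** 2
    return lv

def fuse(a, b):
    """Per pixel, keep the source with higher local variability
    (in-focus regions are locally more variable than defocused ones)."""
    return np.where(local_variability(a) >= local_variability(b), a, b)

sharp = np.tile([0.0, 255.0], (8, 4))     # high-contrast, in focus
blurred = np.full((8, 8), 127.0)          # defocused, low variability
assert np.array_equal(fuse(sharp, blurred), sharp)
```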
Texture Unit based Monocular Real-world Scene Classification using SOM and KN... (IDES Editor)
In this paper a method is proposed to discriminate real-world scenes into natural and man-made scenes of similar depth. The global roughness of a scene image varies as a function of image depth. An increase in image depth leads to an increase in roughness in man-made scenes; on the contrary, natural scenes exhibit smooth behavior at higher image depth. This particular arrangement of pixels in the scene structure can be well explained by the local texture information in a pixel and its neighborhood. Our proposed method analyses the local texture information of a scene image using a texture unit matrix. For the final classification we have used both supervised and unsupervised learning, using a K-Nearest Neighbor classifier (KNN) and a Self Organizing Map (SOM) respectively. This technique is useful for online classification due to its very low computational complexity.
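The texture unit representation mentioned above can be sketched concretely. The following is a minimal illustration of Wang-and-He-style texture unit numbers (each of the 8 neighbours is coded ternarily against the centre pixel); the aggregation of these numbers into the matrix used for KNN/SOM classification is not reproduced here:

```python
import numpy as np

def texture_unit_numbers(img):
    """Compute texture unit numbers for each interior pixel.

    Each of the 8 neighbours is coded 0 (less than centre), 1 (equal)
    or 2 (greater); the 8 codes form a base-3 number in [0, 6560].
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    centre = img[1:h-1, 1:w-1]
    # 8-neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    tu = np.zeros_like(centre)
    for i, (dy, dx) in enumerate(offsets):
        nb = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        code = np.where(nb > centre, 2, np.where(nb == centre, 1, 0))
        tu += code * (3 ** i)
    return tu
```

On a constant image every neighbour equals the centre, so every texture unit number is 1+3+9+...+3^7 = 3280; a centre pixel darker than all its neighbours yields the maximum value 6560.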
FUZZY SET THEORETIC APPROACH TO IMAGE THRESHOLDING (IJCSEA Journal)
Thresholding is a fast, popular and computationally inexpensive segmentation technique that is critical and decisive in some image processing applications. The result of image thresholding is not always satisfactory because of the presence of noise, vagueness and ambiguity among the classes. Since the theory of fuzzy sets is a generalization of classical set theory, it has greater flexibility to capture faithfully the various aspects of incompleteness or imperfectness of the information in a situation. To overcome this problem, we propose a two-stage fuzzy set theoretic approach to image thresholding that uses a measure of fuzziness to evaluate the fuzziness of an image and to determine an adequate threshold value. First, images are preprocessed by fuzzy rule-based filtering to reduce noise without any loss of image detail, and then in the final stage a suitable threshold is determined with a fuzziness measure as the criterion function. Experimental results on test images demonstrate the effectiveness of this method.
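As a rough illustration of thresholding by a fuzziness criterion, the sketch below scans candidate grey levels and keeps the one minimising an entropy-based measure of fuzziness (in the style of Huang and Wang); the membership function and criterion here are common textbook choices, not necessarily the paper's exact two-stage formulation:

```python
import numpy as np

def fuzzy_threshold(img):
    """Pick the grey level that minimises an entropy measure of fuzziness.

    For each candidate threshold the image is split into two classes;
    the membership of a pixel with grey level g is 1/(1 + |g - class mean|/C),
    and the Shannon entropy of the memberships measures how fuzzy the split is.
    """
    g = np.asarray(img, dtype=np.float64).ravel()
    hist = np.bincount(g.astype(int), minlength=256).astype(np.float64)
    levels = np.arange(256, dtype=np.float64)
    C = g.max() - g.min()          # normalisation constant
    best_t, best_fuzz = 0, np.inf
    for t in range(1, 255):
        w0, w1 = hist[:t+1].sum(), hist[t+1:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:t+1] * levels[:t+1]).sum() / w0
        m1 = (hist[t+1:] * levels[t+1:]).sum() / w1
        mean = np.where(levels <= t, m0, m1)
        mu = 1.0 / (1.0 + np.abs(levels - mean) / C)
        mu = np.clip(mu, 1e-12, 1.0 - 1e-12)
        # histogram-weighted Shannon entropy of the membership values
        s = -(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))
        fuzz = (hist * s).sum()
        if fuzz < best_fuzz:
            best_fuzz, best_t = fuzz, t
    return best_t
```

On a cleanly bimodal image the minimum-fuzziness threshold falls between the two modes, since memberships there are close to 0 or 1 and the entropy nearly vanishes.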
COMPOSITE TEXTURE SHAPE CLASSIFICATION BASED ON MORPHOLOGICAL SKELETON AND RE... (sipij)
After several decades of research, the development of an effective feature extraction method for texture classification is still an ongoing effort, and several techniques have been proposed to resolve such problems. In this paper a novel composite texture classification method based on innovative pre-processing techniques, skeletonization, and Regional Moments (RM) is proposed. The proposed texture classification approach takes into account the ambiguity introduced by noise and by the different capture and digitization processes. To achieve a better classification rate, innovative pre-processing methods are first applied to the various texture images. The pre-processing mechanisms describe various methods of converting a grey-level image into a binary image with minimal consideration of the noise model. Shape features are then evaluated using RM on the proposed Morphological Skeleton (MS) method, with suitable numerical characterization measures for precise classification. This texture classification study using MS and RM has given good performance: a good classification result is achieved from a single regional moment, RM10, while the others failed in classification.
Fast Complex Gabor Wavelet Based Palmprint Authentication (CSCJournals)
A biometric system is a pattern recognition system that recognizes a person on the basis of physiological or behavioural characteristics the person possesses. There is increasing interest among researchers in the development of fast and accurate personal recognition systems. In this paper, a sliding-window method is used to make the system fast by reducing the matching time; the reduction in computation time in turn reduces the overall comparison time. A 2-D complex Gabor wavelet method is used to extract features from the palmprint. The extracted features are stored in a feature vector and matched by a Hamming distance similarity measure using the sliding-window approach. Reductions of 74.12% and 90.32% in comparison time are achieved using sliding-window methods. The improvement in time and accuracy indicated by the experimental results makes the system rapid and accurate.
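The sliding-window Hamming matching idea can be illustrated as follows; the shift range and binary code layout are assumptions for illustration, not the paper's exact parameters:

```python
import numpy as np

def min_hamming_sliding(code_a, code_b, max_shift=2):
    """Minimum normalised Hamming distance over small horizontal shifts,
    compensating slight palm misalignment (the sliding-window idea)."""
    best = 1.0
    w = code_a.shape[1]
    for s in range(-max_shift, max_shift + 1):
        # Overlapping columns of the two codes for shift s
        a = code_a[:, max(0, s):w + min(0, s)]
        b = code_b[:, max(0, -s):w + min(0, -s)]
        best = min(best, np.mean(a != b))
    return best
```

A genuine pair that is merely shifted by a column or two scores near zero, whereas a fixed-alignment Hamming distance would penalise the whole shifted region.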
A Fuzzy Set Approach for Edge Detection (CSCJournals)
Image segmentation is one of the most studied problems in image analysis, computer vision, pattern recognition, etc. Edge detection is a discontinuity-based approach used for image segmentation. In this paper, edge detection using fuzzy sets is proposed, where an image is considered as a fuzzy set and pixels are taken as its elements. The fuzzy approach converts the color image to a partially segmented image; finally an edge detector is convolved over the partially segmented image to obtain an edged image. The approach is implemented using MATLAB 7.11 (R2010b). For qualitative and quantitative comparison, BSD (Berkeley Segmentation Database) images are used for experimentation. The performance parameters used are PSNR (dB) and the performance ratio (PR) of true to false edges. It is shown that the proposed approach performs better than Canny's edge detection algorithm under almost all scenarios, reducing false edge detection and double edges.
3D Face Recognition Method Using 2DPCA-Euclidean Distance Classification (IDES Editor)
In this paper, we present a 3D face recognition method that is robust to changes in facial expressions. Instead of locating many feature points, we need to locate only the nose tip as a reference point. After finding this reference point, the pictures are converted to a standard size. Two-dimensional principal component analysis (2DPCA) is employed to obtain feature matrix vectors. Finally, the Euclidean distance method is employed for classification and comparison of the features. Experimental results on the CASIA 3D face database, which includes 123 individuals in total, demonstrate that our proposed method achieves up to 98% recognition accuracy with respect to pose variation.
A Hybrid Technique for the Automated Segmentation of Corpus Callosum in Midsa... (IJERA Editor)
The corpus callosum (CC) is the largest white-matter structure in the human brain. In this paper, we apply two techniques to the segmentation of the corpus callosum and compare their results: the first is the mean shift algorithm with morphological operations, and the second is k-means clustering. The procedure is performed in three steps. The first step finds the corpus callosum area using the adaptive mean shift algorithm or k-means clustering. In the second step, the boundary of the detected CC area is used as the initial contour in the Geometric Active Contour (GAC) model, which is evolved to obtain the segmentation; the final step removes residual noise using morphological operations to produce the final result. The experimental results demonstrate that both the mean shift algorithm and k-means clustering provide reliable segmentation performance.
Hierarchical Approach for Total Variation Digital Image Inpainting (IJCSEA Journal)
The art of recovering an image from damage in an undetectable form is known as inpainting. Manual inpainting is most often a very time-consuming process; digitalization of the technique makes it automatic and faster. In this paper, after the user selects the regions to be reconstructed, the algorithm automatically reconstructs the lost regions with the help of the information surrounding them. The existing methods perform very well when the region to be reconstructed is very small, but fail to reconstruct properly as the area increases. This paper describes a hierarchical method by which the area to be inpainted is reduced over multiple levels, with the Total Variation (TV) method used to inpaint at each level. This algorithm gives better performance when compared with other existing algorithms such as nearest neighbor interpolation, inpainting through blurring, and Sobolev inpainting.
Performance Evaluation of 2D Adaptive Bilateral Filter For Removal of Noise F... (CSCJournals)
In this paper, we present a performance analysis of the adaptive bilateral filter in terms of peak signal-to-noise ratio and mean square error. The filter was evaluated by changing its parameters: the half-width values and the standard deviations. In the adaptive bilateral filter, the edge slope is enhanced by transforming the histogram via a range filter with adaptive offset and width; the variance of the range filter can also be adaptive. The filter is applied to improve the sharpness of grey-level and color images by increasing the slope of the edges without producing overshoot or undershoot. The related graphs were plotted and the best filter parameters obtained.
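For reference, a plain (non-adaptive) bilateral filter, the baseline that the adaptive variant extends with per-pixel offset and width, can be sketched as:

```python
import numpy as np

def bilateral_filter(img, half_width=2, sigma_s=2.0, sigma_r=25.0):
    """Plain bilateral filter: each output pixel is a spatial-and-range
    weighted average of its (2w+1)x(2w+1) window, so strong intensity
    differences (edges) receive little weight and are preserved."""
    img = np.asarray(img, dtype=np.float64)
    pad = np.pad(img, half_width, mode='edge')
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-half_width:half_width+1, -half_width:half_width+1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            window = pad[y:y+2*half_width+1, x:x+2*half_width+1]
            rng = np.exp(-((window - img[y, x])**2) / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * window).sum() / wgt.sum()
    return out
```

With a small range sigma, pixels on opposite sides of a step edge contribute almost nothing to each other, which is why the filter smooths flat regions without blurring edges.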
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
LOCAL DISTANCE AND DEMPSTER-SHAFER FOR MULTI-FOCUS IMAGE FUSION (sipij)
This work proposes a new multi-focus image fusion method using Dempster-Shafer theory and local variability (DST-LV). The method takes into account the behaviour of each pixel together with its neighbours: it computes the quadratic distance between the value of the pixel I(x, y) at each point and the values of all the neighbouring pixels. This local variability is used to determine the mass function defined in Dempster-Shafer theory, where the two classes studied are the fuzzy part and the focused part. The results of the proposed method are significantly better when compared to the results of other methods.
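The local variability computation, the quadratic distance between a pixel and its neighbours, can be sketched as follows; the neighbourhood radius is an assumption for illustration, and the Dempster-Shafer mass assignment built on top of it is not shown:

```python
import numpy as np

def local_variability(img, radius=1):
    """Quadratic distance between each pixel and its neighbourhood:
    LV(x, y) = sqrt(mean over neighbours of (I(x, y) - neighbour)^2)."""
    img = np.asarray(img, dtype=np.float64)
    pad = np.pad(img, radius, mode='reflect')
    h, w = img.shape
    acc = np.zeros_like(img)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            nb = pad[radius+dy:radius+dy+h, radius+dx:radius+dx+w]
            acc += (img - nb) ** 2
            n += 1
    return np.sqrt(acc / n)
```

In a fusion setting, the image whose pixel shows the larger local variability is typically treated as the in-focus source for that pixel, since blur suppresses local contrast.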
Automatic rectification of perspective distortion from a single image using p... (ijcsa)
Perspective distortion occurs due to the perspective projection of a 3D scene onto a 2D surface. Correcting the distortion of a single image without losing any desired information is one of the challenging tasks in the field of Computer Vision. We consider the problem of estimating perspective distortion from a single still image of an unstructured environment and making a perspective correction that is both quantitatively accurate and visually pleasing. Corners are detected based on the orientation of the image, and a method based on plane homography and transformation is used to make the perspective correction. The algorithm infers frontier information directly from the images, without any reference objects or prior knowledge of the camera parameters; the frontiers are detected using geometric-context-based segmentation. The goal of this paper is to present a framework providing fully automatic and fast perspective correction.
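The plane-homography step at the heart of such perspective correction can be sketched with a standard Direct Linear Transform from four point correspondences (a generic DLT, not the paper's full automatic pipeline):

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: 3x3 homography H mapping src points to
    dst points in homogeneous coordinates (H is determined up to scale)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u*x, u*y, u])
        A.append([0, 0, 0, -x, -y, -1, v*x, v*y, v])
    # The homography is the null vector of A (smallest singular value)
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    pts = np.asarray(pts, dtype=np.float64)
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

Given the four corners of a distorted quadrilateral and the corners of the desired rectangle, the recovered H warps the image onto the fronto-parallel view.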
2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr... (IOSR Journals)
Abstract: Due to a higher processing-power-to-cost ratio, it is now possible to replace the manual detection methods used in the IC (Integrated Circuit) industry by image-processing-based automated methods to detect a broken pin of an IC connected on a PCB during manufacturing, making the process faster, easier and cheaper. In this paper an accurate and fast automatic detection method is used in which top-view camera shots of PCBs are processed using advanced methods of 2-dimensional discrete wavelet pre-processing before applying edge detection. A comparison with conventional edge detection methods such as Sobel, Prewitt and Canny edge detection without 2-D DWT is also performed. Keywords: 2-dimensional wavelets, Edge detection, Machine vision, Image processing, Canny.
ROLE OF HYBRID LEVEL SET IN FETAL CONTOUR EXTRACTION (sipij)
Image processing technologies may be employed for quicker and more accurate diagnosis in the analysis and feature extraction of medical images. Here, an existing level set algorithm is modified and employed for extracting the contour of the fetus in an image. In the traditional approach, fetal parameters are extracted manually from ultrasound images. An automatic technique is highly desirable for obtaining fetal biometric measurements, owing to problems with the traditional approach such as lack of consistency and accuracy. The proposed approach utilizes global and local region information for fetal contour extraction from ultrasonic images. The main goal of this research is to develop a new methodology to aid the analysis and feature extraction.
Mislaid character analysis using 2-dimensional discrete wavelet transform for... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Developing 3D Viewing Model from 2D Stereo Pair with its Occlusion Ratio (CSCJournals)
We intend to make a 3D model from a stereo pair of images using a novel method of local matching in the pixel domain for calculating horizontal disparities. We also find the occlusion ratio using the stereo pair, followed by the use of The Edge Detection and Image SegmentatiON (EDISON) system on one of the images, which provides a complete toolbox for discontinuity-preserving filtering, segmentation and edge detection. Instead of assigning a disparity value to each pixel, a disparity plane is assigned to each segment. We then warp the segment disparities to the original image to get our final 3D viewing model.
NMS and Thresholding Architecture used for FPGA based Canny Edge Detector for... (idescitation)
In this paper, an architecture designed for Non-Maximal Suppression used in the Canny edge detection algorithm is presented in order to reduce memory requirements significantly. The architecture also achieves decreased latency and increased throughput with no loss in edge detection. The new algorithm uses a low-complexity 8-bin non-uniform gradient magnitude histogram to compute block-based hysteresis thresholds that are used by the Canny edge detector. Furthermore, the hardware architecture of the proposed algorithm is presented in this paper and the architecture is synthesized on the Xilinx Virtex 5 FPGA. The design development is done in VHDL and simulated results are obtained using ModelSim 6.3 with Xilinx 12.2.
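The histogram-based threshold selection can be sketched in software terms as follows; the bin edges and the fraction of pixels kept as edge candidates are illustrative assumptions, not the values used in the hardware design:

```python
import numpy as np

def block_hysteresis_thresholds(grad_mag, keep_frac=0.2, low_ratio=0.4):
    """Approximate a block's Canny hysteresis thresholds from a coarse
    8-bin non-uniform histogram of its gradient magnitudes."""
    g = np.asarray(grad_mag, dtype=np.float64).ravel()
    gmax = g.max()
    if gmax == 0:
        return 0.0, 0.0
    # Non-uniform bin edges: finer near zero, where most gradients lie
    edges = gmax * np.array([0, .02, .05, .10, .18, .30, .45, .65, 1.0])
    hist, _ = np.histogram(g, bins=edges)
    # Walk down from the top bin until keep_frac of pixels lie above
    target = keep_frac * g.size
    count = 0
    high = edges[-1]
    for i in range(7, -1, -1):
        count += hist[i]
        high = edges[i]
        if count >= target:
            break
    return high, low_ratio * high
```

Because only eight bin counts are needed per block, this kind of threshold estimate maps naturally onto a small counter array in hardware.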
Image enhancement techniques play a vital role in improving the quality of an image. An enhancement technique basically enhances the foreground information while retaining the background, improving the overall contrast of the image. In some cases the background of an image hides its structural information. This paper proposes an algorithm which enhances the foreground and background parts separately, stretches the contrast of the image at the inter-object and intra-object levels, and then combines the parts into an enhanced image. The results are compared with various classical methods using image quality measures.
Invariant Recognition of Rectangular Biscuits with Fuzzy Moment Descriptors, Flawed Pieces Detection
Pulivarthi Srinivasa Rao, Sheli Sinha Chaudhuri & Romesh Laishram
International Journal of Image Processing, Volume (4): Issue (3)
Invariant Recognition of Rectangular Biscuits with Fuzzy Moment Descriptors, Flawed Pieces Detection
Pulivarthi Srinivasa Rao srinivasp08@gmail.com
Electronics & Communication Department
Andhra Polytechnic
Kakinada, 533003, India
Sheli Sinha Chaudhuri shelisc@yahoo.co.in
Electronics & Telecommunication Department
Jadavpur University
Kolkata, 700032, India
Romesh Laishram romeshlaishram@gmail.com
Electronics & Communication Department
Manipur Institute of Technology
Takyelpat, 795001, India
Abstract
In this paper a new approach is proposed for invariant recognition of broken rectangular biscuits using fuzzy membership-distance products, called fuzzy moment descriptors. The existing methods for recognition of flawed rectangular biscuits are mostly based on the Hough transform; however, these methods are prone to error due to noise and/or variation in illumination. Fuzzy moment descriptors are less sensitive to noise, making the approach effective and invariant to such stray external disturbances. Further, the normalization and sorting of the moment vectors make the recognition process size and rotation invariant. In earlier studies fuzzy moment descriptors have been applied successfully to the image matching problem. In this paper the algorithm is applied to the recognition of flawed and non-flawed rectangular biscuits. In general, the proposed algorithm has potential applications in industrial quality control.
Keywords: Fuzzy moment descriptors, Euclidean distance, Edge detection, Flawed biscuits detection.
1. INTRODUCTION
The problem of pattern recognition has been studied extensively in the literature across different domains. Invariant pattern recognition in Hough space [1]-[4] has already been addressed by Krishnapuram et al. [5] and by Sinha et al. [6], but neither of these works includes the treatment of size-scaling. The authors in [5] and [6] use convolution in θ-space to achieve rotational registration between sample objects and templates; additional processing is then necessary to determine the translational correspondence. The use of a pre-defined standard position in Hough space along with Distance-Discriminator Neural Neurons to achieve invariant pattern recognition has been addressed by Montenegro et al. [7], where the distance between a template and a sample vector gives the corresponding degree of likeness. If several objects appear simultaneously in image space, preprocessing is necessary to single out individual objects by one of the broadly known labeling techniques [8]. The authors in [9] applied the Hough transform to contour identification for object identification. In [10]-[11], the author proposed geometric-transformation-invariant pattern recognition in Hough space for polygonal-shaped objects. In [12] the author proposed an algorithm for flawed biscuit detection in Hough space, in which the recognition process is carried out entirely in the Hough space. The same algorithm has been applied to the recognition of rectangular chocolates [13] and metallic corner fasteners [14]. In [15] the authors proposed a technique for extracting rectangular objects from real color images using a combination of global contour based line segment detection and a Markov random field (MRF) model.
In this paper we address the broken rectangular biscuit detection problem using fuzzy moment descriptors. Fuzzy logic [16], which has proved successful in matching inexact data, can equally be used for inexact matching of close image attributes. In [17] the authors introduced the fuzzy moment descriptor technique for invariant recognition of gray images and successfully applied it to the face recognition problem, using the Euclidean distance between a reference image and the test images as the metric for recognition; the test image with the least Euclidean distance is taken as the best match. The authors in [18] proposed a new method for image registration based on the Nonsubsampled Contourlet Transform (NSCT) for feature extraction, with the matching of a test image to a reference image achieved using Zernike moments; this approach can be extended to shape matching and object classification. However, our present work is mainly focused on the fuzzy moment descriptor proposed in [17].
In this technique an input gray image is partitioned into n² non-overlapping blocks of equal
dimensions. Blocks containing edge regions are then identified. The degree of membership of a
given block to contain edges is measured subsequently with the help of a few pre-estimated
image parameters, namely the average gradient, the variance of the gradient, and the difference
between the maximum and the minimum gradients. A fuzzy moment, which informally means the
membership-distance product of a block b[i, w] with respect to a block b[j, k], is computed for all
1 ≤ i, w, j, k ≤ n. A feature called the 'sum of fuzzy moments', which keeps track of the image
characteristics and their relative distances, is used as the image descriptor. We use the Euclidean
distance to measure the separation between the image descriptors of a reference image and a
test image, and set a threshold on this distance to separate flawed from non-flawed rectangular
biscuits.
The remainder of the paper is organized as follows. Section 2 describes the estimation of the fuzzy
membership distribution of a given block to contain an edge. Methods for computing the fuzzy
moments and for constructing the image descriptors are presented in Section 3. The algorithm for
the invariant recognition of rectangular biscuits is described in Section 4. The simulation results
are presented in Section 5, and Section 6 concludes the paper.
2. IMAGE FEATURES AND THEIR MEMBERSHIP DISTRIBUTIONS
This section briefly describes the image features used in the proposed technique and their fuzzy
membership distributions.
Definition 2.1 An edge is a contour of pixels having large gradients with respect to their neighbors
in an image.
Definition 2.2 A fuzzy membership distribution µY(x) denotes the degree of membership of a
variable x to belong to Y, where Y is a subset of a universal set U.
Definition 2.3 The gradient [4] at a pixel (x, y) in an image is estimated by taking the square root
of the sum of squared differences of gray levels of the neighboring pixels with respect to the pixel
(x, y).
Definition 2.4 The gradient difference (Gdiff) within a partitioned block is defined as the difference
between the maximum and minimum gradient values in that block.
Definition 2.5 The gradient average (Gavg) within a block is defined as the average of the
gradients of all pixels within that block.
Definition 2.6 The variance (σ²) of the gradient is defined as the arithmetic mean of the squared
deviations from the mean. It is expressed formally as

σ² = Σ (G − Gavg)² P(G)

where G denotes the gradient value at a pixel, and P(G) represents the probability of the
particular gradient value G in that block.
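As a minimal sketch (our own illustration, not code from the paper), the block features of Definitions 2.4-2.6 can be computed from a precomputed gradient-magnitude image as follows; the function name and the assumption that the image sides are divisible by n are ours:

```python
import numpy as np

def block_features(gradient, n):
    """Return (Gavg, Gdiff, variance) for each of the n x n equal blocks of a
    gradient-magnitude image, following Definitions 2.4-2.6."""
    h, w = gradient.shape
    bh, bw = h // n, w // n                 # block size (image assumed divisible by n)
    feats = np.empty((n, n, 3))
    for j in range(n):
        for k in range(n):
            blk = gradient[j * bh:(j + 1) * bh, k * bw:(k + 1) * bw]
            g_avg = blk.mean()                       # Definition 2.5
            g_diff = blk.max() - blk.min()           # Definition 2.4
            variance = ((blk - g_avg) ** 2).mean()   # Definition 2.6
            feats[j, k] = (g_avg, g_diff, variance)
    return feats
```

Here the variance is computed as the mean squared deviation over the block's pixels, which coincides with Definition 2.6 when P(G) is taken as the empirical gradient distribution of the block.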
2.1 Fuzzy Membership Distributions
Once the features of the partitioned blocks of an image are estimated following the above
definitions, the same features may be used for the estimation of the membership value of a block
containing an edge.
When a block contains an edge, the gradient in that block has non-zero values only on the edge
pixels, so there must be a small positive average gradient of the pixels in such blocks.
Consequently, σ² should be a small positive number. We may thus generalize that when σ² is
close to zero but positive, the membership of a block b[j, k] to contain an edge is high, and low
otherwise. Based on these intuitive assumptions, we presume the membership curve µ(bjk) = 1 −
exp(−bx²), b > 0. The exact value of b can be evaluated by employing genetic algorithms (Man et
al., 1999). It may be noted that 1 − exp(−bx²) has a zero value at x = σ² = 0 and approaches unity
as x = σ² → ∞. The square on x (x²) gives the faster rate of growth of the membership curve
1 − exp(−bx²).
Analogously, the average gradient Gavg of a block b[i, j] containing only an edge is large. Thus, the
membership of a block b[i, j] to contain an edge will be high when Gavg is large. This phenomenon
can be represented by the membership function 1 − exp(−x). The selection of the membership
function for an edge w.r.t. Gdiff follows directly from the previous discussion.
TABLE 1: Membership functions for the features

Parameter   Edge membership
Gavg        1 − e^(−8x)
Gdiff       1 − exp(−bx²), b > 0
σ²          1 − exp(−bx²), b > 0
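The curves in Table 1 can be sketched as below; the value b = 5 is an arbitrary illustrative choice of ours, since the paper leaves b to be tuned (e.g. by a genetic algorithm):

```python
import math

def mu_edge_gavg(x):
    """Edge membership w.r.t. the average gradient Gavg: 1 - e^(-8x) (Table 1)."""
    return 1.0 - math.exp(-8.0 * x)

def mu_edge_sq(x, b=5.0):
    """Edge membership w.r.t. Gdiff or the gradient variance: 1 - exp(-b*x^2),
    b > 0.  b = 5 is an illustrative choice, not a value from the paper."""
    return 1.0 - math.exp(-b * x * x)
```

Both curves are zero at x = 0 and rise monotonically toward unity, as described in Section 2.1.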
The membership value of a block b[j, k] containing an edge can easily be estimated if the
parameters and the membership curves are known. The fuzzy production rules described below
are subsequently used to estimate the degree of membership of a block b[j, k] to contain an edge
by taking into account the combined effect of the parameters.
2.2 Fuzzy Production Rules
A fuzzy production rule is an if-then relationship representing a piece of knowledge in a given
problem domain. For the estimation of the fuzzy membership of a block b[j, k] to contain, say, an
edge, we need to obtain the composite membership value from the individual parametric values.
The if-then rules represent logical mapping functions from the individual parametric memberships
to the composite membership of a block containing an edge. The production rule PR1 is a typical
example of this concept.

PR1: IF (Gavg > 0.142) AND (Gdiff > 0.707) AND (σ² ≈ 1.0)
THEN (the block contains edges).
Let us assume that Gavg, Gdiff and σ² for a given partitioned block have been found to be 0.149,
0.8 and 0.9, respectively. The composite membership µedge(bjk) can now be estimated by first
obtaining the membership values w.r.t. Gavg, Gdiff and σ², respectively, by consulting the
membership curves, and then applying the fuzzy AND (minimum) operator over these membership
values. The single-valued membership thus obtained describes the degree of membership of the
block b[j, k] to contain an edge.
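A sketch of this composite estimation, using the Table 1 curves and the worked numbers above (the function name and the value of b are our illustrative choices):

```python
import math

def edge_membership(g_avg, g_diff, variance, b=5.0):
    """Composite edge membership of a block: the fuzzy AND (minimum) of the
    three parametric memberships of Table 1.  b > 0 is a tunable constant;
    b = 5 is an illustrative choice, not a value from the paper."""
    m_avg = 1.0 - math.exp(-8.0 * g_avg)        # membership w.r.t. Gavg
    m_diff = 1.0 - math.exp(-b * g_diff ** 2)   # membership w.r.t. Gdiff
    m_var = 1.0 - math.exp(-b * variance ** 2)  # membership w.r.t. variance
    return min(m_avg, m_diff, m_var)            # fuzzy AND
```

With the worked example (0.149, 0.8, 0.9), the Gavg term is the smallest of the three and thus determines the composite membership.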
3. FUZZY MOMENT DESCRIPTORS
In this section we define fuzzy moments and evaluate image descriptors based on those
moments. A few definitions required to understand the concept are given below.
Definition 3.1 The fuzzy edge moment M(edge)iw,jk is estimated by taking the product of the
membership value µedge(bjk) (of the block b[j, k] containing an edge) and the normalized Euclidean
distance diw,jk of the block b[j, k] with respect to the block b[i, w]. Formally,

M(edge)iw,jk = diw,jk · µedge(bjk)                                   (3.1)
Definition 3.2 The fuzzy sum of moments (FSM) for edge, Siw, w.r.t. the block b[i, w] is defined as
the sum of the edge moments of the blocks whose edge membership is the highest among all their
membership values. Formally,

Siw = Σjk diw,jk · µedge(bjk)                                        (3.2)

where µedge(bjk) = Maxx { µx(bjk) }, x Є the set of features.
After the estimation of the fuzzy membership values for edges, the predominant membership
value and the predominant feature of each block are saved. The FSMs with respect to the
predominant features are evaluated for each block in the image. Each set of FSMs is stored in a
one-dimensional array and sorted in descending order. These sorted vectors are used as
descriptors for the image.
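Under the simplifying assumption that every block's predominant feature is 'edge', the construction of the sorted FSM descriptor can be sketched as (an illustration of ours, not the paper's code):

```python
import numpy as np

def fsm_descriptor(memberships):
    """Sorted 'fuzzy sum of moments' descriptor (Eq. 3.2).  memberships is an
    n x n array of edge-membership values, one per block.  Distances between
    block positions are normalized by the largest inter-block distance, which
    provides the size invariance discussed in Section 4."""
    n = memberships.shape[0]
    coords = np.array([(j, k) for j in range(n) for k in range(n)], dtype=float)
    mu = memberships.ravel()
    # pairwise Euclidean distances d_iw,jk between block positions, scaled to [0, 1]
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    d /= d.max()
    s = d @ mu                 # S_iw = sum_jk d_iw,jk * mu_edge(b_jk)
    return np.sort(s)[::-1]    # descending order, for rotation invariance
```

For a 2 x 2 image with all memberships equal to one, every block is equidistant from the others by symmetry, so all descriptor entries coincide.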
To match a reference image against a set of input images, one has to estimate the image
descriptors of the reference and the input images. For our recognition process we used a non-
broken biscuit as the reference image. Matching requires estimation of the Euclidean distance
between the reference image and each input image. The measure of the distance between the
descriptors of two images is given by Definition 3.3.
Definition 3.3 The Euclidean distance [Eij]k between the corresponding k-th sorted FSM
descriptor vectors Vi and Vj of two images i and j, of respective dimensions (n × 1) and (m × 1), is
estimated by first ignoring the last (n − m) elements of the first array, where n > m, and then
taking the sum of squares of the elemental differences of the second array with respect to the
modified first array having m elements.
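A minimal sketch of Definition 3.3 (note that, as defined, the measure is the sum of squared differences without a square root):

```python
def descriptor_distance(v1, v2):
    """[E_ij]_k of Definition 3.3: truncate the longer sorted descriptor vector
    to the length of the shorter one, then take the sum of squared elemental
    differences."""
    m = min(len(v1), len(v2))
    return sum((a - b) ** 2 for a, b in zip(v1[:m], v2[:m]))
```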
Definition 3.4 The measure of distance between two images, hereafter called the image distance,
is estimated by exhaustively taking the Euclidean distance between each pair of similar descriptor
vectors of the two images and then taking the weighted sum of these Euclidean distances.
Formally, the distance Dry between a reference image r and an image y is defined as

Dry = Σk βk [Eij]k                                                   (3.3)
where the suffixes i and j in [Eij]k correspond to the vector sets Vi of image r and Vj of image y.
In our technique for detecting flawed pieces of rectangular biscuits we estimate the image
distance Dry, where y Є the set of input images and r denotes the reference image. An image for
which the image distance Dry is larger than a predefined threshold belongs to the flawed pieces;
otherwise the biscuit is non-broken.
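Equation (3.3) and the threshold test can be sketched as follows; equal weights βk = 1 are an assumption of ours, since the paper does not specify them:

```python
def image_distance(ref_vectors, test_vectors, weights=None):
    """Image distance D_ry (Eq. 3.3): weighted sum of the distances [E_ij]_k
    between corresponding sorted descriptor vectors of the two images.
    Equal weights beta_k = 1 are assumed here."""
    if weights is None:
        weights = [1.0] * len(ref_vectors)
    total = 0.0
    for beta, v1, v2 in zip(weights, ref_vectors, test_vectors):
        m = min(len(v1), len(v2))                      # Definition 3.3 truncation
        total += beta * sum((a - b) ** 2 for a, b in zip(v1[:m], v2[:m]))
    return total

def is_flawed(d_ry, threshold):
    """An image distance above the threshold indicates a flawed piece."""
    return d_ry > threshold
```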
4. ALGORITHM
In this section the algorithm used in invariant recognition of rectangular biscuits is presented. The
algorithm requires a reference image which is a non-broken biscuit image and a set of input
images for recognition process. Here all the images to the algorithm are supplied after the images
passes through an edge detector. For recognition of rectangular biscuits only the edge
information of the image is sufficient. The edge detection may be performed using Sobel edge
detector or Cany edge detector. For our problem we have used a simple Sobel edge detector.
The major steps for the process is as depicted in figure 4.1
Figure 1: Basic steps involved in the recognition process
In order to keep the matching process free from size and rotational variance, the following
strategies have been adopted:
I. The Euclidean distances between each pair of blocks of an image, used for the estimation of
fuzzy moments, are normalized with respect to the image itself.
II. The descriptor vectors are sorted so as to keep the blocks with the most predominant features
at the beginning of the array; only these blocks subsequently participate in the matching
process, which keeps it free from rotational variance of the reference image.
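The Sobel edge detection step chosen above can be sketched as a naive, unoptimized implementation (our own illustration, with zero padding at the borders):

```python
import numpy as np

def sobel_gradient(img):
    """Gradient magnitude via the Sobel operator, the edge detector used in
    Section 4.  img is a 2-D float array; borders use zero padding."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img, 1)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = p[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()   # horizontal gradient
            gy[i, j] = (win * ky).sum()   # vertical gradient
    return np.hypot(gx, gy)               # gradient magnitude
```

A vertical step edge produces a strong response along the edge column and zero response in flat regions, which is exactly what the block-membership estimation of Section 2 relies on.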
5. SIMULATION RESULTS
To study the effectiveness of the proposed algorithm we consider computer generated images as
shown in figure 2. These computer synthesized images directly follows from [12]. In the figure 2,
the image ref is the reference image which is a non-broken rectangular biscuit and the images r2
– r12 are taken as the input test images
Figure 2: Computer generated biscuit images
Table 2: Euclidean distance between the reference and the test input images

Test image   Euclidean distance
r2           0.244295
r3           1.850987
r4           3.645934
r5           1.711132
r6           0.462434
r7           2.635301
r8           0.254587
r9           1.538170
r10          0.430162
r11          1.517850
r12          0.142401
From Table 2 we observe that if a test input image is an unbroken biscuit, the Euclidean distance
between it and the reference image is small, whereas for broken biscuits (flawed pieces) the
Euclidean distance is high. Therefore, using the Euclidean distance as the metric, we can
separate broken rectangular biscuits from unbroken ones: if the Euclidean distance is greater
than a predefined threshold, the biscuit is broken; otherwise it is unbroken. The choice of the
threshold is critical to the successful detection of broken biscuits. There may be many methods
for designing the threshold value; presently we choose the threshold intuitively, based on the
database used in the simulations. The test input images r2, r6, r8, r10 and r12 are unbroken
biscuits of different sizes and orientations (Figure 2), while the remaining biscuits are flawed
pieces. It can be observed from Table 2 that for unbroken biscuits the Euclidean distance is below
1 and for flawed pieces it is greater than 1. For simulation with the above set of images we
therefore set the Euclidean distance threshold equal to 1. For this set of test input images the
threshold works perfectly. This may not be an ideal way of designing the threshold, but if the
database is very large, the intuitive choice of threshold may still produce results with little error.
The results may be less satisfactory compared to the method proposed in [12]; however, that
algorithm fails to recognize the images r10 and r12 in Figure 2, as it cannot be applied to
horizontally and vertically aligned images. Our proposed algorithm is thus more robust than the
algorithm based on the Hough transform [12].
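Applying the threshold of 1 to the distances in Table 2 reproduces the classification described above:

```python
# Euclidean distances from Table 2 and the threshold of 1 chosen in Section 5.
distances = {
    "r2": 0.244295, "r3": 1.850987, "r4": 3.645934, "r5": 1.711132,
    "r6": 0.462434, "r7": 2.635301, "r8": 0.254587, "r9": 1.538170,
    "r10": 0.430162, "r11": 1.517850, "r12": 0.142401,
}
THRESHOLD = 1.0
broken = {name for name, d in distances.items() if d > THRESHOLD}
unbroken = set(distances) - broken
# broken   -> r3, r4, r5, r7, r9, r11 (flawed pieces)
# unbroken -> r2, r6, r8, r10, r12
```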
6. CONCLUSION
In this paper we have shown that fuzzy moment descriptors can be used for invariant recognition
of rectangular biscuits, including the detection of flawed biscuit pieces. The proposed technique is
not limited to rectangular objects; it can be applied to other appropriate pattern recognition
problems as well. The proposed technique is less sensitive to external noise than the technique
using Hough space. However, the intuitive way of choosing the threshold may not be ideal; the
use of neural networks or other optimization techniques for choosing the threshold is considered
as future work in continuation of this study.
7. REFERENCES
1. J. Illingworth and J. Kittler. "A survey of the Hough transform", CVGIP, Vol. 44, pp. 87-116,
1988
2. P.V.C. Hough. "A method and means for recognizing complex patterns", U.S. Patent
3,069,654 (1962)
3. R. O. Duda and P. E. Hart. "Use of the Hough transform to detect lines and curves in
pictures", Communications of the ACM, Vol. 15, No. 1, pp. 11-15, 1972
4. R. Gonzalez and R. E. Woods. "Digital Image Processing", Addison-Wesley Publishing,
(1992)
5. R. Krishnapuram and D. Casasent. "Hough space transformation for discrimination and
distortion estimation", CVGIP, Vol. 38, pp. 299-316, 1987
6. P. K. Sinha, F. Y. Chen and R. E. N. Horne. "Recognition and location of shapes in the Hough
pattern space", IEE Electronics Division Colloquium on Hough Transforms, Savoy Place,
London, 1993
7. J. Montenegro Joo, L. da F. Costa and R. Köberle. "Geometric transformation invariant
pattern recognition with Hough transforms and distance discriminator neural networks",
presented at the Workshop sobre Computação de Alto Desempenho para Processamento de
Sinais, São Carlos, SP, Brazil, 1993
8. R. Schalkoff. "Digital Image Processing and Computer Vision", John Wiley & Sons Inc. (1989)
9. G. Gerig and F. Klein. "Fast contour identification through efficient Hough transform and
simplified interpretation strategy", Proc. 8th International Conference on Pattern Recognition
(ICPR-8), Paris, pp. 498-500 (1986)
10. J. Montenegro Joo. "A Polar-Hough-Transform Based Algorithm for the Translation,
Orientation and Size-Scale Invariant Pattern Recognition of Polygonal Objects", UMI
Dissertations, I.D. 03769, 1998
11. J. Montenegro Joo. "Geometric-Transformations Invariant Pattern Recognition in the Hough
Space", Doctoral Degree Project, Cybernetic Vision Research Group, Instituto de Física de
São Carlos (IFSC), Dpto. de Física e Informática, Universidade de São Paulo (USP), São
Carlos, SP, Brazil (August 1994)
12. J. Montenegro Joo. "Invariant recognition of rectangular biscuits through an algorithm
operating exclusively in Hough space. Flawed pieces detection", Revista de Investigación de
Física, Vol. 5, No. 1-2, 2002
13. J. Montenegro Joo. "Hough Transform based Algorithm for the Automatic Invariant
Recognition of Rectangular Chocolates. Detection of defective pieces", Industrial Data, Lima,
Peru, Vol. 9, No. 2, 2006
14. J. Montenegro Joo. "Hough-transform based automatic invariant recognition of metallic
corner-fasteners", Sistemas e Informática, Industrial Data, Lima, Peru, Vol. 10, No. 1, 2007
15. Y. Liu, T. Ikenaga and S. Goto. "A Novel Approach of Rectangular Shape Object Detection in
Color Images Based on an MRF Model", in Proceedings of the 5th IEEE International
Conference on Cognitive Informatics, 2006
16. A. Konar. "Artificial Intelligence and Soft Computing: Behavioral and Cognitive Modeling of
the Human Brain", CRC Press (2000)
17. B. Biswas, A. Konar and A. K. Mukherjee. "Image matching with fuzzy moment descriptors",
Engineering Applications of Artificial Intelligence, Vol. 14, No. 1, pp. 43-49, 2001
18. J. Sarvaiya, S. Patnaik and H. Goklani. "Image Registration using NSCT and Invariant
Moment", International Journal of Image Processing, ISSN 1985-2304, Vol. 4, No. 2,
pp. 119-130, May 2010