Grey Level Co-occurrence Matrices (GLCM) are one of the earliest techniques used for image texture analysis. In this paper we define a new feature, called trace, extracted from the GLCM, and discuss its implications for texture analysis in the context of Content-Based Image Retrieval (CBIR). A theoretical extension of the GLCM to n-dimensional grey-scale images is also discussed. The results indicate that trace features outperform Haralick features when applied to CBIR.
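As an illustrative sketch (not the paper's implementation), the trace feature can be read as the sum of the main diagonal of a normalized GLCM; the displacement and grey-level quantization below are assumed choices:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey Level Co-occurrence Matrix for one displacement (dx, dy).
    `img` is assumed to be pre-quantized to integer values in [0, levels)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # normalize to a joint probability distribution

def trace_feature(img, **kw):
    """Trace of the normalized GLCM: the total probability that a pixel
    pair at the chosen displacement shares the same grey level."""
    return float(np.trace(glcm(img, **kw)))

# four flat 2x2 blocks: 8 of the 12 horizontal pairs share a grey level
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
print(trace_feature(img))  # -> 0.666...
```

A high trace indicates many equal-valued pixel pairs at the given offset, i.e. locally flat texture.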
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES (cseij)
It is a challenging task to build a CBIR system that works primarily on texture values, as their meaning and semantics need special care to be mapped to human language. We consider highly textured images characterized by five properties (entropy, homogeneity, contrast, cluster shade, autocorrelation) and map them onto a fuzzy min-max scale with respect to their degree (high, medium, low) and technical interpretation. The developed system performs well in terms of precision and recall, showing that the semantic gap has been reduced for CBIR of highly textured images.
Segmentation by Fusion of Self-Adaptive SFCM Cluster in Multi-Color Space Com... (CSCJournals)
This paper proposes a new, simple, and efficient segmentation approach that could find diverse applications in pattern recognition as well as in computer vision, particularly in color image segmentation. First, we choose the best segmentation components among six different color spaces. Then, histogram and SFCM techniques are applied for initialization of segmentation. Finally, we fuse the segmentation results and merge similar regions. Extensive experiments have been carried out on the Berkeley image database using the proposed algorithm. The results show that, compared with some classical segmentation algorithms such as Mean-Shift, FCR, and CTM, our method yields reasonably good or better image partitioning, which illustrates the practical value of the method.
Blind Source Separation Using Hessian Evaluation (CSCJournals)
This paper focuses on blind image separation using sparse representation for natural images. The statistics of natural images rest on one particular property, sparseness, which is closely related to the super-Gaussian distribution. Since natural images can have both Gaussian and non-Gaussian distributions, the original infomax algorithm cannot be directly used for source separation, as it is better suited to estimating super-Gaussian sources. Hence, we explore the property of sparseness for image representation and show that it can be effectively used for blind source separation. The efficiency of the proposed method is compared with that of other sparse representation methods through Hessian evaluation.
Perimetric Complexity of Binary Digital Images (RSARANYADEVI)
Perimetric complexity is a measure of the complexity of binary pictures. It is defined as the sum of the inside and outside perimeters of the foreground, squared, divided by the foreground area, divided by 4π. Difficulties arise when this definition is applied to digital images composed of binary pixels. In this article we identify these problems and propose solutions. Perimetric complexity is often used as a measure of visual complexity, in which case it should take into account the limited resolution of the visual system. We propose a measure of visual perimetric complexity that meets this requirement.
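The definition above can be sketched directly; the perimeter estimate below naively counts exposed pixel edges, which is exactly the kind of crude digital measure the article argues needs refinement:

```python
import numpy as np

def perimetric_complexity(mask):
    """Perimetric complexity C = P^2 / (4*pi*A) of a binary image.
    P is estimated naively as the number of exposed pixel edges;
    A is the foreground area in pixels."""
    m = np.pad(mask.astype(int), 1)
    a = m.sum()
    # foreground/background transitions along rows and columns
    p = np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()
    return p * p / (4 * np.pi * a)

# a filled 10x10 square: P = 40, A = 100, so C = 1600 / (400*pi) = 4/pi
square = np.ones((10, 10))
print(perimetric_complexity(square))  # ~ 1.2732
```

Under this convention a disk approaches the minimum complexity of 1, while intricate shapes score much higher.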
LOCAL DISTANCE AND DEMPSTER-SHAFER FOR MULTI-FOCUS IMAGE FUSION (sipij)
This work proposes a new method of image fusion using Dempster-Shafer theory and local variability (DST-LV). The method takes into account the behaviour of each pixel with its neighbours: it consists in calculating the quadratic distance between the value of each pixel I(x, y) and the values of all its neighbouring pixels. Local variability is used to determine the mass function defined in Dempster-Shafer theory. The two classes of Dempster-Shafer theory studied are the fuzzy part and the focused part. The results of the proposed method are significantly better than those of other methods.
AUTOMATIC THRESHOLDING TECHNIQUES FOR OPTICAL IMAGES (sipij)
Image segmentation is one of the important tasks in computer vision and image processing. Thresholding is a simple but highly effective segmentation technique: it classifies image pixels into object and background depending on the relation between the grey level value of each pixel and the threshold. The Otsu technique is a robust and fast thresholding technique for most real-world images with regard to uniformity and shape measures; it splits the object from the background by maximizing the separability between the classes. Our aims in this work are (1) to compare five thresholding techniques (the Otsu, valley emphasis, neighborhood valley emphasis, variance and intensity contrast, and variance discrepancy techniques) on different applications, and (2) to determine the thresholding technique that best extracts the object from the background. Our experimental results show that each thresholding technique performs best on a specific type of bimodal image.
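Otsu's criterion, maximizing the between-class variance over all candidate thresholds, can be sketched in a few lines; the synthetic bimodal image is illustrative only:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: choose the threshold t maximizing the between-class
    variance sigma_b^2(t) = (mu_T*omega(t) - mu(t))^2 / (omega(t)*(1 - omega(t)))."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                    # grey-level probabilities
    omega = np.cumsum(p)                     # class-0 probability up to t
    mu = np.cumsum(p * np.arange(levels))    # first moment up to t
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))

# clearly bimodal synthetic image: modes near 60 and 190
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000),
                              rng.normal(190, 10, 5000)]), 0, 255)
img = img.astype(np.uint8).reshape(100, 100)
print(otsu_threshold(img))  # threshold falls between the two modes
</imports>```

The cumulative-sum formulation evaluates all 256 candidate thresholds in a single pass over the histogram.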
OTSU Thresholding Method for Flower Image Segmentation (ijceronline)
Segmentation is a basic process in image processing, and its quality strongly influences every subsequent step. In this paper we propose flower image segmentation using OTSU thresholding; the Oxford flower collection is used for evaluation. Various segmentation techniques and algorithms are available, so we first describe the segmentation techniques and then the OTSU thresholding method, which gives good results compared with the other methods and is also simple. Segmentation subdivides the image into different parts. The CIE L*a*b* color space is used in thresholding for better results, with thresholding applied separately to each of the L, a, and b components; features such as shape, color, and texture can then be extracted accordingly. Finally, results on the flower images are shown.
Study on Contrast Enhancement with the help of Associate Regions Histogram Eq... (IJSRD)
Histogram equalization is a simple and extensively used image contrast enhancement technique. Its crucial drawback is that it changes the brightness of the image. To overcome this drawback, different histogram equalization methods have been proposed; these methods preserve the brightness of the result image but do not produce a natural look. This paper therefore attempts to bridge the gap: the processed associate regions are collected into one image. The simulation results show that the algorithm not only improves image information successfully but also preserves the original image luminance well enough for the method to be used directly in video systems.
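Plain global histogram equalization, the baseline whose brightness shift the paper targets, is just a lookup through the image's cumulative distribution function:

```python
import numpy as np

def equalize(img):
    """Global histogram equalization: map each grey level through the
    image's cumulative distribution function (CDF)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

# a low-contrast ramp confined to [64, 190] gets stretched to full range,
# but its mean brightness shifts -- the drawback the paper addresses
img = np.arange(64, 192, 2, dtype=np.uint8).reshape(8, 8)
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

Brightness-preserving variants typically equalize sub-histograms or regions separately and recombine them, as the paper's associate-region approach does.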
New approach to multispectral image fusion based on a weighted merge (IAEME Publication)
The document discusses a new approach for multispectral image fusion based on a weighted merge. It models each image band as a mixture of Gaussian distributions using the Expectation Maximization algorithm. The weighted coefficients for each band are extracted based on the similarity between the band and other bands, calculated using a cost function of the distance between the Gaussian distribution parameters. This new approach aims to reduce redundancy while emphasizing complementary data between bands.
Eugen Zaharescu - STATEMENT OF RESEARCH INTEREST (Eugen Zaharescu)
- The document is a research statement from Dr. Eugen ZAHARESCU that outlines his interests in mathematical morphology, image analysis, and ontology generation.
- His research has included extending mathematical morphology theory to multivariate images and exploring morphological operators in logarithmic image processing.
- More recently, he has developed algorithms and tools for machine learning, computer vision, and image understanding by applying mathematical concepts from morphology.
Developing 3D Viewing Model from 2D Stereo Pair with its Occlusion Ratio (CSCJournals)
We intend to make a 3D model from a stereo pair of images by using a novel method of local matching in the pixel domain for calculating horizontal disparities. We also find the occlusion ratio using the stereo pair, followed by the use of the Edge Detection and Image SegmentatiON (EDISON) system on one of the images, which provides a complete toolbox for discontinuity-preserving filtering, segmentation, and edge detection. Instead of assigning a disparity value to each pixel, a disparity plane is assigned to each segment. We then warp the segment disparities to the original image to get our final 3D viewing model.
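The local pixel-domain matching step can be sketched with a basic sum-of-squared-differences matcher; this is a minimal illustration under the assumption of purely horizontal shift, not the paper's exact algorithm (and EDISON is not involved here):

```python
import numpy as np

def disparity_ssd(left, right, max_d=4, win=3):
    """Per-pixel horizontal disparity via local sum-of-squared-differences
    matching. `right` is assumed to show the scene shifted to the left."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            costs = []
            for d in range(min(max_d, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
                costs.append(((ref - cand) ** 2).sum())
            disp[y, x] = int(np.argmin(costs))  # best-matching shift
    return disp

# synthetic pair: `right` is `left` shifted two pixels to the left
left = (3 * np.arange(20) + 5 * np.arange(12)[:, None]) % 17
right = np.roll(left, -2, axis=1)
print(disparity_ssd(left, right)[5, 10])  # -> 2
```

Assigning one disparity plane per segment, as the paper does, then amounts to fitting a plane to these per-pixel estimates within each EDISON segment.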
Object Shape Representation by Kernel Density Feature Points Estimator (cscpconf)
This paper introduces an object shape representation using Kernel Density Feature Points
Estimator (KDFPE). In this method we obtain the density of feature points within defined rings
around the centroid of the image. The Kernel Density Feature Points Estimator is then applied to
the vector of the image. KDFPE is invariant to translation, scale and rotation. This method of
image representation shows improved retrieval rate when compared to Density Histogram
Feature Points (DHFP) method. An analytical justification of the method is given, and the comparison with DHFP demonstrates its robustness.
This document proposes a new algorithm called 3D continuous dynamic programming (3DCDP) for segmentation-free registration and optimal matching of 3D voxel patterns. 3DCDP allows for matching of full voxels inside 3D objects, not just surfaces. It matches a reference 3D image to parts of an input image in a segmentation-free manner. The algorithm extends 2D continuous dynamic programming to 3D by combining three 2DCDP planes. Experiments show it can accurately match a reference image embedded within an input image.
This paper proposes a method to jointly match multiple 3D meshes by maximizing pairwise feature affinities and cycle consistency across models. It formulates the matching problem as a low-rank matrix recovery problem and uses nuclear norm relaxation for rank minimization. An alternating minimization algorithm is used to efficiently solve the optimization problem. Experimental results show the method provides an order of magnitude speed-up compared to state-of-the-art algorithms based on semi-definite programming, while achieving competitive performance. It also introduces a distortion term to the pairwise matching to help match reflexive sub-parts of models distinctly.
This document discusses different methods of image segmentation: thresholding, edge-based segmentation, and region-based segmentation. It provides details on various thresholding techniques including basic global thresholding, Otsu's method, multiple thresholding, and variable thresholding. For edge-based segmentation, it mentions basic edge detection, the Marr-Hildreth edge detector, and watersheds. Finally, it covers region-based segmentation and provides an algorithm for region growing.
This document compares three image restoration techniques - Iterated Geometric Harmonics, Markov Random Fields, and Wavelet Decomposition - for removing noise from images. It describes each technique and the process used to test them. Noise was artificially added to images using different noise generation functions. Wavelet Decomposition and Markov Random Fields were then used to detect the noise locations. These noise locations were then used to create versions of the noisy images suitable for reconstruction via Iterated Geometric Harmonics. The reconstructed images were then compared to the original to evaluate the performance of each technique.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN... (IDES Editor)
In this paper a method is proposed to discriminate real-world scenes into natural and manmade scenes of similar depth. The global roughness of a scene image varies as a function of image depth: increasing depth leads to increasing roughness in manmade scenes, while natural scenes, on the contrary, exhibit smooth behavior at greater depth. This particular arrangement of pixels in the scene structure can be well explained by the local texture information in a pixel and its neighborhood. Our proposed method analyses the local texture information of a scene image using a texture unit matrix. For the final classification we use both supervised and unsupervised learning, with a K-Nearest Neighbor (KNN) classifier and a Self-Organizing Map (SOM) respectively. The technique's very low computational complexity makes it useful for online classification.
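The texture unit idea can be sketched as follows; this uses the classical base-3 coding of He and Wang as an assumed baseline (the papers here also study a base-5 variant, whose coding is not reproduced):

```python
import numpy as np

def texture_unit_spectrum(img):
    """Texture spectrum built from texture unit numbers.
    Each interior pixel's 8 neighbours are compared with the centre and
    coded 0 (smaller), 1 (equal) or 2 (larger); the codes are read as
    base-3 digits, giving a texture unit number in [0, 6560]."""
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    centre = img[1:-1, 1:-1]
    units = np.zeros((h - 2, w - 2), dtype=int)
    for i, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        units += np.where(nb < centre, 0, np.where(nb == centre, 1, 2)) * 3 ** i
    return np.bincount(units.ravel(), minlength=3 ** 8)

# on a perfectly flat image every neighbour is 'equal', so every interior
# pixel gets the all-ones unit number sum(3**i for i in range(8)) = 3280
flat = np.full((5, 5), 7)
print(texture_unit_spectrum(flat)[3280])  # -> 9
```

The resulting spectrum (or a matrix built from it) serves as the feature vector fed to the KNN or SOM classifier.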
Texture Unit based Approach to Discriminate Manmade Scenes from Natural Scenes (idescitation)
This document summarizes a research paper that proposes a method to discriminate between natural and manmade scenes using texture analysis. It analyzes local texture information in images using a "texture unit matrix" approach. Texture units characterize the texture of a pixel and its neighbors. Texture unit matrices are generated from images and used to form feature vectors. A self-organizing map (SOM) classifier is then used to classify images as natural or manmade based on these feature vectors. The researchers tested their method on databases of "near" scenes within 10 meters and "far" scenes about 500 meters away. Their results found that analyzing the minimum texture unit matrix in a base-5 approach provided the most accurate classifications between natural and manmade scenes.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Blind Image Separation Using Forward Difference Method (FDM) (sipij)
In this paper, blind image separation is performed by exploiting the property of sparseness to represent images. A new sparse representation, called the forward difference method, is proposed. It is known that most of the independent component analysis (ICA) basis functions extracted from images are sparse and give an unreliable sparseness measure. In the proposed method, the image mixture is first transformed into sparse images. These images are divided into blocks, and for each block the ε0-norm sparseness measure is applied. The block having the most sparseness is used to determine the separation matrix. The efficiency of the proposed method is compared with that of other sparse representation functions.
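The transform-then-score pipeline can be sketched as below; the block size and the thresholded l0-style count are assumptions standing in for the abstract's ε0-norm measure:

```python
import numpy as np

def forward_difference(img):
    """Horizontal forward difference, a simple sparsifying transform:
    natural images are mostly smooth, so most differences are near zero."""
    return np.diff(img.astype(float), axis=1)

def sparsest_block(diff, block=8, tol=1e-6):
    """Locate the block with the fewest non-zero coefficients (an l0-style
    count standing in for the sparseness measure)."""
    best_nnz, best_pos = None, None
    for r in range(0, diff.shape[0] - block + 1, block):
        for c in range(0, diff.shape[1] - block + 1, block):
            nnz = int(np.sum(np.abs(diff[r:r + block, c:c + block]) > tol))
            if best_nnz is None or nnz < best_nnz:
                best_nnz, best_pos = nnz, (r, c)
    return best_pos, best_nnz

# flat left half, textured right half: the flat region is sparsest
img = np.zeros((8, 20))
img[:, 10::2] = 10
print(sparsest_block(forward_difference(img)))  # -> ((0, 0), 0)
```

In the full method, the winning block's coefficients across the mixtures are what determine the separation matrix.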
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
Zaharescu, Eugen - Color Image Indexing Using Mathematical Morphology (Eugen Zaharescu)
This document discusses using mathematical morphology techniques for color image indexing. It proposes using morphological signatures as powerful descriptions of image content. Morphological signatures are defined as a series of morphological operations (openings and closings) using structuring elements of varying sizes. For color images, morphological feature extraction algorithms are used that include more complex operators like color gradient, homotopic skeleton, and Hit-or-Miss transform. Examples applying this approach on real images are also provided.
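A minimal sketch of the opening-based part of such a signature, using plain binary morphology with square structuring elements (an assumption; the paper's descriptors also use closings, colour gradients, the homotopic skeleton and the Hit-or-Miss transform):

```python
import numpy as np

def dilate(img, k):
    """Binary dilation with a (2k+1)x(2k+1) square structuring element."""
    p = np.pad(img, k)
    h, w = img.shape
    return np.max([p[dy:dy + h, dx:dx + w]
                   for dy in range(2 * k + 1) for dx in range(2 * k + 1)], axis=0)

def erode(img, k):
    """Binary erosion; the border is padded with foreground so that
    objects are not eroded by the image edge."""
    p = np.pad(img, k, constant_values=1)
    h, w = img.shape
    return np.min([p[dy:dy + h, dx:dx + w]
                   for dy in range(2 * k + 1) for dx in range(2 * k + 1)], axis=0)

def opening(img, k):
    return dilate(erode(img, k), k)

def morphological_signature(img, max_k=3):
    """Foreground area surviving openings of growing size: a simple
    scalar series in the spirit of a morphological signature."""
    return [int(opening(img, k).sum()) for k in range(1, max_k + 1)]

# a 6x6 blob plus an isolated pixel: the pixel vanishes at k=1, and the
# blob survives until the structuring element exceeds its width
img = np.zeros((12, 12), dtype=int)
img[2:8, 2:8] = 1
img[10, 10] = 1
print(morphological_signature(img))  # -> [36, 36, 0]
```

How quickly the series decays encodes the granulometry of the image content, which is what makes such signatures useful index keys.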
COMPARATIVE STUDY OF DISTANCE METRICS FOR FINDING SKIN COLOR SIMILARITY OF TW... (cscpconf)
This paper describes a comparative study of the performance of existing distance metrics, namely Manhattan, Euclidean, Vector Cosine Angle, and Modified Euclidean distance, for finding the similarity of complexion by calculating the distance between the skin colors of two color facial images. The existing methodologies have been tested on 110 male and 40 female facial images taken from the FRAV2D database. To verify the results obtained from the existing methodologies, an opinion poll of 100 people was taken. The experimental results show that the results obtained with the Manhattan, Euclidean, and Vector Cosine Angle distances contradict the survey results in 80% of cases, while for the Modified Euclidean distance methodology the contradiction arises in 60% of cases. The present work has been implemented using Matlab 7.
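Three of the four compared metrics are standard and can be sketched directly (the Modified Euclidean distance is paper-specific and omitted here; the RGB values below are illustrative, not FRAV2D data):

```python
import numpy as np

def manhattan(a, b):
    return float(np.abs(a - b).sum())

def euclidean(a, b):
    return float(np.sqrt(((a - b) ** 2).sum()))

def cosine_angle_distance(a, b):
    """Vector Cosine Angle distance: 1 - cos(theta) between the vectors."""
    return float(1 - a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# mean skin colours of two faces (illustrative RGB values)
s1 = np.array([198.0, 142.0, 120.0])
s2 = np.array([176.0, 120.0, 101.0])
for d in (manhattan, euclidean, cosine_angle_distance):
    print(d.__name__, round(d(s1, s2), 4))
```

Note that the cosine distance ignores vector magnitude, so two skin tones differing only in brightness score as nearly identical, one reason the metrics can disagree with human judgments.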
Lec11: Active Contour and Level Set for Medical Image Segmentation (Ulaş Bağcı)
Topics: Active Contour (Snake); Level Set; applications. Further slide topics from the same lecture series: enhancement, noise reduction, and signal processing; medical image registration, segmentation, and visualization; machine learning and deep learning in medical imaging and radiology; shape modeling/analysis of medical images; Fuzzy Connectivity (FC) affinity functions (Absolute FC, Relative FC, and Iterative Relative FC), with example applications such as segmentation of airways and airway walls using an RFC-based method; energy functionals with data and smoothness terms; Graph Cut (min cut / max flow) and its applications in radiology images.
ALEXANDER FRACTIONAL INTEGRAL FILTERING OF WAVELET COEFFICIENTS FOR IMAGE DEN... (sipij)
The present paper proposes an efficient denoising algorithm which works well for images corrupted with Gaussian and speckle noise. The denoising algorithm utilizes the Alexander fractional integral filter, which works by constructing fractional mask windows computed using the Alexander polynomial. Prior to applying the designed filter, the corrupted image is decomposed using the symlet wavelet, from which only the horizontal, vertical, and diagonal components are denoised using the Alexander integral filter. A significant increase in reconstruction quality was noticed when the approach was applied to the wavelet-decomposed image rather than directly to the noisy image. Quantitatively, the results are evaluated using the peak signal-to-noise ratio (PSNR), which was 30.8059 on average for images corrupted with Gaussian noise and 36.52 for images corrupted with speckle noise, clearly outperforming the existing methods.
This document presents research on content-based image retrieval using color and texture features. It proposes using both quadratic distance based on color histograms to measure color similarity, and pyramid structure wavelet transforms and gray level co-occurrence matrix (GLCM) to measure texture. For color features, quadratic distance is calculated between color histograms to retrieve similar images based on color. For texture, pyramid structure wavelet transforms are used to decompose images into sub-bands and calculate energy levels, while GLCM extracts texture statistics. The methods are evaluated on a dataset of 10000 images and results show the integrated approach of color and texture features provides more accurate and faster retrieval compared to individual features.
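The quadratic histogram distance mentioned above has a standard quadratic-form shape; the toy similarity matrix below is one common choice and an assumption, since the paper's exact matrix is not reproduced here:

```python
import numpy as np

def quadratic_distance(h1, h2, A):
    """Quadratic-form histogram distance d = sqrt((h1-h2)^T A (h1-h2)),
    where A[i, j] encodes the perceptual similarity of bins i and j."""
    d = h1 - h2
    return float(np.sqrt(d @ A @ d))

# toy similarity matrix for 4 bins: a_ij = 1 - |i - j| / d_max
n = 4
idx = np.arange(n)
A = 1 - np.abs(idx[:, None] - idx[None, :]) / (n - 1)

h1 = np.array([0.5, 0.3, 0.1, 0.1])
h2 = np.array([0.1, 0.1, 0.3, 0.5])
print(quadratic_distance(h1, h2, A))          # cross-bin aware distance
print(quadratic_distance(h1, h2, np.eye(n)))  # reduces to Euclidean distance
```

Unlike bin-wise Euclidean distance, the cross-bin terms in A let perceptually similar colours partially match even when they fall into different histogram bins.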
Blind Image Seperation Using Forward Difference Method (FDM)sipij
In this paper, blind image separation is performed, exploiting the property of sparseness to represent images. A new sparse representation called forward difference method is proposed. It is known that most of the independent component analysis (ICA) basis functions, extracted from images are sparse and gives unreliable sparseness measure. In the proposed method, the image mixture is first transformed to sparse images. These images are divided into blocks and for each block the sparseness measure ε0 norm is applied. The block having the most sparseness is considered to determine the separation matrix. The efficiency of the proposed method is compared with other sparse representation functions.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
Zaharescu__Eugen Color Image Indexing Using Mathematical MorphologyEugen Zaharescu
This document discusses using mathematical morphology techniques for color image indexing. It proposes using morphological signatures as powerful descriptions of image content. Morphological signatures are defined as a series of morphological operations (openings and closings) using structuring elements of varying sizes. For color images, morphological feature extraction algorithms are used that include more complex operators like color gradient, homotopic skeleton, and Hit-or-Miss transform. Examples applying this approach on real images are also provided.
COMPARATIVE STUDY OF DISTANCE METRICS FOR FINDING SKIN COLOR SIMILARITY OF TW...cscpconf
This paper describes the comparative study of performance between the existing distance metrics like Manhattan, Euclidean, Vector Cosine Angle and Modified Euclidean distance for finding the similarity of complexion by calculating the distance between the skin colors of two color facial images. The existing methodologies have been tested on 110 male and 40 female facial images taken from FRAV2D database. To verify the result obtained from the existing methodologies an opinion poll of 100 peoples have been taken. The experimental result shows that the result obtained by the methodologies of Manhattan, Euclidean and Vector Cosine Angle distance contradict the survey result in 80% cases and for Modified Euclidean distance methodology the contradiction arises in 60% cases. The present work has been implemented using Matlab 7.
Lec11: Active Contour and Level Set for Medical Image SegmentationUlaş Bağcı
ActiveContour(Snake) • LevelSet
• Applications
Enhancement, Noise Reduction, and Signal Processing • MedicalImageRegistration • MedicalImageSegmentation • MedicalImageVisualization • Machine Learning in Medical Imaging • Shape Modeling/Analysis of Medical Images Deep Learning in Radiology Fuzzy Connectivity (FC) – Affinity functions • Absolute FC • Relative FC (and Iterative Relative FC) • Successful example applications of FC in medical imaging • Segmentation of Airway and Airway Walls using RFC based method Energyfunctional – Data and Smoothness terms • GraphCut – Min cut – Max Flow • ApplicationsinRadiologyImages
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
ALEXANDER FRACTIONAL INTEGRAL FILTERING OF WAVELET COEFFICIENTS FOR IMAGE DEN...sipij
The present paper, proposes an efficient denoising algorithm which works well for images corrupted with
Gaussian and speckle noise. The denoising algorithm utilizes the alexander fractional integral filter which
works by the construction of fractional masks window computed using alexander polynomial. Prior to the
application of the designed filter, the corrupted image is decomposed using symlet wavelet from which only
the horizontal, vertical and diagonal components are denoised using the alexander integral filter.
Significant increase in the reconstruction quality was noticed when the approach was applied on the
wavelet decomposed image rather than applying it directly on the noisy image. Quantitatively the results
are evaluated using the peak signal to noise ratio (PSNR) which was 30.8059 on an average for images
corrupted with Gaussian noise and 36.52 for images corrupted with speckle noise, which clearly
outperforms the existing methods.
This document presents research on content-based image retrieval using color and texture features. It proposes using both quadratic distance based on color histograms to measure color similarity, and pyramid structure wavelet transforms and gray level co-occurrence matrix (GLCM) to measure texture. For color features, quadratic distance is calculated between color histograms to retrieve similar images based on color. For texture, pyramid structure wavelet transforms are used to decompose images into sub-bands and calculate energy levels, while GLCM extracts texture statistics. The methods are evaluated on a dataset of 10000 images and results show the integrated approach of color and texture features provides more accurate and faster retrieval compared to individual features.
A combined method of fractal and glcm features for mri and ct scan images cla...sipij
Fractal analysis has been shown to be useful in image processing for characterizing shape and gray-scale
complexity. The fractal feature is a compact descriptor used to give a numerical measure of the degree of
irregularity of the medical images. This descriptor property does not give ownership of the local image
structure. In this paper, we present a combination of this parameter based on Box Counting with GLCM
Features. This powerful combination has proved good results especially in classification of medical texture
from MRI and CT Scan images of trabecular bone. This method has the potential to improve clinical
diagnostics tests for osteoporosis pathologies.
11.optimal nonlocal means algorithm for denoising ultrasound imageAlexander Decker
The document presents a new algorithm for denoising ultrasound images called the optimal nonlocal means algorithm. It calculates the mean distance of all pixel neighborhoods in the image, rather than totaling all neighborhood distances as in the original nonlocal means algorithm. The proposed algorithm exhibits better performance in noise removal, visual quality of restored images, and mean square error compared to the original algorithm, as evidenced by experiments on phantom and normal ultrasound images. Numerical measurements of SNR, RMSE, and PSNR support that calculating nonlocal means with mean neighborhood distances provides a better method for image denoising.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
The document proposes a new algorithm to reduce blocking artifacts in compressed images using a combination of the SAWS technique, Fuzzy Impulse Artifact Detection and Reduction Method (FIDRM), and Noise Adaptive Fuzzy Switching Median Filter (NAFSM). FIDRM uses fuzzy rules to detect noisy pixels, while NAFSM uses a median filter to correct pixels based on local information. Experimental results on test images show the proposed approach achieves better PSNR than other deblocking methods.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
Call for paper 2012, hard copy of Certificate, research paper publishing, where to publish research paper,
journal publishing, how to publish research paper, Call For research paper, international journal, publishing a paper, IJCER, journal of science and technology, how to get a research paper published, publishing a paper, publishing of journal, publishing of research paper, research and review articles, IJCER Journal, How to publish your research paper, publish research paper, open access engineering journal, Engineering journal, Mathematics journal, Physics journal, Chemistry journal, Computer Engineering, Computer Science journal, how to submit your paper, peer review journal, indexed journal, research and review articles, engineering journal, www.ijceronline.com, research journals,
yahoo journals, bing journals, International Journal of Computational Engineering Research, Google journals, hard copy of Certificate,
journal of engineering, online Submission
Content Based Image Retrieval Using Gray Level Co-Occurance Matrix with SVD a...ijcisjournal
In this paper, gray level co-occurrence matrix, gray level co-occurrence matrix with singular value
decomposition and local binary pattern are presented for content based image retrieval. Based upon the
feature vector parameters of energy, contrast, entropy and distance metrics such as Euclidean distance,
Canberra distance, Manhattan distance the retrieval efficiency, precision, and recall of the images are
calculated. The retrieval results of the proposed method are tested on Corel-1k database. The results after
being investigated shows a significant improvement in terms of average retrieval rate, average retrieval
precision and recall of different algorithms such as GLCM, GLCM & SVD, LBP with radius one and LBP
with radius two based on different distance metrics.
This document presents a survey of contemporary research on image segmentation through clustering techniques. It discusses various clustering approaches including exclusive clustering (e.g. k-means), overlapping clustering (e.g. fuzzy c-means), hierarchical clustering, and probabilistic D-clustering. It provides details on the algorithms and steps involved in each technique. The paper analyzes different clustering methods for image segmentation and concludes that fuzzy c-means is superior but has high computational costs, while probabilistic D-clustering can avoid this issue.
This document provides a summary of different image segmentation techniques through clustering. It discusses exclusive clustering methods like k-means clustering, overlapping clustering methods like fuzzy c-means, and hierarchical clustering. The paper reviews these clustering approaches and their application to image segmentation, which is the process of partitioning a digital image into multiple segments. Image segmentation through clustering has various uses including computer vision, medical imaging, and remote sensing.
This document provides an overview of texture analysis in image processing. It defines texture as a feature used to classify image regions based on spatial patterns of intensities. Texture analysis involves both texture classification, which identifies textured regions, and texture segmentation, which determines boundaries between regions. Common quantitative texture measures described include range, variance, gray level co-occurrence matrices (GLCM), and gray level difference statistics. GLCM capture the spatial relationship of intensity pairs and various statistical features can be extracted from them, such as contrast and homogeneity. Problems with GLCM include how to select the displacement vector. Texture analysis algorithms typically operate within a window across the image.
A Novel Feature Extraction Scheme for Medical X-Ray ImagesIJERA Editor
X-ray images are gray scale images with almost the same textural characteristic. Conventional texture or color
features cannot be used for appropriate categorization in medical x-ray image archives. This paper presents a
novel combination of methods like GLCM, LBP and HOG for extracting distinctive invariant features from Xray
images belonging to IRMA (Image Retrieval in Medical applications) database that can be used to perform
reliable matching between different views of an object or scene. GLCM represents the distributions of the
intensities and the information about relative positions of neighboring pixels of an image. The LBP features are
invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination A
HOG feature vector represents local shape of an object, having edge information at plural cells. These features
have been exploited in different algorithms for automatic classification of medical X-ray images. Excellent
experimental results obtained in true problems of rotation invariance, particular rotation angle, demonstrate that
good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary
patterns.
Local Distance and Dempster-Dhafer for Multi-Focus Image Fusionsipij
This work proposes a new method of fusion image using Dempster-Shafer theory and local variability
(DST-LV). This method takes into account the behaviour of each pixel with its neighbours. It consists in
calculating the quadratic distance between the value of the pixel I (x, y) of each point and the value of all
the neighbouring pixels. Local variability is used to determine the mass function defined in DempsterShafer theory. The two classes of Dempster-Shafer theory studied are : the fuzzy part and the focused part.
The results of the proposed method are significantly better when comparing them to results of other
methods.
Local Distance and Dempster-Dhafer for Multi-Focus Image Fusionsipij
The document describes a new method for fusing multi-focus images using Dempster-Shafer theory and local variability (DST-LV). The method calculates the quadratic distance between each pixel value and its neighboring pixels to determine local variability, which is then used to define mass functions in Dempster-Shafer theory. The proposed method was tested on 150 images and found to produce higher quality fused images compared to other popular fusion methods like PCA, DWT, and gradient bilateral, with lower RMSE values. Potential applications of the method include image fusion for drones, medical imaging, and food quality inspection.
SEGMENTATION USING ‘NEW’ TEXTURE FEATUREacijjournal
This document summarizes a research paper that proposes a new texture feature descriptor called "NEW" for image segmentation. The NEW descriptor labels neighboring pixels and forms eight-component binary vectors to represent texture. Fuzzy c-means clustering is then used to segment images into regions based on texture. Experimental results on texture images from the Brodatz dataset show the NEW descriptor can successfully segment images into the correct number of texture regions. Accuracy, precision, and recall metrics are used to evaluate the segmentation performance.
Different Image Segmentation Techniques for Dental Image ExtractionIJERA Editor
Image segmentation is the process of partitioning a digital image into multiple segments and often used to locate objects and boundaries (lines, curves etc.). In this paper, we have proposed image segmentation techniques: Region based, Texture based, Edge based. These techniques have been implemented on dental radiographs and gained good results compare to conventional technique known as Thresholding based technique. The quantitative results show the superiority of the image segmentation technique over three proposed techniques and conventional technique.
The document discusses a method for 3D object recognition from 2D images using centroidal representation. It involves several steps: filtering and binarizing the image, detecting edges, calculating the object center point, extracting features around the centroid, and creating mathematical models using wavelet transforms and autoregression. Centroidal samples represent distances from the center to the boundary every 45 degrees. Wavelet transforms and autoregression are used to create scale and position invariant representations of the object for recognition.
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUEScscpconf
In the first study [1], a combination of K-means, watershed segmentation method, and Difference In Strength (DIS) map were used to perform image segmentation and edge detection
tasks. We obtained an initial segmentation based on K-means clustering technique. Starting from this, we used two techniques; the first is watershed technique with new merging
procedures based on mean intensity value to segment the image regions and to detect their boundaries. The second is edge strength technique to obtain accurate edge maps of our images without using watershed method. In this technique: We solved the problem of undesirable over segmentation results produced by the watershed algorithm, when used directly with raw data images. Also, the edge maps we obtained have no broken lines on entire image. In the 2nd study level set methods are used for the implementation of curve/interface evolution under various forces. In the third study the main idea is to detect regions (objects) boundaries, to isolate and extract individual components from a medical image. This is done using an active contours to detect regions in a given image, based on techniques of curve evolution, Mumford–Shah functional for segmentation and level sets. Once we classified our images into different intensity regions based on Markov Random Field. Then we detect regions whose boundaries are not necessarily defined by gradient by minimize an energy of Mumford–Shah functional forsegmentation, where in the level set formulation, the problem becomes a mean-curvature which will stop on the desired boundary. The stopping term does not depend on the gradient of the image as in the classical active contour. The initial curve of level set can be anywhere in the image, and interior contours are automatically detected. The final image segmentation is one
closed boundary per actual region in the image.
AN APPLICATION OF Gd -METRIC SPACES AND METRIC DIMENSION OF GRAPHSFransiskeran
The idea of metric dimension in graph theory was introduced by P J Slater in [2]. It has been found
applications in optimization, navigation, network theory, image processing, pattern recognition etc.
Several other authors have studied metric dimension of various standard graphs. In this paper we
introduce a real valued function called generalized metric → + Gd
: X × X × X R where X = r(v /W) =
{(d(v,v1
),d(v,v2
),...,d(v,vk
/) v∈V (G))}, denoted Gd
and is used to study metric dimension of graphs. It
has been proved that metric dimension of any connected finite simple graph remains constant if Gd
numbers of pendant edges are added to the non-basis vertices.
The idea of metric dimension in graph theory was introduced by P J Slater in [2]. It has been found
applications in optimization, navigation, network theory, image processing, pattern recognition etc.
Several other authors have studied metric dimension of various standard graphs. In this paper we
introduce a real valued function called generalized metric G X × X × X ® R+ d : where X = r(v /W) =
{(d(v,v1),d(v,v2 ),...,d(v,v ) / v V (G))} k Î , denoted d G and is used to study metric dimension of graphs. It
has been proved that metric dimension of any connected finite simple graph remains constant if d G
numbers of pendant edges are added to the non-basis vertices.
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol.2, No.2, April 2012
DOI : 10.5121/ijcseit.2012.2213
Bino Sebastian V¹, A. Unnikrishnan² and Kannan Balakrishnan¹
¹Department of Computer Applications, Cochin University of Science and Technology, Cochin
binosebastianv@gmail.com, mullayilkannan@gmail.com
²Scientist ‘G’, Associate Director, Naval Physical and Oceanographic Laboratory, Cochin
unnikrishnan_a@live.com
ABSTRACT
Grey Level Co-occurrence Matrices (GLCM) are one of the earliest techniques used for image texture analysis. In this paper we define a new feature called trace, extracted from the GLCM, and discuss its implications for texture analysis in the context of Content Based Image Retrieval (CBIR). The theoretical extension of the GLCM to n-dimensional grey scale images is also discussed. The results indicate that trace features outperform Haralick features when applied to CBIR.
KEYWORDS
Grey Level Co-occurrence Matrix, Texture Analysis, Haralick Features, N-Dimensional Co-occurrence
Matrix, Trace, CBIR
1. INTRODUCTION
Texture is an important characteristic used in identifying regions of interest in an image. The Grey Level Co-occurrence Matrix (GLCM) is one of the earliest methods for texture feature extraction, proposed by Haralick et al. [1] back in 1973. Since then it has been widely used in many texture analysis applications and has remained an important feature extraction method in the domain of texture analysis. Haralick extracted fourteen features from the GLCMs to characterize texture [2]. Many quantitative measures of texture are found in the literature [3, 4, 5, 6]. Dacheng et al. [7] used 3D co-occurrence matrices in CBIR applications. Kovalev and Petrov [8] used special multidimensional co-occurrence matrices for object recognition and matching. Multidimensional texture analysis, which is used in clustering techniques, was introduced in [9]. The objective of this work is to generalize the concept of co-occurrence matrices to n-dimensional Euclidean spaces and to extract more features from the matrix. The newly defined features are found to be useful in CBIR applications. This paper is organized as follows. The theoretical development is presented in section 2, where the generalized co-occurrence matrices and trace are defined and the number of possible co-occurrence matrices is evaluated. Section 3 illustrates the use of trace in CBIR by comparing its performance with the Haralick features. Section 4 concludes the paper and outlines future work.
2. THEORETICAL BACKGROUND
In 1973 Haralick introduced the co-occurrence matrix and texture features for the automated classification of rocks into six categories [1]. These features are widely used for many kinds of images. We now present the definitions and background needed to understand the computation of the GLCM.
2.1. Construction of the Traditional Co-occurrence Matrices
Let I be a given grey scale image and let N be the total number of grey levels in the image. The Grey Level Co-occurrence Matrix defined by Haralick is a square matrix G of order N, where the (i, j)th entry of G represents the number of occasions a pixel with intensity i is adjacent to a pixel with intensity j. The normalized co-occurrence matrix is obtained by dividing each element of G by the total number of co-occurrence pairs in G. The adjacency can be defined to take place in each of the four directions (horizontal, vertical, left and right diagonal) as shown in Figure 1. The Haralick texture features are calculated for each of these directions of adjacency [10].
Figure 1. The four directions of adjacency for calculating the Haralick texture features
The texture features are calculated by averaging over the four directional co-occurrence matrices.
To extend these concepts to n-dimensional Euclidean space, we precisely define grey scale
images in n-dimensional space and the above mentioned directions of adjacency in n-dimensional
images.
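As a concrete sketch of the construction described above (our own illustration, not the authors' implementation; the function name `glcm` and the brute-force double loop are ours), a single-direction co-occurrence matrix for a 2-D image can be computed as:

```python
import numpy as np

def glcm(image, offset, levels):
    """Count co-occurrences of grey levels (i, j) for pixel pairs
    separated by `offset` = (row step, column step)."""
    g = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = image.shape
    dr, dc = offset
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                g[image[r, c], image[r2, c2]] += 1
    return g

img = np.array([[0, 0, 1],
                [0, 1, 2],
                [0, 2, 3]])
g = glcm(img, (0, 1), 4)   # horizontal adjacency, distance 1
```

The four directional matrices of Figure 1 correspond to the offsets (0, 1), (1, 0), (1, 1) and (1, -1); averaging features over them gives the usual rotation-tolerant Haralick measures.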
2.2. Generalized Gray Scale Images
In order to extend the concept of co-occurrence matrices to n-dimensional Euclidean space, a mathematical model for the above concepts is required. We treat our universal set as $\mathbb{Z}^n$, where $\mathbb{Z}^n = \mathbb{Z} \times \mathbb{Z} \times \cdots \times \mathbb{Z}$ is the Cartesian product of $\mathbb{Z}$, the set of all integers, taken n times with itself. A point (or pixel) $X$ in $\mathbb{Z}^n$ is an n-tuple of the form $X = (x_1, x_2, \ldots, x_n)$, where $x_i \in \mathbb{Z}$ for all $i = 1, 2, \ldots, n$. An image $I$ is a function from a subset of $\mathbb{Z}^n$ to $\mathbb{Z}$; that is, $f : I \to \mathbb{Z}$ where $I \subset \mathbb{Z}^n$. If $X \in I$, then $X$ is assigned an integer $Y = f(X)$, called the intensity of the pixel $X$. Such an image is called a grey scale image in the n-dimensional space $\mathbb{Z}^n$. Volumetric data [11] can be treated as three dimensional images, or images in $\mathbb{Z}^3$.
2.3. Generalized Co-occurrence Matrices
Consider a grey scale image $I$ defined in $\mathbb{Z}^n$. The grey level co-occurrence matrix is defined to be a square matrix $G_d$ of order $N$, where $N$ is the total number of grey levels in the image. The (i, j)th entry of $G_d$ represents the number of times a pixel $X$ with intensity value $i$ is separated from a pixel $Y$ with intensity value $j$ at a particular distance $k$ in a particular direction $d = (d_1, d_2, d_3, \ldots, d_n)$, where the distance $k$ is a nonnegative integer and $d_i \in \{0, k, -k\}$ for all $i = 1, 2, \ldots, n$.
As an illustration, consider the grey scale image in $\mathbb{Z}^3$ with the four intensity values 0, 1, 2 and 3. The image is represented as a three dimensional matrix of size $3 \times 3 \times 3$, whose three slices are

$$\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 2 \\ 0 & 2 & 3 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 2 & 3 \\ 0 & 2 & 3 \\ 0 & 1 & 2 \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} 1 & 3 & 0 \\ 0 & 3 & 1 \\ 3 & 2 & 1 \end{pmatrix}.$$

The three dimensional co-occurrence matrix $G_d$ for this image in the direction $d = (1, 0, 0)$ is the $4 \times 4$ matrix

$$G_d = \begin{pmatrix} 1 & 3 & 2 & 1 \\ 0 & 0 & 3 & 1 \\ 0 & 1 & 0 & 3 \\ 1 & 1 & 1 & 0 \end{pmatrix}.$$

Note that

$$G_{-d} = \begin{pmatrix} 1 & 0 & 0 & 1 \\ 3 & 0 & 1 & 1 \\ 2 & 3 & 0 & 1 \\ 1 & 1 & 3 & 0 \end{pmatrix} = G_d'.$$

It can be seen that whenever $X + d = Y$, we also have $Y + (-d) = X$, so that $G_{-d} = G_d'$, where $G_d'$ is the transpose of $G_d$. Hence $G_d + G_{-d}$ is a symmetric matrix. Since $G_{-d} = G_d'$, we say that $G_d$ and $G_{-d}$ are dependent (or not independent), and the directions $d$ and $-d$ are likewise called dependent (or not independent).
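The worked example above can be reproduced by brute force (a sketch of our own; mapping the paper's $d = (1, 0, 0)$ to a step along the column axis of the array is an assumption inferred from the stated $G_d$):

```python
import numpy as np

def glcm_nd(image, offset, levels):
    """n-dimensional co-occurrence matrix G_d for an integer offset
    vector given in array-index order (slice, row, column for 3-D)."""
    g = np.zeros((levels, levels), dtype=np.int64)
    for idx in np.ndindex(image.shape):
        nbr = tuple(i + o for i, o in zip(idx, offset))
        if all(0 <= v < s for v, s in zip(nbr, image.shape)):
            g[image[idx], image[nbr]] += 1
    return g

# The three slices of the example image in Z^3.
vol = np.array([[[0, 0, 1], [0, 1, 2], [0, 2, 3]],
                [[1, 2, 3], [0, 2, 3], [0, 1, 2]],
                [[1, 3, 0], [0, 3, 1], [3, 2, 1]]])

# The paper's d = (1, 0, 0) steps along the column (x) axis,
# i.e. array-index offset (0, 0, 1) here.
G = glcm_nd(vol, (0, 0, 1), 4)
G_neg = glcm_nd(vol, (0, 0, -1), 4)   # direction -d
```

Comparing `G_neg` with `G.T` confirms the dependence relation $G_{-d} = G_d'$ numerically.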
Theorem: If $X \in \mathbb{Z}^n$, the number of independent directions from $X$ in $\mathbb{Z}^n$ is $\frac{3^n - 1}{2}$.

Proof: Suppose $X \in \mathbb{Z}^n$ and let $Y \in \mathbb{Z}^n$ be such that $X + d = Y$, where $d = (d_1, d_2, d_3, \ldots, d_n)$. We know that $d_i \in \{0, k, -k\}$ if the distance between $X$ and $Y$ is $k$. So we need to count the number of possibilities for forming the direction $d$. There are $n$ positions $d_1, d_2, d_3, \ldots, d_n$, each of which can be filled using any of the three numbers $0$, $k$ or $-k$. By the multiplication principle, this can be done in $3^n$ ways. When all the positions are filled using 0, we have $d = (0, 0, 0, \ldots, 0)$, so that $X + d = Y$ implies $X = Y$. Therefore there are $3^n - 1$ directions from $X$, of which exactly half are independent, since $d$ and $-d$ are dependent. Hence there are $\frac{3^n - 1}{2}$ independent directions from $X$ in $\mathbb{Z}^n$.
If two directions are dependent, the corresponding co-occurrence matrices are transposes of each other. The above theorem therefore indicates that the number of distinct co-occurrence matrices for an n-dimensional image is $\frac{3^n - 1}{2}$.
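The counting argument can be checked by exhaustive enumeration; a small verification sketch of our own, keeping one representative from each dependent pair {d, -d}:

```python
from itertools import product

def independent_directions(n, k=1):
    """Enumerate one representative of each {d, -d} pair of nonzero
    direction vectors with components in {0, k, -k}."""
    reps = set()
    for d in product((0, k, -k), repeat=n):
        if any(d) and tuple(-c for c in d) not in reps:
            reps.add(d)
    return reps

counts = {n: len(independent_directions(n)) for n in (1, 2, 3, 4)}
```

For n = 2 this recovers exactly the four Haralick directions of Figure 1; for n = 3 it gives the thirteen directions available in volumetric data.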
2.4. Normalized Co-Occurrence Matrix
Consider $N = \sum_i \sum_j G_d(i, j)$, the total number of co-occurrence pairs in $G_d$. Let

$$GN_d(i, j) = \frac{1}{N} G_d(i, j).$$

$GN_d$ is called the normalized co-occurrence matrix; its (i, j)th entry is the joint probability of co-occurrence of pixels with intensity $i$ and pixels with intensity $j$ separated by a distance $k$ in a particular direction $d$.
2.5. Trace
In addition to the well known Haralick features such as Angular Second Moment, Contrast,
Correlation etc. listed in [1], we define a new feature from the normalized co-occurrence matrix,
which can be used to identify constant regions in an image. For convenience we consider n=2, so
that the image is a two dimensional grey scale image and the normalized co-occurrence matrix
becomes the traditional Grey Level Co-occurrence Matrix.
Figure 2. Sample images taken from Brodatz texture album
Consider the images taken from the Brodatz texture album given in figure 2. The majority of the
nonzero entries of the co-occurrence matrices lie along the main diagonal [12] so that we treat the
trace (sum of the main diagonal entries) of the normalized co-occurrence matrix as a new feature.
The trace of $GN_d$ is defined as

$$\mathrm{Trace} = \sum_i GN_d(i, i).$$

From the definition of the co-occurrence matrix, an entry on the main diagonal is an (i, i)th entry, which counts two pixels with the same intensity value $i$ occurring together. Thus a higher value of trace implies a larger constant region in the image. The computed values of the trace of the normalized co-occurrence matrices of the images in Figure 2 with $k = 1$ are 0.0682, 0.2253 and 0.2335 for the left, middle and right images respectively. Evidently the left image contains the smallest amount of constant region, while the other two images contain almost the same amount of constant region; the values of the trace indicate the same.
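A minimal sketch of the trace feature for 2-D images (our own code; the two synthetic test images are ours, not from the Brodatz album):

```python
import numpy as np

def trace_feature(image, offset, levels):
    """Trace of the normalized GLCM: the probability that two pixels
    separated by `offset` share the same grey level."""
    g = np.zeros((levels, levels))
    rows, cols = image.shape
    dr, dc = offset
    for r in range(rows):
        for c in range(cols):
            if 0 <= r + dr < rows and 0 <= c + dc < cols:
                g[image[r, c], image[r + dr, c + dc]] += 1
    gn = g / g.sum()          # normalized co-occurrence matrix
    return np.trace(gn)       # sum of diagonal joint probabilities

flat = np.zeros((8, 8), dtype=int)        # perfectly constant region
noisy = np.arange(64).reshape(8, 8) % 4   # no equal horizontal neighbours
print(trace_feature(flat, (0, 1), 4))     # 1.0
print(trace_feature(noisy, (0, 1), 4))    # 0.0
```

The two extremes bracket the Brodatz values quoted above: the more constant the texture, the closer the trace is to 1.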
3. METHODOLOGY
Here we present the use of trace in content based image retrieval. The goal of image retrieval is to compare a given query image with all potential target images in order to obtain numerical measures of their similarity to the query image. Our database contains 333 images taken from the Brodatz texture album, which contains 36 classes, each class consisting of 9 images. Retrieval results are evaluated by calculating average precision, where precision is the proportion of retrieved images that are relevant to the query.
3.1. Image Retrieval Using Trace
The numerical value of trace provides only a measure of the amount of constant region in an
image. Thus we divide the main diagonal entries of the co-occurrence matrix into four equal parts
and the sum of the elements in each quarter is taken to be a measure of the image texture feature
for image retrieval, giving a four dimensional vector. The database is queried using the first and
the fourth images from all the 36 different classes. Eight images are retrieved in each run. The
average precision is found to be 0.8194.
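The quarter-trace feature vector and retrieval step can be sketched as follows (our own illustration; the paper does not specify the distance measure used for matching, so Euclidean distance is an assumption here):

```python
import numpy as np

def quarter_trace_vector(image, offset, levels):
    """Split the main diagonal of the normalized GLCM into four equal
    parts and sum each quarter, giving a 4-D texture feature vector."""
    g = np.zeros((levels, levels))
    rows, cols = image.shape
    dr, dc = offset
    for r in range(rows):
        for c in range(cols):
            if 0 <= r + dr < rows and 0 <= c + dc < cols:
                g[image[r, c], image[r + dr, c + dc]] += 1
    diag = np.diag(g / g.sum())
    q = levels // 4
    return np.array([diag[i * q:(i + 1) * q].sum() for i in range(4)])

def retrieve(query_vec, db_vecs, top=8):
    """Rank database feature vectors by Euclidean distance to the query
    (the choice of metric is our assumption)."""
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(dists)[:top]
```

In the experiment described above, each database image would be reduced to such a 4-D vector, and the eight nearest vectors to the query would be retrieved.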
Figure 3. Screen shots of the output for the same query image using the trace features (left) and the Haralick features (right)
3.2. Comparison of Results with Haralick features
Conducting the same experiment using the well-known Haralick features Contrast, Correlation, Energy and Homogeneity, we obtain an average precision of 0.7222, again using a four dimensional feature vector to query the database. This is a clear indication of the improved performance of the proposed features.
4. CONCLUSION
This paper illustrates the possible theoretical extensions of Grey Level Co-occurrence Matrices.
The use of trace in texture analysis is found to be promising. Trace itself can be used as a feature
which outperforms the Haralick features. Trace combined with Haralick features provides better
results. Only one third of the images from Brodatz texture database are used for testing. Our
future work is to investigate the performance of trace with the complete set of images in the
database. Trace extracted from three dimensional images is also to be investigated. The use of the theoretical developments in n-dimensional Euclidean space needs to be explored.
REFERENCES
[1] R. Haralick, K. Shanmugam, and I. Dinstein, (1973) “Textural Features for Image Classification”,
IEEE Trans. on Systems, Man and Cybernetics, SMC–3(6):610–621
[2] R. M. Haralick, (1979) “Statistical and structural approaches to texture”, Proc. IEEE, pp. 786-804, May 1979
[3] T. Ojala, M. Pietikainen, T. Maenpaa, (2002) “Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns”, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 24
[4] M. H. Bharati, J. Liu, J. F. MacGregor, (2004) “Image Texture Analysis: methods and comparisons”, Chemometrics and Intelligent Laboratory Systems, Volume 72, Issue 1, Pages 57-71
[5] J. Zhang, T. Tan,(2002) “Brief review of invariant texture analysis methods”, Pattern Recognition 35,
735-747
[6] Tou, J. Y., Tay, Y. H., & Lau, P. Y. (2009). Recent Trends in Texture Classification: A Review.
Proceedings Symposium on Progress, in Information and Communication Technology 2009 (SPICT
2009), Kuala Lumpur, pp. 63-68.
[7] T. Dacheng, L. Xuelong, Y. Yuan, Y. Nenghai, L. Zhengkai, and T. Xiao-ou.(2002) “A Set of Novel
Textural Features Based on 3D Co-occurrence Matrix for Content-based Image Retrieval”,
Proceedings of the Fifth International Conference on Information Fusion, volume 2, pages
1403–1407
[8] V. Kovalev and M. Petrou, (1996) “Multidimensional Co-occurrence Matrices for Object Recognition
and Matching,” Graph. Models Image Processing, vol. 58, no. 3, pp. 187–197
[9] K. Hammouche, (2008) “Multidimensional texture analysis for unsupervised pattern classification”,
Pattern Recognition Techniques, Technology and Applications, pp 626
[10] F. I. Alam, R. U. Faruqui, (2011) “Optimized Calculations of Haralick Texture Features”, European
Journal of Scientific Research, Vol. 50 No. 4, pp. 543-553
[11] A. S. Kurani, D. H. Xu, J. Furst, (2004) “Co-occurrence Matrices for Volumetric Data”, The 7th IASTED International Conference on Computer Graphics and Imaging - CGIM 2004, Kauai, Hawaii, USA
[12] A. Eleyan, H. Demirel, (2009) “Co-Occurrence based Statistical Approach for Face Recognition”,
Computer and Information Sciences
Authors
Bino Sebastian V, born in 1975, received his M.Sc. degree in Mathematics and M.Tech. degree in Computer and Information Science from Cochin University of Science and Technology, Cochin, India in 1997 and 2003 respectively. He is currently working with Mar Athanasius College, Kothamangalam, Kerala, India, as an Assistant Professor in the Department of Mathematics. He is pursuing his Ph.D. in the Department of Computer Applications, Cochin University of Science and Technology.
Dr. A. Unnikrishnan graduated from REC, Calicut, India in Electrical Engineering (1975), completed his M.Tech. at IIT Kanpur in Electrical Engineering (1978) and his Ph.D. at IISc Bangalore on “Image Data Structures” (1988). Presently he is the Associate Director of the Naval Physical and Oceanographic Laboratory, Kochi, a premier laboratory of the Defence Research and Development Organisation. His fields of interest include Sonar Signal Processing, Image Processing and Soft Computing. He has authored about fifty national and international journal and conference papers. He is a Fellow of IETE & IEI, India.
Dr. Kannan Balakrishnan, born in 1960, received his M.Sc. and M.Phil. degrees in Mathematics from the University of Kerala, India, his M.Tech. degree in Computer and Information Science from Cochin University of Science & Technology, Cochin, India, and his Ph.D. in Futures Studies from the University of Kerala, India in 1982, 1983, 1988 and 2006 respectively. He is currently working with Cochin University of Science & Technology, Cochin, India, as an Associate Professor in the Department of Computer Applications. He is also the co-investigator of an Indo-Slovenian joint research project of the Department of Science and Technology, Government of India. He has published several papers in international journals and national and international conference proceedings. His present areas of interest are Graph Algorithms, Intelligent Systems, Image Processing, CBIR and Machine Translation. He is a reviewer for American Mathematical Reviews and several other journals.