Fractal analysis has been shown to be useful in image processing for characterizing shape and gray-scale
complexity. The fractal feature is a compact descriptor that gives a numerical measure of the degree of
irregularity of a medical image. This descriptor, however, does not capture the local image structure. In this
paper, we present a combination of this box-counting-based parameter with GLCM features. This combination
has yielded good results, especially in the classification of medical textures from MRI and CT scan images of
trabecular bone. The method has the potential to improve clinical diagnostic tests for osteoporosis.
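The box-counting estimate of fractal dimension mentioned above can be sketched in a few lines of NumPy. This is an illustrative sketch under our own naming, not the authors' implementation: count occupied boxes at several box sizes and fit the slope of log(count) against log(1/size).

```python
import numpy as np

def box_count(img, k):
    """Count k x k boxes that contain at least one foreground pixel."""
    h, w = img.shape
    img = img[:h - h % k, :w - w % k]  # trim so k x k boxes tile evenly
    boxes = img.reshape(img.shape[0] // k, k, img.shape[1] // k, k)
    return np.count_nonzero(boxes.any(axis=(1, 3)))

def fractal_dimension(img, sizes=(2, 4, 8, 16)):
    """Box-counting dimension: slope of log(count) vs log(1/box size)."""
    counts = [box_count(img, k) for k in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

A filled region gives a dimension near 2 and a thin line near 1; trabecular bone textures fall in between, which is what makes the number a useful compact descriptor.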
A Novel Feature Extraction Scheme for Medical X-Ray Images (IJERA Editor)
X-ray images are gray-scale images with almost the same textural characteristics. Conventional texture or color
features cannot be used for appropriate categorization in medical X-ray image archives. This paper presents a
novel combination of methods, GLCM, LBP, and HOG, for extracting distinctive invariant features from X-ray
images belonging to the IRMA (Image Retrieval in Medical Applications) database, which can be used to perform
reliable matching between different views of an object or scene. GLCM represents the distribution of
intensities and information about the relative positions of neighboring pixels in an image. The LBP features are
invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination. A
HOG feature vector represents the local shape of an object, carrying edge information over multiple cells. These
features have been exploited in different algorithms for automatic classification of medical X-ray images.
Experimental results on problems of rotation invariance, including fixed rotation angles, demonstrate that
good discrimination can be achieved with the occurrence statistics of simple rotation-invariant local binary
patterns.
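A basic 8-neighbour LBP of the kind referenced above can be computed with array shifts alone. This is a minimal sketch (function names are ours, and it omits the rotation-invariant mapping and multi-scale sampling of the full descriptor): each interior pixel gets one bit per neighbour that is at least as bright as it, and the normalized code histogram is the texture feature.

```python
import numpy as np

def lbp_image(img):
    """8-neighbour LBP code for each interior pixel."""
    c = img[1:-1, 1:-1]
    # neighbours in clockwise order starting at the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes: the texture feature."""
    h = np.bincount(lbp_image(img).ravel(), minlength=256)
    return h / h.sum()
```

On a perfectly flat patch every neighbour ties with the center, so every pixel receives the all-ones code 255 and the histogram collapses to a single bin.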
COMPOSITE TEXTURE SHAPE CLASSIFICATION BASED ON MORPHOLOGICAL SKELETON AND RE... (sipij)
After several decades of research, the development of an effective feature extraction method for texture
classification is still an ongoing effort, and several techniques have been proposed to resolve such
problems. In this paper, a novel composite texture classification method based on innovative pre-processing
techniques, skeletonization, and regional moments (RM) is proposed. The proposed approach takes into
account the ambiguity introduced by noise and by the different capture and digitization processes. To
improve the classification rate, innovative pre-processing methods are first applied to the texture images.
These pre-processing steps convert a grey-level image into a binary image with minimal dependence on the
noise model. Shape features are then evaluated by applying RM to the proposed Morphological Skeleton
(MS), using suitable numerical characterization measures for precise classification. This texture
classification study using MS and RM has given good performance: a good classification result is achieved
from a single regional moment, RM10, where the others fail.
BAYESIAN CLASSIFICATION OF FABRICS USING BINARY CO-OCCURRENCE MATRIX (ijistjournal)
Classification of fabrics is usually performed manually, which requires considerable human effort. The goal of this paper is to recognize and classify types of fabrics, in order to identify a weave pattern automatically using an image processing system. Fabric texture features are extracted using both Grey Level Co-occurrence Matrices (GLCM) and Binary Level Co-occurrence Matrices (BLCM). A co-occurrence matrix characterizes the texture of an image by counting how often pairs of pixels with specific values occur in a specified spatial relationship; statistical measures are then extracted from this matrix. The features extracted from GLCM and BLCM are fed to a Bayesian classifier to compare their effectiveness.
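The co-occurrence counting and the derived statistics can be sketched directly in NumPy. This is a simplified illustration under our own naming, assuming the image is already quantized to a small number of grey levels; the same code applies to GLCM or, with two levels, to a binary co-occurrence matrix.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised co-occurrence counts for pixel pairs at offset (dy, dx).
    `img` must hold integer grey levels in [0, levels)."""
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)  # accumulate pair counts
    return m / m.sum()

def glcm_features(m):
    """Common statistical measures extracted from the matrix."""
    i, j = np.indices(m.shape)
    return {"contrast": float(((i - j) ** 2 * m).sum()),
            "energy": float((m ** 2).sum()),
            "homogeneity": float((m / (1.0 + np.abs(i - j))).sum())}
```

A flat patch has zero contrast and maximal energy, while a one-pixel checkerboard, whose horizontal neighbours always differ by one level, has contrast 1: exactly the kind of separation a Bayesian classifier can exploit.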
Two-dimensional Block of Spatial Convolution Algorithm and Simulation (CSCJournals)
This paper proposes an algorithm based on a sub-image segmentation strategy. The proposed scheme divides a grayscale image into overlapped 6×6 blocks, each of which is segmented into four small 3×3 non-overlapped sub-images. A new spatial approach is presented for efficiently computing 2-dimensional linear convolution or cross-correlation between suitably flipped, fixed filter coefficients (the sub-image, for cross-correlation) and the corresponding input sub-image. Computation of the convolution is iterated vertically and horizontally for each of the four input sub-images. The convolution outputs of these four sub-images are converted from 6×6 arrays to 4×4 arrays so that the core of the original image is reproduced. The algorithm relies on a simplified processing technique based on a particular arrangement of the input samples, spatial filtering, and small sub-images, which reduces the computational complexity compared with well-known FFT-based techniques. The algorithm lends itself to partitioned small sub-images, local spatial filtering, and noise reduction. Its effectiveness is demonstrated through simulation examples.
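The direct spatial convolution that block schemes like this one decompose can be written as superposition: each input sample deposits a scaled copy of the kernel at its own offset, which yields the "full" linear convolution without an explicit kernel flip. This is a generic reference sketch (our own naming), not the paper's block-partitioned algorithm.

```python
import numpy as np

def conv2d_full(block, kernel):
    """'Full' 2-D linear convolution of a small block with a kernel,
    by superposition of shifted, scaled kernel copies."""
    kh, kw = kernel.shape
    bh, bw = block.shape
    out = np.zeros((bh + kh - 1, bw + kw - 1))
    for y in range(bh):
        for x in range(bw):
            # sample (y, x) contributes block[y, x] * kernel at offset (y, x)
            out[y:y + kh, x:x + kw] += block[y, x] * kernel
    return out
```

Convolving a 3×3 sub-image with a 3×3 filter gives a 5×5 result; overlapping and cropping such results back to the block core is the bookkeeping the paper's 6×6-to-4×4 conversion performs.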
REMOVING OCCLUSION IN IMAGES USING SPARSE PROCESSING AND TEXTURE SYNTHESIS (IJCSEA Journal)
We provide a solution to the problem of occlusion in images by removing the occluding region and filling in the gap left behind. Inpainting algorithms fail to fill occlusions when the occluding region is large, since both structure and texture are lost. We decompose the image into structure and texture images using a decomposition method based on the sparseness of the image. Sparse reconstruction of the decomposed images results in an inpainted image with all structures intact. Texture synthesis is then performed on the texture-only image. Finally, the structure and texture images are combined to obtain an image in which the occlusion is filled. The performance of our algorithm in terms of visual effectiveness is compared with other inpainting algorithms.
Hierarchical Approach for Total Variation Digital Image Inpainting (IJCSEA Journal)
The art of recovering an image from damage in an undetectable way is known as inpainting. Manual inpainting is most often a very time-consuming process; digitizing the technique makes it automatic and faster. In this paper, after the user selects the regions to be reconstructed, the algorithm automatically reconstructs the lost regions with the help of the information surrounding them. Existing methods perform very well when the region to be reconstructed is very small but fail to reconstruct properly as the area increases. This paper describes a hierarchical method in which the area to be inpainted is reduced over multiple levels and the Total Variation (TV) method is used to inpaint at each level. The algorithm performs better than existing approaches such as nearest-neighbor interpolation, inpainting through blurring, and Sobolev inpainting.
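The per-level fill step can be illustrated with the simplest diffusion-based variant. This sketch uses harmonic (Laplace) diffusion as a stand-in for the TV minimization, which is an assumption on our part; TV uses a different, edge-preserving update, but the fixed-point structure, where masked pixels are iteratively updated from their neighbours while known pixels stay fixed, is the same.

```python
import numpy as np

def diffuse_inpaint(img, mask, iters=200):
    """Harmonic (diffusion) inpainting: masked pixels are repeatedly
    replaced by the mean of their 4 neighbours; known pixels stay fixed."""
    out = np.asarray(img, dtype=float).copy()
    out[mask] = out[~mask].mean()        # crude initialisation of the hole
    for _ in range(iters):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
              np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = nb[mask]
    return out
```

For an interior hole the iteration converges to the unique harmonic fill; the hierarchical scheme above applies such a solve at each level so that large holes are shrunk before the finest level is reached.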
Image restoration based on morphological operations (ijcseit)
Mathematical morphology, a class of nonlinear filters, is used throughout image processing: noise
suppression, feature extraction, edge detection, image segmentation, shape recognition, texture analysis,
image restoration and reconstruction, and image compression. It has evolved from traditional morphology to
order morphology, soft mathematical morphology, and fuzzy soft mathematical morphology. This paper
covers six morphological operations implemented in MATLAB: erosion, dilation, opening, closing,
boundary extraction, and region filling.
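Four of the six operations listed above reduce to shifted logical ORs and their duals. The sketch below is a NumPy illustration with a fixed 3×3 structuring element, not the paper's MATLAB code; region filling is omitted since it needs iterated conditional dilation.

```python
import numpy as np

SE = np.ones((3, 3), dtype=bool)  # 3x3 structuring element

def dilate(img, se=SE):
    """A pixel is set if the structuring element centred on it
    hits any foreground pixel."""
    h, w = se.shape
    pad = np.pad(img.astype(bool), ((h // 2, h // 2), (w // 2, w // 2)))
    out = np.zeros(img.shape, dtype=bool)
    for dy in range(h):
        for dx in range(w):
            if se[dy, dx]:
                out |= pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def erode(img, se=SE):
    """Erosion as the dual of dilation (for a symmetric element)."""
    return ~dilate(~np.asarray(img, dtype=bool), se)

def opening(img, se=SE):
    """Erosion followed by dilation: removes small protrusions."""
    return dilate(erode(img, se), se)

def boundary(img):
    """Boundary extraction: the pixels removed by erosion."""
    return np.asarray(img, dtype=bool) & ~erode(img)
```

Closing is the same composition in the opposite order, `erode(dilate(img))`.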
Abstract: Many applications, such as robot navigation, defense, medical imaging, and remote sensing, perform various processing tasks that become easier when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, that is, the large- and small-scale layers, from the source images, using guided filtering for this extraction. A Gaussian and Laplacian pyramidal approach is then used to fuse the layers obtained. Experimental results demonstrate that the proposed method obtains better fusion performance on all sets of images tested; the results clearly indicate the feasibility of the proposed approach.
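The two-scale decompose-and-fuse idea above can be sketched compactly. As a simplifying assumption, the sketch substitutes a plain box blur for the guided filter and a single base/detail split for the full pyramid; bases are averaged and the stronger detail is kept, which is the usual fusion rule at each level.

```python
import numpy as np

def box_blur(img, r=2):
    """Mean filter standing in for the guided filter (an assumption)."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def two_scale_fuse(a, b, r=2):
    """Split each source into a large-scale base and small-scale detail,
    average the bases, keep the stronger detail, then recombine."""
    base_a, base_b = box_blur(a, r), box_blur(b, r)
    det_a, det_b = a - base_a, b - base_b
    base = 0.5 * (base_a + base_b)
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base + detail
```

Fusing an image with itself returns the image unchanged, a useful sanity check for any fusion rule.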
Object recognition is a challenging problem in real-world applications. It can be achieved through shape matching, which proceeds by i) detecting the edges of the objects in the images, ii) finding the correspondence between the shapes, iii) measuring the dissimilarity between the shapes using that correspondence, and iv) classifying the objects into classes using the dissimilarity measure. To solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, offering a globally discriminative characterization; corresponding points on two similar shapes will have similar shape contexts. One-to-one correspondence is achieved through cost-based bipartite graph matching, with the cost matrix solved by the Hungarian algorithm. The dissimilarity between two shapes is computed using the Canberra distance, and a nearest-neighbor classifier assigns objects to classes based on the matching error. Results are obtained in MATLAB on the MNIST handwritten digits.
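Steps ii) and iii), Hungarian matching of per-point descriptors under the Canberra distance, can be sketched in a few lines (our own function names; computing the shape-context descriptors themselves is omitted):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def canberra(a, b):
    """Canberra distance with the usual 0/0 -> 0 convention."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    denom = np.abs(a) + np.abs(b)
    return float(np.divide(np.abs(a - b), denom,
                           out=np.zeros_like(denom), where=denom > 0).sum())

def shape_dissimilarity(desc_a, desc_b):
    """Optimal one-to-one matching of per-point descriptors via the
    Hungarian algorithm; the summed distance is the matching error."""
    cost = np.array([[canberra(a, b) for b in desc_b] for a in desc_a])
    rows, cols = linear_sum_assignment(cost)
    return float(cost[rows, cols].sum()), list(zip(rows.tolist(), cols.tolist()))
```

Two shapes whose descriptor sets are permutations of each other match with zero error, which is exactly the invariance the nearest-neighbor classifier relies on.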
Colour-Texture Image Segmentation using Hypercomplex Gabor Analysis (sipij)
Texture analysis tasks such as segmentation and classification play a vital role in computer vision and pattern recognition and are widely applied in areas such as industrial automation, biomedical image processing, and remote sensing. In this paper, we first extend the well-known Gabor filters to color images using a specific form of hypercomplex numbers known as quaternions. These filters are constructed as windowed basis functions of the quaternion Fourier transform, also known as the hypercomplex Fourier transform. Based on this extension, the paper presents the use of these new quaternionic Gabor filters in colour texture image segmentation. Experimental results on two colour texture images are presented. We tested the robustness of the technique by adding Gaussian noise to the texture images; the results indicate that the proposed method gives good segmentation even in the presence of strong noise.
A HYBRID MORPHOLOGICAL ACTIVE CONTOUR FOR NATURAL IMAGES (IJCSEA Journal)
Morphological active contours for image segmentation have become popular due to their low computational complexity coupled with their accurate approximation of the partial differential equations
involved in the energy minimization of the segmentation process. In this paper, a morphological active contour which mimics the energy minimization of the popular Chan-Vese Active Contour without Edges is coupled with a morphological edge-driven segmentation term to accurately segment natural images. By using morphological approximations of the energy minimization steps, the algorithm has a low computational complexity. Additionally, the coupling of the edge-based and region-based segmentation techniques allows the proposed method to be robust and accurate. We will demonstrate the accuracy and robustness of the algorithm using images from the Weizmann Segmentation Evaluation Database and report on the segmentation results using the Sorensen-Dice similarity coefficient.
Marker Controlled Segmentation Technique for Medical Application (Rushin Shah)
Medical image segmentation is a very important field in medical science. In medical images, edge detection is important for recognizing human organs such as the brain, heart, or kidneys, and it is an essential pre-processing step in medical image segmentation.
Medical images such as CT, MRI, or X-ray visualize information about internal organs that is important for doctors' diagnoses as well as for medical teaching, learning, and research.
Locating internal organs is difficult when images contain noise or when the organ structures are rough.
Comparative analysis and implementation of structured edge active contour (IJECEIAES)
This paper proposes a modified Chan-Vese model for image segmentation. The approach uses the linear structure tensor (LST) as input to the variational model; the structure tensor is a matrix representation of partial-derivative information. In the proposed model, the original image is used as the information channel for computing the structure tensor. A difference of Gaussians (DoG) provides feature enhancement, giving a less blurred image than the original. The LST is modified by adding intensity information to enhance orientation information. Finally, an Active Contour Model (ACM) is used to segment the images. The proposed algorithm is tested on various images, including images with intensity inhomogeneity, and the results are compared with other algorithms such as Chan-Vese, Bhattacharyya, Gabor-based Chan-Vese, and a novel structure-tensor-based model. The proposed model is verified to be the most accurate, and its biggest advantage is clear edge enhancement.
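The structure tensor named above is simply built from outer products of image gradients; a minimal NumPy sketch (our own naming, without the smoothing window a full LST applies to the tensor components) is:

```python
import numpy as np

def structure_tensor(img):
    """Entries of the structure tensor from central-difference gradients:
    returns the three distinct components (Jxx, Jxy, Jyy) per pixel."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return gx * gx, gx * gy, gy * gy

def dominant_orientation(img):
    """Dominant gradient orientation from the aggregated tensor
    (0 = horizontal gradient, pi/2 = vertical gradient)."""
    jxx, jxy, jyy = structure_tensor(img)
    return 0.5 * np.arctan2(2.0 * jxy.sum(), jxx.sum() - jyy.sum())
```

The eigenvectors of the per-pixel tensor give local orientation; the modified model injects intensity information alongside these components before driving the active contour.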
Image fusion using NSCT denoising and target extraction for visual surveillance (eSAT Publishing House)
A plethora of algorithms exists for image segmentation, and the execution time of these algorithms raises
several issues. Image segmentation can be cast as a labeling problem under a probabilistic framework. To
estimate the label configuration, an iterative optimization scheme alternates between maximum a posteriori
(MAP) estimation and maximum likelihood (ML) estimation. In this paper, the technique is modified so
that it performs segmentation within a stipulated time period. Extensive experiments show that the results
obtained are comparable with existing algorithms, while execution is faster, giving automatic segmentation
without any human intervention. The resulting segment boundaries match image edges closely, in line with
human perception.
We propose a novel imaging biomarker of lung cancer relapse based on 3-D texture analysis of CT images. Three-dimensional morphological nodular tissue properties are described in terms of 3-D Riesz wavelets. The responses of the latter are aggregated within nodular regions by means of feature covariances, which leverage rich intra- and inter-variations of the feature-space dimensions. The obtained Riesz-covariance descriptors lie on a manifold governed by Riemannian geometry, requiring specific geodesic metrics to locally approximate scalar products; these are used to construct a kernel for support vector machines (SVM). The effectiveness of the presented models is evaluated on a dataset of 92 patients with non-small cell lung carcinoma (NSCLC) and cancer recurrence information. Disease recurrence within a timeframe of 12 months could be predicted with an accuracy above 80%, highlighting the importance of covariance-based texture aggregation. At the end of the talk, computer tools will be presented to easily extract 3D radiomics quantitative features from PET-CT images.
Image segmentation techniques
More information on this research can be found in:
Hussein, Rania, and Frederic D. McKenzie. "Identifying Ambiguous Prostate Gland Contours from Histology Using Capsule Shape Information and Least Squares Curve Fitting." The International Journal of Computer Assisted Radiology and Surgery (IJCARS), Volume 2, Numbers 3-4, pp. 143-150, December 2007.
Segmentation of medical images using metric topology – a region growing approach (Ijrdt Journal)
A metric topological approach to region-growing segmentation is presented in this article. Region-growing techniques have gained significant importance in medical image processing for fine-grained segregation of the tumorous part detected in an image. Conventional algorithms concentrated on segmentation at a coarser level and failed to produce enough evidence for their validity. In this article, a novel technique based on a metric topological neighbourhood is proposed, together with a new objective measure, entropy, alongside the traditional validity measures of accuracy, PSNR, and MSE. This measure shows that the amount of information lost after segmentation is greatly reduced, which elucidates the effectiveness of the algorithm. The algorithm is tested on well-known benchmark ground-truth images against the proposed region-growing segmented images, and the results validate its effectiveness.
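The core region-growing loop that such methods refine can be sketched plainly: starting from a seed, 4-connected neighbours are absorbed while their intensity stays within a tolerance of the seed. This is a generic baseline under our own naming, not the article's metric-topological variant.

```python
import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity is within `tol` of the seed intensity."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    stack, ref = [seed], float(img[seed])
    while stack:
        y, x = stack.pop()
        if mask[y, x] or abs(float(img[y, x]) - ref) > tol:
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                stack.append((ny, nx))
    return mask
```

Replacing the 4-neighbourhood and the fixed tolerance with a metric-topological neighbourhood is where the article's contribution would plug in.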
Automatic rectification of perspective distortion from a single image using p... (ijcsa)
Perspective distortion occurs due to the perspective projection of a 3D scene onto a 2D surface. Correcting the distortion of a single image without losing any desired information is one of the challenging tasks in computer vision. We consider the problem of estimating perspective distortion from a single still image of an unstructured environment and making a perspective correction that is both quantitatively accurate and visually pleasing. Corners are detected based on the orientation of the image, and a method based on plane homography and transformation is used to make the perspective correction. The algorithm infers frontier information directly from the images, without any reference objects or prior knowledge of the camera parameters; the frontiers are detected using geometric-context-based segmentation. The goal of this paper is to present a framework providing fully automatic and fast perspective correction.
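The plane-homography step can be sketched with the direct linear transform (DLT): four point correspondences give eight homogeneous equations, and the homography is the null vector of that system. This is a textbook sketch (our own naming), not the paper's pipeline.

```python
import numpy as np

def homography(src, dst):
    """3x3 plane homography mapping src points to dst points via DLT:
    each correspondence contributes two rows of a homogeneous system."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)       # null vector = flattened homography
    return h / h[2, 2]

def warp_point(h, pt):
    """Apply the homography to a 2-D point."""
    p = h @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

Mapping the detected corner quadrilateral to a fronto-parallel rectangle with such a homography is what performs the rectification.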
Unimodal Multi-Feature Fusion and one-dimensional Hidden Markov Models for Lo... (IJECEIAES)
The objective of low-resolution face recognition is to identify faces from small or poor-quality images with varying pose, illumination, expression, etc. In this work, we propose a robust low-resolution face recognition technique based on one-dimensional Hidden Markov Models. Features of each facial image are extracted in three steps: first, both Gabor filters and the Histogram of Oriented Gradients (HOG) descriptor are calculated; second, the dimensionality of these features is reduced using Linear Discriminant Analysis (LDA) to remove redundant information; finally, the reduced features are combined using Canonical Correlation Analysis (CCA). Unlike existing HMM-based techniques, in which each state represents one facial region (eyes, nose, mouth, etc.), the proposed system employs 1D-HMMs without any prior knowledge about the localization of regions of interest in the facial image. Performance of the proposed method is measured on the AR database.
Parallax Effect Free Mosaicing of Underwater Video Sequence Based on Texture ... (sipij)
In this paper, we present a feature-based technique for constructing a mosaic image from an underwater video
sequence, which suffers from parallax distortion due to the propagation properties of light in the underwater
environment. Most available mosaicing tools and underwater image mosaicing techniques yield final results
with artifacts such as blurring, ghosting, and seams due to parallax in the input images. Removing parallax
from the input images alone may not reduce its effects; it must instead be corrected in successive steps of the
mosaicing process. Our approach therefore minimizes parallax effects by adopting an efficient local alignment
technique after global registration. We extract texture features using the Centre-Symmetric Local Binary
Pattern (CS-LBP) descriptor to find feature correspondences, which are then used to estimate the homography
through RANSAC. To increase the accuracy of global registration, we perform preprocessing such as colour
alignment between the two selected frames based on colour distribution adjustment. Because consecutive
frames of underwater video overlap almost completely, we select frames with minimum overlap based on
mutual offset to reduce the computation cost of mosaicing. Our approach considerably reduces parallax
effects in the final mosaics constructed from our own underwater video sequences.
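The CS-LBP code itself is cheap to compute: unlike plain LBP, it compares the four centre-symmetric neighbour pairs against a small threshold, giving a 4-bit code per pixel. A minimal single-scale sketch (our own naming, radius fixed at 1):

```python
import numpy as np

def cs_lbp(img, t=0.01):
    """Centre-symmetric LBP: each of the 4 opposing neighbour pairs
    contributes one bit when their difference exceeds threshold t."""
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    h, w = img.shape
    code = np.zeros((h - 2, w - 2), dtype=int)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        a = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        code |= ((a - b) > t).astype(int) << bit
    return code
```

With only 16 possible codes instead of 256, histograms over interest-point regions stay short, which keeps the correspondence search before RANSAC fast.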
Application of parallel algorithm approach for performance optimization of oi... (sipij)
This paper gives a detailed study of the performance of an image filter algorithm with various parameters
applied to an RGB image. Many popular image filters consume large amounts of computing resources during
processing. The oil-paint image filter is one of the most interesting of these and is particularly
performance-hungry: its processing time grows rapidly with increasing kernel size. This research seeks to
improve the oil-paint image filter algorithm by using a parallel pattern library. Requests for a faster
oil-paint filter also appear repeatedly in various blogs and forums.
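To show why the filter is performance-hungry, here is a naive single-channel sketch of the common oil-paint formulation (an assumption on our part; the paper's exact variant may differ): every output pixel requires a histogram over its whole window, so cost grows with the square of the kernel radius, and the per-pixel independence is what makes it a natural target for a parallel pattern library.

```python
import numpy as np

def oil_paint(img, r=1, levels=8):
    """Naive oil-paint filter: each output pixel takes the mean of the
    most frequent intensity bin inside its (2r+1)^2 window."""
    img = np.asarray(img)
    q = np.clip(img.astype(int) * levels // 256, 0, levels - 1)  # quantise
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            ys, ye = max(0, y - r), min(h, y + r + 1)
            xs, xe = max(0, x - r), min(w, x + r + 1)
            win, qwin = img[ys:ye, xs:xe], q[ys:ye, xs:xe]
            best = np.bincount(qwin.ravel(), minlength=levels).argmax()
            out[y, x] = win[qwin == best].mean()  # mean of majority bin
    return out
```

Since each (y, x) iteration is independent, the double loop parallelises trivially across rows or tiles.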
Abstract: Many applications such as robot navigation, defense, medical and remote sensing performvarious processing tasks, which can be performed more easily when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity based variations that is large and small scale, from the source images. In this approach, guided filtering is employed for this extraction. Gaussian and Laplacian pyramidal approach is then used to fuse the different layers obtained. Experimental results demonstrate that the proposed method can obtain better performance for fusion of
all sets of images. The results clearly indicate the feasibility of the proposed approach.
Object recognition is the challenging problem in the real world application. Object recognition can be achieved through the shape matching. Shape matching is preceded by i) detecting the edges of the objects from the images. ii) Finding the correspondence between the shapes. iii) Measuring the dissimilarity between the shapes using the correspondence. iv) Classifying the object into classes by using this dissimilarity measures. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts. The one to one correspondence is achieved through the cost based bipartite graph matching. The cost matrix is reduced through Hungarian algorithm. The dissimilarity between the two shapes is computed using Canberra distance. The nearest neighbor classifier is used to classify the objects with the matching error. The results are obtained using the MATLAB for MINIST hand written digits.
Colour-Texture Image Segmentation using Hypercomplex Gabor Analysissipij
Texture analysis such as segmentation and classification plays a vital role in computer vision and pattern recognition and is widely applied to many areas such as industrial automation, bio-medical image processing and remote sensing. In this paper, we first extend the well-known Gabor filters to color images using a specific form of hypercomplex numbers known as quaternions. These filters are
constructed as windowed basis functions of the quaternion Fourier transform also known as hypercomplex Fourier transform. Based on this extension this paper presents the use of these new
quaternionic Gabor filters in colour texture image segmentation. Experimental results on two colour texture images are presented. We tested the robustness of this technique by adding Gaussian noise to the texture images. Experimental results indicate that the proposed method gives better segmentation results even in the presence of strong noise.
A HYBRID MORPHOLOGICAL ACTIVE CONTOUR FOR NATURAL IMAGESIJCSEA Journal
Morphological active contours for image segmentation have become popular due to their low computational complexity coupled with their accurate approximation of the partial differential equations
involved in the energy minimization of the segmentation process. In this paper, a morphological active contour which mimics the energy minimization of the popular Chan-Vese Active Contour without Edges is coupled with a morphological edge-driven segmentation term to accurately segment natural images. By using morphological approximations of the energy minimization steps, the algorithm has a low computational complexity. Additionally, the coupling of the edge-based and region-based segmentation techniques allows the proposed method to be robust and accurate. We will demonstrate the accuracy and robustness of the algorithm using images from the Weizmann Segmentation Evaluation Database and report on the segmentation results using the Sorensen-Dice similarity coefficient.
Marker Controlled Segmentation Technique for Medical applicationRushin Shah
Medical image segmentation is a very important field in medical science. In medical images, edge detection is an important task for recognising human organs such as the brain, heart or kidneys, and it is an essential pre-processing step in medical image segmentation.
Medical images such as CT, MRI or X-ray visualise various information about internal organs, which is very important for doctors' diagnoses as well as for medical teaching, learning and research.
It is a tough job to locate internal organs when the images contain noise or the organs have rough structure.
Comparative analysis and implementation of structured edge active contour IJECEIAES
This paper proposes a modified Chan-Vese model which can be applied to images for segmentation. The paper is based on the Linear Structure Tensor (LST) as input to the variational model. A structure tensor is a matrix representation of partial derivative information. In the proposed model, the original image is used as the information channel for computing the structure tensor. Difference of Gaussians (DoG) is a feature enhancement by which we can obtain a less blurred image than the original. In this paper the LST is modified by adding intensity information to enhance orientation information. Finally an Active Contour Model (ACM) is used to segment the images. The proposed algorithm is tested on various images, including some with intensity inhomogeneity, and the results are shown. The results are also compared with other algorithms such as Chan-Vese, Bhattacharya, Gabor-based Chan-Vese and the novel structure-tensor-based model, and the proposed model is verified to have the best accuracy. The biggest advantage of the proposed model is clear edge enhancement.
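As a rough illustration of the linear structure tensor such a model takes as input, the sketch below builds the per-pixel tensor (Jxx, Jxy, Jyy) from central-difference gradients. The paper's exact construction (DoG enhancement, added intensity channel, smoothing) is not reproduced; this shows only the basic gradient outer product.

```python
def structure_tensor(img):
    """Per-pixel linear structure tensor (Jxx, Jxy, Jyy) from
    central-difference gradients; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    J = [[(0.0, 0.0, 0.0)] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            J[y][x] = (gx * gx, gx * gy, gy * gy)
    return J

# Vertical edge: the gradient is purely horizontal, so Jyy = 0 on the edge.
img = [[0, 0, 10, 10]] * 4
J = structure_tensor(img)
```

The eigenstructure of each 2x2 tensor [[Jxx, Jxy], [Jxy, Jyy]] encodes the local orientation information the model exploits.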
Image fusion using nsct denoising and target extraction for visual surveillanceeSAT Publishing House
There exists a plethora of algorithms to perform image segmentation, and execution time is a recurring issue for these algorithms. Image segmentation can be cast as a pixel-labeling problem under a probabilistic framework. To estimate the label configuration, an iterative optimization scheme is implemented to alternately carry out the maximum a posteriori (MAP) estimation and the maximum likelihood (ML) estimation. In this paper this technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with existing algorithms. The modified algorithm executes faster than the existing algorithm and gives automatic segmentation without any human intervention, with results matching image edges close to human perception.
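The alternating MAP/ML scheme described above can be sketched in a deliberately simplified, intensity-only form: the MAP step relabels each pixel given the current class models, and the ML step re-estimates the class models given the labels (here each class is reduced to a single mean, so real priors and neighbourhood terms are omitted).

```python
def segment(pixels, k=2, iters=10):
    """Alternate an ML step (re-estimate class means) and a MAP-style step
    (re-label each pixel to its nearest mean); a toy form of the scheme."""
    means = [min(pixels), max(pixels)] if k == 2 else list(pixels[:k])
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Labeling step: assign each pixel the label of the closest mean.
        labels = [min(range(k), key=lambda c: abs(p - means[c])) for p in pixels]
        # ML step: update each class mean from its current members.
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                means[c] = sum(members) / len(members)
    return labels, means

labels, means = segment([10, 12, 11, 200, 198, 205])
```

On this toy intensity list the dark and bright pixels separate into two stable classes after a couple of iterations.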
We propose a novel imaging biomarker of lung cancer relapse from 3-D texture analysis of CT images. Three-dimensional morphological nodular tissue properties are described in terms of 3-D Riesz-wavelets. The responses of the latter are aggregated within nodular regions by means of feature covariances, which leverage rich intra- and inter-variations of the feature space dimensions. The obtained Riesz-covariance descriptors lie on a manifold governed by Riemannian geometry, requiring specific geodesic metrics to locally approximate scalar products. The latter are used to construct a kernel for support vector machines (SVM). The effectiveness of the presented models is evaluated on a dataset of 92 patients with non-small cell lung carcinoma (NSCLC) and cancer recurrence information. Disease recurrence within a timeframe of 12 months could be predicted with an accuracy above 80%, highlighting the importance of covariance-based texture aggregation. At the end of the talk, computer tools will be presented to easily extract 3D radiomics quantitative features from PET-CT images.
Image segmentation techniques
More information on this research can be found in:
Hussein, Rania, Frederic D. McKenzie. “Identifying Ambiguous Prostate Gland Contours from Histology Using Capsule Shape Information and Least Squares Curve Fitting.” The International Journal of Computer Assisted Radiology and Surgery ( IJCARS), Volume 2 Numbers 3-4, pp. 143-150, December 2007.
Segmentation of medical images using metric topology – a region growing approachIjrdt Journal
A metric topological approach to region-growing-based segmentation is presented in this article. Region-based growing techniques have gained significant importance in the medical image processing field for fine-grained segregation of the detected tumor region in an image. Conventional algorithms concentrated on segmentation at a coarser level, which failed to produce enough evidence for the validity of the algorithm. In this article a novel technique is proposed based on a metric topological neighbourhood, along with the introduction of a new objective measure, entropy, apart from the traditional validity measures of accuracy, PSNR and MSE. This measure is introduced to show that the amount of information lost after segmentation is greatly reduced, which elucidates the effectiveness of the algorithm. The algorithm is tested on well-known benchmark ground-truth images against the proposed region-growing segmented images, and the results validate its effectiveness.
Automatic rectification of perspective distortion from a single image using p...ijcsa
Perspective distortion occurs due to the perspective projection of a 3D scene onto a 2D surface. Correcting the distortion of a single image without losing any desired information is one of the challenging tasks in the field of Computer Vision. We consider the problem of estimating perspective distortion from a single still image of an unstructured environment and making a perspective correction which is both quantitatively accurate and visually pleasing. Corners are detected based on the orientation of the image. A method based on plane homography and transformation is used to make the perspective correction. The algorithm infers frontier information directly from the images, without any reference objects or prior knowledge of the camera parameters. The frontiers are detected using geometric-context-based segmentation. The goal of this paper is to present a framework providing fully automatic and fast perspective correction.
Unimodal Multi-Feature Fusion and one-dimensional Hidden Markov Models for Lo...IJECEIAES
The objective of low-resolution face recognition is to identify faces from small or poor-quality images with varying pose, illumination, expression, etc. In this work, we propose a robust low-resolution face recognition technique based on one-dimensional Hidden Markov Models. Features of each facial image are extracted in three steps: first, both Gabor filters and the Histogram of Oriented Gradients (HOG) descriptor are calculated. Second, the size of these features is reduced using the Linear Discriminant Analysis (LDA) method in order to remove redundant information. Finally, the reduced features are combined using the Canonical Correlation Analysis (CCA) method. Unlike existing HMM-based techniques, in which each state represents one facial region (eyes, nose, mouth, etc.), the proposed system employs 1D-HMMs without any prior knowledge about the localization of regions of interest in the facial image. Performance of the proposed method is measured using the AR database.
Parallax Effect Free Mosaicing of Underwater Video Sequence Based on Texture ...sipij
In this paper, we present a feature-based technique for construction of a mosaic image from an underwater video sequence, which suffers from parallax distortion due to the propagation properties of light in the underwater environment. Most of the available mosaic tools and underwater image mosaicing techniques yield final results with artifacts such as blurring, ghosting and seams due to the presence of parallax in the input images. Removing parallax from the input images alone may not reduce its effects; instead, it must be corrected in successive steps of mosaicing. Thus, our approach minimizes the parallax effects by adopting an efficient local alignment technique after global registration. We extract texture features using the Centre-Symmetric Local Binary Pattern (CS-LBP) descriptor in order to find feature correspondences, which are then used for estimation of the homography through RANSAC. To increase the accuracy of global registration, we perform preprocessing such as colour alignment between the two selected frames based on colour distribution adjustment. Because consecutive frames of underwater video overlap almost completely, we select frames with minimum overlap based on mutual offset in order to reduce the computation cost during mosaicing. Our approach minimizes the parallax effects considerably in the final mosaics constructed from our own underwater video sequences.
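The CS-LBP descriptor used above differs from plain LBP in that it compares centre-symmetric pairs of neighbours rather than each neighbour against the centre pixel, yielding a 4-bit code per pixel for the 8-neighbourhood. A minimal per-pixel sketch (the threshold `t` and neighbour ordering here are common conventions, not necessarily the paper's exact choices):

```python
def cs_lbp(patch, y, x, t=0.0):
    """Centre-Symmetric LBP code for one pixel: compare the 4 opposing
    neighbour pairs of the 8-neighbourhood, one bit per pair."""
    n = [patch[y-1][x-1], patch[y-1][x], patch[y-1][x+1], patch[y][x+1],
         patch[y+1][x+1], patch[y+1][x], patch[y+1][x-1], patch[y][x-1]]
    code = 0
    for i in range(4):
        # Bit i is set when neighbour i exceeds its opposite by more than t.
        if n[i] - n[i + 4] > t:
            code |= 1 << i
    return code
```

Histograms of these codes over local regions form the texture feature vectors whose matches feed the RANSAC homography estimation.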
Application of parallel algorithm approach for performance optimization of oi...sipij
This paper gives a detailed study of the performance of an image filter algorithm with various parameters applied to an image in the RGB model. There are various popular image filters which consume large amounts of computing resources. The oil paint image filter is one of the very interesting filters, and it is very performance hungry. The current research tries to improve the oil paint image filter algorithm by using a parallel pattern library. With increasing kernel size, the processing time of the oil paint filter algorithm grows steeply. We have also observed in various blogs and forums that questions about a faster oil paint filter are asked repeatedly.
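For reference, a naive serial oil-paint filter looks like the sketch below: each output pixel takes the mean of the most populated intensity bin in its kernel window, so the per-pixel work grows with the square of the kernel radius, which is what a parallelised version attacks. The bin count and grayscale range here are illustrative assumptions.

```python
def oil_paint(img, radius=1, bins=8):
    """Naive oil-paint filter on a grayscale image (0..255): each interior
    pixel becomes the mean of the most populated intensity bin in its
    (2*radius+1)^2 neighbourhood; border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            count = [0] * bins
            total = [0] * bins
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    v = img[y + dy][x + dx]
                    b = v * bins // 256
                    count[b] += 1
                    total[b] += v
            b = max(range(bins), key=count.__getitem__)
            out[y][x] = total[b] // count[b]
    return out
```

Because every pixel's histogram is independent, the outer loops parallelise trivially, which is exactly where a parallel pattern library helps.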
Contrast enhancement using various statistical operations and neighborhood pr...sipij
Histogram Equalization is a simple and effective contrast enhancement technique. In spite of its popularity, Histogram Equalization still has some limitations: it produces artifacts and unnatural images, and local details are not considered. Due to these limitations, many other equalization techniques with various upgrades have been derived from it. In the proposed method, statistics play an important role in the image processing: statistical operations are applied to the image to obtain the desired result, such as manipulation of brightness and contrast. Thus, a novel algorithm using statistical operations and neighborhood processing is proposed in this paper, and the algorithm has proven effective for contrast enhancement both in theory and in experiment.
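The baseline Histogram Equalization that such methods build on can be sketched as follows: build the intensity histogram, form its cumulative distribution, and remap each pixel through the normalised CDF.

```python
def equalize(img, levels=256):
    """Plain global histogram equalization via the cumulative distribution
    function; img is a grayscale image as a list of rows of ints."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    n = len(flat)
    cdf_min = next(c for c in cdf if c)  # first non-zero cumulative count
    # Classic remapping: stretch the CDF over the full intensity range.
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0 for c in cdf]
    return [[lut[p] for p in row] for row in img]
```

The limitations the abstract lists follow directly from this being one global lookup table: local detail and neighbourhood statistics play no role, which is what the proposed neighborhood-processing variant addresses.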
Offline handwritten signature identification using adaptive window positionin...sipij
To address this challenge, the paper proposes the use of an Adaptive Window Positioning technique which focuses not just on the meaning of the handwritten signature but also on the individuality of the writer. This innovative technique divides the handwritten signature into small windows of size n x n (13 x 13). This size should be large enough to contain ample information about the style of the author and small enough to ensure good identification performance. The process was tested on a GPDS dataset containing 4870 signature samples from 90 different writers by comparing the robust features of the test signature with those of the user's signature using an appropriate classifier. Experimental results reveal that the adaptive window positioning technique is an efficient and reliable method for accurate signature feature extraction in the identification of offline handwritten signatures. This technique can also be used to detect signatures signed under emotional duress.
A Novel Uncertainty Parameter SR ( Signal to Residual Spectrum Ratio ) Evalua...sipij
Hearing-impaired people usually use hearing aids which implement speech enhancement algorithms. Estimation of speech and estimation of noise are the two components of a single-channel speech enhancement system. The main objective of any speech enhancement algorithm is estimation of the noise power spectrum in a non-stationary environment. A VAD (Voice Activity Detector) is used to identify speech pauses, and noise is estimated only during these pauses. The MMSE (Minimum Mean Square Error) speech enhancement algorithm does not enhance intelligibility, quality and listener fatigue, which are the perceptual aspects of speech. A novel evaluation approach, SR (Signal to Residual spectrum ratio), based on an uncertainty parameter, is introduced for the benefit of hearing-impaired people in non-stationary environments to control distortions. Noise is estimated and updated by dividing the original signal into pure speech, quasi-speech and non-speech frames based on multiple threshold conditions. Different values of SR and LLR demonstrate the amounts of attenuation and amplification distortion. The proposed method is compared with the WAT (Weighted Average Technique) using the parameters SR (signal to residual spectrum ratio), LLR (log-likelihood ratio) and MMSE (Minimum Mean Square Error), in terms of segmental SNR and LLR.
Review of ocr techniques used in automatic mail sorting of postal envelopessipij
This paper presents a review of various OCR techniques used in the automatic mail sorting process. A complete description of various existing methods for address block extraction and digit recognition that were used in the literature is discussed. The objective of this study is to provide a complete overview of the methods and techniques used by many researchers for automating the mail sorting process in postal services in various countries. The significance of Zip code or Pincode recognition is discussed.
IDENTIFICATION OF SUITED QUALITY METRICS FOR NATURAL AND MEDICAL IMAGESsipij
Assessing the quality of a denoised image is one of the important tasks in image denoising applications. Numerous quality metrics, each with particular characteristics, have been proposed by researchers to date. In practice, the image acquisition system differs between natural and medical images; hence the noise introduced in these images also differs in nature. Considering this fact, the authors of this paper try to identify the suited quality metrics for Gaussian-, speckle- and Poisson-corrupted natural, ultrasound and X-ray images respectively. Sixteen different quality metrics from the full-reference category are evaluated with respect to noise variance, and the suited quality metric for each particular type of noise is identified. A strong need to develop noise-dependent quality metrics is also identified in this work.
Speaker Identification From Youtube Obtained Datasipij
An efficient and intuitive algorithm is presented for the identification of speakers from a long dataset (such as a long YouTube discussion, or cocktail-party recorded audio or video). The goal of automatic speaker identification is to identify the number of different speakers and to prepare a model for each speaker by extracting and characterizing the speaker-specific information contained in the speech signal. It has many diverse applications, especially in surveillance, immigration at airports, cyber security, and transcription of recordings with multiple similar sound sources, where it is difficult to assign transcriptions. The speech parameterizations most commonly used in speaker verification, K-means and cepstral analysis, are detailed. Gaussian mixture modeling, the speaker modeling technique, is then explained. Gaussian mixture models (GMMs), perhaps the most robust machine learning approach for text-independent speaker identification, are introduced and examined carefully. The use of Gaussian mixture models for monitoring and analysing speaker identity is motivated by the experience that Gaussian spectra depict the characteristics of a speaker's spectral pattern, and by the remarkable ability of GMMs to model arbitrary densities. We then illustrate Expectation-Maximization, an iterative algorithm which starts from an arbitrary initial estimate and continues the iterative process until convergence. Aiming for 85~95% accuracy with speaker models based on vector quantization and Gaussian mixture models, over various experiments we obtained an identification rate of 79~82% using vector quantization and 85~92.6% using GMM modeling with Expectation-Maximization parameter estimation, depending on the parameter variation.
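A toy one-dimensional, two-component version of the GMM training via Expectation-Maximization described above is sketched below. Real speaker models operate on multi-dimensional cepstral feature vectors with many components; the scalar case only illustrates the alternating E- and M-steps.

```python
import math

def em_gmm(xs, iters=50):
    """Fit a 1-D, 2-component Gaussian mixture with EM: alternately compute
    responsibilities (E-step) and re-estimate weights/means/variances (M-step)."""
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and (floored) variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return mu, var, w

xs = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
mu, var, w = em_gmm(xs)
```

On this separated toy data the two component means converge near 1.0 and 5.0 with roughly equal weights, mirroring how each speaker's feature distribution would be captured by its own mixture.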
Feature selection approach in animal classificationsipij
In this paper, we propose a model for automatic classification of animals using different classifiers: Nearest Neighbour, Probabilistic Neural Network and Symbolic. Animal images are segmented using maximal region merging segmentation. Gabor features are extracted from the segmented animal images. Discriminative texture features are then selected using different feature selection algorithms: Sequential Forward Selection, Sequential Floating Forward Selection, Sequential Backward Selection and Sequential Floating Backward Selection. To corroborate the efficacy of the proposed method, an experiment was conducted on our own data set of 25 classes of animals, containing 2500 samples. The data set has different animal species with similar appearance (small inter-class variations) across different classes and varying appearance (large intra-class variations) within a class. In addition, the images are of different poses, with cluttered backgrounds under different lighting and climatic conditions. Experimental results reveal that the Symbolic classifier outperforms the Nearest Neighbour and Probabilistic Neural Network classifiers.
Robust content based watermarking algorithm using singular value decompositio...sipij
Nowadays, image content is frequently subject to malicious manipulation. To protect images from such illegal manipulation, the computer science community has recourse to watermarking techniques. To protect digital multimedia content we need only embed an invisible watermark into images, which facilitates the detection of manipulations, duplication and illegitimate distribution of these images. In this work a robust watermarking technique is presented that embeds invisible watermarks into colour images using the block-by-block singular value decomposition of a robust transform of the image, the radial symmetry transform. Each bit of the watermark is inserted, within a block of eight pixels of the blue channel, into a high singular value of the corresponding block of the radial symmetry map. We justify insertion in the blue channel by the human eye's weak sensitivity to perturbations in this colour channel. We also present results obtained with different tests: we tested the imperceptibility of the mark using this approach and also its robustness in the face of several attacks.
Beamforming with per antenna power constraint and transmit antenna selection ...sipij
In this paper, transmit beamforming and antenna selection techniques are presented for the Cooperative Distributed Antenna System. A beamforming technique with minimum total weighted transmit power, satisfying threshold SINR and per-antenna power constraints, is formulated as a convex optimization problem for the efficient performance of the Distributed Antenna System (DAS). An antenna selection technique is implemented in this paper to select the optimum Remote Antenna Units from all the available ones; this achieves the best compromise between capacity and system complexity. Dual-polarized and triple-polarized systems are considered. Simulation results show that integrating beamforming with DAS enhances its performance, and that using convex optimization in antenna selection enhances the performance of multi-polarized systems.
Global threshold and region based active contour model for accurate image seg...sipij
In this contribution, we develop a novel global threshold-based active contour model. This model deploys a new
edge-stopping function to control the direction of the evolution and to stop the evolving contour at weak or
blurred edges. An implementation of the model requires the use of selective binary and Gaussian filtering
regularized level set (SBGFRLS) method. The method uses either a selective local or global segmentation
property. It penalizes the level set function to force it to become a binary function. This procedure is followed by
using a regularisation Gaussian. The Gaussian filters smooth the level set function and stabilises the evolution
process. One of the merits of our proposed model stems from the ability to initialise the contour anywhere inside
the image to extract object boundaries. The proposed method is found to perform well, notably when the
intensities inside and outside the object are homogenous. Our method is applied with satisfactory results on
various types of images, including synthetic, medical and Arabic-characters images.
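The paper's new edge-stopping function is not reproduced above, but the classic form it varies, g(|∇I|) = 1 / (1 + |∇I|²), illustrates the intended behaviour: values near 1 in flat regions let the contour evolve, values near 0 at strong edges halt it.

```python
def edge_stop(grad_mag):
    """Classic edge-stopping function g = 1 / (1 + |grad I|^2): close to 1
    in flat regions, near 0 at strong edges, slowing the contour there.
    (Illustrative standard form, not the paper's exact function.)"""
    return 1.0 / (1.0 + grad_mag ** 2)
```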
Lossless image compression using new biorthogonal waveletssipij
Even though a large number of wavelets exist, new wavelets are needed for specific applications. One of the basic wavelet categories is orthogonal wavelets, but it is hard to find wavelets that are both orthogonal and symmetric, and symmetry is required for perfect reconstruction. Hence a need for orthogonal and symmetric wavelets arises. The solution comes in the form of biorthogonal wavelets, which preserve the perfect reconstruction condition. Though a number of biorthogonal wavelets have been proposed in the literature, in this paper four new biorthogonal wavelets are proposed which give better compression performance. The new wavelets are compared with traditional wavelets using the design metrics Peak Signal to Noise Ratio (PSNR) and Compression Ratio (CR). The Set Partitioning in Hierarchical Trees (SPIHT) coding algorithm was used to perform the image compression.
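The two design metrics used for comparison can be sketched as follows, with grayscale images represented as lists of rows and the peak value assumed to be 255:

```python
import math

def psnr(orig, recon, peak=255):
    """Peak Signal-to-Noise Ratio (dB) between original and reconstructed
    images; infinite when the reconstruction is exact."""
    mse = sum((a - b) ** 2 for ra, rb in zip(orig, recon) for a, b in zip(ra, rb))
    mse /= sum(len(r) for r in orig)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def compression_ratio(raw_bits, coded_bits):
    """CR: raw image size over coded stream size; higher means more compression."""
    return raw_bits / coded_bits
```

Each candidate wavelet is then scored by encoding with SPIHT at a given rate and reporting the (CR, PSNR) pair.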
Image retrieval and re ranking techniques - a surveysipij
There is a huge amount of research work focusing on the searching, retrieval and re-ranking of images in
the image database. The diverse and scattered work in this domain needs to be collected and organized for
easy and quick reference.
Relating to the above context, this paper gives a brief overview of various image retrieval and re-ranking
techniques. Starting with an introduction to existing systems, the paper proceeds through the core
architecture of image harvesting and retrieval system to the different Re-ranking techniques. These
techniques are discussed in terms of approaches, methodologies and findings and are listed in tabular form
for quick review.
A voting based approach to detect recursive order number of photocopy documen...sipij
Photocopied documents are very common in everyday life. People are permitted to carry and present photocopied documents to avoid damage to the original documents. But this provision is misused for temporary benefit by fabricating fake photocopied documents. Fabrication of a fake photocopied document is possible only at the 2nd and higher recursive orders of photocopying. Whenever a photocopied document is submitted, it may be required to check its originality. When the document is a 1st-order photocopy, the chance of fabrication may be ignored; on the other hand, when the photocopy order is 2nd or above, fabrication may be suspected. Hence, when a photocopied document is presented, its recursive order number must be estimated to ascertain originality, which demands methods for estimating the order number of a photocopy. In this work, a voting-based approach is proposed to detect the recursive order number of a photocopied document using the exponential, extreme value and lognormal probability distributions. Detailed experimentation performed on a generated data set shows that the method exhibits an efficiency close to 89%.
An intensity based medical image registration using genetic algorithmsipij
Medical imaging plays a vital role in creating images of the human body for clinical purposes. Biomedical imaging has taken a leap by entering the field of image registration. Image registration integrates the large amount of medical information embedded in images taken at different time intervals and at different orientations. In this paper, an intensity-based real-coded genetic algorithm is used for registering two MRI images. To demonstrate the efficiency of the developed algorithm, the alignment of the image is altered and the algorithm is tested for performance. The work also compares two similarity metrics and, based on the outcome, identifies the metric best suited to the genetic algorithm.
BAYESIAN CLASSIFICATION OF FABRICS USING BINARY CO-OCCURRENCE MATRIXijistjournal
Classification of fabrics is usually performed manually which requires considerable human efforts. The goal of this paper is to recognize and classify the types of fabrics, in order to identify a weave pattern automatically using image processing system. In this paper, fabric texture feature is extracted using Grey Level Co-occurrence Matrices as well as Binary Level Co-occurrence Matrices. The Co-occurrence matrices functions characterize the texture of an image by calculating how often pairs of pixel with specific values and in a specified spatial relationship occur in an image, and then extracting statistical measures from this matrix. The extracted features from GLCM and BLCM are used to classify the texture by Bayesian classifier to compare their effectiveness.
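A minimal sketch of the co-occurrence matrix construction this paper relies on, for one pixel offset, together with one Haralick-style statistic (contrast). The number of grey levels and the offset below are illustrative choices; GLCM and BLCM differ only in how many levels the image is quantised to.

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Normalised Grey Level Co-occurrence Matrix for offset (dx, dy):
    M[i][j] is the probability that a pixel of level i has a neighbour
    of level j at that offset."""
    h, w = len(img), len(img[0])
    M = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                M[img[y][x]][img[ny][nx]] += 1
                pairs += 1
    return [[c / pairs for c in row] for row in M]

def contrast(M):
    """Haralick contrast: sum over i, j of P(i, j) * (i - j)^2."""
    return sum(p * (i - j) ** 2 for i, row in enumerate(M) for j, p in enumerate(row))

img = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 2, 2, 2], [2, 2, 3, 3]]
M = glcm(img)
```

Statistics such as contrast, energy and homogeneity read off this matrix form the feature vector passed to the Bayesian classifier.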
The development of multimedia system technology has made Content-Based Image Retrieval (CBIR) one of the outstanding areas for retrieving images from a large database. The feature vectors of the query image are compared with the feature vectors of the database images to find matching images. It is widely observed that no single algorithm is effective at extracting features from all the different kinds of natural images. Thus an intensive analysis of certain color, texture and shape extraction techniques is carried out to identify an efficient CBIR technique suited to a particular sort of images. Image extraction includes feature description and feature extraction. In this paper, we propose a Color Layout Descriptor (CLD), Gray Level Co-occurrence Matrix (GLCM) and Marker-Controlled Watershed Segmentation feature extraction technique that retrieves the matching images based on the similarity of color, texture and shape within the database. For performance analysis, the image retrieval timing of the proposed technique is calculated and compared with that of each individual feature.
Combining Generative And Discriminative Classifiers For Semantic Automatic Im...CSCJournals
The object image annotation problem is basically a classification problem and there are many different modeling approaches for the solution. These approaches can be classified into two main categories such as generative and discriminative. An ideal classifier should combine these two complementary approaches. In this paper, we present a method achieving this combination by using the discriminative power of the neural networks and the generative nature of Bayesian networks. The evaluation of the proposed method on three typical image’s database has shown some success in automatic image annotation.
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGEScseij
It is a challenging task to build a CBIR system which primarily works on texture values, as their meaning and semantics need special care to be mapped to human languages. We have considered highly textured images with properties (entropy, homogeneity, contrast, cluster shade, autocorrelation) and have mapped them using a fuzzy min-max scale with respect to their degree (high, low, medium) and technical interpretation. The developed system performs well in terms of precision and recall values, showing that the semantic gap has been reduced for CBIR based on highly textured images.
This project retrieves similar geographic images from a dataset based on extracted features.
Retrieval is the process of collecting the relevant images from a dataset containing a large number of
images. Initially, a preprocessing step removes noise from the input image using a Gaussian filter. As
the second step, the Gray Level Co-occurrence Matrix (GLCM), Scale Invariant Feature Transform
(SIFT), and Moment Invariant feature algorithms are implemented to extract features from the images.
After this process, the relevant geographic images are retrieved from the dataset using Euclidean
distance. Here, the dataset consists of 40 images in total, from which the images related to the input
image are retrieved by Euclidean distance. For SIFT to perform reliable recognition, it is important
that the features extracted from the training image be detectable even under changes in image scale,
noise, and illumination. The GLCM counts how often pairs of gray-level values occur. While the focus
is on image retrieval, the project can also be applied to detection and classification.
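The retrieval step described above reduces to ranking database images by the Euclidean distance between feature vectors. A minimal sketch, assuming the feature vectors have already been extracted (the image names and three-element vectors below are invented for illustration):

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors of equal length
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, database, k=3):
    # Rank database images by distance to the query vector; return the k closest
    ranked = sorted(database.items(), key=lambda item: euclidean(query, item[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical three-element feature vectors (e.g. a few GLCM statistics)
db = {
    "img1": [0.90, 0.10, 0.50],
    "img2": [0.20, 0.80, 0.40],
    "img3": [0.88, 0.12, 0.52],
}
print(retrieve([0.90, 0.10, 0.50], db, k=2))  # ['img1', 'img3']
```

In practice the vectors would concatenate the GLCM, SIFT and moment-invariant features mentioned above, normalized to comparable scales before distances are taken.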
Change Detection of Water-Body in Synthetic Aperture Radar ImagesCSCJournals
Change detection is the art of quantifying the changes in Synthetic Aperture Radar (SAR) images that have happened over a period of time. Remote sensing has been the parent technique for change detection analysis. This paper empirically investigates the impact of applying combinations of texture features with different classification techniques to separate water bodies from non-water bodies. First, the images are classified using unsupervised Principal Component Analysis (PCA) based K-means clustering for dimension reduction. Then texture features such as Energy, Entropy, Contrast, Inverse Differential Moment, Directional Moment and the Median are extracted using the Gray Level Co-occurrence Matrix (GLCM), and these features are fed to Linear Vector Quantization (LVQ) and Support Vector Machine (SVM) classifiers. The paper aims to apply a combination of the texture features in order to significantly improve detection accuracy. Such detection analysis informs management and policy decisions for long-term construction projects by predicting preventable losses.
Application of Image Retrieval Techniques to Understand Evolving Weatherijsrd.com
Multispectral satellite images provide valuable information for understanding the evolution of various weather systems, such as tropical cyclones, shifts of the inter-tropical convergence zone, and movements of various troughs; accurate prediction and estimation will save lives and property. This work deals with the development of an application that enables users to search a database of meteorological satellite images by gray level, texture, or shape features. The gray-level feature is extracted with the histogram method; the texture feature with the gray-level co-occurrence method and a wavelet approach; and the shape feature vector with morphological operations. The similarity between the query image and database images is calculated using the Euclidean distance. The performance of the system is evaluated using precision.
Improving Performance of Texture Based Face Recognition Systems by Segmenting...IDES Editor
Textures play an important role in the recognition of images. This paper investigates the
performance of three texture-based feature extraction methods for face recognition. The
methods compared are the Grey Level Co-occurrence Matrix (GLCM), Local Binary Pattern
(LBP) and Elliptical Local Binary Template (ELBT). Experiments were conducted on a facial
expression database, the Japanese Female Facial Expression (JAFFE) database. Across all
facial expressions, LBP with 16 vicinity pixels is found to be the best face recognition method
among those tested. Experimental results show that classification based on segmenting the
face region improves recognition accuracy.
BEHAVIOR STUDY OF ENTROPY IN A DIGITAL IMAGE THROUGH AN ITERATIVE ALGORITHM O...ijscmcj
Image segmentation is a critical step in computer vision tasks, constituting an essential issue for pattern
recognition and visual interpretation. In this paper, we study the behavior of entropy in digital images
through an iterative algorithm of mean shift filtering. The order of a digital image in gray levels is defined.
The behavior of Shannon entropy is analyzed and then compared, taking into account the number of
iterations of our algorithm, with the maximum entropy that could be achieved under the same order. The
use of equivalence classes is introduced, which allows us to interpret entropy as a hyper-surface in real
m-dimensional space. The difference between the maximum entropy of order n and the entropy of the image
is used to group the iterations, in order to characterize the performance of the algorithm.
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURESijcseit
Grey Level Co-occurrence Matrices (GLCM) are one of the earliest techniques used for image texture
analysis. In this paper we define a new feature, called trace, extracted from the GLCM, and discuss its
implications for texture analysis in the context of Content Based Image Retrieval (CBIR). The theoretical
extension of the GLCM to n-dimensional gray-scale images is also discussed. The results indicate that trace
features outperform Haralick features when applied to CBIR.
Segmentation by Fusion of Self-Adaptive SFCM Cluster in Multi-Color Space Com...CSCJournals
This paper proposes a new, simple, and efficient segmentation approach that could find diverse applications in pattern recognition as well as in computer vision, particularly in color image segmentation. First, we choose the best segmentation components among six different color spaces. Then, Histogram and SFCM techniques are applied for initialization of segmentation. Finally, we fuse the segmentation results and merge similar regions. Extensive experiments have been taken on Berkeley image database by using the proposed algorithm. The results show that, compared with some classical segmentation algorithms, such as Mean-Shift, FCR and CTM, etc, our method could yield reasonably good or better image partitioning, which illustrates practical value of the method.
A comparative study of dimension reduction methods combined with wavelet tran...ijcsit
In this paper, we present a comparative study of dimension reduction methods combined with the wavelet
transform. The study is carried out for mammographic image classification and is performed in three stages:
extraction of features characterizing the tissue areas, then dimension reduction by four different
discrimination methods, and finally the classification phase. We then compared the performance of two
classifiers, KNN and decision tree.
Results show that the classification accuracy in some cases reached 100%. We also found that the
classification accuracy generally increases with the dimension but stabilizes after a certain value,
approximately d = 60.
A comparative study on content based image retrieval methodsIJLT EMAS
Content-based image retrieval (CBIR) is a method of
finding images in a huge image database according to a person's
interests. Content-based here means that the search involves
analyzing the actual content present in the image. As databases of
images grow day by day, researchers are searching
for better techniques for retrieving images while maintaining good
efficiency. This paper presents the visual features and various
methods for image retrieval from huge image databases.
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION cscpconf
Texture is the term used to characterize the surface of a given object or phenomenon and is an
important feature used in image processing and pattern recognition. Our aim is to compare
various texture analysis methods based on time complexity and classification accuracy. The
project describes texture classification using the Wavelet Transform and the Co-occurrence
Matrix. Features of a sample texture are compared with a database of different textures. For the
wavelet transform we use the Haar, Symlets and Daubechies wavelets. We find that the 'Haar'
wavelet proves to be the most efficient method in terms of the performance parameters mentioned
above. Comparison of the Haar wavelet and the co-occurrence matrix method of classification
also favors Haar: although the time requirement of the latter method is high, it gives excellent
classification accuracy except when the image is rotated.
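For reference, one level of the 1-D Haar transform mentioned above is just pairwise averages and differences. A minimal sketch (the input signal is invented, and real wavelet libraries use a √2 normalization rather than the plain mean):

```python
def haar_step(signal):
    # One level of the 1-D Haar transform: pairwise averages (approximation)
    # and pairwise half-differences (detail); input length must be even
    pairs = list(zip(signal[0::2], signal[1::2]))
    avg = [(a + b) / 2 for a, b in pairs]
    diff = [(a - b) / 2 for a, b in pairs]
    return avg, diff

print(haar_step([4, 2, 6, 8]))  # ([3.0, 7.0], [1.0, -1.0])
```

Applying the step recursively to the averages yields the multi-level decomposition whose coefficients serve as texture features.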
A novel predicate for active region merging in automatic image segmentationeSAT Publishing House
Abstract: Image segmentation is an elementary task in computer vision and image processing. This paper deals with automatic image segmentation by a region merging method. A region merging algorithm faces two essential problems: the order of merging and the stopping criterion. These two problems are solved by a novel predicate based on the sequential probability ratio test and a minimal cost criterion. We propose an Active Region merging algorithm that utilizes the information acquired from perceiving edges in color images in the L*a*b* color space. By means of a color gradient recognition method, pixels with no edges are clustered and considered alone to recognize a preliminary partition of the input image. The color information, along with a region growth map consisting of completely grown regions, is used to perform Active Region merging to combine regions with similar characteristics. Experiments on real natural images demonstrate the performance of the proposed method. Index Terms: Adaptive threshold generation, CIE L*a*b* color gradient, region merging, Sequential Probability Ratio Test (SPRT).
CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ...ijitcs
The aim of this paper is to compare cell tracking using the active shape model (ASM) and the
active appearance model (AAM) algorithms, assessing the tracking quality of the two methods
for following the mobility of living cells; a sensitive and accurate cell tracking system is
essential for cell motility studies. The ASM and AAM algorithms have proved to be successful
methods for matching statistical models. The experimental results indicate the ability of the
(AAM) method.
Texture classification of fabric defects using machine learning IJECEIAES
In this paper, a novel algorithm for automatic fabric defect classification is proposed, based on the combination of a texture analysis method and a support vector machine (SVM). Three texture methods were used and compared: GLCM, LBP, and LPQ, each combined with the SVM classifier. The system was tested using the TILDA database, and a comparative study of the performance and running time of the three methods was carried out. The results show that LBP is the best method for recognition and classification, and that the SVM is a suitable classifier for such problems. We also demonstrate that some defects are easier to classify than others.
A novel method is proposed for image segmentation based on probabilistic field theory. This model assumes that all pixels of an image, together with some unknown parameters, form a field. According to this model, the pixel labels are generated by a compound function of the field. The main novelty of the model is that it considers both the features of the pixels and the interdependence among them. The parameters are generated by a novel spatially variant mixture model and estimated by an expectation-maximization (EM)-based algorithm; thus, spatial smoothness is simultaneously imposed on the prior knowledge. Numerical experiments are presented in which the proposed method and other mixture-model-based methods were tested on synthetic and real-world images. The results demonstrate that our algorithm achieves competitive performance compared to other methods.
A Combined Method of Fractal and GLCM Features for MRI and CT Scan Images Classification
Signal & Image Processing : An International Journal (SIPIJ) Vol.5, No.4, August 2014
A COMBINED METHOD OF FRACTAL AND
GLCM FEATURES FOR MRI AND CT SCAN
IMAGES CLASSIFICATION
Redouan Korchiyne1,2, Sidi Mohamed Farssi2, Abderrahmane Sbihi3, Rajaa Touahni1, Mustapha Tahiri Alaoui2
1 LASTID Laboratory, Ibn Tofail University - Faculty of Sciences, Kenitra, Morocco
2 Laboratory of Medical Images and Bioinformatics, Cheikh Anta Diop University - Polytechnic High School, Dakar, Senegal
3 LTI Laboratory, Abdelmalek Essaadi University - ENSA, Tangier, Morocco
ABSTRACT
Fractal analysis has been shown to be useful in image processing for characterizing shape and gray-scale
complexity. The fractal feature is a compact descriptor that gives a numerical measure of the degree of
irregularity of medical images. However, this global descriptor does not capture properties of the local
image structure. In this paper, we present a combination of this parameter, based on box counting, with
GLCM features. This combination has produced good results, especially in the classification of medical
textures from MRI and CT scan images of trabecular bone. The method has the potential to improve
clinical diagnostic tests for osteoporosis pathologies.
KEYWORDS
Fractal analysis, GLCM Features, Medical images, Osteoporosis pathologies.
1. INTRODUCTION
Osteoporosis is a progressive bone disease characterized by a decrease in bone mass and
density, which can lead to an increased risk of fracture [1] (see Figure 1). Several methods have
been applied to characterize bone texture: genetic algorithms [2], multiresolution analysis [3],
a hybrid algorithm combining artificial neural networks and genetic algorithms [4], and
texture analysis methods (Gabor filters, wavelet transforms and the fractal dimension) [5].
Texture analysis is important in many applications of computer image analysis for classification
or segmentation of images based on local spatial variations of intensity or color. A feature is an
image characteristic that can capture a certain visual property of the image. Four major families
of methods are used to characterize textures in an image: statistical methods (the co-occurrence
method) [6] [7], geometrical methods (Voronoi tessellation features, fractals) [8] [9], model-based
methods (Markov random fields) [10] [11], and signal processing methods (Gabor filters, wavelet
transforms and curvelets) [7] [11] [12]. The most widely used statistical methods are co-occurrence
features [6] [7].
Fractal analysis has been successfully applied in image processing [13]. Applications in medical
imaging concern modelling the constitution of tissues and organs and analyzing their different
types [14] [15]. Fractal objects are characterized by a large degree of heterogeneity, self-similarity,
and the lack of a well-defined scale. The notion of "self-similarity" means that the small-scale
structures of a fractal set resemble its large-scale structures [16] [17] [18].
DOI : 10.5121/sipij.2014.5409
Texture classification is one of the four problem domains in the field of texture analysis [19].
Texture classification based on fractal geometry has demonstrated a correlation between texture and
fractal dimension [20]. In this paper, we use the combination of the fractal dimension and the
co-occurrence matrix (GLCM) features to describe medical texture and apply it to trabecular bone
texture classification.
Figure 1. Left: normal bone, right: osteoporotic bone [1].
2. FEATURE EXTRACTION METHODS
In this work, we propose to use two common feature extraction algorithms based on global fractal
descriptors and the Grey Level Co-occurrence Matrix (GLCM). In this section, an overview of these
two algorithms is given.
Before developing the proposed feature extraction methods based on fractal and GLCM features
and discussing their properties, we briefly review fractal theory and the co-occurrence matrix as
applied in our work.
2.1. Fractal theory
Fractal geometry was introduced and developed by the mathematician Benoit Mandelbrot to
characterize objects whose properties are unusual in classical geometry [21] [22]; the concept helps
to interpret objects that no classical law explains. It was designed to study irregular objects in the
plane or in space, and is thus better suited to the real world. Fractal geometry has experienced
significant growth in recent years; a review of the theory is given by R. Lopes [23]. In
medical image analysis this theory has provided solutions to several problems [24] and is the
source of many applications [18] [25].
A texture can be seen as a repetition of similar patterns randomly distributed in the image. The
fractal approach yields measurements that are invariant to translation, rotation and even scaling.
Many approaches have been developed and implemented to characterize real textures using concepts
of this geometry [26]. The fractal dimension is a fractional parameter greater than the topological
dimension. It is a helpful descriptor of an image because it quantifies the irregularity of the texture
[27] [28].
A practical way to obtain the fractal dimension is as follows: for each object, a graph of Log(N(ε))
against Log(ε) is established; the correlation between N(ε) and ε is then adjusted by linear
regression (see Figure 3). The fractal dimension is estimated with the least-squares method by
computing the slope of the Log-Log curve formed by the (Log(N(ε)), Log(ε)) points. Several methods
have been introduced to calculate the fractal dimension [29] [30]. The box-counting method remains
the most practical way to estimate it [31]; the box fractal dimension is a simplification of the
Hausdorff dimension for non-strictly self-similar objects [32]. Given a binary image, it is subdivided
into a grid of size MxM, where the side of each box is ε and N(ε) represents the number of boxes
containing at least one pixel; the fractal dimension is defined as follows:
D_f = − lim_{ε→0} Log(N(ε)) / Log(ε)

where
N(ε) : number of boxes containing at least one pixel
ε : side length of a box
Methods have been introduced to reduce the number of quantization levels, and the utility of the
fractal parameter and its variants for characterizing transformed texture images has been
demonstrated [26]. However, since this parameter alone is not sufficient to discriminate visually
different surfaces, notions of homogeneity of the fractal dimension have been examined. The
Hausdorff dimension is the most studied mathematically; in other work we used the Hausdorff
multifractal spectrum to describe the different fractalities covering a medical image [33]. It is
generally the best known dimension but, individually, probably the least often computed.
Table 1 shows that each shape can be decomposed into N similar copies of itself scaled by a
factor of ε = 0.5. The number of half-scaled copies N equals 2, 4 and 8 for the line, square and
cube respectively. This leads to fractal dimensions of D = −Log(N(ε)) / Log(ε) = 1, 2 and 3, as
expected. The Sierpinski triangle, however, has by construction only N = 3 half-scaled copies of
itself, leading to the non-integral fractal dimension D = −Log(3) / Log(0.5) ≈ 1.585.
Table 1. Traditional geometric shapes decomposed into N similar copies of themselves by a factor of
ε = 0.5, and the corresponding fractal dimension D = −Log(N(ε)) / Log(ε).

Shape                 ε     N(ε)   D
Line                  0.5   2      −Log(2) / Log(1/2) = 1
Square                0.5   4      −Log(4) / Log(1/2) = 2
Cube                  0.5   8      −Log(8) / Log(1/2) = 3
Sierpinski triangle   0.5   3      −Log(3) / Log(1/2) ≈ 1.585
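The box-counting procedure above can be sketched in a few lines, assuming the binary image is given as a list of foreground pixel coordinates; the least-squares slope of Log(N(ε)) against Log(1/ε) is the dimension estimate (the filled-square test case is chosen so the expected value is known):

```python
import math

def box_count(pixels, eps):
    # Number of eps x eps boxes containing at least one foreground pixel
    return len({(i // eps, j // eps) for (i, j) in pixels})

def box_dimension(pixels, size):
    # Least-squares slope of Log(N(eps)) versus Log(1/eps),
    # halving the box side at each step
    sides = []
    eps = size // 2
    while eps >= 1:
        sides.append(eps)
        eps //= 2
    xs = [math.log(1.0 / e) for e in sides]
    ys = [math.log(box_count(pixels, e)) for e in sides]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A filled 64 x 64 square: every pixel is foreground, so D should be 2
filled = [(i, j) for i in range(64) for j in range(64)]
print(round(box_dimension(filled, 64), 3))  # 2.0
```

A Sierpinski-like point set fed to the same routine would yield a slope near 1.585, matching Table 1.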
2.2. The Grey Level Co-occurrence Matrix (GLCM) Features
Grey-Level Co-occurrence Matrix (GLCM) texture measurements have been the workhorse of
image texture analysis since they were proposed by Haralick [34] [35], who introduced 14
statistical features. GLCM is a statistical method of examining texture that considers the spatial
relationship of pixels [36] [37]. The GLCM functions characterize the texture of an image by
calculating how often pairs of pixels with specific values in a specified spatial relationship occur
in the image. The features are generated by computing each feature for each of the co-occurrence
matrices obtained using the directions 0°, 45°, 90° and 135°, then averaging the four values [38]
(see Figure 3).
We illustrate this concept with a binary model and its matrix MCoo for d = (0°, 1). The following
figure (see Figure 2) shows how several values of the GLCM of the 5-by-5 image I are calculated.
Element (0,0) of the GLCM contains the value 6 because there are 6 instances in the image where
two horizontally adjacent pixels have the values 0 and 0. Element (1,1) contains the value 10
because there are 10 instances in the image where two horizontally adjacent pixels have the
values 1 and 1. The graycomatrix function continues this processing to fill in all the values of the
GLCM.
I =
1 0 0 0 0
1 1 0 0 0
1 1 1 0 0
1 1 1 1 0
1 1 1 1 1

MCoo =
6  0
4 10
Figure 2. Binary model and corresponding co-occurrence matrix.
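The counting rule of Figure 2 can be reproduced directly; a minimal sketch for the horizontal displacement d = (0°, 1) (`glcm_horizontal` is a hypothetical helper written for this illustration, not MATLAB's graycomatrix):

```python
def glcm_horizontal(image, levels):
    # Count co-occurrences of gray levels for each pixel and its
    # right-hand neighbour (displacement d = (0 degrees, 1))
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

I = [
    [1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
]
print(glcm_horizontal(I, 2))  # [[6, 0], [4, 10]]
```

The result matches Figure 2: six (0,0) pairs and ten (1,1) pairs.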
Some of the GLCM features used in this work are:
• Contrast
The contrast returns a measure of the intensity contrast between a pixel and its neighbour over the
whole image.

Contrast = \sum_{i,j=0}^{N-1} P_{ij} (i - j)^2

Contrast is 0 for a constant image.
• Correlation
The correlation returns a measure of how correlated a pixel is to its neighbour over the whole
image.

Correlation = \sum_{i,j=0}^{N-1} P_{ij} (i - \mu)(j - \mu) / \sigma^2

Correlation is 1 or -1 for a perfectly positively or negatively correlated image, and NaN for a
constant image.
• Energy
The energy returns the sum of squared elements in the GLCM. Energy is 1 for a constant image.

Energy = \sum_{i,j=0}^{N-1} P_{ij}^2
• Homogeneity
The homogeneity returns a value that measures the closeness of the distribution of elements in the
GLCM to the GLCM diagonal. Homogeneity is 1 for a diagonal GLCM.

Homogeneity = \sum_{i,j=0}^{N-1} P_{ij} / (1 + (i - j)^2)
P_{ij} = element (i, j) of the normalized symmetrical GLCM.
N = number of gray levels in the image, as specified by the number of levels used when quantizing
the GLCM.
μ = the GLCM mean, calculated as:
\mu = \sum_{i,j=0}^{N-1} i P_{ij}
σ² = the variance of the intensities of all reference pixels in the relationships that contributed to
the GLCM, calculated as:

\sigma^2 = \sum_{i,j=0}^{N-1} P_{ij} (i - \mu)^2
Figure 3. (a) Example from the Brodatz database; (b) computed box dimension; (c) GLCM matrix.
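The four statistics above, together with μ and σ², can be computed directly from a normalized GLCM. A minimal sketch (the diagonal test matrix is invented so the expected values, contrast 0 and homogeneity 1, are known in advance; correlation is undefined for a constant image, where σ² = 0):

```python
def glcm_features(counts):
    # Normalize the raw co-occurrence counts to probabilities P_ij,
    # then evaluate contrast, correlation, energy and homogeneity
    n = len(counts)
    total = sum(sum(row) for row in counts)
    P = [[c / total for c in row] for row in counts]
    cells = [(i, j) for i in range(n) for j in range(n)]
    mu = sum(i * P[i][j] for i, j in cells)                # GLCM mean
    var = sum(P[i][j] * (i - mu) ** 2 for i, j in cells)   # GLCM variance
    return {
        "contrast": sum(P[i][j] * (i - j) ** 2 for i, j in cells),
        "correlation": sum(P[i][j] * (i - mu) * (j - mu)
                           for i, j in cells) / var,
        "energy": sum(P[i][j] ** 2 for i, j in cells),
        "homogeneity": sum(P[i][j] / (1 + (i - j) ** 2) for i, j in cells),
    }

f = glcm_features([[5, 0], [0, 5]])  # perfectly diagonal GLCM
print(f["contrast"], f["homogeneity"])  # 0.0 1.0
```

As described earlier, these values would be computed for the 0°, 45°, 90° and 135° matrices and the four results averaged.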
3. PROPOSED METHOD
3.1. Principle
Several authors have proposed to analyze medical images using texture statistics or other
features such as wavelets, Gabor filters, or the fractal dimension. Texture analysis based on the
fractal concept was introduced by Pentland in 1984 [30]. In this paper, the proposed classification
method is based on the combination of fractal features and the Haralick features extracted from
the GLCM matrix [36] [37].
The proposed classification approach, shown in Figure 4, relies on the properties of the Hölder transform of the image: it minimizes the bit budget spent on non-significant coefficients while preserving the most significant parts of the fractal and GLCM features. It also exploits the efficiency of fractal representations in detecting self-similarities. In the following subsections, the fractal feature procedure, the GLCM features, and the proposed combined feature scheme are explained and clarified. The method consists of four main steps: thresholding the regions of interest, computing the Hölder image, extracting the most discriminative texture features, and building a classifier. The images were analyzed to obtain texture parameters such as the box-counting fractal dimension and the co-occurrence parameters. The fractal dimension showed a moderate negative correlation with the GLCM features. A one-tailed test was used to determine the differences between the texture parameters of healthy and osteoporotic patients. Figure 4 describes the data set classification process of the proposed method.
3.2. Features Extraction
3.2.1. Fractal Features
The original image I0(i, j) is transformed into the Hölder image IH(i, j), and a multifractal spectrum is plotted using the parameter couple (α, f(α)) [33].
\alpha_k = \lim_{\lambda \to 0} \alpha_k(\lambda) = \lim_{\lambda \to 0} \frac{\log(\mu(S_k(\lambda)))}{\log(\lambda)}

where α_k is the Hölder exponent of the subspace S_k.
Then, each value α of the Hölder exponent defines a fractal set E(α), whose fractal dimension will be calculated; the support of the image I0(i, j) is therefore formed by the union of fractal sets of different dimensions:

S_k = E(\alpha_k) = \{(i, j) : \alpha(i, j) = \alpha_k\}

and

I_H(\alpha(i, j)) = \bigcup_k E(\alpha_k)
The Hausdorff dimension is a way of calculating the fractal dimension based on the box-counting method. This parameter is given by:

f_\lambda(\alpha_k) = - \frac{\log(N_\lambda(\alpha_k))}{\log(\lambda)}

where N_λ(α_k) is the number of boxes S_k containing the value α_k.
The limiting value of the multifractal spectrum f(α) is calculated by linear regression over a set of points in a bi-logarithmic diagram of log(N_λ(α_k)) versus −log(λ):

f(\alpha_k) = \lim_{\lambda \to 0} f_\lambda(\alpha_k)
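As an illustration of the box-counting regression described above, here is a simplified mono-fractal sketch (not the paper's full multifractal procedure): it counts occupied boxes at several scales and fits the slope of the bi-logarithmic diagram:

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal dimension of a binary image by box counting:
    fit log N(s) against log(1/s) over a range of box sizes s."""
    sizes = [2 ** k for k in range(1, int(np.log2(min(mask.shape))))]
    counts = []
    for s in sizes:
        # Count boxes of side s containing at least one foreground pixel
        h, w = mask.shape
        boxed = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxed.any(axis=(1, 3)).sum())
    # The slope of log N(s) versus log(1/s) is the box-counting dimension
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(coeffs[0])

# A filled square should give a dimension close to 2
full = np.ones((256, 256), dtype=bool)
print(box_counting_dimension(full))  # ≈ 2.0
```

For a fractal set such as a trabecular bone ROI, the slope falls strictly between 1 and 2, as in the sample values of Table 2.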
3.2.2. GLCM Features
A GLCM is a function of co-occurring gray-scale values at a given offset over an image, used for texture classification. We use the Matlab Image Processing Toolbox to compute the four parameters (cf. Section 2.2).
In this work, we use 40 samples from two classes of medical textures. The GLCM is computed with an offset of 2 pixels, and the values obtained in the four directions 0°, 45°, 90°, and 135° are averaged. Four features are then extracted from the GLCM matrices: contrast, correlation, energy, and homogeneity.
Table 2. Results for some of the selected features.

Medical    Fractal     Contrast        Correlation     Energy          Homogeneity
Images     Dimension   Min     Max     Min     Max     Min     Max     Min     Max
Sample 1   1.9295      1.3243  1.5397  0.1558  0.2765  0.0924  0.0950  0.6276  0.6489
Sample 2   1.8742      1.4691  1.7972  0.2298  0.3687  0.0665  0.0701  0.6040  0.6310
3.2.3. Combined Method
In the previous sections, a set of features has been analyzed for the characterization of medical images, and a classifier has been presented for each to address osteoporosis detection. In this section, we aim to combine the information of the different classifiers so as to enhance the overall classification performance.
Information fusion emerged to handle very large quantities of multi-source data [39]: it combines information from multiple sources to support decision making. Fusion methods have been adapted and developed for applications in image processing, and especially for classification. The goal here is not to reduce the redundancy in the information from several sources, but rather to exploit it in order to improve decision making. In the architecture used here, the fusion combines data (most often data or parameters from sensors or from different extraction methods) for classification: the features are classified separately, according to the potential of the selected classifier, and the results are then merged.
The goal is thus to combine the outputs of the SVM classifiers into a single scalar score, which is used to make the final decision and also gives information on the confidence in that decision. In this context, our work focuses on the one hand on comparing the two methods, and on the other hand on their combination by vote.
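The paper does not spell out the vote rule, so the following is one plausible sketch of fusing two SVM decision scores into a single label: agreement keeps the common label, and disagreement is resolved by the score of larger magnitude (a hypothetical stand-in for the confidence mentioned above):

```python
import numpy as np

def fuse_by_vote(scores_fractal, scores_glcm):
    """Combine the signed decision scores of two classifiers into one label
    per sample: if they agree, keep that label; if they disagree, trust the
    score with the larger absolute value."""
    fused = []
    for sf, sg in zip(scores_fractal, scores_glcm):
        if np.sign(sf) == np.sign(sg):
            fused.append(int(np.sign(sf)))
        else:
            fused.append(int(np.sign(sf if abs(sf) > abs(sg) else sg)))
    return fused

print(fuse_by_vote([0.8, -0.2, 0.3], [0.5, -0.7, -0.9]))  # → [1, -1, -1]
```

In the third sample the classifiers disagree, and the GLCM score (−0.9) wins over the fractal score (0.3) because its magnitude is larger.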
Figure 4. The flow chart of proposed method
4. EXPERIMENTAL RESULTS AND DISCUSSION
4.1. The Data Set
This paper focuses on the texture classification of medical images. The images were obtained from the texture database INSERM U 703 [40], a set of 40 medical images from MRI (Magnetic Resonance Imaging) and CT scan (Computed Tomography) of a specific region, namely the ROI (Region Of Interest) of trabecular bone, including 20 normal subjects and 20 subjects with osteoporosis. In this work, we present the experimental details of texture classification based on fractal geometry and the GLCM matrix. We used 20 ROIs from the MRI and CT scan images of the medical image database [40].
Sample 1: Normal ROI        Sample 2: Pathological ROI
Figure 5. Samples from Medical Images Database [40]
4.2. Medical images Classification
In this section, experiments and analyses are carried out to evaluate the performance of the proposed method. For each sample, fractal features are extracted using the box-counting method and 4 features are selected from the GLCM matrix [34][36][41]. Due to the limited number of samples, we evaluate the method with diverse training and test sets, selected automatically.
The classification was applied to 40 subjects (20 normal regions and 20 abnormal regions). The Support Vector Machine (SVM) classifier [42][43][44][45] is used in this paper. SVMs [46, 47] have recently become popular as supervised classifiers of MRI data due to their high performance, their ability to deal with large high-dimensional datasets, and their flexibility in modelling diverse sources of data [48, 49, 50]. The most useful version is the one with a linear kernel, whereby the discriminant function between two classes is a linear combination of the inputs. The importance of a feature is associated with the absolute value of the corresponding weight in the linear combination. The SVM was originally derived for two classes. The classifier was implemented in Matlab 7.8 and was investigated using a linear kernel function [22]. The performance of the classifier models was measured [23], and the parameters calculated are: (i) True Positives (TP), when the system correctly classifies subjects as normal; (ii) False Positives (FP), when the system wrongly classifies abnormal subjects as normal; (iii) True Negatives (TN), when the system correctly classifies subjects as abnormal; and (iv) False Negatives (FN), when the system wrongly classifies normal subjects as abnormal. For the overall performance, we provide the Correct Classification Rate (CCR), which gives the percentage of correctly classified subjects.
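A linear-kernel SVM with a CCR computed as above can be sketched with scikit-learn (a stand-in for the Matlab implementation used in the paper); the feature vectors below are synthetic placeholders loosely shaped like Table 2 (fractal dimension plus four GLCM features), not the paper's data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical 5-dimensional feature vectors (fractal dimension + 4 GLCM
# features) for 20 "normal" and 20 "pathological" ROIs
normal = rng.normal(loc=[1.93, 1.43, 0.22, 0.09, 0.64], scale=0.05, size=(20, 5))
patho = rng.normal(loc=[1.87, 1.63, 0.30, 0.07, 0.62], scale=0.05, size=(20, 5))
X = np.vstack([normal, patho])
y = np.array([1] * 20 + [0] * 20)      # 1 = normal, 0 = abnormal

clf = SVC(kernel="linear").fit(X, y)   # linear kernel, as in the paper
ccr = 100.0 * (clf.predict(X) == y).mean()
print(f"CCR on the training set: {ccr:.2f}%")
```

With a linear kernel, the learned weight vector clf.coef_ gives the importance of each feature, as noted above; in practice the CCR should of course be estimated on held-out test sets rather than on the training data.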
4.2.1. Statistical measures
These results are presented as statistical measures such as the confusion matrix and the Correct Classification Rate. In the first subsection, we briefly define these statistical measures; we then present the results obtained on real trabecular bone textures.
Confusion Matrix: The confusion matrix is a 2-by-2 numeric array of diagnostic counts, used to measure the quality of a classification system. The first column indicates the number of samples classified as positive, with the number of true positives in the first row and the number of false positives in the second row. The second column indicates the number of samples classified as negative, with the number of false negatives in the first row and the number of true negatives in the second row.
Correct classifications appear in the diagonal elements, and errors appear in the off-diagonal elements. Inconclusive results are considered errors and counted in the off-diagonal elements. Table 3 shows the confusion matrix of a system classifying two classes, 1 and 0. Remember that for this predictable attribute, 1 means Normal and 0 means Abnormal.
Table 3. Confusion matrix for performance evaluation.

                        Predicted Class
True Class         1                       0
1                  TP (True Positive)      FN (False Negative)
0                  FP (False Positive)     TN (True Negative)
• TP (resp. TN) represents the number of subjects of class 1 (resp. class 0) well classified
by the system.
• FN (resp. FP) represents the number of subjects of class 1 (resp. class 0) that have been
classified by the system as subjects of class 0 (resp. class 1).
• T = TP+FN+FP+ TN is the sum of subjects of both classes.
Performance evaluation: To evaluate the performance of trabecular bone classification, the sensitivity and specificity are used to distinguish the subjects. Sensitivity and specificity are two measures that help to assess the significance of a test related to the presence or absence of the pathology. Equations (1) and (2) are used to calculate these two parameters, respectively:

Sensitivity = \frac{TP}{TP + FN} \times 100   (1)

Specificity = \frac{TN}{TN + FP} \times 100   (2)
Correct Classification Rate: The Correct Classification Rate, noted CCR and also known as the success rate, is the sum of good detections, represented by TP and TN, divided by the total number of samples T:

CCR = \frac{TP + TN}{T} \times 100
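The three measures can be computed directly from the confusion-matrix counts; a minimal sketch with hypothetical counts for the 40 subjects:

```python
def performance(tp, fn, fp, tn):
    """Sensitivity, specificity and correct classification rate (percent)."""
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    ccr = 100.0 * (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, ccr

# Example with 40 subjects (20 normal, 20 abnormal), hypothetical counts:
print(performance(tp=18, fn=2, fp=3, tn=17))  # → (90.0, 85.0, 87.5)
```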
4.2.2. Experiments
The objective of our study was to investigate texture classification analysis in an effort to
differentiate between normal and pathological subjects of osteoporosis pathology.
Comparative results, with the percentage of correct classification for our proposed method and for classical feature extraction methods, are summarized in Table 3.
We present in Figure 6 two box plots of the training and classified samples using the SVM classifier.
Figure 6. The training and classified sets of different samples using the SVM classifier: (a) GLCM features; (b) combined features.
Table 3 shows, in percent, the confusion matrix of each method, presenting the average performance of the SVM classifier for each individual class, with the actual class on the left and the predicted class above. In this table, the diagonal cells show the percentage of correctly classified images and the off-diagonal cells the percentage of erroneously classified images.
Table 3. The average performance of SVM classifier using different methods.
Table 4 shows that the classification results using the SVM classifier with our proposed method succeed in differentiating between patients. The CCR is 77.50% (sensitivity, 78.50%; specificity, 76.50%) for the fractal features and 78.50% (sensitivity, 79.20%; specificity, 77.80%) for the GLCM features. The best classification results were obtained with the combination of GLCM features and fractal features: the best percentage of correct classifications achieved was 85.71% (sensitivity, 90.23%; specificity, 81.19%).
Table 4. Percentage classification accuracy (sensitivity; specificity) for each feature extraction method.
Fractal Features GLCM Features Proposed Method
Sensitivity (%) 78.50 79.20 90.23
Specificity (%) 76.50 77.80 81.19
CCR (%) 77.50 78.50 85.71
5. CONCLUSIONS AND FUTURE WORK
In this article, we demonstrated that texture classification based on a combination of fractal and GLCM features yields the best classification rate. The fractal dimension alone is not sufficient to describe a texture; a combination with other features is therefore needed to obtain good results.
This paper presented a trabecular bone classification method using a combined approach based on fractal and GLCM features, with novel extracted features.
According to experimental results performed on real MRI and CT Scan Medical images, the
classification method provides a correct classification rate of 85.71%.
Further research on a larger number of fractal features is required to validate the results of this study and to find additional shape and texture features that may provide information for the prevention of osteoporosis from the initial MRI and CT scan images, and thus improve the final classification rate. Additionally, the use of other classifiers could yield a better classification scheme and will be examined.
ACKNOWLEDGMENT
The authors extend their sincere thanks to Dr. Patrick Dubois, university professor and hospital practitioner at the Institute of Medical Technology, Regional Hospital of Lille, France, for his help with the image database.
REFERENCES
[1] International Osteoporosis Foundation, http://www.iofbonehealth.org/what-is-osteoporosis.
[2] Jennane R, Almhdie-Imjabber A, Hambli R, Ucan ON, Benhamou CL, (2010) Genetic algorithm and
image processing for osteoporosis diagnosis, Annual International Conference of the IEEE
Engineering in Medicine and Biology Society (EMBC), IEEE Xplore Digital Library, pp 5597 –
5600.
[3] Khider M, Taleb-Ahmed A, Dubois P, Haddad B, (2007) Classification of trabecular bone texture
from MRI and CT scan images by multi resolution analysis, The 1st International Conference on
Bioinformatics and Biomedical Engineering, IEEE Xplore Digital Library, Print ISBN: 1-4244-1120-
3, pp 1339 – 1342.
[4] George C. Anastassopoulos, Adam Adamopoulos, Georgios Drosos, Konstantinos Kazakos, Harris
Papadopoulos, (2013) Diagnostic Feature Extraction on Osteoporosis Clinical Data Using Genetic
Algorithms, Artificial Intelligence Applications and Innovations, pp 302-310.
[5] J.T. Pramudito, S. Soegijoko, T.R. Mengko, F.I. Muchtadi, R.G. Wachjudi, (2007) Trabecular Pattern
Analysis of Proximal Femur Radiographs for Osteoporosis Detection, Journal of Biomedical
Pharmaceutical Engineering, ISSN: 1793-4532, Vol.1, N°1, pp 45-51.
[6] Haralick, R.M.; Shanmugam, K.; Dinstein, I., (1973) Textural Features for Image Classification, IEEE Transactions on Systems, Man and Cybernetics, Vol. 3, No. 6, pp 610–621.
[7] Arivazhagan, S. and L. Ganesan, (2006) Texture classification using wavelet transform. Pattern
Recog. Lett., Vol. 27, pp 1875-1883.
[8] Unser, M., (1986) Local linear transform for texture measurements. Signal Proc., Vol. 11, pp 61-79.
[9] Mandelbrot, B., (1983) The Fractal Geometry of Nature. W.H. Freeman.
[10] Yongqing, X. and V.R. Yingling, (2006) Comparative assessment of bone mass and structure using
texture-based and histomorphometric analysis. J. Bone, Vol. 40, pp 544-552.
[11] Lucia, D. and L. Semlar, (2007) A comparison of wavelet, ridgelet and curvelet-based texture
classification algorithms in computed tomography. Comput. Biol. Med., Vol. 37, pp 486-498.
[12] Candes, E., L. Demanet, D. Donoho and L. Ying, (2006) Fast discrete curvelet transforms. Multistage
Model. Simulat. Vol. 5, pp 861-899.
[13] Maeda, J., Novianto, S., Miyashita, A., Saga, S., Suzuki, Y., (1998) Fuzzy region-growing
segmentation of natural images using local fractal dimension. Proceedings of the 14th International
Conference on Pattern Recognition, pp 991-993.
[14] Liu, J., Zhang, L., Yue, G., (2003) Fractal dimension in human cerebellum measured by magnetic
resonance imaging. Biophysical Journal Vol. 85, pp 4041-4046.
[15] Lee, W., Chen, Y., Chen, Y., Hsieh, K., (2005) Unsupervised segmentation of ultrasonic liver images
by multiresolution fractal feature vector. Information Sciences 175, pp 177-199.
[16] Li, H., Giger, M.L., Olopade, O.I., Lan, L., (2007) Fractal analysis of mammographic parenchymal
patterns in breast cancer risk assessment. Acad. Radiol. 14, pp 513-521.
[17] Mandelbrot, B., (1967) How long is the coast of Britain? Statistical self-similarity and fractional
dimension. Science 156, pp 636-638.
[18] R. Lopes, A. Ayache, N. Makni, P. Puech, A. Villers, S. Mordon and N. Betrouni, (2011) Prostate
cancer characterization on MR images using fractal features, International Journal of Medical Physics
Research and Practice. Vol. 38, No. 1.
[19] T Ojala, M Pietikäinen, (2001) Texture classification, CVonline: The Evolving, Distributed, Non-
Proprietary, On-Line Compendium of Computer Vision.
[20] Varma, M., Garg, R., (2007) Locally invariant fractal features for statistical texture classification. In: International Conference on Computer Vision, Vol. 1, pp 1–8.
[21] Mandelbrot, B., (1974) The Fractal Geometry of Nature, Network Books: W.H. Freeman.
[22] Lévy-Véhel, J.,. (1995) Fractal approaches in signal processing. Fractals 3, pp 755-775.
[23] R. Lopes and N. Betrouni, (2009) “Fractal and multifractal analysis: A review,” Medical Image
Analysis, Vol. 13, No. 4, pp 634-649.
[24] Edward Oczeretko, Marta Borowska, Agnieszka Kitlas, Andrzej Borusiewicz, and Malgorzata
Sobolewska-Siemieniuk, (2008) Fractal Analysis of Medical Images in the Irregular Regions of
Interest, 8th IEEE International Conference on BioInformatics and BioEngineering, pp 1-6.
[25] R. Korchiyne, A. Sbihi, S. M. Farssi, R. Touahni, M. Tahiri Alaoui, (2012) Medical Image texture
Segmentation using Multifractal Analysis. The 3rd International Conference on Multimedia
Computing and Systems (ICMCS'12), IEEE Xplore Digital Library, Print ISBN: 978-1-4673-1518-0),
pp 422–425.
[26] João Batista Florindo, Odemir Martinez Bruno, (2012) Fractal descriptors based on Fourier spectrum
applied to texture analysis, Physica A :Statistical Mechanics and its Applications, Vol. 391, No. 20,
pp 4909–4922.
[27] Edward Oczeretko, Marta Borowska, Izabela Szarmach, Agnieszka Kitlas, Janusz Szarmach, and Andrzej Radwański, (2010) Fractal Analysis of Dental Radiographic Images in the Irregular Regions of Interest, E. Piętka and J. Kawa (Eds.): Information Technologies in Biomedicine, Springer, AISC 69, pp 191–199.
[28] Oczeretko, E., Borowska, M., Kitlas, A., Borusiewicz, A., Sobolewska-Siemieniuk, M. (2008) Fractal
analysis of medical images in the irregular regions of interest. In: 8th IEEE International Conference
on Bioinformatics and BioEngineering, IEEE Xplore Digital Library, ISBN: 978-1-4244-2845-8, pp
1-6.
[29] Sarkar, N., Chaudhuri, B. B., (1994) An efficient differential box counting approach to compute
fractal dimension of image. IEEE Transactions on Systems, Man and Cybernetics, Vol. 24, No. 1, pp
115-120.
[30] Pentland, A. P. (1984) Fractal-based description of natural scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 6, No. 6, pp 315-326.
[31] Min Long, Fei Peng, (2013) A Box-Counting Method with Adaptable Box Height for Measuring the
Fractal Feature of Images, Radioengineering, Vol. 22, No. 1.
[32] Li, J., Sun, C., Du, Q., (2006) A new box-counting method for estimation of image fractal dimensions. In Proceedings of IEEE International Conference on Image Processing, pp 3029-3032.
[33] R. Korchiyne A. Sbihi S. M. Farssi R. Touahni M. Tahiri Alaoui, (2012) Multifractal Spectrum:
Applications on Real CT-Scan Images of Trabecular Bone Texture, International Journal of Advanced
Research in Electronics and Communication Engineering (IJARECE), ISSN: 2278 – 909X, Vol. 1,
No. 6, pp 53-57.
[34] R.M. Haralick, K. Shanmugam, I. Dinstein, (1973) Textural Features for Image Classification, IEEE
Transactions on Systems, Man, and Cybernetics, Vol. 3, No. 6, pp 610–621.
[35] Bino Sebastian V, A. Unnikrishnan and Kannan Balakrishnan, (2012) Grey Level Co-occurrence
Matrices: Generalization and Some New Features, International Journal of Computer Science,
Engineering and Information Technology (IJCSEIT), Vol. 2, No.2.
[36] L. Soh and C. Tsatsoulis, (1999) Texture Analysis of SAR Sea Ice Imagery Using Gray Level Co-
Occurrence Matrices, IEEE Transactions on Geoscience and Remote Sensing, Vol. 37 , No. 2, pp
780-795.
[37] F. I. Alam, R. U. Faruqui, (2011) Optimized Calculations of Haralick Texture Features, European
Journal of Scientific Research, Vol. 50 No. 4, pp 543-553.
[38] D A. Clausi, (2002) An analysis of co-occurrence texture statistics as a function of grey level
quantization, Can. J. Remote Sensing, Vol. 28, No. 1, pp 45-62.
[39] G. Zhao, X. Xiao, J. Yuan, and G. W. Ng, (2014). Fusion of 3d-lidar and camera data for scene
parsing. J. Vis. Comun. Image Represent., 25(1):165–183.
[40] P. Dubois, INSERM (French National Institute of Health And Medical Research) database U703,
Lille-France.
[41] D A. Clausi, (2002) An analysis of co-occurrence texture statistics as a function of grey level
quantization, Can. J. Remote Sensing, Vol. 28, No. 1, pp 45-62.
[42] Kecman, V. (2001) Learning and Soft Computing (Cambridge, MA: MIT Press).
[43] Suykens, J.A.K., Van Gestel, T., De Brabanter, J., De Moor, B., and Vandewalle, J. (2002) Least
Squares Support Vector Machines (Singapore: World Scientific).
[44] Scholkopf, B., and Smola, A.J. (2002) Learning with Kernels (Cambridge, MA: MIT Press).
[45] Cristianini, N. and Shawe-Taylor, J. (2000) An Introduction to Support Vector Machines and Other
Kernel-based Learning Methods, First Edition (Cambridge: Cambridge University Press).
http://www.support-vector.net/.
[46] C. Cortes and V. Vapnik, (1995) “Support-vector networks,” Machine Learning, Vol. 20, No. 3, pp.
273–297.
[47] V. N. Vapnik, (1995) The Nature of Statistical Learning Theory, Springer, Vol. 8.
[48] M. Timothy, D. Alok, V. Svyatoslav et al., (2012) “Support Vector Machine classification and
characterization of age-related reorganization of functional brain networks,” NeuroImage, Vol. 60,
No. 1, pp. 601–613.
[49] E. Formisano, F. De Martino, and G. Valente, (2008) “Multivariate analysis of fMRI time series:
classification and regression of brain responses using machine learning,” Magnetic Resonance
Imaging, Vol. 26, No. 7, pp. 921–934.
[50] S. J. Hanson and Y. O. Halchenko, (2008) “Brain reading using full brain Support Vector Machines
for object recognition: there is no “face” identification area,” Neural Computation, Vol. 20, No. 2, pp
486–503.
AUTHORS
Redouan KORCHIYNE has obtained the Diploma of Advanced Higher Studies DESA in Informatics
and Telecommunications in 2005 at the Ibn Tofail University Faculty of Science, Kenitra-Morocco.
Currently, he is pursuing his PhD degree at the Ibn Tofail University Faculty of Sciences, Kenitra; his research focuses on the texture analysis of medical images using the multifractal approach.
Sidi Mohamed FARSSI, Dr. of State es-science, Director of the Doctoral School of Telecommunications
at the University of Dakar-Senegal, Professor at the Polytechnic Higher School of Dakar, President of the
National Council of Moroccans in Senegal (CNMS) and Director of the laboratory for research in medical
imaging and bioinformatics.
Abderrahmane SBIHI is a University Professor and currently the Director of the National School of Applied Sciences, Tangier, Morocco. He is president of the Moroccan association for the development of electronics, computer science and automation (AMDEIA), and a member of the standing committee of the African colloquy on Research in Computer Science in Morocco. His main research interests are computer vision,
pattern recognition and multidimensional data analysis.
Rajaa TOUAHNI is a Professor at the Ibn Tofail University Faculty of Sciences, Kenitra, Morocco, responsible for the research team in Information Processing and Decision Engineering. Her main research interests are image processing and the analysis of multidimensional data.
Mustapha TAHIRI ALAOUI has obtained the degree of National Doctorate in 2005 at the university
Mohammed V, faculty of sciences-Agdal of Rabat-Morocco within the UFR: Analysis and Design of
Computer Systems. His research focuses on the texture characterization of ultrasound images.