It is a challenging task to build a CBIR system that works primarily on texture values, as their meaning and semantics need special care to be mapped to human language. We have considered highly textured images with a set of properties (entropy, homogeneity, contrast, cluster shade, autocorrelation) and have mapped them using a fuzzy min-max scale with respect to their degree (high, medium, low) and technical interpretation. The developed system performs well in terms of precision and recall values, showing that the semantic gap has been reduced for CBIR of highly textured images.
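As an illustrative sketch (not the authors' implementation), the snippet below computes a few of the named GLCM properties and maps one onto a min-max scale with linguistic degrees; the function names and the three-way degree split are assumptions:

```python
import numpy as np

def glcm(img, levels=4):
    """Gray-level co-occurrence matrix for horizontally adjacent pixels."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()  # normalize to joint probabilities

def texture_properties(p):
    i, j = np.indices(p.shape)
    return {
        "contrast": float(np.sum((i - j) ** 2 * p)),
        "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j)))),
        "entropy": float(-np.sum(p[p > 0] * np.log2(p[p > 0]))),
    }

def degree(value, lo, hi):
    """Min-max normalize a property value and assign a linguistic degree."""
    x = (value - lo) / (hi - lo) if hi > lo else 0.0
    return "low" if x < 1 / 3 else "medium" if x < 2 / 3 else "high"

img = np.array([[0, 3, 0, 3],
                [3, 0, 3, 0],
                [0, 3, 0, 3],
                [3, 0, 3, 0]])          # checkerboard: strongly textured
props = texture_properties(glcm(img))
```

On the checkerboard every horizontal pair alternates between levels 0 and 3, so contrast is maximal and entropy is exactly one bit.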
Texture Unit based Approach to Discriminate Manmade Scenes from Natural Scenes (idescitation)
In this paper a method is proposed to discriminate natural and manmade scenes of similar depth. An increase in image depth leads to increased roughness in manmade scenes; natural scenes, on the contrary, exhibit smooth behavior at higher image depth. This particular arrangement of pixels in the scene structure can be well explained by the local texture information of a pixel and its neighborhood. Our proposed method analyses the local texture information of a scene image using a texture unit matrix. For the final classification we use unsupervised learning with a Self-Organizing Map (SOM). The technique is useful for online classification due to its very low computational complexity.
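The texture unit matrix mentioned above can be sketched along the lines of He and Wang's texture spectrum, where each of the 8 neighbors is ternary-coded against the center pixel; this is a hedged illustration, not the paper's exact code:

```python
import numpy as np

POW3 = 3 ** np.arange(8)  # positional weights for the 8 neighbors

def texture_unit(neigh, center):
    """Ternary code per neighbor: 0 below, 1 equal, 2 above the center."""
    e = np.where(neigh < center, 0, np.where(neigh == center, 1, 2))
    return int(np.dot(e, POW3))  # texture unit number in [0, 6560]

def texture_spectrum(img):
    """Histogram of texture unit numbers over all interior pixels."""
    h = np.zeros(3 ** 8, dtype=int)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            n = img[r-1:r+2, c-1:c+2].ravel()
            neigh = np.delete(n, 4)       # drop the center pixel
            h[texture_unit(neigh, img[r, c])] += 1
    return h

flat = np.full((5, 5), 7)
spec = texture_spectrum(flat)
```

On a flat patch every neighbor equals the center, so all nine interior pixels map to the single "all equal" texture unit number 3280.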
AUTOMATIC THRESHOLDING TECHNIQUES FOR OPTICAL IMAGES (sipij)
Image segmentation is one of the most important tasks in computer vision and image processing. Thresholding is a simple but highly effective segmentation technique. It classifies image pixels into object and background based on the relation between the gray-level value of each pixel and the threshold. The Otsu technique is a robust and fast thresholding technique for most real-world images with regard to uniformity and shape measures; it separates the object from the background by maximizing the separability between the classes. Our aims in this work are (1) to compare five thresholding techniques (the Otsu technique, the valley emphasis technique, the neighborhood valley emphasis technique, the variance and intensity contrast technique, and the variance discrepancy technique) on different applications, and (2) to determine the thresholding technique that best extracts the object from the background. Our experimental results show that each thresholding technique performs best on a specific type of bimodal image.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN... (IDES Editor)
In this paper a method is proposed to discriminate real-world scenes into natural and manmade scenes of similar depth. The global roughness of a scene image varies as a function of image depth: an increase in image depth leads to increased roughness in manmade scenes, while natural scenes exhibit smooth behavior at higher image depth. This particular arrangement of pixels in the scene structure can be well explained by the local texture information of a pixel and its neighborhood. Our proposed method analyses the local texture information of a scene image using a texture unit matrix. For the final classification we use both supervised and unsupervised learning, with a K-Nearest Neighbor (KNN) classifier and a Self-Organizing Map (SOM) respectively. The technique is useful for online classification due to its very low computational complexity.
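The supervised (KNN) stage can be sketched as follows; the toy two-feature vectors standing in for texture-unit statistics are assumptions:

```python
import numpy as np

def knn_predict(train_x, train_y, x, k=3):
    """K-Nearest Neighbor majority vote with Euclidean distance."""
    d = np.linalg.norm(train_x - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

# toy feature vectors: manmade scenes rough (high values), natural smooth
train_x = np.array([[9.0, 8.5], [8.7, 9.1], [9.2, 8.8],   # manmade = 1
                    [1.0, 1.2], [0.8, 1.1], [1.3, 0.9]])  # natural = 0
train_y = np.array([1, 1, 1, 0, 0, 0])
pred = knn_predict(train_x, train_y, np.array([8.9, 9.0]))
```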
REMOVING OCCLUSION IN IMAGES USING SPARSE PROCESSING AND TEXTURE SYNTHESIS (IJCSEA Journal)
We provide a solution to the problem of occlusion in images by removing the occluding region and filling in the gap left behind. Inpainting algorithms fail to fill occlusions when the occluding region is large, since both structure and texture are lost. We decompose the image into structure and texture images using a decomposition method based on the sparseness of the image. Sparse reconstruction of the decomposed images yields an inpainted image with all structures kept intact. Texture synthesis is then performed on the texture-only image. Finally, the structure and texture images are combined to obtain an image in which the occlusion is filled. The performance of our algorithm in terms of visual effectiveness is compared with other algorithms used for inpainting.
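The structure/texture split described above can be illustrated crudely; the sketch below substitutes a simple mean filter for the paper's sparse decomposition, so it only demonstrates the decompose-process-recombine idea:

```python
import numpy as np

def box_blur(img, r=1):
    """Mean filter over a (2r+1)^2 window (edge-padded); a crude
    low-pass stand-in for the sparse decomposition in the paper."""
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dr in range(-r, r + 1):
        for dc in range(-r, r + 1):
            out += p[r+dr:r+dr+img.shape[0], r+dc:r+dc+img.shape[1]]
    return out / (2 * r + 1) ** 2

# smooth ramp (structure) plus a checkerboard oscillation (texture)
img = np.add.outer(np.arange(8.0), np.zeros(8)) \
    + np.where((np.indices((8, 8)).sum(0) % 2) == 0, 1.0, -1.0)
structure = box_blur(img)            # the smooth ramp survives
texture = img - structure            # the oscillating detail
recombined = structure + texture     # exact reconstruction
```

Recombining the two components reconstructs the input exactly, which is the invariant any such decomposition must preserve.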
Vision plays the most important role in human perception, but it is limited to the visual band of the electromagnetic spectrum. Radar imaging systems are therefore needed to recover sources that lie outside the human visual band. This paper presents a new algorithm for Synthetic Aperture Radar (SAR) image segmentation based on a thresholding technique. Entropy-based image thresholding has received sustained interest in recent years and is an important concept in image processing. Pal (1996) proposed a cross-entropy thresholding method based on the Gaussian distribution for bimodal images. Our method is derived from Pal's method; it segments images using cross-entropy thresholding based on the Gamma distribution and can handle both bimodal and multimodal images. The method is tested on Synthetic Aperture Radar (SAR) images and gives good results for bimodal and multimodal images. The results obtained are encouraging.
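A sketch of cross-entropy thresholding in the spirit described above, using the distribution-free Li & Lee criterion rather than the paper's Gamma-based variant (an assumption made for brevity):

```python
import numpy as np

def min_cross_entropy_threshold(img, levels=256):
    """Li & Lee minimum cross-entropy threshold: minimizing the
    cross entropy is equivalent to maximizing
    m0*log(mu0) + m1*log(mu1) over candidate thresholds."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    g = np.arange(levels, dtype=float)
    best_t, best_eta = 1, -np.inf
    for t in range(1, levels):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = np.dot(g[:t], hist[:t])       # first moments per class
        m1 = np.dot(g[t:], hist[t:])
        mu0, mu1 = m0 / w0, m1 / w1        # class means
        if mu0 <= 0 or mu1 <= 0:
            continue
        eta = m0 * np.log(mu0) + m1 * np.log(mu1)
        if eta > best_eta:
            best_t, best_eta = t, eta
    return best_t

rng = np.random.default_rng(1)
img = np.concatenate([rng.integers(10, 40, 400),    # background mode
                      rng.integers(180, 230, 400)]) # object mode
t = min_cross_entropy_threshold(img)
```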
SEGMENTATION USING ‘NEW’ TEXTURE FEATURE (acijjournal)
Color, texture, shape and luminance are the prominent features for image segmentation. Texture is an organized group of spatially repetitive arrangements in an image, and it is a vital attribute in many image processing and computer vision applications. The objective of this work is to segment textured sub-images from a given arbitrary image. The main contribution of this work is to introduce the “NEW” texture feature descriptor to the image segmentation field. The NEW texture descriptor labels the neighborhood pixels of a pixel in an image as N, W, NW, NE, WW, NN and NNE (N: North, W: West). To find the prediction value, the gradients of the intensity function are calculated. Eight-component binary vectors are formed and compared with the prediction value, yielding 256 possible vectors. Fuzzy c-means clustering is used to segment similar regions in the textured image. Extensive experimentation shows that the proposed methodology works well for segmenting texture images, and segmentation performance is also evaluated.
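The 8-neighbor binary vector with 256 possible codes can be sketched as below; using the window mean as the prediction value is an assumption, since the paper derives it from intensity gradients:

```python
import numpy as np

def neighbor_code(window, pred):
    """8-bit code: bit i is set when neighbor i exceeds the prediction
    value (approximated here by the window mean)."""
    n = np.delete(window.ravel(), 4)             # the 8 neighbors
    bits = (n > pred).astype(int)
    return int(np.dot(bits, 1 << np.arange(8)))  # one of 256 vectors

def code_image(img):
    """Code every interior pixel of the image."""
    codes = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=int)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            w = img[r-1:r+2, c-1:c+2]
            codes[r-1, c-1] = neighbor_code(w, w.mean())
    return codes

flat = np.full((6, 6), 5)
codes = code_image(flat)   # a flat patch maps everywhere to code 0
```

The resulting code image is what a clustering step such as fuzzy c-means would then group into texture regions.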
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES (csitconf)
Segmentation of Synthetic Aperture Radar (SAR) images is of great use in observing the global environment and in target detection and recognition. However, segmentation of SAR images is known to be a very complex task due to the presence of speckle noise. In this paper we therefore present a fast SAR image segmentation method based on between-class variance (BCV). We chose the BCV method because it is one of the most effective thresholding techniques for most real-world images with regard to uniformity and shape measures. Our experiments test which technique is effective for thresholding (extracting) the oil spill in numerous SAR images; in the future, these thresholding techniques may also be very useful for detecting objects in other SAR images.
A Novel Feature Extraction Scheme for Medical X-Ray Images (IJERA Editor)
X-ray images are gray-scale images with almost the same textural characteristics. Conventional texture or color features cannot be used for appropriate categorization of medical X-ray image archives. This paper presents a novel combination of methods, GLCM, LBP and HOG, for extracting distinctive invariant features from X-ray images in the IRMA (Image Retrieval in Medical Applications) database, which can be used to perform reliable matching between different views of an object or scene. The GLCM represents the distribution of intensities and information about the relative positions of neighboring pixels in an image. LBP features are invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination. A HOG feature vector represents the local shape of an object, capturing edge information over multiple cells. These features have been exploited in different algorithms for automatic classification of medical X-ray images. Excellent experimental results obtained on real problems of rotation invariance, at particular rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation-invariant local binary patterns.
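The rotation-invariant LBP statistics mentioned in the closing sentence can be sketched as follows (basic 8-neighbor codes mapped to their minimal cyclic rotation; a simplification of the full multi-scale operator):

```python
import numpy as np

def lbp_code(center, neighbors):
    """Basic 8-neighbor LBP code (neighbors listed clockwise)."""
    bits = (neighbors >= center).astype(int)
    return int(np.dot(bits, 1 << np.arange(8)))

def rotation_invariant(code):
    """Map an LBP code to the minimum over all 8 cyclic bit rotations,
    so rotated neighborhoods share one label."""
    best = code
    for _ in range(7):
        code = ((code >> 1) | ((code & 1) << 7)) & 0xFF
        best = min(best, code)
    return best

# two neighborhoods that are rotations of each other
a = np.array([9, 9, 0, 0, 0, 0, 0, 0])
b = np.roll(a, 2)                         # rotated by two positions
ca = rotation_invariant(lbp_code(5, a))
cb = rotation_invariant(lbp_code(5, b))
```

Both neighborhoods collapse to the same rotation-invariant label, which is why histograms of these codes are robust to image rotation.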
Presented by Adrien Depeursinge, PhD, at the MICCAI 2015 Tutorial on Biomedical Texture Analysis (BTA), Munich, Oct 5, 2015.
Texture-based imaging biomarkers complement focal, invasive, biopsy-based biomarkers by providing information on tissue structure over broad regions, non-invasively and repeatedly across multiple time points. Texture has been used to predict patient survival, tissue function, disease subtypes and genomics (imagenomics and radiogenomics). Nevertheless, several challenges remain, such as the lack of an appropriate framework for multi-scale, multi-spectral analysis in 2D and 3D; the localization uncertainty of texture operators; validation; and translation to routine clinical applications.
Textural Feature Extraction of Natural Objects for Image Classification (CSCJournals)
The field of digital image processing has been growing in scope in recent years. A digital image is represented as a two-dimensional array of pixels, where each pixel carries intensity and location information. Analysis of digital images involves extracting meaningful information from them, based on certain requirements. Digital image analysis requires feature extraction, which transforms data in a high-dimensional space into a space of fewer dimensions. Feature vectors are n-dimensional vectors of numerical features used to represent an object. We have used Haralick features to classify various images with different classification algorithms: Support Vector Machines (SVM), a Logistic classifier, Random Forests, a Multi-Layer Perceptron and a Naïve Bayes classifier. We then used cross-validation to assess how well each classifier works on a generalized data set, compared to the classifications obtained during training.
PDE BASED FEATURES FOR TEXTURE ANALYSIS USING WAVELET TRANSFORM (IJCI Journal)
In this paper, a novel method of partial differential equation (PDE) based features for texture analysis using the wavelet transform is proposed. The aim of the proposed method is to investigate texture descriptors that perform well at low computational cost. The wavelet transform is applied to obtain directional information from the image. Anisotropic diffusion is used to find a texture approximation from the directional information, and the texture approximation is then used to compute various statistical features. LDA is employed to enhance class separability. A k-NN classifier with tenfold experimentation is used for classification. The proposed method is evaluated on the Brodatz dataset, and the experimental results demonstrate its effectiveness compared with other methods in the literature.
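The anisotropic diffusion step can be sketched with the classic Perona-Malik scheme (the parameter values below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=15.0, lam=0.2):
    """Perona-Malik diffusion: smooth within regions while the
    conductance g falls off at strong edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        # differences to the four neighbors, zeroed at the borders
        dn = np.roll(u, 1, 0) - u;  dn[0, :] = 0
        ds = np.roll(u, -1, 0) - u; ds[-1, :] = 0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, 1) - u;  dw[:, 0] = 0
        u += lam * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
    return u

noisy = np.zeros((10, 10))
noisy[:, 5:] = 100.0                          # sharp edge at column 5
noisy += np.random.default_rng(2).normal(0, 2, noisy.shape)
smooth = anisotropic_diffusion(noisy)
```

Small fluctuations (well below kappa) are diffused away while the 100-level edge, whose gradient shuts the conductance off, survives.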
IMAGE SEGMENTATION BY USING THRESHOLDING TECHNIQUES FOR MEDICAL IMAGES (cseij)
Image binarization is the separation of pixel values into two groups: black as background and white as foreground. Thresholding can be categorized into global thresholding and local thresholding. This paper describes a locally adaptive thresholding technique that removes the background using the local mean and standard deviation. Thresholding is the most common and simplest approach to segmenting an image. In this work we present an efficient implementation of thresholding and give a detailed comparison of the Niblack and Sauvola local thresholding algorithms, implemented on medical images. The quality of the segmented image is measured by statistical parameters: the Jaccard Similarity Coefficient and the Peak Signal-to-Noise Ratio (PSNR).
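A hedged sketch of the Niblack and Sauvola rules compared in the paper (window radius and k values are illustrative defaults, not the authors' settings):

```python
import numpy as np

def local_stats(img, r):
    """Local mean and standard deviation over a (2r+1)x(2r+1)
    window, with edge replication at the borders."""
    p = np.pad(img.astype(float), r, mode="edge")
    n = (2 * r + 1) ** 2
    s = np.zeros(img.shape); s2 = np.zeros(img.shape)
    for dr in range(-r, r + 1):
        for dc in range(-r, r + 1):
            w = p[r+dr:r+dr+img.shape[0], r+dc:r+dc+img.shape[1]]
            s += w; s2 += w * w
    mean = s / n
    return mean, np.sqrt(np.maximum(s2 / n - mean ** 2, 0))

def niblack(img, r=2, k=-0.2):
    mean, std = local_stats(img, r)
    return img > mean + k * std                   # True = foreground

def sauvola(img, r=2, k=0.2, R=128.0):
    mean, std = local_stats(img, r)
    return img > mean * (1 + k * (std / R - 1))

page = np.full((9, 9), 200.0)
page[3:6, 3:6] = 20.0                             # dark "text" block
inv = 255.0 - page                                # invert: text bright
fg_niblack = niblack(inv)
fg_sauvola = sauvola(inv)
```

Sauvola's dynamic-range term R damps the std contribution, which makes it less noise-sensitive than Niblack on low-contrast background.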
Combining Generative And Discriminative Classifiers For Semantic Automatic Im... (CSCJournals)
The object image annotation problem is basically a classification problem, and there are many different modeling approaches to its solution. These approaches fall into two main categories, generative and discriminative, and an ideal classifier should combine these two complementary approaches. In this paper, we present a method achieving this combination by using the discriminative power of neural networks and the generative nature of Bayesian networks. Evaluation of the proposed method on three typical image databases has shown some success in automatic image annotation.
Kandemir: Inferring Object Relevance From Gaze In Dynamic Scenes (Kalle)
As prototypes of data glasses with both data augmentation and gaze tracking capabilities become available, it is now possible to develop proactive gaze-controlled user interfaces that display information about objects, people, and other entities in real-world setups. To decide which objects the augmented information should be about, and how saliently to augment, the system needs an estimate of the importance or relevance of the objects in the scene for the user at a given time. These estimates are used to minimize distraction of the user and to provide efficient spatial management of the augmented items. This work is a feasibility study on inferring the relevance of objects in dynamic scenes from gaze. We collected gaze data from subjects watching a video for a pre-defined task. The results show that a simple ordinal logistic regression model gives relevance rankings of scene objects with promising accuracy.
Different Image Segmentation Techniques for Dental Image Extraction (IJERA Editor)
Image segmentation is the process of partitioning a digital image into multiple segments and is often used to locate objects and boundaries (lines, curves, etc.). In this paper, we propose three image segmentation techniques: region based, texture based and edge based. These techniques have been implemented on dental radiographs and achieve good results compared to the conventional technique known as thresholding-based segmentation. The quantitative results show the superiority of the three proposed techniques over the conventional technique.
Content Based Image Retrieval Using Gray Level Co-Occurrence Matrix with SVD a... (ijcisjournal)
In this paper, the gray-level co-occurrence matrix (GLCM), the GLCM with singular value decomposition (SVD), and the local binary pattern (LBP) are presented for content-based image retrieval. Based on the feature-vector parameters of energy, contrast and entropy, and on distance metrics such as the Euclidean, Canberra and Manhattan distances, the retrieval efficiency, precision and recall of the images are calculated. The retrieval results of the proposed method are tested on the Corel-1k database. The investigated results show a significant improvement in terms of average retrieval rate, average retrieval precision and recall for the different algorithms (GLCM; GLCM with SVD; LBP with radius one; LBP with radius two) under the different distance metrics.
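The distance metrics and the retrieval precision computation can be sketched as below; the three-element feature vectors and class labels are toy assumptions:

```python
import numpy as np

def euclidean(a, b): return float(np.sqrt(((a - b) ** 2).sum()))
def manhattan(a, b): return float(np.abs(a - b).sum())
def canberra(a, b):
    num, den = np.abs(a - b), np.abs(a) + np.abs(b)
    return float((num[den > 0] / den[den > 0]).sum())

def retrieve(query, db, metric, top_k):
    """Rank database images by feature distance to the query."""
    d = [metric(query, f) for f in db]
    return np.argsort(d)[:top_k]

# toy features (e.g. [energy, contrast, entropy]) with class labels
db = np.array([[0.9, 1.0, 0.5], [0.8, 1.1, 0.6],   # class 0
               [0.1, 5.0, 2.0], [0.2, 5.2, 1.9]])  # class 1
labels = np.array([0, 0, 1, 1])
hits = retrieve(np.array([0.85, 1.05, 0.55]), db, canberra, top_k=2)
precision = float((labels[hits] == 0).mean())  # relevant / retrieved
```

Swapping `canberra` for `euclidean` or `manhattan` is how the metric comparison in the abstract would be run.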
Web image annotation by diffusion maps manifold learning algorithm (ijfcstjournal)
Automatic image annotation is one of the most challenging problems in machine vision. The goal of this task is to predict a number of keywords automatically for images captured in real data. Many methods are based on visual features in order to calculate similarities between image samples, but the computational cost of these approaches is very high, and they require many training samples to be stored in memory. To lessen this burden, a number of techniques have been developed to reduce the number of features in a dataset. Manifold learning is a popular approach to nonlinear dimensionality reduction. In this paper, we investigate the diffusion maps manifold learning method for the web-image auto-annotation task; it is used to reduce the dimension of several visual features. Extensive experiments and analysis on the NUS-WIDE-LITE web image dataset with different visual features show how this manifold learning dimensionality reduction method can be applied effectively to image annotation.
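A minimal diffusion maps sketch (the Gaussian kernel width, cluster placement and embedding dimension are illustrative assumptions):

```python
import numpy as np

def diffusion_map(X, eps, dim=1, t=1):
    """Diffusion maps: Gaussian affinities, a row-normalized Markov
    matrix, and coordinates from its leading nontrivial eigenvectors
    scaled by eigenvalue powers."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)   # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)         # eigenvalue 1 comes first
    lam = vals.real[order][1:dim + 1]      # skip the trivial eigenpair
    phi = vecs.real[:, order][:, 1:dim + 1]
    return (lam ** t) * phi                # diffusion coordinates

# two well-separated 3-D clusters collapse onto one coordinate
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.1, (5, 3)),
               rng.normal(5.0, 0.1, (5, 3))])
Y = diffusion_map(X, eps=30.0)
```

The first nontrivial diffusion coordinate separates the two clusters, which is the dimensionality-reduction effect the abstract relies on.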
A SURVEY ON GPU SYSTEM CONSIDERING ITS PERFORMANCE ON DIFFERENT APPLICATIONS (cseij)
In this paper we study the NVIDIA graphics processing unit (GPU), its computational power and its applications. Although these units are specially designed for graphics applications, their computational power can be employed for non-graphics applications too. The GPU has high parallel processing power, a low cost of computation and low time utilization, giving a good performance-per-energy ratio. This GPU property of handling excessive computation over similar small sets of instructions has played a significant role in reducing CPU overhead. The GPU has several key advantages over the CPU architecture, as it provides high parallelism, intensive computation and significantly higher throughput. It consists of thousands of hardware threads that execute programs in a SIMD fashion, so the GPU can be an alternative to the CPU in high-performance and supercomputing environments. The bottom line is that GPU-based general-purpose computing is a hot research topic, and there is much to explore beyond graphics processing applications alone.
Smart application for AMS using face recognition (cseij)
An Attendance Management System (AMS) can be made smarter by using a face recognition technique: a CCTV camera fixed at the entry point of a classroom automatically captures an image of the person and checks the observed image against the face database using an Android-enhanced smartphone.
It is typically used for two purposes: first, marking attendance for a student by comparing recently captured face images, and second, recognizing humans who are strangers to the environment, i.e. unauthorized persons.
For image verification, the newly emerging 3D face recognition technique is used, which claims to provide more accuracy in matching image databases and has the ability to recognize a subject at different viewing angles.
REMOVING OCCLUSION IN IMAGES USING SPARSE PROCESSING AND TEXTURE SYNTHESISIJCSEA Journal
We provide a solution to problem of occlusion in images by removing the occluding region and filling in the gap left behind. Inpainting algorithms fail in filling occlusions when the occluding region is large since there is loss of both structure and texture. We decompose the image into structure and texture images using a decomposition method based on sparseness of the image. The sparse reconstruction of the decomposed images result in an inpainted image with all the structures made intact. A texture synthesis is performed on the texture only image. Finally the structure and texture images are combined to get an image where the occlusion is filled. The performance of our algorithm in terms of visual effectiveness is compared with other algorithms used for inpainting.
Computer apparition plays the most important role in human perception, which is limited to only the visual band of the electromagnetic spectrum. The need for Radar imaging systems, to recover some sources that
are not within human visual band, is raised. This paper present new algorithm for Synthetic Aperture Radar (SAR) images segmentation based on thresholding technique. Entropy based image thresholding has
received sustainable interest in recent years. It is an important concept in the area of image processing.
Pal (1996) proposed a cross entropy thresholding method based on Gaussian distribution for bi-modal images. Our method is derived from Pal method that segment images using cross entropy thresholding based on Gamma distribution and can handle bi-modal and multimodal images. Our method is tested using
Synthetic Aperture Radar (SAR) images and it gave good results for bi-modal and multimodal images. The
results obtained are encouraging.
SEGMENTATION USING ‘NEW’ TEXTURE FEATUREacijjournal
Color, texture, shape and luminance are the prominent features for image segmentation. Texture is an
organized group of spatial repetitive arrangements in an image and it is a vital attribute in many image
processing and computer vision applications. The objective of this work is to segment the texture sub
images from the given arbitrary image. The main contribution of this work is to introduce “NEW” texture
feature descriptor to the image segmentation field. The NEW texture descriptor labels the neighborhood
pixels of a pixel in an image as N,W,NW,NE,WW,NN and NNE(N-North, W-West).To find the prediction
value, the gradient of the intensity functions are calculated. Eight component binary vectors are formed
and compared to prediction value. Finally end up with 256 possible vectors. Fuzzy c-means clustering is
used to segment the similar regions in textural image Extensive experimentation shows that the proposed
methodology works better for segmenting the texture images, and also segmentation performance are
evaluated.
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGEScsitconf
Segmentation of Synthetic Aperture Radar (SAR) images have a great use in observing the
global environment, and in analysing the target detection and recognition .But , segmentation
of (SAR) images is known as a very complex task, due to the existence of speckle noise.
Therefore, in this paper we present a fast SAR images segmentation based on between class
variance. Our choice for used (BCV) method, because it is one of the most effective thresholding
techniques for most real world images with regard to uniformity and shape measures. Our
experiments will be as a test to determine which technique is effective in thresholding
(extraction) the oil spill for numerous SAR images, and in the future these thresholding
techniques can be very useful in detection objects in other SAR images
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGEScscpconf
Segmentation of Synthetic Aperture Radar (SAR) images have a great use in observing the global environment, and in analysing the target detection and recognition .But , segmentation of (SAR) images is known as a very complex task, due to the existence of speckle noise. Therefore, in this paper we present a fast SAR images segmentation based on between class variance. Our choice for used (BCV) method, because it is one of the most effective thresholding techniques for most real world images with regard to uniformity and shape measures. Our experiments will be as a test to determine which technique is effective in thresholding (extraction) the oil spill for numerous SAR images, and in the future these thresholding
techniques can be very useful in detection objects in other SAR images
A Novel Feature Extraction Scheme for Medical X-Ray ImagesIJERA Editor
X-ray images are gray scale images with almost the same textural characteristic. Conventional texture or color
features cannot be used for appropriate categorization in medical x-ray image archives. This paper presents a
novel combination of methods like GLCM, LBP and HOG for extracting distinctive invariant features from Xray
images belonging to IRMA (Image Retrieval in Medical applications) database that can be used to perform
reliable matching between different views of an object or scene. GLCM represents the distributions of the
intensities and the information about relative positions of neighboring pixels of an image. The LBP features are
invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination A
HOG feature vector represents local shape of an object, having edge information at plural cells. These features
have been exploited in different algorithms for automatic classification of medical X-ray images. Excellent
experimental results obtained in true problems of rotation invariance, particular rotation angle, demonstrate that
good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary
patterns.
Presented by Adrien Depeursinge, PhD, at MICCAI 2015 Tutorial on Biomedical Texture Analysis (BTA), Munich, Oct 5 2015.
Texture-based imaging biomarkers complement focal, invasive biopsy based biomarkers by providing information on tissue structure over broad regions, non-invasively, and repeatedly across multiple time points. Texture has been used to predict patient survival, tissue function, disease subtypes and genomics (imagenomics and radiogenomics). Nevertheless, several challenges remain, such as: the lack of an appropriate framework for multi-scale, multi-spectral analysis in 2D and 3D; localization uncertainty of texture operators; validation; and, translation to routine clinical applications.
Textural Feature Extraction of Natural Objects for Image ClassificationCSCJournals
The field of digital image processing has been growing in scope in the recent years. A digital image is represented as a two-dimensional array of pixels, where each pixel has the intensity and location information. Analysis of digital images involves extraction of meaningful information from them, based on certain requirements. Digital Image Analysis requires the extraction of features, transforms the data in the high-dimensional space to a space of fewer dimensions. Feature vectors are n-dimensional vectors of numerical features used to represent an object. We have used Haralick features to classify various images using different classification algorithms like Support Vector Machines (SVM), Logistic Classifier, Random Forests Multi Layer Perception and Naïve Bayes Classifier. Then we used cross validation to assess how well a classifier works for a generalized data set, as compared to the classifications obtained during training.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
PDE BASED FEATURES FOR TEXTURE ANALYSIS USING WAVELET TRANSFORMIJCI JOURNAL
In the present paper, a novel method of partial differential equation (PDE) based features for texture
analysis using wavelet transform is proposed. The aim of the proposed method is to investigate texture
descriptors that perform better with low computational cost. Wavelet transform is applied to obtain
directional information from the image. Anisotropic diffusion is used to find texture approximation from
directional information. Further, texture approximation is used to compute various statistical features.
LDA is employed to enhance the class separability. The k-NN classifier with tenfold experimentation is
used for classification. The proposed method is evaluated on Brodatz dataset. The experimental results
demonstrate the effectiveness of the method as compared to the other methods in the literature.
IMAGE SEGMENTATION BY USING THRESHOLDING TECHNIQUES FOR MEDICAL IMAGEScseij
Image binarization is the process of separating pixel values into two groups: black as background and
white as foreground. Thresholding can be categorized into global and local thresholding. This
paper describes a locally adaptive thresholding technique that removes the background using the local mean
and standard deviation. The most common and simplest approach to segmenting an image is thresholding.
In this work we present an efficient implementation of thresholding and give a detailed comparison of
the Niblack and Sauvola local thresholding algorithms, applied to medical images. The quality of the
segmented image is measured by statistical parameters: the Jaccard Similarity Coefficient and the
Peak Signal-to-Noise Ratio (PSNR).
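The two local thresholds compared above have closed forms: Niblack uses T = m + k·s and Sauvola uses T = m·(1 + k·(s/R − 1)), with local mean m and standard deviation s. A slow but transparent numpy sketch; the window size and the k and R defaults are illustrative, not the paper's settings:

```python
import numpy as np

def local_stats(img, w):
    """Mean and standard deviation over a (2w+1)x(2w+1) window, clipped at edges."""
    h, cols = img.shape
    mean = np.empty((h, cols))
    std = np.empty((h, cols))
    for r in range(h):
        for c in range(cols):
            patch = img[max(0, r - w):r + w + 1, max(0, c - w):c + w + 1]
            mean[r, c], std[r, c] = patch.mean(), patch.std()
    return mean, std

def niblack(img, w=7, k=-0.2):
    """Bright-object mask: pixel is foreground if it exceeds T = m + k*s."""
    m, s = local_stats(img.astype(float), w)
    return img > m + k * s

def sauvola(img, w=7, k=0.2, R=128.0):
    """Dark-on-light (document-style) mask: pixel is text if below T = m*(1 + k*(s/R - 1))."""
    m, s = local_stats(img.astype(float), w)
    return img < m * (1.0 + k * (s / R - 1.0))
```

Sauvola's s/R term suppresses Niblack's well-known noise in flat regions, which is why it is usually preferred for unevenly lit scans.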
Combining Generative And Discriminative Classifiers For Semantic Automatic Im...CSCJournals
The object image annotation problem is basically a classification problem, and there are many different modeling approaches to its solution. These approaches fall into two main categories: generative and discriminative. An ideal classifier should combine these two complementary approaches. In this paper, we present a method achieving this combination by using the discriminative power of neural networks and the generative nature of Bayesian networks. Evaluation of the proposed method on three typical image databases has shown some success in automatic image annotation.
Kandemir Inferring Object Relevance From Gaze In Dynamic ScenesKalle
As prototypes of data glasses having both data augmentation and gaze tracking capabilities are becoming available, it is now possible to develop proactive gaze-controlled user interfaces to display information about objects, people, and other entities in real-world setups. In order to decide which objects the augmented information should be about, and how saliently to augment, the system needs an estimate of the importance or relevance of the objects of the scene for the user at a given time. The estimates will be used to minimize distraction of the user, and for providing efficient spatial management of the augmented items. This work is a feasibility study on inferring the relevance of objects in dynamic scenes from gaze. We collected gaze data from subjects watching a video for a pre-defined task. The results show that a simple ordinal logistic regression model gives relevance rankings of scene objects with a promising accuracy.
Different Image Segmentation Techniques for Dental Image ExtractionIJERA Editor
Image segmentation is the process of partitioning a digital image into multiple segments, and it is often used to locate objects and boundaries (lines, curves, etc.). In this paper, we have proposed three image segmentation techniques: region based, texture based, and edge based. These techniques have been implemented on dental radiographs and gave good results compared with the conventional thresholding-based technique. The quantitative results show the superiority of the three proposed techniques over the conventional one.
Content Based Image Retrieval Using Gray Level Co-Occurance Matrix with SVD a...ijcisjournal
In this paper, the gray-level co-occurrence matrix (GLCM), the GLCM with singular value
decomposition (SVD), and the local binary pattern (LBP) are presented for content-based image retrieval.
Based upon the feature-vector parameters of energy, contrast, and entropy, and distance metrics such as
the Euclidean, Canberra, and Manhattan distances, the retrieval efficiency, precision, and recall of the
images are calculated. The retrieval results of the proposed method are tested on the Corel-1k database.
The investigated results show a significant improvement in average retrieval rate, average retrieval
precision, and recall for the different algorithms (GLCM, GLCM with SVD, LBP with radius one, and
LBP with radius two) under the different distance metrics.
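The three distance metrics named above are easy to state side by side. A small sketch over made-up feature vectors (the (energy, contrast, entropy) interpretation of the components is an assumption for illustration):

```python
import numpy as np

def euclidean(a, b):
    return float(np.sqrt(((a - b) ** 2).sum()))

def manhattan(a, b):
    return float(np.abs(a - b).sum())

def canberra(a, b):
    # per-component relative difference; 0/0 terms are skipped by convention
    d = np.abs(a) + np.abs(b)
    t = np.abs(a - b)[d > 0] / d[d > 0]
    return float(t.sum())

q  = np.array([1.0, 2.0, 0.5])   # e.g. (energy, contrast, entropy) of the query
db = np.array([2.0, 4.0, 0.5])   # the same features for one database image
scores = (euclidean(q, db), manhattan(q, db), canberra(q, db))
```

Canberra weights each component by its magnitude, which often behaves better than Euclidean distance when feature scales differ widely.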
Web image annotation by diffusion maps manifold learning algorithmijfcstjournal
Automatic image annotation is one of the most challenging problems in machine vision. The goal of this task is to automatically predict a number of keywords for images captured as real data. Many methods are based on visual features used to calculate similarities between image samples, but the computational cost of these approaches is very high, and they require many training samples to be stored in memory. To lessen this burden, a number of techniques have been developed to reduce the number of features in a dataset. Manifold learning is a popular approach to nonlinear dimensionality reduction. In this paper, we investigate the diffusion maps manifold learning method for the web-image auto-annotation task: it is used to reduce the dimension of several visual features. Extensive experiments and analysis on the NUS-WIDE-LITE web image dataset with different visual features show how this manifold learning dimensionality reduction method can be applied effectively to image annotation.
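Diffusion maps reduce dimensionality by embedding points with the leading non-trivial eigenvectors of a Markov matrix built from a Gaussian kernel. A minimal dense sketch of that recipe (the kernel bandwidth `eps` and diffusion time `t` are illustrative assumptions, and real feature sets would need a sparse or Nyström variant):

```python
import numpy as np

def diffusion_maps(X, n_components=2, eps=1.0, t=1):
    """Diffusion-maps embedding of the rows of X (minimal dense version)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-d2 / eps)                                  # Gaussian affinity kernel
    P = K / K.sum(axis=1, keepdims=True)                   # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial constant eigenvector; scale coordinates by eigenvalue^t
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t
```

The first diffusion coordinate separates weakly connected groups of samples, which is exactly the structure an annotation model wants preserved after reduction.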
A SURVEY ON GPU SYSTEM CONSIDERING ITS PERFORMANCE ON DIFFERENT APPLICATIONScseij
In this paper we study the NVIDIA graphics processing unit (GPU) along with its computational power and applications. Although these units are specially designed for graphics applications, we can employ their computational power for non-graphics applications too. The GPU has high parallel processing power, low computation cost, and low time utilization; it gives a good performance-per-energy ratio. This property of deploying the GPU for massive computation over similar small sets of instructions has played a significant role in reducing CPU overhead. The GPU has several key advantages over the CPU architecture, as it provides high parallelism, intensive computation, and significantly higher throughput. It consists of thousands of hardware threads that execute programs in a SIMD fashion; hence the GPU can be an alternative to the CPU in high-performance and supercomputing environments. The bottom line is that GPU-based general-purpose computing is a hot topic of research, and there is much to explore beyond graphics-processing applications.
Smart application for ams using face recognitioncseij
An Attendance Management System (AMS) can be made smarter by using face recognition: a CCTV camera fixed at the entry point of a classroom automatically captures an image of each person, and the observed image is checked against a face database using an Android smartphone.
It is typically used for two purposes: first, marking attendance for students by comparing recently captured face images, and second, recognizing people who are strangers to the environment, i.e., unauthorized persons.
For image verification, the newly emerging 3D face recognition technique is used, which claims to provide more accuracy in matching against the image database and is able to recognize a subject at different viewing angles.
CONCEPTUAL FRAMEWORK OF REDUNDANT LINK AGGREGATIONcseij
This is the era of the information explosion. A huge quantity of information pours in from various sources,
and the revolutionary advancement of information and communication technologies has brought the world
closer together. A pile of information in different formats is just a click away, which motivates
organizations to acquire more Internet bandwidth to consume and publish information over the exploding
Internet cloud. The standard router redundancy protocol is used to handle backup links; however, it cannot
aggregate them. The standard link aggregation protocol, on the other hand, can aggregate links but supports
only Ethernet technology. In this research paper, a concept of Redundant Link Aggregation (RLA) is
proposed. RLA can aggregate and handle backup links along with main links regardless of carrier
technology. Furthermore, a data-forwarding mechanism, Odd Load Balancing (OLB), is also proposed for
the RLA scheme. For performance evaluation, RLA is compared with the Virtual Router Redundancy
Protocol (VRRP). The simulation results reveal that RLA can cover the bandwidth demand of the network
in peak hours by consuming backup links as well, which VRRP cannot. It is further noted that the OLB
feature can be used to save costs in terms of money per annum.
This short paper suggests that there might be numerals that do not represent numbers. It introduces an
alternative proof that the set of complex numbers is denumerable, and also an algorithm for denumerating
them. Both the proof and the contained denumeration are easy to understand.
Generational garbage collection involves organizing the heap into different divisions of memory space
in order to filter long-lived objects from short-lived objects, by moving the surviving objects of each
generation's GC cycle to another memory space, updating their age, and reclaiming space from the dead
ones. The problem with this method is that the longer an object stays alive during its initial generations,
the longer the garbage collector has to deal with it by checking its reachability from the root and
promoting it to other space divisions, whereas the ultimate goal of the GC is to reclaim memory from
unreachable objects in the minimal time possible. This paper proposes a method in which the lifetime of
every object entering the heap is predicted and the object is placed in the heap accordingly, so that the
garbage collector deals more with reclaiming space from dead objects and less with promoting live ones
to higher levels.
Uav route planning for maximum target coveragecseij
Utilization of Unmanned Aerial Vehicles (UAVs) in military and civil operations is becoming popular. One of
the challenges in effectively tasking these expensive vehicles is planning flight routes to monitor the
targets. In this work, we aim to develop an algorithm that produces routing plans for a limited number of
UAVs to cover the maximum number of targets, considering their flight range.
The proposed solution for this practical optimization problem is designed by modifying the Max-Min Ant
System (MMAS) algorithm. To evaluate the success of the proposed method, an alternative approach,
based on the Nearest Neighbour (NN) heuristic, has been developed as well. The results showed the success
of the proposed MMAS method by increasing the number of covered targets compared to the solution based
on the NN heuristic.
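The Nearest Neighbour baseline mentioned above can be sketched as a greedy loop: repeatedly fly to the closest unvisited target while enough range remains. The coordinates and flight-range budget below are made up for illustration; the MMAS planner itself is not shown:

```python
import math

def nn_route(start, targets, flight_range):
    """Greedy nearest-neighbour route under a range budget (a baseline,
    not the Max-Min Ant System planner from the paper)."""
    pos, left, route = start, flight_range, []
    todo = list(targets)
    while todo:
        nxt = min(todo, key=lambda t: math.dist(pos, t))
        d = math.dist(pos, nxt)
        if d > left:          # next-closest target is out of remaining range
            break
        route.append(nxt)
        left -= d
        todo.remove(nxt)
        pos = nxt
    return route

targets = [(1, 0), (2, 0), (4, 0), (9, 0)]
plan = nn_route((0, 0), targets, flight_range=5.0)  # covers (1,0), (2,0), (4,0)
```

The number of covered targets, `len(plan)`, is the quantity the MMAS method is reported to improve on.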
Vision based entomology how to effectively exploit color and shape featurescseij
Entomology has been deeply rooted in various cultures since prehistoric times for the purpose of
agriculture. Nowadays, many scientists are interested in the field of biodiversity in order to maintain the
diversity of species within our ecosystem. Out of 1.3 million known species on this earth, insects account
for more than two thirds of these known species. Since 400 million years ago, there have been various kinds
of interactions between humans and insects. There have been several attempts to create a method to
perform insect identification accurately. Great knowledge and experience on entomology are required for
accurate insect identification. Automation of insect identification is required because there is a shortage of
skilled entomologists. We propose an automatic insect identification framework that can identify
grasshoppers and butterflies from colored images. Two classes of insects are chosen for a proof-of-concept.
Classification is achieved by exploiting the insects' color and shape features, since each class
of sample has different colors and distinctive body shapes. The proposed identification process
starts by extracting features from samples and splitting them into two training sets. One training set
emphasizes computing RGB features, while the other is normalized to estimate the area of binary
color that signifies the shape of the insect. An SVM classifier is used to train on the data obtained. The
final decision of the classifier combines the results of these two features to determine which class an
unknown instance belongs to. The preliminary results demonstrate the efficacy and efficiency of our
two-step automatic insect
identification approach and motivate us to extend this framework to identify a variety of other species of
insects.
The development of multimedia system technology in Content-Based Image Retrieval (CBIR) systems is
one of the outstanding areas for retrieving images from a large database. The feature
vectors of the query image are compared with the feature vectors of the database images to get matching
images. It has been widely observed that no single algorithm is effective in extracting all the different
kinds of natural images. Thus, an intensive analysis of certain color, texture, and shape extraction
techniques is carried out to identify an efficient CBIR technique suited to a particular type of image.
Image extraction includes feature description and feature extraction. In this paper, we propose the
Color Layout Descriptor (CLD), the Gray-Level Co-occurrence Matrix (GLCM), and marker-controlled
watershed segmentation as feature extraction techniques that extract the matching image based on the
similarity of color, texture, and shape within the database. For performance analysis, the image retrieval
timing results of the proposed technique are calculated and compared with each of the individual features.
Content-Based Image Retrieval (CBIR) is one of the
most active areas in the current research field of multimedia retrieval.
It retrieves images from large databases based on image
features such as color, texture, and shape. In this paper, image
retrieval based on multi-feature fusion is achieved with color and
texture features, and the similarity measures are
investigated. Color feature extraction uses the quadratic
distance, and texture features are obtained using pyramid-structured
wavelet transforms and the gray-level co-occurrence
matrix. We compare all these methods to find the best image
retrieval.
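Retrieval comparisons like the one above are conventionally scored by precision and recall. For completeness, the standard definitions as a tiny sketch; the image ID lists are made up for illustration:

```python
def precision_recall(retrieved, relevant):
    """Standard CBIR scoring: precision = hits / #retrieved,
    recall = hits / #relevant."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# query returned 4 images; 3 images in the database are truly relevant
p, r = precision_recall(retrieved=[3, 7, 12, 20], relevant=[7, 20, 41])
```

Here two of the four retrieved images are relevant, so precision is 0.5, and two of the three relevant images were found, so recall is 2/3.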
A comparative study on content based image retrieval methodsIJLT EMAS
Content-based image retrieval (CBIR) is a method of
finding images in a huge image database according to a person's
interests. "Content-based" here means that the search involves
analyzing the actual content present in the image. As databases of
images grow day by day, researchers are searching
for better image retrieval techniques that maintain good
efficiency. This paper presents the visual features and various
methods for image retrieval from a huge image database.
An image texture is a set of metrics calculated in image processing and designed to quantify the perceived texture of an image. Image texture gives us information about the spatial arrangement of colors or intensities in an image or a selected region of an image. This presentation covers its types, uses, methods, and approaches.
FUZZY SET THEORETIC APPROACH TO IMAGE THRESHOLDINGIJCSEA Journal
Thresholding is a fast, popular, and computationally inexpensive segmentation technique that is critical and decisive in some image processing applications. The result of image thresholding is not always satisfactory because of the presence of noise, vagueness, and ambiguity among the classes. Since the theory of fuzzy sets is a generalization of classical set theory, it has greater flexibility to capture faithfully the various aspects of incompleteness or imperfection in the available information. To overcome this problem, we propose a two-stage fuzzy set theoretic approach to image thresholding that utilizes a measure of fuzziness to evaluate the fuzziness of an image and to determine an adequate threshold value. First, images are preprocessed to reduce noise without any loss of image detail by fuzzy rule-based filtering; then, in the final stage, a suitable threshold is determined with the help of a fuzziness measure as a criterion function. Experimental results on test images demonstrate the effectiveness of this method.
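The "fuzziness measure as a criterion function" idea can be sketched concretely: give each pixel a membership that is high when it lies close to the mean of its class, then pick the threshold minimizing a linear index of fuzziness. This Huang-style membership and the toy histogram are illustrative assumptions, not the paper's exact two-stage method:

```python
import numpy as np

def fuzzy_threshold(pixels):
    """Pick the threshold minimizing a linear index of fuzziness:
    membership mu is near 1 when a pixel is close to its class mean."""
    x = np.asarray(pixels, dtype=float)
    C = x.max() - x.min()                      # normalizing constant
    best_t, best_f = None, np.inf
    for t in range(int(x.min()), int(x.max())):
        lo, hi = x[x <= t], x[x > t]
        if lo.size == 0 or hi.size == 0:
            continue
        mu = np.where(x <= t,
                      1.0 / (1.0 + np.abs(x - lo.mean()) / C),
                      1.0 / (1.0 + np.abs(x - hi.mean()) / C))
        # linear index of fuzziness: 0 for a crisp image, 1 for maximal ambiguity
        fuzz = (2.0 / x.size) * np.minimum(mu, 1.0 - mu).sum()
        if fuzz < best_f:                      # strict '<' keeps the lowest such t
            best_t, best_f = t, fuzz
    return best_t

pixels = [8, 10, 12] * 20 + [198, 200, 202] * 20   # a cleanly bimodal "image"
t = fuzzy_threshold(pixels)
```

On this bimodal sample the fuzziness drops sharply once the threshold cleanly separates the two modes.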
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURESijcseit
Gray-Level Co-occurrence Matrices (GLCM) are one of the earliest techniques used for image texture
analysis. In this paper we define a new feature called trace, extracted from the GLCM, and discuss its
implications for texture analysis in the context of Content-Based Image Retrieval (CBIR). The theoretical
extension of the GLCM to n-dimensional gray-scale images is also discussed. The results indicate that
trace features outperform Haralick features when applied to CBIR.
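The trace feature is simple to state: for a normalized GLCM P, trace(P) = Σᵢ P(i, i), the probability that the two pixels of a co-occurring pair share the same gray level. A small sketch, assuming a single (0, 1) offset for illustration:

```python
import numpy as np

def glcm_trace(img, levels):
    """Trace of the normalized GLCM for offset (0, 1): the probability that
    horizontally adjacent pixels share the same gray level."""
    m = np.zeros((levels, levels))
    for r in range(img.shape[0]):
        for c in range(img.shape[1] - 1):
            m[img[r, c], img[r, c + 1]] += 1
    return np.trace(m / m.sum())

flat = np.zeros((8, 8), dtype=int)          # perfectly uniform texture
busy = np.arange(64).reshape(8, 8) % 4      # adjacent levels always differ
```

A uniform patch has trace 1 (every pair lands on the diagonal), while a rapidly varying patch has trace near 0, so trace acts as a one-number smoothness score.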
Texture Segmentation Based on Multifractal Dimensionijsc
Texture segmentation can be considered a most important problem: humans can distinguish different
textures quite easily, but automatic segmentation is quite complex and is still an open research problem.
This paper focuses on implementing a novel supervised algorithm for multi-texture segmentation. The
algorithm is based on a blocking procedure in which each image is divided into blocks (16×16 pixels) and
a feature vector is extracted for each block in order to classify it. These features are extracted using the
Box Counting Method (BCM). BCM generates a single feature for each block, and this feature is not
enough to characterize the block; therefore, an algorithm must be implemented that provides more than
one slide of the image, based on a new multithresholding method, after which BCM is used to generate a
single feature for each slide.
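The Box Counting Method referred to above estimates a fractal dimension by counting, at several scales, how many boxes contain foreground pixels, then fitting the slope of log N(s) against log(1/s). A minimal sketch over a binary mask; the scale list is an illustrative assumption:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Slope of log N(s) vs. log(1/s), where N(s) counts the s-by-s boxes
    containing at least one foreground pixel."""
    counts = []
    h, w = mask.shape
    for s in sizes:
        n = 0
        for r in range(0, h, s):
            for c in range(0, w, s):
                if mask[r:r + s, c:c + s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

filled = np.ones((16, 16), dtype=bool)
dim = box_counting_dimension(filled)   # close to 2 for a filled square
```

A filled region yields a dimension near 2 and a thin line near 1, which is why this single number can serve as a per-block texture feature.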
A combined method of fractal and glcm features for mri and ct scan images cla...sipij
Fractal analysis has been shown to be useful in image processing for characterizing shape and gray-scale
complexity. The fractal feature is a compact descriptor used to give a numerical measure of the degree of
irregularity of medical images. This descriptor alone, however, does not capture the local image
structure. In this paper, we present a combination of this box-counting-based parameter with GLCM
features. This combination has yielded good results, especially in the classification of medical textures
from MRI and CT scan images of trabecular bone. The method has the potential to improve clinical
diagnostic tests for osteoporosis pathologies.
BEHAVIOR STUDY OF ENTROPY IN A DIGITAL IMAGE THROUGH AN ITERATIVE ALGORITHM O...ijscmcj
Image segmentation is a critical step in computer vision tasks, constituting an essential issue for pattern
recognition and visual interpretation. In this paper, we study the behavior of entropy in digital images
through an iterative mean shift filtering algorithm. The order of a digital image in gray levels is defined.
The behavior of the Shannon entropy is analyzed and then compared, taking into account the number of
iterations of our algorithm, with the maximum entropy that could be achieved under the same order. The
induced equivalence classes allow us to interpret entropy as a hyper-surface in real m-dimensional space.
The difference between the maximum entropy of order n and the entropy of the image is used to group
the iterations, in order to characterize the performance of the algorithm.
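The two quantities compared above are easy to compute: the Shannon entropy of the gray-level histogram, and the maximum entropy log₂(n) achievable by an image of order n (n occupied gray levels). A minimal sketch:

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy (in bits) of the gray-level histogram of img."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# the maximum entropy for an image of order n is log2(n), reached by a
# uniform histogram; filtering iterations can only move entropy below it
flat = np.zeros((8, 8), dtype=int)        # order 1: entropy 0
two  = np.array([[0, 1], [1, 0]])         # order 2, uniform: entropy 1 bit
```

Tracking log₂(n) minus the measured entropy across mean-shift iterations is exactly the gap the paper uses to group iterations.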
Dual Tree Complex Wavelet Transform, Probabilistic Neural Network and Fuzzy C...IJAEMSJORNAL
This work proposes an ad hoc technique for MRI brain image classification and segmentation: a programmed structure for phase classification using a learning mechanism, which detects brain tumors through spatial fuzzy clustering for biomedical applications. Automated classification and recognition of tumors in diverse MRI images is motivated by the high precision required when dealing with human life. Our proposal employs a segmentation technique, the spatial fuzzy clustering algorithm, for segmenting MRI images to diagnose brain tumors in their earlier phase and to scrutinize the anatomical structure. An Artificial Neural Network (ANN) is exploited to categorize the affected tumor part of the brain. A Dual-Tree Complex Wavelet Transform (DT-CWT) decomposition scheme is utilized for texture analysis of an image, and a Probabilistic Neural Network (PNN) with a Radial Basis Function (RBF) performs the automated brain tumor classification. The processing is operated in two phases: feature mining, followed by classification via the PNN-RBF network. The performance of the classifier was assessed with the training performance and classification accuracies.
Awarded presentation of my research activity, PhD Day 2011, February 23rd, 2011, Cagliari, Italy.
This presentation has been awarded as the best one of the track on information engineering.
Want to know more?
see my publications at
http://prag.diee.unica.it/pra/ita/people/satta
Image Segmentation from RGBD Images by 3D Point Cloud Attributes and High-Lev...CSCJournals
In this paper, an approach is developed for segmenting an image into major surfaces and potential objects using RGBD images and 3D point cloud data retrieved from a Kinect sensor. In the proposed segmentation algorithm, depth and RGB data are mapped together. Color, texture, XYZ world coordinates, and normal-, surface-, and graph-based segmentation index features are then generated for each pixel point. These attributes are used to cluster similar points together and segment the image. The inclusion of new depth-related features provided improved segmentation performance over RGB-only algorithms by resolving illumination and occlusion problems that cannot be handled using graph-based segmentation algorithms, as well as accurately identifying pixels associated with the main structure components of rooms (walls, ceilings, floors). Since each segment is a potential object or structure, the output of this algorithm is intended to be used for object recognition. The algorithm has been tested on commercial building images and results show the usability of the algorithm in real time applications.
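The abstract above clusters per-pixel feature vectors (color, texture, world coordinates, normals) into segments. As a toy illustration of that clustering idea only (not the paper's graph-based algorithm), here is a plain Lloyd's k-means over made-up four-dimensional "pixel" features:

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Plain Lloyd's k-means over per-pixel feature vectors (rows of X)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# toy "pixels": (R, G, B, depth) features from two well-separated surfaces
X = np.vstack([np.full((20, 4), 0.0), np.full((20, 4), 10.0)])
labels, _ = kmeans(X, k=2)
```

In the RGBD setting, adding the depth-derived columns to X is what lets clustering separate surfaces that RGB alone would merge under occlusion or shadow.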
Similar to DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES (20)
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and I will share these foundational concepts to build on:
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
The Metaverse and AI: how can decision-makers harness the Metaverse for their...Jen Stirrup
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Enhancing Performance with Globus and the Science DMZGlobus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
Computer Science & Engineering: An International Journal (CSEIJ), Vol. 3, No. 2, April 2013
DOI : 10.5121/cseij.2013.3203
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED
IMAGES
Tajman Sandhu (Research Scholar)
Department of Information Technology
Chandigarh Engineering College, Landran, Punjab, India
yuvi_taj@yahoo.com
Parminder Singh (Assistant Professor)
Department of Information Technology
Chandigarh Engineering College, Landran, Punjab, India
Singh.parminder06@gmail.com
ABSTRACT
It is a challenging task to build a CBIR system that works primarily on texture values, as their
meaning and semantics need special care to be mapped to human languages. We have
considered highly textured images having the properties entropy, homogeneity, contrast, cluster shade,
and autocorrelation, and have mapped them using a fuzzy min-max scale with respect to their degree
(high, medium, low) and technical interpretation. The developed system performs well in terms of
precision and recall values, showing that the semantic gap has been reduced for CBIR of highly
textured images.
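The fuzzy min-max mapping of property values to linguistic degrees could look like the following sketch. The triangular memberships and the example bounds are assumptions for illustration; the paper's exact scale is not reproduced here:

```python
def degree(value, lo, hi):
    """Map a texture property onto {low, medium, high} using triangular
    memberships over a min-max normalized scale."""
    x = (value - lo) / (hi - lo)                   # fuzzy min-max normalization
    low    = max(0.0, 1.0 - 2.0 * x)               # peaks at x = 0
    medium = max(0.0, 1.0 - abs(2.0 * x - 1.0))    # peaks at x = 0.5
    high   = max(0.0, 2.0 * x - 1.0)               # peaks at x = 1
    # return the label with the strongest membership
    return max((low, "low"), (medium, "medium"), (high, "high"))[1]

# e.g. contrast values observed between 0 and 12 over the collection (assumed bounds)
label = degree(11.0, lo=0.0, hi=12.0)
```

A degree computed this way for each property (entropy, homogeneity, contrast, cluster shade, autocorrelation) is what lets a texture query be phrased in human terms like "high contrast, low homogeneity".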
KEYWORDS
CBIR; fuzzy min-max; recall; precision; texel; texture
1. INTRODUCTION
A Content-Based Image Retrieval system is a system in which retrieval is based on content that is
numerical in nature. The word "content" here means, in mathematical terms, the color values, the
texture features, or other statistical numerical values that can be calculated from the metrics of the
image. A system that works to store and retrieve such content is therefore known as a CBIR system.
This unit of information is basically a numerical value and hard to interpret in terms of human
understanding, so there is an urgent need to remove the barriers and obtain a streamlined retrieval
system that works for humans and with humans. There is growing interest in CBIR because of the
limitations inherent in metadata-based systems, as well as the large range of possible uses for efficient
image retrieval, e.g., the texture value, which measures the visual patterns in images and how they are
spatially defined. Textures are represented by "texels" [1], which are placed into a number of sets
depending on how many textures are detected in the image. Texture is a difficult concept to represent
in human languages. The identification of specific textures in an image is achieved primarily by
modeling texture as a two-dimensional gray-level variation. The relative brightness of pairs of pixels is
computed so that the degree of contrast, regularity, coarseness, and directionality may be estimated.
However, the problem lies in identifying patterns of co-pixel variation and associating them with
particular classes of textures, such as silky or rough. A texel is similar to a pixel (picture element)
because it represents an elementary unit
in a graphic. But there are differences between the texels in a texture map and the pixels in an image display. In special instances there might be a one-to-one correspondence between texels and pixels in some parts of the rendition of a 3-D object, but for most, if not all, of a 3-D rendition, texels and pixels cannot be paired off in such a simple way. There are three major categories of texture-based techniques, namely probabilistic/statistical, spectral and structural approaches. Probabilistic methods treat texture patterns as samples of certain random fields and extract texture features from these properties. Spectral approaches involve the sub-band decomposition of images into different channels and the analysis of spatial frequency content in each of these sub-bands in order to extract texture features. Structural techniques model texture features based on heuristic rules of spatial placement of primitive image elements that attempt to mimic human perception of textural patterns.
Texture based feature extraction techniques such as the co-occurrence matrix, fractals, Gabor filters, variations of the wavelet transform and other transforms have also been widely used [2], [3]. In the Gray Level Co-occurrence Matrix (GLCM) approach, texture features are extracted from the gray scale image [4]. Based on the GLCM, the following texture features are considered for differentiating the textures of images:
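The paper does not give an implementation for the co-occurrence matrix itself, so the following is only a minimal Python sketch of how a GLCM for a single pixel offset could be computed; the function name `glcm` and the quantization to 8 gray levels are illustrative choices, not the authors' code:

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    # Gray-level co-occurrence matrix for the offset (dx, dy):
    # C[i, j] counts pixel pairs where one pixel has quantized level i and
    # its neighbour at the given offset has quantized level j.
    q = (gray.astype(int) * levels) // 256   # quantize 0..255 to 'levels' bins
    h, w = q.shape
    C = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            C[q[y, x], q[y + dy, x + dx]] += 1
    return C / C.sum()   # normalize counts to joint probabilities
```

The five features below (entropy, homogeneity, contrast, cluster shade, autocorrelation) can all be read off either from such a matrix or directly from the image.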
1.1 Entropy: Entropy is a statistical measure of randomness that can be used to characterize the texture of the input image. In MATLAB-style notation it can be computed from the histogram as

p = pixelCount ./ sum(pixelCount);
p = p .* log2(p);
p(isnan(p)) = [];
EntropyValue = -sum(p);

where pixelCount holds the histogram counts of the gray image region, normalized so that p contains the probability of each gray level; the NaN entries produced by empty bins (0 * log2(0)) are removed before summing. Equivalently, entropy can be written as:

-sum(p .* log2(p))

where p contains the normalized histogram counts. The image I can be multidimensional; if I has more than two dimensions, the entropy function treats it as a multidimensional grayscale image and not as an RGB image.
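The histogram-based entropy above can be sketched in Python as follows (a straightforward transcription, not the authors' code; `image_entropy` is an illustrative name):

```python
import numpy as np

def image_entropy(gray):
    # Histogram over the 256 gray levels, normalized to probabilities p
    counts, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = counts / counts.sum()
    p = p[p > 0]                           # drop empty bins: 0*log2(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))  # Shannon entropy in bits

# A constant image has zero entropy; an image split evenly between two
# gray levels has exactly 1 bit of entropy.
half = np.concatenate([np.zeros(128, np.uint8), np.full(128, 255, np.uint8)])
```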
1.2 Homogeneity: The homogeneity consists of two parts: the standard deviation and the discontinuity of the intensities at each pixel of the image. The standard deviation Sij at pixel Pij can be written as

Sij = sqrt( (1 / nw) * Σ_{Pkl ∈ Wd(Pij)} (gkl − mij)² )

where gkl is the intensity at pixel Pkl and mij is the mean of the nw intensities within the window Wd(Pij), which has a size of d by d and is centered at Pij.
A measure of the discontinuity Dij at pixel Pij can be written as:

Dij = sqrt( Gx² + Gy² )
where Gx and Gy are the gradients at pixel Pij in the x and y directions. Thus the homogeneity Hij at Pij can be written as

Hij = 1 − (Sij / Smax) × (Dij / Dmax)

where Smax and Dmax are the maximum standard deviation and discontinuity values over the image, so that both terms are normalized to [0, 1].
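A direct (unoptimized) Python sketch of this homogeneity map, under the normalization-by-maximum assumption stated above, could look as follows; `homogeneity_map` is an illustrative name, not the authors' implementation:

```python
import numpy as np

def homogeneity_map(gray, d=3):
    g = gray.astype(float)
    h, w = g.shape
    r = d // 2
    # Local standard deviation S_ij within a d-by-d window centered at each pixel
    S = np.zeros_like(g)
    for i in range(h):
        for j in range(w):
            win = g[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            S[i, j] = win.std()
    # Discontinuity D_ij = sqrt(Gx^2 + Gy^2) from first-order gradients
    Gy, Gx = np.gradient(g)
    D = np.sqrt(Gx ** 2 + Gy ** 2)
    # Normalize both terms to [0, 1] by their maxima, then combine
    Sn = S / S.max() if S.max() > 0 else S
    Dn = D / D.max() if D.max() > 0 else D
    return 1.0 - Sn * Dn
```

A perfectly flat region has zero deviation and zero discontinuity, so its homogeneity is 1 everywhere.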
1.3 Contrast: Contrast is the difference in luminance and/or color that makes an object (or its
representation in an image or display) distinguishable. In visual perception of the real world,
contrast is determined by the difference in the color and brightness of the object and other
objects within the same field of view. The maximum contrast of an image is the contrast ratio
or dynamic range. It is mathematically defined as:

Contrast = Luminance Difference / Average Luminance
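Taking the luminance difference as the spread between the brightest and darkest pixels (one common reading of the definition above; the paper does not pin this down), a minimal sketch is:

```python
import numpy as np

def global_contrast(gray):
    # Contrast = luminance difference / average luminance
    g = gray.astype(float)
    diff = g.max() - g.min()   # luminance difference across the image
    avg = g.mean()             # average luminance
    return diff / avg if avg > 0 else 0.0
```

For example, an image whose pixels are 50 and 150 has difference 100 and mean 100, giving a contrast of 1.0.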
1.4 Cluster Shade: Cluster shade [8] and cluster prominence are measures of the skewness of the matrix, in other words the lack of symmetry. When cluster shade and cluster prominence are high, the image is not symmetric with respect to its texture values. Cluster shade is defined as follows:

Shade = Σ_{i=1..M} Σ_{j=1..N} (i + j − μx − μy)³ C(i, j)

where C(i, j) is the (i, j)-th entry in a normalized co-occurrence matrix C, M is the number of rows, N is the number of columns, and the means μx and μy are defined as:

μx = Σ_{i=1..M} Σ_{j=1..N} i · C(i, j)
μy = Σ_{i=1..M} Σ_{j=1..N} j · C(i, j)
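The cluster shade sum above translates almost line for line into Python (an illustrative sketch operating on any normalized co-occurrence matrix):

```python
import numpy as np

def cluster_shade(C):
    # C: gray-level co-occurrence matrix; normalize so entries sum to 1
    C = C / C.sum()
    i, j = np.indices(C.shape)
    mu_x = np.sum(i * C)   # mean of the row index under C
    mu_y = np.sum(j * C)   # mean of the column index under C
    # Third-order moment about (mu_x + mu_y): skewness of the matrix
    return float(np.sum((i + j - mu_x - mu_y) ** 3 * C))
```

A perfectly symmetric matrix, such as a uniform one, yields a cluster shade of zero, matching the interpretation that high values indicate asymmetry.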
1.5 Autocorrelation: In any time series containing non-random patterns of behavior, it is likely that any particular item in the series is related in some way to other items in the same series. This can be described by the autocorrelation function and/or the autocorrelation coefficient.
The correlation of two continuous functions f(x) and g(x) can be defined as:

c(s) = ∫ f(x) g(x + s) dx

If f(x) and g(x) are the same function, c(s) is called the autocorrelation function.
For a 2-D image, its autocorrelation function (ACF) can be calculated as:

ACF(a, b) = ∫∫ f(x, y) f(x + a, y + b) dx dy

where f(x, y) is the two-dimensional brightness function that defines the image, and x and y are the variables of integration. Like the original image, the ACF is a 2-D function. Although the dimensions of the ACF and the original image are exactly the same, they have different meanings: in the original image a given coordinate point (x, y) denotes a pixel position, while in the ACF a given coordinate point (a, b) denotes the endpoint of a neighbourhood vector.
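For discrete images the ACF is conveniently computed in the frequency domain via the Wiener-Khinchin theorem (inverse FFT of the power spectrum); this circular-shift variant is one common choice, not necessarily the authors':

```python
import numpy as np

def autocorrelation(f):
    # 2-D circular autocorrelation: inverse FFT of the power spectrum |F|^2,
    # normalized so that the zero-shift value ACF(0, 0) equals 1.
    F = np.fft.fft2(f.astype(float))
    acf = np.fft.ifft2(F * np.conj(F)).real
    return acf / acf[0, 0]
```

The result has exactly the same dimensions as the input image, with entry (a, b) giving the correlation of the image with itself shifted by the neighbourhood vector (a, b).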
2. PROPOSED SYSTEM
For reducing the semantic gap, we have developed fuzzy sets and associated each instance with
its degree and interpretation as follows:
Fig.1 The flowchart of the proposed algorithm
1. Development of a representative data set of textured images.
2. Study of various linguistic descriptors (Fuzzy Sets) based on the domains of textured images.
3. Using the Fuzzy Sets collected in step 2, development of color and texture feature sets for the data set developed in step 1, as follows:
I. Get texture properties fuzzy sets for each feature (Entropy/Energy, Autocorrelation,
Cluster shade, Contrast, Homogeneity).
II. For each texture properties fuzzy set, develop a decision making function which
indicates the membership of each element of the fuzzy set.
III. If X is a universe of discourse and x is a particular element of X (the texture image dataset), then a fuzzy set A defined on X may be written as a collection of ordered pairs Ã = {(x, µÃ(x)), x belongs to texture feature(s)}, where each pair (x, µÃ(x)) is called a singleton [5]. It can also be represented as: Ã = ∑ µÃ(xi) / xi,
IV. where X may be a set having boundaries defined by Max, Min, Mean, Mim and Mam for all the texture features. The Entropy/Energy values fall in the range [0, 1], and this range is divided into four separate regions, namely low, medium, high and very high. The fuzzy boundaries obtained are further associated with human dialect for defining textures into smooth, regular and coarse categories, and then with a degree and interpretation.
4. Development of a storage schema which maps the Hyper Model and texture feature sets
developed in step 3.
5. Development of an interface for storing the highly textured images information w.r.t. previous
steps in the database.
6. Development of a structured query based on which information of texture images can be
retrieved.
7. Development of an application for running the queries developed in step 6.
8. Calculation of precision and recall values.
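As a minimal illustration of the fuzzification in step 3, a feature value normalized to [0, 1] could be mapped to a linguistic degree with triangular membership functions; the boundary values and the names `tri`/`fuzzify` below are hypothetical, since the paper derives its actual boundaries from the dataset's Max/Min/Mean statistics:

```python
def tri(x, a, b, c):
    # Triangular membership function: rises from a to b, falls from b to c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value):
    # Illustrative min-max boundaries for a feature normalized to [0, 1]
    degrees = {
        "low":       tri(value, -0.01, 0.0, 0.4),
        "medium":    tri(value, 0.2, 0.5, 0.8),
        "high":      tri(value, 0.6, 0.8, 1.0),
        "very high": tri(value, 0.8, 1.01, 1.2),
    }
    # Return the best-matching linguistic label and its membership degree
    label = max(degrees, key=degrees.get)
    return label, degrees[label]
```

A query such as "contrast is high" can then be answered by selecting the images whose stored label matches, which is how the structured queries of steps 6-7 bridge numeric features and human dialect.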
3. RESULTS AND INTERPRETATION
Figure 2. PRECISION GRAPH:
The precision [6] values of the proposed system are shown in Fig 2, where the precision value (in percentage) for each query has been calculated and is shown as a bar. The framework applied in the proposed system is run for different queries on the image instance dataset and the standard situation where competing results are obtained on the same data. The shape of the graph shows that the precision value remains in a bracket of 48-70% and that each query produces a significant number of results from the total database. Based on the formula for precision, close to 2/3rd of the results are relevant out of the total results. From the overall average precision value, calculated to be 70.2%, it can be inferred that the system is able to retrieve results close to what the user is expecting, so a degree of semantic gap reduction can be inferred and understood.
Figure 3. RECALL GRAPH:
The recall [6] value of any system will be high if the selection of the dataset images has been done very carefully. If the dataset has high-quality images that are highly significant and relevant to the technical subject on which the user is trying to find a unit of information, the recall value will normally be good enough to satisfy user needs. It can be seen from Fig 3 that, for unique queries, 1/3rd of the database remains relevant and contributes towards the results. Also, the image instance content stored in the domain dataset of interest is indexed and searchable. Based on the formula for recall, the average recall value calculated is 35%.
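The precision and recall formulas referred to above can be stated compactly in code (an illustrative sketch; the paper itself follows the definitions in [6]):

```python
def precision_recall(retrieved, relevant):
    # precision = |retrieved ∩ relevant| / |retrieved|
    # recall    = |retrieved ∩ relevant| / |relevant|
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

For instance, a query returning 3 images of which 2 are relevant, out of 5 relevant images in the database, scores precision 2/3 and recall 0.4.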
A retrieval algorithm can fall into one of the two cases given below.
Case 1: The algorithm has higher precision values at lower recall levels than at higher recall levels. This implies that the algorithm does not retrieve as well (in terms of numbers), but ranks its retrieved result set well. It can identify good relevance in its ranking metric, but does not manage to retrieve all or many of the relevant result sets. This algorithm is more selective.
Case 2: The algorithm has lower precision values at higher recall levels than at lower recall levels. This implies that the algorithm retrieves well, but does not rank the retrieved image instances high enough. It does not have a good relevance ranking metric, but is able to search and retrieve well. This algorithm returns noisy results and is not stable. Based on the interpretation of the precision and recall values, the proposed system is a first-case algorithm.
Figure 4. Time taken by queries:
This pertains to the time taken by the system to return results. Current technological advances are in the direction of intelligence augmentation, using information just in time to take decisions and solve many real-time problems and issues. Therefore, it is important to look into the time aspect of retrieval to check performance. Fig 4 shows the time taken by the queries; from the graph one can visualize the minimum and maximum retrieval times in milliseconds.
4. CONCLUSION
The precision value obtained by the proposed method is close to 70%, which is fairly good, as it shows that the user finds close to 2/3rd of the total retrieved results relevant to the query. The recall value obtained by the proposed method is close to 35%, which reflects that the image quality and the associated information fed into the system while developing its database are significant in helping retrieve the textured images the user is looking for. In the proposed algorithm, the execution time of the query fed by the user ranges from 0.2434 to 0.2527 milliseconds.
5. FUTURE WORK
In further development of the CBIR [7], many other multi-modal feature extraction algorithms must be used to make it rich in terms of both low-level and high-level concepts, and one can also explore the need for machine learning algorithms if the size of the dataset increases many fold. Then, for building next-generation CBIR, the focus must also be on incorporating algorithms that help to reduce the sensory gap as well.
ACKNOWLEDGMENT
I extend my gratitude to all who helped me in the successful completion of this research work. I would like to acknowledge the assistance and support received from Er. Parminder Singh, who gave me a fruitful opportunity to apply our knowledge by doing this research work.
REFERENCES
[1] S. Todorovic, "Texel-based texture segmentation", in IEEE 12th International Conference on Computer Vision, Sept. 29 - Oct. 2, 2009, pp. 841-848.
[2] H. Tamura, S. Mori, T. Yamawaki, "Textural features corresponding to visual perception", IEEE Transactions on Systems, Man and Cybernetics, Vol. 8, Issue 6, 1978, pp. 460-473.
[3] B.S. Manjunath, Jens-Rainer Ohm, Vinod V. Vasudevan, Akio Yamada, "Color and texture descriptors", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 11, Issue 6, pp. 703-715, June 2001.
[4] P.G. Reddy, "Extraction of Image Features for an Effective CBIR System", in IEEE Conference Publications - Recent Advances in Space Technology Services and Climate Change (RSTSCC), ISBN: 978-1-4244-9184-1, pp. 138-142, 2010.
[5] S. Rajasekaran and G.A. Vijayalaksmi Pai, "Neural Networks, Fuzzy Logic and Genetic Algorithms", Prentice Hall of India Private Limited, ISBN: 81-203-2186-3, Sixth Printing, 2006.
[6] Vijay V. Raghavan, Gwang S. Jung and Peter Bollmann, "A Critical Investigation of Recall and Precision as Measures of Retrieval System Performance", University of Southwestern Louisiana and Technische Universität Berlin.
[7] Ricardo da Silva Torres and Alexandre Xavier Falcão, "Content-Based Image Retrieval: Theory and Applications".
[8] Eva M. van Rikxoort, "Texture analysis", April 15, 2004.