This paper presents a new method for indoor-outdoor image classification in which the concept of correlated color temperature is used to extract distinguishing features between the two classes. Using the hue color component, each image is segmented into different color channels, and the correlated color temperature of each channel is calculated. These values are then combined to build the image feature vector, which, besides the color temperature values, also holds information about the color composition of the image. In the classification phase, a KNN classifier labels images as indoor or outdoor. Two datasets are used for testing: a collection of images gathered from the internet, and a second dataset built by extracting frames from different video sequences captured with a single device. A high classification rate compared with other state-of-the-art methods shows the suitability of the proposed method for indoor-outdoor image classification.
This document proposes a new method for segmenting outdoor images called Color Cluster Elimination (CCE), which utilizes color clustering and texture analysis. CCE performs color clustering in a multi-resolution pyramid to gradually eliminate larger color clusters, preventing them from dominating segmentation and allowing smaller clusters to emerge more clearly. It then examines regions for adjacent homochromatic objects with different textures, introducing the Texture Sewn Response (TSR) to indicate texture strength across resolutions and directions. The method is evaluated on the BSDS500 dataset against other methods, demonstrating satisfactory performance for outdoor scene segmentation.
Image enhancement techniques play a vital role in improving the quality of an image. An enhancement technique basically enhances the foreground information, retains the background, and improves the overall contrast of an image. In some cases the background of an image hides its structural information. This paper proposes an algorithm which enhances the foreground and background parts separately, stretches the contrast of the image at both the inter-object and intra-object levels, and then combines them into an enhanced image. The results are compared with various classical methods using image quality measures.
Image segmentation involves grouping similar image components, such as pixels, into segments. It has applications in medical imaging, satellite imagery, and video summarization. Common methods include thresholding, k-means clustering, and region-based approaches. Thresholding segments an image based on pixel intensity values, while k-means clustering groups pixels into a specified number of clusters based on color or other feature similarity. Region-based methods grow or merge regions of similar pixels. Watershed segmentation treats an image as a topographic surface and finds boundaries between regions.
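As a concrete illustration of one of the methods above, here is a minimal, pure-Python sketch of k-means color clustering on a handful of RGB pixels (the toy pixel data and deterministic initialization are illustrative choices; real pipelines operate on full images with libraries such as NumPy or OpenCV):

```python
def kmeans(pixels, k, iters=20):
    """Group RGB pixels into k clusters by squared Euclidean color distance."""
    # Deterministic initialization: spread centers evenly through the pixel list.
    centers = [pixels[i * len(pixels) // k] for i in range(k)]
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assignment step: label each pixel with its nearest center.
        labels = [min(range(k),
                      key=lambda c: sum((p[i] - centers[c][i]) ** 2 for i in range(3)))
                  for p in pixels]
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(pixels, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(ch) / len(members) for ch in zip(*members))
    return labels, centers

# Two obvious color groups: near-red and near-blue pixels.
pixels = [(250, 10, 10), (240, 20, 5), (255, 0, 0),
          (10, 10, 250), (0, 5, 240), (20, 0, 255)]
labels, centers = kmeans(pixels, k=2)
print(labels)  # → [0, 0, 0, 1, 1, 1]
```

With two well-separated color groups the assignment converges in a couple of iterations: the near-red pixels share one label and the near-blue pixels the other.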
A version of watershed algorithm for color image segmentation (Habibur Rahman)
The document summarizes a master's thesis presentation on a new watershed algorithm for color image segmentation. The thesis addresses issues with existing watershed algorithms such as over-segmentation and sensitivity to noise. Its contributions include an adaptive masking and thresholding mechanism that overcomes over-segmentation and performs well on noisy images. The method is evaluated using five image quality assessment metrics on 20 classes of images, showing that it performs better and has lower computational complexity than other algorithms. In conclusion, the adaptive watershed algorithm ensures accurate segmentation and is suitable for real-time applications.
This document summarizes a research paper on color image segmentation using an improved region growing and k-means method. It begins with an abstract describing the use of enhanced watershed and region growing algorithms for image and color segmentation. It then reviews related work on image segmentation techniques. The document describes the traditional watershed and region growing techniques, and proposes a new algorithm using Ncut and k-means clustering. It presents results of applying the new method versus traditional watershed segmentation, showing lower Liu's F-factor values for the new method, which indicate better segmentation. The conclusion is that the new technique performs both image segmentation and color segmentation more efficiently than older methods.
A Novel Feature Extraction Scheme for Medical X-Ray Images (IJERA Editor)
X-ray images are gray-scale images with almost the same textural characteristics, so conventional texture or color features cannot be used for appropriate categorization in medical X-ray image archives. This paper presents a novel combination of methods, GLCM, LBP, and HOG, for extracting distinctive invariant features from X-ray images belonging to the IRMA (Image Retrieval in Medical Applications) database, which can be used to perform reliable matching between different views of an object or scene. GLCM represents the distributions of the intensities and information about the relative positions of neighboring pixels in an image. The LBP features are invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination. A HOG feature vector represents the local shape of an object, capturing edge information over multiple cells. These features have been exploited in different algorithms for the automatic classification of medical X-ray images. Excellent experimental results obtained on real problems of rotation invariance, including particular rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation-invariant local binary patterns.
Performance Evaluation of Filters for Enhancement of Images in Different Appl... (IOSR Journals)
This document evaluates the performance of different filters for enhancing images in various application areas. It discusses contrast stretching and histogram equalization as common spatial domain techniques for contrast enhancement. Bi-histogram equalization was introduced to preserve image brightness during contrast enhancement. However, contrast enhancement also enhances noise, causing blurriness. The proposed BHEGF method aims to reduce this while providing more accurate results. The document uses median, Gaussian, average, and motion filters and evaluates them based on processing time, mean squared error, brightness count, and peak signal-to-noise ratio to determine which filter provides the best performance for image enhancement.
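Two of the evaluation criteria mentioned, mean squared error and peak signal-to-noise ratio, are simple to state precisely. A minimal sketch for 8-bit grayscale images, with images flattened to plain lists for brevity:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-sized grayscale images (flat lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """PSNR in dB; higher means the filtered image is closer to the reference."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * math.log10(peak ** 2 / e)

original = [10, 50, 100, 200]
filtered = [12, 48, 101, 198]
print(mse(original, filtered))              # → 3.25
print(round(psnr(original, filtered), 2))   # → 43.01
```

Ranking filters then amounts to computing these metrics between each filtered output and the reference image and preferring lower MSE or, equivalently, higher PSNR.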
Abstract: Image segmentation plays a vital role in image processing, and research in this area remains relevant due to its wide applications. Image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. It sometimes becomes necessary to calculate the total number of colors in a given RGB image, for example to quantize the image or to detect cancer and brain tumours. The goal of this paper is to identify the best algorithm for image segmentation. Keywords: Image segmentation, RGB
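The color-counting and quantization steps this abstract alludes to can be sketched as follows (toy pixel lists stand in for real images, and the 4-level quantization is an illustrative choice):

```python
def count_colors(pixels):
    """Number of distinct (R, G, B) triples in a pixel list."""
    return len(set(pixels))

def quantize(pixels, levels=4):
    """Map each 8-bit channel onto `levels` evenly spaced values."""
    step = 256 // levels
    return [tuple((c // step) * step for c in p) for p in pixels]

pixels = [(255, 0, 0), (250, 3, 1), (0, 255, 0), (2, 250, 4)]
print(count_colors(pixels))            # → 4 (all colors distinct)
print(count_colors(quantize(pixels)))  # → 2 (near-equal colors collapse)
```

Quantization collapses perceptually similar colors onto shared representatives, which is why the distinct-color count drops from four to two here.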
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate... (Habibur Rahman)
The document proposes a modified watershed algorithm for image segmentation. It applies adaptive masking and thresholding to each color channel before combining the results. The modified algorithm is compared to FCM, RG, and HKM using metrics like PSNR, MSE, PSNRRGB, and CQM on 10 images. Results show the proposed method ensures accuracy and quality while being faster than other algorithms, making it suitable for real-time use. It performs better than the other algorithms according to visual and quantitative analysis.
An Improved Way of Segmentation and Classification of Remote Sensing Images U... (ijsrd.com)
The significance of digital image processing stems from two principal application areas: the improvement of pictorial information for human interpretation, and the processing of image data for storage, transmission, and representation for autonomous machine perception. The objective of this research work is to define the meaning and scope of image segmentation for remote sensing images, which are subsequently classified with statistical measures. In this paper a kernel-induced possibilistic C-means clustering algorithm is implemented to classify remote sensing image data using image features. The proposed algorithm segments and classifies images with good accuracy according to statistical metrics.
Research Inventy : International Journal of Engineering and Science (researchinventy)
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation (IJECEIAES)
Some image regions carry unbalanced information, such as blurred contours, shade, and uneven brightness; such regions are called ambiguous regions. Ambiguous regions cause problems during the region merging step of interactive image segmentation because they carry double information, as both object and background. We propose a new region merging strategy using fuzzy similarity measurement for image segmentation. The proposed method has four steps: first, initial segmentation using the mean-shift algorithm; second, manually placing markers to indicate the object and background regions; third, determining the fuzzy (ambiguous) regions in the image; and last, fuzzy region merging using fuzzy similarity measurement. The experimental results demonstrate that the proposed method successfully segments natural images and dental panoramic images, with average misclassification error (ME) values of 1.96% and 5.47%, respectively.
Feature Extraction of an Image by Using Adaptive Filtering and Morphological S... (IOSR Journals)
Abstract: Various schemes are used for enhancing an image, including gray-scale manipulation, filtering, and histogram equalization, of which histogram equalization is one of the best-known image enhancement techniques. It became a popular technique for contrast enhancement because it is simple and effective; its basic idea is to remap the gray levels of an image. Here the segmented image is obtained using morphological segmentation, with morphological reconstruction used to segment the image. A comparative analysis of the different enhancement and segmentation methods is carried out on the basis of subjective and objective parameters. The subjective parameter is visual quality; the objective parameters are area, perimeter, minimum and maximum intensity, average voxel intensity, standard deviation of intensity, eccentricity, coefficient of skewness, coefficient of kurtosis, median intensity, and mode intensity. Keywords: Histogram Equalization, Segmentation, Morphological Reconstruction.
Object-Oriented Approach of Information Extraction from High Resolution Satel... (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document summarizes and reviews several techniques for image mining, including feature extraction, image clustering, and object recognition algorithms. It discusses color, texture, and edge feature extraction techniques and evaluates their precision and recall. It also describes the block truncation algorithm for image recognition and the cascade feature extraction approach. The key techniques - color moments, block truncation coding, and cascade classifiers - are evaluated based on experimental recall and precision results. Overall, the document provides an overview of different image mining techniques and evaluates their effectiveness.
CONTRAST ENHANCEMENT TECHNIQUES USING HISTOGRAM EQUALIZATION METHODS ON COLOR... (IJCSEA Journal)
Histogram equalization (HE) is a simple and widely used image contrast enhancement technique. Its basic disadvantage is that it changes the brightness of the image. To overcome this drawback, various HE variants have been proposed; these methods preserve the brightness of the output image but do not produce a natural look. To overcome this problem, the present paper uses Multi-HE methods, which decompose the image into several sub-images and apply the classical HE method to each sub-image. The algorithm is applied to various images and analysed using both objective and subjective assessment.
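The classical HE remap that Multi-HE applies to each sub-image can be sketched as follows (one common normalization of the cumulative histogram is used here; variants differ in how they anchor the minimum of the CDF):

```python
def equalize(pixels, levels=256):
    """Classical histogram equalization of a flat list of gray levels."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution of gray levels.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    # Remap: scale the CDF into the full gray-level range.
    return [round((cdf[p] - 1) / (n - 1) * (levels - 1)) for p in pixels]

# A low-contrast image crowded into [100, 103] spreads over the full range.
img = [100, 100, 101, 102, 103]
print(equalize(img))  # → [64, 64, 128, 191, 255]
```

The remap stretches the crowded gray levels across [0, 255], which raises contrast; it is exactly this global stretching that shifts mean brightness, motivating the brightness-preserving variants the abstract discusses.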
Supervised and unsupervised classification techniques for satellite imagery i... (gaup_geo)
This document compares supervised and unsupervised classification techniques for satellite imagery analysis of land cover in the Porto Alegre region of Brazil. Supervised classification involved collecting over 500 training sites to create signatures for 8 land cover classes. Unsupervised classification used ISOcluster to generate 36 spectral classes which were grouped into the 8 informational classes. Both classifications underwent post-processing including majority filtering and polygon elimination to produce final 1-hectare minimum mapping unit vector maps. Accuracy assessments found the supervised classification to be more accurate at 76% compared to 48% for the unsupervised method.
This document presents a hybrid approach for color image segmentation that integrates color edge information and seeded region growing. It uses color edge detection in CIE L*a*b color space to select initial seed regions and guide region growth. Seeded region growing is performed based on color similarity between pixels. The edge map and region map are fused to produce homogeneous regions with closed boundaries. Small regions are then merged. The approach is tested on images from the Berkeley segmentation dataset and produces reasonably good segmentation results by combining color and edge information.
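The seeded-region-growing step of such a hybrid approach can be sketched on a grayscale grid (a simplification: intensity difference against the running region mean stands in for the paper's color-similarity test in CIE L*a*b, and the seed, image, and tolerance are illustrative):

```python
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`, absorbing 4-connected neighbors whose
    intensity is within `tol` of the current region mean."""
    h, w = len(img), len(img[0])
    region = {seed}
    queue = deque([seed])
    total = img[seed[0]][seed[1]]
    while queue:
        r, c = queue.popleft()
        mean = total / len(region)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                if abs(img[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += img[nr][nc]
                    queue.append((nr, nc))
    return region

# A bright 2x2 block in a dark image: growing from (0, 0) stays in the block.
img = [[200, 205, 20],
       [198, 202, 25],
       [30, 22, 18]]
print(sorted(region_grow(img, (0, 0))))  # → [(0, 0), (0, 1), (1, 0), (1, 1)]
```

In the full method, an edge map would additionally stop growth at detected boundaries, yielding closed regions even where color similarity alone would leak.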
Orientation Spectral Resolution Coding for Pattern Recognition (IOSRjournaljce)
In pattern recognition, feature description is of great importance. Features are represented in the spatial domain or a transformed domain; spatial-domain features are lower-level representations, while transformed-domain features are finer and more informative. In transformed-domain representation, features are encoded spectrally using advanced transformation techniques such as the wavelet transform. However, this feature extraction approach considers only the band coefficients; orientation variation is not considered. In this paper, the inherent orientation variation within each spectral band is derived, and orientation filtering is applied for effective feature representation. The results show an improvement in recognition accuracy compared with a conventional retrieval system.
Automated Colorization of Grayscale Images Using Texture Descriptors (IDES Editor)
The document proposes a novel automated process called ACTD for colorizing grayscale images using texture descriptors without human intervention. It analyzes sample color images to extract coherent texture regions, then uses Gabor filtering for texture-based segmentation. Texture descriptors and color information are computed and stored for each region. These are then used to colorize a new grayscale image based on texture matching. The method combines techniques such as Gabor filtering, fuzzy C-means clustering with a new "Gki factor" for noise tolerance, and content-based image retrieval of texture descriptors. Preliminary results found the approach viable but improvements were needed for scale and rotation invariance.
The document discusses image segmentation techniques. It begins by defining segmentation as partitioning an image into distinct regions that correlate with objects or features of interest. The goal of segmentation is to find meaningful groups of pixels. Several segmentation techniques are described, including region growing/shrinking, clustering methods, and boundary detection. Region growing uses homogeneity tests to merge neighboring regions, while clustering divides space based on similarity within groups. Boundary detection finds boundaries between objects. The document provides examples and details of applying these segmentation methods.
Review on Image Enhancement in Spatial Domain (idescitation)
With the proliferation of electronic imaging devices in mobile phones, computer vision, medicine, and space applications, image enhancement has become an interesting and important area of research. These devices are viewed under a diverse range of conditions, with a large loss of contrast under bright outdoor viewing; viewing-condition parameters such as surround effects, correlated color temperature, and ambient lighting have therefore become significant. The principal objective of image enhancement is to adjust the quality of an image for better human visual perception. The appropriate choice of enhancement technique is greatly influenced by the imaging modality, the task at hand, and the viewing conditions. Image enhancement techniques are broadly classified into two categories: spatial-domain and frequency-domain enhancement. This survey report gives an overview of the different methodologies used for enhancement in the spatial domain. It is noted that more research remains to be done in this field.
The development of multimedia technology in Content-Based Image Retrieval (CBIR) systems is one of the outstanding ways to retrieve images from a large database. The feature vectors of the query image are compared with the feature vectors of the database images to find matching images. It is widely observed that no single algorithm is effective at extracting features from all kinds of natural images. Thus an intensive analysis of certain color, texture, and shape extraction techniques is carried out to identify an efficient CBIR technique suited to a particular type of image. Image extraction includes feature description and feature extraction. In this paper, we propose a feature extraction technique combining the Color Layout Descriptor (CLD), the Gray Level Co-occurrence Matrix (GLCM), and marker-controlled watershed segmentation, which retrieves matching images based on the similarity of color, texture, and shape within the database. For performance analysis, the image retrieval timing of the proposed technique is calculated and compared with each of the individual features.
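The matching step shared by these CBIR abstracts, comparing a query feature vector against database vectors, can be sketched as follows (the three-element vectors and file names are illustrative stand-ins for real CLD/GLCM/shape descriptors):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, database, top_k=2):
    """Return database keys ranked by ascending distance to the query."""
    ranked = sorted(database, key=lambda name: euclidean(query, database[name]))
    return ranked[:top_k]

db = {"beach.jpg": [0.9, 0.8, 0.1],
      "forest.jpg": [0.1, 0.9, 0.2],
      "sunset.jpg": [0.8, 0.7, 0.2]}
print(retrieve([0.88, 0.78, 0.12], db))  # → ['beach.jpg', 'sunset.jpg']
```

Combined descriptors simply concatenate the per-feature vectors before this distance computation, which is why retrieval timing can be compared between the combined technique and each individual feature.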
Content Based Image Retrieval using Color Boosted Salient Points and Shape fe... (CSCJournals)
Salient points are locations in an image where there is a significant variation with respect to a chosen image feature. Since the set of salient points in an image capture important local characteristics of that image, they can form the basis of a good image representation for content-based image retrieval (CBIR). Salient features are generally determined from the local differential structure of images. They focus on the shape saliency of the local neighborhood. Most of these detectors are luminance based which have the disadvantage that the distinctiveness of the local color information is completely ignored in determining salient image features. To fully exploit the possibilities of salient point detection in color images, color distinctiveness should be taken into account in addition to shape distinctiveness. This paper presents a method for salient points determination based on color saliency. The color and texture information around these points of interest serve as the local descriptors of the image. In addition, the shape information is captured in terms of edge images computed using Gradient Vector Flow fields. Invariant moments are then used to record the shape features. The combination of the local color, texture and the global shape features provides a robust feature set for image retrieval. The experimental results demonstrate the efficacy of the method.
Multispectral Satellite Color Image Segmentation Using Fuzzy Based Innovative... (Dibya Jyoti Bora)
Multispectral satellite color images need special treatment for object-based classification tasks such as segmentation. Traditional algorithms are not efficient enough for segmenting such high-resolution images, as they often suffer from a serious problem: over-segmentation. An innovative approach for the segmentation of multispectral color images is therefore proposed in this paper. The proposed approach consists of two phases. In the first phase, the selected bands are pre-processed for noise removal and contrast enhancement of the input multispectral satellite color image in the HSV color space. In the second phase, fuzzy segmentation of the enhanced image from the first phase is carried out by the FCM algorithm with optimal parameter passing. A final shift from HSV back to RGB color space presents the segmentation result, separating the different regions of interest with proper, distinguishable color labeling. The results are quite promising and compare favorably with other state-of-the-art algorithms.
OBJECT SEGMENTATION USING MULTISCALE MORPHOLOGICAL OPERATIONS (ijcseit)
Object segmentation plays an important role in human visual perception, medical image processing, and content-based image retrieval, providing information for recognition and interpretation. This paper uses mathematical morphology for image segmentation. Object segmentation is difficult because one usually does not know a priori what type of object exists in an image, how many different shapes there are, and what regions the image has. To carry out discrimination and segmentation, several innovative morphology-based segmentation methods are proposed. The present study proposes a segmentation method based on multiscale morphological reconstructions. Various sizes of structuring elements have been used to segment simple and complex shapes. The method enhances local boundaries, which may improve segmentation accuracy. It is tested on various datasets, and the results show that it can be used for both interactive and automatic segmentation.
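The binary erosion and dilation underlying such multiscale morphological processing can be sketched with a fixed 3x3 structuring element (border pixels test only their in-bounds neighbors, a common simplification; larger structuring elements give the coarser scales the paper varies):

```python
def dilate(img):
    """Binary dilation with a 3x3 square structuring element."""
    h, w = len(img), len(img[0])
    return [[int(any(img[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if 0 <= r + dr < h and 0 <= c + dc < w))
             for c in range(w)] for r in range(h)]

def erode(img):
    """Binary erosion with a 3x3 square structuring element."""
    h, w = len(img), len(img[0])
    return [[int(all(img[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if 0 <= r + dr < h and 0 <= c + dc < w))
             for c in range(w)] for r in range(h)]

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
# Opening (erode then dilate) removes shapes smaller than the element:
print(dilate(erode(img)))  # → all zeros: the 2x2 block vanishes
```

Running the same opening with progressively larger elements removes progressively larger structures, which is the scale-selection idea behind multiscale reconstruction.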
User Interactive Color Transformation between Images (IJMER)
Abstract: In this paper we present a process called color transfer, which can borrow one image's color characteristics from another. Most current colorization algorithms either require significant user effort or have a large computational cost. The focus here is on an orthogonal color space, the lαβ color space, in which there is no correlation between the axes. We have implemented two global color transfer algorithms in lαβ space using simple color statistics such as the mean, standard deviation, and covariance of the image pixels; our approach extends Reinhard's method. Our local color transfer algorithm uses simple statistical color analysis to recolor the target image according to a color range selected in the source image. A color influence mask is prepared for the target image: a mask that specifies which parts of the target image will be affected by the selected color range. The target image is then recolored in lαβ space according to the prepared color influence map. Because luminance and chrominance information are separate in the lαβ color space, image recoloring can be made optional. The basic color transformation uses stored color statistics of the source and target images. All the algorithms are implemented in the JAVA object-oriented language. The main advantage of the proposed method over existing ones is that it allows the user to recolor part of the image in a simple and intuitive way, keeping the other colors intact and achieving a natural look.
Index Terms: color transfer, local color statistics, color characteristics, orthogonal color space, color influence map.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Application of Image Retrieval Techniques to Understand Evolving Weather (ijsrd.com)
Multispectral satellite images provide valuable information for understanding the evolution of various weather systems, such as tropical cyclones, shifts of the intertropical convergence zone, and movements of various troughs; accurate prediction and estimation will save lives and property. This work deals with the development of an application that enables users to search a database for meteorological satellite images using gray-level, texture, or shape features. The gray-level feature is extracted using the histogram method; the texture feature is extracted using the gray-level co-occurrence method and a wavelet approach; and the shape feature vector is extracted using morphological operations. The similarity between the query image and database images is calculated using Euclidean distance. The performance of the system is evaluated using precision.
Abstract: Image segmentation plays a vital role in image processing. Research in this area remains relevant due to its wide applications. Image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. Sometimes it becomes necessary to calculate the total number of colors in a given RGB image, for instance to quantize the image or to detect cancer and brain tumours. The goal of this paper is to provide the best algorithm for image segmentation. Keywords: Image segmentation, RGB
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...Habibur Rahman
The document proposes a modified watershed algorithm for image segmentation. It applies adaptive masking and thresholding to each color channel before combining the results. The modified algorithm is compared to FCM, RG, and HKM using metrics like PSNR, MSE, PSNRRGB, and CQM on 10 images. Results show the proposed method ensures accuracy and quality while being faster than other algorithms, making it suitable for real-time use. It performs better than the other algorithms according to visual and quantitative analysis.
An Improved Way of Segmentation and Classification of Remote Sensing Images U...ijsrd.com
The ultimate significance of images lies in digital image processing, which stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, communication, and representation for autonomous machine perception. The objective of this research work is to define the meaning and scope of image segmentation of remote sensing images, which are subsequently classified with statistical measures. In this paper, a kernel-induced Possibilistic C-means clustering algorithm has been implemented for classifying remote sensing image data with image features. Finally, the proposed work points out that this algorithm works well for segmenting and classifying images with better accuracy according to statistical metrics.
Research Inventy : International Journal of Engineering and Scienceresearchinventy
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation IJECEIAES
Some image regions carry unbalanced information, such as blurred contours, shade, and uneven brightness. These are called ambiguous regions. Ambiguous regions cause problems during region merging in interactive image segmentation because they carry double information, both as object and as background. We propose a new region merging strategy using fuzzy similarity measurement for image segmentation. The proposed method has four steps: the first step is initial segmentation using the mean-shift algorithm. The second step is placing markers manually to indicate the object and background regions. The third step is determining the fuzzy (ambiguous) regions in the image. The last step is fuzzy region merging using fuzzy similarity measurement. The experimental results demonstrated that the proposed method is able to segment natural images and dental panoramic images successfully, with average misclassification error (ME) values of 1.96% and 5.47%, respectively.
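The abstract does not spell out its fuzzy similarity measure; one common choice for comparing fuzzy membership vectors is the min/max (fuzzy Jaccard) ratio, sketched here to show how an ambiguous region could be merged toward the more similar side. All membership values below are made up for illustration:

```python
import numpy as np

def fuzzy_similarity(mu_a, mu_b):
    """Fuzzy (min/max) similarity between two membership vectors.
    Returns 1.0 for identical memberships, approaching 0.0 for disjoint ones."""
    mu_a, mu_b = np.asarray(mu_a, float), np.asarray(mu_b, float)
    return np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum()

# hypothetical region memberships over a small color-bin alphabet
obj  = [0.9, 0.8, 0.1]   # marked object region
amb  = [0.7, 0.6, 0.3]   # ambiguous region (double information)
back = [0.1, 0.2, 0.9]   # marked background region

# merge the ambiguous region with whichever side it is more similar to
merge_with_object = fuzzy_similarity(amb, obj) > fuzzy_similarity(amb, back)
```

Here the ambiguous region scores 0.7 against the object memberships and about 0.27 against the background, so it would be merged into the object.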
Feature Extraction of an Image by Using Adaptive Filtering and Morpological S...IOSR Journals
Abstract: For enhancing an image, various enhancement schemes are used, including gray scale manipulation, filtering, and histogram equalization. Histogram equalization is one of the well-known image enhancement techniques; it became popular for contrast enhancement because it is simple and effective. The basic idea of the histogram equalization method is to remap the gray levels of an image. Here, using morphological segmentation, we obtain the segmented image; morphological reconstruction is used to segment the image. A comparative analysis of different enhancement and segmentation techniques is carried out on the basis of subjective and objective parameters. The subjective parameter is visual quality, and the objective parameters are area, perimeter, min and max intensity, average voxel intensity, standard deviation of intensity, eccentricity, coefficient of skewness, coefficient of kurtosis, median intensity, and mode intensity. Keywords: Histogram Equalization, Segmentation, Morphological Reconstruction.
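The gray-level remapping at the heart of histogram equalization can be sketched in a few lines; this is the textbook CDF-based formulation, not the specific variant compared in the paper:

```python
import numpy as np

def histogram_equalize(img):
    """Remap gray levels so the cumulative distribution becomes ~uniform."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]        # CDF value of the lowest occupied bin
    # classic HE lookup table: scale the CDF onto the full [0, 255] range
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# a low-contrast ramp confined to [100, 131] stretches to span [0, 255]
img = np.tile(np.arange(100, 132, dtype=np.uint8), (32, 1))
out = histogram_equalize(img)
```

The remap leaves the pixel ordering intact; only the spacing of the gray levels changes, which is why HE can alter mean brightness (the drawback the Multi-HE entry below addresses).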
Object-Oriented Approach of Information Extraction from High Resolution Satel...iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document summarizes and reviews several techniques for image mining, including feature extraction, image clustering, and object recognition algorithms. It discusses color, texture, and edge feature extraction techniques and evaluates their precision and recall. It also describes the block truncation algorithm for image recognition and the cascade feature extraction approach. The key techniques - color moments, block truncation coding, and cascade classifiers - are evaluated based on experimental recall and precision results. Overall, the document provides an overview of different image mining techniques and evaluates their effectiveness.
CONTRAST ENHANCEMENT TECHNIQUES USING HISTOGRAM EQUALIZATION METHODS ON COLOR...IJCSEA Journal
Histogram equalization (HE) is a simple and widely used image contrast enhancement technique. The basic disadvantage of HE is that it changes the brightness of the image. To overcome this drawback, various HE methods have been proposed. These methods preserve the brightness of the output image but do not produce a natural look. To overcome this problem, the present paper uses Multi-HE methods, which decompose the image into several sub-images and apply the classical HE method to each sub-image. The algorithm is applied to various images and has been analysed using both objective and subjective assessment.
Supervised and unsupervised classification techniques for satellite imagery i...gaup_geo
This document compares supervised and unsupervised classification techniques for satellite imagery analysis of land cover in the Porto Alegre region of Brazil. Supervised classification involved collecting over 500 training sites to create signatures for 8 land cover classes. Unsupervised classification used ISOcluster to generate 36 spectral classes which were grouped into the 8 informational classes. Both classifications underwent post-processing including majority filtering and polygon elimination to produce final 1-hectare minimum mapping unit vector maps. Accuracy assessments found the supervised classification to be more accurate at 76% compared to 48% for the unsupervised method.
This document presents a hybrid approach for color image segmentation that integrates color edge information and seeded region growing. It uses color edge detection in CIE L*a*b color space to select initial seed regions and guide region growth. Seeded region growing is performed based on color similarity between pixels. The edge map and region map are fused to produce homogeneous regions with closed boundaries. Small regions are then merged. The approach is tested on images from the Berkeley segmentation dataset and produces reasonably good segmentation results by combining color and edge information.
Orientation Spectral Resolution Coding for Pattern RecognitionIOSRjournaljce
In pattern recognition, feature descriptions are of great importance. Features are represented in the spatial domain or in a transformed domain. Whereas spatial-domain features give a coarser representation, transformed-domain features are finer and more informative. In transformed-domain representation, features are encoded using spectral coding with advanced transformation techniques such as the wavelet transform. However, this feature extraction approach considers only the band coefficients; orientation variation is not taken into account. In this paper, the inherent orientation variation within each spectral band is derived, and orientation filtration is applied for effective feature representation. The obtained results illustrate an improvement in recognition accuracy in comparison to a conventional retrieval system.
Automated Colorization of Grayscale Images Using Texture DescriptorsIDES Editor
The document proposes a novel automated process called ACTD for colorizing grayscale images using texture descriptors without human intervention. It analyzes sample color images to extract coherent texture regions, then uses Gabor filtering for texture-based segmentation. Texture descriptors and color information are computed and stored for each region. These are then used to colorize a new grayscale image based on texture matching. The method combines techniques such as Gabor filtering, fuzzy C-means clustering with a new "Gki factor" for noise tolerance, and content-based image retrieval of texture descriptors. Preliminary results found the approach viable but improvements were needed for scale and rotation invariance.
The document discusses image segmentation techniques. It begins by defining segmentation as partitioning an image into distinct regions that correlate with objects or features of interest. The goal of segmentation is to find meaningful groups of pixels. Several segmentation techniques are described, including region growing/shrinking, clustering methods, and boundary detection. Region growing uses homogeneity tests to merge neighboring regions, while clustering divides space based on similarity within groups. Boundary detection finds boundaries between objects. The document provides examples and details of applying these segmentation methods.
Review on Image Enhancement in Spatial Domainidescitation
With the proliferation of electronic imaging devices in mobiles, computer vision, the medical field, and the space field, image enhancement has become a quite interesting and important area of research. These imaging devices are viewed under a diverse range of viewing conditions and suffer a huge loss in contrast under bright outdoor viewing conditions; thus viewing-condition parameters such as surround effects, correlated color temperature, and ambient lighting have become of significant importance. The principal objective of image enhancement is therefore to adjust the quality of an image for better human visual perception. The appropriate choice of enhancement technique is greatly influenced by the imaging modality, the task at hand, and the viewing conditions. Image enhancement techniques are broadly classified into two categories: spatial domain image enhancement and frequency domain image enhancement. This survey report gives an overview of the different methodologies that have been used for enhancement in the spatial domain category. It is noted that more research remains to be done in this field.
The development of multimedia system technology has made Content Based Image Retrieval (CBIR) one of the outstanding areas for retrieving images from a large collection or database. The feature vectors of the query image are compared with the feature vectors of the database images to find matching images. It is widely observed that no single algorithm is effective for extracting features from all different kinds of natural images. Thus an intensive analysis of certain color, texture, and shape extraction techniques is carried out to identify an efficient CBIR technique that suits a particular type of images. Image extraction includes feature description and feature extraction. In this paper, we propose a feature extraction technique based on the Color Layout Descriptor (CLD), the Gray Level Co-Occurrence Matrix (GLCM), and Marker-Controlled Watershed Segmentation, which retrieves the matching image based on the similarity of color, texture, and shape within the database. For performance analysis, the image retrieval timing results of the proposed technique are calculated and compared with each of the individual features.
Content Based Image Retrieval using Color Boosted Salient Points and Shape fe...CSCJournals
Salient points are locations in an image where there is a significant variation with respect to a chosen image feature. Since the set of salient points in an image capture important local characteristics of that image, they can form the basis of a good image representation for content-based image retrieval (CBIR). Salient features are generally determined from the local differential structure of images. They focus on the shape saliency of the local neighborhood. Most of these detectors are luminance based which have the disadvantage that the distinctiveness of the local color information is completely ignored in determining salient image features. To fully exploit the possibilities of salient point detection in color images, color distinctiveness should be taken into account in addition to shape distinctiveness. This paper presents a method for salient points determination based on color saliency. The color and texture information around these points of interest serve as the local descriptors of the image. In addition, the shape information is captured in terms of edge images computed using Gradient Vector Flow fields. Invariant moments are then used to record the shape features. The combination of the local color, texture and the global shape features provides a robust feature set for image retrieval. The experimental results demonstrate the efficacy of the method.
Multispectral Satellite Color Image Segmentation Using Fuzzy Based Innovative...Dibya Jyoti Bora
Multispectral satellite color images need special treatment for object-based classification such as segmentation. Traditional algorithms are not efficient enough for segmenting such high-resolution images, as they often suffer from a serious problem: over-segmentation. So, an innovative approach for segmentation of multispectral color images is proposed in this paper to tackle this problem. The proposed approach consists of two phases. In the first phase, pre-processing of the selected bands is conducted for noise removal and contrast enhancement of the input multispectral satellite color image in the HSV color space. In the second phase, fuzzy segmentation of the enhanced image obtained in the first phase is carried out by the FCM algorithm through optimal parameter passing. A final shift from the HSV to the RGB color space presents the segmentation result, separating the different regions of interest with proper and distinguishable color labeling. The results found are quite promising and comparatively better than other state-of-the-art algorithms.
OBJECT SEGMENTATION USING MULTISCALE MORPHOLOGICAL OPERATIONSijcseit
Object segmentation plays an important role in human visual perception, medical image processing, and content based image retrieval. It provides information for recognition and interpretation. This paper uses mathematical morphology for image segmentation. Object segmentation is difficult because one usually does not know a priori what type of object exists in an image, how many different shapes there are, and what regions the image has. To carry out discrimination and segmentation, several innovative segmentation methods based on morphology are proposed. The present study proposes a segmentation method based on multiscale morphological reconstructions. Various sizes of structuring elements have been used to segment simple and complex shapes. The method enhances local boundaries, which may improve segmentation accuracy. It is tested on various datasets, and the results show that it can be used for both interactive and automatic segmentation.
User Interactive Color Transformation between ImagesIJMER
Abstract: In this paper we present a process called color transfer, which can borrow one image's color characteristics from another. Most current colorization algorithms either require significant user effort or have large computational time. Here the focus is on an orthogonal color space, i.e. the lαβ color space, which has no correlation between the axes. We have implemented two global color transfer algorithms in lαβ color space using simple color statistics such as the mean, standard deviation, and covariance between the pixels of an image. Our approach is an extension of Reinhard's. Our local color transfer algorithm uses simple color statistical analysis to recolor the target image according to a selected color range in the source image. A color influence mask is prepared for the target image: a mask that specifies which parts of the target image will be affected according to the selected color range. The target image is then recolored in lαβ color space according to the prepared color influence map. In the lαβ color space, luminance and chrominance information is separated, which makes image recoloring optional. The basic color transformation uses stored color statistics of the source and target images. All the algorithms are implemented in the JAVA object-oriented language. The main advantage of the proposed method over existing ones is that it allows the user to recolor a part of the image in a simple and intuitive way, preserving other colors intact and achieving a natural look.
Index Terms: color transfer, local color statistics, color
characteristics, orthogonal color space, color influence
map.
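The global statistics transfer this abstract attributes to Reinhard's approach matches each channel's mean and standard deviation. A minimal per-channel sketch is shown below (in Python rather than the paper's JAVA); the RGB-to-lαβ conversion is omitted, and the arrays are assumed to be already in a decorrelated space:

```python
import numpy as np

def global_color_transfer(source, target):
    """Reinhard-style global transfer: shift and scale each target channel
    so its mean and standard deviation match the source channel.
    Both arrays are (H, W, 3), assumed already converted to a decorrelated
    space such as lαβ (the conversion itself is omitted here)."""
    out = np.empty_like(target, dtype=float)
    for c in range(3):
        s, t = source[..., c], target[..., c]
        out[..., c] = (t - t.mean()) * (s.std() / t.std()) + s.mean()
    return out

# stand-in "source" and "target" channel data with different statistics
rng = np.random.default_rng(1)
src = rng.normal(5.0, 2.0, size=(16, 16, 3))
tgt = rng.normal(0.0, 1.0, size=(16, 16, 3))
res = global_color_transfer(src, tgt)
```

The local variant described above would additionally weight the correction per pixel by the color influence mask instead of applying it globally.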
Content based image retrieval based on shape with texture featuresAlexander Decker
This document describes a content-based image retrieval system that extracts shape and texture features from images. It uses the HSV color space and wavelet transform for feature extraction. Color features are extracted by quantizing the H, S, and V components of HSV into unequal intervals based on human color perception. Texture features are extracted using wavelet transforms. The color and texture features are then combined to form a feature vector for each image. During retrieval, the similarity between a query image and images in the database is measured using the Euclidean distance between their feature vectors. The results show that retrieving images using HSV color features provides more accurate results and faster retrieval times compared to using RGB color features.
SEMANTIC IMAGE RETRIEVAL USING MULTIPLE FEATUREScscpconf
In Content Based Image Retrieval (CBIR), problems such as recognizing similar images, the need for databases, the semantic gap, and retrieving the desired images from huge collections are the keys to improve. A CBIR system analyzes the image content for indexing, management, extraction, and retrieval via low-level features such as color, texture, and shape. To achieve higher semantic performance, recent systems seek to combine the low-level features of images with high-level features that contain perceptual information for human beings. Performance improvements in indexing and retrieval play an important role in providing advanced CBIR services. To overcome the above problems, a new query-by-image technique using a combination of multiple features is proposed. The proposed technique efficiently sifts through the dataset of images to retrieve semantically similar images.
This document describes a content-based image retrieval system that uses 2-D discrete wavelet transform with texture features. It proposes using DWT to reduce image dimensions before extracting texture features from images using gray level co-occurrence matrix. Texture features and Euclidean distance are then used to retrieve similar images from a database. The system is tested on a dataset of 1000 images from 10 classes and achieves an average retrieval accuracy of 89.8%.
Content Based Image Retrieval Using 2-D Discrete Wavelet TransformIOSR Journals
This document proposes a content-based image retrieval system using 2D discrete wavelet transform and texture features. The system decomposes images using 2D DWT, extracts texture features from low frequency coefficients using GLCM, and retrieves similar images by calculating Euclidean distances between feature vectors. Experimental results on Wang's database show the proposed approach achieves 89.8% average retrieval accuracy.
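The GLCM texture features used in entries like this one can be sketched as follows; the single-offset matrix, 8 gray levels, and the two Haralick-style features (contrast and energy) are a minimal illustration, not the paper's full feature set:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray level co-occurrence matrix for one pixel offset, normalized
    so the entries sum to 1."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Two classic Haralick-style texture features from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()   # large for rough texture
    energy = (p ** 2).sum()               # large for uniform texture
    return contrast, energy

# a 4x4 image of four flat 2x2 blocks (gray levels 0..3)
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img)                      # horizontal offset (dx=1, dy=0)
contrast, energy = glcm_features(p)
```

A retrieval system would compute such features for every database image (often on DWT low-frequency coefficients, as described above) and rank them by Euclidean distance to the query's features.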
Improvement of Objective Image Quality Evaluation Applying Colour Differences...CSCJournals
In this work, perceived colour distance is employed in a simple and functional way in order to improve full-reference image quality assessment. The difference between colours in the CIELAB colour space is employed as the perceived colour distance. This quantity is used to process the images that are to be fed to full-reference image quality algorithms. This image processing stage consists of identifying the image regions or pixels that are expected to be perceived identically by a human observer in both the reference image and the image having its quality evaluated. In order to verify the validity of the proposal, objective scores are compared with subjective ones for publicly available image databases. Despite being a very simple strategy, the proposed approach was effective in improving the agreement between subjective scores and the SSIM (Structural Similarity Index Metric) objective score.
Image Information Retrieval From Incomplete Queries Using Color and Shape Fea...sipij
Content based image retrieval (CBIR) is the task of searching digital images from a large database based on the extraction of features such as the color, texture, and shape of the image. Most research in CBIR has been carried out with complete queries that were present in the database. This paper investigates the utility of CBIR techniques for the retrieval of incomplete and distorted queries. Studies were made on two categories of query: complete and incomplete. The query image is considered a distorted or incomplete image if it has some missing information, some undesirable objects, blurring, or noise due to disturbance at the time of image acquisition. Color (hue, saturation, and value (HSV) color space model) and shape (moment invariants and Fourier descriptor) features are used to represent the image. The algorithm was tested on a database consisting of 1875 images. The results show that the retrieval accuracy for incomplete queries is greatly increased by fusing color and shape features, giving a precision of 79.87%. MATLAB® 7.01 and its image processing toolbox have been used to implement the algorithm.
Wavelet-Based Color Histogram on Content-Based Image RetrievalTELKOMNIKA JOURNAL
The growth of image databases in many domains, including fashion, biometrics, graphic design, architecture, etc., has increased rapidly. A Content Based Image Retrieval (CBIR) system is a technique used for finding relevant images in such huge and unannotated image databases based on low-level features of the query images. In this study, an attempt to employ a 2nd-level Wavelet Based Color Histogram (WBCH) in a CBIR system is proposed. The image database used in this study is taken from Wang's image database containing 1000 color images. The experimental results show that 2nd-level WBCH gives better precision (0.777) than the other methods, including 1st-level WBCH, Color Histogram, Color Co-occurrence Matrix, and Wavelet texture features. It can be concluded that 2nd-level WBCH can be applied to a CBIR system.
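A wavelet-based color histogram of this kind can be sketched with a numpy-only Haar approximation (in practice a wavelet library such as PyWavelets would be used); the two-level depth follows the abstract's 2nd-level setup, while the 8-bin histograms and toy image are assumptions:

```python
import numpy as np

def haar_approx(channel):
    """One Haar DWT level: the low-low (approximation) sub-band is the
    average of each 2x2 block, halving both dimensions."""
    c = channel[: channel.shape[0] // 2 * 2, : channel.shape[1] // 2 * 2]
    return (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2]) / 4.0

def wavelet_color_histogram(img, levels=2, bins=8):
    """Histogram each color channel's level-N approximation sub-band and
    concatenate the per-channel histograms into one feature vector."""
    feats = []
    for ch in range(img.shape[2]):
        band = img[..., ch].astype(float)
        for _ in range(levels):
            band = haar_approx(band)
        hist, _ = np.histogram(band, bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (32, 32, 3))   # stand-in color image
vec = wavelet_color_histogram(img)        # 3 channels x 8 bins = 24-dim
```

Histogramming the approximation sub-band rather than the raw pixels is what makes the feature more noise-tolerant, since the high-frequency detail sub-bands are discarded.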
Query Image Searching With Integrated Textual and Visual Relevance Feedback f...IJERA Editor
Many researchers have studied relevance feedback in the content based image retrieval (CBIR) literature, but no CBIR search engine supports it because of scalability, effectiveness, and efficiency issues. In this work, we have implemented integrated relevance feedback for the retrieval of web images. We concentrate on the integration of both textual feature (TF) and visual feature (VF) based relevance feedback (RF); we also tested them individually. The TF-based RF employs an effective search result clustering (SRC) algorithm to extract salient phrases. A new user interface (UI) is then proposed to support RF. Experimental results show that the proposed algorithm is scalable, effective, and accurate.
Implementation of High Dimension Colour Transform in Domain of Image ProcessingIRJET Journal
This document discusses implementing a high-dimensional color transform method for salient region detection in images. It aims to extract salient regions by designing a saliency map using global and local image features. The creation of the saliency map involves mapping colors from RGB space to a high-dimensional color space to find an optimal linear combination of color coefficients. This allows composing an accurate saliency map. The performance is further improved by using relative location and color contrast between superpixels as features and resolving the saliency estimation from an initial trimap using a learning-based algorithm. The method is analyzed on a dataset of training images.
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...Zahra Mansoori
This document presents a new approach for content-based image retrieval that combines color, texture, and a binary tree structure to describe images and their features. Color histograms in HSV color space and wavelet texture features are extracted as low-level features. A binary tree partitions each image into regions based on color and represents higher-level spatial relationships. The performance of the proposed system is evaluated on a subset of the COREL image database and compared to the SIMPLIcity image retrieval system. Experimental results show the proposed system has better retrieval performance than SIMPLIcity in some categories and comparable performance in others.
SEGMENTATION OF LUNG GLANDULAR CELLS USING MULTIPLE COLOR SPACESIJCSEA Journal
Early detection of lung cancer is a challenging problem the world faces today. Prior to classifying glandular cells as malignant or benign, a reliable segmentation technique is required. In this paper we present a novel lung glandular cell segmentation technique. The technique uses a combination of multiple color spaces and various clustering algorithms to automatically find the best possible segmentation result. The unsupervised clustering methods K-means and Fuzzy C-means were used on multiple color spaces such as HSV, LAB, LUV, and xyY. Experimental segmentation results using the various color spaces are provided to show the performance of the proposed system.
Information search using text and image queryeSAT Journals
Abstract: An image retrieval and re-ranking system utilizing a visual re-ranking framework is proposed in this paper. The system retrieves a dataset from the World Wide Web based on a textual query submitted by the user. These results are kept as the dataset for information retrieval. This dataset is then re-ranked using a visual query (multiple images selected by the user from the dataset) which conveys the user's intention semantically. Visual descriptors (MPEG-7), which describe an image with respect to low-level features like color, texture, etc., are used for calculating distances. These distances are a measure of the similarity between the query images and the members of the dataset. Our proposed system has been assessed on different types of queries such as apples, Console, and Paris. It shows significant improvement over the initial text-based search results. This system is well suited for online shopping applications. Index Terms: MPEG-7, Color Layout Descriptor (CLD), Edge Histogram Descriptor (EHD), image retrieval and re-ranking system
This document summarizes a research paper that proposes an image retrieval and re-ranking system using both text and visual queries. The system first retrieves images from the web based on a textual query submitted by the user. The user can then select multiple example images from the results to better convey their intent. The system calculates visual similarities between the example images and results based on MPEG-7 descriptors like color and texture. Distances are combined to re-rank the initial text-based search results, aiming to improve relevance by incorporating the visual query. The system is evaluated on queries like "apples", "Paris" and "Console" and shows better results than text-only searches according to the document.
This document presents research on content-based image retrieval using color and texture features. It proposes using both quadratic distance based on color histograms to measure color similarity, and pyramid structure wavelet transforms and gray level co-occurrence matrix (GLCM) to measure texture. For color features, quadratic distance is calculated between color histograms to retrieve similar images based on color. For texture, pyramid structure wavelet transforms are used to decompose images into sub-bands and calculate energy levels, while GLCM extracts texture statistics. The methods are evaluated on a dataset of 10000 images and results show the integrated approach of color and texture features provides more accurate and faster retrieval compared to individual features.
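The quadratic (cross-bin) histogram distance mentioned above weights bin differences by a bin-similarity matrix, so color mass moved to a perceptually similar bin costs less than mass moved to a dissimilar one. The 3-bin similarity matrix below is a toy assumption:

```python
import numpy as np

def quadratic_distance(h1, h2, similarity):
    """Quadratic-form distance d^2 = (h1 - h2)^T A (h1 - h2), where
    A[i, j] encodes the perceptual similarity between color bins i and j."""
    d = np.asarray(h1, float) - np.asarray(h2, float)
    return float(d @ similarity @ d)

# toy 3-bin example: bins 0 and 1 are perceptually close (A[0, 1] high)
A = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
h_query = np.array([1.0, 0.0, 0.0])
h_near  = np.array([0.0, 1.0, 0.0])   # mass moved to a similar bin
h_far   = np.array([0.0, 0.0, 1.0])   # mass moved to a dissimilar bin

d_near = quadratic_distance(h_query, h_near, A)   # 0.2
d_far  = quadratic_distance(h_query, h_far, A)    # 2.0
```

With a plain Euclidean distance both candidates would score identically; the quadratic form is what lets color-histogram retrieval prefer the perceptually closer match.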
A Novel Color Image Fusion for Multi Sensor Night Vision ImagesEditor IJCATR
In this paper presents a simple and fast color fusion approach for night vision images. Image fusion involves merging of two
or more images in such a way, to get the most advantageous characteristics of each image. Here the Visible image is fused with the
InfraRed (IR) image, so the desired result will be single, highly informative image providing full information. This paper focuses on
color constancy and color contrast problem.
Firstly the contrast of the infrared and visible image is enhanced using Local Histogram Equation. Then the two enhanced
images are fused in three compounds of a LAB image using aDWT image fusion. This paper adopts an approach which transfer color
from the reference image to the fused image using Color Transfer Technology. To enhance the contrast between the target and the
background, a scaling factor is introduced in the transferring equation in the b channel. Finally our approach gives the Multiband
Fused image with the natural day-time color appearance and the hot targets are popped out with intense colors while the background
details present with the natural color appearance.
Similar to A New Method for Indoor-outdoor Image Classification Using Color Correlated Temperature (20)
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...Diana Rendina
Librarians are leading the way in creating future-ready citizens – now we need to update our spaces to match. In this session, attendees will get inspiration for transforming their library spaces. You’ll learn how to survey students and patrons, create a focus group, and use design thinking to brainstorm ideas for your space. We’ll discuss budget friendly ways to change your space as well as how to find funding. No matter where you’re at, you’ll find ideas for reimagining your space in this session.
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
A. Nadian Ghomsheh & A. Talebpour
International Journal of Image Processing (IJIP), Volume (6) : Issue (3) : 2012 167
A New Method for Indoor-outdoor Image Classification Using
Color Correlated Temperature
A. Nadian Ghomsheh a_nadian@sbu.ac.ir
Electrical and Computer Engineering Department
Shahid Beheshti University
Tehran, 1983963113, Iran
A. Talebpour talebpour@sbu.ac.ir
Electrical and Computer Engineering Department
Shahid Beheshti University
Tehran, 1983963113, Iran
Abstract
In this paper, a new method for indoor-outdoor image classification is presented, in which the
concept of Color Correlated Temperature (CCT) is used to extract distinguishing features between
the two classes. Using the Hue color component, each image is segmented into different color
channels, and the color correlated temperature is calculated for each channel. These values are
then combined to build the image feature vector. Besides color temperature values, the feature
vector also carries information about the color formation of the image. In the classification
phase, a KNN classifier is used to label images as indoor or outdoor. Two different datasets are
used for testing: a collection of images gathered from the internet, and a second dataset built
by extracting frames from different video sequences captured with a single video capturing
device. A high classification rate compared to other state-of-the-art methods shows the ability
of the proposed method for indoor-outdoor image classification.
Keywords: Indoor-outdoor Image Classification, Color Segmentation, Color Correlated
Temperature
1. INTRODUCTION
The scene classification problem is a major challenge in the field of computer vision [1]. With
rapid technological advances in digital photography and the expanding online storage space
available to users, the demand for better organization and retrieval of image databases is
increasing. Precise indoor-outdoor image classification improves scene classification and allows
processing systems to improve performance by applying different methods based on the scene
class [2]. To classify images as indoor or outdoor, it is common to divide an image into
sub-blocks and process each block separately, benefiting from computational ease. Within each
image sub-block, different low-level features such as color, texture, and edge information are
extracted and used for classification.
Color provides strong information for characterizing landscape scenes. Different color spaces
have been tested for scene classification. In a pioneering work [3], an image was divided into
4×4 blocks, and the Ohta color space [4] was used to construct the histograms required for
extracting color information. Using only color information, they achieved a classification
accuracy of 74.2%. The LST color space used by [5] achieved a classification accuracy of 67.6%.
[6] used first-order statistical features from color histograms, computed in RGB color space,
and achieved a classification rate with 65.7% recall and 93.8% precision. [7] compared
different features extracted from color histograms, including the opponent color chromaticity
histogram, color correlogram [8], MPEG-7 color descriptors, colored pattern appearance
histogram [9], and layered color indexing [10]. The results of this comparison showed that no
winner could be selected for all types of images, but that a significant amount of redundancy
in the histograms can be removed. In [11], mixed channels of the RGB, HSV, and Luv color spaces
were incorporated to calculate color histograms. First- and second-order statistical moments of
each histogram served as significant features for indoor-outdoor classification. To improve
classification accuracy, camera metadata related to capture conditions was used in [12]. In
[13], each image was divided into 24×32 sub-blocks, and a 2D color histogram in Luv color
space, along with a composite feature correlating region color with spatial location derived
from the H component of HSV color space, were used for indoor-outdoor image classification.
[14, 15] proposed Color Oriented Histograms (COH) obtained from images divided into 5
blocks. They also used Edge Oriented Histograms (EOH) and combined the two to obtain CEOH as
the feature for classifying images into indoor and outdoor classes.
Texture and edge features have also been used for indoor-outdoor image classification. [5]
extracted texture features from a two-level wavelet decomposition on the L-channel of the LST
color space [16]. In [17], each image was segmented via fuzzy C-means clustering, and the mean
and variance extracted from each segment represented texture features. [18] used variance,
coefficient of variation, energy, and entropy to extract texture features for indoor-outdoor
classification. Texture orientation was considered in [19]. From the analysis of indoor and
outdoor images, and of images of synthetic and organic objects, [20] observed that organic
objects have a larger number of small erratic edges due to their fractal nature. Synthetic
objects, in comparison, have edges that are straighter and less erratic.
Although color has been shown to be a strong feature for indoor-outdoor image classification,
dividing images into blocks regardless of the information present in each individual image
degrades the classification results, as the mixture of colors in each block is unpredictable.
Another aspect that has not gained much attention is the illumination of the scene in which the
image was captured. This is an important aspect, since indoor and outdoor illuminations are
quite different, and incorporating such information can effectively enhance the classification
results.
To overcome such limitations when color information is used for indoor-outdoor image
classification, a new method based on the Color Correlated Temperature (CCT) feature is
proposed in this paper. As will be shown, the apparent color of an object changes when the
scene illumination changes. This property can be very effective for the aims of this paper,
since the illuminants of indoor scenes differ substantially from those of outdoor scenes. CCT
is a way to describe the apparent color of an image under different lighting conditions. In
this process, each image is divided into different color segments and the CCT is found for each
segment. These values form the image feature vector, in which, besides the CCT information, the
color formation of the image is inherently present. In the classification phase, a KNN
classifier is used [21]. The focus of this paper is on the ability of color information for
indoor-outdoor image classification; edge and texture features could also be added to enhance
the classification rate, but this is beyond the scope of this paper.
The remainder of the paper is organized as follows: Section 2 reviews the theory of color image
formation in order to provide insight into why the concept of CCT can be effective for the aims
of this research. Section 3 explains the proposed method for calculating the temperature vector
of an arbitrary image. Section 4 presents the experimental results, and Section 5 concludes the
paper.
2. REVIEW OF COLOR IMAGE THEORY
A color image is the result of the light illuminating the scene, the way objects reflect the
light hitting their surfaces, and the characteristics of the image-capturing device. In the
following subsections, the Dichromatic Reflection Model (DRM) is first described, and various
light sources are then briefly examined to show the differences among them. This explanation
shows why the concept of CCT can be utilized for indoor-outdoor image classification.
2.1 Dichromatic Reflection Model
A scene captured by a color camera can be modeled by spectral integration, which is often
described by the DRM. Light striking a surface of non-homogeneous material passes through the
air and contacts the surface of the material. Due to the difference in the refractive indices
of the two media, some of the light is reflected from the surface of the material; this is
called the surface reflectance (Ls). The light that penetrates into the body of the material is
absorbed by the colorant, transmitted through the material, or re-emitted from the entering
surface. This
component is called the body reflectance (Lb) [22]. The total light L reflected from an object
is the sum of the surface and body reflectance, and can be formulated as:
L(λ, Ω) = L_s(λ, Ω) + L_b(λ, Ω)    (1)
where λ is the wavelength of the incident light and Ω is the photometric angle, which includes
the viewing angle e, the phase angle g, and the illumination direction angle i (Fig. 1). Ls and
Lb both depend on the relative spectral power distribution (SPD), defined by Cs and Cb, and on
geometric scaling factors ms and mb, and can be formulated as:
L_s(λ, Ω) = m_s(Ω) C_s(λ)    (2)
L_b(λ, Ω) = m_b(Ω) C_b(λ)    (3)
Cs and Cb are in turn both products of the incident light spectrum E and the material's
spectral surface reflectance ρs and body reflectance ρb, defined by:
C_s(λ) = E(λ) ρ_s(λ)    (4)
C_b(λ) = E(λ) ρ_b(λ)    (5)
FIGURE 1: DRM model, showing surface reflectance and body reflectance from surface of an object.
By inspecting equations (2) to (5), it can be observed that the light entering the camera
depends on the power spectrum of the light source; therefore, different apparent colors are
perceived for the same object under different illuminations. It follows that different groups
of illuminants give an object different perceived colors. The smaller the within-class variance
among a group of light sources, and the larger the between-class variance among different
groups of light sources, the better the distinguishing feature for indoor-outdoor image
classification when scene illumination is considered. In the next subsection, different light
sources are reviewed to show how different classes of light sources differ from each other.
2.2 Light Sources
The color of a light source is defined by its spectral composition, which may be described by
the CCT. A spectrum with a low CCT has its maximum radiant power at long wavelengths, giving
materials a reddish appearance, e.g. sunlight during sunset. A light source with a high CCT has
its maximum radiant power at short wavelengths and gives materials a bluish appearance, e.g.
special fluorescent lamps [23]. Fig. 2a compares the SPDs of three fluorescent lamps with that
of a halogen bulb, and Fig. 2b shows the SPD of daylight at various hours of the day. The
comparison shows how the spectra of the fluorescent lamps follow the same pattern, and how they
differ from that of the halogen lamp.
Fig. 2c shows three diverse light spectra, a blackbody radiator, a fluorescent lamp, and
daylight, all having a CCT of 6200 K. The daylight spectrum is close to the blackbody spectrum,
whereas the fluorescent lamp spectrum deviates from it significantly. The 172 light sources
measured in [24] showed that illuminant chromaticity
falls on a long, thin band in the chromaticity plane, very close to the Planckian locus of
blackbody radiators. Reviewing the SPDs of various light sources shows that light sources of
the same group follow the same pattern in their relative radiant power when compared to other
groups. This distinguishing feature makes it possible to classify a scene based on the scene
illuminant, which in this paper is exploited for indoor-outdoor image classification.
[Plot (a): relative radiant power (0-2.5) vs. wavelength (300-800 nm) for a Sylvania halogen
bulb and Macbeth, Philips, and Sylvania fluorescent lamps; panels (b) and (c) as described in
the caption.]
FIGURE 2: Different light sources with their SPDs. (a) SPDs for three different fluorescent lamps
compared to SPD of halogen bulb [23]. (b) Daylight SPDs at different hours of day, normalized at
560nm. (c) Spectra of different light sources all with a CCT of 6200K and normalized at λ = 560 nm [24].
3. PROPOSED METHOD
The review of color image formation showed that the color of an image captured by the camera
results from the light reflected from object surfaces as the sum of body and surface
reflectance (Eq. 1). Surface reflectance has the same properties as the incident light and is a
significant feature for detecting the light source illuminating the scene. Body reflectance, on
the other hand, most closely resembles the color of the material, taking into account the
spectrum of the incident light; in this paper it is used as a metric to classify images as
indoor or outdoor. The different steps of the proposed algorithm are shown in Fig. 3.
[Block diagram: input image → color segmentation into channels 1…n → per-channel averaging and
CCT calculation → test feature vector (TFV) → KNN classifier, with indoor feature vectors FVi
(i = 1, …, n) and outdoor feature vectors FVi (i = 1, …, m) as training vectors → indoor/outdoor
decision.]
FIGURE 3: Block Diagram of the proposed method for indoor-outdoor classification
In the first step, color segmentation is performed and the image is partitioned into N color
channels (each color segment is treated as an independent color channel). Next, the CCT is
calculated for each channel, and the resulting feature vector is used for indoor-outdoor image
classification. Finally, a KNN classifier partitions the images into the two classes, indoor
and outdoor. Each step of the algorithm is explained in detail below.
3.1 Color Segmentation
Many color spaces have been used for color segmentation. HSV is a nonlinear color space, a
cylindrical-coordinate representation of the points of the RGB color model. This nonlinearity
makes it possible to segment colors in one color component independently of the other
components, which is an advantage over linear color spaces [25]. This color space is used here
to segment images into the desired number of segments, where each segment is called a "color
channel". HSV is obtained from RGB space by:
h = 0°,                                    if max = min
h = (60° × (g − b)/(max − min)) mod 360°,  if max = r
h = 60° × (b − r)/(max − min) + 120°,      if max = g
h = 60° × (r − g)/(max − min) + 240°,      if max = b      (6)

s = 0,              if max = 0
s = 1 − min/max,    otherwise      (7)

v = max      (8)
where max and min are the respective maximum and minimum of r, g, and b, the RGB color
components of each pixel in the image.
The following steps show how each image is converted into its corresponding color channels
(Ch). For an input image with color components Red, Green, and Blue:
1) Convert the image from RGB color space to HSV.
2) Quantize the H component into N levels, n = 1, 2, …, N.
3) Find the mask that determines each color channel:
   mask_n(x, y) = 1 if H(x, y) = n, else mask_n(x, y) = 0
4) Each channel is then obtained by:
   Ch_n = [(mask_n . Red), (mask_n . Green), (mask_n . Blue)]
where "." represents point-by-point multiplication of matrices. After dividing each image into
its respective color channels, the CCT of each channel has to be calculated.
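As an illustration of steps 1-3 above, the hue quantization can be sketched in Python. This is a minimal sketch under stated assumptions: the function name, the per-pixel list representation, and the choice of N = 4 channels are illustrative, not taken from the paper.

```python
import colorsys

def segment_color_channels(pixels, n_channels=4):
    """Quantize the HSV Hue of each pixel to assign it to a color channel.

    pixels: list of (r, g, b) tuples with components in [0, 1].
    Returns a dict mapping channel index n -> list of (r, g, b) pixels.
    """
    channels = {n: [] for n in range(n_channels)}
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r, g, b)   # h lies in [0, 1)
        n = min(int(h * n_channels), n_channels - 1)  # quantized hue bin
        channels[n].append((r, g, b))
    return channels

# A reddish pixel (hue near 0) and a greenish pixel (hue near 1/3)
# fall into different hue channels.
chs = segment_color_channels([(0.9, 0.1, 0.1), (0.1, 0.9, 0.1)])
```

On a full image, the per-channel pixel lists would then be averaged and converted to a chromaticity value, as described in the next subsection.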
3.2 Calculating CCT
A few algorithms have been introduced to estimate the temperature of a digital image for scene
classification or for finding the scene illumination [26, 27]. The different purposes of these
previous algorithms make them impractical for the aim of this paper.
The Planckian, or blackbody, locus can be calculated by colorimetrically integrating the Planck
function at many different temperatures, each temperature specifying a unique pair of
chromaticity coordinates on the locus [28]. Conversely, given the Planckian locus, the CCT can
be calculated for every chromaticity coordinate. To find the CCT of each color channel, a
chromaticity value representing the channel must first be found. The proposed algorithm for
calculating the CCT of each color channel has the following steps:
1) Using the V component of HSV color space, discard the dark pixels whose value is smaller
than 10% of the maximum range of V for each channel.
2) Average the remaining pixels in RGB color space, discarding surface reflectance through an
iterative process.
3) Find the CCT of the color channel using the average chromaticity value.
From the DRM it is straightforward that dark pixels in the image hold no information about the
scene illuminant, since they are either not illuminated or the light reflected from their
surfaces does not enter the camera. In step 2, pixels whose values mainly carry
surface-reflectance information are discarded through an iterative process: instead of simply
discarding bright points, the algorithm discards pixels with respect to the luminance of the
other pixels. Using the resulting average value, the CCT of each color channel is then
calculated.
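The iterative averaging of step 2 can be sketched as follows. Since the exact masking band of the original flow chart is hard to recover from the scan, the trimming rule used here (drop pixels farther than T2·σ from the current mean) and the default threshold values are illustrative assumptions, not the paper's trained values.

```python
import statistics

def iterative_channel_average(values, t1=0.5, t2=1.5, max_iter=50):
    """Iteratively average one color channel while trimming outliers,
    so that bright surface-reflectance pixels stop biasing the mean.

    values: scalar pixel intensities of one color channel (dark pixels
    assumed already discarded). Stops when the mean moves less than t1.
    """
    mu = statistics.mean(values)
    for _ in range(max_iter):
        sigma = statistics.pstdev(values)
        # Assumed trimming rule: keep pixels within t2 * sigma of the mean.
        kept = [v for v in values if abs(v - mu) <= t2 * sigma] or values
        new_mu = statistics.mean(kept)
        if abs(new_mu - mu) < t1:   # convergence test |mu_{i+1} - mu_i| < T1
            return new_mu
        mu, values = new_mu, kept
    return mu

# A single specular outlier (255) is trimmed away from a dim channel.
avg = iterative_channel_average([100] * 9 + [255])
```

As the paper notes, convergence is guaranteed because each pass can only shrink the kept set, and a singleton set leaves the mean fixed.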
3.2.1 Calculating Average Chromaticity of Lb
The aim of averaging each channel is to discard pixels that have been exposed to direct light,
or for which the reflection from the object's surface is aligned with the camera lens. The
values of such pixels may vary with the surface reflectance and lighting conditions. The flow
chart of the proposed algorithm is shown in Fig. 4.
[Flow chart: for color channel Chn, calculate the average µi+1 and the variance σ of the
remaining pixels; if |µi+1 − µi| < T1, stop; otherwise compute A = |Chn(x, y) − µi+1|, set
Maskn(x, y) = 1 when A lies within the band defined by T2 × σ and T3 × σ and 0 otherwise,
update Chn(x, y) = Chn(x, y) · Maskn, and iterate.]
FIGURE 4: Different steps of color segmentation process
In this algorithm, Chn(x, y) is the pixel value at position (x, y) in Chn, Maskn is the mask
that discards the unwanted information, and "." is the point-by-point product. σ is the
variance and µ is the mean value of the remaining pixels at iteration i. T1, T2, and T3 are
thresholds that have to be determined. Convergence of this algorithm is guaranteed, since in
the worst case only one pixel of the image is left unchanged and the µ calculated for that
pixel stays constant. The thresholds T1, T2, and T3 could therefore be chosen arbitrarily;
however, to find a good set of thresholds, they are trained on 40 training images. The goal is
to find the triple T = (T1, T2, T3) that yields the best average point for each color channel,
defined as "the point where the average values occur most frequently". Let Chi,n,C denote all
the training color channels:
Ch = { Ch_{i,n,C} : i = 1, …, I;  n = 1, …, N;  C = 1, 2, 3 }    (9)
where i (i = 1, 2, …, I) indexes the image samples, n (n = 1, 2, …, N) indexes the color
channels of each image, and C (C = 1, 2, 3) indexes the Red, Green, and Blue components. Also
let T1, T2, and T3 be the vectors:
T1 = [1.05, 1.1, …, 1.3]
T2 = [1.1, 1.2, …, 1.5]
T3 = [1, 2, …, 5]
which can take the form:

T(α) = { (T1^1, T2^1, T3^1), (T1^2, T2^1, T3^1), …, (T1^6, T2^6, T3^5) }    (10)
where α = 1, 2, …, 180. The set T(α) consists of all threshold combinations that have to be
checked in order to find the best choice of thresholds, T^opt. For each sample channel Ch, the
average value of the channel in the respective color component is calculated for each T(α) and
stored in the average vector Avg:
Avg(α) = [Avg(T(1)), Avg(T(2)), …, Avg(T(180))]    (11)
Next, the histogram of Avg is calculated with ten-bin resolution (Fig. 5). The most frequent
bin is taken as the acceptable average value of the channel, denoted Avgmax. The set T^opt
contains the thresholds that yield Avgmax for a given channel, and is defined as:

T^opt = argmax_{T(α)} Avg(T(α))    (12)
By calculating T^opt for all color channels, a set T^opt(t), t = 1, 2, …, 960, is obtained,
which can be written as:

T^opt(t) = [ T^{opt,red}, T^{opt,green}, T^{opt,blue} ],
           with t_red = 1:320, t_green = 321:640, t_blue = 641:960    (13)
[Histogram: frequency (0-150) of the average values (around 190-210), in ten bins.]
FIGURE 5: Finding different T that yield Avgmax
From this set it is possible to find the optimum thresholds for each color component; for
example, for the red channel, T^{opt,red} can be written as:

T^{opt,red} = argmax_{t_red} T^opt(t)    (14)
T^{opt,green} and T^{opt,blue} are calculated in the same way as (14), and from them all the
trained thresholds are obtained as:

T^opt = [ T1^{opt,red}  T1^{opt,green}  T1^{opt,blue}
          T2^{opt,red}  T2^{opt,green}  T2^{opt,blue}
          T3^{opt,red}  T3^{opt,green}  T3^{opt,blue} ]    (15)
After calculating the average chromaticity value µ, it is possible to find the CCT of each Chn.
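The threshold search of equations (10) to (15) is essentially a grid search over the 180 combinations followed by a histogram vote. A minimal sketch follows; the `average_fn` callback, the ten-bin histogram width, and the toy values used in the usage line are illustrative assumptions, not the paper's data.

```python
from itertools import product
from collections import Counter

def train_thresholds(channel, t1s, t2s, t3s, average_fn, n_bins=10):
    """For every (T1, T2, T3) combination, compute the channel average,
    histogram the averages into n_bins, and return a combination whose
    average falls in the most frequent bin (the Avg_max bin)."""
    combos = list(product(t1s, t2s, t3s))
    avgs = [average_fn(channel, *c) for c in combos]
    lo, hi = min(avgs), max(avgs)
    width = (hi - lo) / n_bins or 1.0          # guard against a flat range
    bins = [min(int((a - lo) / width), n_bins - 1) for a in avgs]
    best_bin = Counter(bins).most_common(1)[0][0]
    for combo, b in zip(combos, bins):
        if b == best_bin:                      # first combo hitting Avg_max
            return combo
```

In the paper this search is run per color component over 40 training images; here `average_fn` stands in for the iterative channel-averaging step.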
3.2.2 Calculating CCT for the Average Chromaticity
To calculate the CCT of a chromaticity value µ, it is transformed into the Luv color space
[25], where µ is represented by the coordinates (u, v). Fig. 6 shows the Luv chromaticity plane
with the Planckian locus at various color temperatures; the lines perpendicular to the locus
are iso-temperature lines.
FIGURE 6: Luv chromaticity plane with the Planckian locus.
By utilizing iso-temperature lines, the CCT can be found by interpolation from look-up tables
and charts. The best-known such method is Robertson's [29] (Fig. 7). In this method, the CCT of
a chromaticity value, Tc, is found by calculating:

1/Tc = 1/Ti + (θ1 / (θ1 + θ2)) × (1/Ti+1 − 1/Ti)    (16)
where θ1 and θ2 are the angles between the test point and the two adjacent isotherms. Ti and
Ti+1 are the color temperatures of the look-up isotherms, and i is chosen such that
Ti < Tc < Ti+1 (Fig. 7). If the isotherms are closely spaced, it can be assumed that
θ1/θ2 ≈ sin θ1 / sin θ2, leading to:

1/Tc = 1/Ti + (di / (di − di+1)) × (1/Ti+1 − 1/Ti)    (17)
where di is the signed distance of the test point from the i-th isotherm, given by:

di = ((vc − vi) − mi (uc − ui)) / (1 + mi²)^(1/2)    (18)
where (ui, vi) are the chromaticity coordinates of the i-th isotherm on the Planckian locus and
mi is the isotherm's slope. Since each isotherm is perpendicular to the locus, mi = −1/li,
where li is the slope of the locus at (ui, vi). Upon calculation of the CCT of all color
channels, the feature vector fv, containing the respective CCT of each color channel, is
obtained:

fv = [CCT1, CCT2, …, CCTN]    (19)

where N is the number of color channels in the image. This vector can now be used for
classification. In the classification phase a KNN classifier is used: K is a user-defined
constant, and each test feature vector is assigned the label that is most frequent among the K
training samples nearest to that test point.
FIGURE 7: Robertson method for finding CCT of a chromaticity value.
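Equations (17) and (18) translate almost directly into code. The three-row lookup table below is a toy stand-in for a real Robertson table: its temperatures, (u, v) coordinates, and isotherm slopes are placeholder values chosen only to exercise the interpolation, not real colorimetric data.

```python
import math

# Toy Robertson-style lookup table rows: (T in kelvin, u, v, isotherm slope m).
# Real tables list ~31 rows along the Planckian locus; these values are
# placeholders, not colorimetric data.
TABLE = [
    (8000.0, 0.18, 0.27, -1.5),
    (5000.0, 0.21, 0.32, -1.0),
    (3000.0, 0.26, 0.35, -0.5),
]

def signed_distance(u, v, row):
    """Eq. (18): signed distance d_i from (u, v) to the row's isotherm."""
    _, ui, vi, m = row
    return ((v - vi) - m * (u - ui)) / math.sqrt(1.0 + m * m)

def robertson_cct(u, v):
    """Eq. (17): interpolate 1/T between the two adjacent isotherms where
    the signed distance changes sign."""
    d_prev = signed_distance(u, v, TABLE[0])
    for i in range(1, len(TABLE)):
        d_next = signed_distance(u, v, TABLE[i])
        if d_prev * d_next <= 0:      # test point lies between these isotherms
            t_prev, t_next = TABLE[i - 1][0], TABLE[i][0]
            frac = d_prev / (d_prev - d_next)     # d_i / (d_i - d_{i+1})
            inv_t = 1.0 / t_prev + frac * (1.0 / t_next - 1.0 / t_prev)
            return 1.0 / inv_t
        d_prev = d_next
    raise ValueError("chromaticity outside the table range")
```

With a real Robertson table in place of `TABLE`, `robertson_cct` would be applied to the average (u, v) of each color channel to fill the feature vector fv.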
4. EXPERIMENTAL RESULTS
To assess the performance of the proposed method, tests are conducted on two sets of indoor and
outdoor images: a collection of 800 images gathered from the internet, named DB1, and a second
dataset built by extracting 800 frames from several video clips captured by a Sony HD handycam
(DB2).
Using DB2 it can be ensured that camera effects are the same for all images; hence the
classification ability of the proposed method is evaluated independently of camera calibration.
A total of 40 clips captured outdoors and 25 different clips captured indoors were used to
build DB2. The clips were captured at 29.97 frames per second, and the data were stored in RGB
color space with 8-bit resolution per color channel. Fig. 8 shows some sample indoor and
outdoor frames. Fig. 9 illustrates the difference between normal averaging and the averaging
process introduced in this paper, in which the effect of surface reflectance is eliminated.
FIGURE 8: Top and middle rows: 4 indoor and 4 outdoor frames. Bottom row: 4 consecutive outdoor
frames.
FIGURE 9: An example of the proposed algorithm for averaging color channels. (a) Green channels
of pictures taken outdoors (top row) and indoors (bottom row). (b) Normal averaging of the
pixels. (c) Averaging of the pixels by the proposed algorithm.
Fig. 9a shows ten green color channels from five outdoor and five indoor scenes. For each color
channel, the corresponding average chromaticity value in the uv plane is shown; outdoor images
are marked with circles and indoor images with black squares. As can be seen, when the effect
of surface reflectance is suppressed, the indoor and outdoor images are clearly separable. To
calculate the average points, T^opt was trained. The histogram of the average values obtained
for channel Ch1,1,1 (as an example) is shown in Fig. 10. Based on the histograms calculated for
all the different channels, the matrix T^opt was obtained as:
T^opt = [ 1.1  1.2  1.1
          1.4  1.4  1.3
          2    1    1   ]
FIGURE 10: Average values of the Ch1,1,1 component on the database
             N=4    N=8    N=12   N=16   N=18
K=15 indoor   73.5   90.5   88     85.5   83.2
     outdoor  51     69.2   80.5   89.5   82.4
K=20 indoor   78     86.5   92     87.5   88.1
     outdoor  63.5   74     80.5   89     82.2
K=30 indoor   70     85.5   90     87     87.1
     outdoor  52     72     80     87.5   77.2

TABLE 1: Results of classification on DB1 (%)
After computing fv for an image, the KNN classifier is used for classification. Table 1 shows
the image classification results for different N (number of color channels) and K (number of
neighbors in the KNN classifier). The table shows that N = 16 yields the best result when both
indoor and outdoor detection rates are considered; furthermore, with K = 20, 87.5% of indoor
and 89% of outdoor images are correctly classified. Table 2 shows the classification results on
DB2. In this experiment the results again show that choosing 16 channels achieves the highest
classification rates. The overall comparison of DB1 and DB2 shows that the results differ only
by a small percentage, indicating that the method is robust for classifying images taken under
unknown camera calibration and image compression.
             N=4    N=8    N=12   N=16   N=18
K=15 indoor   77     92.5   91.5   87.5   90.5
     outdoor  54     72     82.5   90     78.5
K=20 indoor   81.5   88.5   90.5   90     85.2
     outdoor  63.5   86     84.5   89.5   86.5
K=30 indoor   74     87.5   88.5   88.5   84.5
     outdoor  57     74.5   82.5   89.5   83.7

TABLE 2: Results of classification on DB2 (%)
To view the results in terms of classification accuracy and to find the K that yields the best
results, Fig. 11 shows the accuracy of each experiment. Based on this figure, the best
classification rates, 88.25% and 89.75% accuracy on DB1 and DB2 respectively, are obtained with
N = 16 and K = 20.
In most cases, indoor images are detected with higher accuracy. This is because outdoor images
are exposed to more diverse lighting conditions: the spectrum of sunlight changes during the
day, and reflections from the sea, clouds, and the ground make the spectrum of the reflected
light more complex.
[Plot: accuracy (%) from 59 to 89 vs. K ∈ {15, 20, 30}, one curve per setting
N ∈ {4, 8, 12, 16, 18} on each of DB1 and DB2.]
FIGURE 11: Accuracy of all test cases
To evaluate the classification accuracy of the proposed method, two color features, Color Oriented Histograms (COH) and the Ohta color space, used in [15] and [3] respectively, are extracted and tested on the images in DB1. Table 3 shows the result of this comparison. These features are extracted as explained in the original documents.
             CCT     COH     Ohta
K=15
  Indoor     85.5    94      85.5
  Outdoor    89.5    68.5    70.25
  Accuracy   87.5    81.25   78.75
K=20
  Indoor     87.5    95      84
  Outdoor    89      69.5    70
  Accuracy   88.25   82.25   77
K=25
  Indoor     87      96.5    86
  Outdoor    87.5    68      68.5
  Accuracy   87.25   82.25   77.25

TABLE 3: Comparison of different methods (rates in %)
From the results in this table, it is clear that the proposed method outperforms the state-of-the-art methods by at least 5%. The indoor classification rate of COH is in most cases higher than that of CCT, but its outdoor classification rate is quite low. The Ohta color space proves not to be a preferable color space for indoor-outdoor image classification. To further investigate the robustness of the proposed method, another experiment measures the indoor-outdoor classification results when the JPEG Compression Ratio (CR) applied to the images in DB1 is varied [30].
            CR=2    CR=3    CR=5    CR=10
CCT
  Indoor    86      87.5    77      74.5
  Outdoor   80      76      73      67
  Accuracy  83      81.75   75      70.75
COH
  Indoor    88      79.5    76      72.5
  Outdoor   62.5    60      53      54.5
  Accuracy  75.25   69.75   64.5    63.5
Ohta
  Indoor    71      73.5    69.5    63
  Outdoor   69.3    67      61      54.5
  Accuracy  70.15   70.25   65.25   58.75

TABLE 4: Effect of JPEG compression on classification results (rates in %)
Table 4 summarizes the results of indoor-outdoor image classification for different compression ratios on DB1. From this table it can be seen that the CCT of the image is only slightly affected at CR=2 and CR=3, but as CR increases, the results of all methods start to degrade. For CR=10, the classification accuracy based on the CCT feature is still higher than 70%, while for the two other tested approaches it falls below 65%. This result shows the robustness of the proposed method against compression applied to digital images.
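The robustness claim can be checked directly against the numbers in Table 4; the short script below (accuracy values copied from the table) confirms that only the CCT feature stays above 70% accuracy at the strongest compression.

```python
# Overall accuracy (%) per method at each JPEG compression ratio,
# taken from Table 4.
accuracy = {
    "CCT":  {2: 83.0,  3: 81.75, 5: 75.0,  10: 70.75},
    "COH":  {2: 75.25, 3: 69.75, 5: 64.5,  10: 63.5},
    "Ohta": {2: 70.15, 3: 70.25, 5: 65.25, 10: 58.75},
}

# Which methods remain above 70% accuracy at CR=10?
robust = [m for m, acc in accuracy.items() if acc[10] > 70]
print(robust)  # -> ['CCT']

# Accuracy lost between CR=2 and CR=10 for each method.
for m, acc in accuracy.items():
    print(m, round(acc[2] - acc[10], 2))
```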
5. CONCLUSIONS
In this paper, a new method based on image CCT was proposed for indoor-outdoor image classification. Images were first segmented into different color channels and, based on the proposed algorithm, the CCT of each color channel was calculated. These CCTs formed the feature vector which was fed to a KNN classifier to classify images as indoor or outdoor. Tests were conducted on two datasets: a collection of images gathered from the internet, and video frames extracted from 40 different video clips. The classification results showed that incorporating CCT information yields high classification accuracy: 88.25% on DB1 and 89.75% on DB2. The results on DB1 showed a 5% improvement over other state-of-the-art methods. In addition, the method was tested against changes in the JPEG compression rate applied to
images, where it proved more robust than the other methods. The high classification rate and robustness of the presented method make it well suited to indoor-outdoor image classification.
6. REFERENCES
[1] Angadi, S.A. and M.M. Kodabagi, A Texture Based Methodology for Text Region
Extraction from Low Resolution Natural Scene Images. International Journal of Image
Processing, 2009. 3(5): p. 229-245.
[2] Bianco, S., et al., Improving Color Constancy Using Indoor-Outdoor Image
Classification. IEEE Transactions on Image Processing, 2008. 17(12): p. 2381-2392.
[3] Szummer, M. and R.W. Picard, Indoor-outdoor image classification, in IEEE Workshop
on Content-Based Access of Image and Video Database. 1998: Bombay, India. p. 42-
51.
[4] Ohta, Y.I., T. Kanade, and T. Sakai, Color information for region segmentation.
Computer Graphics and Image Processing, 1980. 13(3): p. 222-241.
[5] Serrano, N., A. Savakis, and J. Luo, A computationally efficient approach to
indoor/outdoor scene classification, in International Conference on Pattern Recognition.
2002: Quebec City, Canada. p. 146-149.
[6] Miene, A., et al., Automatic shot boundary detection and classification of indoor and
outdoor scenes, in Information Technology, 11th Text Retrieval Conference. 2003,
Citeseer. p. 615-620.
[7] Qiu, G., X. Feng, and J. Fang, Compressing histogram representations for automatic
colour photo categorization. Pattern Recognition, 2004. 37(11): p. 2177-2193.
[8] Huang, J., et al., Image indexing using color correlograms. International Journal of
Computer Vision, 2001: p. 245–268.
[9] Qiu, G., Indexing chromatic and achromatic patterns for content-based colour image
retrieval. Pattern Recognition, 2002. 35(8): p. 1675-1686.
[10] Qiu, G. and K.M. Lam, Spectrally layered color indexing. Lecture Notes in Computer
Science, 2002. 2384: p. 100-107.
[11] Collier, J. and A. Ramirez-Serrano, Environment Classification for Indoor/Outdoor
Robotic Mapping, in Canadian Conference on Computer and Robot Vision. 2009, IEEE.
p. 276-283.
[12] Boutell, M. and J. Luo, Bayesian Fusion of Camera Metadata Cues in Semantic Scene
Classification, in Computer Vision and Pattern Recognition (CVPR). 2004: Washington,
D.C. p. 623-630.
[13] Tao, L., Y.H. Kim, and Y.T. Kim, An efficient neural network based indoor-outdoor
scene classification algorithm, in International Conference on Consumer Electronics.
2010: Las Vegas, NV p. 317-318.
[14] Vailaya, A., et al., Image classification for content-based indexing. IEEE Transactions
on Image Processing, 2001. 10(1): p. 117-130.
[15] Kim, W., J. Park, and C. Kim, A Novel Method for Efficient Indoor–Outdoor Image
Classification. Signal Processing Systems, 2010. 61(3): p. 1-8.
[16] Daubechies, I., Ten Lectures on Wavelets. 1992, Philadelphia: SIAM Publications.
[17] Gupta, L., et al., Indoor versus outdoor scene classification using probabilistic neural
network. EURASIP Journal on Advances in Signal Processing, 2007. 1: p. 123-133.
[18] Tolambiya, A., S. Venkatraman, and P.K. Kalra, Content-based image classification
with wavelet relevance vector machines. Soft Computing, 2010. 14(2): p. 129-136.
[19] Hu, G.H., J.J. Bu, and C. Chen, A novel Bayesian framework for indoor-outdoor image
classification, in International Conference on Machine Learning and Cybernetics. 2003:
Xi'an. p. 3028-3032.
[20] Payne, A. and S. Singh, Indoor vs. outdoor scene classification in digital photographs.
Pattern Recognition, 2005. 38(10): p. 1533-1545.
[21] Songbo, T., An effective refinement strategy for KNN text classifier. Expert Systems
with Application, 2006. 30(2): p. 290-298.
[22] Shafer, S.A., Using color to separate reflection components. Color Research and
Application, 1985. 10(4): p. 210.
[23] Ebner, M., Color Constancy. 2007, West Sussex: Wiley.
[24] Finlayson, G., G. Schaefer, and S. Surface, Single surface colour constancy, in 7th
Color Imaging Conference: Color Science, Systems, and Applications. 1999:
Scottsdale, USA. p. 106-113.
[25] Cheng, H.D., et al., Color image segmentation: advances and prospects. Pattern
Recognition, 2001. 34(12): p. 2259-2281.
[26] Wnukowicz, K. and W. Skarbek, Colour temperature estimation algorithm for
digital images - properties and convergence. Opto-Electronics Review, 2003.
11(3): p. 193-196.
[27] Lee, H., et al., One-dimensional conversion of color temperature in perceived
illumination. Consumer Electronics, IEEE Transactions on, 2001. 47(3): p. 340-346.
[28] Hernández-Andrés, J., J. Romero, and R.L. Lee, Calculating correlated color temperatures
across the entire gamut of daylight and skylight chromaticities. Applied Optics, 1999.
38(27): p. 5703-5709.
[29] Robertson, A.R., Computation of correlated color temperature and distribution
temperature. Journal of the Optical Society of America, 1968. 58(11): p. 1528-1535.
[30] Yang, E. and L. Wang, Joint optimization of run-length coding, Huffman coding, and
quantization table with complete baseline JPEG decoder compatibility. Image
Processing, IEEE Transactions on, 2009. 18(1): p. 63-74.