Image retrieval from large image databases is an active research area. In this context, we propose an algorithm that represents images using divisive and partitional clustering approaches. The HSV color components and the Haar wavelet transform are used to extract image features, and these features are used to segment an image into objects. For segmentation, we use a modified k-means clustering algorithm to group similar pixels into K groups with cluster centers. To modify k-means, we propose a divisive clustering algorithm that determines the number of clusters and feeds this number back to k-means to obtain significant object groups. In addition, we discuss a similarity distance measure using a threshold value and object uniqueness to quantify the results.
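As a rough illustration of the two-stage idea described above (the split criterion, thresholds and seeding below are our own choices, not the paper's), a divisive pass can first estimate the number of clusters, which is then handed to plain k-means:

```python
import numpy as np

def divisive_k(pixels, max_depth=4, spread_thresh=0.1):
    """Estimate the number of clusters: recursively bisect the pixel set
    along its highest-variance axis until each group is tight enough;
    the number of leaves is the estimate of k."""
    spread = pixels.std(axis=0)
    if max_depth == 0 or len(pixels) < 2 or spread.max() < spread_thresh:
        return 1
    axis = spread.argmax()
    median = np.median(pixels[:, axis])
    left = pixels[pixels[:, axis] <= median]
    right = pixels[pixels[:, axis] > median]
    if len(left) == 0 or len(right) == 0:
        return 1
    return (divisive_k(left, max_depth - 1, spread_thresh)
            + divisive_k(right, max_depth - 1, spread_thresh))

def kmeans(pixels, k, iters=20):
    """Plain Lloyd's k-means; evenly spaced data points seed the centers
    so the sketch is deterministic."""
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        dists = ((pixels[:, None] - centers[None]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# two synthetic "objects" in an HSV-like 3-D feature space
rng = np.random.default_rng(1)
pix = np.vstack([rng.normal(0.2, 0.02, (100, 3)),
                 rng.normal(0.8, 0.02, (100, 3))])
k = divisive_k(pix)
labels, centers = kmeans(pix, k)
```

On the two synthetic pixel clouds, the divisive pass returns k = 2 and k-means then recovers the two groups.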
Research Inventy: International Journal of Engineering and Science (inventy)
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open access journal, available online as well as in print, that provides rapid (monthly) publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published by a rapid process within 20 days after acceptance, and the peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Multi Resolution features of Content Based Image Retrieval (IDES Editor)
Many content based retrieval systems have been proposed to manage and retrieve images on the basis of their content. In this paper we propose Color Histogram, Discrete Wavelet Transform and Complex Wavelet Transform techniques for efficient image retrieval from a huge database. The Color Histogram technique is based on exact matching of the histograms of the query image and the database images. The Discrete Wavelet Transform technique retrieves images based on the computation of wavelet coefficients of subbands. The Complex Wavelet Transform technique includes computation of the real and imaginary parts to extract details from texture. The proposed methods are tested on the COREL 1000 database, and the retrieval results demonstrate a significant improvement in precision and recall.
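As a sketch of the first of the three techniques (the bin count and the Euclidean comparison below are our choices, not the paper's), a color histogram descriptor can be built and compared like this:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Concatenated per-channel histograms of an RGB image (H x W x 3,
    uint8), normalized so the descriptor is independent of image size."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_distance(a, b):
    """Euclidean distance between the two descriptors; 0 means the
    histograms match exactly."""
    return np.linalg.norm(color_histogram(a) - color_histogram(b))

# a mostly-red image should be far from a mostly-blue one
red = np.zeros((32, 32, 3), np.uint8);  red[..., 0] = 200
blue = np.zeros((32, 32, 3), np.uint8); blue[..., 2] = 200
```

A retrieval loop would rank database images by `histogram_distance` to the query.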
Query Image Searching With Integrated Textual and Visual Relevance Feedback f... (IJERA Editor)
Many researchers have studied relevance feedback in the content based image retrieval (CBIR) literature, but none of the existing CBIR search engines support it, because of scalability, effectiveness and efficiency issues. In this work, we implement integrated relevance feedback for the retrieval of web images. We concentrate on integrating both textual feature (TF) and visual feature (VF) based relevance feedback (RF), and we also test the two individually. The TF-based RF employs an effective search result clustering (SRC) algorithm to obtain salient phrases. A new user interface (UI) is then proposed to support RF. Experimental results show that the proposed algorithm is scalable, effective and accurate.
Real-time Multi-object Face Recognition Using Content Based Image Retrieval (... (IJECEIAES)
A real-time face recognition system is divided into stages, namely feature extraction, clustering, detection, and recognition. Each of these stages uses a different method: Local Binary Pattern (LBP), Agglomerative Hierarchical Clustering (AHC) and Euclidean Distance. Multi-face image search uses the Content Based Image Retrieval (CBIR) method, which searches by the image features themselves. Based on real-time trial results, the accuracy obtained is 61.64%.
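A minimal sketch of the LBP part of that pipeline (8-neighbor LBP on a 3x3 window; the histogram normalization is our assumption):

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Pattern: each interior pixel is encoded by
    comparing its 8 neighbors to the center, giving a code in 0..255."""
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    out = np.zeros((h - 2, w - 2), np.uint8)
    # neighbor offsets, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbor >= center).astype(np.uint8) << bit
    return out

def lbp_histogram(gray):
    """Normalized 256-bin histogram of LBP codes: the face descriptor
    that Euclidean distance can then compare."""
    h = np.bincount(lbp_image(gray).ravel(), minlength=256).astype(float)
    return h / h.sum()
```

Two face crops would then be matched by `np.linalg.norm(lbp_histogram(a) - lbp_histogram(b))`, smaller meaning more similar.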
Comparison of Clustering Algorithms using Neural Network Classifier for Satel... (IJERA Editor)
This paper presents a hybrid clustering algorithm and a feed-forward neural network classifier for land-cover mapping of trees, shade, buildings and roads. It starts with a single-step preprocessing procedure to make the image suitable for segmentation. The preprocessed image is segmented using the hybrid genetic Artificial Bee Colony (ABC) algorithm, developed by hybridizing ABC and FCM to obtain effective segmentation of satellite images, and is then classified using a neural network. The performance of the proposed hybrid algorithm is compared with algorithms such as k-means, Fuzzy C-Means (FCM), Moving K-means, the Artificial Bee Colony (ABC) algorithm, the ABC-GA algorithm, Moving KFCM and the KFCM algorithm.
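Since FCM is the common ingredient of several of the compared algorithms, here is a minimal plain Fuzzy C-Means (no ABC/GA hybridization; the deterministic seeding is our simplification):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50):
    """Plain FCM: alternate the membership update
    u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)) with the weighted center
    update. Evenly spaced data points seed the initial centers."""
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centers

# two tight pixel-feature blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(10, 0.1, (50, 2))])
U, centers = fuzzy_c_means(X)
```

Each row of `U` sums to 1; taking the argmax turns the fuzzy memberships into a hard segmentation.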
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Information search using text and image query (eSAT Journals)
Abstract: An image retrieval and re-ranking system utilizing a visual re-ranking framework is proposed in this paper. The system retrieves a dataset from the World Wide Web based on a textual query submitted by the user, and these results are kept as the dataset for information retrieval. This dataset is then re-ranked using a visual query (multiple images selected by the user from the dataset) which conveys the user's intention semantically. Visual descriptors (MPEG-7), which describe an image with respect to low-level features like color and texture, are used for calculating distances; these distances are a measure of similarity between the query images and members of the dataset. Our proposed system has been assessed on different types of queries such as apples, Console and Paris, and shows significant improvement over the initial text-based search results. The system is well suited to online shopping applications. Index Terms: MPEG-7, Color Layout Descriptor (CLD), Edge Histogram Descriptor (EHD), image retrieval and re-ranking system
A broad-ranging open access journal. Fast and efficient online submission. Expe... (ijceronline)
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
A comparative analysis of retrieval techniques in content based image retrieval (csandit)
A basic group of visual features such as color, shape and texture is used in Content Based Image Retrieval (CBIR) to match a query image, or a sub-region of an image, against similar images in an image database. To improve query results, relevance feedback is often used in CBIR to help users express their preferences. In this paper, a new approach for image retrieval is proposed based on features such as Color Histogram, Eigen Values and Match Point. Images from various types of databases are first identified using edge detection techniques. Once the image is identified, it is searched for in the particular database and all related images are displayed, which saves retrieval time. Further, to retrieve the precise query image, any of the three techniques can be used, and a comparison is done with respect to average retrieval time. The Eigen value technique was found to be the best compared with the other two techniques.
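The abstract does not spell out its Eigen value feature; one plausible reading (entirely our assumption) is a compact descriptor built from the largest singular values of the intensity matrix:

```python
import numpy as np

def eigen_feature(gray, k=8):
    """Hypothetical 'Eigen value' descriptor: the k largest singular
    values of the image matrix, normalized so that a global change of
    contrast does not change the descriptor."""
    s = np.linalg.svd(gray.astype(float), compute_uv=False)[:k]
    return s / (np.linalg.norm(s) + 1e-12)
```

Singular values summarize the coarse structure of the image in just k numbers, which keeps per-image comparison cheap.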
An Improved Way of Segmentation and Classification of Remote Sensing Images U... (ijsrd.com)
The significance of digital image processing stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission and representation for autonomous machine perception. The objective of this research work is to define image segmentation of remote sensing images, which are subsequently classified using statistical measures. In this paper a kernel-induced Possibilistic C-Means clustering algorithm is implemented for classifying remote sensing image data using image features. Finally, the proposed work shows that this algorithm segments and classifies the image with better accuracy according to statistical metrics.
OBTAINING SUPER-RESOLUTION IMAGES BY COMBINING LOW-RESOLUTION IMAGES WITH HIG... (ijcsit)
In this paper, we propose a new algorithm to estimate a super-resolution image from a given low-resolution image by adding high-frequency information extracted from natural high-resolution images in the training dataset. The selection of the high-frequency information from the training dataset is accomplished in two steps: a nearest-neighbor search algorithm, which can be implemented on the GPU, selects the closest images from the training dataset, and a sparse-representation algorithm estimates the weight parameters used to combine the high-frequency information of the selected images. This simple but very powerful super-resolution algorithm produces state-of-the-art results. Qualitatively and quantitatively, we demonstrate that the proposed algorithm outperforms existing state-of-the-art super-resolution algorithms.
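A sketch of the first step, the nearest-neighbor search over the training set (brute-force CPU version; the GPU port and the sparse-coding weight estimation are left out):

```python
import numpy as np

def nearest_patches(query, library, k=3):
    """Return indices of the k library patches closest to the query
    patch in L2 distance; their high-frequency content would then be
    blended using weights from a sparse-coding step (omitted here)."""
    d = np.linalg.norm(library - query[None], axis=1)
    return np.argsort(d)[:k]

# toy library of 100 flattened 4x4 patches
rng = np.random.default_rng(0)
library = rng.random((100, 16))
query = library[42] + 0.001  # nearly identical to patch 42
idx = nearest_patches(query, library, k=3)
```

The brute-force distance computation is embarrassingly parallel, which is why it maps well onto a GPU.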
Content based image retrieval is the retrieval of images with respect to visual appearance such as texture, shape and color. The methods, components and algorithms adopted in content based image retrieval are commonly derived from areas like pattern recognition, signal processing and computer vision. Here, shape and color features are extracted by means of the wavelet transform and the color histogram, and a new content based retrieval method is proposed in this paper. Algorithms are proposed for shape, color and texture feature extraction. The discrete wavelet transform is implemented in order to compute the Euclidean distance between the query image and the database images. Clusters are computed with the help of a modified K-Means clustering technique, and the analysis is made between the query image and the database images. MATLAB is used to execute the queries. The proposed extraction performs segmentation and a grid-means module, feature extraction and a K-nearest neighbor clustering algorithm to construct the content based image retrieval system. The obtained results are computed and compared against other algorithms for the retrieval of quality image features.
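A minimal single-level Haar DWT and energy feature, as one concrete reading of the wavelet + Euclidean distance step (the averaging normalization and the energy summary are our choices):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar transform: average/detail along rows,
    then along columns, giving subbands LL, LH, HL, HH
    (image sides must be even)."""
    a = (img[0::2] + img[1::2]) / 2.0   # vertical average
    d = (img[0::2] - img[1::2]) / 2.0   # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    HL = (a[:, 0::2] - a[:, 1::2]) / 2.0
    LH = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def dwt_feature(img):
    """Per-subband energies; a query and a database image are then
    compared by the Euclidean distance between these vectors."""
    return np.array([np.square(b).sum() for b in haar_dwt2(img)])
```

For a flat image all detail subbands are zero, so the feature vector carries energy only in LL.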
Web Image Retrieval Using Visual Dictionary (ijwscjournal)
In this research, we propose a semantic based image retrieval system to retrieve a set of relevant images from the Web for a given query image. We use a global color space model and Dense SIFT feature extraction to generate a visual dictionary with the proposed quantization algorithm. The images are transformed into sets of features, which serve as inputs to the quantization algorithm to generate the codewords that form the visual dictionary. These codewords represent images semantically as visual labels using Bag-of-Features (BoF). The histogram intersection method is used to measure the distance between the input image and the set of images in the image database to retrieve similar images. The experimental results are evaluated over a collection of 1000 generic Web images to demonstrate the effectiveness of the proposed system.
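The BoF + histogram intersection step can be sketched as follows (the vocabulary size and normalization are illustrative):

```python
import numpy as np

def bof_histogram(codeword_ids, vocab_size):
    """Bag-of-Features: normalized counts of each visual word
    (codeword index) occurring in an image."""
    h = np.bincount(codeword_ids, minlength=vocab_size).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms, in [0, 1];
    1 means the histograms are identical."""
    return float(np.minimum(h1, h2).sum())

h_query = bof_histogram(np.array([0, 0, 1, 3]), vocab_size=4)
h_same  = bof_histogram(np.array([0, 0, 1, 3]), vocab_size=4)
h_other = bof_histogram(np.array([2, 2, 2, 2]), vocab_size=4)
```

Retrieval then ranks database images by descending intersection with the query histogram.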
Super-resolution (SR) is the process of obtaining a high resolution (HR) image or a sequence of HR images from a set of low resolution (LR) observations. Block matching algorithms are used for motion estimation to obtain motion vectors between the frames in super-resolution. The implementation and comparison of two different types of block matching algorithms, Exhaustive Search (ES) and Spiral Search (SS), are discussed. The advantages of each algorithm are given in terms of motion estimation computational complexity and Peak Signal to Noise Ratio (PSNR). The Spiral Search algorithm achieves PSNR close to that of Exhaustive Search at less computation time. The algorithms evaluated in this paper are widely used in video super-resolution and have also been used in implementing various video standards such as H.263, MPEG-4 and H.264.
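A sketch of the Exhaustive Search variant with a SAD cost (Spiral Search would visit the same window in a center-out order so it can stop early; block and window sizes below are illustrative):

```python
import numpy as np

def exhaustive_search(ref, cur, y, x, block=8, search=4):
    """Exhaustive-search block matching: test every displacement within
    +/- `search` pixels of the block at (y, x) in `cur` and return the
    motion vector minimizing the sum of absolute differences (SAD)."""
    target = cur[y:y + block, x:x + block].astype(int)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                continue  # candidate block would leave the frame
            cand = ref[yy:yy + block, xx:xx + block].astype(int)
            sad = int(np.abs(cand - target).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

# synthetic pair: the current frame is the reference shifted by (2, 1)
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32))
cur = np.roll(ref, (2, 1), axis=(0, 1))
mv, sad = exhaustive_search(ref, cur, y=10, x=10)
```

On the shifted pair the search recovers the displacement exactly, with SAD 0 at motion vector (-2, -1).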
With the development of multimedia technology, Content Based Image Retrieval (CBIR) has become one of the outstanding areas for retrieving images from a large collection in a database. The feature vectors of the query image are compared with the feature vectors of the database images to get matching images. It is widely observed that no single algorithm is effective at extracting all the different kinds of natural images. Thus an intensive analysis of certain color, texture and shape extraction techniques is carried out to identify an efficient CBIR technique suited to a selected sort of images. The extraction of an image includes feature description and feature extraction. In this paper, we propose the Color Layout Descriptor (CLD), Grey Level Co-Occurrence Matrix (GLCM) and Marker-Controlled Watershed Segmentation feature extraction techniques, which retrieve the matching image based on the similarity of color, texture and shape within the database. For performance analysis, the image retrieval timings of the proposed technique are calculated and compared with each of the individual features.
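The GLCM texture part can be sketched like this (horizontal offset only; the number of levels and the two Haralick-style statistics are our selection):

```python
import numpy as np

def glcm(gray, levels=8):
    """Grey Level Co-occurrence Matrix for the horizontal neighbor
    offset (0, 1): P[i, j] is the probability that a pixel quantized to
    level i has a right-hand neighbor at level j."""
    q = gray.astype(int) * levels // 256   # quantize 0..255 to levels bins
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()
    M = np.zeros((levels, levels))
    np.add.at(M, (a, b), 1)
    return M / M.sum()

def glcm_features(gray):
    """Two classic GLCM statistics: contrast and energy."""
    P = glcm(gray)
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    return np.array([contrast, energy])
```

A perfectly flat region has zero contrast and maximal energy, while a busy texture moves mass off the GLCM diagonal.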
Abstract: Many applications such as robot navigation, defense, medical imaging and remote sensing perform various processing tasks that can be carried out more easily when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, both large and small scale, from the source images; guided filtering is employed for this extraction. A Gaussian and Laplacian pyramid approach is then used to fuse the layers obtained. Experimental results demonstrate that the proposed method obtains better fusion performance on all sets of images, and the results clearly indicate the feasibility of the proposed approach.
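A toy version of the idea, with a box filter standing in for the guided filter and a max-absolute rule for the detail layer (both simplifications are ours; the paper's pyramid stage is omitted):

```python
import numpy as np

def box_blur(img, k=3):
    """k x k box filter with edge padding; a crude stand-in for the
    edge-preserving guided filter used to split base from detail."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_scale_fusion(a, b):
    """Fuse two registered source images: average the smooth base
    layers and keep the stronger detail at every pixel."""
    base_a, base_b = box_blur(a), box_blur(b)
    det_a, det_b = a - base_a, b - base_b
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return (base_a + base_b) / 2.0 + detail
```

A sanity property of any such rule is that fusing an image with itself returns the image unchanged.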
Enhanced target tracking based on mean shift algorithm for satellite imagery (eSAT Journals)
Abstract: Target tracking in high resolution satellite images is a challenging task in the computer vision field. In this paper we propose a mean shift based enhanced target tracking system for high resolution satellite imagery. In the proposed tracking algorithm, target modeling is done using spectral features of the target object, i.e. the mean and the energy density function. The feature vector space with minimum Euclidean distance is used for predicting the next possible position of the target object in consecutive frames. The proposed tracking algorithm has been tested using two high resolution databases, i.e. harbor and airport region databases acquired by the WorldView-2 satellite at different times. Performance parameters such as recall, precision and F1 score are also calculated to show the tracking ability of the proposed method in real-time applications, and are compared with the results of the Regional Operator Design based tracking algorithm proposed in [1]. The results show that our proposed method gives relatively better performance than the other tracking algorithms used in satellite imagery. Keywords: target tracking, mean shift algorithm, energy density function, feature vector space, frame
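The core mean shift step such a tracker relies on can be sketched as mode seeking with a flat kernel (the bandwidth and kernel choice are illustrative, not the paper's):

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth=1.0, iters=30):
    """Flat-kernel mean shift: repeatedly move the estimate to the mean
    of all points within `bandwidth`, climbing toward a density mode; a
    tracker re-runs this each frame from the previous target position."""
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        near = points[np.linalg.norm(points - x, axis=1) <= bandwidth]
        if len(near) == 0:
            break  # no support in the window; keep the last estimate
        x = near.mean(axis=0)
    return x

# feature points concentrated around a target at (5, 5)
rng = np.random.default_rng(0)
pts = rng.normal(5.0, 0.05, (200, 2))
mode = mean_shift_mode(pts, start=[4.5, 4.5], bandwidth=2.0)
```

Starting slightly off-target, the iteration converges onto the dense cluster center.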
Segmentation of Images by using Fuzzy k-means clustering with ACO (IJTET Journal)
Abstract: Superpixels are becoming increasingly popular in computer vision applications. Image segmentation is the process of partitioning a digital image into multiple segments (known as superpixels). In this paper, we develop fuzzy k-means clustering with Ant Colony Optimization (ACO). In the proposed algorithm, initial assumptions are made in the calculation of the mean value, which depends on the colors of neighboring pixels in the image. The fuzzy mean is calculated for the whole image, and a set of rules is applied iteratively to cluster the whole image. After choosing a neighborhood, the fitness function is calculated in the optimization process, and the image is segmented based on the optimized clusters. Using fuzzy k-means clustering with the ACO technique, image segmentation achieves higher accuracy and reduced segmentation time compared to the previous technique, the Lazy Random Walk (LRW) methodology, which is itself an optimization of the random walk technique.
Comprehensive Performance Comparison of Cosine, Walsh, Haar, Kekre, Sine, Sla... (CSCJournals)
The desire for better and faster retrieval techniques has always fuelled research in content based image retrieval (CBIR). An extended comparison of innovative CBIR techniques based on feature vectors taken as fractional coefficients of transformed images, using various orthogonal transforms, is presented in this paper. A fairly large number of popular transforms is considered along with a newly introduced transform: Discrete Cosine, Walsh, Haar, Kekre, Discrete Sine, Slant and Discrete Hartley. The energy compaction property of these transforms is exploited to reduce the feature vector size per image by taking fractional coefficients of the transformed image. A smaller feature vector means less time to compare feature vectors, and hence faster retrieval. The feature vectors are extracted in fourteen different ways from the transformed image: first all coefficients of the transformed image are considered, and then fourteen reduced coefficient sets are considered as feature vectors (50%, 25%, 12.5%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012% and 0.06% of the complete set of transformed image coefficients). To extract the gray and RGB feature sets, the seven image transforms are applied to the gray image equivalents and to the color components of the images. These reduced coefficient sets for gray as well as RGB feature vectors are then used instead of all coefficients of the transformed images as the feature vector for image retrieval, resulting in better performance and lower computation. The Wang image database of 1000 images spread across 11 categories is used to test the performance of the proposed CBIR techniques. 55 queries (5 per category) are fired at the database to find the net average precision and recall values for all feature sets per transform for each proposed CBIR technique.
The results show performance improvement (higher precision and recall values) with fractional coefficients compared to the complete transform of the image, at reduced computation and hence faster retrieval. Finally, the Kekre transform surpasses all the other discussed transforms in performance, with the highest precision and recall values for fractional coefficients (6.25% and 3.125% of all coefficients), and computation is lowered by 94.08% as compared to the Cosine, Sine or Hartley transforms.
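The fractional-coefficient idea can be illustrated with the DCT (the matrix construction is ours; keeping the top-left m x m block so that m²/n² equals the chosen fraction):

```python
import numpy as np

def dct2(img):
    """2-D orthonormal DCT-II of a square image via C @ img @ C.T."""
    n = img.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C @ img @ C.T

def fractional_feature(img, frac=0.0625):
    """Keep only the top-left fraction of transform coefficients, where
    energy compaction concentrates most of the signal, as the feature."""
    F = dct2(img.astype(float))
    m = max(1, int(round(img.shape[0] * np.sqrt(frac))))
    return F[:m, :m].ravel()

smooth = np.outer(np.linspace(0.0, 1.0, 16), np.linspace(0.0, 1.0, 16))
feat = fractional_feature(smooth, frac=0.0625)   # 16 of 256 coefficients
```

For smooth natural-image-like content, the 6.25% fraction retains almost all of the signal energy, which is why retrieval quality survives the 16x reduction in feature size.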
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Published papers are selected through double peer review to ensure originality, relevance and readability. The articles published in our journal can be accessed online.
Extended Performance Appraise of Image Retrieval Using the Feature Vector as ... (Waqas Tariq)
An extension to the content based image retrieval (CBIR) technique based on the row mean of transformed columns of an image is presented here. Compared to the earlier evaluation over three image transforms, the performance of the proposed CBIR technique is now appraised using seven different image transforms: Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Hartley Transform, Haar Transform, Kekre Transform, Walsh Transform and Slant Transform. A generic image database with 1000 images spread across 11 categories is used to test the performance of the proposed CBIR techniques. For each transform, 55 queries (5 per category) were fired at the image database. Every technique is tested on both the color and grey versions of the image database. To compare the performance of the image retrieval technique across transforms, the average precision and recall of all queries are computed. The results show performance improvement (higher precision and recall values) with the proposed methods compared to using all pixel data of the image, at reduced computation and hence faster retrieval, in both the gray and color versions of the image database. Variants that include or exclude the DC component of the transformed columns in the feature vector are also tested, and it is found that the presence of the DC component improves image retrieval results. The ranking of transforms by performance in the proposed gray CBIR techniques with the DC component, starting from the best, is DST, Haar, Hartley, DCT, Walsh, Slant and Kekre. In the color variants with the DC component, the ranking starting from the best is DCT, Haar, Walsh, Slant, DST, Hartley and Kekre.
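The row-mean-of-transformed-columns feature can be sketched with the Hartley transform, one of the seven compared (the normalization is our choice):

```python
import numpy as np

def row_mean_feature(img):
    """Apply a 1-D orthogonal transform (here the discrete Hartley
    transform, cas = cos + sin) to every column, then take the mean
    across columns: an n-length feature instead of n x n pixels."""
    n = img.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    H = (np.cos(2 * np.pi * k * i / n)
         + np.sin(2 * np.pi * k * i / n)) / np.sqrt(n)
    return (H @ img).mean(axis=1)

flat = np.full((8, 8), 2.0)
feat = row_mean_feature(flat)
```

For a constant image, all the energy lands in the first (DC) entry of the feature, which is consistent with the finding that keeping the DC component helps retrieval.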
Comparision of Clustering Algorithms usingNeural Network Classifier for Satel...IJERA Editor
This paper presents a hybrid clustering algorithm and feed-forward neural network classifier for land-cover mapping of trees, shade, building and road. It starts with the single step preprocessing procedure to make the image suitable for segmentation. The pre-processed image is segmented using the hybrid genetic-Artificial Bee Colony(ABC) algorithm that is developed by hybridizing the ABC and FCM to obtain the effective segmentation in satellite image and classified using neural network . The performance of the proposed hybrid algorithm is compared with the algorithms like, k-means, Fuzzy C means(FCM), Moving K-means, Artificial Bee Colony(ABC) algorithm, ABC-GA algorithm, Moving KFCM and KFCM algorithm.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Information search using text and image queryeSAT Journals
Abstract An image retrieval and re-ranking system utilizing a visual re-ranking framework which is proposed in this paper the system retrieves a dataset from the World Wide Web based on textual query submitted by the user. These results are kept as data set for information retrieval. This dataset is then re-ranked using a visual query (multiple images selected by user from the dataset) which conveys user’s intention semantically. Visual descriptors (MPEG-7) which describe image with respect to low-level feature like color, texture, etc are used for calculating distances. These distances are a measure of similarity between query images and members of the dataset. Our proposed system has been assessed on different types of queries such as apples, Console, Paris, etc. It shows significant improvement on initial text-based search results.This system is well suitable for online shopping application. Index Terms: MPEG-7, Color Layout Descriptor (CLD), Edge Histogram Descriptor (EHD), image retrieval and re-ranking system
A broad ranging open access journal Fast and efficient online submission Expe...ijceronline
nternational Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
A comparative analysis of retrieval techniques in content based image retrievalcsandit
Basic group of visual techniques such as color, shape, texture are used in Content Based Image
Retrievals (CBIR) to retrieve query image or sub region of image to find similar images in
image database. To improve query result, relevance feedback is used many times in CBIR to
help user to express their preference and improve query results. In this paper, a new approach
for image retrieval is proposed which is based on the features such as Color Histogram, Eigen
Values and Match Point. Images from various types of database are first identified by using
edge detection techniques .Once the image is identified, then the image is searched in the
particular database, then all related images are displayed. This will save the retrieval time.
Further to retrieve the precise query image, any of the three techniques are used and
comparison is done w.r.t. average retrieval time. Eigen value technique found to be the best as
compared with other two techniques.
An Improved Way of Segmentation and Classification of Remote Sensing Images U...ijsrd.com
The Ultimate significance of Images lies in processing the digital image which stems from two principal application areas: Advances of pictorial information for human interpretation; and dispensation of image data for storage, communication, and illustration for self-sufficient machine perception. The objective of this research work is to define the meaning and possibility of image segmentation based on remote sensing images which are successively classified with statistical measures. In this paper kernel induced Possiblistic C-means clustering algorithm has been implemented for classifying remote sensing image data with image features. As a final point of the proposed work is to point out that this algorithm works well for segmenting and classifying the image with better accuracy with statistical metrices.
OBTAINING SUPER-RESOLUTION IMAGES BY COMBINING LOW-RESOLUTION IMAGES WITH HIG...ijcsit
In this paper, we propose a new algorithm to estimate a super-resolution image from a given low-resolution
image, by adding high-frequency information that is extracted from natural high-resolution images in the
training dataset. The selection of the high-frequency information from the training dataset is accomplished in
two steps, a nearest-neighbor search algorithm is used to select the closest images from the training dataset,
which can be implemented in the GPU, and a sparse-representation algorithm is used to estimate a weight
parameter to combine the high-frequency information of selected images. This simple but very powerful
super-resolution algorithm can produce state-of-the-art results. Qualitatively and quantitatively, we
demonstrate that the proposed algorithm outperforms existing state-of-the-art super-resolution algorithms.
Content-based image retrieval is the retrieval of images with respect to visual appearance such as texture, shape and color. The methods, components and algorithms adopted in content-based image retrieval are commonly derived from areas such as pattern recognition, signal processing and computer vision. In this work, shape and color features are extracted by means of the wavelet transform and the color histogram, and a new content-based retrieval approach is proposed. Algorithms are proposed for shape, color and texture feature extraction, and the discrete wavelet transform is applied before computing the Euclidean distance. Clusters are computed with a modified K-means clustering technique, and the comparison is made between the query image and the database images. The queries are executed in MATLAB. The K-means-based extraction is performed through segmentation and grid-means modules, feature extraction and a K-nearest-neighbor clustering algorithm to construct the content-based image retrieval system. The obtained results are evaluated and compared against the other algorithms for the retrieval of quality image features.
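The wavelet-plus-distance pipeline mentioned above can be sketched with a one-level 1-D Haar transform followed by a Euclidean comparison. This is a sketch under assumptions: the paper does not state its decomposition depth or how 2-D images are handled, and the function names are illustrative.

```python
# One-level 1-D Haar DWT and a Euclidean distance between feature vectors.
# Real CBIR systems apply the transform to 2-D images at several levels.

def haar_1d(signal):
    """Return (approximation, detail) coefficients; len(signal) must be even."""
    s = 2 ** -0.5
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def feature_vector(signal):
    approx, detail = haar_1d(signal)
    return approx + detail

def euclidean_distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
```

A query image's feature vector would be compared against every database vector with `euclidean_distance`, and the smallest distances returned.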
Web Image Retrieval Using Visual Dictionary (ijwscjournal)
In this research, we have proposed a semantic-based image retrieval system to retrieve a set of relevant images for a given query image from the Web. We have used a global color space model and the Dense SIFT feature extraction technique to generate a visual dictionary using the proposed quantization algorithm. The images are transformed into sets of features. These features are used as inputs to our proposed quantization algorithm for generating the codewords that form the visual dictionary. These codewords are used to represent images semantically as visual labels using Bag-of-Features (BoF). The histogram intersection method is used to measure the distance between the input image and the set of images in the image database to retrieve similar images. The experimental results are evaluated over a collection of 1000 generic Web images to demonstrate the effectiveness of the proposed system.
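The histogram intersection measure used above is a one-line similarity between normalized histograms; a minimal sketch (the normalization convention is an assumption):

```python
def histogram_intersection(h1, h2):
    """Similarity in [0, 1] for two normalized histograms of equal length:
    the sum of the bin-wise minima."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Identical histograms score 1.0; histograms with no overlapping mass score 0.0, so ranking database images by this score returns the most similar first.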
Super-resolution (SR) is the process of obtaining a high resolution (HR) image or
a sequence of HR images from a set of low resolution (LR) observations. The block
matching algorithms are used for motion estimation to obtain motion vectors between the
frames in super-resolution. The implementation and comparison of two different types of
block matching algorithms viz. Exhaustive Search (ES) and Spiral Search (SS) are
discussed. Advantages of each algorithm are given in terms of motion estimation
computational complexity and Peak Signal to Noise Ratio (PSNR). The Spiral Search
algorithm achieves PSNR close to that of Exhaustive Search at less computation time than
that of Exhaustive Search. The algorithms that are evaluated in this paper are widely used
in video super-resolution and also have been used in implementing various video standards
like H.263, MPEG4, H.264.
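Exhaustive Search, the baseline compared above, can be sketched as a brute-force scan of all candidate displacements with a sum-of-absolute-differences (SAD) cost. The frame representation (plain 2-D lists of grey levels) and the function names are assumptions for illustration; Spiral Search would visit the same candidates in a center-outward order with early termination.

```python
# Exhaustive-search block matching with a SAD cost.

def sad(frame_a, frame_b, ax, ay, bx, by, n):
    """Sum of absolute differences between an n x n block in each frame."""
    total = 0
    for dy in range(n):
        for dx in range(n):
            total += abs(frame_a[ay + dy][ax + dx] - frame_b[by + dy][bx + dx])
    return total

def exhaustive_search(cur, ref, bx, by, n, p):
    """Return the motion vector (mvx, mvy) minimizing SAD within +/- p pixels."""
    best = (0, 0)
    best_cost = float("inf")
    h, w = len(ref), len(ref[0])
    for my in range(-p, p + 1):
        for mx in range(-p, p + 1):
            x, y = bx + mx, by + my
            if 0 <= x <= w - n and 0 <= y <= h - n:
                cost = sad(cur, ref, bx, by, x, y, n)
                if cost < best_cost:
                    best_cost, best = cost, (mx, my)
    return best
```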
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
The development of multimedia system technology in Content-Based Image Retrieval (CBIR) systems is one of the outstanding areas for retrieving images from an oversized collection in a database. The feature vectors of the query image are compared with the feature vectors of the database images to get matching images. It is widely observed that no single algorithm is effective in extracting all the differing kinds of natural images. Thus an intensive analysis of certain color, texture and shape extraction techniques is carried out to identify an efficient CBIR technique that suits a selected sort of images. The extraction of an image includes feature description and feature extraction. In this paper, we propose the Color Layout Descriptor (CLD), Gray Level Co-Occurrence Matrix (GLCM) and Marker-Controlled Watershed Segmentation feature extraction techniques, which extract the matching image based on the similarity of color, texture and shape within the database. For performance analysis, the image retrieval timing results of the proposed technique are calculated and compared with each of the individual features.
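The GLCM texture feature named above counts how often pairs of grey levels co-occur at a fixed offset. A minimal sketch for the horizontal offset (dx=1, dy=0) only, with an illustrative contrast feature; real GLCMs usually combine several offsets and may be symmetrized:

```python
# Grey-level co-occurrence matrix for horizontally adjacent pixels.

def glcm(image, levels):
    """image: 2-D list of integer grey levels in [0, levels)."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):  # each horizontal neighbour pair
            m[a][b] += 1
    return m

def glcm_contrast(m):
    """Contrast texture feature: sum of (i-j)^2 * p(i,j) over the matrix."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j] / total
               for i in range(len(m)) for j in range(len(m)))
```

Features such as contrast, energy and homogeneity computed from this matrix form the texture part of the feature vector.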
Abstract: Many applications such as robot navigation, defense, medical imaging and remote sensing perform various processing tasks, which can be performed more easily when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, that is, large and small scale, from the source images. In this approach, guided filtering is employed for this extraction. A Gaussian and Laplacian pyramidal approach is then used to fuse the different layers obtained. Experimental results demonstrate that the proposed method can obtain better performance for fusion of all sets of images. The results clearly indicate the feasibility of the proposed approach.
Enhanced target tracking based on mean shift algorithm for satellite imagery (eSAT Journals)
Abstract: Target tracking in high resolution satellite images is a challenging task for the computer vision field. In this paper we propose a mean-shift-algorithm-based enhanced target tracking system for high resolution satellite imagery. In the proposed tracking algorithm, target modeling is done using spectral features of the target object, i.e. the mean and the energy density function. The feature vector space with minimum Euclidean distance is used for predicting the next possible position of the target object in consecutive frames. The proposed tracking algorithm has been tested using two high resolution databases, i.e. harbor and airport region databases acquired by the WorldView-2 satellite at different times. Performance parameters such as recall, precision and F1 score are also calculated to show the tracking ability of the proposed method in real-time applications, and are compared with the results of the Regional Operator Design based tracking algorithm proposed in [1]. The results show that our proposed method gives relatively better performance than the other tracking algorithms used in satellite imagery. Keywords: target tracking, mean shift algorithm, energy density function, feature vector space, frame
Segmentation of Images by using Fuzzy k-means clustering with ACO (IJTET Journal)
Abstract: Superpixels are becoming increasingly popular for use in computer vision applications. Image segmentation is the process of partitioning a digital image into multiple segments (known as superpixels). In this paper, we developed fuzzy k-means clustering with Ant Colony Optimization (ACO). In the proposed algorithm, initial assumptions are made in the calculation of the mean value, which depend on the colors of neighboring pixels in the image. The fuzzy mean is calculated for the whole image; this process has a set of rules that are applied iteratively to cluster the whole image. Once a neighborhood is chosen, the fitness function is calculated in the optimization process. Based on the optimized clusters the image is segmented. By using fuzzy k-means clustering with the ACO technique, the image segmentation obtains high accuracy and the segmentation time is reduced compared to the previous technique, the Lazy Random Walk (LRW) methodology, which is itself an optimization of the random walk technique.
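The fuzzy clustering core of the method above can be sketched with the standard fuzzy c-means update equations on 1-D intensity data; the ACO fitness step is omitted, and the fuzzifier m=2 and function names are assumptions for the sketch.

```python
# One iteration of fuzzy c-means: membership update, then center update.

def memberships(points, centers, m=2.0):
    """Membership matrix u[i][j] of point i in cluster j (rows sum to 1)."""
    u = []
    for x in points:
        d = [abs(x - c) or 1e-12 for c in centers]  # avoid division by zero
        row = [1.0 / sum((d[j] / d[k]) ** (2 / (m - 1))
                         for k in range(len(centers)))
               for j in range(len(centers))]
        u.append(row)
    return u

def update_centers(points, u, m=2.0):
    """Weighted means of the points, weights being memberships to the power m."""
    centers = []
    for j in range(len(u[0])):
        num = sum((u[i][j] ** m) * points[i] for i in range(len(points)))
        den = sum(u[i][j] ** m for i in range(len(points)))
        centers.append(num / den)
    return centers
```

Iterating these two steps until the centers stabilize yields the soft clusters that the segmentation is then derived from.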
Comprehensive Performance Comparison of Cosine, Walsh, Haar, Kekre, Sine, Sla... (CSCJournals)
The desire for better and faster retrieval techniques has always fuelled research in content based image retrieval (CBIR). An extended comparison of innovative CBIR techniques based on feature vectors as fractional coefficients of transformed images using various orthogonal transforms is presented in this paper. A fairly large number of popular transforms are considered, along with a newly introduced transform. The transforms used are the Discrete Cosine, Walsh, Haar, Kekre, Discrete Sine, Slant and Discrete Hartley transforms. The energy compaction of the transforms into a few coefficients is exploited to reduce the feature vector size per image by taking fractional coefficients of the transformed image. A smaller feature vector size results in less time for comparison of feature vectors, and hence faster retrieval of images. The feature vectors are extracted in fourteen different ways from the transformed image, the first being all the coefficients of the transformed image, followed by fourteen reduced coefficient sets considered as feature vectors (50%, 25%, 12.5%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012% and 0.06% of the complete transformed image coefficients). To extract gray and RGB feature sets, the seven image transforms are applied to the gray image equivalents and the color components of the images. These fourteen reduced coefficient sets for gray as well as RGB feature vectors are then used instead of all coefficients of the transformed images as feature vectors for image retrieval, resulting in better performance and lower computation. The Wang image database of 1000 images spread across 11 categories is used to test the performance of the proposed CBIR techniques. 55 queries (5 per category) are fired on the database to find net average precision and recall values for all feature sets per transform for each proposed CBIR technique.
The results have shown performance improvement (higher precision and recall values) with fractional coefficients compared to the complete transform of the image, at reduced computation resulting in faster retrieval. Finally, the Kekre transform surpasses all other discussed transforms in performance, with the highest precision and recall values for fractional coefficients (6.25% and 3.125% of all coefficients), and computation is lowered by 94.08% as compared to the Cosine, Sine or Hartley transforms.
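The fractional-coefficient idea above relies on energy compaction: after an orthogonal transform, most of the image's energy sits in the low-frequency (top-left) corner, so only that corner needs to be kept as the feature vector. A minimal sketch, assuming a square coefficient matrix and a square top-left window (the paper's exact coefficient selection pattern may differ):

```python
# Keep only the top-left k x k corner of an n x n transform coefficient
# matrix, where fraction ~= (k*k) / (n*n) of the coefficients survive.

def fractional_coefficients(coeffs, fraction):
    n = len(coeffs)
    k = max(1, int(round(n * fraction ** 0.5)))
    return [value for row in coeffs[:k] for value in row[:k]]
```

At fraction 6.25%, a 256x256 coefficient matrix shrinks to a 64x64 corner, cutting feature-vector comparison cost accordingly.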
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Extended Performance Appraise of Image Retrieval Using the Feature Vector as ... (Waqas Tariq)
An extension to the content based image retrieval (CBIR) technique based on the row mean of transformed columns of an image is presented here. Compared with the earlier study of three image transforms, the performance appraisal of the proposed CBIR technique is now done using seven different image transforms: the Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Hartley Transform, Haar Transform, Kekre Transform, Walsh Transform and Slant Transform. A generic image database with 1000 images spread across 11 categories is used to test the performance of the proposed CBIR techniques. For each transform, 55 queries (5 per category) were fired on the image database. Every technique is tested on both the color and gray versions of the image database. To compare the performance of the image retrieval technique across transforms, average precision and recall are computed over all queries. The results have shown performance improvement (higher precision and recall values) with the proposed methods compared to using all pixel data of the image, at reduced computation resulting in faster retrieval in both the gray and color versions of the image database. The variations of including and excluding the DC component of the transformed columns in the feature vector were also tested, and it was found that the presence of the DC component in the feature vector improves image retrieval results. The ranking of transforms for performance in the proposed gray CBIR techniques with DC component consideration can be given as DST, Haar, Hartley, DCT, Walsh, Slant and Kekre. In the color variants of the proposed techniques with the DC component, the performance ranking of image transforms starting from the best can be listed as DCT, Haar, Walsh, Slant, DST, Hartley and Kekre.
A COMPARATIVE ANALYSIS OF RETRIEVAL TECHNIQUES IN CONTENT BASED IMAGE RETRIEVAL (cscpconf)
A basic group of visual techniques, such as color, shape and texture, is used in Content Based Image Retrieval (CBIR) to match a query image or a sub-region of an image against similar images in an image database. To improve query results, relevance feedback is often used in CBIR to help users express their preferences. In this paper, a new approach for image retrieval is proposed which is based on features such as the color histogram, Eigen values and match points. Images from various types of databases are first identified by using edge detection techniques. Once the image is identified, it is searched for in the particular database and all related images are displayed, which saves retrieval time. Further, to retrieve the precise query image, any of the three techniques can be used, and a comparison is made with respect to average retrieval time. The Eigen-value technique was found to be the best compared with the other two techniques.
Analysis of combined approaches of CBIR systems by clustering at varying prec... (IJECEIAES)
The image retrieval system is used to retrieve images from an image database. Two types of image retrieval techniques are commonly used: content-based and text-based. One of the well-known image retrieval techniques that extracts images in an unsupervised way is the cluster-based image retrieval technique. In cluster-based image retrieval, all visual features of an image are combined to find a better retrieval rate and precision. The objective of the study was to develop a new model by combining three traits of an image: color, shape, and texture. The color-shape and color-texture models were compared against a threshold value at various precision levels. A union was formed of the newly developed model with the color-shape and color-texture models to find the retrieval rate in terms of precision of the image retrieval system. The experiments were run on the COREL standard database, and it was found that the union of the three models gives better results than image retrieval with the individual models. The newly developed model and the union of the given models also give better results than the existing system named cluster-based retrieval of images by unsupervised learning (CLUE).
PERFORMANCE ANALYSIS OF CLUSTERING BASED IMAGE SEGMENTATION AND OPTIMIZATION ... (cscpconf)
Partitioning of an image into several constituent components is called image segmentation.
Myriad algorithms using different methods have been proposed for image segmentation. Many
clustering algorithms and optimization techniques are also being used for segmentation of
images. A major challenge in segmentation evaluation comes from the fundamental conflict
between generality and objectivity. As there is a glut of image segmentation techniques
available today, the customer, who is the real user of these techniques, may get confused. In this
paper, to address the problem described above, some image segmentation techniques are evaluated based on their consistency in different applications. Based on the parameters used, a quantification of the different clustering algorithms is done.
The content based image retrieval (CBIR) technique
is one of the most popular and evolving research areas of the
digital image processing. The goal of CBIR is to extract visual
content like colour, texture or shape, of an image automatically.
This paper proposes an image retrieval method that uses colour
and texture for feature extraction. This system uses the query by
example model. The system allows the user to choose the feature on
the basis of which retrieval will take place. For retrieval
based on the colour feature, the RGB and HSV models are taken into
consideration, whereas for texture the GLCM is used for
extracting the textural features, which then go into a Vector
Quantization phase to speed up the retrieval process.
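The HSV colour feature described above can be sketched as a coarse joint histogram over hue, saturation and value, using the standard library's `colorsys` for the RGB-to-HSV conversion. The bin counts here are illustrative assumptions, not the paper's actual layout, and the vector quantization stage is omitted.

```python
import colorsys

def hsv_histogram(pixels, h_bins=8, s_bins=2, v_bins=2):
    """pixels: list of (r, g, b) tuples with components in [0, 1].
    Returns a normalized joint HSV histogram as a flat list."""
    hist = [0] * (h_bins * s_bins * v_bins)
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        hi = min(int(h * h_bins), h_bins - 1)  # clamp the 1.0 edge case
        si = min(int(s * s_bins), s_bins - 1)
        vi = min(int(v * v_bins), v_bins - 1)
        hist[(hi * s_bins + si) * v_bins + vi] += 1
    total = len(pixels) or 1
    return [c / total for c in hist]
```

The resulting 32-bin vector is the per-image colour descriptor that retrieval would compare.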
SEMANTIC IMAGE RETRIEVAL USING MULTIPLE FEATURES (cscpconf)
In Content Based Image Retrieval (CBIR), problems such as recognizing similar images, the need for databases, the semantic gap, and retrieving the desired images from huge collections are the keys to improve. A CBIR system analyzes the image content for indexing, management, extraction and retrieval via low-level features such as color, texture and shape. To achieve higher semantic performance, recent systems seek to combine the low-level features of images with high-level features that contain perceptual information for human beings. Performance improvements in indexing and retrieval play an important role in providing advanced CBIR services. To overcome the above problems, a new query-by-image technique using a combination of multiple features is proposed. The proposed technique efficiently sifts through the dataset of images to retrieve semantically similar images.
In Content-Based Image Retrieval (CBIR) systems, the visual contents of the images in the database are extracted and represented by multi-dimensional feature vectors. A well-known CBIR system that retrieves images by an unsupervised method is the cluster-based image retrieval system. To enhance the performance and retrieval rate of a CBIR system, we fuse the visual contents of an image. Recently, we developed two cluster-based CBIR systems by fusing the scores of two visual contents of an image. In this paper, we analyze the performance of the two recommended CBIR systems at different levels of precision using images of varying sizes and resolutions. We also compare the performance of the recommended systems with that of two other existing CBIR systems, namely UFM and CLUE. Experimentally, we find that the recommended systems outperform the two existing systems, and one recommended system also performed comparatively better at every image resolution.
Applications of spatial features in cbir a survey (csandit)
With advances in the computer technology and the World Wide Web there has been an
explosion in the amount and complexity of multimedia data that are generated, stored,
transmitted, analyzed, and accessed. In order to extract useful information from this huge
amount of data, many content based image retrieval (CBIR) systems have been developed in the
last decade. A typical CBIR system captures image features that represent image properties
such as color, texture, or shape of objects in the query image and try to retrieve images from the
database with similar features. Retrieval efficiency and accuracy are the important issues in
designing Content Based Image Retrieval System. The Shape and Spatial features are quiet easy
and simple to derive and effective. Researchers are moving towards finding spatial features and
the scope of implementing these features in to the image retrieval framework for reducing the
semantic gap. This Survey paper focuses on the detailed review of different methods and their
evaluation techniques used in the recent works based on spatial features in CBIR systems.
Finally, several recommendations for future research directions have been suggested based on
the recent technologies.
Similar to WEB IMAGE RETRIEVAL USING CLUSTERING APPROACHES
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR (cscpconf)
The progressive development of Synthetic Aperture Radar (SAR) systems has diversified the exploitation of the images generated by these systems in different applications of geoscience. Detection and monitoring of surface deformations produced by various phenomena have benefited from this evolution and have been realized by interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelation of the interferometric pairs used strongly limits the precision of the analysis results of these techniques. In this context, we propose in this work a methodological approach to surface deformation detection and analysis by differential interferograms, to show the limits of this technique according to noise quality and level. The detectability model is generated from the deformation signatures by simulating a linear fault merged into the image pairs of the ERS1/ERS2 sensors acquired over a region of the Algerian south.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFICATION (cscpconf)
A novel trajectory-guided, concatenative approach for synthesizing high-quality video from real image samples is proposed. The automated lip-reading seeks the real image sample sequence in the library, under the video data, that is closest to the HMM-predicted trajectory. The object trajectory is estimated by projecting the face patterns into a KDA feature space. The approach identifies a speaker's face by synthesizing the identity surface of a subject's face from a small sample of patterns which sparsely cover the view sphere. A KDA algorithm is used to discriminate the lip-reading images, after which the fundamental lip feature vector is reduced to a low-dimensional space using the 2D-DCT. The dimensionality of the mouth-area set is then reduced with PCA to obtain the Eigen-lips approach proposed by [33]. The subjective performance results of the cost function under the automatic lip-reading model did not illustrate superior performance of the method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE... (cscpconf)
Universities offer a software engineering capstone course to simulate a real-world working environment in which students can work in a team for a fixed period to deliver a quality product. The objective of the paper is to report on our experience in moving from a Waterfall process to an Agile process in conducting the software engineering capstone project. We present the capstone course designs for both the Waterfall-driven and Agile-driven methodologies, highlighting the structure, deliverables and assessment plans. To evaluate the improvement, we conducted a survey for two different sections taught by two different instructors to evaluate students' experience in moving from the traditional Waterfall model to an Agile-like process. Twenty-eight students filled in the survey, which consisted of eight multiple-choice questions and an open-ended question to collect feedback from students. The survey results show that students were able to attain hands-on experience that simulates a real-world working environment. The results also show that the Agile approach helped students to produce an overall better design and avoid mistakes they had made in the initial design completed in the first phase of the capstone project. In addition, they were able to assess their team capabilities and training needs, and thus learn the required technologies earlier, which is reflected in the final product quality.
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES (cscpconf)
Using social media in education provides learners with an informal means of communication. Informal communication tends to remove barriers and hence promotes student engagement. This paper presents our experience in using three different social media technologies in teaching a software project management course. We conducted surveys at the end of every semester to evaluate students' satisfaction and engagement. Results show that using social media enhances students' engagement and satisfaction. However, familiarity with the tool is an important factor for student satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC (cscpconf)
Using a computer to answer questions has been a human dream since the beginning of the digital era. Question-answering systems are referred to as intelligent systems that can provide responses to the questions asked by the user, based on certain facts or rules stored in a knowledge base, and can generate answers to questions asked in natural language. One of the first main ideas of fuzzy logic was to work on the problem of computer understanding of natural language. This survey paper therefore provides an overview of what question answering is, its system architecture, and its possible relationship with and differences from fuzzy logic, as well as the previous related research with respect to the approaches that were followed. At the end, the survey provides an analytical discussion of the proposed QA models, alone or combined with fuzzy logic, and their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS (cscpconf)
Human beings generate different speech waveforms while speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various words, which facilitates the preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses a dynamic programming technique for global alignment and shortest distance measurements. The DPW algorithm can be used to enhance the pronunciation dictionaries of well-known languages like English, or to build pronunciation dictionaries for lesser-known sparse languages. The precision measurement experiments show 88.9% accuracy.
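The global-alignment idea behind DPW can be sketched as a dynamic-programming distance between two phone sequences. The unit insertion/deletion/substitution costs here are assumptions for illustration; the paper's actual substitution costs between phones may differ.

```python
# Dynamic-programming alignment distance between two phone sequences,
# filling a (m+1) x (n+1) table of prefix-to-prefix distances.

def phone_distance(seq_a, seq_b):
    m, n = len(seq_a), len(seq_b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deleting i phones
    for j in range(n + 1):
        d[0][j] = j  # inserting j phones
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if seq_a[i - 1] == seq_b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[m][n]
```

Two pronunciations of "tomato" that differ in one vowel would be one substitution apart under these costs.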
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS (cscpconf)
In education, the use of electronic (E) examination systems is not a novel idea, as E-examination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. This system assesses answers to subjective questions by finding a matching ratio for the keywords in the instructor's and student's answers. The matching ratio is achieved based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessments.
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTIC (cscpconf)
African Buffalo Optimization (ABO) is one of the most recent swarm-intelligence-based metaheuristics. The ABO algorithm is inspired by the buffalo's behavior and lifestyle. Unfortunately, the standard ABO algorithm is proposed only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. In the first version (called SBABO) they use the sigmoid function and a probability model to generate binary solutions. In the second version (called LBABO) they use logical operators to operate on the binary solutions. Computational results on two knapsack problem (KP and MKP) instances show the effectiveness of the proposed algorithms and their ability to achieve good and promising solutions.
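The sigmoid-based binarization used in SBABO-style algorithms can be sketched as follows: each continuous position component is squashed into a probability and compared with a uniform random draw. The exact sampling details are an assumption read off the abstract, not the authors' published update rule.

```python
import math
import random

def sigmoid(x):
    """Squash a continuous component into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng):
    """Map a list of continuous components to a 0/1 solution: each bit is 1
    with probability sigmoid(component)."""
    return [1 if rng.random() < sigmoid(x) else 0 for x in position]
```

Strongly positive components almost always yield 1 and strongly negative ones 0, so the continuous buffalo dynamics steer the binary solutions.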
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAIN (cscpconf)
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure to ensure a persistent presence on a compromised host. Amongst the various DDNS techniques, the Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and weighted scores of the domain names. The approach's feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmically generated domain names. Findings from this study show that domain names made up of English characters "a-z" achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
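A hypothetical sketch of the weighted-score idea above: score each character by how common it is in English, so human-chosen names (built from frequent letters) score high while uniformly random DGA-like strings score low. Both the weight table and its relation to the paper's threshold of 45 are illustrative assumptions, not the authors' actual values.

```python
# Rank letters by approximate English frequency and weight them 26 (most
# common, 'e') down to 1 (least common, 'z'); score a domain by the mean
# weight of its letters.

ENGLISH_FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"
WEIGHTS = {ch: 26 - i for i, ch in enumerate(ENGLISH_FREQ_ORDER)}

def weighted_score(domain):
    letters = [ch for ch in domain.lower() if ch.isalpha()]
    if not letters:
        return 0.0
    return sum(WEIGHTS.get(ch, 0) for ch in letters) / len(letters)
```

Under such a scheme, a classifier would flag domains whose score falls below a tuned threshold as likely machine-generated.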
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...cscpconf
The amount of piracy in the streaming digital content in general and the music industry in specific is posing a real challenge to digital content owners. This paper presents a DRM solution to monetizing, tracking and controlling online streaming content cross platforms for IP enabled devices. The paper benefits from the current advances in Blockchain and cryptocurrencies. Specifically, the paper presents a Global Music Asset Assurance (GoMAA) digital currency and presents the iMediaStreams Blockchain to enable the secure dissemination and tracking of the streamed content. The proposed solution provides the data owner the ability to control the flow of information even after it has been released by creating a secure, selfinstalled, cross platform reader located on the digital content file header. The proposed system provides the content owners’ options to manage their digital information (audio, video, speech, etc.), including the tracking of the most consumed segments, once it is release. The system benefits from token distribution between the content owner (Music Bands), the content distributer (Online Radio Stations) and the content consumer(Fans) on the system blockchain.
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMcscpconf
This paper discusses the importance of verb suffix mapping in Discourse translation system. In
discourse translation, the crucial step is Anaphora resolution and generation. In Anaphora
resolution, cohesion links like pronouns are identified between portions of text. These binders
make the text cohesive by referring to nouns appearing in the previous sentences or nouns
appearing in sentences after them. In Machine Translation systems, to convert the source
language sentences into meaningful target language sentences the verb suffixes should be
changed as per the cohesion links identified. This step of translation process is emphasized in
the present paper. Specifically, the discussion is on how the verbs change according to the
subjects and anaphors. To explain the concept, English is used as the source language (SL) and
an Indian language Telugu is used as Target language (TL)
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...cscpconf
In this paper, based on the definition of conformable fractional derivative, the functional
variable method (FVM) is proposed to seek the exact traveling wave solutions of two higherdimensional
space-time fractional KdV-type equations in mathematical physics, namely the
(3+1)-dimensional space–time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-
dimensional space–time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony
(GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which
contain kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave
solutions, have many potential applications in mathematical physics and engineering. The
simplicity and reliability of the proposed method is verified.
AUTOMATED PENETRATION TESTING: AN OVERVIEWcscpconf
The using of information technology resources is rapidly increasing in organizations,
businesses, and even governments, that led to arise various attacks, and vulnerabilities in the
field. All resources make it a must to do frequently a penetration test (PT) for the environment
and see what can the attacker gain and what is the current environment's vulnerabilities. This
paper reviews some of the automated penetration testing techniques and presents its
enhancement over the traditional manual approaches. To the best of our knowledge, it is the
first research that takes into consideration the concept of penetration testing and the standards
in the area.This research tackles the comparison between the manual and automated
penetration testing, the main tools used in penetration testing. Additionally, compares between
some methodologies used to build an automated penetration testing platform.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKcscpconf
Since the mid of 1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing
attention of neuroscientists and computer scientists, since it opens a new window to explore
functional network of human brain with relatively high resolution. BOLD technique provides
almost accurate state of brain. Past researches prove that neuro diseases damage the brain
network interaction, protein- protein interaction and gene-gene interaction. A number of
neurological research paper also analyse the relationship among damaged part. By
computational method especially machine learning technique we can show such classifications.
In this paper we used OASIS fMRI dataset affected with Alzheimer’s disease and normal
patient’s dataset. After proper processing the fMRI data we use the processed data to form
classifier models using SVM (Support Vector Machine), KNN (K- nearest neighbour) & Naïve
Bayes. We also compare the accuracy of our proposed method with existing methods. In future,
we will other combinations of methods for better accuracy.
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...cscpconf
In order to treat and analyze real datasets, fuzzy association rules have been proposed. Several
algorithms have been introduced to extract these rules. However, these algorithms suffer from
the problems of utility, redundancy and large number of extracted fuzzy association rules. The
expert will then be confronted with this huge amount of fuzzy association rules. The task of
validation becomes fastidious. In order to solve these problems, we propose a new validation
method. Our method is based on three steps. (i) We extract a generic base of non redundant
fuzzy association rules by applying EFAR-PN algorithm based on fuzzy formal concept analysis.
(ii) we categorize extracted rules into groups and (iii) we evaluate the relevance of these rules
using structural equation model.
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATAcscpconf
In many applications of data mining, class imbalance is noticed when examples in one class are
overrepresented. Traditional classifiers result in poor accuracy of the minority class due to the
class imbalance. Further, the presence of within class imbalance where classes are composed of
multiple sub-concepts with different number of examples also affect the performance of
classifier. In this paper, we propose an oversampling technique that handles between class and
within class imbalance simultaneously and also takes into consideration the generalization
ability in data space. The proposed method is based on two steps- performing Model Based
Clustering with respect to classes to identify the sub-concepts; and then computing the
separating hyperplane based on equal posterior probability between the classes. The proposed
method is tested on 10 publicly available data sets and the result shows that the proposed
method is statistically superior to other existing oversampling methods.
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHcscpconf
Data collection is an essential, but manpower intensive procedure in ecological research. An
algorithm was developed by the author which incorporated two important computer vision
techniques to automate data cataloging for butterfly measurements. Optical Character
Recognition is used for character recognition and Contour Detection is used for imageprocessing.
Proper pre-processing is first done on the images to improve accuracy. Although
there are limitations to Tesseract’s detection of certain fonts, overall, it can successfully identify
words of basic fonts. Contour detection is an advanced technique that can be utilized to
measure an image. Shapes and mathematical calculations are crucial in determining the precise
location of the points on which to draw the body and forewing lines of the butterfly. Overall,
92% accuracy were achieved by the program for the set of butterflies measured.
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...cscpconf
Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of the city
services including energy, transportation, health, and much more. They generate massive
volumes of structured and unstructured data on a daily basis. Also, social networks, such as
Twitter, Facebook, and Google+, are becoming a new source of real-time information in smart
cities. Social network users are acting as social sensors. These datasets so large and complex
are difficult to manage with conventional data management tools and methods. To become
valuable, this massive amount of data, known as 'big data,' needs to be processed and
comprehended to hold the promise of supporting a broad range of urban and smart cities
functions, including among others transportation, water, and energy consumption, pollution
surveillance, and smart city governance. In this work, we investigate how social media analytics
help to analyze smart city data collected from various social media sources, such as Twitter and
Facebook, to detect various events taking place in a smart city and identify the importance of
events and concerns of citizens regarding some events. A case scenario analyses the opinions of
users concerning the traffic in three largest cities in the UAE
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGEcscpconf
The anonymity of social networks makes it attractive for hate speech to mask their criminal
activities online posing a challenge to the world and in particular Ethiopia. With this everincreasing
volume of social media data, hate speech identification becomes a challenge in
aggravating conflict between citizens of nations. The high rate of production, has become
difficult to collect, store and analyze such big data using traditional detection methods. This
paper proposed the application of apache spark in hate speech detection to reduce the
challenges. Authors developed an apache spark based model to classify Amharic Facebook
posts and comments into hate and not hate. Authors employed Random forest and Naïve Bayes
for learning and Word2Vec and TF-IDF for feature selection. Tested by 10-fold crossvalidation,
the model based on word2vec embedding performed best with 79.83%accuracy. The
proposed method achieve a promising result with unique feature of spark for big data.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTcscpconf
This article presents Part of Speech tagging for Nepali text using General Regression Neural
Network (GRNN). The corpus is divided into two parts viz. training and testing. The network is
trained and validated on both training and testing data. It is observed that 96.13% words are
correctly being tagged on training set whereas 74.38% words are tagged correctly on testing
data set using GRNN. The result is compared with the traditional Viterbi algorithm based on
Hidden Markov Model. Viterbi algorithm yields 97.2% and 40% classification accuracies on
training and testing data sets respectively. GRNN based POS Tagger is more consistent than the
traditional Viterbi decoding technique.
The Indian economy is classified into different sectors to simplify the analysis and understanding of economic activities. For Class 10, it's essential to grasp the sectors of the Indian economy, understand their characteristics, and recognize their importance. This guide will provide detailed notes on the Sectors of the Indian Economy Class 10, using specific long-tail keywords to enhance comprehension.
For more information, visit-www.vavaclasses.com
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
How to Create Map Views in the Odoo 17 ERPCeline George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide or invisible some fields in odoo. Commonly using “invisible” attribute in the field definition to invisible the fields. This slide will show how to make a field invisible in odoo 17.
Computer Science & Information Technology (CS & IT)
motivated to utilize the technique of CBIR to implement our proposed model to retrieve Web
images.
In recent years, many researchers have worked in the area of content-based image retrieval
(CBIR) from unannotated image databases [4]. Nowadays, a great many people use digital image
and video libraries as sources of visual information. The technologies needed for retrieving,
managing and browsing this growing amount of information are still open issues and remain an
open challenge for the research community. Many projects have been started in recent years to
research and develop efficient systems for content-based image retrieval. The best-known CBIR
system is probably Query by Image Content (QBIC) [5], developed at the IBM Almaden Research
Center. Other noteworthy systems include MIT's Photobook [6] and its more recent version,
FourEyes [7], and the search engine family of VisualSEEk, WebSEEk, and MetaSEEk
[8], [9], [10], all developed at Columbia University. Virage [11] is a commercial content-based
search engine developed at Virage Technologies.
Our proposed model uses a cluster-based approach to compute the centroids for every image;
these centroids are used for image representation. The main advantage of this algorithm is that it
is simple and takes little time to form the clusters of pixels. K-Means is an unsupervised,
iterative clustering algorithm that finds an optimal set of clusters. The retrieval method, based
on the similarity measure, is discussed in detail in section 2.4.
The organization of the paper is as follows. In section 2, the methodology of extracting image
features, pixel segmentation, object clustering, similarity distance computation and
implementation is discussed. In section 3, we present the conclusions of the paper.
2. METHODOLOGY
The block diagram of the image retrieval system is shown in figure 1. It consists of two phases:
off-line and on-line.
In the off-line phase, the feature extraction module extracts color and texture features. These
features are input to the clustering module to find the centroids of every image, which are then
stored in the centroids database.
[Diagram omitted: the off-line path runs from the input image through feature extraction,
divisive clustering and K-means clustering to the centroids database; the on-line path runs from
the query image through the similarity measure to the set of similar images.]
Figure 1: The block diagram of the image retrieval system.
In the on-line phase, image searching is processed in the retrieval stage. The system accepts a
query image and compares it with the stored image features, using a threshold and object
uniqueness to measure similarity and retrieve images from the image database.
In brief, this paper discusses three important tasks of the proposed system.
1. An image pre-processing module that obtains image features. These features are used
to segment the image into objects/regions by grouping pixels with similar descriptions.
2. Clustering the objects to speed up the retrieval system.
3. Employing a similarity distance measure to determine the distance between the query
image and the image features database.
2.1. Feature Extraction
In this section, we discuss how to extract the image features of color and textures. According to
Li and Wang [12] and Zhang [13], image consists of large number of pixels. The values of pixels
are redundant; it can be reduced in the pre-processing stage of an image. In the pre-processing
stage, to achieve good result the image is portioned into 4 by 4 blocks; each block is replaced by
the average of 4 by 4 pixel values to compromise between texture granularity, segmentation
coarseness, and computation time. The HSV color space is widely used for extracting color
features due to its ability to transform from RGB to HSV and vice versa. Since HSV color space
has an advantage to produce a collection of colors that is also compact and complete. These
features are denoted as {f1, f2, and f3}.
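The block-averaging and HSV conversion described above can be sketched as follows. This is an illustrative Python fragment, not the paper's MATLAB implementation; RGB pixel values are assumed to be normalized to [0, 1].

```python
import colorsys

def block_color_features(rgb_block):
    """Average a 4x4 block of (r, g, b) pixels and convert the mean
    colour to HSV, giving the colour features {f1, f2, f3}."""
    n = len(rgb_block)
    r = sum(p[0] for p in rgb_block) / n
    g = sum(p[1] for p in rgb_block) / n
    b = sum(p[2] for p in rgb_block) / n
    # colorsys returns (h, s, v), interpreted here as (f1, f2, f3).
    return colorsys.rgb_to_hsv(r, g, b)
```

For example, a block of pure-red pixels yields (h, s, v) = (0.0, 1.0, 1.0).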
To obtain the other three texture features, we apply the Haar wavelet transform to the L
component of the image. The Haar wavelet is discontinuous and resembles a step function; the
texture features represent the energy in the high-frequency bands of the Haar wavelet
transform [14]. After a one-level wavelet transform, a 4 by 4 block is decomposed into four
frequency bands, each band containing a 2 by 2 matrix of coefficients. Suppose the coefficients
in the HL band are {c_{k,l}, c_{k,l+1}, c_{k+1,l}, c_{k+1,l+1}}. Then the feature of the block in
the HL band is computed using equation 2.1:
f = \left[ \frac{1}{4} \sum_{i=0}^{1} \sum_{j=0}^{1} c_{k+i,\,l+j}^{2} \right]^{1/2}        (2.1)
The other two features are computed similarly from the LH and HH bands. These three features
of the block are denoted {f4, f5, f6}.
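The one-level decomposition and the band-energy feature of equation (2.1) can be illustrated as follows. This is a sketch using the averaging/differencing form of the Haar filters; the paper does not specify its normalization, so the scaling here is an assumption.

```python
import math

def haar_band_features(block):
    """One-level 2-D Haar transform of a 4x4 luminance block.
    Returns the feature of equation (2.1) for the HL, LH and HH
    bands, i.e. sqrt((1/4) * sum of squared band coefficients)."""
    # Transform rows: pairwise average (low-pass) then difference (high-pass).
    rows = []
    for r in block:
        lo = [(r[0] + r[1]) / 2, (r[2] + r[3]) / 2]
        hi = [(r[0] - r[1]) / 2, (r[2] - r[3]) / 2]
        rows.append(lo + hi)
    # Transform columns of the row-transformed block the same way.
    out = [[0.0] * 4 for _ in range(4)]
    for c in range(4):
        col = [rows[r][c] for r in range(4)]
        out[0][c], out[1][c] = (col[0] + col[1]) / 2, (col[2] + col[3]) / 2
        out[2][c], out[3][c] = (col[0] - col[1]) / 2, (col[2] - col[3]) / 2
    # Each band is a 2x2 quadrant of the transformed block.
    bands = {"HL": [out[r][c] for r in (0, 1) for c in (2, 3)],
             "LH": [out[r][c] for r in (2, 3) for c in (0, 1)],
             "HH": [out[r][c] for r in (2, 3) for c in (2, 3)]}
    return {b: math.sqrt(sum(x * x for x in cs) / 4) for b, cs in bands.items()}
```

A constant block produces zero energy in every high-frequency band, while a block alternating along the rows puts its energy into the HL band.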
Let image I be represented as I = \{R_1, R_2, \ldots, R_n\}, x \in R^d, a region-based feature
vector set. These region features are extracted using the K-means algorithm, which finds the
signature of N weight vectors, c = \{c_1, c_2, \ldots, c_N\}, as representatives of the feature
vector of I = {f1, f2, f3, f4, f5, f6}.
2.2. Pixel Segmentation
In section 2.1, we discussed the procedure for obtaining the required features. These six features
are taken into consideration to segment an image into objects. For segmenting an image, we
modified the K-Means clustering algorithm to group similar pixels together into k groups with
cluster centers. This is quite efficient and minimizes the sum-squared error defined in
equation 2.2.
D(k) = \sum_{i=1}^{l} \min_{1 \le j \le k} \left( x_i - \bar{x}_j \right)^2        (2.2)
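The objective of equation (2.2) can be computed directly: each pixel feature vector contributes its squared distance to the nearest of the k cluster centers. A minimal sketch:

```python
def sse(points, centers):
    """Sum-of-squared-error D(k) of equation (2.2): for each feature
    vector, add the squared Euclidean distance to its nearest center."""
    total = 0.0
    for x in points:
        total += min(sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                     for c in centers)
    return total
```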
The advantages of the K-Means algorithm are that it is simple and quite efficient, and that it
works even when clusters are not well separated from each other [15], which can be the case for
web images. Its disadvantage is that it expects the value of k from the user, and each value of k
yields a different set of centroids. To some extent, selecting the value of k requires domain
knowledge; if k is chosen randomly, each choice gives a different total sum-squared error, so
random selection is not a good way to obtain good clusters. This is the main issue we focus on
in this paper: choosing the value of k for K-Means dynamically, so that the number of clusters
is determined rather than chosen at random.
This is why we use a divisive-based algorithm to obtain the initial cluster centers. In this
approach, the closest pairs of pixels are first merged to form clusters. These clusters are then
merged to generate new, bigger clusters, finally forming a single hierarchical cluster that covers
the whole image.
To calculate the distance between the new cluster and the remaining clusters, we use the
average linkage method. Let c_n be a new cluster produced by merging clusters c_i and c_j, and
let c_k be a remaining cluster. The distance between the new cluster and a remaining cluster is
computed using equation 2.3:
Dist(c_n, c_k) = \frac{|c_i|}{|c_i| + |c_j|}\,Dist(c_i, c_k) + \frac{|c_j|}{|c_i| + |c_j|}\,Dist(c_j, c_k)        (2.3)
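Equation (2.3) is the standard size-weighted (average linkage) distance update: the distance from the merged cluster to any remaining cluster is the weighted mean of the two previous distances. A minimal sketch:

```python
def average_linkage_update(size_i, size_j, dist_ik, dist_jk):
    """Equation (2.3): distance from the cluster formed by merging
    c_i and c_j to a remaining cluster c_k, weighted by cluster sizes."""
    total = size_i + size_j
    return (size_i / total) * dist_ik + (size_j / total) * dist_jk
```

With two singleton clusters the update is just the arithmetic mean of the old distances.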
We use the Pearson correlation coefficient to measure the distance between pixels, since it
yields better clusters than the Euclidean distance during image segmentation. Let x_i be the
features of pixel 1 and y_i those of pixel 2, where i = 1..6. The distance between pixels x and y
is computed using the Pearson correlation coefficient over the 6 features per pixel. We set
different weights for color (Wc) and texture (Wt) to obtain a better clustering result.
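A hedged sketch of the weighted Pearson-correlation distance follows. The paper does not state how Wc and Wt enter the computation, so the per-feature weighting below, and the weight values 0.7/0.3, are illustrative assumptions.

```python
import math

def pearson_distance(x, y, wc=0.7, wt=0.3):
    """Illustrative weighted Pearson-correlation distance between two
    pixels: features x[0:3] are colour (weight wc) and x[3:6] texture
    (weight wt). Distance = 1 - r, so identical pixels give 0."""
    w = [wc] * 3 + [wt] * 3  # assumed per-feature weighting
    xs = [wi * xi for wi, xi in zip(w, x)]
    ys = [wi * yi for wi, yi in zip(w, y)]
    mx, my = sum(xs) / 6, sum(ys) / 6
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = math.sqrt(sum((a - mx) ** 2 for a in xs) *
                    sum((b - my) ** 2 for b in ys))
    return 1.0 - (num / den if den else 0.0)
```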
2.3. Object Clustering and Similarity Distance Computation
In this section, we discuss object clustering and similarity distance computation. The goal of
this step is to increase the speed of the image retrieval system. Comparing the query image with
all images in the database takes more time and consumes more memory when the database is
large, so we cluster similar objects. To reduce the computational cost of the image retrieval
system, we perform hierarchical clustering on the entire set of objects. Each object is identified
by six features, which are the averages of the features of all its member pixels.
The query image goes through the same image segmentation algorithm to obtain objects. These
objects are compared with the cluster centers in the database, and similarity is calculated using
the L2 distance. The objects in the database that have the minimum distance are returned to
perform the global image distance computation between the query image and the database
image.
Figure 2.2: Overall similarity distance between image query and image database.
Figure 2.2 shows that image 1 is the query image and image 2 is the returned image from the
image database. To determine the overall distance between the query image and the returned
image, we match all the objects from the two images. The distance between the first object from
image 1 (O_{11}) and the first object from image 2 (O_{21}) is defined as:
O_{11,21} = \sum_{i=1}^{6} (f_{11i} - f_{21i})^2        (2.4)
where f_i are the color and texture features of each object.
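Equation (2.4) is a squared L2 distance over the six object features; a minimal sketch:

```python
def object_distance(f1, f2):
    """Equation (2.4): squared L2 distance between the six colour and
    texture features of a query-image object and a database object."""
    return sum((a - b) ** 2 for a, b in zip(f1, f2))
```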
The similarity distance computation between image1 and image2 can be defined as:
w_p = \alpha_p \beta_p \gamma_p        (2.5)

where \alpha_p = 1 when the two regions being matched are at the closest distance,
\alpha_p = 0 when the two regions being matched are not at the closest distance,
\beta_p = (number of pixels in object p) / (number of pixels in the whole image), and
\gamma_p = the uniqueness of object p, based on how often the object appears.
We determine the uniqueness of an object based on its visual appearance in the object cluster:
when an object appears frequently in the database, it is considered less unique, and vice versa.
To obtain the value of \gamma_p, we first carry out hierarchical clustering to group similar
objects. We then determine the uniqueness of object p (\gamma_p) from the number of objects
belonging to its cluster relative to the total number of objects in the database. The similarity
distance is not symmetric, since D(Im1, Im2) and D(Im2, Im1) differ; to make it symmetric, we
take their average. The overall distance is defined as:
Overall\,D(Im1, Im2) = \frac{D(Im1, Im2) + D(Im2, Im1)}{2}        (2.6)
Using this equation, it is straightforward to determine D(ImX, ImY) for every pair of images in
the database. This is a well-balanced similarity measure between regional and global matching.
Clustering the objects into groups is necessary for two reasons: faster image retrieval, and
determining the value of object uniqueness, as discussed in subsection 2.4.2.
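Equations (2.5) and (2.6) can be sketched together as follows; this is an illustration of the weighting and symmetrization only, not the paper's full matching procedure.

```python
def match_weight(is_closest, obj_pixels, image_pixels, uniqueness):
    """Equation (2.5): w_p = alpha_p * beta_p * gamma_p. alpha_p is 1
    only for the closest-distance match, beta_p is the object's pixel
    fraction, and gamma_p its uniqueness score."""
    alpha = 1.0 if is_closest else 0.0
    beta = obj_pixels / image_pixels
    return alpha * beta * uniqueness

def overall_distance(d_12, d_21):
    """Equation (2.6): average the two directed distances D(Im1, Im2)
    and D(Im2, Im1) to make the measure symmetric."""
    return (d_12 + d_21) / 2
```

A non-closest match contributes zero weight regardless of size or uniqueness, which is what restricts the sum to best-matching object pairs.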
2.4. Implementation
To retrieve relevant images, the user sends a query in the form of an image to the image
retrieval system to find the set of similar images. Whenever the system receives a query, it
sends it to the feature extraction module to compute the signature of the image. This signature
is compared with the feature vectors stored in the database using the similarity measure. The
similar images are retrieved and their corresponding thumbnails are displayed to the user.
These thumbnail images are stored in the database in JPEG format for presentation purposes.
2.4.1. Performance
We have conducted experiments with two image databases of different sizes. The first database
contains 1,000 images at 384 × 256 resolution. The second database contains 10,000
low-resolution, web-crawled miscellaneous images at 128 × 85 resolution. Most of the images
are color photographs in JPEG format, used in WBIIS and downloaded from the image
collection at http://wang.ist.psu.edu/docs/related.shtml. The first database has 10 categories
with around 100 images per category; the second has 100 categories with around 100 images
per category.
We used a Pentium(R) 4, 3.00 GHz PC running the Windows XP operating system. It takes
approximately 3 hours to compute the feature vectors; on average, 0.003 seconds is needed to
compute the features of all blocks of an image. The feature clustering process is performed only
once for our database. The clustering procedure is discussed in detail in section 2.3.
We conducted an experiment to measure the recall and precision of the system using query
images that are present in the database, and likewise query images that are not present in the
database. We obtain good precision for the query images shown in figures 2.3, 2.4, 2.5 and 2.6.
Due to space limitations, we show only two rows of images with the top 10 matches of each
sample query.
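The per-query precision and recall reported below can be computed as follows; this is a generic sketch, with image ids and class sets chosen for illustration.

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query: `retrieved` is the ranked
    list of returned image ids, `relevant` the set of ids belonging to
    the query's class."""
    hits = sum(1 for r in retrieved if r in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```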
2.4.2. Experiments
In the pre-processing stage, we used the MATLAB image processing tools to extract the pixel
features of each image.
2.4.2.1. Experiment #1 – Similarity Distance Measure
We obtain objects during pixel clustering. To find a suitable number of objects per image, we
need to set the similarity distance measure used during divisive clustering. To do this, we
randomly selected 10 images from each image class, for a total of 100 images from the database,
and ran the experiment once. For each image, we set the similarity measure at 0.6, 0.7 and 0.8.
The number of clusters produced at each setting was recorded, and the average number of
clusters for each similarity measure was computed, as shown in Table 2.1.
Table 2.1: Correlation between similarity measure and average number of objects

Similarity measure    Average number of objects
0.6                   3.3
0.7                   6.4
0.8                   9.2
We decided to set the similarity measure to 0.7 because at that setting the average number of
clusters per image is 6.3, which is a good number of objects. We argue that 6.3 is a good number
of objects for this particular database because when we randomly selected 100 images and
visually counted the number of objects per image, the average was 6.3. To see the performance
of experiment #1, we randomly selected 2 images from different classes, namely horse and
flower. Each query returns the top 10 images from the database. The two query retrievals are
shown in Figures 2.2 and 2.3.
Figure 2.2: Best top 10 matches of a sample horse image query using similarity distance.
Figure 2.3: Best top 9 matches of a sample flower image query using similarity distance.
2.4.2.2. Experiment #2 – Object Uniqueness
The idea of object uniqueness is to give more weight to objects that are distinctive compared
with objects that are common. To do this, we clustered all objects in the database. Objects that
belong to larger clusters get less uniqueness weight than objects that belong to smaller clusters.
The number of clusters selected during object clustering affects the retrieval accuracy, because
the object uniqueness parameter is obtained from the object clustering. Since object uniqueness
is one of the variables in the similarity distance computation, we must select the number of
clusters that gives the best performance. During the experiment we therefore chose four
different numbers of clusters for object clustering: 1, 10, 25, and 35. All 100 images from each
image class were selected as queries, and each query returns the top 10 results from the
database. For each query we examine the precision of the returned images based on the
relevance of their semantic meaning. Since images with similar semantic meaning belong to the
same class, and each class has the same number of images, it is straightforward to calculate
precision and recall. The precision and recall using the different numbers of object clusters are
compared in Table 2.1.
Table 2.1: Average precision/recall comparison of top 1, 10, 25, and 35 closest distances.

Image Id    object cluster(1)    object cluster(10)    object cluster(25)    object cluster(35)
1           0.22                 0.27                  0.31                  0.3
2           0.28                 0.31                  0.35                  0.34
3           0.29                 0.35                  0.42                  0.423
4           0.75                 0.61                  0.76                  0.75
5           0.89                 0.96                  1.02                  0.97
6           0.6                  0.59                  0.78                  0.76
7           0.31                 0.28                  0.32                  0.32
8           0.41                 0.4                   0.42                  0.41
9           0.34                 0.31                  0.36                  0.35
10          0.41                 0.39                  0.44                  0.42
From Figure 2.4 we observe that precision and recall are higher in almost all image classes
when the number of object clusters is set to 25. The outcome with 35 object clusters is better
than with 1 object cluster, while the result with 10 object clusters is worse than the rest. From
this result we conclude that the object uniqueness parameter in the similarity distance
computation increases the performance of image retrieval. However, to get the best result, we
need to find the right number of clusters during object clustering.
Figure 2.4 Correlation between similarity measures and average number of objects
To see the performance of Experiment #2, we randomly selected 2 images from different
classes, namely horse and flower. Each query returns the top 10 images from the database. The
two query retrievals are shown in Figures 2.5 and 2.6.
Figure 2.5: Best top 10 matches of a sample horse image query.
Figure 2.6: Best top 10 matches of a sample flower image query.
3. CONCLUSIONS
We developed an image retrieval system using clustering approaches. Our proposed model uses
a modified clustering algorithm to improve image segmentation, and we introduced a new
similarity distance measure in which object uniqueness is considered during computation. This
increases the speed of the retrieval system and achieves better precision and recall. We
conducted experiments to evaluate the performance of the system with the similarity measure,
using a threshold value and object uniqueness. We observed that in experiment #2 the object
uniqueness measure gives better results than experiment #1, because it gives more weight to
objects that are more distinctive than others. Our proposed model gives better results on some
of the classes and performs worse when the image is complex and the objects have smooth
edges. During the implementation, we also showed that considering object uniqueness during
similarity distance computation improves retrieval accuracy. This work can be extended to
classify the images using an advanced classifier to achieve better performance for a particular
class.
REFERENCES
[1] http://images.google.com/
[2] http://www.ditto.com/
[3] http://www.altavista.com/
[4] Rui, Y., Huang, T.S., Chang, S.-F., 1999. Image retrieval: current techniques, promising directions,
and open issues. Journal of Visual Communication and Image Representation 10 (1), 39-62.
[5] Flickner, M., Sawhney, H., Niblack, W., et al., 1995. Query by image and video content: the QBIC
system. IEEE Computer, September, 23-31.
[6] Pentland, A., Picard, R.W., Sclaroff, S., 1994. Photobook: tools for content-based manipulation of
image databases. In: Storage and Retrieval for Image and Video Databases II. SPIE Proceedings
Series, Vol. 2185. San Jose, CA, USA.
[7] Minka, T.P., 1996. An image database browser that learns from user interaction. Master's thesis,
M.I.T., Cambridge, MA.
[8] Swain, M.J., Frankel, C., Athitsos, V., 1997. WebSeer: an image search engine for the
World Wide Web. Technical Report 96-14.
[9] Smith, J.R., 1997. Integrated Spatial and Feature Image Systems: Retrieval, Compression and Analysis.
PhD thesis, Graduate School of Arts and Sciences, Columbia University, February 1997.
[10] Sclaroff, S., Taycher, L., La Cascia, M., 1997. ImageRover: a content-based image browser for the
World Wide Web. In: Proceedings IEEE Workshop on Content-Based Access of Image and Video
Libraries.
[11] Bach, J.R., Fuller, C., Gupta, A., et al., 1996. The Virage image search engine: an open framework for
image management. In: Sethi, I.K., Jain, R.J. (Eds.), Storage and Retrieval for Image and Video
Databases IV. SPIE Proceedings Series, Vol. 2670. San Jose, CA, USA.
[12] Li, J., Wang, J.Z., Wiederhold, G., 2000. Integrated region matching for image retrieval.
ACM Multimedia, pp. 147-156.
[13] Zhang, R., Zhang, Z., 2002. A clustering based approach to efficient image retrieval.
Proceedings of the 14th IEEE International Conference on Tools with Artificial Intelligence, p. 339.
[14] Daubechies, I., 1992. Ten Lectures on Wavelets. Capital City Press, Montpelier, Vermont.
[15] Guha, S., Rastogi, R., Shim, K., 1998. CURE: an efficient clustering algorithm for large
databases. Proc. of ACM SIGMOD International Conference on Management of Data, pp. 73-84.
Authors
Umesh K K is a faculty member of the Department of Information Science and Engineering,
Sri Jayachamarajendra College of Engineering, Mysore, and a Research Scholar in the
Department of Studies in Computer Science, University of Mysore. He received his M.Tech
degree in 2003 from Visvesvaraya Technological University, Karnataka, and is working
towards his doctoral degree under the supervision of Dr. Suresha.
Dr. Suresha is presently working as Associate Professor in the Department of Studies in
Computer Science, University of Mysore. He received his B.Sc. degree in 1987 and M.Sc.
degree in 1989 from the University of Mysore, an M.Phil. degree in 1991 from DAVV, Indore,
an M.Tech degree from the Indian Institute of Technology, Kharagpur, in 1996, and a Ph.D.
from the Indian Institute of Science, Bangalore, in 2007. He was awarded second prize in
the IRISS-2002 competition, an all-India research student competition called the
"Inter Research Institute Student Seminar". He has 21 years of teaching experience at the
postgraduate level, has 12 publications to his credit, and is currently supervising five
research scholars towards their doctoral work.