Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open-access journal, available online and in print, that provides rapid (monthly) publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic, and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published within 20 days of acceptance, and the peer-review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
Query Image Searching With Integrated Textual and Visual Relevance Feedback f...IJERA Editor
Many researchers have studied relevance feedback in the content-based image retrieval (CBIR) literature, but no CBIR search engine supports it, because of scalability, effectiveness, and efficiency issues. In this work, we implement integrated relevance feedback for web image retrieval. We concentrate on integrating relevance feedback (RF) based on both textual features (TF) and visual features (VF), and we also test the two individually. The TF-based RF employs an effective search result clustering (SRC) algorithm to obtain salient phrases. A new user interface (UI) is then proposed to support RF. Experimental results show that the proposed algorithm is scalable, effective, and accurate.
Image fusion is a technique used to integrate a high-resolution panchromatic image with a low-resolution multispectral image to produce a high-resolution multispectral image that contains both the spatial information of the high-resolution panchromatic image and the color information of the multispectral image. Although an increasing number of high-resolution images are becoming available as sensor technology develops, image fusion remains a popular and important method for interpreting image data to obtain an image better suited to a variety of applications, such as visual interpretation and digital classification. To obtain complete information from a single image, we need a method to fuse the images. In this paper we propose a method that uses a hybrid of wavelets for image fusion.
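The fusion rule described above (keep the color content of the multispectral image, inject the spatial detail of the panchromatic image) can be sketched with a single-level Haar transform. This is a generic illustration, not the authors' hybrid-wavelet method; the function names are hypothetical, and the two inputs are assumed to be grayscale arrays with the same even dimensions:

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition (image dimensions must be even)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation (low-low) subband
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(pan, ms):
    """Keep the multispectral approximation, inject panchromatic detail."""
    _, lh_p, hl_p, hh_p = haar2d(pan)
    ll_m, _, _, _ = haar2d(ms)
    return ihaar2d(ll_m, lh_p, hl_p, hh_p)
```

Taking the approximation subband from `ms` and the detail subbands from `pan` is the simplest substitution rule; practical pan-sharpening pipelines use several decomposition levels and process each color channel separately.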
Super-resolution (SR) is the process of obtaining a high-resolution (HR) image, or a sequence of HR images, from a set of low-resolution (LR) observations. In super-resolution, block matching algorithms are used for motion estimation, to obtain motion vectors between frames. The implementation and comparison of two types of block matching algorithm, Exhaustive Search (ES) and Spiral Search (SS), are discussed. The advantages of each algorithm are given in terms of motion-estimation computational complexity and Peak Signal-to-Noise Ratio (PSNR). The Spiral Search algorithm achieves a PSNR close to that of Exhaustive Search in less computation time. The algorithms evaluated in this paper are widely used in video super-resolution and have also been used in implementing video standards such as H.263, MPEG-4, and H.264.
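To make the block-matching idea concrete, here is a minimal exhaustive-search sketch with a sum-of-absolute-differences (SAD) cost and a PSNR helper. It is not the paper's implementation; the block size, search range, and function names are assumptions:

```python
import numpy as np

def sad(block, cand):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block.astype(np.int64) - cand.astype(np.int64)).sum()

def exhaustive_search(ref, cur, bx, by, bsize=8, srange=4):
    """Motion vector for the block of `cur` at (by, bx): try every
    candidate position in `ref` within +/- srange and keep the best."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best, best_mv = None, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cost = sad(block, ref[y:y + bsize, x:x + bsize])
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv, best

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

Spiral Search visits the same candidates in order of increasing distance from the center and stops early when further improvement is unlikely, which is how it saves computation relative to this exhaustive loop.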
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
OBTAINING SUPER-RESOLUTION IMAGES BY COMBINING LOW-RESOLUTION IMAGES WITH HIG...ijcsit
In this paper, we propose a new algorithm to estimate a super-resolution image from a given low-resolution image by adding high-frequency information extracted from natural high-resolution images in a training dataset. The selection of the high-frequency information from the training dataset is accomplished in two steps: a nearest-neighbor search algorithm, which can be implemented on the GPU, selects the closest images from the training dataset, and a sparse-representation algorithm then estimates a weight parameter to combine the high-frequency information of the selected images. This simple but very powerful super-resolution algorithm produces state-of-the-art results. Qualitatively and quantitatively, we demonstrate that the proposed algorithm outperforms existing state-of-the-art super-resolution algorithms.
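The first selection step above is a plain nearest-neighbor search over feature vectors. A brute-force sketch (hypothetical, not the paper's GPU implementation) might look like:

```python
import numpy as np

def nearest_neighbors(query, dataset, k=3):
    """Indices of the k rows of `dataset` closest to `query` (Euclidean)."""
    d = np.linalg.norm(dataset - query[None, :], axis=1)
    return np.argsort(d)[:k]
```

The same distance computation parallelizes naturally on a GPU, since every candidate distance is independent of the others.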
Comparison of various Image Registration Techniques with the Proposed Hybrid ...idescitation
Image registration is the method of transforming different sets of image data into one coordinate system. Registration is an important part of image processing, used for matching pictures obtained at different times or from different sensors. A broad range of registration techniques has been developed for the various types of image data. These techniques have been studied independently for many applications, resulting in a large body of results. Vision is the most advanced of the human senses, so images naturally play one of the most important roles in human perception. Image registration is one of the branches encompassed by the diverse field of digital image processing. Owing to its importance in many application areas, as well as its complicated nature, image registration is the topic of much recent research. Registration algorithms compute transformations that set the correspondence between two images. In this paper, a survey of various image registration techniques is presented, and the different techniques are compared with the proposed system of the project.
Multi Resolution features of Content Based Image RetrievalIDES Editor
Many content-based retrieval systems have been proposed to manage and retrieve images on the basis of their content. In this paper we propose Color Histogram, Discrete Wavelet Transform, and Complex Wavelet Transform techniques for efficient image retrieval from a large database. The Color Histogram technique is based on exact matching of the histograms of the query image and the database images. The Discrete Wavelet Transform technique retrieves images based on computation of the wavelet coefficients of subbands. The Complex Wavelet Transform technique includes computation of the real and imaginary parts to extract details from texture. The proposed method is tested on the COREL-1000 database, and the retrieval results demonstrate a significant improvement in precision and recall.
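The Color Histogram step can be sketched as computing a normalized per-channel histogram and ranking database images by histogram distance. This is a generic illustration, not the paper's code; the bin count and the L1 distance are assumptions:

```python
import numpy as np

def color_hist(img, bins=8):
    """Normalized per-channel histogram, concatenated into one vector."""
    chans = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(img.shape[-1])]
    h = np.concatenate(chans).astype(np.float64)
    return h / h.sum()

def retrieve(query, database, k=3):
    """Indices of the k database images whose histograms are closest (L1)."""
    q = color_hist(query)
    dists = [np.abs(q - color_hist(img)).sum() for img in database]
    return np.argsort(dists)[:k]
```

Histograms discard all spatial layout, which is why wavelet-based features are often added alongside them: two very different images can share the same color distribution.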
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
Comparison of Clustering Algorithms using Neural Network Classifier for Satel...IJERA Editor
This paper presents a hybrid clustering algorithm and a feed-forward neural network classifier for land-cover mapping of trees, shade, buildings, and roads. It starts with a single-step preprocessing procedure to make the image suitable for segmentation. The pre-processed image is segmented using a hybrid genetic-Artificial Bee Colony (ABC) algorithm, developed by hybridizing ABC and FCM, to obtain effective segmentation of the satellite image, and is then classified using a neural network. The performance of the proposed hybrid algorithm is compared with algorithms such as k-means, Fuzzy C-Means (FCM), Moving K-means, the Artificial Bee Colony (ABC) algorithm, the ABC-GA algorithm, Moving KFCM, and KFCM.
A novel Image Retrieval System using an effective region based shape represen...CSCJournals
With recent improvements in methods for the acquisition and rendering of shapes, the need for retrieval of shapes from large repositories of shapes has gained prominence. A variety of methods have been proposed that enable the efficient querying of shape repositories for a desired shape or image. Many of these methods use a sample shape as a query and attempt to retrieve shapes from the database that have a similar shape. This paper introduces a novel and efficient shape matching approach for the automatic identification of real world objects. The identification process is applied on isolated objects and requires the segmentation of the image into separate objects, followed by the extraction of representative shape signatures and the similarity estimation of pairs of objects considering the information extracted from the segmentation process and shape signature. We compute a 1D shape signature function from a region shape and use it for region shape representation and retrieval through similarity estimation. The proposed region shape feature is much more efficient to compute than other region shape techniques invariant to image transformation.
Wavelet-Based Color Histogram on Content-Based Image RetrievalTELKOMNIKA JOURNAL
The growth of image databases in many domains, including fashion, biometrics, graphic design, architecture, etc., has been rapid. Content-Based Image Retrieval (CBIR) is a technique for finding relevant images in such huge, unannotated image databases based on low-level features of the query images. In this study, we propose employing a 2nd-level Wavelet-Based Color Histogram (WBCH) in a CBIR system. The image database used in this study is taken from Wang's image database, containing 1000 color images. The experimental results show that 2nd-level WBCH gives better precision (0.777) than the other methods, including 1st-level WBCH, Color Histogram, Color Co-occurrence Matrix, and Wavelet texture features. It can be concluded that 2nd-level WBCH can be applied to a CBIR system.
SINGLE IMAGE SUPER RESOLUTION: A COMPARATIVE STUDYcsandit
The majority of applications require high-resolution images to derive and analyze data accurately and easily, and image super-resolution plays an effective role in those applications. Image super-resolution is the process of producing a high-resolution image from a low-resolution image. In this paper, we study various image super-resolution techniques with respect to the quality of their results and their processing time. This comparative study compares four single-image super-resolution algorithms. For a fair comparison, the algorithms are tested on the same dataset and the same platform, to show the major advantages of one over the others.
The content-based image retrieval (CBIR) technique is one of the most popular and fastest-evolving research areas of digital image processing. The goal of CBIR is to extract visual content, such as colour, texture, or shape, from an image automatically. This paper proposes an image retrieval method that uses colour and texture for feature extraction. The system uses the query-by-example model and allows the user to choose the feature on which retrieval will be based. For retrieval based on colour, the RGB and HSV models are taken into consideration, whereas for texture the GLCM is used to extract textural features, which then go through a Vector Quantization phase to speed up the retrieval process.
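A minimal sketch of the GLCM texture step, assuming 8 gray levels and a single horizontal offset (the paper's quantization and offsets are not specified here), together with three common features derived from the matrix:

```python
import numpy as np

def glcm(img, levels=8, dy=0, dx=1):
    """Gray-level co-occurrence matrix for one pixel offset (dy, dx)."""
    q = (img.astype(np.int64) * levels) // 256   # quantize to `levels` bins
    h, w = q.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1   # count co-occurring pair
    m /= m.sum()                                 # normalize to probabilities
    return m

def texture_features(m):
    """Contrast, energy, and homogeneity of a normalized GLCM."""
    i, j = np.indices(m.shape)
    contrast = ((i - j) ** 2 * m).sum()
    energy = (m ** 2).sum()
    homogeneity = (m / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity
```

In practice several offsets (horizontal, vertical, diagonal) are computed and their features concatenated, so the texture descriptor is not tied to one direction.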
International Refereed Journal of Engineering and Science (IRJES) is a peer-reviewed online journal for professionals and researchers in the field of computer science. The main aim is to resolve emerging and outstanding problems revealed by recent social and technological change. IRJES provides a platform for researchers to present and evaluate their work from both theoretical and technical aspects and to share their views.
www.irjes.com
The project aims at the development of an efficient segmentation method for a CBIR system. Mean-shift segmentation generates a list of potential objects which are meaningful, and these objects are then clustered according to a predefined similarity measure. The method was tested on benchmark data, and an F-score of 0.30 was achieved.
SIGNIFICANCE OF DIMENSIONALITY REDUCTION IN IMAGE PROCESSING sipij
The aim of this paper is to present a comparative study of two linear dimensionality reduction methods, namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high-dimensional input space onto a feature space in which the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes while minimizing the distance within classes. PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes for best class separability. A neural network is trained on the reduced feature set (using PCA or LDA) of the images in the database, using the back-propagation algorithm, for fast searching of images in the database. The proposed method is evaluated on a general image database using Matlab. The performance of these systems has been evaluated by precision and recall measures. Experimental results show that PCA gives better performance, in terms of higher precision and recall values, with less computational complexity than LDA.
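PCA's "axes with maximum variance" can be sketched directly as an eigendecomposition of the feature covariance matrix. This is a generic illustration, not the paper's Matlab pipeline:

```python
import numpy as np

def pca(X, n_components):
    """Project the rows of X onto the directions of maximal variance."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    W = vecs[:, order]                      # top principal axes as columns
    return Xc @ W, W
```

LDA differs only in the matrix being diagonalized: instead of the total covariance, it uses the between-class scatter relative to the within-class scatter, which is why it needs class labels while PCA does not.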
Content-based image retrieval is the retrieval of images with respect to visual appearance, such as texture, shape, and color. The methods, components, and algorithms adopted in content-based image retrieval are commonly derived from areas such as pattern recognition, signal processing, and computer vision. Here, the shape and color features are extracted by means of the wavelet transform and the color histogram; thus a new content-based retrieval method is proposed in this paper. Algorithms are proposed for shape, shade, and texture feature extraction. The discrete wavelet transform is implemented in order to compute the Euclidean distance. Clusters are computed with the help of a modified K-means clustering technique, and the analysis is made between the query image and the database images. MATLAB software is used to execute the queries. The K-means-based extraction is performed using fragmentation and grid-means modules, feature extraction, and K-nearest-neighbor clustering algorithms to construct the content-based image retrieval system. The obtained results are computed and compared with all the other algorithms for the retrieval of quality image features.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
WEB IMAGE RETRIEVAL USING CLUSTERING APPROACHEScscpconf
Image retrieval is an active area, and we propose a new approach to retrieve images from a large image database. In this regard, we propose an algorithm to represent images using divisive and partition-based clustering approaches. The HSV color components and the Haar wavelet transform are used to extract image features. These features are used to segment an image to obtain objects. For segmenting an image, we use a modified k-means clustering algorithm to group similar pixels together into K groups with cluster centers. To modify k-means, we propose a divisive clustering algorithm that determines the number of clusters and passes that number back to k-means to obtain significant object groups. In addition, we discuss a similarity distance measure using a threshold value and object uniqueness to quantify the results.
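The k-means step described above can be sketched with plain Lloyd's iterations; the divisive cluster-count selection is omitted here and `k` is assumed to be given:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    pts = np.asarray(points, dtype=np.float64)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        # distance of every point to every center, shape (n, k)
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # guard against empty clusters
                centers[j] = pts[labels == j].mean(axis=0)
    return labels, centers
```

For image segmentation, the "points" would be per-pixel feature vectors (e.g. HSV values, possibly with wavelet coefficients appended), so pixels with similar appearance end up in the same object group.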
Multi Resolution features of Content Based Image RetrievalIDES Editor
Many content based retrieval systems have been
proposed to manage and retrieve images on the basis of their
content. In this paper we proposed Color Histogram, Discrete
Wavelet Transform and Complex Wavelet Transform
techniques for efficient image retrieval from huge database.
Color Histogram technique is based on exact matching of
histogram of query image and database. Discrete Wavelet
transform technique retrieves images based on computation
of wavelet coefficients of subbands. Complex Wavelet
Transform technique includes computation of real and
imaginary part to extract the details from texture. The
proposed method is tested on COREL1000 database and
retrieval results have demonstrated a significant improvement
in precision and recall.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Comparision of Clustering Algorithms usingNeural Network Classifier for Satel...IJERA Editor
This paper presents a hybrid clustering algorithm and feed-forward neural network classifier for land-cover mapping of trees, shade, building and road. It starts with the single step preprocessing procedure to make the image suitable for segmentation. The pre-processed image is segmented using the hybrid genetic-Artificial Bee Colony(ABC) algorithm that is developed by hybridizing the ABC and FCM to obtain the effective segmentation in satellite image and classified using neural network . The performance of the proposed hybrid algorithm is compared with the algorithms like, k-means, Fuzzy C means(FCM), Moving K-means, Artificial Bee Colony(ABC) algorithm, ABC-GA algorithm, Moving KFCM and KFCM algorithm.
A novel Image Retrieval System using an effective region based shape represen...CSCJournals
With recent improvements in methods for the acquisition and rendering of shapes, the need for retrieval of shapes from large repositories of shapes has gained prominence. A variety of methods have been proposed that enable the efficient querying of shape repositories for a desired shape or image. Many of these methods use a sample shape as a query and attempt to retrieve shapes from the database that have a similar shape. This paper introduces a novel and efficient shape matching approach for the automatic identification of real world objects. The identification process is applied on isolated objects and requires the segmentation of the image into separate objects, followed by the extraction of representative shape signatures and the similarity estimation of pairs of objects considering the information extracted from the segmentation process and shape signature. We compute a 1D shape signature function from a region shape and use it for region shape representation and retrieval through similarity estimation. The proposed region shape feature is much more efficient to compute than other region shape techniques invariant to image transformation.
Wavelet-Based Color Histogram on Content-Based Image RetrievalTELKOMNIKA JOURNAL
The growth of image databases in many domains, including fashion, biometric, graphic design,
architecture, etc. has increased rapidly. Content Based Image Retrieval System (CBIR) is a technique used
for finding relevant images from those huge and unannotated image databases based on low-level features
of the query images. In this study, an attempt to employ 2nd level Wavelet Based Color Histogram (WBCH)
on a CBIR system is proposed. Image database used in this study are taken from Wang’s image database
containing 1000 color images. The experiment results show that 2nd level WBCH gives better precision
(0.777) than the other methods, including 1st level WBCH, Color Histogram, Color Co-occurrence Matrix,
and Wavelet texture feature. It can be concluded that the 2nd Level of WBCH can be applied to CBIR system.
SINGLE IMAGE SUPER RESOLUTION: A COMPARATIVE STUDYcsandit
The majority of applications requiring high resolution images to derive and analyze data
accurately and easily. Image super resolution is playing an effective role in those applications.
Image super resolution is the process of producing high resolution image from low resolution
image. In this paper, we study various image super resolution techniques with respect to the
quality of results and processing time. This comparative study introduces a comparison between
four algorithms of single image super-resolution. For fair comparison, the compared algorithms
are tested on the same dataset and same platform to show the major advantages of one over the
others.
The content based image retrieval (CBIR) technique
is one of the most popular and evolving research areas of the
digital image processing. The goal of CBIR is to extract visual
content like colour, texture or shape, of an image automatically.
This paper proposes an image retrieval method that uses colour
and texture for feature extraction. This system uses the query by
example model. The system allows user to choose the feature on
the basis of which retrieval will take place. For the retrieval
based on colour feature, RGB and HSV models are taken into
consideration. Whereas for texture the GLCM is used for
extracting the textural features which then goes into Vector
Quantization phase to speed up the retrieval process.
International Refereed Journal of Engineering and Science (IRJES) is a peer reviewed online journal for professionals and researchers in the field of computer science. The main aim is to resolve emerging and outstanding problems revealed by recent social and technological change. IJRES provides the platform for the researchers to present and evaluate their work from both theoretical and technical aspects and to share their views.
www.irjes.com
The project aims at development of efficient segmentation method for the CBIR system. Mean-shift segmentation generates a list of potential objects which are meaningful and then these objects are clustered according to a predefined similarity measure. The method was tested on benchmark data and F-Score of .30 was achieved.
SIGNIFICANCE OF DIMENSIONALITY REDUCTION IN IMAGE PROCESSING sipij
The aim of this paper is to present a comparative study of two linear dimension reduction methods namely
PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to
transform the high dimensional input space onto the feature space where the maximal variance is
displayed. The feature selection in traditional LDA is obtained by maximizing the difference between
classes and minimizing the distance within classes. PCA finds the axes with maximum variance for the
whole data set where LDA tries to find the axes for best class seperability. The neural network is trained
about the reduced feature set (using PCA or LDA) of images in the database for fast searching of images
from the database using back propagation algorithm. The proposed method is experimented over a general
image database using Matlab. The performance of these systems has been evaluated by Precision and
Recall measures. Experimental results show that PCA gives the better performance in terms of higher
precision and recall values with lesser computational complexity than LDA
The content based Image Retrieval is the restoration of images with respect to the visual appearances
like texture, shape and color.The methods, components and the algorithms adopted in this content based
retrieval of images were commonly derived from the areas like pattern identification, signal progressing
and the computer vision. Moreover the shape and the color features were abstracted in the course of
wavelet transformation and color histogram. Thus the new content based retrieval is proposed in this
research paper.In this paper the algorithms were required to propose with regards to the shape, shade and
texture feature abstraction .The concept of discrete wavelet transform to be implemented in order to
compute the Euclidian distance.The calculation of clusters was made with the help of the modified KMeans
clustering technique. Thus the analysis is made in among the query image and the database
image.The MATLAB software is implemented to execute the queries. The K-Means of abstraction is
proposed by performing fragmentation and grid-means module, feature extraction and K- nearest neighbor
clustering algorithms to construct the content based image retrieval system.Thus the obtained result are
made to compute and compared to all other algorithm for the retrieval of quality image features
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
WEB IMAGE RETRIEVAL USING CLUSTERING APPROACHEScscpconf
Image retrieval is an active research area, and this work proposes a new approach to retrieve images from a large image database. Accordingly, we propose an algorithm that represents images using divisive and partition-based clustering approaches. The HSV color components and the Haar wavelet transform are used to extract image features. These features are then used to segment an image into objects: a modified k-means clustering algorithm groups similar pixels into K groups around cluster centers. To modify k-means, we propose a divisive clustering algorithm that determines the number of clusters and feeds it back to k-means to obtain significant object groups. In addition, we discuss a similarity distance measure that uses a threshold value and object uniqueness to quantify the results.
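The k-means pixel-grouping step at the heart of this segmentation can be sketched as follows. This is a minimal NumPy illustration over an (N, 3) array of pixel colors (HSV or RGB), with K fixed by hand rather than chosen by the divisive stage the abstract describes:

```python
import numpy as np

def kmeans_pixels(pixels, k, iters=20):
    """Plain k-means over an (N, 3) array of pixel colors: returns
    (labels, centers).  Centers are initialized from points spread
    evenly across the array, for determinism."""
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest center (Euclidean in color space)
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned pixels
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers
```

On an image whose pixels form two well-separated color groups, the two clusters recover the groups exactly.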
International Journal of Engineering Research and Development (IJERD)IJERD Editor
Performance Evaluation Of Ontology And Fuzzybase Cbiracijjournal
In this paper, we have evaluated the performance of an ontology using low-level features such as color, texture and shape based CBIR, together with topic-specific CBIR. The resulting ontology can be used to extract the appropriate images from an image database; retrieving appropriate images from an image database is one of the difficult tasks in multimedia technology. Our results show that the recall and precision values can be enhanced, which also shows that the semantic gap can be reduced. The proposed algorithm also extracts the texture values from the images automatically, along with their category (such as smooth or coarse) and their technical interpretation.
A Comparative Analysis of Retrieval Techniques in Content Based Image Retrievalcscpconf
A basic group of visual features such as color, shape and texture is used in content-based image retrieval (CBIR) to match a query image, or a sub-region of an image, against similar images in an image database. To improve query results, relevance feedback is often used in CBIR to help users express their preferences. In this paper, a new approach for image retrieval is proposed based on features such as the color histogram, eigenvalues and match points. Images from various types of databases are first identified using edge detection techniques. Once an image is identified, it is searched for in the corresponding database and all related images are displayed, which saves retrieval time. Further, to retrieve the precise query image, any of the three techniques can be used, and a comparison is made with respect to average retrieval time. The eigenvalue technique was found to be the best of the three.
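The edge-detection step used above to identify an image before searching its database can be illustrated with a plain Sobel gradient-magnitude operator; a minimal sketch, not the paper's implementation:

```python
import numpy as np

def sobel_edges(gray):
    """Sobel gradient magnitude of a 2-D grayscale array (valid region only,
    so the output is 2 rows and 2 columns smaller than the input)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # transpose of the x-kernel gives the y-kernel
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)
```

A vertical intensity step produces a strong response along the step and zero response in flat regions, which is what makes the magnitude map usable for coarse image identification.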
A Review of Feature Extraction Techniques for CBIR based on SVMIJEEE
With the advancement of multimedia technologies, users are no longer satisfied with conventional retrieval techniques, so content-based image retrieval (CBIR) systems were introduced. CBIR is an application for retrieving or searching digital images in large databases. The term "content" refers to the color, shape, texture and any other information that can be extracted from the image itself. This paper reviews CBIR systems that use SVM-classifier-based algorithms in the feature extraction phase.
In content-based image retrieval (CBIR) systems, the visual contents of the images in the database are extracted and represented by multi-dimensional feature vectors. A well-known class of CBIR systems retrieves images by an unsupervised method known as cluster-based image retrieval. To enhance the performance and retrieval rate of a CBIR system, we fuse the visual contents of an image. Recently, we developed two cluster-based CBIR systems that fuse the scores of two visual contents of an image. In this paper, we analyze the performance of the two proposed CBIR systems at different levels of precision using images of varying sizes and resolutions. We also compare the performance of the proposed systems with that of two existing CBIR systems, namely UFM and CLUE. Experimentally, we find that the proposed systems outperform the two existing systems, and one proposed system also performs comparatively better at every image resolution.
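Precision and recall at a retrieval cut-off, the measures these comparisons rely on, can be computed as follows. This is a generic sketch; the papers above do not specify their evaluation code:

```python
def precision_recall_at_k(retrieved, relevant, k):
    """Precision and recall over the top-k retrieved item ids,
    given the set of ids that are actually relevant."""
    top = retrieved[:k]
    hits = sum(1 for item in top if item in relevant)
    precision = hits / k          # fraction of the top-k that is relevant
    recall = hits / len(relevant) # fraction of all relevant items found
    return precision, recall
```

For example, if 2 of the top-4 results are relevant and 4 relevant images exist, both precision@4 and recall are 0.5.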
Ijaems apr-2016-16 Active Learning Method for Interactive Image RetrievalINFOGAIN PUBLICATION
With many possible multimedia applications, content-based image retrieval (CBIR) has recently gained interest for image management and web search. CBIR is a technique that uses the visual content of an image to search for similar images in large-scale image databases according to a user's interest. In many image retrieval algorithms, retrieval is based on feature similarities with respect to the query alone, ignoring the similarities among the images in the database. To use this information, this paper applies the k-means clustering algorithm to an image retrieval system: the clustering algorithm optimizes the relevance of the results by first clustering the similar images in the database. We also implement a wavelet transform, which provides both coarse and fine filtering, and apply the Euclidean distance metric so that, given a query image, output images can be retrieved based on feature similarity. The results show that the proposed approach can greatly improve the efficiency and performance of image retrieval.
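The one-level wavelet decomposition behind the coarse/fine filtering in systems like this can be sketched with the Haar transform; a minimal NumPy illustration, not the paper's implementation:

```python
import numpy as np

def haar2d_level1(img):
    """One level of the 2-D Haar transform: returns the approximation (LL)
    and the horizontal/vertical/diagonal detail subbands (LH, HL, HH).
    Height and width must be even."""
    a = img.astype(float)
    # pairwise averages/differences along columns, then along rows
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2
    ll = (lo_r[0::2] + lo_r[1::2]) / 2
    lh = (lo_r[0::2] - lo_r[1::2]) / 2
    hl = (hi_r[0::2] + hi_r[1::2]) / 2
    hh = (hi_r[0::2] - hi_r[1::2]) / 2
    return ll, lh, hl, hh
```

The LL band is a half-resolution "coarse" version of the image, while the detail bands capture the "fine" structure; either can feed the Euclidean-distance comparison.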
Images can be retrieved based on texture, with matched images returned from the target path; when particular images are required under a given condition, the texture image retrieval system is used to obtain 100% accurate output on the target system.
Video surveillance of humans and vehicles is an active research topic in computer vision and is used to a great extent. Multiple images generated by a fixed camera contain various objects captured under different variations and illumination changes, after which each object's identity and orientation are provided to the user. This scheme represents individual images, as well as various object classes, in a single scale- and rotation-invariant model. The objective is to improve object recognition accuracy for surveillance purposes and to detect multiple objects with a sufficient level of scale invariance. Multiple-object detection and recognition is important in the analysis of video data and in higher-level security systems. This method can efficiently detect objects in query images as well as in videos by extracting frames one by one. Given a query image at runtime, the method generates the set of query features and finds its best match among the feature sets in the database. The SURF algorithm is used to find the database object with the best feature matching; if such a match is found, the object is present in the query image.
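The final matching step, finding the database object whose features best match the query's, is commonly implemented as a nearest-neighbor search plus Lowe's ratio test. The sketch below works over abstract descriptor vectors (the work above uses SURF descriptors; the matcher itself is generic and the ratio threshold is a conventional choice, not taken from the paper):

```python
import numpy as np

def ratio_test_matches(query_desc, db_desc, ratio=0.75):
    """For each query descriptor, find its two nearest database descriptors
    (Euclidean) and keep the match only when the nearest is clearly closer
    than the second nearest (Lowe's ratio test)."""
    matches = []
    for qi, q in enumerate(query_desc):
        d = np.linalg.norm(db_desc - q, axis=1)
        order = np.argsort(d)
        best, second = order[0], order[1]
        if d[best] < ratio * d[second]:
            matches.append((qi, int(best)))
    return matches
```

An ambiguous query descriptor, one roughly equidistant from two database descriptors, is rejected rather than matched, which is what keeps recognition robust to clutter.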
Similar to Research Inventy : International Journal of Engineering and Science (20)
Experimental Investigation of a Household Refrigerator Using Evaporative-Cool...inventy
The objective of this paper was to investigate experimentally the effect of an evaporative-cooled condenser in a household refrigerator. The experiment was done using HFC-134a as the refrigerant. The performance of the household refrigerator with an air-cooled and with an evaporative-cooled condenser was compared for different load conditions. The results indicate that the refrigerator's performance improved when the evaporative-cooled condenser was used instead of the air-cooled condenser under all load conditions: the evaporative-cooled condenser reduced energy consumption and enhanced the coefficient of performance (COP) compared with the air-cooled condenser. The evaporative-cooled heat exchanger was designed and the system was modified by retrofitting it in place of the conventional air-cooled condenser, producing drop-wise condensation using water and forced circulation over the condenser. From the experimental analysis it is observed that the COP of the evaporative-cooled system increased by 13.44% compared with that of the air-cooled system, so the overall efficiency and refrigerating effect increased. With minimal construction, maintenance and running costs, the system is very useful for domestic purposes. The study also revealed that combining an evaporative-cooled system with a conventional water-cooled system, such that the defrost water from the freezer is used for drop-wise condensation over the condenser and the remaining defrost water is used for water-cooled condensation at the bottom, would reduce the power consumption and work done and hence further increase the refrigerating effect of the system. The study has shown that such a system is technically feasible and economically viable.
Copper Strip Corrosion Test in Various Aviation Fuelsinventy
This research work examines the corrosiveness of various aviation fuels in the state of Telangana (India). The purpose of the experiment is to determine the corrosiveness of fuels using the copper strip corrosion test; from this test we can determine the corrosive property of a fuel and hence its efficiency. The work covers the importance of knowing the corrosive property of different petroleum fuels, including aviation turbine fuel.
Additional Conservation Laws for Two-Velocity Hydrodynamics Equations with th...inventy
A series of differential identities connecting the velocities, pressure and body force in the two-velocity hydrodynamics equations with equilibrium of pressure phases, in the reversible hydrodynamic approximation, is obtained.
Comparative Study of the Quality of Life, Quality of Work Life and Organisati...inventy
People’s lives are increasingly centred on work; they spend at least one-third of their time within the organisations that employ them. Investigating the factors that interfere with employees’ well-being and the organisational environment is becoming an increasing concern in organisations. This article identifies the criteria of the quality of life (QoL), quality of working life (QWL) and organisational climate instruments to point out their similarities. For bibliographic construction and data research, articles were sought in national and international journals, books and dissertations/articles in SciELO, Science Direct, Medline and Pub Med databases. The results show direct relationships amongst QoL, QWL and organisational climate instruments. The relationship between QoL and QWL instruments is based on fair compensation, social interaction, organisational communication, working conditions and functional capacity. QWL and organisational climate instruments are related through social interaction and interfaces. QoL and organisational climate instruments are related based on social interaction, organisational communication, and work conditions.
A Study of Automated Decision Making Systemsinventy
The decision-making processes of many operations depend on analysing very large data sets, previous decisions and their results. The information generated from the large data sets is used as an input for making decisions. Since the decisions to be taken in day-to-day operations are expanding, the time taken for manual decision-making is also growing. In order to reduce time and cost and to increase efficiency and accuracy, which are most important for customer satisfaction, many organisations are adopting automated decision-making systems. This paper is about the technologies used in automated decision-making systems and the areas in which automated decision systems work most efficiently and accurately.
Crystallization of L-Glutamic Acid: Mechanism of Heterogeneous β -Form Nuclea...inventy
The mechanism of heterogeneous nucleation of β-form L-glutamic acid was investigated in depth for cooling crystallization. The study found that β-form crystals grew epitaxially on the α-form crystals and crystallized preferentially on the (011) and (001) surfaces rather than the (111) surfaces of the α-form crystals. This result was explained via molecular simulation, which indicated that the different surfaces of the α-form crystals present different functional groups, resulting in different sites for the heterogeneous nucleation of β-form crystals: COO−, C=O and O–H on the (011) and (001) surfaces, and NH3+ on the (111) surfaces. Accordingly, the degree of lattice matching (E) between the β-form crystals and the various α-form surfaces differed; E between the β-form crystals and the (011), (001) and (111) surfaces of the α-form crystal was estimated as 5.30, 5.25 and 2.39, respectively, implying that the (011) and (001) surfaces of the α-form crystal are more favorable for the heterogeneous nucleation of β-form crystals than the (111) surfaces.
Evaluation of Damage by the Reliability of the Traction Test on Polymer Test ...inventy
In recent decades, polymers have undergone a remarkable historical development, and their use has grown greatly, gradually dethroning most traditional materials. These polymer materials have always distinguished themselves by their simple shaping and inexpensive price, their versatility, lightness and chemical stability; yet despite their massive use in everyday life as well as in advanced technologies, these materials are generally still not well understood, which requires a thorough knowledge of their chemical, physical, rheological and mechanical properties. In this paper, we study the mechanical behavior of an amorphous polymer, acrylonitrile butadiene styrene (ABS), by means of uniaxial tensile testing on pierced test pieces with notch lengths ranging from 1 to 14 mm. The proposed approach consists in analyzing the evolution of the overall geometry of the obtained strain curves, taking into account the zones and characteristic points of these curves as well as the effect of damage on the mechanical behavior of the ABS polymer, in order to visualize the evolution of the damage with a static model.
Application of Kennelly’model of Running Performances to Elite Endurance Runn...inventy
The model of Kennelly between distance (Dlim) and exhaustion time (tlim) has been applied to the individual performances of 19 elite endurance runners (world-record holders and Olympic winners), from P. Nurmi (1920-1924) to M. Farah (2012), whose individual best performances over several different distances are known. Kennelly's model (Dlim = k·tlim^γ) can describe the individual performances of elite runners with high accuracy (errors lower than 2%). There is a linear relationship between the parameters k and the exponents γ of the elite runners, and the extreme values correspond to S. Coe (k = 15.8; γ = 0.851) and E. Zatopek (k = 6.57; γ = 0.984). The exponent γ can be considered a dimensionless index of aerobic endurance, which is close to 1 in the best endurance runners. If it is assumed that maximal aerobic speed can be maintained for 7 min in elite endurance runners, γ equals the normalized critical speed (critical speed/maximal aerobic speed) computed from exhaustion times of 3 and 12.5 min in these runners.
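Fitting Kennelly's power law D = k·t^γ to an athlete's best performances reduces to linear regression in log-log coordinates, since log D = log k + γ·log t. A minimal sketch (the function and variable names are ours, not from the paper):

```python
import math

def fit_kennelly(distances, times):
    """Least-squares fit of Kennelly's model D = k * t**g in log-log
    space; returns (k, g).  `distances` and `times` are parallel lists
    of an athlete's best performances."""
    xs = [math.log(t) for t in times]
    ys = [math.log(d) for d in distances]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope of the log-log regression line is the exponent g
    g = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    # intercept recovers log k
    k = math.exp(my - g * mx)
    return k, g
```

On synthetic data generated exactly from the model, the fit recovers k and g to machine precision.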
Development and Application of a Failure Monitoring System by Using the Vibra...inventy
In this project, a failure monitoring system is developed using the vibration and location information of balises in railway signaling. Much railway field equipment loosens or breaks over time and needs maintenance because of the vibrations caused by high-speed train traffic and the impact of railway vehicles. Among the field equipment, balises play a very important communication role in transmitting information to trains. In this scope, the aim is to make maintenance work more efficient, avoid delayed trains, detect the failure location in advance and intervene in failures in a timely manner, by detecting and monitoring balise conditions such as loosening, displacement, and data-consistency errors caused by the balise's physical state. In this project, communication is provided with the I2C, Modbus RTU (Remote Terminal Unit) and RS485 standards, using Arduino Uno boards and MPU6050 IMU (Inertial Measurement Unit) sensors in the laboratory. Each sensor operates in slave mode, and the computer interface, designed in C#, operates in master mode. Fault situations in the system are checked instantly by the interface (it is assumed that the IMU sensor and the Arduino circuit are mounted on the balise). Testing shows that the interface responds to sensor movements instantly and the system works well.
The Management of Protected Areas in Serengeti Ecosystem: A Case Study of Iko...inventy
The study assessed the management of protected areas in the Serengeti ecosystem using the case of the IGGRs. Specifically, the study aimed at identifying the strategies used for natural resources management; examining the impacts of those strategies; examining the hindrances to the identified strategies; and, lastly, examining methods for scaling up the performance of the natural-resource strategies in the study area. The study involved two of the 31 villages bordering the IGGRs; in each village, at least 5% of the households were sampled. Both primary and secondary data were collected and analyzed, both manually and by computer using SPSS software. The study revealed that the study population ranked IGGRs performance on protection of natural resources, especially on conserving wildlife for future generations and reducing poaching, as good (53.3%). In addition, the relationship with the IGGRs was said to be considerably good (46.7%). Regarding poaching, the findings show that poaching was reduced by 96.2% from 2009 to 2012. Furthermore, 81.4% of respondents said they use different strategies to control the loss of natural resources, which in turn has considerably improved the relationship between protected areas and the surrounding communities in some respects. Despite these successes, the study findings revealed a number of challenges that hinder the full attainment of conservation objectives, among them loss of life and property (86.4%) and shortage of water for livestock (68.9%), since water sources such as the Grumeti and Rubana rivers lie within the protected area while the adjacent local communities do not have free access to them. Other challenges, especially for IGGRs management, include an insufficient funding base, inadequate working facilities and insufficient staff.
Based on the above findings, the study concluded that the strategies used for natural resources management of protected areas in the Serengeti ecosystem are fairly sustainable and need functional participatory approaches involving local people and other stakeholders in order to bring about a collaborative natural resources management network in the ecosystem. Furthermore, based on the findings, the study recommends equity in sharing the benefits accrued from natural resource management in protected areas, more financial support to the IGGRs and the local community, the use of non-lethal deterrents for crop protection, integration of crop-livestock production systems, adoption of land-use plans as a solution to land conflicts, strengthening of community-based conservation (CBC), and adoption of modern information technology such as geographical information systems (GIS) and remote sensing.
Size distribution and biometric relationships of little tunny Euthynnus allet...inventy
This study draws on data from commercial fishing of the little tunny, Euthynnus alletteratus (Rafinesque, 1810), caught along the Algerian coast and sampled between November 2011 and April 2016. Data were collected to determine the size distributions of the population and the biometric relationships of the species, including the size-weight relationships. A total of 601 fish ranging from 30.9 to 103 cm fork length (FL) were observed. The size distribution of Euthynnus alletteratus shows multiple modal values, of which the most important cohort corresponds to age class 2 (42-46 cm). The value of the allometric coefficient (b) of the FL/TW relationship is lower than 3, indicating negative allometric growth.
Removal of Chromium (VI) From Aqueous Solutions Using Discarded Solanum Tuber...inventy
Industrial polluting effluents containing heavy metals are of serious environmental concern in India. Chromium is frequently used in industries such as electroplating, metal finishing, cooling towers, dyes, paints, anodizing and leather tanning, and is found as traces in effluents that find their way to natural water bodies, causing hazardous toxicity to humans, animals and aquatic life directly or indirectly. Many methods for the removal of chromium, such as chemical reduction, precipitation, ion exchange, electrochemical reduction, evaporation, reverse osmosis and adsorption on activated carbon, have been reported, but all are expensive and complicated to operate. Experimental practice reveals that adsorption on agricultural and horticultural wastes is a simple, inexpensive and efficient method. Agra is famous for potato farming, and a great deal of discarded potato waste from cold storages is thrown along roadside drains; this solid waste either creates a disposal problem or finds its way to the Yamuna river, raising BOD and posing a serious threat to the aquatic environment. For developing countries like India, adsorption studies using discarded potato (Solanum tuberosum) waste from cold storages (DPWC) as a low-cost adsorbent for chromium removal are doubly beneficial: an ideal solution to Agra's solid-waste disposal problem, and removal of chromium from tannery effluents, thereby saving aquatic life from chromium contamination in the Yamuna river. Keeping this in view, batch experiments were designed to study the feasibility of discarded potato waste from cold storages for removing chromium (VI) from aqueous solutions.
During the study, the various affecting parameters, such as pH, adsorbent dose, initial concentration, temperature, contact time, adsorbent grain size and start-up agitation speed, were optimized for chromium removal efficiency as 5.0, 10-20 g/l, 50 mg/l, 25 °C, 135 minutes, average size and 80 rpm, respectively. Isotherms such as Langmuir, Freundlich and Tempkin were also fitted suitably, and the corresponding constants determined from these isotherms favor and support adsorption. The thermodynamic constants ΔG, ΔH and ΔS were found to be 0.267 kJ/mole, 0.288 kJ/mole and 0.0013 kJ/mole, respectively.
Effect of Various External and Internal Factors on the Carrier Mobility in n-...inventy
The effect of various external factors (temperature, electric field, light) and intracrystalline factors (doping, initial resistivity) on the carrier mobility in the layered semiconductor n-InSe has been investigated experimentally. Scientific explanations of the results are proposed.
Transient flow analysis for horizontal axial upper-wind turbineinventy
This study carries out a transient flow-field analysis under the condition that the wind turbine is generating power; since the operating conditions change over time, the purpose is to find the rules governing how the wind turbine behaves over time. In the transient analysis, the wind velocity at the inlet boundary and the rotation speed of the rotor field change over time, and an analytical process is provided that can be used for future reference. At present, the wind turbine model is designed as an upwind horizontal-axis type. The engineering software GH Bladed is used to obtain the relationship between the rotor velocity and the wind turbine, and the ANSYS software is then used to calculate the stress and strain distributions in the blades over time. From the analytical results, the relationship between the stress distribution in the blades and the rotor velocity is obtained, to be used as a reference for future wind turbine structural optimization.
Choice of Numerical Integration Method for Wind Time History Analysis of Tall...inventy
Wind tunnel tests are performed routinely around the world for designing tall buildings, but the advent of powerful computational tools will make time-history analysis for wind more common in the near future. As the duration of wind storms ranges from tens of minutes to hours while earthquake durations are typically less than three to four minutes, the time step size (Δt) for wind studies needs to be much larger, both to reduce the computational time and to save disk space. As the error in any numerical solution of the equation of motion depends on the step size (Δt), careful investigation of the choice of numerical integration methods for wind analyses is necessary. From the wide variety of integration methods available, it was decided to investigate three methods that seem appropriate for 3D time-history analysis of tall buildings for wind: modal time-history analysis, the Hilber-Hughes-Taylor (HHT) or α-method with α = -0.1, and the Newmark method with β = 0.25 and γ = 0.5 (i.e., the trapezoidal rule). SAP2000, a common structural analysis software tool, and a 64-story structure are used to conduct all the analyses in this paper. A boundary layer wind tunnel (BLWT) pressure time history measured at 120 locations around the building envelope of a similar structure is used for the analyses. Analyses performed with both the HHT and Newmark methods considering P-delta effects show that second-order effects have a considerable impact on both displacement and acceleration response; it is therefore necessary to account for P-delta effects in wind analysis of tall buildings. As the direct-integration time-history analysis required very long computation times and very large computer memory for a wind duration of hours, a modal analysis with reduced stiffness is considered a good alternative.
For that purpose, a non-linear static analysis of the structure with a load combination of 1.0D + 1.0L is performed in SAP2000, and the reduced stiffness of the structure after the analysis is used in an eigenvalue analysis to extract the mode shapes and frequencies of the structure. The first 20 modes are then used to perform a modal time-history analysis for the wind load. The results show that the responses from the modal analysis with 20 modes (reduced stiffness) are comparable with those from the P-Δ analyses with the Newmark method.
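The Newmark method with β = 0.25 and γ = 0.5 (the trapezoidal rule referred to above) can be illustrated for a single-degree-of-freedom system. This is a minimal sketch of the standard displacement-form algorithm for a linear system, not the SAP2000 analysis:

```python
def newmark_sdof(m, c, k, load, dt, steps, beta=0.25, gamma=0.5,
                 u0=0.0, v0=0.0):
    """Newmark time integration of m*u'' + c*u' + k*u = p(t) for a
    single-degree-of-freedom system; returns the displacement history.
    beta=0.25, gamma=0.5 is the (unconditionally stable) average-
    acceleration / trapezoidal scheme."""
    u, v = u0, v0
    a = (load(0.0) - c * v - k * u) / m          # initial acceleration
    us = [u]
    keff = m / (beta * dt ** 2) + gamma * c / (beta * dt) + k
    for n in range(1, steps + 1):
        p = load(n * dt)
        # effective load from the previous state
        rhs = (p
               + m * (u / (beta * dt ** 2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = rhs / keff
        a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                 - (1 / (2 * beta) - 1) * a)
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return us
```

For an undamped oscillator released from rest at u = 1, the scheme returns close to u = 1 after one natural period, reflecting the stability and small period error of the average-acceleration method at small Δt.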
Impacts of Demand Side Management on System Reliability Evaluationinventy
Electricity demand in Saudi Arabia is steadily increasing as electrical load grows at a rate of about 7% per year. This is a high rate by any standard, largely due to population growth as well as government subsidies, which may lead to prices much lower than the actual production cost. This growth represents a challenge that requires the Saudi Electricity Company (SEC) to invest huge amounts of money every year in the construction of additional generation capacity and the reinforcement of the transmission network to meet consumption growth. The demand also varies widely throughout the day, wasting a large part of the energy. SEC believes the optimum solution lies in altering the load shape in order to achieve a better balance between customers' consumption and SEC's generation. This paper describes a method for improving power system reliability by shifting a portion of the peak load to off-peak periods. This load management scheme can be achieved by raising generation during off-peak periods and utilizing the stored energy during peak periods. A hybrid set-up involving solar and wind energy along with batteries can also be used to store energy and utilize it during peak periods.
Reliability Evaluation of Riyadh System Incorporating Renewable Generationinventy
In this paper, the experience of Saudi Electricity Company (SEC) in analyzing the generation adequacy for Year 2013 is presented. This analysis is conducted by calculating several reliability indices for Riyadh system hourly load during all four seasonal periods. The reliability indices are gauged against the international utility practice. SEC also plans to introduce renewable energy into the network in order to secure the environmental standards and reduce fuel costs of conventional generation. Thus, the reliability improvement due to different integration levels of Solar and Wind generating sources has also been investigated. The capacity value provided by these variable renewable energy sources (VERs) to reliably meet the system load has been calculated using effective load carrying capability (ELCC) technique with a loss of load expectancy metric.
The effect of reduced pressure acetylene plasma treatment on physical charact...inventy
The capacitors are increasingly being used as energy storage devicesin various power systems. The scientists of the world are tryingto maximize the electrical capacity of the supercapacitors. To achieve this purpose, numerous method sare used: the surface activation of electrodes, the surface etching using the electronbeam, the electrode etching with variousgasplasma, etc. The purpose of this work is toresearch how the properties of carbon electrodes depend on the plasma parameters at whichtheywere formed. The largest surface area ofcarbonelectrodeof47.25m2 /gis obtainedat 15 ofAr/C2H2gasratio. Meanwhile, theSEMimages show that the disruption of structures with low bond energies and the formation of new onesare taking place when the carbon electrodes are etched at acetylene plasma and placed on carbon electrode. The measurements of capacitance showthat capacitors with affectedelectrodes have about10-15% highercapacity than those not treated with acetyleneplasma.
Experimental Investigation of Mini Cooler cum Freezerinventy
In general cases the refrigerator could be converted into an air conditioner by attaching a fan. Thus a cooler as well as freezer is obtained in a single set up. The freezer can be converted to an air conditioner when the outside air is allowed to flow beside the cooling coil and is forced outside by an exhaust fan. In this case a mini scale cooler cum freezer using R134a as refrigerant was fabricated and tested In our mini project work we had designed, fabricated and experimentally analysed a mini cooler cum freezer. From the observations and calculations, the results of mini cooler cum freezer are obtained and are compared.
Growth and Magnetic properties of MnGeP2 thin filmsinventy
We have successfully grown MnGeP2 thin films on GaAs (100) substrate. A ferromagnetic transition near 320 K has been observed by temperature dependent magnetization and resistance measurements. Field dependent magnetization experiments have shown that the coercive fields at 5, 250, and 300 K are 3870, 1380 and 155 Oe, respectively. Magnetoresistance and Hall measurements have displayed that hole conduction is dominant in MnGeP2. PACS: 75.50.Pp, 75.70.-i, 85.70.-w, 73.50.-h
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 3
Research Inventy : International Journal of Engineering and Science
1. Research Inventy: International Journal of Engineering and Science
Vol. 3, Issue 3 (June 2013), PP 29-36
ISSN(e): 2278-4721, ISSN(p): 2319-6483, www.researchinventy.com

Performance Enhancement of Content Based Image Retrieval
System Using Contrast Limited Adaptive Histogram Equalization

1 Bhaswati Das, 2 Deepak Sharma
1 (M.Tech Student, ECE, M.M. Engineering College, Mullana, Ambala, Haryana, India)
2 (Assistant Professor, ECE Dept., M.M. Engineering College, Mullana, Ambala, Haryana, India)
ABSTRACT - The retrieval performance of content-based image retrieval (CBIR) systems still leaves much to be desired, especially when the system serves as an interface to an image collection covering many different topics. Missing semantic information about the images leads to large numbers of false matches, because the retrieved visual primitives can be misleadingly similar. In this paper we use feature extraction to improve the efficiency of a content-based image retrieval system, and we apply CLAHE (Contrast Limited Adaptive Histogram Equalization) for better results. Our experimental results show an improvement in the proposed system. Finally, we analyse the performance of the CBIR system with and without CLAHE by calculating precision and recall.
KEYWORDS: CBIR (Content-Based Image Retrieval), CLAHE (Contrast Limited Adaptive Histogram Equalization), Discrete Wavelet Transform (DWT), feature extraction, precision/recall.
I. INTRODUCTION
This paper gives an overview of the currently available literature on content-based image retrieval, evaluates the need for image retrieval after several years of developments, and presents concrete scenarios for promising future research directions. This section introduces content-based image retrieval systems (CBIRSs) and the technologies used in them. Image retrieval has been an extremely active research area over the last 10 years, but review articles on access methods in image databases appeared as early as the 1980s [1]. Articles from various years describe the state of the art of their time and contain references to a large number of systems and descriptions of the technologies implemented. Enser [2] gives an extensive description of image archives, various indexing methods and common searching tasks, using mostly text-based searches on annotated images. An overview of the research domain as of 1997 is given in [3], and the past, present and future of image retrieval are highlighted in [4]. An almost exhaustive overview of published systems is given, and an evaluation of a subset of them is attempted, in [6]; unfortunately, the evaluation is very limited and covers only very few systems. The most complete overview of technologies to date is given by Smeulders et al. [7]. That article describes common problems such as the semantic gap and the sensory gap, and gives links to a large number of articles describing the various techniques used in the domain.
II. BLOCK DIAGRAM OF CBIR
The basic idea behind CBIR is that, when building an image database, feature vectors are extracted from the images (the features can be color, shape, texture, region or spatial features, features in some compressed domain, etc.) and stored in another database for future use. When a query image is given, its feature vectors are computed. If the distance between the feature vectors of the query image and those of an image in the database is small enough, the corresponding database image is considered a match to the query. The search is usually based on similarity rather than on exact match, and the retrieval results are ranked according to a similarity index. The block diagram of a basic CBIR system is shown in Figure 1.
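The matching idea described above can be sketched in a few lines of Python (a minimal illustration only; the paper's experiments were run in MATLAB, and the grey-level-histogram feature and image names used here are illustrative assumptions, not the authors' features):

```python
import numpy as np

def extract_features(image, bins=16):
    """Toy feature extractor: a normalized grey-level histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def retrieve(query, database, top_k=3):
    """Rank database images by Euclidean distance between feature vectors."""
    q = extract_features(query)
    dists = [(name, np.linalg.norm(q - extract_features(img)))
             for name, img in database.items()]
    return sorted(dists, key=lambda t: t[1])[:top_k]

# Build a tiny database of random images and query with a copy of one of them:
# an identical image yields identical features, so it ranks first with distance 0.
rng = np.random.default_rng(0)
db = {f"img{i}": rng.integers(0, 256, (32, 32)) for i in range(5)}
query = db["img2"].copy()
results = retrieve(query, db)
```

Any of the feature types listed above (color, texture, shape, ...) could be substituted for the histogram without changing the distance-and-rank structure.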
A. Image
An image (from Latin: imago) is an artifact that depicts or records visual perception, for example a two-dimensional picture that has a similar appearance to some subject, usually a physical object or a person, thus providing a depiction of it.
B. Database index and storage
A collection of image data is typically associated with the activities of one or more related organizations. This component focuses on organizing images and their metadata in an efficient manner, sometimes delving more thoroughly into an image's content so that queries can be made by an image's characteristics rather than just keywords or tags. It stores images in the database efficiently.
Figure 1. Block diagram of a content-based image retrieval system
C. Query image and Query result
A query image is used to retrieve those images from the database that satisfy a similarity criterion with respect to the user's query. An image retrieval system is a computer system for browsing, searching and retrieving images from a large database of digital images.
D. Contrast Limited Adaptive Histogram Equalization (CLAHE)
Adaptive histogram equalization (AHE) is a computer image processing technique used to improve contrast in images. It differs from ordinary histogram equalization in that the adaptive method computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the lightness values of the image. It is therefore suitable for improving the local contrast of an image and bringing out more detail. However, AHE has a tendency to over-amplify noise in relatively homogeneous regions of an image.
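The contrast-limiting step that distinguishes CLAHE from plain AHE can be illustrated on a single tile: the tile's histogram is clipped at a threshold, the excess is redistributed uniformly, and the equalization mapping is built from the clipped histogram. This is a simplified sketch under stated assumptions (one tile only, an arbitrary clip limit, and integer redistribution that drops the remainder); full CLAHE also tiles the whole image and bilinearly interpolates the per-tile mappings.

```python
import numpy as np

def clipped_equalize(tile, clip_limit=40, levels=256):
    """Histogram-equalize one tile with a CLAHE-style clip-and-redistribute step."""
    hist, _ = np.histogram(tile, bins=levels, range=(0, levels))
    excess = np.maximum(hist - clip_limit, 0).sum()
    # clip each bin at clip_limit and spread the excess uniformly over all bins
    hist = np.minimum(hist, clip_limit) + excess // levels
    cdf = np.cumsum(hist)
    # mapping function: scale the cumulative histogram to the grey-level range
    lut = np.round(cdf / cdf[-1] * (levels - 1)).astype(np.uint8)
    return lut[tile]

rng = np.random.default_rng(1)
tile = rng.integers(100, 140, (8, 8))  # a low-contrast tile (values 100..139)
out = clipped_equalize(tile)           # output is stretched toward the full range
```

Lowering clip_limit suppresses the noise amplification described above, at the cost of a weaker contrast boost.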
III. PROPOSED WORK
First a query image is read, and then the images from the database are read. We then apply CLAHE (Contrast Limited Adaptive Histogram Equalization) to the images. After that we extract features using the Discrete Wavelet Transform (DWT); all extracted features form a feature vector. The query image is then compared with the database images by calculating the Euclidean distance. All relevant images are ranked by sorting them in ascending order of Euclidean distance, and the images with the highest rank are displayed; the system thus ranks the images and displays the most relevant ones to the user. The CBIR system's performance is measured using precision and recall. The experiments are conducted using MATLAB 7.0, with an image database from our practical lab.
Figure 2. Implementation of CBIR
An image retrieval system is a computer system for browsing, searching and retrieving images from a large database of digital images. Most traditional and common methods of image retrieval add metadata such as captions, keywords, or descriptions to the images, so that retrieval can be performed over the annotation words; manual image annotation, however, is laborious. When the input data to an algorithm is too large to be processed and is suspected to be notoriously redundant (e.g., the same measurement in both feet and meters), it is transformed into a reduced representation set of features (also called a feature vector). Transforming the input data into a set of features is called feature extraction. If the features are carefully chosen, the feature set is expected to capture the relevant information from the input data, so that the desired task can be performed using this reduced representation instead of the full-size input. The transform of a signal is just another way of representing the signal; it does not change the information content present in the signal. The Wavelet Transform provides a time-frequency representation of the signal. It was developed to overcome the shortcomings of the Short Time Fourier Transform (STFT), which can also be used to analyze non-stationary signals. While the STFT gives a constant resolution at all frequencies, the Wavelet Transform uses a multi-resolution technique in which different frequencies are analyzed with different resolutions. A wave is an oscillating, periodic function of time or space. In contrast, wavelets are localized waves: they have their energy concentrated in time or space and are suited to the analysis of transient signals.
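As a concrete illustration of the DWT feature-extraction step, a single level of the 2-D Haar wavelet decomposition can be written directly with array slicing, with simple statistics of each subband serving as the feature vector. This is a stand-in sketch: the paper does not specify the wavelet, the decomposition depth, or the subband statistics used, and the averaging convention here differs from the orthonormal Haar scaling by a constant factor.

```python
import numpy as np

def haar_dwt2(image):
    """One level of the 2-D Haar DWT (averaging convention): returns the
    approximation (LL) and the horizontal/vertical/diagonal details (LH, HL, HH)."""
    x = image.astype(float)
    # transform rows: pairwise averages (low-pass) and half-differences (high-pass)
    lo = (x[:, 0::2] + x[:, 1::2]) / 2
    hi = (x[:, 0::2] - x[:, 1::2]) / 2
    # transform columns of each intermediate result
    ll = (lo[0::2, :] + lo[1::2, :]) / 2
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

def dwt_features(image):
    """Feature vector: mean and standard deviation of each subband."""
    return np.array([stat for band in haar_dwt2(image)
                     for stat in (band.mean(), band.std())])

img = np.arange(64).reshape(8, 8)      # a smooth ramp image
ll, lh, hl, hh = haar_dwt2(img)        # each subband is half-size: 4x4
```

On this smooth ramp the diagonal detail HH is exactly zero, matching the intuition that detail subbands concentrate energy only where the signal is transient.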
Figure 3 Flow chart of image retrieval without using CLAHE
Figure 4 Flow chart of image retrieval using CLAHE
IV. RESULT AND DISCUSSION
The CBIR system's performance is measured using precision and recall. The experiments are conducted using MATLAB 7.0, with an image database from our practical lab. Precision is defined as the number of relevant images retrieved by a search divided by the total number of images retrieved by the search. Recall is defined as the number of relevant images retrieved divided by the total number of existing relevant images. Precision and recall are calculated by the following formulas:

Precision = (No. of relevant images retrieved) / (Total no. of images retrieved)

Recall = (No. of relevant images retrieved) / (Total no. of relevant images in the database)
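These two measures are straightforward to compute once the retrieved set and the relevant set are known (the counts below are made-up illustrative values, not the paper's data):

```python
def precision_recall(retrieved, relevant):
    """Precision = relevant retrieved / total retrieved.
       Recall    = relevant retrieved / total relevant in the database."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# 10 images retrieved; 25 relevant images exist; 7 of the retrieved are relevant.
retrieved = [f"img{i}" for i in range(10)]       # img0 .. img9
relevant = [f"img{i}" for i in range(3, 28)]     # img3 .. img27
p, r = precision_recall(retrieved, relevant)     # p = 7/10, r = 7/25
```

Note the trade-off visible in the formulas: retrieving more images can only raise recall, while precision tends to fall as irrelevant images enter the result set.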
In this paper, a similarity comparison technique is used to improve the performance of the CBIR system and the similarity between the query image and the retrieved images. The technique is as follows:
Euclidean Distance: D(Q, I) = sqrt( sum_k (f_Q,k - f_I,k)^2 ), where f_Q = (f_Q,1, ..., f_Q,n) and f_I = (f_I,1, ..., f_I,n) are the feature vectors of the query image Q and a database image I.
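The Euclidean distance, and the Manhattan (city-block) distance that the conclusion also uses for similarity comparison, differ only in how coordinate differences are aggregated. A short illustration with made-up feature vectors:

```python
import numpy as np

def euclidean(f_q, f_i):
    """Square root of the sum of squared coordinate differences."""
    return float(np.sqrt(np.sum((f_q - f_i) ** 2)))

def manhattan(f_q, f_i):
    """Sum of absolute coordinate differences (city-block distance)."""
    return float(np.sum(np.abs(f_q - f_i)))

q = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 6.0, 3.0])
d_euc = euclidean(q, v)   # sqrt(3^2 + 4^2 + 0^2) = 5.0
d_man = manhattan(q, v)   # 3 + 4 + 0 = 7.0
```

Because it does not square the differences, the Manhattan distance is less dominated by a single large coordinate mismatch, which is one reason to compare both in a retrieval experiment.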
Images are retrieved in two ways: first without using Contrast Limited Adaptive Histogram Equalization (CLAHE) and then using CLAHE. CLAHE differs from ordinary adaptive histogram equalization in its contrast limiting. This feature can also be applied to global histogram equalization, giving rise to contrast-limited histogram equalization (CLHE), which is rarely used in practice. In the case of CLAHE, the contrast-limiting procedure has to be applied to each neighbourhood from which a transformation function is derived. CLAHE was developed to prevent the over-amplification of noise that adaptive histogram equalization can give rise to.
3.1 Images retrieved without using CLAHE
Figure 5 Query image
First the query image is read, and then ten relevant images from the database are read. After reading the ten relevant images from the large database, features are extracted from each image; the Discrete Wavelet Transform (DWT) is used for feature extraction. For similarity comparison, the query image and the images from the database are compared using the Euclidean distance. In Figure 5 the query image is read, and in Figure 6 ten relevant images are retrieved from the database with respect to the query image on the basis of Euclidean distance.
Figure 6 Ten relevant images
3.2 Images retrieved using CLAHE
For better performance of the CBIR system, Contrast Limited Adaptive Histogram Equalization (CLAHE) is used. CLAHE helps prevent the over-amplification of noise.
Figure 7 Query image
Then the same procedure as in the non-CLAHE case is applied to retrieve images. After applying CLAHE, features are extracted using the DWT, and the query image is compared with the images in the database using the Euclidean distance. After determining the Euclidean distance of the query image to every other image in the database, the image with the least Euclidean distance is considered the best match. In Figure 7 a query image is read, and Figure 8 shows that ten relevant images are retrieved on the basis of Euclidean distance.
Figure 8 Ten relevant images
Table 1 compares the Euclidean distances of some of the retrieved images without CLAHE and with CLAHE.
Table 1. Comparison between Euclidean distances of retrieved images with and without CLAHE

METHOD          1st  2nd     3rd     4th     5th     6th     7th     8th     9th
Without CLAHE   0    5.4350  6.1876  6.8184  6.8567  6.9514  6.9593  7.2008  7.5123
With CLAHE      0    4.8434  5.5245  6.0002  6.1289  6.2159  6.2332  6.3492  6.6161

Figure 9 Comparison between Euclidean distances of retrieved images using CLAHE and without CLAHE
In Figure 9, two line graphs are plotted with the Euclidean distance values, first without using CLAHE and then using CLAHE. Precision and recall are then calculated for all the images, one by one, to find the retrieval accuracy. The images in the database are categorized, and precision and recall are calculated one by one in each category for a better assessment of accuracy.
Table 2. Performance analysis of the CBIR system with and without CLAHE, by average precision and average recall

Method                Average precision   Average recall
CBIR without CLAHE    0.35                0.44
CBIR using CLAHE      0.40                0.47
Figure 10. Performance analysis of the CBIR system using CLAHE and without using CLAHE, by average precision and average recall
Thus, as seen in Table 2, the average precision and average recall are calculated and compared, first without CLAHE and then using CLAHE, and in Figure 10 a bar graph is plotted for the performance analysis of the CBIR system with and without CLAHE in terms of average precision and average recall.
V. CONCLUSION
This paper outlined an approach that aims at improving the performance of CBIR systems, and implemented a method which improves CBIR performance by using CLAHE; accuracy is enhanced in terms of precision and recall. Two techniques are used for similarity comparison: Euclidean distance and Manhattan distance. This allows comparing the images retrieved first by Euclidean distance and then by Manhattan distance. After determining the Euclidean and Manhattan distances of the query image to the other images in the database, the image with the least distance is considered the best match. The Euclidean and Manhattan distances of the retrieved images are compared first without CLAHE and then using CLAHE, and the performance of the CBIR system is analysed with and without CLAHE by calculating precision and recall. Both precision and recall increase for images retrieved after applying CLAHE, with both similarity comparison techniques, i.e. Euclidean distance and Manhattan distance. Thus it is shown that applying CLAHE increases the performance of the CBIR system in terms of precision and recall.
REFERENCES
[1] S.-K. Chang and T. Kunii, "Pictorial database applications," IEEE Computer 14 (11) (1981) 13-21.
[2] P. G. B. Enser, "Pictorial information retrieval," Journal of Documentation 51 (2) (1995) 126-170.
[3] A. Gupta and R. Jain, "Visual information retrieval," Communications of the ACM 40 (5) (1997) 70-79.
[4] Y. Rui, T. S. Huang and S.-F. Chang, "Image retrieval: Past, present and future," in: M. Liao (Ed.), Proceedings of the International Symposium on Multimedia Information Processing, Taipei, Taiwan, 1997.
[5] J. P. Eakins and M. E. Graham, "Content based image retrieval," Tech. Rep. JTAP-039, JISC Technology Application Program, Newcastle upon Tyne, 2000.
[6] C. C. Venters and M. Cooper, "Content based image retrieval," Tech. Rep. JTAP-054, JISC Technology Application Program, 2000.
[7] A. W. M. Smeulders, M. Worring, S. Santini, A. Gupta and R. Jain, "Content based image retrieval at the end of the early years," IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (12) (2000).
[8] S. A. Chatzichristofis and Y. S. Boutalis, "Img(Rummager): An interactive content based image retrieval system," Second International Workshop on Similarity Search and Applications, pp. 151-153, 2009.
[9] M. Lam, T. Disney, M. Pham, D. Raicu, J. Furst and R. Susomboon, "Content-based image retrieval for pulmonary computed tomography nodule images," National Science Foundation, pp. 155-167, 2008.
[10] N. Singh, K. Singh and A. K. Sinha, "A novel approach for content based image retrieval," Procedia Technology 4, Elsevier, pp. 245-250, 2012.
[11] Z. Zhang and W. Li, "An improving technique of color histogram in segmentation-based image retrieval," Fifth International Conference on Information Assurance and Security, pp. 381-384, 2009.
[12] M. A. Veganzones and M. Graña, "A spectral/spatial CBIR system for hyperspectral images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 488-500, April 2012.
[13] L. Zhang, L. Wang and W. Lin, "Generalized biased discriminant analysis for content-based image retrieval," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 1, pp. 282-290, Feb. 2012.
[14] G. Schaefer, "Content-based retrieval from image databases: colour, compression, and browsing," IEEE Trans. Image Process., vol. 29, pp. 5-10, 2010.
[15] J. G. Dy, C. E. Brodley, A. Kak, L. S. Broderick and A. M. Aisen, "Unsupervised feature selection applied to content-based retrieval of lung images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 3, pp. 373-378, March 2003.
[16] Z. Su, H. Zhang, S. Li and S. Ma, "Relevance feedback in content-based image retrieval: Bayesian framework, feature subspaces, and progressive learning," IEEE Transactions on Image Processing, vol. 12, no. 8, pp. 924-937, Aug. 2003.
[17] R. Krishnapuram, S. Medasani, S.-H. Jung and Y.-S. Choi, "Content-based image retrieval based on a fuzzy approach," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 10, pp. 1185-1199, Oct. 2004.
[18] D. P. Huijsmans and N. Sebe, "How to complete performance graphs in content-based image retrieval: Add generality and normalize scope," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 245-251, Feb. 2005.