This document discusses methods for multimodal content-based medical image retrieval. It describes how medical images are used for diagnosis, research and education. Content-based image retrieval systems aim to search for and retrieve similar images based on visual features extracted from the images, such as color, texture and shape. The document outlines the different image descriptors and similarity measures that can be used. It also discusses the need to fuse multiple modalities or features to improve retrieval accuracy, since no single feature fully captures image content; both feature-level and decision-level fusion techniques are important to the multimodal fusion process.
A new approach for content-based image retrieval for medical applications usi...IJECEIAES
Content-based image retrieval (CBIR) has become an important area of medical imaging research and is achieving considerable success. More applications still need to be developed to obtain more powerful systems for image similarity matching, and hence better image retrieval. This research focuses on implementing low-level descriptors to maximize the quality of medical image retrieval and thereby improve image similarity matching. A system that uses low-level descriptors is introduced: three descriptors have been developed and applied in an attempt to increase the accuracy of image matching. The final results showed that the system is well suited to medical image retrieval, especially since these low-level image descriptors had not previously been used for image similarity matching in the medical field.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
Engineering Research Publication
Best International Journals, High Impact Journals,
International Journal of Engineering & Technical Research
ISSN : 2321-0869 (O) 2454-4698 (P)
www.erpublication.org
Using a Bag of Words for Automatic Medical Image Annotation with a Latent Sem...ijaia
We present in this paper a new approach for the automatic annotation of medical images. It uses the "bag-of-words" approach to represent the visual content of the medical image, combined with tf.idf-based text descriptors reduced by latent semantic analysis to extract the co-occurrences between terms and visual terms. A medical report consists of a text describing a medical image. First, we index the text and extract all relevant terms using a thesaurus containing MeSH medical concepts. In a second phase, the medical image is indexed by recovering regions of interest that are invariant to changes in scale, illumination, and tilt. To annotate a new medical image, we use the bag-of-words approach to recover its feature vector, then use the vector space model to retrieve similar medical images from the training database. The relevance value of a database image with respect to the query image is computed with the cosine function. We conclude with an experiment carried out on five types of radiological imaging to evaluate the performance of our medical annotation system. The results showed that our approach performs better with more images from skull radiology.
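The retrieval step described above, ranking training images by the cosine similarity of tf.idf vectors, can be sketched in plain Python. This is a minimal illustration of the vector space model, not the paper's implementation; the tokenized "documents" here stand in for the indexed report terms.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute tf.idf vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * idf[t] for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

A query vector built the same way is compared against every stored vector, and the highest-scoring training images supply the annotation terms.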
ZERNIKE-ENTROPY IMAGE SIMILARITY MEASURE BASED ON JOINT HISTOGRAM FOR FACE RE...AM Publications
Image similarity for face recognition requires tools that are both powerful and stable under challenges such as varying illumination, different environments, and complex poses. In this paper, we combine two robust tools for image similarity and face recognition, Zernike moments and information theory, into one proposed measure, the Zernike-Entropy Image Similarity Measure (Z-EISM). Z-EISM incorporates the concept of Picard entropy with a modified one-dimensional version of the two-dimensional joint histogram of the two images under test. Four datasets have been used to test and compare the measure, showing that the proposed Z-EISM outperforms the existing measures.
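The joint-histogram ingredient of such entropy-based measures is easy to illustrate. The sketch below uses ordinary Shannon entropy of the two-dimensional joint histogram as a stand-in (the paper's Picard entropy and its 1-D modification are not reproduced here): identical images concentrate the joint histogram on the diagonal, giving the lowest joint entropy, so the negative entropy can act as a similarity score.

```python
import numpy as np

def joint_histogram(a, b, bins=16):
    """Normalised 2-D joint histogram of two equal-size grayscale
    images with intensity values in 0..255."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                             bins=bins, range=[[0, 256], [0, 256]])
    return h / h.sum()

def joint_entropy(a, b, bins=16):
    """Shannon entropy (bits) of the joint histogram; lower means the
    two images are more similar."""
    p = joint_histogram(a, b, bins)
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return -np.sum(p * np.log2(p))
```

Comparing an image with itself yields a strictly lower joint entropy than comparing it with an unrelated image, which is the property such measures exploit.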
META-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVALIJCSEIT Journal
The document proposes an approach combining automatic relevance feedback and particle swarm optimization for image retrieval. It constructs a visual feature database from image features like color moments and Gabor filters. For a query image, it retrieves similar images and generates automatic relevance feedback by labeling images as relevant or irrelevant. It then uses particle swarm optimization to re-weight features and retrieve more relevant images over multiple iterations, splitting the swarm in later iterations. An experiment on Corel images over 5 classes showed the approach could effectively retrieve relevant images through this meta-heuristic process without human interaction.
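The core loop, a particle swarm searching for feature weights that pull labelled-relevant images toward the query and push irrelevant ones away, can be sketched as below. The fitness function, swarm size and update constants are illustrative assumptions, not the paper's settings, and the swarm-splitting refinement is omitted.

```python
import numpy as np

def pso_feature_weights(query, relevant, irrelevant,
                        n_particles=20, iters=50, seed=0):
    """Toy PSO: find per-feature weights that make the query close to
    relevant images and far from irrelevant ones (illustrative only)."""
    rng = np.random.default_rng(seed)
    dim = query.shape[0]

    def fitness(w):
        w = np.abs(w)                 # weights are used as magnitudes
        d_rel = np.mean(np.sqrt(((relevant - query) ** 2 * w).sum(axis=1)))
        d_irr = np.mean(np.sqrt(((irrelevant - query) ** 2 * w).sum(axis=1)))
        return d_rel - d_irr          # smaller is better

    pos = rng.random((n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # standard velocity update: inertia + cognitive + social terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return np.abs(gbest)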
This paper analyses feature selection methods used in medical image processing, and how images are selected using diverse methods such as screening, scanning, and selecting. We discuss the feature selection procedure, which is extensively used in data mining and knowledge discovery: it eliminates redundant features while retaining the fundamental discriminative information, so feature selection implies less data transmission and more efficient data mining. The paper emphasizes the need for further research in pattern recognition that can effectively assess the condition of a captured portion of the human body.
Towards Semantic Clustering – A Brief OverviewCSCJournals
Image clustering is an important technology which helps users to get hold of the large amount of online visual information, especially after the rapid growth of the Web. This paper focuses on image clustering methods and their application in image collection or online image repository. Current progress of image clustering related to image retrieval and image annotation are summarized and some open problems are discussed. Related works are summarized based on the problems addressed, which are image segmentation, compact representation of image set, search space reduction, and semantic gap. Issues are also identified in current progress and semantic clustering is conjectured to be the potential trend. Our framework of semantic clustering as well as the main abstraction levels involved is briefly discussed.
Vertical intent prediction approach based on Doc2vec and convolutional neural...IJECEIAES
Vertical selection is the task of selecting the verticals most relevant to a given query in order to improve the diversity and quality of web search results. This task requires not only predicting relevant verticals but also ensuring that these verticals are the ones the user expects to be relevant for his particular information need. Most existing works focus on traditional machine learning techniques that combine multiple types of features to select several relevant verticals. Although these techniques are efficient, handling vertical selection with high accuracy remains a challenging research task. In this paper, we propose an approach for improving vertical selection so as to satisfy the user's vertical intent and reduce the user's browsing time and effort. First, it generates query embedding vectors using the doc2vec algorithm, which preserves the syntactic and semantic information within each query. Second, this vector is used as input to a convolutional neural network model that enriches the representation of the query with multiple levels of abstraction, including rich semantic information, and then creates a global summarization of the query features. We demonstrate the effectiveness of our approach through comprehensive experiments on various datasets. Our experimental findings show that our system achieves significant accuracy and makes accurate predictions on new unseen data.
Materialized View Generation Using Apriori Algorithmijdms
Data analysis is an important issue in business world in many respects. Different business organizations
have data scientists, knowledge workers to analyze the business patterns and the customer behavior.
Scrutinizing the past data to predict the future result has many aspects and understanding the nature of the
query is one of them. Business analysts try to do this from a big data set which may be stored in the form of
data warehouse. In this context, analysis of historical data has become a subject of interest. Regarding this,
different techniques are being developed to study the pattern of customer behavior. Materialized view is a
database object which can be extensively used in data analysis. Different approaches are there to generate
optimum materialized view. This paper proposes an algorithm which generates a materialized view by
considering the frequencies of the attributes taken from a database with the help of Apriori algorithm.
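The Apriori step the paper relies on, finding attribute sets whose co-occurrence frequency in past queries meets a support threshold, can be sketched as follows. This is the textbook algorithm; how the frequent sets are then turned into a materialized view is the paper's contribution and is not shown.

```python
def apriori(transactions, min_support):
    """Classic Apriori: return every attribute set whose support (as a
    fraction of the number of transactions) meets min_support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    items = sorted({i for t in transactions for i in t})
    current = [frozenset([i]) for i in items
               if support(frozenset([i])) >= min_support]
    frequent = {s: support(s) for s in current}
    k = 2
    while current:
        # candidate k-sets are unions of frequent (k-1)-sets
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        current = [c for c in candidates if support(c) >= min_support]
        frequent.update({c: support(c) for c in current})
        k += 1
    return frequent
```

With query logs as transactions and queried columns as items, the frequent itemsets identify attribute combinations worth materializing together.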
MULTI-LEVEL FEATURE FUSION BASED TRANSFER LEARNING FOR PERSON RE-IDENTIFICATIONgerogepatton
Most current methods treat person re-identification as a classification problem and commonly use neural networks. However, these methods use only high-level convolutional features to express the feature representation of pedestrians. Moreover, the current datasets for person re-identification are relatively small; with so little training data, deep convolutional networks are difficult to train adequately, so it is very worthwhile to introduce auxiliary datasets to help training. To address this problem, this paper proposes a novel deep transfer learning method that combines a comparison model with a classification model and a multi-level fusion of convolutional features on the basis of transfer learning. In a multi-layer convolutional network, the features of each layer are a dimensionality reduction of the previous layer's output, but the information in multi-level features is not only nested but also partly complementary. We can exploit the information gap between different layers of a convolutional neural network to extract a better feature expression. Finally, the proposed algorithm is fully tested on four datasets (VIPeR, CUHK01, GRID and PRID450S). The obtained re-identification results prove the effectiveness of the algorithm.
ONTOLOGY-DRIVEN INFORMATION RETRIEVAL FOR HEALTHCARE INFORMATION SYSTEM : ...IJNSA Journal
In health research, one of the major tasks is to retrieve and analyze heterogeneous databases containing a single patient's information, gathered from a large volume of data over a long period of time. The main objective of this paper is to present our ontology-based information retrieval approach for a clinical information system. We performed a case study in a real-life hospital setting. The results obtained illustrate the feasibility of the proposed approach, which significantly improved the information retrieval process on a large volume of data collected over a long period, from August 2011 until January 2012.
Selecting the correct Data Mining Method: Classification & InDaMiTe-RIOSR Journals
This document describes an intelligent data mining assistant called InDaMiTe-R that aims to help users select the correct data mining method for their problem and data. It presents a classification of common data mining techniques organized by the goal of the problem (descriptive vs predictive) and the structure of the data. This classification is meant to model the human decision process for selecting techniques. The document then describes InDaMiTe-R, which uses a case-based reasoning approach to recommend techniques based on past user experiences with similar problems and data. An example of its use is provided to illustrate how it extracts problem metadata, gets user restrictions, recommends initial techniques, and learns from the user's evaluations to improve future recommendations. A small evaluation
SURVEY PAPER ON OUT LIER DETECTION USING FUZZY LOGIC BASED METHODIJCI JOURNAL
Fuzzy logic can be used to reason like humans and can deal with uncertainty other than randomness. Outlier detection is a difficult task because of the uncertainty involved: the outlier itself is a fuzzy concept and hard to determine in a deterministic way. Fuzzy logic systems are therefore very promising, since they directly tackle the situations associated with outliers, addressing the seemingly conflicting goals of (i) removing noise and (ii) smoothing out outliers while preserving other salient features. This paper surveys fuzzy logic methods used for outlier detection, discussing their pros and cons, and is thus a helpful document for new researchers in this field.
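The idea that "outlier" is a matter of degree rather than a yes/no label can be illustrated with a tiny membership function. The robust statistics (median and MAD) and the membership thresholds below are illustrative assumptions, not a method from any of the surveyed papers.

```python
def fuzzy_outlier_degree(values):
    """Assign each value a fuzzy membership degree in the 'outlier' set,
    based on its distance from the median scaled by the median absolute
    deviation (MAD).  Thresholds are illustrative."""
    s = sorted(values)
    n = len(s)
    median = (s[n // 2] + s[(n - 1) // 2]) / 2
    mad = sorted(abs(v - median) for v in values)[n // 2] or 1e-9
    degrees = []
    for v in values:
        z = abs(v - median) / mad
        # piecewise-linear membership: 0 below z = 2, rising to 1 at z = 6
        degrees.append(min(1.0, max(0.0, (z - 2) / 4)))
    return degrees
```

Points near the bulk of the data get membership 0, extreme points get 1, and borderline points get an intermediate degree instead of a hard label.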
Age Invariant Face Recognition using Convolutional Neural Network IJECEIAES
In recent years, face recognition across aging has become a popular and challenging task in the area of face recognition. Many researchers have contributed to this area, but there is still a significant gap to fill. The selection of feature extraction and classification algorithms plays an important role here. Deep learning with convolutional neural networks provides a combination of feature extraction and classification in a single structure. In this paper, we present a novel 7-layer CNN architecture for recognizing facial images across aging. We have carried out extensive experiments to test the performance of the proposed system using two standard datasets, FGNET and MORPH (Album II). The rank-1 recognition accuracy of our proposed system is 76.6% on FGNET and 92.5% on MORPH (Album II). Experimental results show a significant improvement over the available state of the art with the proposed CNN architecture and classifier.
AN EFFICIENT PSO BASED ENSEMBLE CLASSIFICATION MODEL ON HIGH DIMENSIONAL DATA...ijsc
As biomedical databases grow day by day, finding the essential features for disease prediction has become more complex due to high dimensionality and sparsity. Also, given the large number of micro-array datasets in biomedical repositories, it is difficult to analyze, predict and interpret feature information using traditional feature-selection-based classification models. Most traditional feature-selection-based classification algorithms face computational issues such as dimension reduction, uncertainty and class imbalance on microarray datasets. The ensemble classifier is one of the scalable models for extreme learning machines, owing to its high efficiency and fast processing speed in real-time applications. The main objective of feature-selection-based ensemble learning models is to classify high dimensional data with high computational efficiency and a high true positive rate. In this proposed model, an optimized particle swarm optimization (PSO) based ensemble classification model was developed for high dimensional microarray datasets. Experimental results proved that the proposed model has high computational efficiency compared to traditional feature-selection-based classification models as far as accuracy, true positive rate and error rate are concerned.
The document proposes developing a content-based image retrieval system using perceptual texture features for biomedical image databases. It performs a literature review of prior work on texture feature extraction and perceptual texture features like coarseness, contrast, directionality and busyness. It then describes computational measures to estimate these perceptual texture features from images and their autocorrelation functions. These include measures of coarseness based on maxima counts, contrast based on autocorrelation function slope, and directionality based on dominant orientations. The proposed system would apply these texture feature extraction and matching techniques to build a knowledge-based expert system for retrieving dental images.
A Survey on Content Based Image Retrieval SystemYogeshIJTSRD
The rapid growth of image databases in practically every industry, including medical science, multimedia, geographic information systems, photography and journalism, necessitates the development of an effective and efficient approach to image processing. Content-based image retrieval recovers images based on their content, such as texture, colour, shape, and spatial layout. However, because of the semantic mismatch between users' high-level notions and images' low-level properties, retrieving the right image is extremely challenging, and many concepts have been presented in an effort to close this gap. Images can be stored and retrieved using a variety of properties, one of which is texture. Content-based image retrieval has become a popular study area as a result of the growth of video and image data in digital form. Digital data, such as criminal photographs, fingerprints, and scene photographs, is widely used in the forensic sciences, so organizing such enormous amounts of visual data, for example quickly finding an image of interest, becomes a major difficulty. There is a pressing need for an effective method of locating images: an image must be represented with particular features in order to be found, and three significant visual qualities of an image are colour, texture, and shape. Image search using colour, texture, and shape attributes has received a great deal of attention. Preeti Sondhi | Umar Bashir "A Survey on Content Based Image Retrieval System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-5, August 2021, URL: https://www.ijtsrd.com/papers/ijtsrd43777.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/43777/a-survey-on-content-based-image-retrieval-system/preeti-sondhi
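The colour feature mentioned above is the simplest CBIR descriptor to demonstrate: build a colour histogram per image and rank the database by histogram intersection with the query. This is a generic sketch of the technique, not the survey's system; bin count and the intersection measure are common illustrative choices.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel histogram of an RGB image (H x W x 3, values 0..255),
    concatenated and normalised into one feature vector."""
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    v = np.concatenate(feats).astype(float)
    return v / v.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical histograms."""
    return np.minimum(h1, h2).sum()

def retrieve(query, database, top_k=3):
    """Return indices of the top_k database images ranked by histogram
    intersection with the query image."""
    q = color_histogram(query)
    scores = [(histogram_intersection(q, color_histogram(img)), i)
              for i, img in enumerate(database)]
    return [i for _, i in sorted(scores, reverse=True)[:top_k]]
```

Texture and shape descriptors would be concatenated into the same feature vector in a fuller system; colour alone already separates grossly different images.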
SIGNIFICANCE OF DIMENSIONALITY REDUCTION IN IMAGE PROCESSING sipij
The aim of this paper is to present a comparative study of two linear dimension reduction methods, PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high dimensional input space onto the feature space in which the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes while minimizing the distance within classes: PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes that best separate the classes. A neural network is trained on the reduced feature set (using PCA or LDA) of the images in the database, using the back propagation algorithm, for fast image search. The proposed method is tested on a general image database using Matlab, and performance is evaluated by precision and recall measures. Experimental results show that PCA gives better performance in terms of higher precision and recall values, with lower computational complexity than LDA.
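The PCA half of the comparison reduces to an eigendecomposition of the data covariance, sketched below in NumPy (the paper works in Matlab; this is just the standard algorithm, and the supervised LDA half is omitted since it additionally needs class labels).

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto the k directions of
    maximal variance, via an eigendecomposition of the covariance."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    components = eigvecs[:, ::-1][:, :k]     # take the top-k by variance
    return Xc @ components
```

The projected coordinates are the reduced feature set that would then be fed to the back-propagation network for indexing.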
This document summarizes research from the Finnish Center for Artificial Intelligence (FCAI).
1. FCAI conducts fundamental AI research and aims to apply this research to have high societal impact. It builds on decades of AI foundation work at universities in Finland.
2. FCAI's research focuses on understandable and data-efficient AI, including privacy-preserving machine learning and interactive expert knowledge elicitation to improve predictive models.
3. Other areas of research include modeling human-AI interaction, inverse modeling of user behavior, and endowing AI with a theory of mind to understand users.
This document discusses image mining techniques for image retrieval. It provides an overview of the image mining process which involves processing images, extracting features, and mining for information and knowledge. The document then surveys various feature extraction techniques used in image mining, including color, texture, and shape features. It discusses how features like color histograms, textures, and invariant moments can be extracted from images and used for content-based image retrieval. Finally, the document reviews several papers on image mining techniques and how they extract different features from images for applications like digital forensics and image retrieval.
This document discusses using machine learning clustering algorithms to analyze stock market data. It compares the K-means, COBWEB, DBSCAN, EM and OPTICS clustering algorithms in the WEKA tool on a stock market dataset containing 420 instances and 6 attributes. The K-means algorithm had the best performance with the lowest error and fastest runtime. It clustered the data into 4 groups in 0.16 seconds. The COBWEB algorithm clustered the data into 107 groups in 27.88 seconds. The DBSCAN algorithm found 21 clusters in 3.97 seconds. The paper concludes that K-means is best suited for stock market data mining applications due to its simplicity and speed compared to other algorithms.
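The K-means procedure the comparison favours is short enough to sketch in full. This is plain Lloyd's algorithm in NumPy, not WEKA's implementation; initialization by random sampling is one common choice.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster emptied
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```

Its speed advantage over COBWEB or DBSCAN in the study comes from exactly this simplicity: one distance matrix and one mean per iteration.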
This document summarizes a research paper on face recognition using Gabor features and PCA. It begins with an introduction to face recognition and discusses challenges like lighting, pose, and orientation. It then describes how the proposed system uses Gabor wavelets for preprocessing to reduce variations from pose, lighting, etc. Principal component analysis (PCA) is used to extract low dimensional and discriminating feature vectors from the preprocessed images. These feature vectors are then used for classification with k-nearest neighbors. The proposed system was tested on the Yale face database containing 100 images of 10 subjects with variable illumination and expressions.
Introduction to feature subset selection methodIJSRD
Data mining is a computational process for discovering patterns in large data sets. It comprises various important techniques, one of which is classification, which has recently received great attention in the database community. Classification can solve problems in fields such as medicine, industry, business and science. PSO is an optimization method based on social behaviour. Feature selection (FS) involves finding a subset of prominent features to improve predictive accuracy and to remove redundant features. Rough set theory (RST) is a mathematical tool that deals with the uncertainty and vagueness of decision systems.
Atkins is one of the world's leading providers of professional and technological consultancy services, employing over 18,000 staff across offices worldwide. They offer a wide range of geotechnical skills and services for projects such as airports, foundations, tunnels, slopes, and more. For further details on their capabilities, interested parties can email the provided contact.
Multimodal Searching and Semantic Spaces: ...or how to find images of Dalmati...Jonathon Hare
Tutorial at the "Reality of the Semantic Gap in Image Retrieval" tutorial at the first international conference on Semantics And digital Media Technology (SAMT 2006). 6th December 2006.
This document outlines a presentation on content-based image retrieval (CBIR). It discusses the motivation for CBIR by describing limitations of text-based image retrieval, such as problems with image annotation, human perception, and queries that cannot be described with text. CBIR allows images to be retrieved based on automatically extracted visual features like color, texture, and histograms. A typical CBIR system extracts image features and then matches features to find visually similar images. Applications of CBIR include crime prevention, security, medical diagnosis, and intellectual property. The conclusion states that CBIR reduces computation time and increases user interaction compared to other methods.
1. The document discusses various methods used to investigate the structure of fibers, including nuclear magnetic resonance, infrared spectroscopy, optical and x-ray diffraction, thermal analysis, optical microscopy, electron microscopy, and density measurement.
2. It provides details on specific techniques like nuclear magnetic resonance spectroscopy, optical diffraction, x-ray diffraction, and electron microscopy and electron diffraction. These techniques help determine properties of fibers like composition, molecular structure, crystallinity, and orientation.
3. The structure investigation of fibers is important to understand fiber properties in order to improve their use in textiles. Different methods are used to study characteristics like chemical bonding, molecular spacing, and cross-sectional structure.
Action plans were outlined for five weeks. Week 1 focused on the actions planned for that week. Week 2 involved feature extraction from the images in a database and from the query image. Week 3's action was unspecified. Week 4 involved splitting images into RGB components and applying discrete Fourier transforms. Weeks 5 and 6 involved sectorizing image features and comparing them to a component database to evaluate performance.
Vertical intent prediction approach based on Doc2vec and convolutional neural...IJECEIAES
Vertical selection is the task of selecting the most relevant verticals to a given query in order to improve the diversity and quality of web search results. This task requires not only predicting relevant verticals but also these verticals must be those the user expects to be relevant for his particular information need. Most existing works focused on using traditional machine learning techniques to combine multiple types of features for selecting several relevant verticals. Although these techniques are very efficient, handling vertical selection with high accuracy is still a challenging research task. In this paper, we propose an approach for improving vertical selection in order to satisfy the user vertical intent and reduce user’s browsing time and efforts. First, it generates query embeddings vectors using the doc2vec algorithm that preserves syntactic and semantic information within each query. Secondly, this vector will be used as input to a convolutional neural network model for increasing the representation of the query with multiple levels of abstraction including rich semantic information and then creating a global summarization of the query features. We demonstrate the effectiveness of our approach through comprehensive experimentation using various datasets. Our experimental findings show that our system achieves significant accuracy. Further, it realizes accurate predictions on new unseen data.
Materialized View Generation Using Apriori Algorithmijdms
Data analysis is important to the business world in many respects. Business organizations employ data scientists and knowledge workers to analyze business patterns and customer behavior. Scrutinizing past data to predict future results has many aspects, and understanding the nature of the query is one of them. Business analysts try to do this over big data sets that may be stored in a data warehouse, so the analysis of historical data has become a subject of interest, and different techniques are being developed to study patterns of customer behavior. A materialized view is a database object that can be used extensively in data analysis, and several approaches exist for generating an optimal materialized view. This paper proposes an algorithm that generates a materialized view by considering the frequencies of attributes taken from a database, with the help of the Apriori algorithm.
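The core idea above, counting how often attribute sets co-occur across past queries (Apriori's support-counting step) and keeping the frequent sets as candidates for a materialized view, can be sketched as follows. The queries and the support threshold are invented for illustration; the paper's full algorithm (and proper Apriori candidate pruning) is more involved.

```python
# Support counting over historical query attribute sets; frequent sets
# become candidate columns for a materialized view.
from itertools import combinations

past_queries = [  # attributes referenced by historical queries (assumed data)
    {"customer", "region", "sales"},
    {"customer", "sales"},
    {"region", "sales"},
    {"customer", "region", "sales"},
]
MIN_SUPPORT = 3  # a set must appear in at least this many queries

def frequent_itemsets(queries, min_support, max_size=3):
    frequent = {}
    for size in range(1, max_size + 1):
        counts = {}
        for q in queries:
            for combo in combinations(sorted(q), size):
                counts[combo] = counts.get(combo, 0) + 1
        level = {c: n for c, n in counts.items() if n >= min_support}
        if not level:
            break  # no frequent sets of this size, so none larger either
        frequent.update(level)
    return frequent

view_candidates = frequent_itemsets(past_queries, MIN_SUPPORT)
```

Here `('customer', 'sales')` and `('region', 'sales')` meet the threshold, so a view materializing those attributes would serve most of the historical workload.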
MULTI-LEVEL FEATURE FUSION BASED TRANSFER LEARNING FOR PERSON RE-IDENTIFICATIONgerogepatton
Most current methods treat the person re-identification task as a classification problem and commonly use neural networks. However, these methods rely only on high-level convolutional features to express the feature representation of pedestrians. Moreover, the available person re-identification data sets are relatively small; under this limitation on the training set, deep convolutional networks are difficult to train adequately, so it is worthwhile to introduce auxiliary data sets to help training. To solve this problem, this paper proposes a novel deep transfer learning method that combines a comparison model with a classification model and multi-level fusion of convolutional features on the basis of transfer learning. In a multi-layer convolutional network, the features of each layer are a dimensionality reduction of the previous layer's results, but multi-level feature information is not only inclusive, it is also complementary, and the information gap between different layers of a convolutional neural network can be exploited to extract a better feature expression. Finally, the proposed algorithm is fully tested on four data sets (VIPeR, CUHK01, GRID and PRID450S), and the re-identification results prove its effectiveness.
ONTOLOGY-DRIVEN INFORMATION RETRIEVAL FOR HEALTHCARE INFORMATION SYSTEM : ...IJNSA Journal
In health research, one of the major tasks is to retrieve and analyze heterogeneous databases containing a single patient's information gathered from a large volume of data over a long period of time. The main objective of this paper is to present our ontology-based information retrieval approach for a clinical information system. We have performed a case study in real-life hospital settings. The results obtained illustrate the feasibility of the proposed approach, which significantly improved the information retrieval process on a large volume of data over a long period, from August 2011 until January 2012.
Selecting the correct Data Mining Method: Classification & InDaMiTe-RIOSR Journals
This document describes an intelligent data mining assistant called InDaMiTe-R that aims to help users select the correct data mining method for their problem and data. It presents a classification of common data mining techniques organized by the goal of the problem (descriptive vs. predictive) and the structure of the data; this classification is meant to model the human decision process for selecting techniques. The document then describes InDaMiTe-R, which uses a case-based reasoning approach to recommend techniques based on past user experiences with similar problems and data. An example of its use illustrates how it extracts problem metadata, gathers user restrictions, recommends initial techniques, and learns from the user's evaluations to improve future recommendations. A small evaluation of the assistant is also reported.
SURVEY PAPER ON OUT LIER DETECTION USING FUZZY LOGIC BASED METHODIJCI JOURNAL
Fuzzy logic can be used to reason like humans and can deal with uncertainty beyond randomness. Outlier detection is a difficult task because of the uncertainty involved: the outlier itself is a fuzzy concept and is difficult to determine deterministically. Fuzzy logic systems are therefore promising, since they directly address the situations associated with outliers, including the seemingly conflicting goals of (i) removing noise and (ii) smoothing out outliers while preserving other salient features. This paper surveys fuzzy-logic methods for outlier detection, discussing their pros and cons, making it a helpful document for new researchers in this field.
Age Invariant Face Recognition using Convolutional Neural Network IJECEIAES
In recent years, face recognition across aging has become a very popular and challenging task in the area of face recognition. Many researchers have contributed to this area, but there is still a significant gap to fill. The selection of feature extraction and classification algorithms plays an important role here, and deep learning with convolutional neural networks provides feature extraction and classification in a single structure. In this paper, we present a novel 7-layer CNN architecture for recognizing facial images across aging. We have carried out extensive experiments to test the performance of the proposed system on two standard datasets, FGNET and MORPH (Album II). The rank-1 recognition accuracy of our proposed system is 76.6% on FGNET and 92.5% on MORPH (Album II). Experimental results show a significant improvement over the available state of the art with the proposed CNN architecture and classifier.
AN EFFICIENT PSO BASED ENSEMBLE CLASSIFICATION MODEL ON HIGH DIMENSIONAL DATA...ijsc
As biomedical databases grow day by day, finding the essential features for disease prediction has become more complex due to high dimensionality and sparsity. Also, given the large number of micro-array datasets available in biomedical repositories, it is difficult to analyze, predict, and interpret feature information using traditional feature-selection-based classification models, most of which suffer from computational issues such as dimension reduction, uncertainty, and class imbalance on microarray datasets. The ensemble classifier is a scalable model for the extreme learning machine owing to its high efficiency and fast processing speed in real-time applications. The main objective of feature-selection-based ensemble learning models is to classify high dimensional data with high computational efficiency and a high true positive rate. In this proposed model, an optimized particle swarm optimization (PSO) based ensemble classification model was developed for high dimensional microarray datasets. Experimental results showed that the proposed model has high computational efficiency compared to traditional feature-selection-based classification models in terms of accuracy, true positive rate, and error rate.
The document proposes developing a content-based image retrieval system using perceptual texture features for biomedical image databases. It performs a literature review of prior work on texture feature extraction and perceptual texture features like coarseness, contrast, directionality and busyness. It then describes computational measures to estimate these perceptual texture features from images and their autocorrelation functions. These include measures of coarseness based on maxima counts, contrast based on autocorrelation function slope, and directionality based on dominant orientations. The proposed system would apply these texture feature extraction and matching techniques to build a knowledge-based expert system for retrieving dental images.
A Survey on Content Based Image Retrieval SystemYogeshIJTSRD
The rapid growth of image databases in practically every industry, including medical science, multimedia, geographic information systems, photography, and journalism, necessitates an effective and efficient approach to image processing. Content-based image retrieval recovers images based on their content, such as texture, colour, shape, and spatial layout. However, because of the semantic mismatch between users' high-level notions and images' low-level properties, retrieving the right image is extremely challenging, and many concepts have been presented in an effort to close this gap. Furthermore, images can be stored and retrieved based on a variety of properties, one of which is texture. Content-based image retrieval has become a popular study area as a result of the growth of video and image data in digital form. Digital data such as criminal photographs, fingerprints, and scene photographs are widely used in the forensic sciences, so organizing such enormous amounts of visual data, for example quickly finding an image of interest, becomes a major difficulty. There is a pressing need for an effective method of locating images, and an image must be represented by particular features in order to be found. Colour, texture, and shape are three significant visual qualities of an image, and image search using these attributes has received a great deal of attention. Preeti Sondhi | Umar Bashir "A Survey on Content Based Image Retrieval System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-5, August 2021, URL: https://www.ijtsrd.com/papers/ijtsrd43777.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/43777/a-survey-on-content-based-image-retrieval-system/preeti-sondhi
SIGNIFICANCE OF DIMENSIONALITY REDUCTION IN IMAGE PROCESSING sipij
The aim of this paper is to present a comparative study of two linear dimension reduction methods, PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high dimensional input space onto the feature space where the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes and minimizing the distance within classes: PCA finds the axes of maximum variance for the whole data set, whereas LDA tries to find the axes of best class separability. A neural network is trained on the reduced feature set (obtained with PCA or LDA) of the images in the database, using the back propagation algorithm, for fast searching of images in the database. The proposed method is tested on a general image database using Matlab, and the performance of these systems is evaluated by precision and recall measures. Experimental results show that PCA gives better performance, with higher precision and recall values and lower computational complexity than LDA.
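The PCA half of that comparison reduces to finding the direction of maximal variance, which can be sketched with power iteration on the covariance matrix. The 2-D data set below is invented; a real pipeline would use a linear-algebra library, keep several components, and compute the LDA projection alongside for the comparison.

```python
# Find the top principal component of a small point set by power iteration.
def top_principal_component(points, iters=100):
    n, d = len(points), len(points[0])
    means = [sum(p[j] for p in points) / n for j in range(d)]
    centered = [[p[j] - means[j] for j in range(d)] for p in points]
    # sample covariance matrix
    cov = [[sum(row[i] * row[j] for row in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d  # initial guess
    for _ in range(iters):
        # multiply by cov, then renormalize; converges to the top eigenvector
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# points scattered along the line y = x, so pc1 should be near (0.707, 0.707)
data = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9), (5, 5.1)]
pc1 = top_principal_component(data)
```

Projecting each image's feature vector onto the leading components gives the reduced feature set the neural network is trained on.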
This document summarizes research from the Finnish Center for Artificial Intelligence (FCAI).
1. FCAI conducts fundamental AI research and aims to apply this research to have high societal impact. It builds on decades of AI foundation work at universities in Finland.
2. FCAI's research focuses on understandable and data-efficient AI, including privacy-preserving machine learning and interactive expert knowledge elicitation to improve predictive models.
3. Other areas of research include modeling human-AI interaction, inverse modeling of user behavior, and endowing AI with a theory of mind to understand users.
This document discusses image mining techniques for image retrieval. It provides an overview of the image mining process which involves processing images, extracting features, and mining for information and knowledge. The document then surveys various feature extraction techniques used in image mining, including color, texture, and shape features. It discusses how features like color histograms, textures, and invariant moments can be extracted from images and used for content-based image retrieval. Finally, the document reviews several papers on image mining techniques and how they extract different features from images for applications like digital forensics and image retrieval.
This document discusses using machine learning clustering algorithms to analyze stock market data. It compares the K-means, COBWEB, DBSCAN, EM and OPTICS clustering algorithms in the WEKA tool on a stock market dataset containing 420 instances and 6 attributes. The K-means algorithm had the best performance with the lowest error and fastest runtime. It clustered the data into 4 groups in 0.16 seconds. The COBWEB algorithm clustered the data into 107 groups in 27.88 seconds. The DBSCAN algorithm found 21 clusters in 3.97 seconds. The paper concludes that K-means is best suited for stock market data mining applications due to its simplicity and speed compared to other algorithms.
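The k-means loop at the heart of that comparison is simple enough to sketch directly. The toy 1-D "daily return" values, the crude seeding, and k = 3 are invented for illustration; WEKA's implementation adds normalization, smarter seeding, and convergence checks.

```python
# Bare-bones 1-D k-means: assign each value to the nearest centroid,
# then recompute centroids as cluster means, and repeat.
def kmeans_1d(values, k, iters=20):
    # crude seeding: evenly spaced values from the sorted data
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

returns = [-2.1, -1.9, -2.4, 0.1, 0.2, -0.1, 1.8, 2.2, 2.0]
centroids, clusters = kmeans_1d(returns, k=3)
```

On this toy series the loop separates the negative, flat, and positive days into three clusters, which is the same grouping behaviour the study measures on its 420-instance stock dataset.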
This document summarizes a research paper on face recognition using Gabor features and PCA. It begins with an introduction to face recognition and discusses challenges like lighting, pose, and orientation. It then describes how the proposed system uses Gabor wavelets for preprocessing to reduce variations from pose, lighting, etc. Principal component analysis (PCA) is used to extract low dimensional and discriminating feature vectors from the preprocessed images. These feature vectors are then used for classification with k-nearest neighbors. The proposed system was tested on the Yale face database containing 100 images of 10 subjects with variable illumination and expressions.
Introduction to feature subset selection methodIJSRD
Data mining is a computational process for discovering patterns in large data sets. Among its important techniques is classification, which has recently received great attention in the database community and can solve problems in fields such as medicine, industry, business, and science. PSO is an optimization method based on social behaviour. Feature selection (FS) finds a subset of prominent features to improve predictive accuracy and remove redundant features, while rough set theory (RST) is a mathematical tool for dealing with the uncertainty and vagueness of decision systems.
Atkins is one of the world's leading providers of professional and technological consultancy services, employing over 18,000 staff across offices worldwide. They offer a wide range of geotechnical skills and services for projects such as airports, foundations, tunnels, slopes, and more. For further details on their capabilities, interested parties can email the provided contact.
Multimodal Searching and Semantic Spaces: ...or how to find images of Dalmati...Jonathon Hare
Tutorial at the "Reality of the Semantic Gap in Image Retrieval" tutorial at the first international conference on Semantics And digital Media Technology (SAMT 2006). 6th December 2006.
This document outlines a presentation on content-based image retrieval (CBIR). It discusses the motivation for CBIR by describing limitations of text-based image retrieval, such as problems with image annotation, human perception, and queries that cannot be described with text. CBIR allows images to be retrieved based on automatically extracted visual features like color, texture, and histograms. A typical CBIR system extracts image features and then matches features to find visually similar images. Applications of CBIR include crime prevention, security, medical diagnosis, and intellectual property. The conclusion states that CBIR reduces computation time and increases user interaction compared to other methods.
1. The document discusses various methods used to investigate the structure of fibers, including nuclear magnetic resonance, infrared spectroscopy, optical and x-ray diffraction, thermal analysis, optical microscopy, electron microscopy, and density measurement.
2. It provides details on specific techniques like nuclear magnetic resonance spectroscopy, optical diffraction, x-ray diffraction, and electron microscopy and electron diffraction. These techniques help determine properties of fibers like composition, molecular structure, crystallinity, and orientation.
3. The structure investigation of fibers is important to understand fiber properties in order to improve their use in textiles. Different methods are used to study characteristics like chemical bonding, molecular spacing, and cross-sectional structure.
Action plans were outlined for six weeks. Week 1 covered initial planning. Week 2 involved feature extraction from the images in a database and from the query image. Week 3's action was unspecified. Week 4 involved splitting images into RGB components and applying discrete Fourier transforms. Weeks 5 and 6 involved sectorizing image features and comparing them against a component database to evaluate performance.
Content based image retrieval using clustering Algorithm(CBIR)Raja Sekar
The document discusses content-based image retrieval (CBIR). It defines CBIR as retrieving images from a collection based on automatically extracted features like color, texture, and shape. The document outlines the history and motivation for CBIR. It discusses features used for retrieval like color, texture, shape. Filtering algorithms and clustering methods used for CBIR are also summarized. Applications of CBIR include medical imaging, stock photography, and military intelligence. CBIR is presented as an effective alternative to text-based image retrieval.
This document summarizes a seminar presentation on Content Based Image Retrieval (CBIR). CBIR allows users to search for digital images in large databases based on the images' visual contents like color, shape, and texture, rather than keywords. The seminar covers the inspiration for CBIR, different types of image retrieval, how CBIR works by extracting features from images, applications like crime prevention and biomedicine, advantages like efficient searching, and limitations like accuracy issues. The goal of CBIR research is to develop algorithms that can characterize and understand images like human vision.
The project aims at developing an efficient segmentation method for a CBIR system. Mean-shift segmentation generates a list of potentially meaningful objects, which are then clustered according to a predefined similarity measure. The method was tested on benchmark data, achieving an F-score of 0.30.
This document summarizes two major aviation incidents: the Tenerife disaster and the hijacking of TWA Flight 847. The Tenerife disaster was a collision between a Pan Am Boeing 747 and a KLM Boeing 747 at Tenerife airport in 1977 that killed 583 people; contributing factors included poor communication and human error, as the KLM plane began its takeoff while the Pan Am plane was still taxiing. The 1985 hijacking of TWA Flight 847 involved the seizure of the aircraft and a hostage situation. The document provides background details on both incidents and the factors assumed to have contributed to the Tenerife disaster.
- Content-based image retrieval (CBIR) searches for images based on visual features like color, texture, and shape rather than keywords.
- CBIR systems extract features from images to create metadata and use those features to calculate visual similarity between images.
- Relevance feedback allows users to provide feedback on initial search results to help the system recalculate feature weights and improve subsequent results.
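The feature-extraction and similarity steps in those bullets can be sketched minimally: build a normalized grey-level histogram per image and compare with histogram intersection, one common CBIR similarity measure. The tiny pixel lists are invented; real systems use colour histograms, texture, and shape features, and would fold relevance feedback into the feature weights.

```python
# Grey-level histogram features plus histogram-intersection similarity.
def histogram(pixels, bins=4, max_val=256):
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1
    total = len(pixels)
    return [h / total for h in hist]  # normalize so any image sizes compare

def intersection(h1, h2):
    """1.0 = identical distributions, 0.0 = no overlap."""
    return sum(min(a, b) for a, b in zip(h1, h2))

query = histogram([10, 20, 200, 210, 220, 30])   # half dark, half bright
dark_image = histogram([5, 15, 25, 35, 40, 50])  # all dark pixels
sim = intersection(query, dark_image)
```

Ranking every database image by `sim` against the query histogram is the "calculate visual similarity" step; relevance feedback would then reweight the bins or features based on which results the user marks relevant.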
This document discusses various methods of data collection. It describes primary data collection methods like personal interviews, questionnaires, and observation. It also discusses secondary data collection from published sources like government publications and commercial research, as well as unpublished sources. The key differences between primary and secondary data are described, such as primary data being real-time while secondary data is from the past. Popular data storage methods include databases, spreadsheets, and statistical programs. The document emphasizes that the best data collection method depends on the research problem and available resources.
This document appears to be a marketing research report conducted by students for the John Mitchels G.A.A. Complex. The report investigates methods to increase the sports complex's revenue stream while examining user perceptions. It includes secondary research on the complex's current members, facilities, pricing, trends in Irish sports participation, and similar facilities. Primary research involved qualitative interviews and quantitative member and non-member questionnaires, which examined perceptions, awareness, improvements, and profiles. The conclusions determined awareness levels and perceptions. Recommendations included introducing new activities, raising corporate and school awareness, improving the website and signage, and adding an online shop and overseas membership. The report aims to provide insights to help John Mitchels G.A.A. increase its revenue.
This document discusses various geophysical investigation techniques used to study groundwater resources, with a focus on electrical resistivity and seismic refraction methods. It provides background on why geophysical methods are important for groundwater exploration, noting that they can quickly investigate large areas and provide multipurpose inferences. The electrical resistivity method is explained in detail, including how it works, electrode configurations like Wenner and Schlumberger, and profiling versus sounding approaches. Seismic refraction techniques are also introduced. In conclusion, a variety of geophysical techniques can provide useful information about groundwater occurrence and quality from surface or above-surface locations.
Site investigation involves determining the soil layers and properties beneath a proposed structure. It helps select the foundation type and depth, evaluate load capacity, estimate settlement, and identify potential issues. The exploration program uses methods like test pits, auger and wash borings, probing, and geophysics to obtain samples and measure properties. A site investigation includes planning borings and tests, executing fieldwork, and reporting the findings and recommendations.
Visual aids like markings and lighting help pilots navigate airports safely during day and night. Markings include colored stripes and patterns on runways, taxiways, and aprons to indicate centerlines, edges, directions, and restricted areas. Runway markings identify numbers, thresholds, and touch down zones. Taxiway markings guide planes to and from runways. Airport lighting uses colored lights to replicate markings for nighttime visibility. Together, these visual aids allow pilots to orient themselves and follow correct paths for takeoff and landing in all weather conditions.
This document discusses methods for investigating voice production including examining vocal cord anatomy and using various medical procedures such as indirect laryngoscopy, rhinolaryngoscopy, video laryngoscopy, and stroboscopy to view the larynx and vocal cords.
Presentation by National University of Singapore - Winners of CBS Case Compet...CBS Case Competition
Presentation by National University of Singapore - Winners of CBS Case Competition 2011. Congratulations to Caroline Ng, Candice Lim, Peh Che Min, and Tobias Chen. Presented at the Finals March 4, 2011.
This presentation forms part of the CBS Case Competition. Views, opinions and suggestions expressed in these presentations are the sole work of the case study writers, and are not necessarily shared by H&M
Visit www.casecompetition.com to see more.
research methodology METHODS OF INVESTIGATION Suvin Lal
Methods of investigation include the survey method, the case study method, experimental methods, and the scientific method of investigation.
There are various methods for collecting primary and secondary data. Primary data collection methods include observation, interviews, questionnaires, and schedules. Secondary data refers to previously collected data that is analyzed and available for use in other studies. Factors to consider when selecting a data collection method include the nature, scope, and objective of the research, available funds and time, and required precision.
This document describes a proposed hybrid technique for automatic medical image classification and retrieval using information retrieval, support vector machines, and particle swarm optimization. Key aspects of the proposed approach include extracting low-level visual features from images like color, texture, shape and integrating them with semantic metadata. A content analysis system analyzes image descriptors and assigns semantic labels. Images are indexed and classified during a training phase. The proposed system aims to reduce the semantic gap between low-level features and high-level semantics by combining content-based image retrieval with text-based retrieval and machine learning algorithms.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
This document summarizes recent work on content-based image retrieval (CBIR) techniques for medical images. It discusses several methods used for CBIR, including shape-based, texture-based, and feature selection methods. Recent CBIR works are surveyed that use approaches like support vector machines, nearest neighbor algorithms, and relevance feedback. While progress has been made, the document notes there are still research gaps around bridging the semantic gap between low-level image features and high-level concepts, and improving retrieval accuracy and efficiency.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
This document discusses techniques for content-based image retrieval (CBIR) systems. It provides an overview of CBIR, describing how CBIR systems work and the major approaches used, including visual features like color, texture, shape, and semantic features derived from annotations and ontologies. The document also discusses challenges in CBIR like bridging the semantic gap between low-level visual features and high-level concepts, and various relevance feedback techniques used to improve retrieval effectiveness, such as query expansion, support vector machines, and Bayesian learning methods.
A review deep learning for medical image segmentation using multi modality fu...Aykut DİKER
This paper reviews deep learning approaches for medical image segmentation using multi-modality fusion. It finds that the number of papers on this topic has increased significantly in recent years, as deep learning methods have achieved superior performance over traditional methods. The paper categorizes fusion strategies as early fusion, where modalities are combined before network processing, and late fusion, where each modality is processed separately before fusion. While early fusion is simpler, late fusion can achieve more accurate results by learning complex relationships between modalities. Overall, the paper aims to provide an overview of deep learning fusion methods for multi-modal medical image segmentation.
Preprocessing Techniques for Image Mining on Biopsy ImagesIJERA Editor
This document discusses preprocessing techniques for image mining on biopsy images. It begins with an introduction to biomedical imaging and image mining. The key steps in image mining are described as image retrieval, preprocessing, feature extraction, data mining, and interpretation. Various preprocessing techniques are then evaluated on biopsy images, including interpolation, thresholding, and segmentation. Bicubic interpolation and Otsu thresholding produced good results for enhancing renal biopsy images. Overall, the document evaluates different preprocessing methods and their effects on biopsy images to help extract meaningful features for disease detection through image mining.
Mri brain image retrieval using multi support vector machine classifiersrilaxmi524
This document discusses content-based image retrieval (CBIR) for medical images. It proposes using multiple query images instead of a single query image to improve retrieval accuracy. The system works by preprocessing queries, extracting features like texture from the queries, optimizing the features, using classifiers like SVM to categorize images, and then using KNN to retrieve similar images from the database based on feature matching. It claims this approach improves on existing CBIR systems that rely on annotations and have difficulties bridging the semantic gap between low-level features and high-level meanings.
1) The document discusses various medical image fusion techniques including pixel level, feature level, and decision level fusion.
2) It proposes a novel pixel level fusion method called Iterative Block Level Principal Component Averaging fusion that divides images into blocks and calculates principal components for each block.
3) Experimental results on fusing noise free and noise filtered MR images show that the proposed method performs well in terms of average mutual information and structural similarity compared to other algorithms.
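The block-level idea in those points can be illustrated with a deliberately simplified fusion rule: instead of per-block principal components, each output block here is a weighted average of the two input blocks, weighted by block variance (a common "activity" stand-in, not the paper's PCA-averaging method). The tiny grayscale rows are invented.

```python
# Simplified block-level image fusion: weight each block by its variance.
def variance(block):
    m = sum(block) / len(block)
    return sum((p - m) ** 2 for p in block) / len(block)

def fuse(img_a, img_b, block_size=4):
    fused = []
    for i in range(0, len(img_a), block_size):
        a, b = img_a[i:i + block_size], img_b[i:i + block_size]
        va, vb = variance(a), variance(b)
        if va + vb == 0:
            wa = wb = 0.5  # both blocks flat: plain average
        else:
            wa, wb = va / (va + vb), vb / (va + vb)
        fused.extend(wa * x + wb * y for x, y in zip(a, b))
    return fused

mr_t1 = [10, 10, 200, 200, 50, 50, 50, 50]      # sharp edge in first block
mr_t2 = [100, 100, 100, 100, 20, 180, 20, 180]  # detail in second block
result = fuse(mr_t1, mr_t2)
```

Each fused block is dominated by whichever input carries more detail there, which is the intuition behind block-wise fusion; the paper's method replaces the variance weighting with per-block principal component analysis and iterates.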
A Comprehensive Analysis on Co-Saliency Detection on Learning Approaches in 3...AnuragVijayAgrawal
This document summarizes a research paper presentation on co-saliency detection approaches. The paper discusses related works on saliency detection in single images and videos. It describes the methodology used, which extracts image features and examines bottom-up cues to create co-saliency maps. Results on a benchmark image pair dataset show state-of-the-art methods achieve high accuracy. In conclusion, co-saliency detection is an emerging field that aims to identify shared salient regions across multiple images, though challenges remain to be addressed.
Design and Development of an Algorithm for Image Clustering In Textile Image ...IJCSEA Journal
All textile industries aim to produce competitive materials, and competitiveness depends mainly on the designs and quality of the garments each industry produces. Every day, a vast number of textile images are generated, such as images of shirts, jeans, t-shirts and sarees. The World Wide Web is a principal driver of innovation, enabling publication at the scale of tens of millions of content creators, and images play an important role, as a picture is worth a thousand words in textile design and marketing. Retrieving images requires special concepts such as image annotation, context, image content and image values. This research work studies the image mining process in detail, analyzes methods for retrieval and for clustering textile images, and develops a clustering algorithm. The retrieval method considered is based on relevance feedback, a scalable method, the edge histogram and the color layout; the image clustering algorithm is designed around color descriptors and the k-means clustering algorithm. A software prototype demonstrating the proposed algorithm has been developed using the NetBeans integrated development environment and found successful.
This document describes a proposed content-based image retrieval system using backpropagation neural networks (BPNN) and k-means clustering. It begins by discussing CBIR techniques and features like color, texture, and shape. It then outlines the proposed system which includes training a BPNN on image features, validating images, and testing by querying and retrieving similar images. Performance is analyzed based on metrics like accuracy, efficiency, and classification rate. Results show the system achieves up to 98% classification accuracy within 5-6 seconds.
This document provides a review of different techniques for image retrieval from large databases, including text-based image retrieval and content-based image retrieval (CBIR). CBIR uses visual features extracted from images like color, texture, and shape to search for similar images. The document discusses some limitations of CBIR and proposes video-based image retrieval as a new direction. It also surveys recent research in areas like feature extraction, indexing, and discusses future directions like reducing the semantic gap between low-level features and high-level meanings.
ONTOLOGY-DRIVEN INFORMATION RETRIEVAL FOR HEALTHCARE INFORMATION SYSTEM : A C...IJNSA Journal
In health research, one of the major tasks is to retrieve and analyze heterogeneous databases containing a single patient's information gathered from a large volume of data over a long period of time. The main objective of this paper is to present our ontology-based information retrieval approach for a clinical information system. We performed a case study in real-life hospital settings. The results obtained illustrate the feasibility of the proposed approach, which significantly improved the information retrieval process on a large volume of data over a long period, from August 2011 until January 2012.
This document provides a comprehensive review of recent developments in content-based image retrieval and feature extraction. It discusses various low-level visual features used for image retrieval, including color, texture, shape, and spatial features. It also reviews approaches that fuse low-level features and use local features. Machine learning and deep learning techniques for content-based image retrieval are also summarized. The document concludes by discussing open challenges and directions for future research in this area.
Visual Saliency Model Using Sift and Comparison of Learning Approachescsandit
This document discusses a study that aims to develop a visual saliency model to predict where humans look in images. It uses the SIFT feature in addition to low, mid, and high-level image features to train machine learning models on an eye-tracking dataset. Support vector machines (SVM) achieved the best performance, accurately predicting fixations 88% of the time. Including the SIFT feature further improved SVM performance to 91% accuracy. The study evaluates different machine learning methods and determines SVM to be best suited for this binary classification task using high-dimensional image data.
Techniques Used For Extracting Useful Information From ImagesJill Crawford
This document discusses techniques for extracting useful information from images, including image classification, feature extraction, face detection and recognition, and image retrieval. It provides details on supervised classification and various tree structures used for indexing images. Face recognition algorithms extract facial features and compare them to databases to identify matches. The results of searching six sample images of different types (face, content, feature) are shown, with search times ranging from 3.5 to 7 seconds. Indexing techniques for multimedia databases are discussed to efficiently retrieve different data types like text, audio and video.
An Impact on Content Based Image Retrival A Perspective Viewijtsrd
The explosive increase and ubiquitous accessibility of visual data on the Web have led to a prosperity of research activity in image search and retrieval. Ignoring visual content as a ranking clue, methods that apply text search techniques to visual retrieval may suffer inconsistency between the text words and the visual content. Content-based image retrieval (CBIR), which uses representations of visual content to identify relevant images, has attracted sustained attention over the recent two decades. The problem is challenging due to the intention gap and the semantic gap. Numerous techniques have been developed for content-based image retrieval in the last decade. We conclude with several promising directions for future research. Shivanshu Jaiswal | Dr. Avinash Sharma, "An Impact on Content Based Image Retrival: A Perspective View", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-2, February 2020, URL: https://www.ijtsrd.com/papers/ijtsrd29969.pdf
Paper Url : https://www.ijtsrd.com/engineering/computer-engineering/29969/an-impact-on-content-based-image-retrival-a-perspective-view/shivanshu-jaiswal
Texture Analysis As An Aid In CAD And Computational Logiciosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document discusses the use of texture analysis in medical image analysis and computer-aided diagnosis (CAD) systems. It begins by providing background on texture analysis and its role in extracting features from medical images that can help with diagnosis. The document then discusses how texture analysis is used as a preprocessing step in CAD systems, where extracted texture features are fed into machine learning algorithms to perform diagnostic tasks. It also addresses some challenges with texture analysis and its implementation in CAD systems, noting further development and testing is still needed. Overall, the summary discusses how texture analysis opens new opportunities for CAD in radiology by automating the feature extraction process.
[IAIM 2023 - Poster] Label-efficient Generalizable Deep Learning for Medical...Ziyuan Zhao
The document describes a presentation on label-efficient generalizable deep learning for medical image segmentation. The presentation will discuss challenges with domain shift and label scarcity in medical image segmentation using deep learning. It will present methods that use dual cycle alignment, dual domain knowledge transfer, dual self-ensembling adversarial learning, and gradient-based meta-hallucination learning to overcome these challenges and enable label-efficient generalizable segmentation across domains. Evaluation is done on the Multi Modality Whole Heart Segmentation dataset, where the methods outperform existing unsupervised domain adaptation approaches with limited source labels.
1. An Investigation on Combination Methods for
Multimodal Content-based Medical Image Retrieval
ALI HOSSEINZADEH VAHID
ASST. PROF. DR. ADIL ALPKOÇAK
AUGUST, 2012
İZMİR
2. Introduction
Medical images play an important role in capturing anatomical and functional information about body parts for diagnosis, medical research and education:
physicians or radiologists examine them in conventional ways, based on their individual experience and knowledge;
they provide diagnostic support to physicians or radiologists by displaying relevant past cases;
they serve as a training tool for medical students and residents in education, in follow-up studies, and for research purposes.
An Investigation on Combination Methods for Multimodal Content-based Medical Image Retrieval
10/8/2012
3. Background (Image Retrieval Systems)
Image retrieval has long been a poor stepchild to other forms of information retrieval (IR), yet it has become one of the most interesting and active research areas in computer vision over the last decades.
An image retrieval system is a computer system for browsing, searching and retrieving similar (though not necessarily identical) images from a large database of digital images, with the help of key attributes associated with the images or features inherently contained in the images.
4. Background (TBIR)
In a Text-Based Image Retrieval (TBIR) system, images are indexed by text, known as the metadata of the image, such as the patient's ID number, the date it was produced, the type of the image and a manually annotated description of the content of the image itself; Google Images and Flickr are well-known examples.
Image retrieval based only on text information is not sufficient because of:
the amount of labor required to manually annotate every single image;
the differences in human perception when describing images, which can lead to inaccuracies during the retrieval process.
5. Background (CBIR)
The main goal of a Content-Based Image Retrieval (CBIR) system is searching for and finding similar images based on their content.
To accomplish this, the content is first described in an efficient way, in the so-called indexing or feature extraction step, and binary signatures are formed and stored as the data.
When a query image is given to the system, the system extracts the image features for this query, compares these features with those of the other images in the database, and displays the relevant results to the user.
There are many factors to consider in the design of a CBIR system:
Choice of the right features: how to mathematically describe an image?
Similarity measurement criteria: how to assess the similarity between a pair of images?
Indexing mechanism
Query formulation technique
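The query flow above can be sketched in a few lines. This is a minimal illustrative pipeline, not the thesis system: the toy gray-level histogram stands in for a real descriptor, and `extract_feature`, `retrieve` and the random "images" are names invented for this example.

```python
import numpy as np

def extract_feature(image, bins=16):
    """Toy 'indexing' step: a normalized gray-level histogram as the signature."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def retrieve(query_image, database, top_k=3):
    """Extract the query feature, compare it with every stored signature,
    and return the most similar images (smallest Euclidean distance first)."""
    q = extract_feature(query_image)
    scored = [(name, float(np.linalg.norm(q - sig))) for name, sig in database.items()]
    return sorted(scored, key=lambda pair: pair[1])[:top_k]

# Build a tiny database of random "images" and run one query against it.
rng = np.random.default_rng(0)
db = {f"img{i}": extract_feature(rng.integers(0, 256, size=(32, 32))) for i in range(5)}
query = rng.integers(0, 256, size=(32, 32))
print(retrieve(query, db))
```

A real system would replace the histogram with a descriptor such as CEDD and add an indexing structure so that the comparison does not scan the whole database linearly.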
6. Background (CBIR)
The major problems of CBIR are:
Semantic gap: the lack of coincidence between the information that one can extract from the visual data and the interpretation that the same data have for a user in a given situation. The user seeks semantic similarity, but the database can only provide similarity by data processing.
The huge number of objects to search among.
Incomplete query specification.
Incomplete image description.
7. Image Content Descriptors
Image content may include:
Visual content
General: color, texture, shape, spatial relationships, etc.
Domain specific: application dependent and may involve domain knowledge
Semantic content, obtained
by textual annotation
by complex inference procedures based on visual content
8. Color
One of the most widely used visual features.
Relatively robust to changes in the background colors.
Independent of image size and orientation.
Considerable design and experimental work went into MPEG-7 to arrive at efficient color descriptors for similarity matching.
No single generic color descriptor exists that can be used for all foreseen applications.
Examples: the Scalable Color Descriptor (SCD), Color Layout Descriptor (CLD) and Color Structure Descriptor (CSD).
9. Texture
Another fundamental visual feature.
It captures the
structuredness,
regularity,
directionality
and roughness of images.
Examples: the Homogeneous Texture Descriptor (HTD) and Edge Histogram Descriptor (EHD).
10. Compact composite descriptors
Color and Edge Directivity Descriptor (CEDD)
The six-bin histogram of the fuzzy system that uses the five digital filters proposed by the MPEG-7 EHD.
The 24-bin color histogram produced by the 24-bin fuzzy-linking system.
Overall, the final histogram has 144 regions (6 × 24).
Fuzzy Color and Texture Histogram (FCTH)
The eight-bin histogram of the fuzzy system that uses the high-frequency bands of the Haar wavelet transform.
The 24-bin color histogram produced by the 24-bin fuzzy-linking system.
Overall, the final histogram has 192 regions (8 × 24).
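The "6 × 24 = 144 regions" bookkeeping can be sketched as follows. This is only the index arithmetic, not the real CEDD fuzzy systems: the per-block texture and color bins are assumed to come from elsewhere, and `combined_histogram` is a name invented for this example.

```python
import numpy as np

# Each image block gets a texture bin t in 0..5 and a color bin c in 0..23,
# and votes into cell t * 24 + c of a 144-region histogram.
N_TEXTURE, N_COLOR = 6, 24

def combined_histogram(block_bins):
    """block_bins: iterable of (texture_bin, color_bin) pairs, one per image block."""
    hist = np.zeros(N_TEXTURE * N_COLOR)
    for t, c in block_bins:
        hist[t * N_COLOR + c] += 1
    total = hist.sum()
    return hist / total if total else hist

h = combined_histogram([(0, 3), (0, 3), (5, 23)])
print(len(h))  # → 144
```

Swapping 6 for 8 texture bins gives the 192-region layout of FCTH.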
11. Compact composite descriptors
Brightness and Texture Directionality Histogram (BTDH)
BTDH is very similar to the FCTH feature.
The main difference is the use of brightness instead of a color histogram.
It combines brightness and texture characteristics, as well as the spatial distribution of these characteristics, in one compact 1D vector.
The texture information comes from the directionality histogram.
A fractal scanning method, through the Hilbert curve or the Z-Grid method, is used to capture the spatial distribution of the brightness and texture information.
12. Similarity Measures
Geometric measures treat objects as vectors.
Information-theoretic measures are derived from Shannon's entropy theory and treat objects as probability distributions.
Statistical measures compare two objects in a distributional manner, basically assuming that the vector elements are samples.
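One representative of each family can be written in a few lines; the three functions below are illustrative picks (Euclidean, Kullback-Leibler, Pearson), not an exhaustive list of the measures the thesis evaluated.

```python
import numpy as np

def euclidean(a, b):
    """Geometric: treat the two objects as vectors in Euclidean space."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def kl_divergence(p, q):
    """Information-theoretic: treat the objects as probability distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def pearson_distance(a, b):
    """Statistical: treat the vector elements as paired samples."""
    return 1.0 - float(np.corrcoef(a, b)[0, 1])

a = np.array([0.5, 0.3, 0.2])
b = np.array([0.4, 0.4, 0.2])
print(euclidean(a, b), kl_divergence(a, b), pearson_distance(a, b))
```

Note that KL divergence is asymmetric and requires nonzero bins in `q` wherever `p` is nonzero, which is why histogram-based retrieval often smooths or symmetrizes it.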
15. Need to fuse (CBIR)
Some research efforts have been reported to enhance CBIR performance by taking multi-modality fusion approaches, because:
each feature extracted from an image characterizes only a certain aspect of the image content;
a particular feature is not equally important for different image queries, since it has different importance in reflecting the content of different images.
16. Fusion
"Information fusion is the study of efficient methods for automatically or semi-automatically transforming information from different sources and different points in time into a representation that provides effective support for human or automated decision making."
The major challenge is to find suitable techniques for combining multiple sources of information for either decision making or information retrieval.
Traditional work on multimodal integration has largely been heuristic-based. Still today, the understanding of how fusion works and what influences it is limited.
17. Significant techniques in the multimodal fusion process
Feature-level fusion: an information process that integrates, associates, correlates and combines unimodal features, data and information from single or multiple sensors or sources to achieve refined estimates of parameters, characteristics, events and behaviors.
Information fusion at the data or sensor level can achieve the best performance improvements (Koval, 2007).
It can utilize the correlation between multiple features from different modalities at an early stage, which helps in better task accomplishment.
Also, it requires only one learning phase, on the combined feature vector.
However, it is hard to represent the time synchronization between the multimodal features.
The features to be fused should be represented in the same format before fusion.
As the number of modalities increases, it becomes difficult to learn the cross-correlation among the heterogeneous features.
18. Significant techniques in the multimodal fusion process
Score-, rank- and decision-level fusion, also called high-level or late information fusion, arose in the neural-network literature. Here, each modality / sensor / source / feature is first processed individually. The results, the so-called experts, can be scores in classification or ranks in retrieval. The experts' values are then combined to determine the final decision.
This type of information fusion is faster and easier to implement than early fusion.
The decision-level fusion strategy offers scalability (i.e. graceful upgrading or degrading) in terms of the modalities used in the fusion process.
The disadvantage of the late fusion approach lies in its failure to utilize the feature-level correlation among modalities.
As different classifiers are used to obtain the local decisions, their learning process becomes tedious and time-consuming.
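Score-level late fusion with CombSUM, which the experiments section applies, can be sketched like this. The `comb_sum` name and the toy score maps are invented for illustration; scores are assumed already normalized to a comparable range.

```python
def comb_sum(score_lists):
    """Late fusion with CombSUM: each expert contributes a {doc: score} map;
    a document's fused score is the sum of its per-expert scores. Documents
    missing from an expert's list implicitly contribute 0 (a substitution
    value of zero)."""
    fused = {}
    for scores in score_lists:
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + s
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

text_scores = {"doc1": 0.9, "doc2": 0.4}
visual_scores = {"doc2": 0.8, "doc3": 0.5}
print(comb_sum([text_scores, visual_scores]))
# doc2 ranks first with fused score 0.4 + 0.8
```

Each expert is computed independently, which is exactly why this style is easy to implement and to extend with further modalities.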
19. Formal presentation of Fusion on Multimodal Retrieval systems
20. Formal presentation of Fusion on Multimodal Retrieval systems
21. Venn diagram of different modalities: relevant retrieved document set in combined result set
22. Formal presentation of Fusion on Multimodal Retrieval systems
23. Formal presentation of Fusion on Multimodal Retrieval systems
24. Formal presentation of Fusion on Multimodal Retrieval systems
25. Experiments
We performed our experiments with the CLEF 2011 medical image classification and retrieval tasks dataset. The database includes 231,000 images from journals of BioMed Central in the PubMed Central database, associated with their original articles in the journals.
Besides, a single XML file is provided as textual metadata for all documents in the collection.
30 topics, ten each for visual, textual and mixed retrieval, were chosen to allow for the evaluation of a large variety of techniques. Each topic has both a textual query and at least one sample query image.
26. Text modality
We used the Terrier IR Platform API.
Preprocessing:
Splitting the metadata file, so that each image in the collection is represented as a structured XML document.
Special-character deletion: characters with no meaning, like punctuation marks or blanks, are all eliminated.
Stop-word removal: discarding semantically empty, very high-frequency words.
Token normalization: converting all words to lower case.
Stemming: we used the Porter stemmer.
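The preprocessing steps above can be sketched end to end. This is not Terrier's implementation: the stop-word list is a toy, and `porter_like_stem` is a crude suffix stripper standing in for the real Porter stemmer.

```python
import re

STOP_WORDS = {"the", "a", "of", "is", "in", "and"}  # toy list; Terrier ships a far larger one

def porter_like_stem(token):
    """Crude suffix stripping as a stand-in for the Porter stemmer."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    """The slide's pipeline: strip special characters, lowercase,
    drop stop words, then stem each remaining token."""
    tokens = re.findall(r"[a-zA-Z]+", text)              # special-character deletion
    tokens = [t.lower() for t in tokens]                 # token normalization
    tokens = [t for t in tokens if t not in STOP_WORDS]  # stop-word removal
    return [porter_like_stem(t) for t in tokens]         # stemming

print(preprocess("Imaging of the fractured bones, in lateral view."))
# → ['imag', 'fractur', 'bon', 'lateral', 'view']
```

Each stage shrinks the index vocabulary; stemming in particular lets "imaging" and "images" match the same index term.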
27. Text modality
We compared the performance of the subsystem using a variety of the weighting models implemented in Terrier, and chose the DFR-BM25 weighting model (Amati, 2003) as the base textual modality of our system, because its result was close to the average of the results of the other weighting models.
Additionally, we calculated the similarity score of all documents in the collection for each query topic and then sorted them in descending order as a ranked list.
29. Visual Modality
We extracted features for all images in the test collection and for the query examples using the Rummager tool.
We examined the performance of all extracted features.
We observed that compact composite features like CEDD and FCTH give satisfactory retrieval results on our image collection.
Because the CEDD feature gives satisfactory retrieval results while its required computational power and storage space are noticeably lower, we used it as the base visual modality result.
30. Comparison on performance of different low-level features
(Bar chart of num_rel_ret per feature, not reproduced in full; the values shown are 603, 547, 530, 519, 352, 329, 265, 167 and 120 relevant retrieved documents.)
31. Visual Modality
31
In the matching phase, we evaluated the similarity between the
vector of the query example image and the vectors representing the
dataset images.
We assessed the performance of different similarity functions on
the compact composite features.
Then, for each query example image, we sorted all dataset images
into a descending list by similarity score.
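The similarity functions compared on the next slide can be sketched as below. The feature vectors here are made-up toy descriptors, not real CEDD/FCTH output.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def minkowski(a, b, p):
    # Generalizes Manhattan (p=1) and Euclidean (p=2); the slides
    # also evaluate p = 3, 4 and 5.
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def tanimoto(a, b):
    # Tanimoto coefficient for real-valued vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sum(x * x for x in a) + sum(y * y for y in b) - dot)

q = [0.2, 0.5, 0.1]   # toy query descriptor
d = [0.3, 0.4, 0.1]   # toy dataset-image descriptor
print(euclidean(q, d), tanimoto(q, d))
```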
32. Distance function performance evaluation of different features
[Bar chart: num_rel_ret of the features CEDD, FCTH, SpCD and BTDH
under different similarity functions (Euclidean distance, Manhattan
distance, Minkowski P3, P4 and P5 distances, cosine similarity and
Tanimoto distance); values range from 291 to 603.]
33. Integrated Combination Multimodal Retrieval
Our proposed method is a superset of late fusion because, like late
fusion, it can be applied to either the similarity scores or the
ranks of each modality, with each modality processed individually.
The significant difference between this approach and late fusion
is the scale of the combination: in our method, all documents in
the data collection take part in the combination, whereas in late
fusion the number of combined documents in each list depends on a
threshold on score or rank.
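The contrast above can be sketched as one function with an optional cutoff. This is a hedged illustration: the scores are invented, and the real system additionally normalizes scores before combining.

```python
def combine(text_scores, visual_scores, doc_ids, top_k=None):
    """Sum the two modalities' scores per document.
    top_k=None -> integrated combination: every document in the
                  collection contributes (the ICMR idea).
    top_k=N    -> late-fusion style: only documents inside each
                  modality's top-N list keep their score; the rest
                  are treated as 0."""
    def keep(scores):
        if top_k is None:
            return scores
        top = sorted(scores, key=scores.get, reverse=True)[:top_k]
        return {d: scores[d] for d in top}

    t, v = keep(text_scores), keep(visual_scores)
    return {d: t.get(d, 0.0) + v.get(d, 0.0) for d in doc_ids}

text = {"d1": 1.0, "d2": 0.2, "d3": 0.5}
visual = {"d1": 0.1, "d2": 0.9, "d3": 0.4}
docs = ["d1", "d2", "d3"]
print(combine(text, visual, docs))            # integrated: all docs combined
print(combine(text, visual, docs, top_k=1))   # late-fusion-style cutoff
```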
34. Experiments on combination methods of modalities
Early Fusion:
We used the feature concatenation method on the synchronous
compact composite feature vectors of all images.
We used Euclidean distance as the similarity measure and selected
the top 1000 documents for each query.
Late Fusion with Substitution Value of Zero (LFSVZ):
We applied the CombSUM function to the similarity scores of the
top 1000 retrieved documents of each feature's result set.
We normalized the similarity scores with the Min-Max normalization
function before combination.
Following Fagin's A0 combination algorithm, we substituted zero
for the similarity score of documents that did not appear in a
retrieved document list.
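The LFSVZ steps above can be sketched as follows: Min-Max normalize each feature's result list, then CombSUM with zero substituted for missing documents. The image identifiers and scores are made-up illustrative values.

```python
def min_max(scores):
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {d: 0.0 for d in scores}
    return {d: (s - lo) / (hi - lo) for d, s in scores.items()}

def comb_sum(result_lists):
    """result_lists: list of {doc_id: score} dicts, one per feature."""
    normalized = [min_max(r) for r in result_lists]
    all_docs = set().union(*(r.keys() for r in normalized))
    # .get(d, 0.0) is the "substitution value of zero" (Fagin's A0)
    # for documents absent from a feature's retrieved list.
    return {d: sum(r.get(d, 0.0) for r in normalized) for d in all_docs}

cedd = {"img1": 0.9, "img2": 0.5, "img3": 0.1}
fcth = {"img1": 0.7, "img4": 0.3}
print(comb_sum([cedd, fcth]))
```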
35. Comparison of different combination method performances

Combination function    Combination method    # Relevant Retrieved    MAP
(CEDD, FCTH, SpCD)      Early Fusion          643                     0.0194
(CEDD, FCTH, SpCD)      LFSVZ                 665                     0.0199
(CEDD, FCTH, SpCD)      ICMR                  676                     0.0231
CombSUM (CEDD, FCTH)    Early Fusion          541                     0.0198
CombSUM (CEDD, FCTH)    LFSVZ                 658                     0.0201
CombSUM (CEDD, FCTH)    ICMR                  699                     0.0252
37. Venn diagram of the different modalities' relevant retrieved
document sets within the combined result set
39. Details of ICMR in response to query #18
Threshold at the 1000th top score:  Text 0.4053   Visual 0.8677   Mixed 0.6486

Similarity scores
Document ID         Textual   Visual    Mixed
1471-213X-4-16-2    0.3029    0.7768    1.2919
1471-213X-4-16-3    0.3242    0.8055    1.3567
1471-213X-4-16-5    0.3223    0.7837    1.3318
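The mixed scores for query #18 are consistent with a weighted CombSUM in which the textual score is weighted about 1.7 times the visual score, matching the weighting reported in the conclusion. The 1.7 factor is an approximation here; the exact weight the authors used may differ slightly.

```python
# Reconstruct the mixed score as 1.7 * textual + visual for the
# three documents shown for query #18.
rows = [
    ("1471-213X-4-16-2", 0.3029, 0.7768, 1.2919),
    ("1471-213X-4-16-3", 0.3242, 0.8055, 1.3567),
    ("1471-213X-4-16-5", 0.3223, 0.7837, 1.3318),
]
for doc_id, text, visual, mixed in rows:
    approx = 1.7 * text + visual
    # Each reconstruction lands within ~0.001 of the reported value.
    print(doc_id, round(approx, 4), mixed)
```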
40. Conclusion
In this study, we found that:
Effective combination of textual and visual modalities improves
the overall performance of content-based medical image retrieval
systems.
Integrated retrieval outperforms both early- and late-fusion
techniques in multimodal CBIR systems.
In the best combination of textual and visual modalities, the
weight of the textual modality is about 1.7 times the weight of
the visual modality.
Documents common to the relevant retrieved sets of the individual
modalities also appear in the relevant retrieved set of the
combined modality, regardless of the weights or methods used.
41. Future directions
Our study can be extended in several ways:
Apply and verify these experimental results on other medical
image collections.
Extend our study to other CBIR domains beyond the medical domain.
Implement an effective and efficient tool for applying ICMR.