The emergence of larger databases has made image retrieval an essential component of modern systems and has driven the development of more efficient image retrieval techniques. Retrieval can be either content-based or text-based. This paper focuses on content-based image retrieval from the FGNET database. Query images and database images are subjected to several processing steps before the squared Euclidean distance (SED) between them is computed; the images with the shortest distance are considered matches and are retrieved. The processing steps involve applying the median modified Wiener filter (MMWF) and extracting low-level features using histograms of oriented gradients (HOG), the discrete wavelet transform (DWT), GIST, and the local tetra pattern (LTrP). Finally, features are selected using the adaptive mean genetic algorithm (AMGA). In this study, the average PSNR value obtained after applying the Wiener filter was 45.29. The performance of the AMGA was evaluated using precision, F-measure, and recall, with average values of 0.75, 0.692, and 0.66, respectively. The performance metrics of the AMGA were compared with those of the particle swarm optimization algorithm (PSO) and the genetic algorithm (GA) and found to be better, demonstrating its efficiency.
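The ranking step described above can be sketched in a few lines. This is a minimal illustration of SED-based retrieval with made-up image names and toy 4-dimensional feature vectors standing in for the HOG/DWT/GIST/LTrP features, not the paper's implementation:

```python
def squared_euclidean(a, b):
    """Squared Euclidean distance (SED) between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def retrieve(query, database, k=3):
    """Rank (name, feature vector) entries by SED to the query and
    return the k nearest matches."""
    ranked = sorted(database, key=lambda item: squared_euclidean(query, item[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical database of extracted feature vectors.
db = [("img_a", [0.1, 0.2, 0.3, 0.4]),
      ("img_b", [0.9, 0.8, 0.7, 0.6]),
      ("img_c", [0.1, 0.25, 0.3, 0.38])]
print(retrieve([0.1, 0.2, 0.31, 0.4], db, k=2))  # ['img_a', 'img_c']
```

In practice the feature vectors would be the concatenated filtered-and-extracted descriptors after AMGA selection; the distance and ranking logic stays the same.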
Content-based image retrieval (CBIR) aims at retrieving images from a database based on a user query given in visual form rather than the traditional text form. The applications of CBIR extend from surveillance to remote sensing, medical imaging to weather forecasting, and security systems to historical research. Although extensive research has been done on content-based image retrieval in the spatial domain, most images on the internet are JPEG-compressed, which motivates retrieval in the compressed domain itself rather than decoding to raw format before comparison and retrieval. This research addresses the need to retrieve images from a database based on features extracted in the compressed domain, together with the application of a genetic algorithm to improve the retrieval results. The research focuses on various features and their levels of impact on improving the precision and recall of the CBIR system. Our experimental results also indicate that CBIR features in the compressed domain, combined with the genetic algorithm, improve the results considerably compared with techniques from the literature.
Dataset Pre-Processing and Artificial Augmentation, Network Architecture and ... (INFOGAIN PUBLICATION)
Training a convolutional neural network (CNN) based classifier depends on a large number of factors. These involve tasks such as aggregating an apt dataset, arriving at a suitable CNN architecture, processing the dataset, and selecting the training parameters needed to reach the desired classification results. This review covers the pre-processing and dataset augmentation techniques used in various CNN-based classification studies. In many classification problems, the quality of the dataset determines how well the CNN can be trained, and this quality is judged by the variation in the data for every class. Such a ready-made dataset is rarely found, for many practical reasons, and although a large dataset is recommended, one is again not usually available directly. In some cases, the noise present in the dataset may not be useful for training, while in others researchers prefer to add noise to certain images to make the network less vulnerable to unwanted variations. Hence, researchers use artificial digital imaging techniques to derive variations of the dataset and to remove or add noise. This paper therefore collects state-of-the-art works that applied pre-processing and artificial augmentation to datasets before training. The step after data augmentation is training, which includes the proper selection of several parameters and a suitable CNN architecture; accordingly, the paper also covers the network characteristics, dataset characteristics, and training methodologies used in biomedical imaging, the vision modules of autonomous driverless cars, and a few general vision-based applications.
SIGNIFICANCE OF DIMENSIONALITY REDUCTION IN IMAGE PROCESSING (sipij)
The aim of this paper is to present a comparative study of two linear dimension reduction methods, namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high-dimensional input space onto a feature space in which the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes while minimizing the distance within classes. PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes that best separate the classes. A neural network is trained on the reduced feature set (obtained using PCA or LDA) of the images in the database, using the back-propagation algorithm, to enable fast image search. The proposed method is tested over a general image database using Matlab, and performance is evaluated using precision and recall measures. Experimental results show that PCA gives better performance, with higher precision and recall values and lower computational complexity than LDA.
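As a rough illustration of PCA's core idea (projecting onto the direction of maximal variance), the following stand-alone sketch finds the leading principal component of a toy 2-D dataset by power iteration on the covariance matrix. The data and the one-component reduction are illustrative only, not the paper's experimental setup:

```python
import math

def pca_first_component(data):
    """Return (mean, leading principal component) of a list of points,
    using power iteration on the covariance matrix."""
    n, d = len(data), len(data[0])
    mean = [sum(p[i] for p in data) / n for i in range(d)]
    centered = [[p[i] - mean[i] for i in range(d)] for p in data]
    # Sample covariance matrix (d x d).
    cov = [[sum(row[i] * row[j] for row in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    # Power iteration converges to the eigenvector of the largest
    # eigenvalue, i.e. the direction of maximal variance.
    v = [1.0] * d
    for _ in range(100):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return mean, v

def project(point, mean, v):
    """Reduce a point to one dimension along the principal axis."""
    return sum((point[i] - mean[i]) * v[i] for i in range(len(v)))

data = [[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]]
mean, axis = pca_first_component(data)
print([round(project(p, mean, axis), 2) for p in data])
```

Real image features would be high-dimensional vectors and several components would be retained, but the center-project-reduce pattern is the same.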
A hierarchical RCNN for vehicle and vehicle license plate detection and recog... (IJECEIAES)
Vehicle and vehicle license plate detection has achieved remarkable results in recent years and is widely used in real traffic scenarios such as intelligent traffic monitoring systems, automatic parking systems, and vehicle services. Computer vision has attracted much attention in vehicle and license plate detection, benefiting from image processing and machine learning technologies. However, existing methods still have issues with vehicle and license plate recognition, especially in complex environments. In this paper, we propose a multi-vehicle detection and license plate recognition system based on a hierarchical region convolutional neural network (RCNN). First, a higher-level RCNN is employed to extract vehicles from the original images or video frames. Second, the regions of the detected vehicles are input to a lower-level (smaller) RCNN to detect the license plate. Third, the detected license plate is split into single numbers. Finally, the individual numbers are recognized by an even smaller RCNN. Experiments on a real traffic database validate the proposed method. Compared with the commonly used all-in-one deep learning structure, the proposed hierarchical method handles license plate recognition as multiple levels of sub-tasks, which allows the network size and structure to be adapted to the complexity of each sub-task; the computation load is therefore reduced.
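The four-stage cascade can be expressed as a simple pipeline in which each stage runs a smaller model on the region found by the previous one. The detector arguments below are toy stand-ins for the RCNNs described in the paper (here a "frame" is just a list of strings), so only the control flow is meaningful:

```python
def hierarchical_pipeline(frame, detect_vehicles, detect_plate,
                          split_chars, recognize_char):
    """Run the cascade: vehicles -> plate -> characters -> recognition.
    Each detect_* argument stands in for one level of the hierarchy."""
    results = []
    for vehicle in detect_vehicles(frame):        # level 1: find vehicles
        plate = detect_plate(vehicle)             # level 2: find the plate
        if plate is None:
            continue
        chars = split_chars(plate)                # level 3: segment the plate
        results.append("".join(recognize_char(c) for c in chars))  # level 4
    return results

# Toy stand-ins: vehicles encoded as "type:plate" strings.
frame = ["car:AB12", "truck:XY99"]
plates = hierarchical_pipeline(
    frame,
    detect_vehicles=lambda f: f,
    detect_plate=lambda v: v.split(":")[1],
    split_chars=list,
    recognize_char=str,
)
print(plates)  # ['AB12', 'XY99']
```

The benefit the abstract claims follows from this structure: each stage sees a much smaller input than the original frame, so each model can be correspondingly smaller.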
Multilinear Kernel Mapping for Feature Dimension Reduction in Content Based M... (ijma)
In content-based multimedia retrieval, multimedia information is processed in order to obtain descriptive features. Descriptive representation of features results in a huge feature count, which in turn causes processing overhead. To reduce this overhead, various approaches to dimensionality reduction have been used, among which PCA and LDA are the most common. However, these methods do not reflect the significance of feature content in terms of the inter-relations among all dataset features. To achieve dimension reduction based on histogram transformation, features with low significance can be eliminated. In this paper, we propose a feature dimensionality reduction approach based on multi-linear kernel (MLK) modeling. A benchmark dataset is used for the experimental work, and the proposed approach is observed to improve on the conventional system in the analysis.
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES (ijcax)
Semantic annotation of images is an important research topic in both image understanding and database or web image search. Image annotation is a technique for choosing appropriate labels for images by extracting effective and hidden features from the pictures. In the feature extraction step of the proposed method, we present a model that combines effective features of visual topics (global features over an image) and regional contexts (relationships between the regions in an image and the regions of other images) for automatic image annotation. In the annotation step, we create a new ontology (based on the WordNet ontology) for the semantic relationships between tags, improving the classification and narrowing the semantic gap in automatic image annotation. Experimental results on the 5k Corel dataset show that the proposed image annotation method, in addition to reducing the complexity of the classification, increases accuracy compared with other methods.
MULTI-LEVEL FEATURE FUSION BASED TRANSFER LEARNING FOR PERSON RE-IDENTIFICATION (ijaia)
Most currently known methods treat the person re-identification task as a classification problem and commonly use neural networks. However, these methods use only high-level convolutional features to express the feature representation of pedestrians. Moreover, the current datasets for person re-identification are relatively small; under this limitation, deep convolutional networks are difficult to train adequately, so it is worthwhile to introduce auxiliary datasets to help training. To solve this problem, this paper proposes a novel deep transfer learning method that combines a comparison model with a classification model and multi-level fusion of convolutional features on the basis of transfer learning. In a multi-layer convolutional network, the features of each layer are a dimensionality reduction of the previous layer's output, but the information in multi-level features is not only inclusive but also complementary to some degree; the information gap between different layers of a convolutional neural network can therefore be exploited to extract a better feature representation. Finally, the proposed algorithm is fully tested on four datasets (VIPeR, CUHK01, GRID, and PRID450S), and the obtained re-identification results prove its effectiveness.
K2 Algorithm-based Text Detection with an Adaptive Classifier Threshold (CSCJournals)
In natural scene images, text detection is a challenging study area for various content-based image analysis tasks. In this paper, Bayesian network scores are used to classify candidate character regions by computing posterior probabilities, and the posterior probabilities are used to define an adaptive threshold for detecting text in scene images accurately. Candidate character regions are first extracted using maximally stable extremal regions. K2 algorithm-based Bayesian network scores are learned by evaluating dependencies among the features of a given candidate character region. A Bayesian logistic regression classifier is trained to compute posterior probabilities, from which an adaptive classifier threshold is defined. Candidate character regions falling below the adaptive classifier threshold are discarded as non-character regions. Finally, text regions are detected using an effective text localization scheme based on geometric features. The entire system is evaluated on the ICDAR 2013 competition database, and the experimental results show performance (precision, recall, and harmonic mean) competitive with recently published algorithms.
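The filtering step can be sketched as follows. The paper's exact thresholding rule is not given in the abstract, so as a hedged stand-in this sketch derives the adaptive threshold from the mean posterior over all candidates; the region names and probability values are hypothetical:

```python
def adaptive_threshold_filter(posteriors):
    """Keep candidate character regions whose posterior probability
    clears an adaptive threshold. Stand-in rule: the mean posterior
    over all candidates (the paper's rule may differ)."""
    t = sum(posteriors.values()) / len(posteriors)
    kept = {region: p for region, p in posteriors.items() if p >= t}
    return kept, t

# Hypothetical posteriors for five candidate regions from the classifier.
candidates = {"r1": 0.92, "r2": 0.15, "r3": 0.78, "r4": 0.40, "r5": 0.85}
kept, threshold = adaptive_threshold_filter(candidates)
print(sorted(kept), round(threshold, 2))  # ['r1', 'r3', 'r5'] 0.62
```

An adaptive threshold of this kind avoids a fixed cut-off that would behave differently across images with different contrast and clutter levels.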
COMPARATIVE STUDY OF DIMENSIONALITY REDUCTION TECHNIQUES USING PCA AND ... (csandit)
The aim of this paper is to present a comparative study of two linear dimension reduction methods, namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high-dimensional input space onto a feature space in which the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes while minimizing the distance within classes. PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes that best separate the classes. The proposed method is tested over a general image database using Matlab, and performance is evaluated using precision and recall measures. Experimental results show that the PCA-based dimension reduction method gives better performance, with higher precision and recall values and lower computational complexity than the LDA-based method.
Security and imperceptibility improving of image steganography using pixel al... (IJECEIAES)
Information security is one of the main aspects of processes and methodologies in the technical age of information and communication, and should be a key priority in the secret exchange of information between two parties. Among the strategies used to ensure information security are steganography and cryptography. An effective digital image steganographic method based on odd/even pixel allocation and a random function has been developed to increase security and imperceptibility, and this newly developed scheme has been verified against the existing problems. Huffman coding is applied to the secret data prior to the embedding stage; this modified, equivalent representation protects the secret data from attackers and increases the embedding capacity. The main objectives of the scheme are to boost the peak signal-to-noise ratio (PSNR) of the stego cover, to resist attacks, and to increase the size of the secret data that can be hidden. The results confirm good PSNR values and the eligibility of the proposed method.
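For readers unfamiliar with the embedding idea, here is a minimal least-significant-bit (LSB) sketch. The paper's method adds odd/even pixel allocation, a random traversal function, and Huffman coding on top of this basic mechanism; the cover pixel values and secret bits below are made up:

```python
def embed_bits(pixels, bits):
    """Hide secret bits in the least-significant bit of successive
    pixel values (plain sequential LSB; the paper's allocation and
    random traversal are omitted for clarity)."""
    stego = pixels[:]
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # clear the LSB, then set it
    return stego

def extract_bits(pixels, n):
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

cover = [120, 35, 200, 77, 64, 19]     # hypothetical grayscale pixels
secret = [1, 0, 1, 1]
stego = embed_bits(cover, secret)
print(extract_bits(stego, 4))  # [1, 0, 1, 1]
```

Because each pixel changes by at most 1, the stego image stays visually close to the cover, which is exactly what the PSNR measure in the abstract quantifies.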
Content-based image retrieval is the retrieval of images with respect to visual appearance attributes such as texture, shape, and color. The methods, components, and algorithms adopted in content-based image retrieval are commonly derived from areas such as pattern recognition, signal processing, and computer vision. The shape and color features are extracted by means of the wavelet transform and the color histogram. A new content-based retrieval approach is proposed in this paper, with algorithms for shape, color, and texture feature extraction. The discrete wavelet transform is implemented, and the Euclidean distance is computed for matching. Clusters are computed with the help of a modified K-Means clustering technique, and the analysis compares the query image against the database images. MATLAB is used to execute the queries. The K-Means-based extraction is carried out through fragmentation and grid-means modules, feature extraction, and a K-nearest-neighbor clustering algorithm to construct the content-based image retrieval system. The obtained results are evaluated and compared with other algorithms in terms of the quality of the retrieved image features.
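Since the abstract describes a modified K-Means step without giving its details, the following sketch shows only the standard K-Means baseline that such a modification builds on, applied to toy 2-D feature points (the data and seed are arbitrary):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Standard k-means: alternate nearest-centre assignment and
    centre recomputation. The paper uses a modified variant; this is
    only the baseline for illustration."""
    rng = random.Random(seed)
    centers = [list(c) for c in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centre (squared Euclidean).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centre if a cluster empties out
                centers[i] = [sum(xs) / len(cl) for xs in zip(*cl)]
    return centers, clusters

pts = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
centers, _ = kmeans(pts, k=2)
print(sorted(centers))  # one centre per well-separated group
```

In a CBIR setting the points would be the extracted feature vectors, and the cluster assignment narrows the set of database images compared against the query.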
META-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVAL (IJCSEIT Journal)
The proposed approach avoids the semantic gap in image retrieval by combining automatic relevance feedback with a modified stochastic algorithm. A visual feature database is constructed from the image database using a combined feature vector; only a few fast-computable features are included in this step. The user selects the query image, and based on it the system ranks the whole dataset. The nearest images are retrieved and the first automatic relevance feedback is generated. The combined similarity of the textual and visual feature spaces is evaluated using Latent Semantic Indexing, and the images are labelled as relevant or irrelevant. The feedback drives a feature re-weighting process and is routed to the particle swarm optimizer. Instead of the classical swarm update approach, the swarm is split so that each sub-swarm performs the search in parallel, thereby increasing the performance of the system. This provides a powerful optimization tool and an effective space exploration mechanism. The proposed approach aims to achieve the following goals without any human interaction: to cluster relevant images using meta-heuristics and to dynamically modify the feature space by feeding back automatic relevance feedback.
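The split-swarm idea above can be sketched with a minimal particle swarm optimizer: several independent sub-swarms each run the standard velocity/position update and the best result across sub-swarms wins. This is a conceptual sketch on a 1-D test function, not the paper's feature re-weighting objective, and the inertia/attraction coefficients are common textbook defaults:

```python
import random

def pso_split(f, bounds, n_particles=10, n_swarms=2, iters=50, seed=1):
    """Minimise f over [lo, hi] with n_swarms independent sub-swarms
    (each could run in parallel); returns the best position found."""
    rng = random.Random(seed)
    lo, hi = bounds
    best_overall = None
    for _ in range(n_swarms):
        xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
        vs = [0.0] * n_particles
        pbest = xs[:]                      # per-particle best positions
        gbest = min(xs, key=f)             # sub-swarm best position
        for _ in range(iters):
            for i in range(n_particles):
                r1, r2 = rng.random(), rng.random()
                # Classical update: inertia + pull toward personal and swarm bests.
                vs[i] = 0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i]) \
                        + 1.5 * r2 * (gbest - xs[i])
                xs[i] = min(max(xs[i] + vs[i], lo), hi)
                if f(xs[i]) < f(pbest[i]):
                    pbest[i] = xs[i]
            gbest = min(pbest, key=f)
        if best_overall is None or f(gbest) < f(best_overall):
            best_overall = gbest
    return best_overall

# Minimise (x - 3)^2; the sub-swarms should converge near x = 3.
print(round(pso_split(lambda x: (x - 3) ** 2, (-10, 10)), 2))
```

In the paper's setting, a particle's position would encode feature weights, and f would score retrieval quality under the relevance-feedback labels.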
Content Based Image Retrieval: Classification Using Neural Networks (ijma)
In a content-based image retrieval (CBIR) system, the main issue is to extract image features that effectively represent the image contents in a database. Such extraction requires a detailed evaluation of the retrieval performance of the image features. This paper presents a review of the fundamental aspects of content-based image retrieval, including the extraction of color and texture features. Commonly used color features, including color moments, the color histogram, and the color correlogram, as well as Gabor texture features, are compared. The paper reviews the increase in retrieval efficiency when color and texture features are combined. The similarity measures based on which matches are made and images are retrieved are also discussed. For effective indexing and fast searching of images based on visual features, neural-network-based pattern learning can be used to achieve effective classification.
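A color histogram and its associated similarity measure, both mentioned above, can be shown concretely. This sketch quantizes each RGB channel into a few levels and compares two histograms by histogram intersection; the pixel values are made up, and real systems would also add texture descriptors:

```python
def color_histogram(pixels, bins=4):
    """Quantised RGB colour histogram: each channel is reduced to
    `bins` levels, giving bins**3 cells, then normalised to sum to 1."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r * bins // 256) * bins * bins \
            + (g * bins // 256) * bins \
            + (b * bins // 256)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical colour distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

red = [(250, 10, 10)] * 8                              # all-red image
mostly_red = [(250, 10, 10)] * 6 + [(10, 10, 250)] * 2  # 3/4 red, 1/4 blue
print(round(histogram_intersection(color_histogram(red),
                                   color_histogram(mostly_red)), 2))  # 0.75
```

Normalising by pixel count makes the comparison independent of image size, which is one reason histogram intersection is a common CBIR baseline.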
Technical analysis of content placement algorithms for content delivery netwo... (IJECEIAES)
The content placement algorithm is an integral part of a cloud-based content delivery network: it is responsible for selecting precisely which content to store on the surrogate servers distributed over a geographical region. Although various works have already been carried out in this area, most of them have loopholes that have not received much disclosure. It is already known that quality of service, quality of experience, and cost are essential objectives targeted for improvement in existing work, yet there are various other aspects and underlying considerations that are equally important from the design standpoint. Therefore, this paper contributes a review of existing approaches to content placement over cloud-based content delivery networks, with the aim of exposing open-ended research issues.
A principal component analysis-based feature dimensionality reduction scheme ... (TELKOMNIKA JOURNAL)
In a content-based image retrieval (CBIR) system, one approach to image representation is to employ a combination of low-level visual features cascaded together into a flat vector. While this provides more descriptive information, it poses serious challenges in terms of high dimensionality and the high computational cost of feature extraction algorithms when deploying CBIR on platforms (devices) with limited computational and storage resources. Hence, in this work a feature dimensionality reduction technique based on principal component analysis (PCA) is implemented. Each image in a database is indexed using a 174-dimensional feature vector comprising 54-dimensional colour moments (CM54), a 32-bin HSV histogram (HIST32), a 48-dimensional Gabor wavelet (GW48), and 40-dimensional wavelet moments (MW40). The PCA scheme was incorporated into a CBIR system that utilized the entire feature vector space; the k largest eigenvalues that yielded no more than a 5% degradation in mean precision were retained for dimensionality reduction. Three image databases (DB10, DB20 and DB100) were used for testing. The results showed that, with an 80% reduction in feature dimensions, tolerable losses of 3.45, 4.39 and 7.40% in mean precision were achieved on DB10, DB20 and DB100, respectively.
Image hashing is an efficient way to handle the digital data authentication problem; an image hash is a compact, quality summarization of image features. In this paper, a modified center-symmetric local binary pattern (CSLBP) image hashing algorithm is proposed. Unlike the 16-bin CSLBP histogram, the modified CSLBP generates 8-bin histograms, producing a compact hash without compromising quality. It has been found that uniform quantization of a histogram with more bins results in more precision loss; to overcome this, the modified CSLBP generates two histograms of four bins each, since uniform quantization of a 4-bin histogram loses less precision than that of a 16-bin histogram. The first histogram represents the nearest neighbours and the second the diagonal neighbours. To enhance discrimination power, two local weight factors are used during histogram generation for the nearest and diagonal neighbours: the standard deviation (SD), which captures local variation from the mean, and the Laplacian of Gaussian (LoG), a second-order derivative edge detection operator that detects edges well in the presence of noise. The proposed algorithm is resilient to various kinds of attacks. It is tested on a database of malicious and non-malicious images using benchmarks such as NHD and ROC, which confirm the theoretical analysis. The experimental results show good performance of the proposed method under various attacks despite the short hash length.
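The nearest/diagonal split described above can be illustrated on a single 3x3 patch. Center-symmetric LBP compares the four opposing neighbour pairs around the centre; splitting them into a horizontal/vertical (nearest) pair code and a diagonal pair code yields two 2-bit codes, hence two 4-bin histograms. The patch values below are arbitrary, and the weighting by SD and LoG is omitted:

```python
def cslbp_codes(patch):
    """Return (nearest_code, diagonal_code) for a 3x3 patch.
    Each code has 2 bits, one per centre-symmetric neighbour pair,
    so accumulating codes over an image fills two 4-bin histograms."""
    # The eight neighbours, clockwise from the top-left corner.
    n = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
         patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    nearest = (int(n[1] > n[5]) << 1) | int(n[3] > n[7])   # top/bottom, right/left
    diagonal = (int(n[0] > n[4]) << 1) | int(n[2] > n[6])  # the two diagonal pairs
    return nearest, diagonal

patch = [[9, 7, 1],
         [8, 5, 2],
         [3, 4, 6]]
print(cslbp_codes(patch))  # (2, 2)
```

Since each code only takes values 0-3, the per-image histograms have four bins each, which is the compactness the abstract exploits for a short hash.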
Face Recognition for Human Identification using BRISK Feature and Normal Dist... (ijtsrd)
Face recognition, a kind of automatic human identification from face images, has been widely researched in image processing and machine learning. A face image presents the facial information of a person, which is unique to each individual. We propose a methodology for automatic human classification based on the Binary Robust Invariant Scalable Keypoints (BRISK) feature of face images and a normal distribution model. In the proposed methodology, the normal distribution model is used to represent the statistical information of the face image as a global feature, and the output of the system is the person's name for the given input face image. The proposed feature is applied with artificial neural networks to recognize faces for human identification, and is extracted from the face images of the Extended Yale Face Database B to perform identification and highlight its properties. Khin Mar Thi, "Face Recognition for Human Identification using BRISK Feature and Normal Distribution Model", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume 3, Issue 5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26589.pdf Paper URL: https://www.ijtsrd.com/computer-science/multimedia/26589/face-recognition-for-human-identification-using-brisk-feature-and-normal-distribution-model/khin-mar-thi
Global Descriptor Attributes Based Content Based Image Retrieval of Query Images (IJERA Editor)
The need for efficient content-based image retrieval systems has increased greatly; efficient and effective image retrieval techniques are desired because of the explosive growth of digital images. Content-based image retrieval (CBIR) is a promising approach because of its automatic indexing and retrieval based on semantic features and visual appearance. In the proposed system we investigate a method for describing the contents of images that characterizes them by global descriptor attributes, where global features are extracted to make the system more efficient: the color features color expectancy, color variance, and skewness, and the texture feature correlation.
A deep locality-sensitive hashing approach for achieving optimal image retri... (IJECEIAES)
Efficient methods that enable fast, high-quality image retrieval are continuously needed, especially given the large mass of images generated by different sectors and domains such as business, communication media, and entertainment. Recently, deep neural networks have proved to be higher-performing models compared with traditional ones, and combining hashing methods with a deep learning architecture improves both image retrieval time and accuracy. In this paper, we propose a novel image retrieval method that employs locality-sensitive hashing with convolutional neural networks (CNNs) to extract different types of features from different model layers. The aim of this hybrid framework is to capture both the high-level information that provides semantic content and the low-level information that provides the visual content of the images. Hash tables are constructed from the extracted features and trained to achieve fast image retrieval. To verify the effectiveness of the proposed framework, a variety of experiments and computational performance analyses are carried out on the CIFAR-10 and NUS-WIDE datasets. The experimental results show that the proposed method surpasses most existing hash-based image retrieval methods.
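One standard way to build the locality-sensitive hash mentioned above is with random hyperplanes: each bit of the code records which side of a random hyperplane the feature vector falls on, so similar vectors tend to collide in the same hash bucket. This is a generic sketch of that family of hashes, not the paper's specific construction, and the vectors are toy stand-ins for CNN features:

```python
import random

def make_planes(dim, n_bits, seed=0):
    """Random Gaussian hyperplanes through the origin, one per hash bit."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_code(vec, planes):
    """One bit per plane: which side of the hyperplane the vector lies on."""
    return tuple(int(sum(p * x for p, x in zip(plane, vec)) >= 0)
                 for plane in planes)

planes = make_planes(dim=4, n_bits=8)
a = [0.9, 0.1, 0.4, 0.3]
b = [x * 2 for x in a]  # same direction as a, so every dot-product sign matches
print(lsh_code(a, planes) == lsh_code(b, planes))  # True
```

Vectors pointing in nearby directions differ in few bits, so a hash-table lookup on the code retrieves candidate neighbours without scanning the whole database; that is the retrieval-time gain the abstract refers to.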
Content Based Image Retrieval Project (takeoffprojects.com)
Due to recent developments in technology, the use of digital cameras, smartphones, and the Internet has increased. The shared and stored multimedia data are growing, and searching for or retrieving a relevant image from an archive is a challenging research problem. The fundamental requirement of any image retrieval model is to search for and rank the images that stand in a visual-semantic relationship with the user's query.
http://takeoffprojects.com/content-based-image-retrieval-project
National Flags Recognition Based on Principal Component Analysis (ijtsrd)
Recognizing an unknown flag in a scene is challenging owing to the diversity of the data and the complexity of the identification process. Flags are associated with geographical regions, countries and nations, and identifying the flags of different countries is a difficult task. The aim of the study is to propose a feature-extraction-based recognition system for Myanmar's national flag. Image features are acquired from the region and state flags, which are identified using principal component analysis (PCA), a statistical approach for reducing the number of features in the flag recognition system. Soe Moe Myint | Moe Moe Myint | Aye Aye Cho, "National Flags Recognition Based on Principal Component Analysis", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26775.pdf Paper URL: https://www.ijtsrd.com/other-scientific-research-area/other/26775/national-flags-recognition-based-on-principal-component-analysis/soe-moe-myint
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES (ijcax)
Semantic annotation of images is an important research topic in both image understanding and database or web image search. Image annotation is a technique for choosing appropriate labels for images by extracting effective and hidden features from pictures. In the feature extraction step of the proposed method, we present a model that combines effective features of visual topics (global features over an image) and regional contexts (relationships between the regions within an image and across images) for automatic image annotation. In the annotation step, we create a new ontology (based on the WordNet ontology) for the semantic relationships between tags, improving the classification and narrowing the semantic gap that exists in automatic image annotation. Experimental results on the 5k Corel dataset show that the proposed annotation method reduces the complexity of the classification while increasing accuracy compared with other methods.
MULTI-LEVEL FEATURE FUSION BASED TRANSFER LEARNING FOR PERSON RE-IDENTIFICATION (ijaia)
Most currently known methods treat person re-identification as a classification problem and commonly use neural networks. However, these methods rely only on high-level convolutional features to express the feature representation of pedestrians. Moreover, the current datasets for person re-identification are relatively small; under this limitation, deep convolutional networks are difficult to train adequately, so it is worthwhile to introduce auxiliary datasets to help training. To solve this problem, this paper proposes a novel deep transfer learning method that combines a comparison model with a classification model and fuses the convolutional features at multiple levels on the basis of transfer learning. In a multi-layer convolutional network, the features of each layer are a dimensionality reduction of the previous layer's output, but the information carried by multi-level features is not only inclusive but also complementary, so the information gap between different layers of a convolutional neural network can be used to extract a better feature expression. Finally, the proposed algorithm is fully tested on four datasets (VIPeR, CUHK01, GRID and PRID450S), and the obtained re-identification results prove its effectiveness.
K2 Algorithm-based Text Detection with An Adaptive Classifier Threshold (CSCJournals)
In natural scene images, text detection is a challenging study area for various content-based image analysis tasks. In this paper, Bayesian network scores are used to classify candidate character regions by computing posterior probabilities, which in turn define an adaptive threshold for detecting text in scene images accurately. Candidate character regions are extracted as maximally stable extremal regions. K2 algorithm-based Bayesian network scores are learned by evaluating dependencies among the features of a given candidate character region, and a Bayesian logistic regression classifier is trained to compute the posterior probabilities that define the adaptive classifier threshold. Candidate character regions that fall below the adaptive classifier threshold are discarded as non-character regions. Finally, text regions are detected using an effective text localization scheme based on geometric features. The entire system is evaluated on the ICDAR 2013 competition database, and the experimental results show performance (precision, recall and harmonic mean) competitive with recently published algorithms.
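The adaptive-threshold filtering step can be illustrated with a minimal sketch; the mean-posterior rule and the numbers below are assumptions for illustration, not the paper's exact K2-derived threshold:

```python
import numpy as np

def filter_regions(posteriors):
    """Keep candidate regions whose character posterior clears an
    adaptive threshold; here the threshold is simply the mean posterior
    (an illustrative rule, not the paper's exact definition)."""
    t = float(np.mean(posteriors))
    keep = [i for i, p in enumerate(posteriors) if p >= t]
    return t, keep

# hypothetical posterior probabilities of five candidate regions
posteriors = [0.92, 0.15, 0.78, 0.40, 0.88]
threshold, kept = filter_regions(posteriors)
# regions 0, 2 and 4 clear the 0.626 threshold; the rest are discarded
```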
COMPARATIVE STUDY OF DIMENSIONALITY REDUCTION TECHNIQUES USING PCA AND ... (csandit)
The aim of this paper is to present a comparative study of two linear dimension reduction methods, namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high-dimensional input space onto the feature space where the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes and minimizing the distance within classes: PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes for best class separability. The proposed method is experimented on a general image database using Matlab, and the performance of these systems is evaluated by precision and recall measures. Experimental results show that the PCA-based dimension reduction method gives better performance, in terms of higher precision and recall values, with less computational complexity than the LDA-based method.
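The PCA side of the comparison can be sketched in a few lines; the data and component count are illustrative assumptions:

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components (maximum-variance axes)."""
    Xc = X - X.mean(axis=0)                    # centre the data
    cov = np.cov(Xc, rowvar=False)             # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # k highest-variance directions
    return Xc @ top

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
X[:, 1] = 3.0 * X[:, 0]   # make one dimension redundant
Z = pca(X, 2)             # 5-D data reduced to 2-D
```

LDA differs only in the matrix being diagonalised: between-class scatter against within-class scatter rather than total covariance.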
Security and imperceptibility improving of image steganography using pixel al... (IJECEIAES)
Information security is one of the main aspects of processes and methodologies in the technical age of information and communication, and it should be a key priority in the secret exchange of information between two parties. The strategies used to ensure information security include steganography and cryptography. An effective digital image-steganographic method based on odd/even pixel allocation and a random function has been developed to increase security and imperceptibility, and this newly developed scheme has been verified against the existing problems. Huffman coding is used to modify the secret data prior to the embedding stage; this modified equivalent of the secret data keeps it from attackers and increases the secret-data capacity. The main objective of the scheme is to boost the peak signal-to-noise ratio (PSNR) of the stego cover and to resist attacks, while the size of the embeddable secret data also increases. The results confirm good PSNR values, and these findings confirm the eligibility of the proposed method.
Content-based image retrieval is the retrieval of images with respect to visual appearance such as texture, shape and color. The methods, components and algorithms adopted in such content-based retrieval of images are commonly derived from areas such as pattern recognition, signal processing and computer vision. The shape and color features are abstracted by means of the wavelet transform and the color histogram; thus a new content-based retrieval approach is proposed in this paper. Algorithms are proposed for shape, shade and texture feature abstraction, and the discrete wavelet transform is implemented in order to compute the Euclidean distance. Clusters are calculated with the help of a modified K-means clustering technique, and the analysis is made between the query image and the database images. MATLAB is used to execute the queries. The K-means abstraction is performed through fragmentation and grid-means modules, feature extraction and a K-nearest-neighbor clustering algorithm to construct the content-based image retrieval system. The obtained results are computed and compared against the other algorithms for the retrieval of quality image features.
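A minimal sketch of the histogram-feature and Euclidean-distance ranking described above (the grey-level histogram, bin count, and toy images are assumptions; the full system also uses wavelet and shape features):

```python
import numpy as np

def grey_histogram(img, bins=8):
    """Normalised intensity histogram used as a simple global feature."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def rank_by_distance(query, database):
    """Sort database images by Euclidean distance in feature space."""
    q = grey_histogram(query)
    d = [np.linalg.norm(q - grey_histogram(img)) for img in database]
    return np.argsort(d)   # index 0 is the closest match

rng = np.random.default_rng(2)
query = rng.integers(0, 256, (32, 32))
near = np.clip(query + rng.integers(-5, 6, (32, 32)), 0, 255)  # near-duplicate
far = rng.integers(0, 128, (32, 32))    # very different intensity profile
order = rank_by_distance(query, [far, near])
# the near-duplicate (index 1) is ranked ahead of the dissimilar image
```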
META-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVAL (IJCSEIT Journal)
The proposed approach avoids the semantic gap in image retrieval by combining automatic relevance feedback with a modified stochastic algorithm. A visual feature database is constructed from the image database using a combined feature vector; very few fast-computable features are included in this step. The user selects the query image, and based on it the system ranks the whole dataset. The nearest images are retrieved and the first automatic relevance feedback is generated. The combined similarity of the textual and visual feature spaces is evaluated using Latent Semantic Indexing, and the images are labelled as relevant or irrelevant. The feedback drives a feature re-weighting process and is routed to the particle swarm optimizer. Instead of the classical swarm update approach, the swarm is split so that each sub-swarm performs the search in parallel, thereby increasing the performance of the system; this provides a powerful optimization tool and an effective space exploration mechanism. The proposed approach aims to achieve the following goals without any human interaction: to cluster relevant images using meta-heuristics and to dynamically modify the feature space by feeding back automatic relevance feedback.
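A minimal single-swarm particle swarm optimizer can be sketched as below; the paper splits the swarm and runs the halves in parallel, and the objective, coefficients, and sizes here are illustrative assumptions:

```python
import numpy as np

def pso(f, n_particles=20, dim=2, iters=50, seed=3):
    """Minimal particle swarm optimiser minimising f over R^dim
    (single-swarm sketch; the paper splits the swarm into parallel halves)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # personal bests
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + pull towards personal best + pull towards global best
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# minimise the 2-D sphere function as a stand-in objective
best, best_val = pso(lambda p: float((p ** 2).sum()))
```

In the retrieval setting, the objective would instead score a candidate feature weighting against the relevance-feedback labels.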
Content Based Image Retrieval: Classification Using Neural Networks (ijma)
In a content-based image retrieval (CBIR) system, the main issue is to extract the image features that effectively represent the image contents in a database; such extraction requires a detailed evaluation of the retrieval performance of image features. This paper presents a review of fundamental aspects of content-based image retrieval, including extraction of color and texture features. Commonly used color features, including color moments, the color histogram and the color correlogram, as well as the Gabor texture feature, are compared, and the review covers the increase in retrieval efficiency when color and texture features are combined. The similarity measures by which matches are made and images retrieved are also discussed. For effective indexing and fast searching of images based on visual features, neural-network-based pattern learning can be used to achieve effective classification.
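The color-moment features mentioned above reduce to the first three statistical moments of a channel; this sketch uses a constant toy channel, so the spread and skewness vanish:

```python
import numpy as np

def colour_moments(channel):
    """First three moments (mean, standard deviation, skewness)
    of a single colour channel, used as a compact colour feature."""
    c = channel.astype(float).ravel()
    mu = c.mean()
    sigma = c.std()
    # skewness is undefined for zero spread; report 0.0 in that case
    skewness = ((c - mu) ** 3).mean() / sigma ** 3 if sigma else 0.0
    return mu, sigma, skewness

flat = np.full((8, 8), 100)          # constant channel: no spread, no skew
mu, sigma, sk = colour_moments(flat)
```

Computed per channel (e.g. H, S, V), this yields the 9-dimensional colour-moment vector commonly cited in CBIR surveys.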
Technical analysis of content placement algorithms for content delivery netwo... (IJECEIAES)
Content placement algorithms are an integral part of a cloud-based content delivery network: they are responsible for selecting the precise content to be deposited on the surrogate servers distributed over a geographical region. Although various works have already been carried out in this sector, there are loopholes in most of them that have received little disclosure. It is already known that quality of service, quality of experience, and cost are essential objectives targeted for improvement in existing work, yet various other aspects and underlying factors are equally important from a design standpoint. Therefore, this paper reviews the existing approaches to content placement over cloud-based content delivery networks with the aim of exposing open research issues.
A principal component analysis-based feature dimensionality reduction scheme ... (TELKOMNIKA JOURNAL)
In a content-based image retrieval (CBIR) system, one approach to image representation is to employ a combination of low-level visual features cascaded together into a flat vector. While this provides more descriptive information, it poses serious challenges, in terms of high dimensionality and the high computational cost of feature extraction algorithms, to deploying CBIR on platforms (devices) with limited computational and storage resources. Hence, in this work a feature dimensionality reduction technique based on principal component analysis (PCA) is implemented. Each image in a database is indexed using a 174-dimensional feature vector comprising 54-dimensional colour moments (CM54), a 32-bin HSV histogram (HIST32), 48-dimensional Gabor wavelets (GW48) and 40-dimensional wavelet moments (MW40). The PCA scheme was incorporated into a CBIR system that utilized the entire feature vector space, and the k largest eigenvalues that yielded no more than a 5% degradation in mean precision were retained for dimensionality reduction. Three image databases (DB10, DB20 and DB100) were used for testing. The results obtained showed that, with an 80% reduction in feature dimensions, tolerable losses of 3.45, 4.39 and 7.40% in mean precision were achieved on DB10, DB20 and DB100 respectively.
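The eigenvalue-retention idea can be sketched with a cumulative explained-variance criterion; the 95% variance target is a common proxy and an assumption here, whereas the paper instead retains the k largest eigenvalues subject to at most 5% loss of mean precision:

```python
import numpy as np

def components_for_variance(X, target=0.95):
    """Smallest number of principal components whose cumulative
    explained variance reaches `target` (a common proxy criterion;
    the paper uses mean-precision degradation instead)."""
    Xc = X - X.mean(axis=0)
    vals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
    ratio = np.cumsum(vals) / vals.sum()
    return int(np.searchsorted(ratio, target) + 1)

rng = np.random.default_rng(4)
base = rng.standard_normal((200, 3))
# 6-D feature vector whose last three dimensions nearly duplicate the first three
X = np.hstack([base, base + 0.01 * rng.standard_normal((200, 3))])
k = components_for_variance(X)   # three components suffice
```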
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
Image hashing is an efficient way to handle the digital data authentication problem: an image hash is a compact, quality summarization of image features. In this paper, a modified center-symmetric local binary pattern (CSLBP) image hashing algorithm is proposed. Unlike the 16-bin CSLBP histogram, the modified CSLBP generates an 8-bin histogram, without compromising quality, to produce a compact hash. It has been found that uniform quantization of a histogram with more bins results in more precision loss; to overcome this, the modified CSLBP generates two histograms of four bins each, since uniform quantization of a 4-bin histogram loses less precision than of a 16-bin histogram. The first histogram represents the nearest neighbours and the second the diagonal neighbours. To enhance quality in terms of discrimination power, different weight factors are used during histogram generation: for the nearest and the diagonal neighbours, two local weight factors are applied, the standard deviation (SD) and the Laplacian of Gaussian (LoG). The standard deviation represents the spread of the data, capturing local variation from the mean, while the LoG is a second-order derivative edge detection operator that detects edges well in the presence of noise. The proposed algorithm is resilient to various kinds of attacks. The method is tested on a database of malicious and non-malicious images using benchmarks such as NHD and ROC, which confirm the theoretical analysis, and the experimental results show good performance under various attacks despite the short hash length.
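The two 4-bin center-symmetric histograms (nearest and diagonal neighbour pairs) can be sketched as follows; the SD and LoG weighting of the paper is omitted, and the toy image is an assumption:

```python
import numpy as np

def cslbp_histograms(img, t=0.0):
    """Two 4-bin histograms from centre-symmetric pixel comparisons:
    one over the nearest (horizontal/vertical) pairs, one over the
    diagonal pairs. A sketch of the modified CSLBP idea; the paper
    additionally weights the bins by SD and LoG."""
    a = img.astype(float)
    # nearest pairs around each interior pixel: (left, right), (up, down)
    bit0 = (a[1:-1, 2:] - a[1:-1, :-2] > t).astype(int)
    bit1 = (a[2:, 1:-1] - a[:-2, 1:-1] > t).astype(int)
    near_code = bit0 + 2 * bit1            # 2 bits -> values 0..3
    # diagonal pairs: (top-left, bottom-right), (top-right, bottom-left)
    bit2 = (a[2:, 2:] - a[:-2, :-2] > t).astype(int)
    bit3 = (a[2:, :-2] - a[:-2, 2:] > t).astype(int)
    diag_code = bit2 + 2 * bit3
    hn = np.bincount(near_code.ravel(), minlength=4)
    hd = np.bincount(diag_code.ravel(), minlength=4)
    return hn, hd

# intensity increases left-to-right and top-to-bottom, so every
# comparison fires and all 9 interior pixels map to code 3
img = np.arange(25).reshape(5, 5)
hn, hd = cslbp_histograms(img)
```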
Face Recognition for Human Identification using BRISK Feature and Normal Dist... (ijtsrd)
Face recognition, a kind of automatic human identification from face images, has been researched widely in image processing and machine learning. A face image presents the facial information of a person, information that is unique to each individual even when two people have very similar faces. We propose a methodology for automatic human classification based on the Binary Robust Invariant Scalable Keypoints (BRISK) feature of face images and a normal distribution model, where the normal distribution model is used to represent the statistical information of the face image as a global feature. The system outputs the person's name for the input face image. The proposed feature is applied with artificial neural networks to recognize faces for human identification, and is extracted from the face images of the Extended Yale Face Database B to perform human identification and highlight the properties of the proposed feature. Khin Mar Thi, "Face Recognition for Human Identification using BRISK Feature and Normal Distribution Model", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26589.pdf Paper URL: https://www.ijtsrd.com/computer-science/multimedia/26589/face-recognition-for-human-identification-using-brisk-feature-and-normal-distribution-model/khin-mar-thi
Content-based image retrieval based on corel dataset using deep learning (IAESIJAI)
Content-based image retrieval (CBIR) is a popular technique for retrieving images from huge, unlabeled image databases. However, traditional information retrieval techniques do not satisfy users in terms of time consumption and accuracy. Additionally, the number of images accessible to users is growing due to the development of the web and transmission networks, and huge numbers of digital images are created in many places. Quick access to these huge image databases, and retrieving images similar to a query image from such collections, therefore presents significant challenges and calls for an effective technique; feature extraction and similarity measurement are decisive for the performance of a CBIR technique. This work proposes a simple but efficient deep learning framework based on convolutional neural networks (CNN) for the feature extraction phase of CBIR, aiming to reduce the semantic gap between low-level and high-level features. Similarity measures are used to compute the distance between the query and database image features. When retrieving the first 10 images, an experiment on the Corel-1K dataset showed an average precision of 0.88 with the Euclidean distance, a considerable improvement over state-of-the-art approaches.
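Average precision over the first 10 retrieved images reduces to a precision-at-k computation, sketched here with hypothetical rankings and ground truth:

```python
def precision_at_k(retrieved, relevant, k=10):
    """Fraction of the top-k retrieved images that are relevant."""
    top = retrieved[:k]
    return sum(1 for r in top if r in relevant) / len(top)

# hypothetical ranked image ids and hypothetical relevant set
retrieved = [3, 7, 1, 9, 4, 8, 2, 6, 5, 0]
relevant = {1, 2, 3, 4, 5, 6, 7, 9}
p = precision_at_k(retrieved, relevant)   # 8 of the top 10 are relevant
```

Averaging this value over all query images gives the 0.88 figure the abstract reports for Corel-1K.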
IMAGE RECOGNITION USING MATLAB SIMULINK BLOCKSET (IJCSEA Journal)
The world over, image recognition systems are essential players in promoting quality object recognition, especially in emergency and search-and-rescue operations. In this paper, a precise image recognition system using the Matlab Simulink Blockset to detect a selected object in a crowd is presented. The process involves extracting object features and then recognizing the object under varying illumination, direction and pose. A Simulink model has been developed to eliminate the tiny elements from the image and then create segments for precise object recognition. Furthermore, the simulation explores image recognition from coloured and grey-scale images through image processing techniques in the Simulink environment; the tool employed for computation and simulation is the Matlab image processing blockset. The process comprises morphological operations, which are effective for captured images and video. The results of extensive simulations indicate that this method is suitable for applications identifying a person in a crowd, and the model can be used in emergency and search-and-rescue operations as well as in medicine, information security, access control, law enforcement, surveillance systems, microscopy, etc.
Combination of texture feature extraction and forward selection for one-clas... (IJECEIAES)
This study aims to validate self-portraits using one-class support vector machine (OCSVM). To validate accurately, we build a model by combining texture feature extraction methods, Haralick and local binary pattern (LBP). We also reduce irrelevant features using forward selection (FS). OCSVM was selected because it can solve the problem caused by the inadequate variation of the negative class population. In OCSVM, we only need to feed the algorithm using the true class data, and the data with pattern that does not match will be classified as false. However, combining the two feature extractions produces many features, leading to the curse of dimensionality. The FS method is used to overcome this problem by selecting the best features. From the experiments carried out, the Haralick+LBP+FS+OCSVM model outperformed other models with an accuracy of 95.25% on validation data and 91.75% on test data.
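The forward selection (FS) step can be sketched as a greedy loop; the feature names and the toy scoring function below are hypothetical, standing in for the validation accuracy of the OCSVM:

```python
def forward_selection(features, score, k):
    """Greedy forward selection: repeatedly add the feature that
    most improves `score` on the current subset, stopping when no
    candidate improves it or k features are selected."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break   # no feature improves the score any further
        selected.append(best)
        remaining.remove(best)
    return selected

# toy per-feature gains standing in for validation accuracy deltas
gains = {"contrast": 0.30, "lbp": 0.25, "entropy": 0.02, "noise": -0.10}
score = lambda subset: 0.5 + sum(gains[f] for f in subset)
chosen = forward_selection(gains, score, k=2)
```

In the paper, `score` would train the one-class SVM on the Haralick+LBP subset and return its validation accuracy.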
Semantic Concept Detection in Video Using Hybrid Model of CNN and SVM Classif... (CSCJournals)
In today's era of digitization and fast internet, many videos are uploaded to websites, and a mechanism is required to access these videos accurately and efficiently. Semantic concept detection achieves this task accurately and is used in many applications such as multimedia annotation, video summarization, indexing and retrieval; video retrieval based on semantic concepts is an efficient and challenging research area. Semantic concept detection bridges the semantic gap between the low-level features extracted from key-frames or shots of a video and their high-level interpretation as semantics, automatically assigning labels to videos from a predefined vocabulary; this task is treated as a supervised machine learning problem. The support vector machine (SVM) emerged as the default classifier choice for this task, but recently deep convolutional neural networks (CNN) have shown exceptional performance in this area, although a CNN requires a large dataset for training. In this paper, we present a framework for semantic concept detection using a hybrid model of SVM and CNN. Global features such as color moments, the HSV histogram, the wavelet transform, the grey-level co-occurrence matrix and the edge orientation histogram are selected as low-level features extracted from the annotated ground-truth video dataset of TRECVID; in a second pipeline, deep features are extracted using a pretrained CNN. The dataset is partitioned into three segments to deal with the data imbalance issue, the two classifiers are trained separately on all segments, and a fusion of scores is performed to detect the concepts in the test dataset. System performance is evaluated using mean average precision on the multi-label dataset, and the performance of the proposed hybrid SVM-CNN framework is comparable to existing approaches.
Human Re-identification with Global and Local Siamese Convolution Neural Network (TELKOMNIKA JOURNAL)
Human re-identification is an important task in surveillance systems: determining whether the same person re-appears in multiple cameras with disjoint views. Appearance-based approaches are mostly used to perform the re-identification task because they are less constrained than biometric-based approaches. Most research works apply hand-crafted feature extractors followed by simple matching methods; however, designing a robust and stable feature requires expert knowledge and takes time to tune. In this paper, we propose a global and local structure of Siamese convolutional neural network that automatically extracts features from input images to perform the human re-identification task. Besides, most current single-shot approaches to human re-identification do not consider the occlusion issue due to the lack of tracking information; therefore, we apply a decision fusion technique to combine global and local features for occlusion cases in single-shot approaches.
IMAGE SUBSET SELECTION USING GABOR FILTERS AND NEURAL NETWORKS (ijma)
An automatic method for the selection of subsets of images, both modern and historic, out of a set of large landmark images collected from the Internet is presented in this paper. The selection depends on the extraction of dominant features using Gabor filtering. Features are selected carefully from a preliminary image set and fed into a neural network as training data. The method collects a large set of raw images containing modern and historic landmark images as well as non-landmark images, then processes these images to classify them as landmark or non-landmark. The classification performance depends strongly on the number of candidate features of the landmark.
Investigating the Effect of BD-CRAFT to Text Detection Algorithms (gerogepatton)
With the rise and development of deep learning, computer vision and document analysis have influenced the area of text detection. Despite significant efforts to improve text detection performance, it remains challenging, as evidenced by the series of Robust Reading Competitions. This study investigates the impact of applying BD-CRAFT, a variant of CRAFT that automatically classifies images using a Laplacian operator and further preprocesses the classified blurry images using blind deconvolution, to the top-ranked algorithms SenseTime and TextFuseNet. Results revealed that the proposed method significantly enhanced the detection performance of both algorithms: TextFuseNet + BD-CRAFT achieved an outstanding h-mean of 93.55% and an improvement of over 4% in precision, yielding 95.71%, while SenseTime + BD-CRAFT placed first with a remarkable 95.22% h-mean and likewise exhibited a precision improvement of over 4%.
A face recognition system using convolutional feature extraction with linear... - IJECEIAES
Face recognition is an important biometric authentication research area for security purposes in fields such as pattern recognition and image processing. However, face recognition remains a hard problem for machine learning and deep learning techniques, since input images vary with pose, lighting conditions, expression, age, and illumination, which makes recognition accuracy poor. In the present research, the resolution of the image patches is reduced by the max pooling layer in a convolutional neural network (CNN), which also makes the model more robust than the traditional feature extraction technique called local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for final face recognition. Due to the CNN-based optimization in LCDRC, the distance ratio between classes is maximized while the distance of features inside a class is reduced. The results show that CNN-LCDRC achieved 93.10% and 87.60% mean recognition accuracy, where traditional LCDRC achieved 83.35% and 77.70%, on the ORL and YALE databases respectively for training number 8 (i.e., 80% training and 20% testing data).
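As a toy illustration of the pooling step this abstract credits with reducing patch resolution, the sketch below applies 2x2 max pooling to a small patch. The values and sizes are hypothetical; a real CNN pools learned feature maps, not raw patches.

```python
# Minimal sketch of 2x2 max pooling: keep only the strongest response
# in each non-overlapping 2x2 block, halving each spatial dimension.

def max_pool_2x2(image):
    """Downsample a 2D list-of-lists by taking the max of each 2x2 block."""
    rows, cols = len(image), len(image[0])
    pooled = []
    for r in range(0, rows - 1, 2):
        pooled.append([
            max(image[r][c], image[r][c + 1],
                image[r + 1][c], image[r + 1][c + 1])
            for c in range(0, cols - 1, 2)
        ])
    return pooled

patch = [[1, 3, 2, 0],
         [4, 2, 1, 1],
         [0, 5, 7, 2],
         [1, 2, 3, 8]]
print(max_pool_2x2(patch))  # → [[4, 2], [5, 8]]
```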
A simplified and novel technique to retrieve color images from hand-drawn sk... - IJECEIAES
With the increasing adoption of human-computer interaction, there is a growing trend of retrieving images from hand-drawn sketches to find correlated objects in the storage unit. A review of existing systems shows the dominant use of sophisticated and complex mechanisms where the focus is more on accuracy and less on system efficiency. Hence, the proposed system introduces a simplified extraction of the related image using an attribution clustering process and a cost-effective training scheme. The proposed method uses K-means clustering and a bag-of-attributes to extract essential information from the sketch. The proposed system also introduces a unique indexing scheme that makes the retrieval process faster and retrieves the highest-ranked images. Implemented in MATLAB, the study outcome shows the proposed system offers better accuracy and processing time than the existing feature extraction technique.
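The K-means step mentioned above can be sketched in one dimension. The feature values and cluster count below are hypothetical stand-ins for the paper's sketch attributes, not its actual pipeline.

```python
# Toy 1-D K-means (Lloyd's algorithm): assign each value to its nearest
# center, then move each center to the mean of its assigned values.

def kmeans_1d(values, centers, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

features = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]   # two obvious groups
print(kmeans_1d(features, centers=[0.0, 5.0]))
```

With these well-separated values the centers settle on the two group means after the first iteration.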
Semantic Image Retrieval Using Relevance Feedback - dannyijwest
This paper presents an optimized interactive content-based image retrieval framework based on the AdaBoost learning method. Since relevance feedback (RF) is an online process, we optimize the learning process by considering the most-positive image selection on each feedback iteration. AdaBoost is used to train the system. The main contributions of our system are addressing the small training sample and reducing retrieval time. Experiments are conducted on 1000 semantic colour images from the Corel database to demonstrate the effectiveness of the proposed framework. These experiments employed a large image database and combined RCWF and DT-CWT texture descriptors to represent the content of the images.
Similar to Facial image retrieval on semantic features using adaptive mean genetic algorithm (20)
Amazon products reviews classification based on machine learning, deep learni... - TELKOMNIKA JOURNAL
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave reviews about the product. These reviews are very helpful to store owners and manufacturers for improving their work processes and product quality. An automated system is proposed in this work that operates on two datasets, D1 and D2, obtained from Amazon. After certain preprocessing steps, N-gram features are extracted using term frequency-inverse document frequency (TF-IDF) and bag of words (BoW), and word-embedding features are extracted using global vectors (GloVe) and Word2vec. Four machine learning (ML) models (support vector machine (SVM), random forest (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB)), two deep learning (DL) models (convolutional neural network (CNN) and long short-term memory (LSTM)), and standalone bidirectional encoder representations from transformers (BERT) are used to classify reviews as either positive or negative. The results obtained by the ML and DL models and BERT are evaluated using standard performance measures. BERT turns out to be the best-performing model on D1, with an accuracy of 90% on features derived by word-embedding models, while the CNN provides the best accuracy of 97% on word-embedding features in the case of D2. The proposed model shows better overall performance on D2 than on D1.
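A hand-rolled version of the TF-IDF weighting named above, with smoothed idf. The two toy reviews are invented for illustration; a production pipeline would use a library implementation.

```python
import math

# TF-IDF sketch: term frequency within a document, scaled down by how
# common the term is across all documents (smoothed idf).

def tf_idf(docs):
    """Return one {term: weight} dict per document (tf * smoothed idf)."""
    n = len(docs)
    df = {}                                   # document frequency per term
    for doc in docs:
        for term in set(doc.split()):
            df[term] = df.get(term, 0) + 1
    weighted = []
    for doc in docs:
        words = doc.split()
        weighted.append({
            t: (words.count(t) / len(words)) * math.log((1 + n) / (1 + df[t]))
            for t in set(words)
        })
    return weighted

reviews = ["great product great price", "terrible product broke fast"]
w = tf_idf(reviews)
print(sorted(w[0], key=w[0].get, reverse=True)[0])  # → great
```

Terms appearing in every review (like "product") get zero weight, while review-specific terms dominate.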
Design, simulation, and analysis of microstrip patch antenna for wireless app... - TELKOMNIKA JOURNAL
In this study, a microstrip patch antenna operating at 3.6 GHz was designed and its performance evaluated. Rogers RT/Duroid 5880, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm, serves as the substrate for the examined antenna. The computer simulation technology (CST) studio suite is utilized to model the recommended antenna design. The goals of this study were a wider bandwidth, a lower voltage standing wave ratio (VSWR), and a lower return loss, but the main goal was higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the presented antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. Besides, the simulation revealed that the side-lobe level at phi, at -28.8 dB, was much better than those of earlier works. Thus, the antenna is a solid contender for wireless technology and more robust communication.
Design and simulation an optimal enhanced PI controller for congestion avoida... - TELKOMNIKA JOURNAL
In this paper, the snake optimization algorithm (SOA) is used to find the optimal gains of an enhanced controller for the congestion-control problem in computer networks. M-file and Simulink platforms are adopted to evaluate the response of the active queue management (AQM) system, and a comparison with two classical controllers is performed; all tuned controller gains are obtained using the SOA method, and the fitness function chosen to monitor system performance is the integral time absolute error (ITAE). Transient analysis and robustness analysis are used to show the proposed controller's performance. Two robustness tests are applied to the AQM system: one varies the queue size over different periods, and the other changes the number of transmission control protocol (TCP) sessions by ±20% from the original value. The simulation results reflect stable and robust behavior, and the best performance clearly appears in achieving the desired queue size without noise or transmission problems.
Improving the detection of intrusion in vehicular ad-hoc networks with modifi... - TELKOMNIKA JOURNAL
Vehicular ad-hoc networks (VANETs) are networks formed along the road by wireless-equipped vehicles. The security of such networks has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure them lacks adequate membership authentication. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public keys of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities mask the real identity of users to preserve their privacy. The membership authentication mechanism ensures that only valid, authenticated members are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated against the existing IBC. The results show that the MIBC recorded an IDR of 99.3% against 94.3% for the existing identity-based cryptosystem (EIBC) for 140 unregistered vehicles attempting to intrude on the network. The MIBC also shows a lower CT of 1.17 ms against 1.70 ms for EIBC. The MIBC can thus be used to improve the security of VANETs.
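The Lagrange interpolation primitive the MIBC builds on can be sketched over plain floats. The points below are arbitrary, and the actual scheme works in an elliptic-curve setting that this toy deliberately omits.

```python
# Lagrange interpolation: evaluate the unique degree-(n-1) polynomial
# passing through n given points, without computing its coefficients.

def lagrange_eval(points, x):
    """Evaluate the interpolating polynomial through `points` at `x`."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)   # basis polynomial L_i(x)
        total += term
    return total

# The polynomial through (0,1), (1,3), (2,7) is x^2 + x + 1.
print(lagrange_eval([(0, 1), (1, 3), (2, 7)], 3))  # → 13.0
```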
Conceptual model of internet banking adoption with perceived risk and trust f... - TELKOMNIKA JOURNAL
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users' perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into technology acceptance model (TAM) theory for IB. Prior research emphasizes that the most essential component in explaining IB adoption behavior is the behavioral intention to use IB. TAM is helpful for figuring out how the elements that affect IB adoption are connected to one another. Given previous literature on IB and the use of such technology in Iraq, a theoretical foundation had to be chosen that could justify the acceptance of IB from the customer's perspective; the conceptual model was therefore constructed using the TAM as a foundation. Furthermore, perceived risk and trust were added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM into a conceptual model for IB adoption and to gather sufficient theoretical support from the existing literature for the essential elements and their relationships, in order to unearth new insights about the factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antenna - TELKOMNIKA JOURNAL
Smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm controls the smart antenna pattern to accommodate specified requirements, such as steering the beam toward the desired signal and placing deep nulls in the direction of unwanted signals. The conventional LMS (C-LMS) has drawbacks such as slow convergence speed and high steady-state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust the step size. Computer simulation outcomes illustrate that the given model has a fast convergence rate as well as a low steady-state mean square error.
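A minimal sketch of the conventional LMS update the abstract starts from; the FC-LMS contribution would replace the fixed step size `mu` below with a fuzzy-controlled, per-iteration value. The signal and settings are illustrative, not from the paper.

```python
import math

# Conventional LMS: for each sample, compute the filter output, measure
# the error against the desired signal, and nudge the tap weights along
# the instantaneous gradient: w <- w + mu * e * x.

def lms(inputs, desired, mu=0.1, n_taps=2):
    """One pass of the LMS rule; returns the final tap weights."""
    w = [0.0] * n_taps
    for k in range(n_taps - 1, len(inputs)):
        x = inputs[k - n_taps + 1:k + 1][::-1]       # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, x))     # filter output
        e = desired[k] - y                           # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

# Identity system (desired == input): taps should approach [1.0, 0.0].
sig = [math.sin(k) for k in range(200)]
w = lms(sig, sig)
print([round(v, 3) for v in w])
```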
Design and implementation of a LoRa-based system for warning of forest fire - TELKOMNIKA JOURNAL
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low-power, long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos. The data collected at each sensor node are transmitted to the gateway via LoRa wireless transmission, where they are collected, processed, and uploaded to a cloud database. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system sounds a siren and sends a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio network - TELKOMNIKA JOURNAL
Cognitive radio is a smart radio that can change its transmitter parameters based on interaction with the environment in which it operates. The demand for frequency spectrum is growing as big data and many Internet of Things (IoT) devices enter the network. Previous research shows that most of the frequency spectrum is in use, but some bands remain unused; these are called spectrum holes. Energy detection is one of the most frequently used spectrum sensing methods, since it is easy to use and requires no prior knowledge of licensed users' signals. However, this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection for wavelet-based sensing is 83%, higher than the energy detection performance. This result indicates that wavelet-based sensing detects with higher precision and that interference towards the primary user can be decreased.
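The energy-detection baseline being compared against can be sketched as a simple threshold on average sample energy. The signal model and threshold below are assumptions for illustration, not the paper's setup.

```python
import math
import random

# Energy detection: declare the band occupied when the average energy of
# the received samples exceeds a threshold. At low SNR the two energies
# overlap, which is the weakness wavelet-based sensing targets.

def channel_busy(samples, threshold):
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold

random.seed(0)                                   # reproducible toy data
noise = [random.gauss(0, 0.1) for _ in range(1000)]
signal = [math.sin(0.3 * k) + random.gauss(0, 0.1) for k in range(1000)]

print(channel_busy(noise, threshold=0.25))   # noise-only band → False
print(channel_busy(signal, threshold=0.25))  # occupied band  → True
```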
A novel compact dual-band bandstop filter with enhanced rejection bands - TELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application... - TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is one in which one or more computers target a server by flooding it with internet traffic, so the server cannot be accessed by legitimate users. Such an attack causes enormous losses for a company because it reduces user trust and damages the company's reputation, losing customers due to downtime. One of the application-layer services users can access is the web-based lightweight directory access protocol (LDAP) service, which provides safe and easy access to directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset on a complex computer network at the application layer, to get fast and accurate results when dealing with unbalanced data. Based on the results obtained, DDoS attack detection using a deep learning approach on imbalanced data performs better for binary classes when implemented with the synthetic minority oversampling technique (SMOTE). On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in the multiclass case when implemented with the adaptive synthetic (ADASYN) method.
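The SMOTE idea applied above (synthesizing minority samples by interpolating toward minority-class neighbours) can be sketched by hand. This simplified version uses only the single nearest neighbour and invented toy points, not the full algorithm or the CICDDoS 2019 data.

```python
import random

# SMOTE-style oversampling sketch: pick a minority sample, find its
# nearest minority neighbour, and emit a random point on the segment
# between them, rebalancing the class without duplicating rows.

def smote_like(minority, n_new, rng):
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        b = min((p for p in minority if p is not a),           # nearest other
                key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))
        lam = rng.random()                                     # position on segment
        synthetic.append(tuple(x + lam * (y - x) for x, y in zip(a, b)))
    return synthetic

rng = random.Random(42)
attack = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.8)]   # toy minority-class points
new_points = smote_like(attack, n_new=3, rng=rng)
print(len(new_points))  # → 3
```

Every synthetic point lies between two real minority points, so the oversampled class stays inside its original region.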
The appearance of uncertainties and disturbances often affects the characteristics of both linear and nonlinear systems. Moreover, the stabilization process may deteriorate, incurring a catastrophic effect on system performance. As such, this manuscript addresses the concept of the matching condition for systems that suffer from mismatched uncertainties and exogenous disturbances. The perturbation to the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and tabulated in Proposition 1. Two types of mathematical expressions are presented to distinguish a system with matched uncertainty from a system with mismatched uncertainty. Lastly, two-dimensional numerical examples are provided to exercise the proposed proposition. The outcome shows that the matching condition has the ability to transform the system into a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli... - TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low-power multipliers is rising steadily in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. The HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and a power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding (FZF) logic has the lowest power consumption, 1.4 µW, and a PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system - TELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the Doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, degrading both bit error rate (BER) and throughput. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The results show a massive improvement over the conventional system and significant gains with massive MIMO, including the best throughput and BER. Compared to conventional systems, the improved system has a throughput that is around 22% higher and the best BER performance, with around 25% less error than OFDM.
Reflector antenna design in different frequencies using frequency selective s... - TELKOMNIKA JOURNAL
In this study, the aim is to obtain two different asymmetric radiation patterns at two different frequencies from a single antenna: one from an antenna shaped like the cross-section of a parabolic reflector (a fan-blade type antenna), and one from an antenna with a cosecant-squared radiation characteristic. For this purpose, a fan-blade type antenna is first designed; the reflective surface of this antenna is then completed to the shape of the reflective surface of the cosecant-squared antenna using a frequency selective surface designed to provide characteristics suitable for the purpose. The designed frequency selective surface provides transmission as close to perfect as possible at the 4 GHz operating frequency, while acting as a band-stop filter, and thus a reflective surface, for electromagnetic waves at the 5 GHz operating frequency. Thanks to this frequency selective surface used as a reflective surface in the antenna, a fan-blade type radiation characteristic is obtained at 4 GHz, while a cosecant-squared radiation characteristic is obtained at 5 GHz.
Reagentless iron detection in water based on unclad fiber optical sensor - TELKOMNIKA JOURNAL
A simple and low-cost fiber-based optical sensor for iron detection is demonstrated in this paper. The sensor head consists of an unclad optical fiber with an unclad length of 1 cm and a straight structure. The results obtained show a linear relationship between the output light intensity and iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity achieved at the 690 nm wavelength are 0.0328/ppm and 0.9824, respectively. At the same wavelength, other performance parameters are also studied: the resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm, respectively. This iron sensor is advantageous in that it does not require any reagent for detection, making the implementation of iron sensing simpler and more cost-effective.
Impact of CuS counter electrode calcination temperature on quantum dot sensit... - TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea... - TELKOMNIKA JOURNAL
This article discusses progressive learning for the structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM can maintain robust accuracy while updating with new data and new-class data in an online training situation. The robust accuracy arises from using householder block exact QR decomposition recursive least squares (HBQRD-RLS) in PSTOS-ELM. This method is suitable for applications that have streaming data and often encounter new-class data. Our experiment compares the accuracy and accuracy robustness of PSTOS-ELM during data updates with the batch extreme learning machine (ELM) and the structural tolerance online sequential extreme learning machine (STOS-ELM), both of which must retrain on the data when new-class data arrive. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also being able to update with new-class data immediately.
Electroencephalography-based brain-computer interface using neural networks - TELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
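The frequency-domain step can be shown in miniature: instead of a full FFT, the sketch below measures the power in a single DFT bin of a synthetic signal (not real EEG, and only one bin rather than the full spectrum the pipeline computes).

```python
import math

# Single-bin DFT: correlate the signal with a cosine and sine at bin k.
# Band power tracking (attention/relaxation) sums such bins over a band.

def bin_power(samples, k):
    """Normalized magnitude of DFT bin k of `samples`."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(samples))
    im = -sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(samples))
    return math.hypot(re, im) / n

n = 256
sig = [math.sin(2 * math.pi * 10 * t / n) for t in range(n)]  # 10 cycles
print(round(bin_power(sig, 10), 2))  # → 0.5 (energy concentrated here)
print(round(bin_power(sig, 3), 2))   # → 0.0 (no energy at this bin)
```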
Adaptive segmentation algorithm based on level set model in medical imaging - TELKOMNIKA JOURNAL
Level set models are frequently employed for image segmentation. They offer the best solution to overcome the main limitations of deformable parametric models. However, the challenge when applying those models to medical images still lies in removing blur at image edges, which directly affects the edge indicator function, prevents adaptive segmentation, and causes a wrong analysis of pathologies that precludes a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving a system of two two-dimensional partial differential equations (PDEs). The first PDE performs restoration using Euler's equation, similar to anisotropic smoothing based on a regularized Perona and Malik filter that eliminates noise while preserving edge information in accordance with the contours detected by the second equation, which segments the image based on the first equation's solutions. This approach allows the development of a new algorithm which overcomes the studied model's drawbacks. Results of the proposed method give clear segments that can be used in any application. Experiments on many medical images, in particular blurry images with high information loss, demonstrate that the developed approach produces superior segmentation results in quantity and quality compared to other models presented in previous works.
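The restoration PDE's key mechanism (a conduction coefficient that shrinks near strong gradients, so smoothing spares edges) can be sketched with one-dimensional explicit Perona-Malik steps. All parameters and the tiny signal are illustrative, not the paper's two-equation system.

```python
import math

# One explicit Perona-Malik diffusion step in 1-D: diffuse like the heat
# equation, but scale each flux by exp(-(gradient/kappa)^2) so large
# jumps (edges) conduct almost nothing and survive the smoothing.

def perona_malik_step(u, dt=0.2, kappa=0.5):
    out = list(u)                                  # boundaries stay fixed
    for i in range(1, len(u) - 1):
        g_r = u[i + 1] - u[i]                      # right gradient
        g_l = u[i - 1] - u[i]                      # left gradient
        c_r = math.exp(-(g_r / kappa) ** 2)        # edge-stopping conduction
        c_l = math.exp(-(g_l / kappa) ** 2)
        out[i] = u[i] + dt * (c_r * g_r + c_l * g_l)
    return out

u = [0.0, 0.05, 0.0, 1.0, 0.95, 1.0]               # noisy step edge at 2/3
for _ in range(20):
    u = perona_malik_step(u)
print([round(v, 2) for v in u])                     # ripples gone, edge kept
```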
Automatic channel selection using shuffled frog leaping algorithm for EEG bas... - TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Courier management system project report.pdf - Kamal Acharya
Nowadays it is very important for people to send or receive articles like imported furniture, electronic items, gifts, and business goods. People depend vastly on different transport systems, which mostly use a manual way of receiving and delivering the articles. There is no way to track the articles until they are received, and no way to let customers know what happened in transit once they have booked articles. In such a situation, we need a system that completely computerizes cargo activities, including time-to-time tracking of the articles sent. This need is fulfilled by the Courier Management System, online software for cargo management staff that enables them to receive goods from a source, send them to the required destination, and track their status from time to time.
Automobile Management System Project Report.pdf - Kamal Acharya
The proposed project is developed to manage the automobiles in an automobile dealer company. The main modules in this project are login, automobile management, customer management, sales, complaints, and reports. The first module is the login: the automobile showroom owner must log in to use the system. The username and password are verified, and if they are correct, the next form opens; otherwise, an error message is shown.
When a customer searches for an automobile, if it is available, they are taken to a page that shows its details, including automobile name, automobile ID, quantity, price, etc. The Automobile Management System is useful for maintaining automobiles and customers effectively, and hence helps establish a good relationship between the customer and the automobile organization. It contains various customized modules for maintaining automobile and stock information accurately and safely.
When an automobile is sold to a customer, the stock is reduced automatically; when a new purchase is made, the stock is increased automatically. While selecting automobiles for sale, the proposed software automatically checks the total available stock of that particular item; if it is less than 5, the software notifies the user to purchase that item.
Also, when the user tries to sell items that are not in stock, the system prompts the user that the stock is not enough. Customers of this system can search for an automobile and purchase one easily. On the other hand, the stock of automobiles can be maintained perfectly by the automobile shop manager, overcoming the drawbacks of the existing system.
TECHNICAL TRAINING MANUAL GENERAL FAMILIARIZATION COURSE - DuvanRamosGarzon1
AIRCRAFT GENERAL
The Single Aisle is the most advanced aircraft family in service today, with fly-by-wire flight controls.
The A318, A319, A320 and A321 are twin-engine subsonic medium range aircraft.
The family offers a choice of engines
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible with IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...Amil Baba Dawood bangali
Contact with Dawood Bhai Just call on +92322-6382012 and we'll help you. We'll solve all your problems within 12 to 24 hours and with 101% guarantee and with astrology systematic. If you want to take any personal or professional advice then also you can call us on +92322-6382012 , ONLINE LOVE PROBLEM & Other all types of Daily Life Problem's.Then CALL or WHATSAPP us on +92322-6382012 and Get all these problems solutions here by Amil Baba DAWOOD BANGALI
#vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore#blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #blackmagicforlove #blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #Amilbabainuk #amilbabainspain #amilbabaindubai #Amilbabainnorway #amilbabainkrachi #amilbabainlahore #amilbabaingujranwalan #amilbabainislamabad
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
TELKOMNIKA Telecommunication Computing Electronics and Control, ISSN: 1693-6930, Vol. 17, No. 2, April 2019: 882-896
Facial image retrieval on semantic features using adaptive.. (Marwan Ali Shnan)
database. After performing the similarity assessment based on the features of the images, the
output set achieves a high retrieval performance level [16].
When the user does not have an example image at hand, some systems allow the user to
sketch an image which then serves as the input [17-19]. It is often difficult for a user to
accurately present the intended visual content, such as a sketch map or a sample
image [20, 21]. This phenomenon is called the "intention gap" and must be minimized to ensure the
provision of the user with the relevant query results. Hence, the query input in a CBIR system
demands a good user interface [22-25]. The remainder of this paper is organized as
follows: a review of related work on image retrieval is presented in
Section 2, while Section 3 provides the details of the designs and methodologies adopted in this
study. Section 4 presents the experimental findings from the performance evaluations of the
proposed approach. The last section presents the conclusions drawn from the current study.
2. Literature survey
A novel approach for long-term relevance feedback gain enhancement was
proposed by Rupali et al. [13]. The general CBIR in the proposed system comprises two
steps: the ABC-based training step and the image retrieval step. The image features such as
shape, colour, or texture were extracted using the Gabor filter, gray level co-occurrence matrix,
and Hu-moment shape feature techniques, while the statistical features such as the mean and SD
were extracted after image preprocessing. The k-means algorithm was utilized to cluster these
features and each cluster was trained using an ANN-based ABC method. The weight of the
features was updated using an ABC-based ANN. The experimental findings showed the
proposed system as a suitable tool for achieving a better CBIR as it can minimize the semantic
gap compared to the traditional systems.
Xiaochao et al. [14] presented the volume local binary count (VLBC) method of
representing and recognizing dynamic textures. In this descriptor, the histograms of threshold
local spatiotemporal volumes are extracted using both motion features and appearance to
describe dynamic textures. The system was proven to be efficient in both computing and
representing dynamic textures. The experimental evaluations using 3 dynamic texture
databases (UCLA, DynTex, and DynTex++) showed the proposed method to achieve better
classification rates compared to those achieved with the existing approaches. In addition to the
effective dynamic texture recognition of the proposed system, it also utilizes CVLBC for 2D face
spoofing detection. The performance evaluations demonstrated the CVLBC as an effective tool
for 2D face spoofing detection.
Zhijie et al. [15] presented a fractal theory-based rapid face recognition method. In this
method, the facial images are compressed to obtain the fractal codes before completing face
recognition using the fractal codes. To ensure that face recognition is rapidly done, they
suggested the use of the Fractal Neighbor Distance-Based Classification (FNDC) which is an
improvement of the conventional fractal recognition methods. The FNDC uses class information
to set up thresholds between and within classes to speed up recognition. They
demonstrated the advantages of the FNDC through a series of experiments conducted on Yale,
FERET and CMU PIE databases.
An improved principal component analysis (IPCA) was proposed by Yani et al. [16] for
facial feature representation. The IPCA was designed mainly for the extraction of vital
information from original facial images via the reduction of the feature vector dimensions. An
LRC framework was utilized to treat the face recognition process as a linear regression
problem. In the LRC, the least-square method is used to determine the class label with the least
reconstruction error. Several evaluations were performed on Yale B, CMU_PIE and JAFFE
databases to prove the suitability of the proposed IPCA, where both IPCA and LRC algorithms
achieved better recognition performances compared to the existing algorithms.
Soumendu et al. [17] suggested a new hand-crafted local quadruple pattern (LQPAT)
for the recognition and retrieval of facial images. The proposed framework addressed the
challenges of the current techniques by defining an efficient encoding structure which has an
optimal feature length. In the proposed descriptor, the relationship between the neighbours is
encoded in the quadruple space, and from this relationship, two micropatterns were computed
to form the descriptor. The proposed descriptor was compared to the existing handcrafted
descriptors for retrieval and recognition accuracies based on certain benchmark databases such
as Caltech-face, Color-FERET, CASIA-face-v5, and LFW, and the analyses showed the
proposed descriptor to perform well under uncontrolled variations.
Wangming et al. [18] suggested a novel non-negative sparse feature learning
mechanism for producing a holistic image representation scheme based on low-level local
features. To reduce the rate of information loss, a new feature pooling strategy called kMaxSum
pooling, a hybrid of the sum and max-pooling strategies, was introduced and achieved higher
image representation efficiency. The outcome of the retrieval evaluations on two public image databases
showed the proposed method as an effective approach.
3. The proposed system
The system proposed in this study focuses on image retrieval from databases using
features from the input image. Figure 1 depicts the block diagram of the proposed model.
The flow of the proposed work is as follows: when the query image is read, it is preprocessed to
remove noise using a Wiener filter before extracting the features. Among the extracted features,
the most discriminating features are selected using AMGA. Finally, the images are retrieved
upon calculating the SED between the input image and the images in the database.
Figure 1. Block diagram of the proposed system
3.1. Preprocessing (MMWF)
The Wiener filter is among the commonest techniques for removing image blur due to
unfocussed optics or linear motion. The Wiener filter aims to produce an estimate ĥ of the
desired image h from the corrupted image g such that the mean square error between them is minimized.
This error is measured thus:

e² = E[(ĥ − h)²]    (1)

where E[·] is the expected value of the argument, h is the desired output, and ĥ is its estimate. The
median modified Wiener filter (MMWF) is represented as:

MMWF(n, m) = μ̃ + ((σ² − v²)/σ²)(b(n, m) − μ̃)    (2)
where μ̃ is the median of the local window around each pixel, v² is the noise variance, and σ² is
the local variance. The variance is calculated using the following equation:
σ² = (1/NM) Σ_{(n,m)∈η} a²(n, m) − μ²    (3)

μ = (1/NM) Σ_{(n,m)∈η} a(n, m)    (4)

where μ is the mean, NM is the size of the local neighbourhood area η, and a(n, m) identifies
each pixel in the area η.
The aim of introducing the nonlinear adaptive spatial filter is to combine the abilities and
qualities of the median and Wiener filters, thereby, annulling their respective drawbacks.
A major effect is that after de-noising, there is a preservation of the edge morphology compared
to the outcome of both median and Wiener filters alone. This effect was termed drop-off-effect
due to the preservation of the slope of the spot sides in the 2D signals. The next significant
outcome is the good performance of the MMWF in the global de-noising of different types
of noise.
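As an illustration, the MMWF described by (2)-(4) can be sketched as follows; the 3×3 local window and the global noise-variance estimate are illustrative assumptions, not values taken from this paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def mmwf(g, window=3, noise_var=None):
    """Median modified Wiener filter (MMWF) sketch.

    Replaces the local mean of the classical Wiener filter with the
    local median (mu_tilde), per (2)-(4). The window size and the
    noise estimate are illustrative choices.
    """
    g = g.astype(float)
    mu = uniform_filter(g, window)                 # local mean, (4)
    var = uniform_filter(g ** 2, window) - mu ** 2  # local variance, (3)
    if noise_var is None:
        noise_var = var.mean()                     # common default estimate
    mu_med = median_filter(g, window)              # local median, mu_tilde
    # (2): output = mu_tilde + (sigma^2 - v^2)/sigma^2 * (g - mu_tilde)
    gain = np.maximum(var - noise_var, 0) / np.maximum(var, 1e-12)
    return mu_med + gain * (g - mu_med)
```

On flat regions the gain collapses to zero and the filter returns the local median, which is what preserves edge morphology relative to a plain Wiener filter.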
3.2. Feature extraction
The image preprocessing was followed by the feature extraction techniques to obtain
the most discriminating features. A set of low-level features, viz, HOG, DWT, GIST, LTrP, and a
set of high-level features were obtained by applying the Viola-Jones algorithm.
3.2.1. Low-level features
a. HOG
In the HOG feature descriptor, the distribution of the gradient directions is used as the
features. The image gradients are initially calculated before the histogram calculations. Each
pixel in the cell casts a weighted vote for an orientation-based histogram channel depending on
the gradient computation values. The values are later grouped into a larger block that is spatially
connected. The HOG descriptor is then the concatenated vector of the components of the
normalized cell histograms from all the block regions. The L2 norm of the vector is used for the
block normalization which is denoted as:
‖b‖₂ = √(Σ_{i=1}^{d} bᵢ²)    (5)

where bᵢ is the i-th component of the block and d is the block length.
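The L2 block normalization of (5) can be sketched as follows; the small stabilizing constant eps is a common implementation detail and an assumption here, not something stated in the paper:

```python
import numpy as np

def l2_normalize_block(block, eps=1e-6):
    """L2 block normalization used by HOG, per (5).

    `block` is the concatenation of the cell histograms in one block;
    eps avoids division by zero for empty blocks (an assumption).
    """
    norm = np.sqrt(np.sum(block ** 2) + eps ** 2)  # ||b||_2, eq. (5)
    return block / norm
```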
b. DWT
A two-level 2D DWT is applied to the input image. This produces four types of 2D
matrices at the output (an approximation coefficient which is the original image at lower
resolution) and 3 detail coefficients; viz. a horizontal coefficient (which highlights the horizontal
edges of the original image), a vertical coefficient (which highlights the vertical edges), and a
diagonal coefficient (which highlights the diagonal edges). The approximation coefficients at the
end of the second level are considered as the feature here.
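A minimal sketch of the two-level decomposition described above, using the Haar wavelet as an assumed (not stated) mother wavelet:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns (LL, LH, HL, HH)."""
    img = img.astype(float)
    # pairwise averages/differences along columns, then rows
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (a[0::2, :] + a[1::2, :]) / 2.0  # approximation coefficients
    lh = (a[0::2, :] - a[1::2, :]) / 2.0  # detail (horizontal-edge) band
    hl = (d[0::2, :] + d[1::2, :]) / 2.0  # detail (vertical-edge) band
    hh = (d[0::2, :] - d[1::2, :]) / 2.0  # detail (diagonal-edge) band
    return ll, lh, hl, hh

def dwt_feature(img):
    """Second-level approximation coefficients, used as the feature."""
    ll1, *_ = haar_dwt2(img)
    ll2, *_ = haar_dwt2(ll1)
    return ll2
```

Each level halves both image dimensions, so the second-level approximation is a quarter-size low-resolution summary of the input.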
c. GIST
The GIST descriptor is based on spatial envelopes. These envelopes define the
features that distinguish the images. These dominant spatial image structures are represented
by features such as expansion, openness, roughness, naturalness, and ruggedness.
d. LTrP
The LTrP uses the direction of the centre grey pixel g₀ to describe the spatial structure
of a local texture. Given an image I, its first-order derivatives along the 0° and 90° directions are
represented as I¹θ(g), θ = 0°, 90°. Let g₀ be the centre pixel of I, and let g_h and g_v be the horizontal and
vertical neighbouring pixels, respectively. The first-order derivative magnitude at the centre pixel can then be
represented as:
M(g₀) = √((g_h − g₀)² + (g_v − g₀)²)    (6)
while the first-order derivatives at the centre pixel are denoted as:

I¹₀(g₀) = I(g_h) − I(g₀)    (7)

I¹₉₀(g₀) = I(g_v) − I(g₀)    (8)

From the signs of (7) and (8), four different directions of the centre pixel can be calculated.
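A minimal sketch of (6)-(8); the 1..4 direction numbering by derivative signs follows the usual LTrP convention and is an assumption here:

```python
import numpy as np

def ltrp_direction(img, r, c):
    """First-order derivatives, magnitude and direction at g0, per (6)-(8).

    g_h and g_v are the horizontal and vertical neighbours of the
    centre pixel g0 at (r, c).
    """
    g0 = float(img[r, c])
    gh = float(img[r, c + 1])          # horizontal neighbour
    gv = float(img[r + 1, c])          # vertical neighbour
    d0 = gh - g0                       # (7): I1_0(g0)
    d90 = gv - g0                      # (8): I1_90(g0)
    mag = np.hypot(d0, d90)            # (6): derivative magnitude
    if d0 >= 0 and d90 >= 0:
        direction = 1
    elif d0 < 0 and d90 >= 0:
        direction = 2
    elif d0 < 0 and d90 < 0:
        direction = 3
    else:
        direction = 4
    return mag, direction
```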
3.2.2. High-level features
a. Viola-Jones algorithm
This is a widely used approach to face detection whose main feature is low training
speed but a fast detection rate. In this algorithm, the Haar feature, a major component of the
Haar cascade classifier, is used for rapid face detection. They are mainly used for the detection
of the presence of features in a given image. The human face has some unique features, such
as an eye region that is darker than the upper cheeks and a nose bridge region that is brighter than the eyes.
These features are matched to the Haar features. A Haar-like rectangle feature is a scalar
product between the image and some Haar-like pattern. The result of each feature is a single
value which is obtained when the sum of pixels under the white rectangle is subtracted from the
sum of those under the black rectangle. If this value is large in that region, then it represents a
part of the face and is identified as the eyes, nose, cheek, etc.
P(x) = P_b + P_w    (9)
where 𝑃𝑏 is the sum of black pixels, and 𝑃𝑤 is the sum of the white pixels. With the Haar feature,
image detection is initiated by scanning the image from the top left corner and ends at the
bottom right corner (Figure 2). The Haar features scan the image several times to detect the face in
an image. An integral image concept is adopted for rapid computation of the rectangle
features, where only 4 values at the edges of the rectangle are required for the calculation of the
sum of the pixels within a given rectangle. In an integral image, the value at any pixel (a, b)
represents the sum of pixels above and to the left of (a, b). Figure 2 depicts the sum of all pixel
values in rectangle D.
Figure 2. Calculation of integral image
in the figure, the sides 𝐻1, 𝐻2, 𝐻3 and 𝐻4 are given by the following equations:
H₁ = A    (10)

H₂ = A + B    (11)
H₃ = A + C    (12)

H₄ = A + B + C + D    (13)

S_g = H₄ − (H₂ + H₃) + H₁    (14)

S_g = (A + B + C + D) + A − (A + B) − (A + C) = D    (15)
where 𝑆𝑔 is the sum of all pixel values in the rectangle.
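The rectangle-sum computation of (10)-(15) can be sketched with an integral image as follows:

```python
import numpy as np

def integral_image(img):
    """Integral image: ii[a, b] = sum of pixels above and to the left of (a, b)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels inside an inclusive rectangle using only the four
    corner values, per (10)-(15): S_g = H4 - (H2 + H3) + H1."""
    h4 = ii[bottom, right]
    h2 = ii[top - 1, right] if top > 0 else 0
    h3 = ii[bottom, left - 1] if left > 0 else 0
    h1 = ii[top - 1, left - 1] if (top > 0 and left > 0) else 0
    return h4 - h2 - h3 + h1
```

This is what makes Haar features cheap: any rectangle sum costs four lookups regardless of the rectangle's size.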
The detection window (Haar features) is moved across the image, and for each chosen
window size, the window slides vertically and horizontally. A face recognition filter is applied at
each step. Each classifier is composed of Haar feature extractors (weak classifiers). AdaBoost builds
a strong classifier as a linear combination of these weak classifiers, as shown in (16).
F(x) = γ₁F₁(x) + γ₂F₂(x) + ⋯ + γₙFₙ(x)    (16)
where 𝐹(𝑥) is the strong classifier, 𝛾1, 𝛾2 … 𝛾𝑛 are the weights assigned to the classifier which
are inversely proportional to the error rate. In this way, the best classifiers are considered more
and 𝐹1(𝑥), 𝐹2(𝑥) … . 𝐹𝑛(𝑥) are the weak classifiers. Face detection can be performed by cascade
using Haar-like features. Each face recognition filter contains a set of classifiers connected in
cascades. Each classifier observes the rectangular subset of the detection window and
determines if it looks like a face. In this cascade, an image is accepted as a human face only if it
passes all the stages; if it fails any stage, the image is not a human face. Figure 3
depicts the flow of this algorithm.
Figure 3. Flowchart for the Viola-Jones algorithm
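A minimal sketch of the strong classifier in (16) and the cascade decision; the ±1 weak-classifier outputs and the zero decision threshold are illustrative assumptions:

```python
def strong_classify(x, weak_classifiers, weights, threshold=0.0):
    """Strong classifier per (16): F(x) = sum_i gamma_i * F_i(x).

    Each weak classifier returns +1 (face-like) or -1; the weighted
    sum is compared against an illustrative threshold.
    """
    score = sum(g * f(x) for g, f in zip(weights, weak_classifiers))
    return score >= threshold

def cascade_classify(x, stages):
    """A window is accepted as a face only if it passes every stage."""
    return all(stage(x) for stage in stages)
```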
3.3. Feature selection (Adaptive Mean Genetic Algorithm)
Not all the extracted features provide accurate classification results. Hence, it is
essential to sort the most discriminating features before the retrieval process for apt results. In
this paper, feature selection is done with the help of Adaptive Mean Genetic Algorithm (AMGA).
The pseudo code of the AMGA is given in Figure 4. Figure 4 is explained in the following steps:
Step 1: Initially, the number of population and iterations is set. The population size is the
number of features selected.
Step 2: The fitness function 𝐹𝑡 is determined using (17).
Step 3: Sort the fitness function and their corresponding chromosomes.
Step 4: Select the chromosomes with the best fitness values. The first half of the population is
selected here:
Figure 4. Pseudocode of AMGA
Step 5: One-point cross is performed on the selected parent chromosomes.
Step 6: The mutation rate is calculated using (18), in which n_c is the population of the chosen
parent chromosomes, f_min is the worst fitness value, f_max is the best fitness value, f_avg is the
average fitness value, B is taken as 2, and S_i is the control parameter given by (19):

S_i = (f_max − f_min)/(f_avg · n_c)    (19)
This adaptation on the mutation rate in GA is utilized in this paper and is stated as
AMGA. With this adaptive strategy, the exploitation of good solutions is increased, thus,
speeding up the convergence and preventing the population from being trapped at the local
minima in most cases.
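One generation of the AMGA steps above can be sketched as follows; since (18) is not fully recoverable from the text, the mutation-rate formula here (B·S_i, capped at 1) is an assumption, as are the user-supplied crossover and mutation operators:

```python
import random

def adaptive_mutation_rate(f_max, f_min, f_avg, n_c, B=2.0):
    """Adaptive mutation-rate control sketching (18)-(19).

    S_i is taken as (f_max - f_min) / (f_avg * n_c), per (19); scaling
    by B and capping at 1 is an assumption standing in for (18).
    """
    s_i = (f_max - f_min) / (f_avg * n_c)
    return min(1.0, B * s_i)

def amga_step(population, fitness, crossover, mutate):
    """One AMGA generation: evaluate and sort fitness (steps 2-3),
    keep the best half as parents (step 4), one-point crossover
    (step 5), then adaptive mutation (step 6)."""
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: len(ranked) // 2]
    fits = [fitness(p) for p in parents]
    rate = adaptive_mutation_rate(max(fits), min(fits),
                                  sum(fits) / len(fits), len(parents))
    children = []
    for i in range(0, len(parents) - 1, 2):
        children.extend(crossover(parents[i], parents[i + 1]))
    children = [mutate(c) if random.random() < rate else c for c in children]
    return parents + children
```

Because the rate grows with the fitness spread, diverse populations mutate more (exploration) while converged ones mutate less (exploitation), which is the adaptation the text describes.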
3.4. Image retrieval (Squared Euclidean Distance)
The aim of this entire process is to retrieve the images that are identical to the input
image. The FGNET database was used in this paper. The above techniques (preprocessing,
feature extraction, and feature selection) were also applied to the images in the database. The
SED between the selected features of the query images and the images in the database is
calculated and the image corresponding to the features with the shortest SED is selected as the
identical image. The SED between two features 𝑓1𝑖 and 𝑓2𝑖 is given by:
d = Σ_{i=1}^{P} (f₁ᵢ − f₂ᵢ)²    (20)
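The retrieval step using (20) can be sketched as:

```python
import numpy as np

def squared_euclidean(f1, f2):
    """Squared Euclidean distance between two feature vectors, per (20)."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return float(np.sum((f1 - f2) ** 2))

def retrieve(query_feat, database_feats):
    """Indices of database images sorted by increasing SED; the
    closest image is taken as the match."""
    dists = [squared_euclidean(query_feat, f) for f in database_feats]
    return sorted(range(len(dists)), key=dists.__getitem__)
```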
4. Results and discussion
The performance of the techniques used in this paper was evaluated and compared to
other existing techniques. Furthermore, the performance of the AMGA algorithm was evaluated
in terms of its precision, recall, and F-measure and compared to other feature selection
techniques like GA and PSO.
4.1. Performance analysis
The performance of the filter techniques and the feature selection techniques was
analyzed. The PSNR value was utilized in evaluating the performance of the MMWF. A
comparison with the existing filtering techniques was also provided. The PSNR value can be
calculated using (21).
PSNR = 10 log₁₀(MAXᵢ² / MSE)    (21)

where MAXᵢ is the maximum possible pixel value, and MSE is the mean square error. The MSE is
given by:
MSE = (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} (g(x, y) − ĝ(x, y))²    (22)

where g(x, y) is the desired output, ĝ(x, y) is the current output, and MN is the area of the
selected pixels. The performance of the feature selection techniques can be evaluated using three
measures: precision, recall, and F-measure. The precision is calculated using (23).
precision = T_p / (T_p + F_p)    (23)

similarly, the recall and F-measure values can be calculated using (24) and (25), respectively.

recall = T_p / (T_p + F_n)    (24)

F-score = 2 × (precision × recall) / (precision + recall)    (25)
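The evaluation measures (21)-(25) can be sketched as follows; the flat pixel lists and the 8-bit maximum of 255 are illustrative assumptions:

```python
import math

def psnr(orig, recon, max_i=255.0):
    """PSNR per (21)-(22) over flattened pixel sequences; max_i is the
    maximum possible pixel value (255 for 8-bit images)."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    return 10.0 * math.log10(max_i ** 2 / mse)

def precision_recall_f(tp, fp, fn):
    """Precision (23), recall (24) and F-measure (25) from counts of
    true positives, false positives and false negatives."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f = 2 * p * r / (p + r)
    return p, r, f
```

For example, 13 true positives with 3 false positives and 2 false negatives reproduce the precision, recall, and F-measure reported for query image (a) in Tables 1-3.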
4.2. Comparative analysis
The input query images used in this paper are shown in Figure 5. The input images are
filtered for noise using MMWF. The superiority of the MMWF lies in its ability to preserve the
image edge morphologies. Figure 5 shows a visual comparison with the existing filters, from which
it is clear that MMWF outperformed them. The shortcomings of the existing filters were evident,
as they either blurred the image or diminished the edge information. Figure 6 shows the
comparison between the existing and the proposed filter.
Figure 5. Input query images filtered for noise using MMWF, compared with (a) the average filter (AF), (b) the Gaussian filter (GF), (c) the median filter (MF), and (d) the Wiener filter (WF)
Figure 6. Comparison between the existing and the proposed filter
To evaluate the performance of the filters in numerical terms, their PSNR values were
considered. The PSNR values are calculated using (21). Figure 7 provides a graphical
comparison of the PSNR values of the query images after being processed by the filters discussed
earlier. In Figures 7(a-d), the names of the filters are plotted on the x-axis while the PSNR values
are plotted on the y-axis. From Figure 7, the PSNR values of the images when using MMWF were
higher than those of the other filters, presenting the MMWF as the most efficient filter. Similarly,
comparisons can also be made on the feature selection techniques. These techniques can be
evaluated in terms of precision, recall, and F-measure. This would provide a concrete proof of
the efficiency of the superior method. Figure 8 depicts the comparison of the precision between
4 different query images for 3 different feature selection methods. The precision of the AMGA
method used in this study was compared to those of GA and PSO. The algorithmic precision
was calculated using (23). Figure 8 depicts the precision values of the input query images after
processing by different filters.
Figure 7. PSNR values of the input query images after processing by different filters
In Figures 8(a-d), the names of the feature selection methods are plotted on the
x-axis while the precision values are plotted on the y-axis. From the figure, the AMGA
performed better than GA and PSO. These results are tabulated in Table 1, while Table 2
presents the recall values of the evaluated algorithms and their comparison. The recall values
are calculated using (24).
Figure 9 presents the comparison between the recall values of AMGA, GA, and PSO. In
Figures 9(a-d), the names of the feature selection methods are plotted on the x-axis while
the recall values are plotted on the y-axis. From the figure, the AMGA outperformed the GA
and PSO. The F-measure values of the existing and the proposed
methodologies are tabulated in Table 3. The F-measure is calculated using (25).
Figure 8. Precision values of the input query images for the different feature selection methods
Table 1. Precision comparison
Image Name Precision for AMGA Precision for GA Precision for PSO
(a) 0.8125 0.8 0.4
(b) 0.84615 0.78571 0.8
(c) 0.571429 0.571429 0.25
(d) 0.785714 0.75 0.4
Table 2. Recall comparison
Image Name Recall for AMGA Recall for GA Recall for PSO
(a) 0.86667 0.8 0.13333
(b) 0.6875 0.6875 0.75
(c) 0.363636 0.363636 0.090909
(d) 0.733333 0.6 0.133333
Figure 10 provides a visual comparison of the F-measure between AMGA, GA, and
PSO. In Figures 10(a-d), the names of the feature selection methods are plotted on the
x-axis while the corresponding F-measure values are plotted on the y-axis. From the figure, the
AMGA outperformed the GA and PSO. The overall results of the methodologies used in this
paper are tabulated in Table 4, which contains the precision, recall, and F-measure values of the
feature selection techniques, as well as the PSNR values of the images filtered using MMWF.
Figure 9. Recall values of the input query images for the different feature selection methods
Table 3. Comparison of the F-Measure
Image Name F-measure for AMGA F-measure for GA F-measure for PSO
(a) 0.83871 0.8 0.2
(b) 0.75862 0.73333 0.77419
(c) 0.444444 0.444444 0.133333
(d) 0.758621 0.666667 0.2
Figure 10. F-measure values of the input query images for the different feature selection methods
Table 4. Concatenated results of the proposed methodology
Image Name Precision Recall F-measure PSNR while using MMWF
(a) 0.8125 0.86667 0.83871 43.50073
(b) 0.84615 0.6875 0.75862 44.36676
(c) 0.571429 0.363636 0.444444 49.64241
(d) 0.785714 0.733333 0.758621 43.65809
5. Conclusion
A content-based image retrieval technique was proposed in this study. In this technique,
the input image is first filtered using MMWF. This filter preserves the vital image information
better than most of the available image filters. The filtration process is followed by feature
extraction and then feature selection using AMGA. The Viola-Jones algorithm was used for the extraction of the
high-level features. The performance of the proposed AMGA was evaluated in terms of its
precision, recall, and F-measure and compared to those of GA and PSO. The evaluation results
proved a superior performance of the AMGA compared to GA and PSO in terms of its precision,
recall, and F-measure. Future work should focus on improving the computational time of
the AMGA.
Acknowledgement
This work is supported by the Universiti Malaysia Pahang (UMP) via Research Grant
UMP RDU160349.
References
[1] Madhavi D, M Ramesh Patnaik. Genetic Algorithm-Based Optimized Gabor Filters for Content-Based
Image Retrieval. In Intelligent Communication, Control and Devices, Springer, Singapore. 2018;
157-164.
[2] Khodaskar, Anuja, Sidharth Ladhake. A Novel Approach for Intellectual Image Retrieval Based on
Image Content Using ANN. In Computational Intelligence in Data Mining Springer, New Delhi.
2015; 3: 693-704.
[3] Das, Pranjit, Arambam Neelima. An overview of approaches for content-based medical image
retrieval. International Journal of Multimedia Information Retrieval. 2017; 6(4): 271-280.
[4] Tyagi, Vipin. Content-Based Image Retrieval Based on Location-Independent Regions of Interest.
In Content-Based Image Retrieval, Springer, Singapore. 2017; 205-225.
[5] Dixit Umesh D, Shirdhonkar MS. Face-based Document Image Retrieval System. Procedia Computer
Science. 2018; 132: 659-668.
[6] Desai, Rupali, Bhakti Sonawane. Gist, HOG, and DWT-Based Content-Based Image Retrieval for
Facial Images. In Proceedings of the International Conference on Data Engineering and
Communication Technology, Springer, Singapore. 2017; 297-307.
[7] Vanitha, Janakiraman, Muthukrishnan Senthilmurugan. Multidimensional Image Indexing Using
SR-Tree Algorithm for Content-Based Image Retrieval System. In International Conference on
Emerging Research in Computing, Information, Communication and Applications, Springer,
Singapore, 2016; 81-88.
[8] Iftene, Adrian, Baboi Alexandra-Mihaela. Using semantic resources in image retrieval. Procedia
Computer Science. 2016; 96: 436-445.
[9] Wen Ge, Huaguan Chen, Deng Cai, Xiaofei He. Improving face recognition with domain adaptation.
Neurocomputing. 2018; 287: 45-51.
[10] Angadi, Shanmukhappa, Vishwanath Kagawade. Face Recognition Through Symbolic Data Modeling
of Local Directional Gradient. In International Conference on Emerging Research in Computing,
Information, Communication and Applications. Springer, Singapore. 2016; 67-79.
[11] Tyagi, Vipin. Content-Based Image Retrieval: An Introduction. In Content-Based Image Retrieval.
Springer, Singapore. 2017; 1-27.
[12] Tyagi, Vipin. Research Issues for Next Generation Content-Based Image Retrieval. In Content-Based
Image Retrieval. Springer, Singapore. 2017; 295-302.
[13] Mane, Pranoti P, Narendra G Bawane. An effective technique for the content-based image retrieval
to reduce the semantic gap based on an optimal classifier technique. Pattern Recognition and Image
Analysis. 2016; 26(3): 597-607.
[14] Hao Xiaochao, Yaping Lin, Janne Heikkilä. Dynamic texture recognition using volume local binary
count patterns with an application to 2D face spoofing detection. IEEE Transactions on Multimedia.
2018; 20(3): 552-566.
TELKOMNIKA, ISSN: 1693-6930, Vol. 17, No. 2, April 2019: 882-896.
[15] Tang Zhijie, Xiaocheng Wu, Bin Fu, Weiwei Chen, Hao Feng. Fast face recognition based on fractal
theory. Applied Mathematics and Computation. 2018; 321(C): 721-730.
[16] Zhu Yani, Chaoyang Zhu, Xiaoxin Li. Improved principal component analysis and linear regression
classification for face recognition. Signal Processing. 2018; 145: 175-182.
[17] Chakraborty, Soumendu, Satish Kumar Singh, Pavan Chakraborty. Local quadruple pattern: A novel
descriptor for facial image recognition and retrieval. Computers & Electrical Engineering. 2017; 62:
92-104.
[18] Xu Wangming, Shiqian Wu, Meng Joo Er, Chaobing Zheng, Yimin Qiu. New non-negative sparse
feature learning approach for content-based image retrieval. IET Image Processing. 2017; 11(9):
724-733.
[19] Hasan RA, Mohammed MN. A Krill Herd Behaviour Inspired Load Balancing of Tasks in Cloud
Computing. Studies in Informatics and Control. 2017; 26(4): 413-424.
[20] Hasan RA, Mohammed MA, Salih ZH, Ameedeen B, Ariff M, Ţăpuş N, Mohammed MN. HSO:
A Hybrid Swarm Optimization Algorithm for Reducing Energy Consumption in the Cloudlets.
TELKOMNIKA Telecommunication Computing Electronics and Control. 2018; 16(5).
[21] Rassem TH, Makbol NM, Yee SY. Face recognition using completed local ternary pattern (CLTP)
texture descriptor. International Journal of Electrical and Computer Engineering (IJECE). 2017; 7(3):
1594-1601.
[22] Mohammed MA, Salih ZH, Ţăpuş N, Hasan RAK. Security and accountability for sharing the data
stored in the cloud. In RoEduNet Conference: Networking in Education and Research, IEEE.
2016; 1-5.
[23] Mohammed MA, Ţăpuş N. A Novel Approach of Reducing Energy Consumption by Utilizing
Enthalpy in Mobile Cloud Computing. Studies in Informatics and Control. 2017; 26(4): 425-434.
[24] Hammood OA, Kahar MNM, Mohammed MN. Enhancement of the video quality forwarding using the
Receiver-Based Approach (URBA) in Vehicular Ad-Hoc Networks. In International Conference on Radar,
Antenna, Microwave, Electronics, and Telecommunications (ICRAMET), IEEE. 2017; 64-67.
[25] Guo H, Su S, Liu J, Sun Z, Xu Y. An Image Retrieval Method Based on Manifold Learning with
Scale-Invariant Feature Control. TELKOMNIKA Telecommunication Computing Electronics and
Control. 2016; 14(3A): 252-258.