Multimodal authentication is one of the prime concepts in current real-world applications, and various approaches have been proposed in this area. In this paper, an intuitive strategy is proposed as a framework for providing a more secure key in biometric security. Initially, features are extracted through PCA computed via SVD from the chosen biometric patterns; key components are then extracted using the LU factorization technique, selected at different key sizes, and combined using a convolution kernel method (Exponential Kronecker Product, eKP) as a Context-Sensitive Exponent Associative Memory model (CSEAM). The verification process is carried out in the same way and checked with the MSE measure. This model is expected to give better outcomes than SVD factorization [1] used for feature selection. The process is computed for different key sizes and the results are presented.
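As a rough illustration of the pipeline described above, the following sketch (assuming NumPy/SciPy, toy random data in place of real biometric images, and a plain Kronecker product standing in for the paper's eKP/CSEAM combination step) shows SVD-based PCA, LU factorization of the projected features, key selection at a chosen size, key combination, and MSE verification.

```python
import numpy as np
from scipy.linalg import lu

def pca_via_svd(X, n_components):
    """Project row-wise samples X onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # scores in PCA space

def key_from_lu(features, key_size):
    """Extract a key block from the upper-triangular factor of an LU factorization."""
    P, L, Uf = lu(features)
    return Uf[:key_size, :key_size]

# toy 'biometric' patterns (e.g. flattened image blocks); real data assumed elsewhere
rng = np.random.default_rng(0)
pattern_a = rng.random((64, 32))
pattern_b = rng.random((64, 32))

key_a = key_from_lu(pca_via_svd(pattern_a, 16), 8)
key_b = key_from_lu(pca_via_svd(pattern_b, 16), 8)

# combine the selected key components; np.kron stands in for the paper's eKP/CSEAM step
combined_key = np.kron(key_a, key_b)

# verification: regenerate the key from a probe sample and compare with MSE
probe_key = np.kron(key_a, key_b)             # identical probe -> MSE ~ 0
mse = np.mean((combined_key - probe_key) ** 2)
print(f"MSE between enrolled and probe keys: {mse:.3e}")
```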
Pattern recognition using context dependent memory model (CDMM) in multimodal... (ijfcstjournal)
Pattern recognition is one of the prime concepts in current technologies in both the private and public sectors. The analysis and recognition of two or more patterns is a complex task due to several factors: considering two or more patterns requires a large amount of storage as well as computation. Vector logic provides a very good strategy for the recognition of patterns. This paper proposes pattern recognition in a multimodal authentication system with the use of vector logic, making the computational model hard to break and lowering the error rate. Using PCA, two to three biometric patterns are fused, and then keys of various sizes are extracted using the LU factorization approach. The selected keys are combined using vector logic, which introduces a memory model called the Context Dependent Memory Model (CDMM) as the computational model in the multimodal authentication system, giving very accurate and effective outcomes for authentication as well as verification. In the verification step, Mean Square Error (MSE) and Normalized Correlation (NC) are used as metrics to minimize the error rate for the proposed model, and the performance analysis is presented.
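The two verification metrics mentioned above can be computed directly; a minimal sketch (assuming the enrolled and probe keys are same-shaped NumPy arrays) is:

```python
import numpy as np

def mse(enrolled, probe):
    """Mean Square Error between enrolled and probe key matrices."""
    return np.mean((enrolled - probe) ** 2)

def normalized_correlation(enrolled, probe):
    """Normalized Correlation: 1.0 for identical keys, smaller for mismatches."""
    num = np.sum(enrolled * probe)
    den = np.sqrt(np.sum(enrolled ** 2) * np.sum(probe ** 2))
    return num / den

enrolled = np.random.default_rng(1).random((8, 8))
probe = enrolled + 0.01 * np.random.default_rng(2).random((8, 8))  # slightly noisy probe
print(mse(enrolled, probe), normalized_correlation(enrolled, probe))
```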
KNOWLEDGE BASED ANALYSIS OF VARIOUS STATISTICAL TOOLS IN DETECTING BREAST CANCER (cscpconf)
In this paper, we study the performance criteria of machine learning tools in classifying breast cancer. We compare data mining tools such as Naïve Bayes, support vector machines, radial basis neural networks, the J48 decision tree and simple CART. We use both binary and multi-class data sets, namely WBC, WDBC and Breast Tissue, from the UCI machine learning repository. The experiments are conducted in WEKA. The aim of this research is to find the best classifier with respect to accuracy, precision, sensitivity and specificity in detecting breast cancer.
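The four evaluation measures used in that comparison follow directly from the binary confusion matrix; a small sketch (assuming scikit-learn rather than WEKA, with placeholder labels standing in for the WBC-style binary classes) is:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def binary_scores(y_true, y_pred):
    """Accuracy, precision, sensitivity (recall) and specificity from a 2x2 confusion matrix."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp) if (tp + fp) else 0.0,
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
    }

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
print(binary_scores(y_true, y_pred))
```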
POSTERIOR RESOLUTION AND STRUCTURAL MODIFICATION FOR PARAMETER DETERMINATION ... (IJCI JOURNAL)
When only a few lower-mode data are available to evaluate a large number of unknown parameters, it is difficult to acquire information about all of them. The challenge in this kind of updating problem is first to gain confidence about the parameters that are evaluated correctly using the available data, and second to get information about the remaining parameters. In this work, the first issue is resolved by employing the sensitivity of the modal data used for updating. Once it is established which parameters are evaluated satisfactorily using the available modal data, the remaining parameters are evaluated employing the modal data of a virtual structure. This virtual structure is created by adding or removing some known stiffness to or from some of the stories of the original structure. A 12-story shear building is considered for the numerical illustration of the approach. Results of the study show that the present approach is an effective tool for system identification when only a few data are available for updating.
IMPROVING SUPERVISED CLASSIFICATION OF DAILY ACTIVITIES LIVING USING NEW COST... (csandit)
The growing population of elders in society calls for a new approach to care giving. By inferring what activities the elderly are performing in their houses, it is possible to determine their physical and cognitive capabilities. In this paper we show the potential of important discriminative classifiers, namely Soft Support Vector Machines (C-SVM), Conditional Random Fields (CRF) and k-Nearest Neighbors (k-NN), for recognizing activities from sensor patterns in a smart home environment. We also address the class imbalance problem in the activity recognition field, which is known to hinder the learning performance of classifiers. Cost-sensitive learning is attractive under most imbalanced circumstances, but it is difficult to determine the precise misclassification costs in practice. We introduce a new criterion for selecting a suitable cost parameter C of the C-SVM method. Through our evaluation on four real-world imbalanced activity datasets, we demonstrate that C-SVM based on our proposed criterion outperforms the state-of-the-art discriminative methods in activity recognition.
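The cost-sensitive C-SVM idea can be sketched with scikit-learn's class-weighted SVC; the paper's own criterion for choosing the cost parameter C is not reproduced here, so this sketch simply grid-searches C with a class-imbalance-aware weighting (the dataset and parameter grid are placeholders):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# imbalanced toy data standing in for the activity-recognition sensor features
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" scales the misclassification cost of the minority class;
# C is selected here by cross-validated F1, not by the paper's proposed criterion
grid = GridSearchCV(SVC(kernel="rbf", class_weight="balanced"),
                    param_grid={"C": [0.1, 1, 10, 100]},
                    scoring="f1", cv=5)
grid.fit(X_tr, y_tr)
print("best C:", grid.best_params_["C"],
      "test F1:", f1_score(y_te, grid.predict(X_te)))
```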
A survey of modified support vector machine using particle of swarm optimizat... (Editor Jacotech)
The main objective of this survey paper is to provide a detailed description of Wireless Sensor Networks at the Medium Access Control layer and the routing layer. In the Medium Access Control layer, the Event-Driven Time Division Multiple Access protocol is studied, and in the network layer, two routing protocols, Bellman-Ford and Dynamic Source Routing, are studied.
MAGNETIC RESONANCE BRAIN IMAGE SEGMENTATION (VLSICS Design)
Segmentation of tissues and structures from medical images is the first step in many image analysis applications developed for medical diagnosis. With the growing research on medical image segmentation, it is essential to categorize the research outcomes and provide researchers with an overview of the existing segmentation techniques for medical images. In this paper, different image segmentation methods applied to magnetic resonance brain images are reviewed. The selection of methods includes sources from image processing journals, conferences, books, dissertations and theses. The conceptual details of the methods are explained and mathematical details are avoided for simplicity. Both broad and detailed categorizations of the reviewed segmentation techniques are provided. The state-of-the-art research is presented with emphasis on the developed techniques and the image properties they use. The methods described are not always mutually independent, hence their interrelationships are also stated. Finally, conclusions are drawn summarizing commonly used techniques and their complexities in application.
Hybrid Method HVS-MRMR for Variable Selection in Multilayer Artificial Neural... (IJECEIAES)
Variable selection is an important technique for reducing the dimensionality of data, frequently used in data preprocessing for data mining. This paper presents a new variable selection algorithm that uses Heuristic Variable Selection (HVS) and Minimum Redundancy Maximum Relevance (MRMR). We enhance the HVS method for variable selection by incorporating the MRMR filter. Our algorithm is based on a wrapper approach using a multi-layer perceptron; we call it the HVS-MRMR wrapper for variable selection. The relevance of a set of variables is measured by a convex combination of the relevance given by the HVS criterion and the MRMR criterion. This approach selects new relevant variables; we evaluate the performance of HVS-MRMR on eight benchmark classification problems. The experimental results show that HVS-MRMR selects fewer variables with higher classification accuracy compared to MRMR, HVS and no variable selection on most datasets. HVS-MRMR can be applied to various classification problems that require high classification accuracy.
Integrated bio-search approaches with multi-objective algorithms for optimiza... (TELKOMNIKA JOURNAL)
Optimal selection of features is very difficult and crucial to achieve, particularly for the task of classification. This is due to traditional methods of selecting features that operate independently and generate collections of irrelevant features, which in turn affects classification accuracy. The goal of this paper is to leverage the potential of bio-inspired search algorithms, together with wrappers, in optimizing the multi-objective algorithms ENORA and NSGA-II to generate an optimal set of features. The main step is to combine ENORA and NSGA-II with suitable bio-search algorithms where multiple subset generation has been implemented. The next step is to validate the optimum feature set by conducting a subset evaluation. Eight comparison datasets of various sizes have been deliberately selected for testing. Results show that combining the multi-objective algorithms ENORA and NSGA-II with the selected bio-inspired search algorithm is promising for achieving a better optimal solution (i.e., the best features with higher classification accuracy) for the selected datasets. This finding implies that bio-inspired wrapper/filter search algorithms can boost the efficiency of ENORA and NSGA-II for the task of selecting and classifying features.
K-Medoids Clustering Using Partitioning Around Medoids for Performing Face Re... (ijscmcj)
Face recognition is one of the most unobtrusive biometric techniques and can be used for access control as well as surveillance purposes. Various methods for implementing face recognition have been proposed, with varying degrees of performance in different scenarios. The most common issue with effective facial biometric systems is their high susceptibility to variations in the face owing to factors like changes in pose, varying illumination, different expressions, presence of outliers, noise, etc. This paper explores a technique for face recognition that classifies face images using an unsupervised learning approach through K-Medoids clustering. The Partitioning Around Medoids (PAM) algorithm has been used for performing the K-Medoids clustering of the data. The results suggest increased robustness to noise and outliers in comparison to other clustering methods. The technique can therefore be used to increase the overall robustness of a face recognition system, thereby increasing its invariance and making it a reliably usable biometric modality.
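A simplified k-medoids iteration (alternating assignment and medoid update, rather than the full PAM swap search used in the paper) can be sketched in NumPy; face images would first be flattened or reduced to feature vectors:

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Simplified k-medoids: assign points to the nearest medoid, then pick the
    member minimizing total within-cluster distance as the new medoid."""
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                within = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

X = np.random.default_rng(1).random((60, 16))   # stand-in for face feature vectors
medoids, labels = k_medoids(X, k=3)
print("medoid indices:", medoids)
```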
Mammogram image segmentation using rough clustering (eSAT Journals)
Abstract: Mammography is the most effective procedure to diagnose breast cancer at an early stage. This paper proposes mammogram image segmentation using the Rough K-Means (RKM) clustering algorithm. A median filter is used for pre-processing the image; it is normally used to reduce noise in an image. The 14 Haralick features are extracted from the mammogram image using the Gray Level Co-occurrence Matrix (GLCM) for different angles. The features are clustered by the K-Means, Fuzzy C-Means (FCM) and Rough K-Means algorithms to segment the regions of interest for classification. The results of the segmentation algorithms are compared and analyzed using Mean Square Error (MSE) and Root Mean Square Error (RMSE). It is observed that the proposed method produces better results than the existing methods. Keywords: Mammogram, Data Mining, Image Processing, Feature Extraction, Rough K-Means, Image Segmentation.
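The GLCM texture-feature step can be sketched with scikit-image; graycomatrix/graycoprops expose six Haralick-style properties rather than all fourteen used in the paper, and a toy 8-bit image stands in for the mammogram:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

image = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)

# co-occurrence matrices for four angles at distance 1, as in a multi-angle setup
glcm = graycomatrix(image, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()        # average over the four angles
            for prop in ("contrast", "dissimilarity", "homogeneity",
                         "energy", "correlation", "ASM")}
print(features)
```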
Accounting for variance in machine learning benchmarks (Devansh16)
Accounting for Variance in Machine Learning Benchmarks
Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Naz Sepah, Edward Raff, Kanika Madan, Vikram Voleti, Samira Ebrahimi Kahou, Vincent Michalski, Dmitriy Serdyuk, Tal Arbel, Chris Pal, Gaël Varoquaux, Pascal Vincent
Strong empirical evidence that one machine-learning algorithm A outperforms another one B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameter choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization and hyperparameter choice markedly impacts the results. We analyze the predominant comparison methods used today in the light of this variance. We show the counter-intuitive result that adding more sources of variation to an imperfect estimator better approaches the ideal estimator, at a 51-times reduction in compute cost. Building on these results, we study the error rate of detecting improvements on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons.
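The central point, that conclusions should account for variance from data sampling and initialization rather than a single run, can be illustrated with a small sketch that repeats a train/test evaluation over several seeds and reports mean and standard deviation (the dataset and model are placeholders, not those used in the paper):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

scores = []
for seed in range(10):
    # vary both the data split and the weight initialization between trials
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    scores.append(model.fit(X_tr, y_tr).score(X_te, y_te))

print(f"accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f} over {len(scores)} trials")
```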
When deep learners change their mind: learning dynamics for active learning (Devansh16)
Abstract:
Active learning aims to select the samples to be annotated that yield the largest performance improvement for the learning algorithm. Many methods approach this problem by measuring the informativeness of samples based on the certainty of the network's predictions for those samples. However, it is well known that neural networks are overly confident about their predictions and are therefore an untrustworthy source for assessing sample informativeness. In this paper, we propose a new informativeness-based active learning method. Our measure is derived from the learning dynamics of a neural network. More precisely, we track the label assignment of the unlabeled data pool during the training of the algorithm. We capture the learning dynamics with a metric called label-dispersion, which is low when the network consistently assigns the same label to a sample during training and high when the assigned label changes frequently. We show that label-dispersion is a promising predictor of the uncertainty of the network, and show on two benchmark datasets that an active learning algorithm based on label-dispersion obtains excellent results.
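The label-dispersion idea can be sketched directly from a per-epoch record of predicted labels for each unlabeled sample; the paper's exact formula may differ, so this version simply uses one minus the frequency of the most common predicted label:

```python
import numpy as np

def label_dispersion(pred_history):
    """pred_history: (n_epochs, n_samples) array of predicted class labels.
    Returns a per-sample score in [0, 1); 0 means the label never changed."""
    n_epochs, n_samples = pred_history.shape
    dispersion = np.empty(n_samples)
    for i in range(n_samples):
        _, counts = np.unique(pred_history[:, i], return_counts=True)
        dispersion[i] = 1.0 - counts.max() / n_epochs
    return dispersion

# toy history: 5 epochs of predictions for 4 unlabeled samples
history = np.array([[0, 1, 2, 1],
                    [0, 1, 2, 0],
                    [0, 2, 2, 1],
                    [0, 1, 2, 2],
                    [0, 1, 2, 1]])
scores = label_dispersion(history)
query_order = np.argsort(-scores)   # most 'undecided' samples queried first
print(scores, query_order)
```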
Survey on semi-supervised classification methods and feature selection (eSAT Journals)
Abstract: Data mining, also called knowledge discovery, is the process of analyzing data from several perspectives and summarizing it into useful information. It has tremendous applications in classification, such as pattern recognition, discovering disease types, medical image analysis, speech recognition, biometric identification, drug discovery, etc. This is a survey of several semi-supervised classification methods used by classifiers, in which both labeled and unlabeled data can be used for classification; it is less expensive than other classification approaches. The techniques surveyed in this paper are the low-density separation approach, transductive SVM, a semi-supervised logistic discriminant procedure, and the self-training nearest-neighbour rule using cut edges. Along with classification methods, a review of various feature selection methods is also presented. Feature selection is performed to reduce the dimensionality of large datasets; after reducing the attributes, the data are given to the classifier, so the accuracy and performance of the classification system can be improved. Feature selection methods covered include consistency-based feature selection, fuzzy entropy measure feature selection with a similarity classifier, signal-to-noise ratio, and positive approximation, each with its own benefits. Index Terms: Semi-supervised classification, Transductive support vector machine, Feature selection, Unlabeled samples.
A comparative study of clustering and biclustering of microarray data (ijcsit)
There are subsets of genes that show similar behavior under subsets of conditions, so we say that they coexpress, but they behave independently under other subsets of conditions. Discovering such coexpressions can help uncover genomic knowledge such as gene networks or gene interactions. That is why it is of utmost importance to perform a simultaneous clustering of genes and conditions to identify clusters of genes that are coexpressed under clusters of conditions. This type of clustering is called biclustering. Biclustering is an NP-hard problem; consequently, heuristic algorithms are typically used to approximate it by finding suboptimal solutions. In this paper, we present a new survey on clustering and biclustering of gene expression data, also called microarray data.
A comparative study of three validities computation methods for multimodel ap... (IJECEIAES)
The multimodel approach offers very satisfactory results in the modelling, diagnosis and control of complex systems. In the modelling case, this approach passes through three steps: determination of the model library, computation of the validities, and establishment of the final model. In this context, this paper focuses on a comparative study between three recent methods of validity computation and highlights the method that offers the best performance in terms of precision. To achieve this goal, we apply the three methods to two simulation examples in order to compare their performances.
Integrating Fuzzy DEMATEL and SMAA-2 for Maintenance Expenses (inventionjournals)
The majority of the allowances transferred to public institutions are mostly spent on buying new equipment, materials and facilities and on their maintenance and repair. Some public-sector organizations establish their own plants in order to reduce maintenance and repair costs and to gain the ability to perform these activities themselves. However, developing technology and the variety of materials make repair and maintenance activities more expensive for them. In this study, the vital criteria for a public institution are determined. Using the Fuzzy DEMATEL (Decision Making Trial and Evaluation Laboratory) method, the degree of importance is identified by two defuzzification methods, and the alternatives are ranked using SMAA-2 (Stochastic Multi-criteria Acceptability Analysis) in three scenarios. The results show that different defuzzification methods change the order of preferences.
Unsupervised Clustering to Classify the Cancer Data with the Help of the FCM Algorithm (IOSR Journals)
There is structure in nature, and it is believed that there is an underlying structure in most phenomena to be understood. In image recognition, molecular biology applications such as protein folding and 3D molecular structure, cancer detection and many others, this underlying structure exists. By finding structure, one classifies the data according to similar patterns, features and other characteristics; this general idea is known as classification. In classification, also termed clustering, the most important issue is deciding what criteria to classify against. This paper presents fuzzy classification techniques to classify cancer disease data. Cluster analysis, cluster validity and the fuzzy C-means (FCM) technique are discussed and applied to cancer data classification.
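A minimal fuzzy C-means iteration (membership update and weighted center update with fuzzifier m = 2) can be sketched in NumPy; toy two-dimensional points stand in for the cancer data and no cluster-validity analysis is included:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, eps=1e-6, seed=0):
    """Basic FCM: returns cluster centers and the membership matrix U (n_samples x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                      # rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]     # membership-weighted means
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
        U_new = 1.0 / (dist ** (2 / (m - 1)) *
                       np.sum(dist ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 2)),
               np.random.default_rng(2).normal(5, 1, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print("centers:\n", centers)
```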
SURVEY ON CLASSIFICATION ALGORITHMS USING BIG DATASET (Editor IJMTER)
The data mining environment produces a large amount of data that needs to be analyzed. Using traditional databases and architectures, it has become difficult to process, manage and analyze patterns. To gain knowledge about Big Data, a proper architecture should be understood. Classification is an important data mining technique with broad applications, used to classify the various kinds of data found in nearly every field of our life. Classification assigns an item to a class according to the item's features with respect to a predefined set of classes. This paper puts a spotlight on various classification algorithms, including J48, C4.5 and Naive Bayes, using a large dataset.
Expert system of single magnetic lens using JESS in Focused Ion Beam (ijcsa)
This work presents an expert system for the symmetrical single magnetic lens used in a focused ion beam optical system. Java Expert System Shell (JESS) programming is proposed to build the intelligent agent "MOPTION" for obtaining an optimum magnetic flux density and calculating the ion optical trajectory. The combination of this rule-based engine and SIMION 8.1 configures the reconstruction process and compiles the data retrieved by the proposed expert system agent to implement the pole-piece reconstruction for lens design. The pole-piece reconstruction is presented as a 3D graph, and under the infinite-magnification conditions of the optical path, the spherical, chromatic and total aberration disk diameters obtained are 0.03, 0.13 and 0.133 micron (μm), respectively.
Front End Data Cleaning And Transformation In Standard Printed Form Using Neu... (ijcsa)
Manual front-end data collection and loading into a database may introduce errors into data sets and is a very time-consuming process. Scanning a data document as an image and recognizing the corresponding information in that image can be considered a possible solution to this challenge. This paper presents an automated solution for the problem of data cleansing and recognition of user-written data, transforming it into standard printed format with the help of artificial neural networks. Three different neural models, namely direct, correlation-based and hierarchical, have been developed to handle this issue. The solution is developed to justify the proposed logic in a very hostile input environment.
A Novel Framework and Policies for On-line Block of Cores Allotment for Multi... (ijcsa)
The computer industry has widely accepted that future performance increases must largely come from increasing the number of processing cores on a die. This has led to NoC processors. Task scheduling is one of the most challenging problems facing parallel programmers today and is known to be NP-complete. A good principle is space-sharing of cores and scheduling multiple DAGs simultaneously on a NoC processor. Hence the need to find the optimal number of cores for a DAG for a particular scheduling method, and further, which region of cores on the NoC should be allotted to a DAG. In this work, a method is proposed to find a near-optimal minimal block of cores for a DAG on a NoC processor. Further, a time-efficient framework and three on-line block allotment policies for the submitted DAGs are experimented with. The objective of the policies is to improve NoC throughput. The policies are evaluated on a simulator and found to deliver better performance than the policies found in the literature.
Statistical Analysis based Hypothesis Testing Method in Biological Knowledge ... (ijcsa)
The correlations and interactions among different biological entities comprise the biological system. Although already revealed interactions contribute to the understanding of different existing systems, researchers face many questions every day regarding the inter-relationships among entities. Their queries have a potential role in exploring new relations, which may open up new areas of investigation. In this paper, we introduce a text-mining based method for answering biological queries in terms of statistical computation, so that researchers can arrive at new knowledge discoveries. It allows users to submit their query in natural linguistic form, which can be treated as a hypothesis. Our proposed approach analyzes the hypothesis and measures its p-value with respect to the existing literature. Based on the measured value, the system either accepts or rejects the hypothesis from a statistical point of view. Moreover, even if it does not find any direct relationship among the entities of the hypothesis, it presents a network to give an integral overview of all the entities through which they might be related. This also helps researchers widen their view and thus think of new hypotheses for further investigation. It gives researchers a quantitative evaluation of their assumptions so that they can reach a logical conclusion, and thus aids relevant research in biological knowledge discovery. The system also provides researchers with a graphical interactive interface to submit their hypotheses for assessment in a more convenient way.
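The statistical core (accepting or rejecting a queried relationship by its p-value against the literature) can be illustrated with a simple co-occurrence test; the paper's actual statistic is not specified here, so this sketch uses Fisher's exact test on a 2x2 contingency table of document counts, with made-up placeholder counts:

```python
from scipy.stats import fisher_exact

# documents mentioning: both entities, only A, only B, neither (hypothetical counts)
both, only_a, only_b, neither = 40, 120, 90, 9750
table = [[both, only_a],
         [only_b, neither]]

odds_ratio, p_value = fisher_exact(table, alternative="greater")
alpha = 0.05
decision = "accept (association supported)" if p_value < alpha else "reject (no evidence)"
print(f"odds ratio={odds_ratio:.2f}, p={p_value:.3g} -> {decision}")
```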
INTRUSION DETECTION SYSTEMS IN MOBILE AD HOC NETWORKS: STATE OF ... (ijcsa)
Mobile Ad Hoc Networks (MANETs) are highly vulnerable to different attacks. Prevention methods such as cryptographic techniques alone are not sufficient to make them secure; therefore, efficient intrusion detection must be deployed and elaborated to facilitate the identification of attacks. An Intrusion Detection System (IDS) aims to detect malicious and selfish nodes in a network. The intrusion detection methods normally used for wired networks are no longer adequate when adapted directly to a wireless ad-hoc network, so existing intrusion detection techniques have to be changed and new techniques have to be determined to work efficiently and effectively in this new network architecture of MANETs. In this paper we give a survey of different architectures and methods of intrusion detection systems (IDSs) for MANETs according to the recent literature.
A SURVEY OF SENTIMENT CLASSIFICATION TECHNIQUES USED FOR INDIAN REGIONA... (ijcsa)
Sentiment analysis is a natural language processing task that extracts sentiment from various text forms and classifies it as positive, negative or neutral in polarity. It analyzes the emotions, feelings, and attitude of a speaker or a writer towards a context. This paper gives a comparative study of various sentiment classification techniques and also discusses in detail the two main categories of sentiment classification techniques, namely machine-learning based and lexicon based. The paper also presents the challenges associated with sentiment analysis along with the lexical resources available.
Predicting credit defaulters is a perilous task for financial industries like banks. Ascertaining a non-payer before granting a loan is a significant and conflict-ridden task for the banker. Classification techniques are a good choice for predictive analyses such as finding out whether a claimant is an honest customer or a cheat. Identifying the outstanding classifier is a risky assignment for any practitioner such as a banker. This allows computer science researchers to drill down into efficient research work by evaluating different classifiers and finding out the best classifier for such predictive problems. This research work investigates the productivity of the LADTree and REPTree classifiers for credit risk prediction and compares their fitness through various measures. The German credit dataset has been taken and used to predict credit risk with the help of an open-source machine learning tool.
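LADTree and REPTree are WEKA classifiers; a rough analogue of the comparison can be sketched with scikit-learn tree models evaluated by cross-validation on a stand-in dataset (the German credit data would be loaded in place of the synthetic one, and these models are only loose substitutes for the WEKA ones):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

# placeholder for the German credit dataset (1000 samples, good/bad label)
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.7, 0.3], random_state=0)

models = {
    # boosted shallow trees, loosely in the spirit of LADTree's boosting
    "boosted_trees": AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), random_state=0),
    # a pruned decision tree, loosely in the spirit of REPTree
    "pruned_tree": DecisionTreeClassifier(ccp_alpha=0.01, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```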
Parsing of XML file to make secure transaction in mobile commerce (ijcsa)
Mobile commerce (M-Commerce) means carrying out transactions using mobile devices. Nowadays, users depend on mobile phones because of their anytime, anywhere features. Users purchase and pay more with their mobile devices than with desktops. Therefore, the security of M-Commerce transactions should be strong enough to enhance their performance. Mobile phones use WAP, i-mode and J2ME technologies for programming transactions. The data during a transaction are sent as an XML file. The XML processor takes more time to encrypt, which weakens security during the transaction. This paper defines code to parse the XML file and reduce its size so that transaction data can be sent easily and quickly. The code is written in XML and J2ME, as these technologies can easily run on multiple platforms.
In this paper, a fuzzy VRPTW with uncertain travel times is considered. Credibility theory is used to model the problem, and a preference index is specified at which it is desired that the travel times to reach the customers fall into their time windows. We propose integrating fuzzy theory with an ant colony system based evolutionary algorithm to solve the problem while preserving the constraints. Computational results for benchmark problems having short and long time horizons are presented to show the effectiveness of the algorithm. Comparisons between different preference indexes have been obtained to help the user make suitable decisions.
AN ONTOLOGY FOR EXPLORING KNOWLEDGE IN COMPUTER NETWORKS (ijcsa)
Ontologies applied to impart knowledge in various fields of Information Technology have made those fields more intelligent over the past few decades. Many ontologies have been built in domains such as biology, medicine, physics, chemistry, and mathematics. Ontologies in the computer science domain are limited, and even those have not been explored in detail. The knowledge in the field of computer networks is very large, which makes it difficult for a human to gain expertise in it. This paper proposes an ontology for the computer network domain covering perspectives such as scope, scale, topology, communication media, the OSI model, the TCP/IP model, protocols, security, network operating systems, network hardware and performance. The ontology is developed in OWL format, which can be easily integrated with other semantics-based applications. The network ontology can be employed in Semantic Web applications to help users search for concepts in the computer networks domain.
Proposed Agent Based Black hole Node Detection Algorithm for Ad-Hoc Wireless... (ijcsa)
A Mobile Ad-hoc Network (MANET) is a recent and emerging research topic among researchers. The reason behind the popularity of MANETs is their flexibility and independence from network infrastructure. MANETs have some unique characteristics such as dynamic network topology, limited power and limited bandwidth for communication, and they face more challenges than any conventional network. The dynamic network topology of MANETs, their infrastructure-less nature and the lack of a certificate authority mean that their security problems need greater attention. This paper presents a review of layer-wise security attacks and discusses the issues and challenges of mobile ad hoc networks. Given the importance of the security issues, the paper proposes an intrusion detection framework for detecting network layer threats such as the black hole attack.
A ROBUST MISSING VALUE IMPUTATION METHOD MIFOIMPUTE FOR INCOMPLETE MOLECULAR ... (ijcsa)
Missing data imputation is an important research topic in data mining. Large-scale molecular descriptor data may contain missing values (MVs). However, some methods for downstream analyses, including some prediction tools, require a complete descriptor data matrix. We propose and evaluate an iterative imputation method, MiFoImpute, based on a random forest. By averaging over many unpruned regression trees, a random forest intrinsically constitutes a multiple imputation scheme. Using the NRMSE and NMAE estimates of the random forest, we are able to estimate the imputation error. Evaluation is performed on two molecular descriptor datasets generated from a diverse selection of pharmaceutical fields, with artificially introduced missing values ranging from 10% to 30%. The experimental results demonstrate that missing values have a great impact on the effectiveness of imputation techniques and that our method MiFoImpute is more robust to missing values than the ten other imputation methods used as benchmarks. Additionally, MiFoImpute exhibits attractive computational efficiency and can cope with high-dimensional data.
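The MiFoImpute code itself is not reproduced here; the same random-forest-based iterative imputation idea can be sketched with scikit-learn's IterativeImputer wrapping a RandomForestRegressor, with NRMSE computed against artificially masked entries (the descriptor matrix below is a random placeholder):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_true = rng.random((200, 15))                 # stand-in for a molecular descriptor matrix

# introduce ~20% missing values artificially
mask = rng.random(X_true.shape) < 0.20
X_missing = X_true.copy()
X_missing[mask] = np.nan

imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=0),
                           max_iter=10, random_state=0)
X_imputed = imputer.fit_transform(X_missing)

# normalized RMSE over the masked entries only
nrmse = np.sqrt(np.mean((X_imputed[mask] - X_true[mask]) ** 2)) / X_true[mask].std()
print(f"NRMSE on artificially missing entries: {nrmse:.3f}")
```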
The advancement in mobile technology and wireless networks has increased the use of mobile devices in database-driven applications. These applications require high reliability and availability due to the inherent nature of the mobile environment, and the transaction is the central component of database systems. In this paper we present useful work done on mobile transactions: we describe the mobile database environment, review many proposed mobile transaction models, and present techniques used to enhance transaction execution.
USING ARTIFICIAL NEURAL NETWORK IN DIAGNOSIS OF THYROID DISEASE: A CASE STUDYijcsa
Nowadays, one of the main challenges in the medical sciences created by developing technology is diagnosing disease with high accuracy. In recent decades, Artificial Neural Networks (ANNs) have been considered among the best solutions to achieve this goal and are involved in widespread research into disease diagnosis. In this paper, we consider a Multi-Layer Perceptron (MLP) ANN using the back-propagation learning algorithm to classify thyroid disease. It consists of an input layer with 5 neurons, a hidden layer with 6 neurons and an output layer with just 1 neuron. The suitable selection of the activation function, the number of neurons in the hidden layer and the number of layers is achieved by trial and error. Our simulation results indicate that the optimized MLP ANN can reach an accuracy level of 98.6%.
Taking advantage of state of the art underwater vehicles and current networking capabilities, the visionary
double objective of this work is to “open to people connected to the Internet, an access to ocean depths
anytime, anywhere.” Today, these people can just perceive the changing surface of the sea from the shores,
but ignore almost everything on what is hidden. If they could explore seabed and become knowledgeable,
they would get involved in finding alternative solutions for our vital terrestrial problems – pollution,
climate changes, destruction of biodiversity and exhaustion of Earth resources. The second objective is to
assist professionals of underwater world in performing their tasks by augmenting the perception of the
scene and offering automated actions such as wildlife monitoring and counting. The introduction of Mixed
Reality and Internet in aquatic activities constitutes a technological breakthrough when compared with the
status of existing related technologies. Through Internet, anyone, anywhere, at any moment will be
naturally able to dive in real-time using a Remote Operated Vehicle (ROV) in the most remarkable sites
around the world. The heart of this work is focused on Mixed Reality. The main challenge is to reach real
time display of digital video stream to web users, by mixing 3D entities (objects or pre-processed
underwater terrain surfaces), with 2D videos of live images collected in real time by a teleoperated ROV.
Inside Story Opens the Vaults on Corporate Reputation - May 2013Catherine Anderson
After more than 13 years' measuring corporate reputations we're pleased to share some of our research findings and insights focusing on a case study with global insurance company QBE. We show some of the key events which can negatively impact your corporate reputation, and how organisations can negate this with a good relationship with business journalists.
Personal identification using multibiometrics score level fusioneSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
K-MEDOIDS CLUSTERING USING PARTITIONING AROUND MEDOIDS FOR PERFORMING FACE R...ijscmc
Face recognition is one of the most unobtrusive biometric techniques that can be used for access control as well as surveillance purposes. Various methods for implementing face recognition have been proposed with varying degrees of performance in different scenarios. The most common issue with effective facial biometric systems is high susceptibility of variations in the face owing to different factors like changes in pose, varying illumination, different expression, presence of outliers, noise etc. This paper explores a novel technique for face recognition by performing classification of the face images using unsupervised learning approach through K-Medoids clustering. Partitioning Around Medoids algorithm (PAM) has been used for performing K-Medoids clustering of the data. The results are suggestive of increased robustness to noise and outliers in comparison to other clustering methods. Therefore the technique can also be used to increase the overall robustness of a face recognition system and thereby increase its invariance and make it a reliably usable biometric modality
QUANTUM CLUSTERING-BASED FEATURE SUBSET SELECTION FOR MAMMOGRAPHIC I...ijcsit
In this paper, we present an algorithm for feature selection. This algorithm, labelled QC-FS (Quantum
Clustering for Feature Selection), performs the selection in two steps. First, the original feature space
is partitioned in order to group similar features using the Quantum Clustering algorithm. Then a
representative for each cluster is selected, using similarity measures such as the correlation
coefficient (CC) and mutual information (MI). The feature which maximizes this information is chosen
by the algorithm.
Ijaems apr-2016-1 Multibiometric Authentication System Processed by the Use o...INFOGAIN PUBLICATION
Present-day authentication systems are mostly uni-modal, i.e. they rely on a single authentication method, which can be a fingerprint, iris, palm veins, etc. These models therefore have to contend with a variety of problems, such as noisy or unusual data, non-universality, unauthorized attempts and high error rates. Some of these limitations can be reduced or removed by the use of multimodal biometric systems that integrate the evidence presented by several sources of information. This paper discusses a multibiometric authentication system based on a fusion algorithm using a key. Our work mainly focuses on the fusion algorithm, i.e. the fusion of fingerprint and palm print, from which the strongest features of the two traits are taken into account. The fusion of the two traits is carried out with the minimum possible number of features, and a key is then generated for multibiometric authentication. By processing the test image of a person, the identity of the person is displayed together with his/her own image. The fusion algorithm is found to have less computation time than existing algorithms. By matching the results, we validate and authenticate the particular individual.
The International Journal of Engineering and Science (IJES)theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Multimodal Biometrics at Feature Level Fusion using Texture FeaturesCSCJournals
In recent years, fusion of multiple biometric modalities for personal authentication has received considerable attention. This paper presents a feature level fusion algorithm based on texture features. The system combines fingerprint, face and off-line signature. Texture features are extracted from Curvelet transform. The Curvelet feature dimension is selected based on d-prime number. The increase in feature dimension is reduced by using template averaging, moment features and by Principal component analysis (PCA). The algorithm is tested on in-house multimodal database comprising of 3000 samples and Chimeric databases. Identification performance of the system is evaluated using SVM classifier. A maximum GAR of 97.15% is achieved with Curvelet-PCA features.
Feature Extraction using Sparse SVD for Biometric Fusion in Multimodal Authen...IJNSA Journal
Token-based security (ID cards) has been used to restrict access to secured systems. The purpose of
biometrics is to identify or verify an individual by using certain physiological or behavioural traits
associated with the person. Current biometric systems make use of the face, fingerprints, iris, hand
geometry, retina, signature, palm print, voiceprint and so on to establish a person's identity.
Biometrics is one of the primary concepts in real application domains such as Aadhaar cards, passports,
PAN cards, etc. In this paper, we consider face and fingerprint patterns for identification/verification.
Using these data we propose a novel model for authentication in multimodal biometrics, called the
Context-Sensitive Exponent Associative Memory Model (CSEAM). It provides different stages of security
for biometric fusion patterns: in stage 1, fusion of the face and fingerprint patterns using Principal
Component Analysis (PCA); in stage 2, Sparse SVD decomposition is applied to extract the feature
patterns from the fusion data and the face pattern; and in stage 3, the extracted feature vectors are
encoded using the CSEAM model. The final key is stored in a smart card as an associative memory (M),
often called a Context-Sensitive Associative Memory (CSAM). In the CSEAM model, the memory is computed
using the exponential Kronecker product for encoding and verification of the samples chosen from the
users. The exponential of a matrix can be computed in various ways, such as the Taylor series, Padé
approximation and Ordinary Differential Equations (ODEs); among these approaches, we consider the first
two methods for computing the exponential of a feature space. A result analysis of SVD and Sparse SVD
for the feature extraction process, and of the authentication/verification process of the proposed
system in terms of mean square error rates, will be presented.
Simplified Knowledge Prediction: Application of Machine Learning in Real LifePeea Bal Chakraborty
Machine learning is the scientific study of algorithms and statistical models that machines use to perform a specific task based on patterns and inference rather than explicit instructions. This research and analysis aims to observe how precisely a machine can predict whether a patient suspected of breast cancer has a malignant or benign tumour. In this paper the classification of cancer type and the prediction of risk levels are performed with various machine learning models and are pictorially depicted with various visual analytics tools.
Analysis of machine learning algorithms for character recognition: a case stu...nooriasukmaningtyas
This paper covers the work done in handwritten digit recognition and the
various classifiers that have been developed. Methods like MLP, SVM,
Bayesian networks and random forests are discussed with their accuracy
and are empirically evaluated. Boosted LeNet-4, an ensemble of various
classifiers, has shown the highest accuracy among these methods.
Controlling informative features for improved accuracy and faster predictions...Damian R. Mingle, MBA
Identification of suitable biomarkers for accurate prediction of phenotypic outcomes is a goal for personalized medicine. However, current machine learning approaches are either too complex or perform poorly.
For more information:
http://societyofdatascientists.com/controlling-informative-features-for-improved-accuracy-and-faster-predictions-in-omentum-cancer-models/?src=slideshare
A Threshold Fuzzy Entropy Based Feature Selection: Comparative StudyIJMER
Feature selection is one of the most common and critical tasks in database classification. It
reduces the computational cost by removing insignificant and unwanted features, which in turn
makes the diagnosis process accurate and comprehensible. This paper presents the measurement of
feature relevance based on fuzzy entropy, tested with a Radial Basis Function (RBF) network classifier,
Bagging (Bootstrap Aggregating), Boosting and stacking on datasets from various fields. Twenty
benchmark datasets available in the UCI Machine Learning Repository and KDD have been
used for this work. The accuracy obtained from these classification processes shows that the proposed
method is capable of producing good and accurate results with fewer features than the original
datasets.
Prediction of dementia using machine learning model and performance improvem...IJECEIAES
Dementia is a brain disease that ranks seventh among causes of death according to the report of the World Health Organization (WHO). Among the various types of dementia, Alzheimer's disease accounts for more than 70% of cases. The objective is to predict dementia from the Open Access Series of Imaging Studies (OASIS) dataset using machine learning techniques, and to analyse and improve the performance of the machine learning model using the cuckoo algorithm. In this paper, feature engineering is the focus, and the prediction of dementia is carried out on the OASIS dataset with the help of data mining techniques. Feature engineering is followed by prediction using the machine learning models Gaussian naïve Bayes (NB), support vector machine and linear regression. The best prediction model is then selected and validated. The evaluation metrics considered for validating the models are accuracy, precision, recall and F1-score, with best values of 95%, 97%, 95% and 95%, respectively, obtained by Gaussian NB. The accuracy of the machine learning models is increased by eliminating the factors which affect the performance of the models using the cuckoo algorithm.
Review of Multimodal Biometrics: Applications, Challenges and Research AreasCSCJournals
Biometric systems for today’s high security applications must meet stringent performance requirements. The fusion of multiple biometrics helps to minimize the system error rates. Fusion methods include processing biometric modalities sequentially until an acceptable match is obtained. More sophisticated methods combine scores from separate classifiers for each modality. This paper is an overview of multimodal biometrics, challenges in the progress of multimodal biometrics, the main research areas and its applications to develop the security system for high security areas
Possibility fuzzy c means clustering for expression invariant face recognitionIJCI JOURNAL
Face being the most natural method of identification for humans is one of the most significant biometric
modalities and various methods to achieve efficient face recognition have been proposed. However the
changes in face owing to different expressions, pose, makeup, illumination, age bring about marked
variations in the facial image. These changes will inevitably occur and they can be controlled only till a
certain degree beyond which they are bound to happen and will affect the face thereby adversely impacting
the performance of any face recognition system. This paper proposes a strategy to improve the
classification methodology in face recognition by using Possibility Fuzzy C-Means Clustering (PFCM).
This clustering technique was used for face recognition due to its properties like outlier insensitivity which
make it a suitable candidate for use in designing such robust applications. PFCM is a hybridization of
Possibilistic C-Means (PCM) and Fuzzy C-Means (FCM) clustering algorithms. PFCM is a robust
clustering technique and is especially significant for its noise insensitivity. It has also resolved the
coincident clusters problem which is faced by other clustering techniques. Therefore the technique can also
be used to increase the overall robustness of a face recognition system and thereby increase its invariance
and make it a reliably usable biometric modality.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Feature selection in multimodal
International Journal on Computational Sciences & Applications (IJCSA) Vol.5, No.1, February 2015
FEATURE SELECTION IN MULTIMODAL
AUTHENTICATION USING LU FACTORIZATION
WITH CSEAM MODEL
Y Suresh, K Pavan Kumar, PESN Krishna Prasad, BDCN Prasad
Prasad V. Potluri Siddhartha Institute of Technology,
Kanuru, Vijayawada-7, India
Abstract
Multimodal authentication is one of the prime concepts in current applications of real scenario. Various
approaches have been proposed in this aspect. In this paper, an intuitive strategy is proposed as a
framework for providing more secure key in biometric security aspect. Initially the features will be
extracted through PCA by SVD from the chosen biometric patterns, then using LU factorization technique
key components will be extracted, then selected with different key sizes and then combined the selected key
components using convolution kernel method (Exponential Kronecker Product - eKP) as Context-Sensitive
Exponent Associative Memory model (CSEAM). In the similar way, the verification process will be done
and then verified with the measure MSE. This model would give better outcome when compared with SVD
factorization[1] as feature selection. The process will be computed for different key sizes and the results
will be presented.
KEYWORDS
Feature selection, LU factorization, CSEAM Model, PCA by SVD, Exponential Kronecker Product (eKP),
Multimodal Authentication.
1.INTRODUCTION
In the recent scenario, biometric data verification is one of the critical tasks in access
control and forms a major building block of security. User identification is traditionally
based on something that the user knows (PIN/password) and something the user has (key/token/smart card).
However, these traditional credentials can be forgotten, disclosed, lost or stolen. Apart from these
drawbacks, such strategies also fail to differentiate reliably between an accredited and an unaccredited person [12].
Biometrics [4] is a scientific discipline that involves methods of identity verification or
identification based on the principle of measurable physiological or behavioural characteristics
such as a fingerprint, an iris pattern, a palm, a vein or a voice sample. From the security point of
view, human patterns are unique. The primary advantage of multimodal authentication strategies
over other approaches to authentication is that they process and combine two or more user patterns,
making use of physiological/behavioural characteristics.
Nowadays, biometric patterns have been adopted worldwide on a large scale and are used in different
sectors of public and private organizations. As specified in [12], biometric pattern authentication is a
layered model consisting of an identification mode and a verification mode [2, 3, 4]. The identification
process starts with the enrolment process and consists of the following steps:
1. Acquisition: Acquisition is the first step in a biometric authentication system. The user's sample is
obtained using an input device.
2. Creation of the characteristics: After the first step, the biometric measurements are processed. The
number of biometric samples depends upon the technology being used; sometimes a single metric is
sufficient, but in most cases multiple patterns are required. Features are extracted from these patterns.
3. Storage of the characteristics: After feature extraction, the feature patterns are stored on a card, on
a database server, on a workstation, or directly on the authentication terminal.
After enrolment, the user can use the system for authentication or identification. The
verification/testing process takes the following steps (a schematic sketch of this flow is shown after the list):
1. Acquisition(s): Biometric measurements are acquired in order to be compared with the enrolled
patterns.
2. Creation of new characteristics: New characteristics are created from the measurements of the previous step.
3. Comparison: The currently computed characteristics are compared with the characteristics
obtained during enrolment.
4. Decision: The final step in the test phase is the yes/no decision based on a threshold.
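To make the verification flow concrete, the short sketch below walks through steps 1-4 for a single probe sample. It is an illustrative outline only: the helper names (acquire, characteristics, verify), the simple normalisation, the MSE-style comparison and the threshold value are assumptions made for this sketch, not details prescribed by the paper.

```python
import numpy as np

def acquire(sensor_reading):
    # Step 1: acquisition - obtain the user's sample from an input device (stubbed here)
    return np.asarray(sensor_reading, dtype=float)

def characteristics(sample):
    # Step 2: creation of the characteristics - a simple normalisation stands in
    # for the real feature-extraction pipeline described later in the paper
    return (sample - sample.mean()) / (sample.std() + 1e-9)

def verify(probe_reading, enrolled_features, threshold=0.05):
    # Step 3: comparison - score the new characteristics against the enrolled ones
    probe = characteristics(acquire(probe_reading))
    error = np.mean((probe - enrolled_features) ** 2)
    # Step 4: decision - yes/no based on a threshold
    return error <= threshold

enrolled = characteristics(np.random.rand(64))   # stored at enrolment
print(verify(np.random.rand(64), enrolled))      # True / False decision
```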
2.MULTIMODAL AUTHENTICATION SYSTEM
In a multimodal authentication system, the user has to enrol two to three patterns for
identification/verification. Such patterns may be noisy at capture and may therefore be more error
prone. In order to resolve the noise and inconsistency, vector-logic-based models have been
considered for minimizing the error rate.
From the authors' perspective, different strategies, methodologies and models have been proposed
for multimodal systems. Among the first multimodal biometric systems was user authentication based on
speech and face patterns [5]. Brunelli and Falavigna [5] used hyperbolic tangent (tanh) normalization and
a weighted geometric average for the fusion of voice and face biometrics, and also proposed a hierarchical
combination scheme for a multimodal identification system. Kittler et al. [6] experimented with different
fusion strategies for human face and voice patterns, including statistical and algebraic metrics, and
concluded that the algebraic sum metric is not significantly affected by statistical estimation errors,
which demonstrates its superiority and importance. Hong and Jain [7] proposed a multimodal system in which
fingerprint matching is applied after pruning the database via face pattern matching. Ben-Yacoub et al. [8]
considered several fusion strategies, such as tree classifiers, multilayer perceptrons and support vector
machines, for face and voice biometrics. Ross and Jain [9] fused the biometric traits of fingerprint, face
and hand geometry with sum, decision tree and linear discriminant approaches, and concluded that the sum
rule performs outstandingly when compared with the others.
From these observations, a vector logic approach is considered for the multimodal biometric
authentication system. In this paper, face images and fingerprint patterns are chosen due to their
complementary characteristics, physiology and behaviour.
3.PROPOSED MODEL
The effectiveness of a biometric system using two-level multimodal authentication is discussed in [1]
for the encoding and decoding of biometric patterns. In the present framework, at level one the input
patterns are normalized with PCA by SVD and keys are selected, using LU factorization, from the features
extracted from the sensed data; at level two the generated keys are combined using the computational
model (the CSEAM model [1]); and at level three the decision process is analysed using error-estimation
strategies. An overview of the computational framework for the proposed model is given in Fig. 1.
Figure 1. Framework of the Proposed Model
Figure 2. Block diagram of the Proposed Model.
3.1.Feature Extraction Using PCA by SVD
3.1.1 Principal Component Analysis (PCA)
In PCA [10], the directions of greatest variation in the data are the eigenvectors corresponding to the
largest eigenvalues of the covariance matrix; projecting the data patterns onto these directions may,
however, discard important non-second-order information. If W is the matrix of eigenvectors sorted by
decreasing eigenvalue, then the PCA transformation of the data X is Y = W^T X. The eigenvectors are called
principal components [11]. By selecting the first d rows of Y, the data are projected from n down to d
dimensions.
3.1.2 Spectral decomposition of a square matrix:
Any real symmetric matrix A of size m×m may be decomposed uniquely as [10]

A = U Λ U^T,

where U is an orthonormal matrix satisfying U^T U = U U^T = I, and Λ is a diagonal matrix. The
eigenvectors of A are the columns of U, and the eigenvalues are the diagonal elements of Λ.
3.1.3 Singular Value Decomposition:
A matrix A of size m×n can be decomposed as [10]

A = U D V^T,

where U is a column-orthogonal matrix of size m×n whose columns are eigenvectors of A A^T, i.e.
U^T U = I; V is an orthogonal matrix of size n×n whose columns are eigenvectors of A^T A, i.e.
V^T V = V V^T = I; and D is a diagonal matrix of size n×n whose diagonal entries are the singular values.
3.1.4 PCA by SVD:
The main advantage of PCA by SVD is that the projection of the data onto the principal components is
obtained directly, so the best features are extracted with less computational complexity. Let X be an
n×m data matrix and decompose it using the SVD [10] as

X = U D V^T,

and write the covariance matrix as

C = (1/m) X X^T.

In this case U is an n×n matrix. The SVD [10] orders the singular values in descending order. If n < m,
the first n columns of U correspond to the sorted eigenvalues of C, and if m < n, the first m columns
correspond to the sorted non-zero eigenvalues of C. The transformed data patterns can thus be written as

Y = U^T X = Ĩ D V^T,

where D is the diagonal matrix of singular values and Ĩ is a matrix of size n×m which is one on the
diagonal and zero everywhere else.
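As a minimal illustration of the PCA-by-SVD projection described above, the NumPy sketch below computes Y = U^T X and keeps the first d rows; the function name, the mean-centring of X and the example sizes are assumptions made for the sketch rather than details taken from the paper.

```python
import numpy as np

def pca_by_svd(X, d):
    # X is an n x m data matrix (n features, m samples); centre each feature
    Xc = X - X.mean(axis=1, keepdims=True)
    # SVD of the centred data: Xc = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Transformed data Y = U^T X; the first d rows are the projections
    # onto the d leading principal components
    Y = U.T @ Xc
    return Y[:d, :], U[:, :d], s[:d]

X = np.random.rand(64, 10)            # e.g. 64 pixel features, 10 fused samples
Y, U_d, s_d = pca_by_svd(X, d=8)
print(Y.shape)                        # (8, 10)
```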
4.FEATURE SELECTION USING LU FACTORIZATION
An intuitive model is described in which the stimulus associations of a matrix depend on the context of
multiplicative vector logic. In this context, a remarkable operation transforms the real matrix over the
actual computed data into a triangular matrix. This transformation, promoted by the context, leads to the
LU decomposition of the matrix representation, a well-established mathematical method in the fields of
image and signal processing.

The meaningful event in this representation is the activity of a large group of data patterns, naturally
represented by high-dimensional vectors or dyadic matrices. These representations are processed by
associative memories (associations of input-output pattern pairs), which store the information in a
distributed fashion, superposed over the memory coefficients.
In this paper, the associative memory is computed using the keys generated from the fusion patterns by
LU factorization,

A = L_A U_A,   B = L_B U_B,

where A and B are the fusion patterns of the pair of biometric samples (face and fingerprint) and the
fusion is done using PCA. The importance of this factorization here is to extract the best features while
eliminating noise and lighting effects using Gaussian-distribution models. In the computation, the
exponential Kronecker product acting as an associative memory combines the two keys obtained from the LU
factorization and generates a memory representation M over K, where K is the K-th order kernel that uses
the Kronecker product as a convolution product. In this associative model, the memory is a matrix over
which a multiplicity of vector associations is superimposed.
In general, A ⊗ B represents the Kronecker product,

A ⊗ B = [a_ij B],

where A is one key pattern from the first fusion set and B is the key pattern from the second fusion set,
with i-th order convolution.
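The following is a small sketch of how keys of a chosen size could be drawn from the LU factors of the fusion patterns A and B using scipy.linalg.lu. Taking the leading k x k block of L·U as the key is an illustrative choice made only for this sketch, since the paper does not spell out the exact selection rule.

```python
import numpy as np
from scipy.linalg import lu

def lu_key(fusion_pattern, k):
    # Factor the fusion pattern as P L U and build a k x k key component
    P, L, U = lu(fusion_pattern)          # fusion_pattern = P @ L @ U
    return (L @ U)[:k, :k]                # leading k x k block as the key (assumption)

A = np.random.rand(64, 64)                # fusion pattern from the face sample
B = np.random.rand(64, 64)                # fusion pattern from the fingerprint sample
K_A = lu_key(A, 16)                       # 16x16 key, cf. the key sizes in Table 1
K_B = lu_key(B, 16)
```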
5.EXPONENTIAL KRONECKER PRODUCT
5.1. Exponential Kronecker Product (eKP):
The exponential function [13] possesses some specific properties in connection with tensor operations.
Assuming that A and B are two matrices, the eKP [1] is described as the matrix exponential of their
Kronecker product,

e^(A ⊗ B) = Σ_{k≥0} (A ⊗ B)^k / k!.

The eKP [13,14,15] has excellent properties that support the concept of vector logic; in particular,

e^(A ⊗ I) = e^A ⊗ I   and   e^(I ⊗ B) = I ⊗ e^B,

which is a special property of the Kronecker calculus.
In this paper, the eKP is represented as the Context-Sensitive Exponent Associative Memory model (CSEAM).
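To illustrate the eKP numerically, the sketch below computes the matrix exponential of a Kronecker product with scipy.linalg.expm and checks the special Kronecker-calculus property e^(A ⊗ I) = e^A ⊗ I quoted above. Treating expm(kron(K_A, K_B)) as the CSEAM memory M is a simplified reading used only for this sketch, not the paper's exact construction.

```python
import numpy as np
from scipy.linalg import expm

def ekp(A, B):
    # Exponential Kronecker product: matrix exponential of A kron B
    return expm(np.kron(A, B))

K_A = 0.1 * np.random.rand(4, 4)          # small keys keep the example well-conditioned
K_B = 0.1 * np.random.rand(4, 4)
I = np.eye(4)

M = ekp(K_A, K_B)                         # memory representation (illustrative)

# Special property of Kronecker calculus: e^(A kron I) = e^A kron I
print(np.allclose(expm(np.kron(K_A, I)), np.kron(expm(K_A), I)))   # True
```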
6.DECISION STRATEGY - MSE
6.1. Decision Strategy
In this strategy, the Mean Square Error (MSE) is used for verification of the proposed framework, with the
support of the CSEAM model, at the training and testing levels.
Mean Square Error
The Mean Square Error of an estimator θ̂ of a parameter θ is the function of X defined by

MSE(θ̂) = E[(θ̂ − θ)²],

and it is denoted MSE(θ̂). The MSE is the sum of the variance and the squared bias of the predictions,

MSE(θ̂) = Var(θ̂) + (Bias(θ̂))².

The estimator used here averages the squared element-wise differences between the training and testing
memories M and M_T:

MSE = (1/(p·q)) Σ_{i=1..p} Σ_{j=1..q} (M_ij − (M_T)_ij)².
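The element-wise estimator above can be computed directly; the sketch below compares a training memory M with a testing memory M_T, where the example matrices are placeholders rather than real CSEAM outputs.

```python
import numpy as np

def mse(M, M_T):
    # Mean square error between the training and testing memories
    M, M_T = np.asarray(M, dtype=float), np.asarray(M_T, dtype=float)
    return np.mean((M - M_T) ** 2)

M = np.random.rand(16, 16)                     # memory from enrolment (placeholder)
M_T = M + 0.01 * np.random.randn(16, 16)       # memory from a test sample (placeholder)
print(mse(M, M_T))                             # small value for a genuine match
```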
7.EXPERIMENT ANALYSIS
Several experiments have been conducted with this framework on the standard datasets from [16, 17] for
recognition/verification. In the decision process, the Mean Square Error (MSE) [22] is considered for the
testing/training patterns; the error rate is the difference between the training and testing memories M
and M_T computed using the CSEAM model.

The experimental results on the chosen data patterns are listed in Table 1 for the various key sizes
selected from the extracted features using the proposed framework, namely 8x8, 16x16, ..., 64x64, on
similar and dissimilar face and fingerprint patterns.
Matching and non-matching patterns using FMR and FNMR decision method
Considering the pose variations in the multimodal authentication system, the FNMR and FMR metrics have
been computed for the features selected using LU factorization with the CSEAM model. In the pattern
acquisition process, error-free data are taken from the standard benchmark datasets, so the FAR/FMR and
FRR/FNMR pairs are equivalent.
FMR [1] is the probabilistic estimate of the system incorrectly matching the input data to non-matching
data in the chosen dataset. The FMR metric is the ratio of matches whose error value is equal to or less
than the threshold d, i.e. MSE ≤ d, where the threshold d is taken from the set of possible values of the
global error.

Similarly, FNMR [1] is the probabilistic estimate of the system failing to match the input data to
matching patterns from the dataset; it measures the percentage of valid inputs that are incorrectly
rejected. The FNMR is computed as the percentage of matches whose error is greater than the threshold d,
i.e. MSE > d.
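Following the two definitions above, FMR and FNMR can be estimated from the MSE values of matching (genuine) and non-matching (impostor) attempts at a threshold d. The sketch below is illustrative only, and the sample error values are merely of the same order as those reported in Table 1.

```python
import numpy as np

def fmr_fnmr(genuine_mse, impostor_mse, d):
    genuine = np.asarray(genuine_mse, dtype=float)
    impostor = np.asarray(impostor_mse, dtype=float)
    fmr = np.mean(impostor <= d)     # non-matching attempts wrongly accepted (MSE <= d)
    fnmr = np.mean(genuine > d)      # matching attempts wrongly rejected (MSE > d)
    return fmr, fnmr

genuine = [0.000106, 0.000130, 0.000066]       # illustrative genuine-match errors
impostor = [0.004774, 0.001630, 0.000893]      # illustrative impostor errors
print(fmr_fnmr(genuine, impostor, d=0.001))
```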
No   Key size   Similar, different poses   Similar, different poses   Dissimilar, different poses   Dissimilar, different poses
                (SVD with LU)              (SVD)                      (SVD with LU)                 (SVD)
1    8x8        0.001044                   0.0162                     0.011948                      0.0488
2    16x16      0.000106                   0.0011                     0.004774                      0.0074
3    24x24      0.000273                   0.000450                   0.002030                      0.0067
4    32x32      0.000130                   0.000444                   0.001630                      0.0033
5    40x40      0.000066                   0.000392                   0.000893                      0.0027
6    48x48      0.000049                   0.000271                   0.000777                      0.0018
7    56x56      0.000061                   0.000208                   0.000372                      0.0015
8    64x64      0.000043                   0.000182                   0.000459                      0.0013

Table 1: Mean Square Error (MSE) of various key sizes
Figure 3: Mean Square Error
From the analysis of the experimental results, it is observed that the 8x8 key size leads to rejections
even when similar multimodal data patterns are provided (reflected in the FMR).
8.CONCLUSIONS
In this paper, an intuitive framework is proposed using cognitive logic for the authentication/
verification of biometric patterns in multimodal authentication. In this framework the keys are selected
from the extracted features of the fusion patterns using the LU factorization technique and are then
combined using the Context-Sensitive Exponent Associative Memory model based on the exponential Kronecker
product (CSEAM). This model is one of the attractive strategies in the domain of cognitive logic. The
framework returns a better outcome on the selected datasets [16, 17] and provides stronger security in
terms of time and space, since it uses the eKP within the cognitive logic. From the experimental results
it is observed that the key size should be larger than 8x8: while extracting the features and applying
PCA by SVD, some of the features might be lost, and in such cases the fusion data will be rejected by the
proposed framework. For the remaining cases the proposed framework returns a better outcome. The framework
was also compared with key-pattern selection using plain SVD factorization, and it gives more accurate and
efficient recognition.
REFERENCES
[1] PESN Krishna Prasad, K. Pavan Kumar, MV Ramakrishna and BDCN Prasad, “Fusion Based
Multimodal authentication in Biometrics using Context-Sensitive Exponent Associative Memory
Model: A Novel Approach”, CS&IT Conference proceedings pp. 81-90, 2013,
DOI:10.5121/csit/2013.3608
[2] Mary Lourde R and Dushyant Khosla, Fingerprint Identification in Biometric Security Systems,
International Journal of Computer and Electrical Engineering, Vol. 2, No. 5, 852-855, 2010.
[3] T. De Bie, N. Cristianini, R. Rosipal, Eigenproblems in Pattern Recognition, Handbook of
Computational Geometry for Pattern Recognition, Computer Vision, Neurocomputing and Robotics,
Springer-Verlag, Heidelberg, 2004
[4] Jain, A.K.; Ross, A., Prabhakar, S. An Introduction to Biometric Recognition. IEEE Trans. Circuits
Syst. Video Technol., 14, 4-20, 2004.
[5] R. Brunelli and D. Falavigna, “Person identification using multiple cues,” IEEE Trans. PAMI, vol.
17, no. 10, pp. 955-966, Oct. 1995.
[6] J. Kittler, M. Hatef, R.P.W. Duin, and J. Matas, “On Combining Classifiers”, IEEE Trans. PAMI, vol.
20, no. 3, pp. 226-239, 1998
[7] L. Hong and A.K. Jain, “Integrating Faces and Fingerprints for Personal Identification”, IEEE Trans.
PAMI, vol. 20, no. 12, pp. 1295-1307, 1998.
[8] S. Ben-Yacoub, Y. Abdeljaoued, and E. Mayoraz, “Fusion of Face and Speech Data for Person
Identity Verification”, IEEE Trans. Neural Networks, vol. 10, no. 5, pp. 1065-1075, 1999
[9] A. Ross and A.K. Jain, “Information Fusion in Biometrics”, Pattern Recognition Letters, vol. 24, no.
13, pp. 2115-2125, 2003
[10] Rasmus Elsborg Madsen, Lars Kai Hansen and Ole Winther, “Singular Value Decomposition and
Principal Component Analysis”, February 2004
[11] Matthew A. Turk, Alex P. Pentland, “Face Recognition using Eigen Faces”, IEEE 1991
[12] Vaclav Matyas, Zdenek Riha, “Biometric Authentication - Security and Usability”,
http://www.fi.muni.cz/usr/matyas/
[13] John W. Brewer, Kronecker Products and Matrix Calculus in System Analysis, IEEE Transactions on
Circuits and Systems, Vol. 25, No. 9, 1978
[14] Wolfgang Hackbusch, Boris N. Khoromskij, Hierarchical Tensor-Product Approximations, 1-26.
[15] Lubomír Brančík, Matlab Programs for Matrix Exponential Function Derivative Evaluation
[16] http://www.face-rec.org/databases/
[17] http://www.advancedsourcecode.com/fingerprintdatabase.asp
[18] Izquierdo-Fuente, A.; del Val, L.; Jiménez, M.I.; Villacorta, J.J., “Performance Evaluation of a
Biometric System Based on Acoustic Images”, Sensors, 11, 9499-9519, 2011.