Currently, there is growing interest in the scientific and industrial communities in methods that address the problem of fiducial point detection in human faces. Some methods use SVMs for classification, but we observed that some formulations of the underlying optimization problems were not discussed. In this article, we investigate the performance of the C-SVC mathematical formulation when applied to a fiducial point detection system. Furthermore, we explore new parameters for training the proposed system. The performance of the proposed system is evaluated on a fiducial point detection problem, and the results demonstrate that the method is competitive.
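The C-SVC formulation referred to above is the standard soft-margin SVM, whose C parameter trades margin width against slack penalties. A minimal sketch of tuning it, where the blob data and parameter grid are illustrative assumptions rather than the authors' setup:

```python
# Hedged sketch: C-SVC with a small grid search over C and gamma on a
# toy two-class problem standing in for "fiducial" vs "non-fiducial" patches.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# C controls the trade-off between a wide margin and misclassification penalties.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": [0.1, 1]}, cv=3)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 2))
```

The grid values here are placeholders; in practice the training parameters the article explores would replace them.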
In this paper, person identification is performed on sets of facial images. Each facial image is treated as a scattered point of a logistic regression, and the vertical distance between the scattered point and the regression line is used as a parameter to determine whether two images show the same person. The ratio of Euclidean distances (measured in pixels on grayscale images using the 'imtool' utility of Matlab 13.0) between nasal and eye points is also determined, and the variance of this ratio serves as a second identification parameter. The approach is combined with the ghost image of Principal Component Analysis, where the mean square error and the signal-to-noise ratio (SNR, in dB) are used as detection parameters. The combination of the three methods improves accuracy compared with each method individually.
FREE-REFERENCE IMAGE QUALITY ASSESSMENT FRAMEWORK USING METRICS FUSION AND D... (sipij)
This paper focuses on no-reference image quality assessment (NR-IQA) metrics. In the literature, a wide range of algorithms have been proposed to automatically estimate the perceived quality of visual data; however, most of them cannot effectively quantify the various degradations and artifacts that an image may undergo. Merging diverse metrics operating in different information domains can therefore be expected to yield better performance, which is the main theme of the proposed work. In particular, the metric proposed in this paper is based on three well-known NR-IQA objective metrics that depend on natural scene statistical attributes from three different domains to extract a vector of image features. A Singular Value Decomposition (SVD) based dominant-eigenvectors method is then used to select the most relevant image quality attributes, which serve as input to a Relevance Vector Machine (RVM) to derive the overall quality index. Validation experiments are divided into two groups: in the first, the learning process (training and test phases) is applied to a single image quality database, whereas in the second, training and test phases are carried out on two distinct datasets. The obtained results demonstrate that the proposed metric performs very well in terms of correlation, monotonicity, and accuracy in both scenarios.
Multimodal authentication is one of the prime concerns in current real-world applications, and various approaches have been proposed. In this paper, an intuitive strategy is proposed as a framework for providing a more secure key in biometric security. First, features are extracted through PCA by SVD from the chosen biometric patterns; key components are then extracted using the LU factorization technique, selected at different key sizes, and combined using a convolution kernel method (the Exponential Kronecker Product, eKP) as a Context-Sensitive Exponent Associative Memory model (CSEAM). The verification process proceeds in a similar way and is assessed with the MSE measure. This model gives better outcomes than SVD factorization [1] for feature selection. The process is computed for different key sizes and the results are presented.
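The linear-algebra steps of this pipeline can be sketched as follows. This is a simplified illustration: a random matrix stands in for a biometric pattern, and a plain Kronecker product replaces the exponential Kronecker product (eKP), so the sizes and operators are assumptions rather than the paper's exact CSEAM construction.

```python
# Hedged sketch: PCA via SVD -> LU factorization for key components ->
# Kronecker product to combine selected components into a key block.
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(1)
pattern = rng.random((8, 8))          # stand-in for a biometric pattern

# PCA by SVD: keep the four leading singular directions as features.
U, s, Vt = np.linalg.svd(pattern)
features = U[:, :4] @ np.diag(s[:4])  # 8x4 feature matrix

# LU factorization extracts triangular "key components" (features = P @ L @ Uf).
P, L, Uf = lu(features)

# Combine two selected key components with a Kronecker product.
key = np.kron(L[:2, :2], Uf[:2, :2])
print(key.shape)
```

At verification time the same pipeline would be recomputed and compared against the stored key via MSE, per the abstract.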
MAGNETIC RESONANCE BRAIN IMAGE SEGMENTATION (VLSICS Design)
Segmentation of tissues and structures from medical images is the first step in many image analysis applications developed for medical diagnosis. With the growing research on medical image segmentation, it is essential to categorize the research outcomes and provide researchers with an overview of the existing segmentation techniques for medical images. In this paper, different image segmentation methods applied to magnetic resonance brain images are reviewed. The selection of methods includes sources from image processing journals, conferences, books, dissertations, and theses. The conceptual details of the methods are explained, while mathematical details are avoided for simplicity. Both broad and detailed categorizations of the reviewed segmentation techniques are provided. The state-of-the-art research is presented with emphasis on the developed techniques and the image properties they use. The methods described are not always mutually independent, so their interrelationships are also stated. Finally, conclusions are drawn summarizing commonly used techniques and their complexities in application.
Assessing Error Bound For Dominant Point Detection (CSCJournals)
This paper compares three error bounds that can be used to make dominant point detection methods non-parametric. The three error bounds are based on the error in slope estimation due to digitization; however, each of the three methods takes a different approach to calculating the bounds. This results in slightly different behaviors of the three methods and slightly different values. The impact of these error bounds is studied in the context of the non-parametric version of the widely used RDP method [1, 2] of dominant point detection. The recently derived error bound (the third in this paper), which depends on both the length and the slope of the line segment, is seen to provide the most balanced dominant point detection results for a variety of curves.
Automatic microaneurysm detection, Walter et al. (Ritu Lahoti)
These slides summarize a 2007 paper, 'Automatic Detection of Microaneurysms in Color Fundus Images' by Walter et al. The paper addresses automatic detection of microaneurysms (MAs), which are the first detectable changes in diabetic retinopathy (DR). The green channel of the color fundus images, being the most contrasted channel, is used. The algorithm includes pre-processing, MA candidate detection, feature extraction, classification, and comparison with ground truth to evaluate the performance of the classifier model.
Matlab code implementing automatic microaneurysm detection is available at the following GitHub link:
https://github.com/ritulahoti/Automatic-Microaneurysm-Detection-from-Color-Fundus-Images-in-Diabetic-Retinopathy
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class... (Waqas Tariq)
Input selection is one of the most substantial components of classification algorithms for data mining and pattern recognition problems, since even the best classifier will perform badly if the inputs are not selected well. Big data and computational complexity are the main causes of poor performance and low accuracy in classical classifiers; in other words, the complexity of a classifier method is inversely proportional to its classification efficiency. For this purpose, two hybrid classifiers have been developed by cascading type-1 and type-2 fuzzy c-means clustering with a classifier. In the proposed scheme, a large number of data points is reduced by fuzzy c-means clustering before being fed to a classifier as inputs. The aim of this study is to investigate the effect of fuzzy clustering on well-known and useful classifiers such as artificial neural networks (ANN) and support vector machines (SVM). The positive effects of the proposed algorithms were then investigated on different data sets.
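The reduce-then-classify idea can be sketched as follows. For a dependency-light illustration, ordinary k-means stands in for fuzzy c-means; the principle of shrinking a large training set to cluster centres before feeding a classifier is the same.

```python
# Hedged sketch: cluster each class, then train the classifier on
# cluster centres only (k-means here as a stand-in for fuzzy c-means).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X0 = rng.normal(0, 1, (200, 2))   # class 0 points
X1 = rng.normal(4, 1, (200, 2))   # class 1 points

# Reduce 200 points per class to 10 representative centres each.
c0 = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X0).cluster_centers_
c1 = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X1).cluster_centers_
Xr = np.vstack([c0, c1])
yr = np.array([0] * 10 + [1] * 10)

clf = SVC(kernel="rbf").fit(Xr, yr)   # trained on 20 points, not 400
X_test = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y_test = np.array([0] * 50 + [1] * 50)
print(round(clf.score(X_test, y_test), 2))
```

With well-separated classes, the centre-trained classifier loses little accuracy while training on a fraction of the data, which is the complexity argument the abstract makes.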
Texture classification of fabric defects using machine learning (IJECEIAES)
In this paper, a novel algorithm for automatic fabric defect classification is proposed, based on the combination of a texture analysis method and a support vector machine (SVM). Three texture methods were used and compared: GLCM, LBP, and LPQ. Each was combined with an SVM classifier, and the system was tested using the TILDA database. A comparative study of the performance and running time of the three methods was carried out. The obtained results show that LBP is the best method for recognition and classification, and that the SVM is a suitable classifier for such problems. We also demonstrate that some defects are easier to classify than others.
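A hedged sketch of the winning LBP-plus-SVM pipeline: a minimal 3x3 LBP histogram feeds a linear SVM. The LBP variant and the synthetic "fabric" patches below are illustrative assumptions, not the paper's TILDA setup.

```python
# Hedged sketch: minimal 8-neighbour LBP histograms classified by an SVM.
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(img):
    """8-neighbour LBP codes for interior pixels, as a normalized 256-bin histogram."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()

rng = np.random.default_rng(0)
# Smooth vs noisy patches stand in for defect-free vs defective texture.
smooth = [np.cumsum(rng.random((18, 18)), axis=1) for _ in range(20)]
noisy = [rng.random((18, 18)) for _ in range(20)]
X = np.array([lbp_histogram(p) for p in smooth + noisy])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))
```

Real pipelines typically use a rotation-invariant uniform LBP (e.g. skimage's `local_binary_pattern`); the hand-rolled version here only keeps the example self-contained.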
Template matching is a technique in computer vision used for finding a sub-image of a target image which matches a template image. This technique is widely used in object detection fields such as vehicle tracking, robotics, medical imaging, and manufacturing.
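A toy illustration of the idea: slide the template over the image and score each offset, here with a sum of absolute differences (SAD). Real systems often use normalized cross-correlation instead (e.g. OpenCV's `matchTemplate`), but SAD keeps the mechanics visible.

```python
# Toy template matching: exhaustive SAD scan; the best match is the
# offset with the lowest score.
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((20, 20))
template = image[5:9, 7:11].copy()    # the sub-image we want to find

th, tw = template.shape
H, W = image.shape
scores = np.empty((H - th + 1, W - tw + 1))
for i in range(scores.shape[0]):
    for j in range(scores.shape[1]):
        scores[i, j] = np.abs(image[i:i + th, j:j + tw] - template).sum()

best = tuple(int(v) for v in np.unravel_index(scores.argmin(), scores.shape))
print(best)  # → (5, 7), where the template was cut from
```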
Development of stereo matching algorithm based on sum of absolute RGB color d... (IJECEIAES)
This article presents a local-based stereo matching algorithm comprising the development of a block matching method with two edge-preserving filters in the framework. Fundamentally, the matching process consists of several stages that produce the disparity or depth map. The main challenge in the matching process is obtaining accurate corresponding points between the two images. Hence, this article proposes a stereo matching algorithm using an improved Sum of Absolute RGB Differences (SAD), gradient matching, and edge-preserving filters, namely the Bilateral Filter (BF), to boost accuracy. SAD and gradient matching are applied at the first stage to obtain a preliminary correspondence result; a BF then acts as an edge-preserving filter to remove noise from the first stage. A second BF is used at the last stage to improve the final disparity map and sharpen object boundaries. The experimental analysis and validation use the Middlebury standard benchmarking evaluation system. Based on the results, the proposed work increases accuracy while preserving object edges. To position the proposed work against currently available methods, quantitative measurements were compared with existing methods, showing that the work in this article performs much better.
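The SAD block-matching stage described above can be sketched as follows; the gradient term and the bilateral filtering are omitted for brevity, and a shifted random image is a synthetic stand-in for a rectified stereo pair.

```python
# Hedged sketch of SAD block matching: for each pixel, search a disparity
# range and keep the offset with the smallest window cost.
import numpy as np

rng = np.random.default_rng(0)
right = rng.random((10, 40))
true_disp = 3
left = np.roll(right, true_disp, axis=1)  # left view shifted by 3 px

win, max_disp = 2, 6
H, W = left.shape
disp = np.zeros((H, W), dtype=int)
for y in range(win, H - win):
    for x in range(max_disp + win, W - win):
        costs = [
            np.abs(left[y - win:y + win + 1, x - win:x + win + 1]
                   - right[y - win:y + win + 1, x - d - win:x - d + win + 1]).sum()
            for d in range(max_disp + 1)
        ]
        disp[y, x] = int(np.argmin(costs))

# Interior pixels should recover the known shift of 3.
print(np.bincount(disp[win:-win, max_disp + win:W - win].ravel()).argmax())
```

The article's contribution sits on top of this baseline: adding a gradient-matching cost and two bilateral-filter passes to clean the raw disparity map.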
Face recognition using selected topographical features (IJECEIAES)
This paper presents a new feature selection method to improve an existing feature type. Topographical (TGH) features provide a large set of features by assigning each image pixel to a related feature depending on the image gradient and Hessian matrix. This type of feature is handled by the proposed selection method: a face recognition feature selector (FRFS) is presented to inspect TGH features. FRFS relies on linear discriminant analysis (LDA) to evaluate feature efficiency, studying feature behavior over a dataset of images to determine each feature's level of performance. Each feature is then assigned its level of performance, which varies across the image. Based on a chosen threshold, the best-performing set of features is selected and classified by an SVM classifier.
Noise tolerant color image segmentation using support vector machine (eSAT Publishing House)
QUANTUM CLUSTERING-BASED FEATURE SUBSET SELECTION FOR MAMMOGRAPHIC I... (ijcsit)
In this paper, we present an algorithm for feature selection, labeled QC-FS (Quantum Clustering for Feature Selection), which performs the selection in two steps. First, the original feature space is partitioned into groups of similar features using the Quantum Clustering algorithm. Then a representative of each cluster is selected, using similarity measures such as the correlation coefficient (CC) and mutual information (MI); the feature that maximizes this information is chosen by the algorithm.
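The two-step structure of QC-FS can be illustrated as follows, with ordinary k-means standing in for quantum clustering (an assumption made purely to keep the sketch dependency-light): group the features, then keep the one per cluster with the highest mutual information with the class label.

```python
# Hedged sketch of cluster-then-select feature selection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 300
informative = rng.normal(0, 1, (n, 2))
y = (informative.sum(axis=1) > 0).astype(int)
# 6 features: two informative ones, two noisy copies, and pure noise.
X = np.column_stack([informative,
                     informative + rng.normal(0, 2, (n, 2)),
                     rng.normal(0, 1, (n, 2))])

# Step 1: cluster the *features* (columns) by their correlation profile.
corr = np.corrcoef(X.T)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(corr)

# Step 2: per cluster, keep the feature with maximal MI with the label.
mi = mutual_info_classif(X, y, random_state=0)
selected = [int(np.flatnonzero(labels == k)[np.argmax(mi[labels == k])])
            for k in range(3)]
print(sorted(selected))
```

The point of the two-step design is that redundant features land in the same cluster, so only one survivor per group is kept, chosen by its relevance to the class.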
Lightning Talk #9: How UX and Data Storytelling Can Shape Policy by Mika Aldabaux (singapore)
How can we take UX and Data Storytelling out of the tech context and use them to change the way government behaves?
Showcasing the truth is the highest goal of data storytelling. Because the design of a chart can affect the interpretation of data in a major way, one must wield visual tools with care and deliberation. Using quantitative facts to evoke an emotional response is best achieved with the combination of UX and data storytelling.
Succession “Losers”: What Happens to Executives Passed Over for the CEO Job?
By David F. Larcker, Stephen A. Miles, and Brian Tayan
Stanford Closer Look Series
Overview:
Shareholders pay considerable attention to the choice of executive selected as the new CEO whenever a change in leadership takes place. However, without an inside look at the leading candidates to assume the CEO role, it is difficult for shareholders to tell whether the board has made the correct choice. In this Closer Look, we examine CEO succession events among the largest 100 companies over a ten-year period to determine what happens to the executives who were not selected (i.e., the “succession losers”) and how they perform relative to those who were selected (the “succession winners”).
We ask:
• Are the executives selected for the CEO role really better than those passed over?
• What are the implications for understanding the labor market for executive talent?
• Are differences in performance due to operating conditions or quality of available talent?
• Are boards better at identifying CEO talent than other research generally suggests?
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE... (gerogepatton)
A support vector machine (SVM) learns the decision surface from two different classes of input points; in several applications, some of the input points are misclassified. In this paper, a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method to solve the bi-objective quadratic programming problem. An important contribution of the proposed model is that different efficient support vectors are obtained by changing the weighting values. Numerical examples give evidence of the effectiveness of the weighting parameters in reducing misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution among the generated efficient solutions.
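The weighting method the abstract relies on can be shown in miniature: scalarize two conflicting objectives with a weight w and sweep w to trace different efficient solutions. The toy quadratics below are illustrative stand-ins for the margin- and misclassification-type terms of the SVM program.

```python
# Hedged sketch of the weighting method for a bi-objective problem:
# minimize w*f1 + (1-w)*f2 over a grid, for several weights.
import numpy as np

f1 = lambda x: x ** 2          # stand-in for one objective
f2 = lambda x: (x - 2) ** 2    # stand-in for a conflicting objective

xs = np.linspace(-1, 3, 401)
efficient = []
for w in [0.0, 0.25, 0.5, 0.75, 1.0]:
    x_star = xs[np.argmin(w * f1(xs) + (1 - w) * f2(xs))]
    efficient.append(round(float(x_star), 2))
print(efficient)  # each weight yields a different efficient solution
```

An interactive procedure, as the paper proposes, would then let a decision maker pick the best compromise among these efficient solutions.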
Method of optimization of the fundamental matrix by technique speeded up rob... (IJECEIAES)
The purpose of determining the fundamental matrix (F) is to define the epipolar geometry and to relate two 2D images of the same scene or video series in order to recover the 3D scene. The problem we address in this work is the estimation of the localization error and the processing time. We start by comparing the following feature extraction techniques with respect to the number of detected points and correct matches under different image changes: Harris, features from accelerated segment test (FAST), scale-invariant feature transform (SIFT), and speeded-up robust features (SURF). We then merge the best techniques chosen by the objective function, which groups the descriptors by region in order to calculate F. Next, we apply the normalized eight-point algorithm, which also automatically eliminates outliers, to find the optimal F. Our optimization approach is tested on real images with different scene variations. The simulation results are good in terms of accuracy: the computation time of F does not exceed 900 ms, and the projection error is at most 1 pixel, regardless of the modification.
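The normalized eight-point step can be sketched as follows; the synthetic two-view geometry is purely for demonstration and, unlike the article's pipeline, no outlier elimination is performed.

```python
# Hedged sketch of the normalized eight-point algorithm for estimating
# the fundamental matrix F from point correspondences.
import numpy as np

def normalize(pts):
    """Translate points to their centroid and scale mean distance to sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return (T @ np.column_stack([pts, np.ones(len(pts))]).T).T, T

def eight_point(p1, p2):
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    # Each correspondence gives one row of A with A @ vec(F) = 0.
    A = np.column_stack([x2[:, 0:1] * x1, x2[:, 1:2] * x1, x1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)
    F = U @ np.diag([s[0], s[1], 0]) @ Vt     # enforce rank 2
    F = T2.T @ F @ T1                         # undo normalization
    return F / np.linalg.norm(F)

# Synthetic two-view geometry to exercise the function.
rng = np.random.default_rng(0)
Xw = np.column_stack([rng.uniform(-1, 1, (20, 3)) + [0, 0, 5], np.ones(20)])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[0.5], [0.1], [0.0]]])
p1 = (P1 @ Xw.T).T; p1 = p1[:, :2] / p1[:, 2:]
p2 = (P2 @ Xw.T).T; p2 = p2[:, :2] / p2[:, 2:]

F = eight_point(p1, p2)
hom = lambda p: np.column_stack([p, np.ones(len(p))])
residual = np.abs(np.einsum("ij,ij->i", hom(p2), hom(p1) @ F.T)).max()
print(residual < 1e-6, np.linalg.matrix_rank(F))
```

With noise-free synthetic correspondences the epipolar constraint x2ᵀ F x1 = 0 holds to numerical precision; the article's robust outlier handling matters once real SURF/FAST matches are used.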
The International Journal of Engineering and Science (The IJES)theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
A parsimonious SVM model selection criterion for classification of real-world ...o_almasi
This paper proposes and optimizes a two-term cost function consisting of a sparseness term and a generalized v-fold cross-validation term by a new adaptive particle swarm optimization (APSO). APSO updates its parameters adaptively based on a dynamic feedback from the success rate of the each particle’s personal best. Since the proposed cost function is based on the choosing fewer numbers of support vectors, the complexity of SVM models decreased while the accuracy remains in an acceptable range. Therefore, the testing time decreases and makes SVM more applicable for practical applications in real data sets. A comparative study on data sets of UCI database is performed between the proposed cost function and conventional cost function to demonstrate the effectiveness of the proposed cost function.
Generalization of linear and non-linear support vector machine in multiple fi...CSITiaesprime
Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. They belong to a family of generalized linear classifiers. In other terms, SVM is a classification and regression prediction tool that uses machine learning theory to maximize predictive accuracy. In this article, the discussion about linear and non-linear SVM classifiers with their functions and parameters is investigated. Due to the equality type of constraints in the formulation, the solution follows from solving a set of linear equations. Besides this, if the under-consideration problem is in the form of a non-linear case, then the problem must convert into linear separable form with the help of kernel trick and solve it according to the methods. Some important algorithms related to sentimental work are also presented in this paper. Generalization of the formulation of linear and non-linear SVMs is also open in this article. In the final section of this paper, the different modified sections of SVM are discussed which are modified by different research for different purposes.
Analytical study of feature extraction techniques in opinion miningcsandit
Although opinion mining is in a nascent stage of development but still the ground is set for
dense growth of researches in the field. One of the important activities of opinion mining is to
extract opinions of people based on characteristics of the object under study. Feature extraction
in opinion mining can be done by various ways like that of clustering, support vector machines
etc. This paper is an attempt to appraise the various techniques of feature extraction. The first
part discusses various techniques and second part makes a detailed appraisal of the major
techniques used for feature extraction
ANALYTICAL STUDY OF FEATURE EXTRACTION TECHNIQUES IN OPINION MININGcsandit
Although opinion mining is in a nascent stage of development but still the ground is set for dense growth of researches in the field. One of the important activities of opinion mining is to extract opinions of people based on characteristics of the object under study. Feature extraction in opinion mining can be done by various ways like that of clustering, support vector machines
etc. This paper is an attempt to appraise the various techniques of feature extraction. The first part discusses various techniques and second part makes a detailed appraisal of the major techniques used for feature extraction
Radial Basis Function Neural Network (RBFNN), Induction Motor, Vector control...cscpconf
Although opinion mining is in a nascent stage of development but still the ground is set for dense growth of researches in the field. One of the important activities of opinion mining is to
extract opinions of people based on characteristics of the object under study. Feature extraction in opinion mining can be done by various ways like that of clustering, support vector machines
etc. This paper is an attempt to appraise the various techniques of feature extraction. The first part discusses various techniques and second part makes a detailed appraisal of the major techniques used for feature extraction.
A Fuzzy Interactive BI-objective Model for SVM to Identify the Best Compromis...ijfls
A support vector machine (SVM) learns the decision surface from two different classes of the input points. In several applications, some of the input points are misclassified and each is not fully allocated to either of these two groups. In this paper a bi-objective quadratic programming model with fuzzy parameters is utilized and different feature quality measures are optimized simultaneously. An α-cut is defined to transform the fuzzy model to a family of classical bi-objective quadratic programming problems. The weighting method is used to optimize each of these problems. For the proposed fuzzy bi-objective quadratic programming model, a major contribution will be added by obtaining different effective support vectors due to changes in weighting values. The experimental results, show the effectiveness of the α-cut with the weighting parameters on reducing the misclassification between two classes of the input points. An interactive procedure will be added to identify the best compromise solution from the generated efficient solutions. The main contribution of this paper includes constructing a utility function for measuring the degree of infection with coronavirus disease (COVID-19).
A FUZZY INTERACTIVE BI-OBJECTIVE MODEL FOR SVM TO IDENTIFY THE BEST COMPROMIS...ijfls
A support vector machine (SVM) learns the decision surface from two different classes of the input points. In several applications, some of the input points are misclassified and each is not fully allocated to either of these two groups. In this paper a bi-objective quadratic programming model with fuzzy parameters is utilized and different feature quality measures are optimized simultaneously. An α-cut is defined to transform the fuzzy model to a family of classical bi-objective quadratic programming problems. The weighting method is used to optimize each of these problems. For the proposed fuzzy bi-objective quadratic programming model, a major contribution will be added by obtaining different effective support vectors due to changes in weighting values. The experimental results, show the effectiveness of the α-cut with the weighting parameters on reducing the misclassification between two classes of the input points. An interactive procedure will be added to identify the best compromise solution from the generated efficient solutions. The main contribution of this paper includes constructing a utility function for measuring the degree of infection with coronavirus disease (COVID-19).
IMPROVING SUPERVISED CLASSIFICATION OF DAILY ACTIVITIES LIVING USING NEW COST...csandit
The growing population of elders in the society calls for a new approach in care giving. By
inferring what activities elderly are performing in their houses it is possible to determine their
physical and cognitive capabilities. In this paper we show the potential of important
discriminative classifiers namely the Soft-Support Vector Machines (C-SVM), Conditional
Random Fields (CRF) and k-Nearest Neighbors (k-NN) for recognizing activities from sensor
patterns in a smart home environment. We address also the class imbalance problem in activity
recognition field which has been known to hinder the learning performance of classifiers. Cost
sensitive learning is attractive under most imbalanced circumstances, but it is difficult to
determine the precise misclassification costs in practice. We introduce a new criterion for
selecting the suitable cost parameter C of the C-SVM method. Through our evaluation on four
real world imbalanced activity datasets, we demonstrate that C-SVM based on our proposed
criterion outperforms the state-of-the-art discriminative methods in activity recognition.
Product defect detection based on convolutional autoencoder and one-class cla...IAESIJAI
To meet customer expectations and remain competitive, industrials try constantly to improve their quality control systems. There is hence increasing demand for adopting automatic defect detection solutions. However, the biggest issue in addressing such systems is the imbalanced aspect of industrial datasets. Often, defect-free samples far exceed the defected ones, due to continuous improvement approaches adopted by manufacturing companies. In this sense, we propose an automatic defect detection system based on one-class classification (OCC) since it involves only normal samples during training. It consists of three sub-models, first, a convolutional autoencoder serves as latent features extractor, the extracted features vectors are subsequently fed into the dimensionality reduction process by performing principal component analysis (PCA), then the reduced-dimensional data are used to train the one-class classifier support vector data description (SVDD). During the test phase, both normal and defected images are used. The first two stages of the trained model generate a low-dimensional features vector, whereas the SVDD classifies the new input, whether it is defect-free or defected. This approach is evaluated on the carpet images from the industrial inspection dataset MVTec anomaly detection (MVTec AD). During training, only normal images were used. The results showed that the proposed method outperforms the state-of-the-art methods.
Anomaly Detection in Temporal data Using Kmeans Clustering with C5.0theijes
Anomaly detection is a challenging problem in Temporal data .In this paper we have proposed an algorithm using two different machine learning techniques Kmeans clustering and C5.0 decision tree , where Euclidean distance is used to find the closest cluster for the data set and then decision tree is built for each cluster using C5.0 decision tree technique and the rules of decision tree is used to classify each anomalous and normal instances in the dataset .The proposed algorithm gives impressive classification accuracy in the experimented result and describe the proposed system of kmeans and C5.0 decision tree
INFLUENCE OF QUANTITY OF PRINCIPAL COMPONENT IN DISCRIMINATIVE FILTERINGcsandit
Discriminative filtering is a pattern recognition technique which aim maximize the energy of
output signal when a pattern is found. Looking improve the performance of filter response, was
incorporated the principal component analysis in discriminative filters design. In this work, we
investigate the influence of the quantity of principal components in the performance of
discriminative filtering applied to a facial fiducial point detection system. We show that quantity
of principal components directly affects the performance of the system, both in relation of true
and false positives rate.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
24 Computer Science & Information Technology (CS & IT)
There are many recent papers on this theme. For example, the authors of [2] propose a face
recognition subsystem framework that uses fiducial point detection. The detection of these points
combines two techniques: the first uses Gabor filter coefficients for local detection, and the
second uses a combination of human face anthropometric measurements. The system proposed in
[8] explores the same problem; its authors use classifiers based on IPD (Inner Product Detector)
correlation filters. In [1], the authors developed robust filters, designed using principal
components, to investigate the same problem.
In this article, we study the performance of the SVM mathematical formulation called C-SVC
(Support Vector Classification) [5] for fiducial point detection. The main contribution of this
paper is the investigation of parameters that have not been used before. For performance
evaluation, the classifiers were designed based on C-SVC. The training algorithm was performed
using eleven (11) fiducial points on a database composed of 503 images (a subset of the BioID
database [7]). The performance of the system with the C-SVC classifier was compared with similar
methods: two using discriminative filtering and one using a linear SVM [1]. The results
demonstrate that the proposed method has competitive performance and, in some cases, outperforms
the others.
The remainder of this paper is organized as follows. In the next section, we present a
description of the SVM. Section 3 presents the proposed system, with emphasis on the design of
the classifiers, the optimization of the parameters of different types of SVM, the parameters
chosen for training and testing, and the results with comments. Finally, conclusions are drawn
in Section 4.
2. SUPPORT VECTOR MACHINES (SVM)
The SVM, proposed by Vapnik [4], is used to solve classification and regression problems. The
SVM algorithm provides an optimal separating hyperplane with maximum margin. The SVM can also be
formulated to tolerate mislabelled samples; in this case, the technique is called soft margin
[5]. The soft margin generalizes the SVM method to deal with classes that are not linearly
separable. The mathematical formulation can be written as follows. First, consider the
hyperplane constraints

yi (w^t xi + b) ≥ 1 − ξi,  ξi ≥ 0,  i = 1, …, n,  (1)

where ξi is a non-negative slack variable, yi is the class label, n is the total number of class
elements and the superscript t denotes the transpose of the vector. To maximize the
class-separating margin and obtain the optimal hyperplane, it is necessary to solve the
optimization problem given by

min (1/2) w^t w + C Σi ξi,  subject to the constraints in (1).  (2)
In Equation (2), C represents the weight imposed on the soft margin. The implementation of this
optimization problem is known as C-SVC [5]. There are other SVM optimization problems that
differ in aspects such as the formulation of the optimization problem and the parameters used
[5]. The performance depends on the choice of parameters, so the search for the best set of
parameters is extremely important. In this work, we use C-SVC classifiers due to their linear
structure. Linear SVM classifiers have low computational cost, and therefore the categorization
of patterns is fast.
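The optimization in Equation (2) is solved in this paper with a library implementation of C-SVC; purely as an illustrative sketch, the same primal problem can be minimized directly by subgradient descent on the equivalent hinge-loss objective. The toy data, learning rate and iteration count below are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

def train_csvc_linear(X, y, C=1.0, lr=0.01, n_iter=2000):
    """Linear soft-margin SVM (the C-SVC primal of Eq. (2)), trained by
    subgradient descent on (1/2)||w||^2 + C * sum_i max(0, 1 - y_i(w.x_i + b))."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_iter):
        margins = y * (X @ w + b)
        viol = margins < 1                       # points with positive slack
        grad_w = w - C * (y[viol][:, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0, 1, -1)

# Toy example: two well-separated 2-D Gaussian clusters.
rng = np.random.default_rng(0)
Xp = rng.normal([2.0, 2.0], 0.3, (20, 2))
Xn = rng.normal([-2.0, -2.0], 0.3, (20, 2))
X = np.vstack([Xp, Xn])
y = np.array([1] * 20 + [-1] * 20)

w, b = train_csvc_linear(X, y, C=1.0)
acc = (predict(X, w, b) == y).mean()
print(round(acc, 2))
```

For well-separated clusters like these, the learned hyperplane should classify essentially all points correctly; real solvers use dedicated quadratic-programming routines rather than this plain gradient scheme.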
3. FIDUCIAL POINT DETECTION SYSTEM USING C-SVC CLASSIFIERS
3.1. Design of the C-SVC Classifiers
As mentioned in Section 2, there are several SVM formulations of the optimization problem. In
our research, we employ the optimization problem called C-SVC to design classifiers for a
fiducial point detection system. An important aspect is to investigate which set of parameters
produces the best performance of the C-SVC classifiers. In this work, we explore the following
parameters:
• Regularization parameter C with values equal to 2^n, for n = {−12, −9, −6, −3, 0, 3};
• Maximum number of iterations until convergence: 10000, 18000, 26000, 34000, 42000, 50000;
• Proportional weighting of the number of class elements (positive : negative): (1:1), (1:5),
(2:10), (10:500), (5:1), (10:2) and (500:10).
The ranges of parameter values were obtained empirically from preliminary results, in which we
observed that they have a significant influence on the performance of the fiducial point
detection system. Each combination of parameters from these ranges represents one experiment to
be explored; this approach is called a parameter grid [5]. In our research, we have an extensive
grid with 252 parameter combinations. We use ξi equal to 0.1.
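As a quick check of the grid size, the three ranges above can be enumerated directly: 6 values of C, 6 iteration limits and 7 class weightings give exactly the 252 combinations mentioned in the text.

```python
from itertools import product

# The three parameter ranges explored in Section 3.1 (values from the paper).
C_values = [2 ** n for n in (-12, -9, -6, -3, 0, 3)]          # 6 values
max_iters = [10000, 18000, 26000, 34000, 42000, 50000]        # 6 values
class_weights = [(1, 1), (1, 5), (2, 10), (10, 500),
                 (5, 1), (10, 2), (500, 10)]                  # 7 pairs

grid = list(product(C_values, max_iters, class_weights))
print(len(grid))  # 6 * 6 * 7 = 252 parameter combinations
```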
3.2. Fiducial Points Detection System using C-SVC Classifiers
The fiducial point detection system has two stages: training and testing. In both, all images
are pre-processed in a common step composed of two stages. First, we apply the Viola-Jones face
detection algorithm [11] to the image. After that, the face is scaled to 200×200 pixels, and
then we perform illumination correction [12].
In the training stage (see Figure 1), the image is first pre-processed. Next, we use a method to
select an elliptical region of interest with a high probability of containing the fiducial
point; the mathematical formulation can be seen in [1]. The elliptical region of interest can be
seen at the output of the Gaussian Priori Model block (Figure 1).

Next, we choose the blocks Bz, of dimension 13×13, that define the positive and negative
classes, where z represents the coordinates of the center of the block. Since negative blocks
are extremely numerous, we select only a restricted number of them; this selection is done using
an algorithm, called Salt and Pepper [13], that generates random two-dimensional coordinates.
The positive to negative ratio is 160/1. Next, we transform each block Bz (a matrix) into a
vector v(Bz) by concatenating the rows of the transposed matrix. Finally, we use the blocks of
the positive and negative classes to design the C-SVC classifier. It is important to note that
all images used in the test stage are different from the images used in the training stage,
i.e., the training and testing image sets are mutually exclusive.
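The vectorization step can be sketched as follows: concatenating the rows of the transposed 13×13 block is equivalent to reading the block column by column (a column-major flatten), yielding a 169-dimensional vector v(Bz). The NumPy snippet below is only an illustration of that transformation, not code from the paper.

```python
import numpy as np

# Stand-in for a 13x13 image block B_z (any values work for the illustration).
Bz = np.arange(13 * 13).reshape(13, 13)

v_by_transpose = Bz.T.reshape(-1)       # rows of B_z^t, concatenated
v_col_major = Bz.flatten(order="F")     # equivalent column-major flatten

print(np.array_equal(v_by_transpose, v_col_major))  # True
print(v_by_transpose.shape)                          # (169,)
```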
Figure 1. Block diagram for training the fiducial points detection system using C-SVC classifiers.
The testing stage, shown in Figure 2, can be described as follows: first, we pre-process the
image. Next, we use the same elliptical region of interest determined in the training stage.
Then each block Bz is processed using a sliding window, and the vector v(Bz) is obtained by
transforming the matrix Bz into a vector (identical to the transformation performed in
training). Finally, we use the C-SVC classifier found in training to label the vector v(Bz) as
positive or negative.
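The sliding-window scan of the test stage can be sketched as below. This is a simplified illustration that scans the whole 200×200 face with step 1 and omits the elliptical region-of-interest restriction; the block size and image size follow the text, everything else is an assumption.

```python
import numpy as np

def sliding_blocks(image, size=13, step=1):
    """Yield every size x size block B_z together with the coordinates z of
    its center, using the same vectorization as the training stage."""
    h, w = image.shape
    half = size // 2
    for r in range(0, h - size + 1, step):
        for c in range(0, w - size + 1, step):
            Bz = image[r:r + size, c:c + size]
            z = (r + half, c + half)
            yield z, Bz.T.reshape(-1)   # v(B_z)

# A 200x200 pre-processed face region (zeros stand in for pixel values).
img = np.zeros((200, 200))
blocks = list(sliding_blocks(img))
print(len(blocks))  # (200 - 13 + 1)^2 = 188^2 = 35344 candidate blocks
```

In the actual system only blocks whose centers fall inside the elliptical region of interest would be passed to the C-SVC classifier, which greatly reduces this candidate count.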
Figure 2. Block diagram for testing the fiducial points detection system using C-SVC classifiers.
3.3. Experiments
The C-SVC classifiers of the training and testing procedures (see Sections 3.1 and 3.2) were
designed using OpenCV (Open Source Computer Vision Library) [14]. OpenCV is written in C and C++
and runs on Windows, Linux and Mac OS. OpenCV was designed for computational efficiency and
provides specific mathematical functions and methods, for example in the signal processing and
machine learning areas.
In order to evaluate the proposed method, we use 11 fiducial points, as illustrated in Figure 3.
We use the system described in Section 3.2 for each fiducial point; thus, we have 11 different
fiducial point detection systems. The tests were made using cross-validation [15] with 7
partitions (a value commonly found in the literature [16]). In each experiment, we used two
disjoint sets for training and testing: of the total number of images in the database, 6/7 were
used for training and parameter verification and 1/7 to validate the performance of the
algorithm.
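A minimal sketch of the 7-partition split: dividing the 503 images into 7 nearly equal folds, each round uses one fold (about 1/7) for validation and the remaining six (about 6/7) for training. The exact partitioning used by the authors is not specified; this is one straightforward way to form the folds.

```python
def kfold_indices(n_items, k):
    """Split indices 0..n_items-1 into k contiguous folds of nearly equal
    size, as in the 7-fold cross-validation described in the text."""
    base, extra = divmod(n_items, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(503, 7)  # 503 BioID subset images, 7 partitions
print([len(f) for f in folds])  # [72, 72, 72, 72, 72, 72, 71]
```

In practice the image order would be shuffled before splitting so each fold mixes subjects and conditions.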
Figure 3. Fiducial points and a human face considered in the experiments.
The experiments were performed on a computer cluster with 30 cores, 4 GB of RAM and a 2.26 GHz
processor. Briefly, the BioID database [7] has 1521 grey-scale images of frontal human faces
from 23 different people, with a resolution of 384×286 pixels. The faces exhibit changes of
scale (for instance, some are close to the camera), lighting variations, and small rotations.
Some people wear glasses; others have a beard and/or moustache. The images in this database have
manual labels for 20 fiducial points. In this work, we use a BioID subset with 503 images,
containing only the frontal face images of individuals who do not wear glasses and do not have
moustaches or beards. These faces have scale and lighting variations, and small rotations. In
total, 8316 experiments were performed for the C-SVC classifier parameter design over the 11
fiducial points.
3.4. Results
The performance of the classifier is assessed by computing the True Positive (TP) and False
Positive (FP) rates. We consider a candidate a true positive when the distance between the
automatic and manual annotations is smaller than 10% of the face's intraocular distance (the
distance between the pupils).
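This acceptance criterion can be expressed as a small hypothetical helper (the function name and coordinate convention are ours, not from the paper):

```python
import math

def is_true_positive(detected, annotated, left_pupil, right_pupil, frac=0.10):
    """A detection counts as a true positive when its distance to the manual
    annotation is below `frac` (10%) of the intraocular distance."""
    iod = math.dist(left_pupil, right_pupil)
    return math.dist(detected, annotated) < frac * iod

# Example: pupils 60 px apart -> acceptance threshold of 6 px.
print(is_true_positive((105, 100), (100, 100), (70, 100), (130, 100)))  # True  (5 px < 6 px)
print(is_true_positive((108, 100), (100, 100), (70, 100), (130, 100)))  # False (8 px > 6 px)
```

Normalizing by intraocular distance makes the criterion invariant to the scale of the detected face.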
Figures 4, 5, 6, 7 and 8 show the simulation results for points 04, 05, 07, 08 and 10. The
graphs present the average TP and FP rates and their standard deviations over all folds, for the
whole parameter grid. Each number on the grid axis represents one parameter combination of the
set described in Section 3.1. To evaluate the graphs, we consider the best-performing regions to
be those with jointly higher TP and smaller FP rates. Note that, for all graphs, the first 84
combinations are the best-performing region of the parameter grid.
Figure 4. Grid of parameters for the point 04 (left eye pupil) from BioID dataset.
Figure 5. Grid of parameters for the point 05 (outer corner of left eye) from BioID dataset.
Figure 6. Grid of parameters for the point 07 (tip of nose) from BioID dataset.
Figure 7. Grid of parameters for the point 08 (left nostril) from BioID dataset.
Figure 8. Grid of parameters for the point 10 (left mouth corner) from BioID dataset.
In Table I, we show the results of the proposed method with the best-performing parameter set for the 11 fiducial points. The systems used in the SVM-L, DF and DF-PCA methods are similar to those presented in [1]. The results differ due to the post-processing method: in our research, post-processing considers a region of 10% around the fiducial point for the computation of true and false positives (TP and FP). As Table I shows, the proposed method outperforms DF for all fiducial points except point 07, and outperforms the linear SVM at points 0, 1, 2, 4, 5, 9 and 10. At points 0, 1, 3, 4, 5, 8, 9 and 10, its TP rate outperforms the DF-PCA method. We consider FP rates below 2% satisfactory; the proposed method meets this criterion at all fiducial points except point 09.
Table I. Performance of the proposed system for 11 points using BioID database.
4. CONCLUSION
In this paper, we evaluated the performance of the C-SVC formulation of the SVM applied to the detection of 11 fiducial points. In this research, we used C-SVC parameters unexploited in the literature. The supervised system was implemented in C++ using the OpenCV library. Considering all system parameters, we performed 8316 experiments. The parameter grids for 5 fiducial points were shown, and the best-performing configurations were compared with similar systems in the literature. From this set, two classifiers used a discriminative filter, with and without PCA, and one used a linear SVM. Evaluating the performance in terms of TP and FP rates using 7-fold cross-validation, we found that the proposed system presents good results. Finally, we note the relevance of this research to methods for fiducial point detection on human faces, given their large number of applications.
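The 7-fold cross-validation used in the evaluation can be sketched as a plain index splitter. A minimal sketch in Python (the paper's system is implemented in C++ with OpenCV; this illustration assumes samples are simply interleaved across folds):

```python
def kfold_indices(n_samples, k=7):
    """Yield (train, test) index lists for k-fold cross-validation; each
    fold serves once as the test set while the others form the training set."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

With 14 samples and k = 7, each of the 7 splits holds out 2 samples for testing and trains on the remaining 12.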
ACKNOWLEDGEMENT
We would like to thank Samsung for its financial support. This research was also supported by the FAPEAM and CAPES agencies.
REFERENCES
[1] W. S. da Silva Júnior, G. M. Araújo, E. A. B. da Silva and S. K. Goldenstein, "Facial Fiducial Points
Detection Using Discriminative Filtering on Principal Components". In: Proceedings of the IEEE
International Conference on Image Processing, September, pp. 2681-2684, 2010.
[2] S. Jahanbin, H. Choi and A.C. Bovik, "Passive Multimodal 2-D+3-D Face Recognition Using Gabor
Features and Landmark Distances". IEEE Transactions on Information Forensics and Security, vol.6,
pp.1287-1304, 2011.
[3] Du, Chunhua and Yang, Jie and Wu, Qiang and He, Xiangjian, "Locating Facial Landmarks by
Support Vector Machine-based Active Shape Model". International Journal of Intelligent Systems
Technologies and Application, vol. 10, n. 2, pp. 151-170, 2011.
[4] V. N. Vapnik, The Nature of Statistical Learning Theory. New York, NY, USA: Springer-Verlag New
York, Inc., 1998.
[5] C. Chang and C. Lin, "LIBSVM: A Library for Support Vector Machines". ACM Transactions on
Intelligent Systems and Technology, vol. 2, n. 3, 2011.
[6] K. Wu and S. Wang, "Choosing the Kernel Parameters for Support Vector Machines by the Inter-cluster in the Feature Space". Journal on Pattern Recognition, vol. 42, pp. 710-717, 2009.
[7] "Bioid Database", [Last access in May of 2013]. [Online]. Available in: http://www.bioid.com/
[8] G. M. Araujo, W. S. da Silva Júnior, E. A. B. da Silva, and S. K. Goldenstein, "Facial Landmarks
Detection based on Correlation Filter", In: Proceedings of the IEEE International Telecommunication
Symposium, Manaus, AM, Brazil, October 2010.
[9] "Cognitec Systems", [Last access in May of 2013]. [Online]. Available in: http://www.cognitec-
systems.de/
[10] A. Pradhan, "Support Vector Machine - A Survey". International Journal of Emerging technology and
Advanced Engineering, vol. 2, pp. 82-85, 2012.
[11] Viola, P. and Jones, M., "Robust Real-Time Object Detection". International Journal of Computer
Vision, vol. 57, n. 2, pp. 137-154, 2001.
[12] Xiaoyang, T. and Triggs, B., "Enhanced Local Texture Feature Sets for Face Recognition Under
Difficult Lighting Conditions". IEEE Transactions on Image Processing, vol. 19, n. 6, pp. 1635-1650,
2010.
[13] G. R. Arce, J. L. Paredes and J. Mullan, "Nonlinear Filtering for Image Analysis and Enhancement".
In: A. L. Bovik (Ed.), Handbook of Image & Video Processing, Academic Press, 2000.
[14] "Open Computer Vision Library - OpenCV", [Last access in May of 2013]. [Online]. Available in:
http://opencv.org/
[15] Reeves, S., "A Cross-Validation Framework for Solving Image Restoration Problems". Journal of
Visual Communication and Image Processing, vol. 3, pp. 433-445, 1992.
[16] Kohavi, R., "A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model
Selection", In: Proceedings of the International Joint Conference on Artificial Intelligence, August,
pp. 1137-1143, 1996.
Authors
Luiz Eduardo S. e Silva received the B.S. degree in Electrical Engineering from the Federal
University of Amazonas (UFAM), Manaus, AM, Brazil, in 2011. He is currently an M.Sc.
student at the same university, working on computer vision. The Support Vector
Machine is a technique applied in his pattern recognition research to classify
elements of interest. He is a professor at the Nokia Foundation, teaching Electrical Circuits and
Telecommunications. Automatic control, digital signal processing, machine learning
and mathematical morphology are also among his areas of interest.
Pedro Donadio de T. Júnior is a researcher at the Federal University of Amazonas, Brazil.
He received his Electrical Engineering degree in 2009 from the Federal University of Amazonas.
He is currently an M.Sc. student at the same university, working on computer
vision. Discriminative filtering is a technique applied in his pattern recognition research,
which aims to improve filter performance through parallel processing. Automatic control and
noise cancellation are also among his areas of interest.
Kenny V. Santos is a graduate student in Electrical Engineering at the Federal University of
Amazonas. He received his bachelor's degree in Electrical Engineering from the Federal University
of Amazonas in 2008. He currently works in the fields of computer vision and pattern
recognition, where his main interest is the application of digital signal processing to pattern
recognition.
Waldir S. S. Júnior received the B.S. degree in Electrical Engineering from the Federal
University of Amazonas (UFAM), Manaus, AM, Brazil, in 2000, and the M.S. and Ph.D.
degrees in Electrical Engineering from the Federal University of Rio de Janeiro
(COPPE/UFRJ), Rio de Janeiro, RJ, Brazil, in 2004 and 2010, respectively. Since
2006, he has been with the Federal University of Amazonas as a full professor. His
research interests are in the field of data compression.