The Application Of Bayes Ying-Yang Harmony Based Gmms In On-Line Signature Ve...ijaia
In this contribution, a Bayes Ying-Yang (BYY) harmony based approach to on-line signature verification is presented. In the proposed method, a simple but effective Gaussian Mixture Model (GMM) is used to represent each user's signature model, based on the prior information collected. Unlike earlier work, the Bayes Ying-Yang machine is combined with the harmony function to achieve Automatic Model Selection (AMS) during parameter learning for the GMMs, so that a better approximation of the user model is assured. Experiments on a database from the First International Signature Verification Competition (SVC 2004) confirm that the combined algorithm yields satisfactory results.
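The verification step of such a system reduces to a log-likelihood test against a per-user mixture. The sketch below uses a diagonal-covariance GMM with hypothetical parameters and a plain likelihood computation; it is illustrative only and does not implement the BYY harmony learning the paper describes.

```python
import math

def gmm_loglik(x, weights, means, variances):
    """Log-likelihood of one feature vector under a diagonal-covariance GMM.
    (A production system would use log-sum-exp to avoid underflow.)"""
    total = 0.0
    for w, mu, var in zip(weights, means, variances):
        log_p = 0.0
        for xi, mi, vi in zip(x, mu, var):
            log_p += -0.5 * math.log(2.0 * math.pi * vi) - (xi - mi) ** 2 / (2.0 * vi)
        total += w * math.exp(log_p)
    return math.log(total)

def verify(signature, model, threshold):
    """Accept a signature (a list of feature vectors) if its average
    log-likelihood under the claimed user's GMM exceeds a threshold."""
    scores = [gmm_loglik(x, *model) for x in signature]
    return sum(scores) / len(scores) >= threshold
```

The model tuple (weights, means, variances) and the decision threshold would in practice come from enrollment data; all numbers used with this sketch are made up.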
On comprehensive analysis of learning algorithms on pedestrian detection usin...UniversitasGadjahMada
Despite the surge of deep learning, deploying deep learning-based pedestrian detection in real systems faces hurdles, mainly due to its huge resource usage, so classical feature-based detection systems remain a feasible option. There have been many efforts to improve the performance of pedestrian detection systems. Among the many feature sets, the Histogram of Oriented Gradients has proven very effective for person detection. In this research, various machine learning algorithms are investigated for person detection and evaluated to obtain the optimal accuracy and speed of the system.
IRJET- Performance Evaluation of Various Classification AlgorithmsIRJET Journal
This document evaluates the performance of various classification algorithms (logistic regression, K-nearest neighbors, decision tree, random forest, support vector machine, naive Bayes) on a heart disease dataset. It provides details on each algorithm and evaluates their performance based on metrics like confusion matrix, precision, recall, F1-score and accuracy. The results show that naive Bayes had the best performance in correctly classifying samples with an accuracy of 80.21%, while SVM had the worst at 46.15%. In general, random forest and naive Bayes performed best according to the evaluation.
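The metrics listed derive directly from confusion-matrix counts; a minimal helper (independent of the paper's dataset) makes the relationships explicit.

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1 and accuracy from binary confusion-matrix counts."""
    precision = tp / (tp + fp)                    # of predicted positives, how many correct
    recall = tp / (tp + fn)                       # of actual positives, how many found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    accuracy = (tp + tn) / (tp + fp + fn + tn)    # overall fraction correct
    return precision, recall, f1, accuracy
```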
A Combined Approach for Feature Subset Selection and Size Reduction for High ...IJERA Editor
Selection of relevant features from a given feature set is one of the important issues in data mining and classification. In general, a dataset may contain many features, but the whole set is not necessarily important for a particular analysis or decision-making task, because features may share common information or be completely irrelevant to the processing at hand. This generally happens because of improper selection of features during dataset formation or because of incomplete information about the observed system. In both cases the data will contain features that merely increase the processing burden and may ultimately cause improper outcomes when used for analysis. For these reasons, methods are required to detect and remove such features; hence this paper presents an efficient approach that not only removes unimportant features but also reduces the size of the complete dataset. The proposed algorithm uses information theory to compute the information gain of each feature and a minimum spanning tree to group similar features; fuzzy c-means clustering is then used to remove similar entries from the dataset. Finally, the algorithm is tested with an SVM classifier on 35 publicly available real-world high-dimensional datasets, and the results show that it not only reduces the feature set and data length but also improves classifier performance.
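The information-gain step can be illustrated for a discrete feature using the standard entropy definition; this is a generic sketch, not the authors' implementation.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in label entropy obtained by splitting on a discrete feature."""
    n = len(labels)
    conditional = 0.0
    for v in set(feature_values):
        # labels of the samples where the feature takes value v
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional
```

A feature that perfectly predicts the label has gain equal to the label entropy; an independent feature has gain 0.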
This document presents a methodology for real-time object tracking using a webcam. It combines Prewitt edge detection for object detection and Kalman filtering for tracking. Prewitt edge detection is used to detect the edges of the moving object in each video frame. Then, Kalman filtering is used to track the detected object across subsequent frames by predicting its location. Experiments show the approach can efficiently track objects under deformation, occlusion, and can track multiple objects simultaneously. The combination of Prewitt edge detection and Kalman filtering provides an effective method for real-time object tracking.
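The predict/update cycle of Kalman tracking can be conveyed by a minimal one-dimensional filter. The paper tracks 2-D object positions, so this scalar version with hypothetical noise parameters is only a sketch of the idea.

```python
def kalman_1d(measurements, q=1e-4, r=0.5):
    """Minimal 1-D Kalman filter: track a slowly moving position from
    noisy measurements. q is process noise, r is measurement noise;
    both values here are hypothetical."""
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                    # predict: uncertainty grows over time
        k = p / (p + r)           # Kalman gain balances prediction vs measurement
        x += k * (z - x)          # update the estimate toward the measurement
        p *= (1.0 - k)            # posterior variance shrinks after the update
        estimates.append(x)
    return estimates
```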
Text Skew Angle Detection in Vision-Based Scanning of Nutrition LabelsVladimir Kulyukin
This document presents an algorithm for detecting the text skew angle in images of nutrition labels. The algorithm applies multiple iterations of the 2D Haar Wavelet Transform to downsample the image and compute horizontal, vertical, and diagonal change matrices. It then binarizes and combines these matrices into a set of 2D change points. Finally, it uses a convex hull algorithm to find the minimum area rectangle containing all text pixels, and calculates the text skew angle as the rotation angle of this rectangle. The algorithm's performance is compared to two other text skew detection algorithms on a sample of over 600 nutrition label images, finding a median error of 4.62 degrees compared to 68.85 and 20.92 degrees for the other algorithms.
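The convex-hull stage of the pipeline can be sketched with Andrew's monotone chain algorithm; the wavelet transform and minimum-area-rectangle stages are omitted here.

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices of a set of 2-D points."""
    def cross(o, a, b):
        # z-component of the cross product (OA x OB); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def build(seq):
        hull = []
        for p in seq:
            # drop points that would make a non-left turn
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower, upper = build(pts), build(reversed(pts))
    return lower[:-1] + upper[:-1]  # endpoints are shared, so drop one each
```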
Fault diagnosis using genetic algorithms and principal curveseSAT Journals
Abstract: Several applications of nonlinear principal component analysis (NPCA) have appeared recently in process monitoring and fault diagnosis. In this paper a new approach is proposed for fault detection based on principal curves and genetic algorithms. The principal curve is a generalization of linear principal component analysis (PCA), introduced by Hastie as a parametric curve that passes satisfactorily through the middle of the data. Existing principal-curve algorithms employ the first principal component of the data as an initial estimate of the curve; however, this dependence on the initial line leads to a lack of flexibility, and the final curve is satisfactory only for specific problems. In this paper we extend this work in two ways. First, we propose a new method based on genetic algorithms to find the principal curve, in which lines are fitted and connected to form polygonal lines (PL). Second, a potential application of principal curves is discussed. An example is used to illustrate fault diagnosis of a nonlinear process using the proposed approach. Index Terms: Principal curve, Genetic Algorithm, Nonlinear principal component analysis, Fault detection.
IRJET- Document Layout analysis using Inverse Support Vector Machine (I-SV...IRJET Journal
This document discusses using inverse support vector machines (I-SVM) for document layout analysis of Hindi newspaper images for optical character recognition. It proposes a framework that uses bounding box segmentation, feature extraction using subline direction and bounding box shape detection, and I-SVM classification. Preprocessing steps include binarization, removing horizontal/vertical lines, and morphological operations. Experimental results show the algorithm can accurately label blocks in newspaper layouts and extract articles for OCR.
Performance and analysis of improved unsharp masking algorithm for imageIAEME Publication
This document presents a study on improving an unsharp masking algorithm for image enhancement. It proposes using an exploratory data analysis model that decomposes an image into a model component and a residual component. The proposed algorithm then individually processes these components to increase contrast and sharpness while reducing halo effects and out-of-range issues. It defines new log-ratio operations for a generalized linear system using concepts from vector spaces and Bregman divergence to provide a theoretical basis for the algorithm. Experimental results showed the proposed algorithm enhanced contrast and sharpness better than previous methods.
The Detection of Straight and Slant Wood Fiber through Slop Angle Fiber FeatureNooria Sukmaningtyas
Quality control is an important process that cannot be avoided in industry, and image processing techniques are required to distinguish the quality of wood; doing this automatically by computer is very helpful. This paper discusses the detection of straight and slanted wood fiber to distinguish wood quality. It proposes an algorithm using only two features: the mean (average value of the fiber slope angle) and the maximum angle (maximum value of the fiber slope angle). Classification is then performed by thresholding. The results show an accuracy of 79.2%.
Texture based feature extraction and object trackingPriyanka Goswami
This document provides a project report on texture-based feature extraction and object tracking. It discusses using texture analysis techniques such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP), and Local Ternary Pattern (LTP) to extract features from images for tasks like cloud tracking. These techniques are implemented in MATLAB and evaluated on standard datasets; images are represented by feature histograms for recognition and analysis, reducing computational requirements compared to using raw images. The techniques are then applied to track cloud motion in weather satellite images by analyzing differences in texture histograms over time.
Solving linear equations from an image using ann (eSAT Journals)
Abstract
Optical character recognition has a great impact in image processing applications. This paper combines OCR with a feed-forward artificial neural network to solve mathematical linear equations. Blob analysis and feature extraction are implemented to extract the individual characters from a captured image containing mathematical equations. A 39-character set containing numbers, letters, and operators is constructed and trained using a supervised learning rule. If the image satisfies the linear-equation condition, the proposed algorithm solves the equation and generates the output. This paper aims to increase the recognition rate beyond 87%. The results achieved from training and testing the letter-recognition network are satisfactory.
Keywords: Artificial Neural Network, Linear Equation, Recognition rate, Optical Character Recognition.
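Once the characters are recognized, solving a single-variable linear equation is simple arithmetic. This toy sketch substitutes regular-expression parsing for the OCR stage and handles only the form ax+b=c; it is an illustration, not the paper's pipeline.

```python
import re

def solve_linear(equation):
    """Solve an equation of the form 'ax+b=c' for x.
    Parsing stands in for the OCR stage of the paper's system."""
    m = re.fullmatch(r'(-?\d+)x([+-]\d+)=(-?\d+)', equation.replace(' ', ''))
    if m is None:
        raise ValueError("not a recognized ax+b=c equation")
    a, b, c = (int(g) for g in m.groups())
    return (c - b) / a
```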
LOGNORMAL ORDINARY KRIGING METAMODEL IN SIMULATION OPTIMIZATIONorajjournal
This paper presents a lognormal ordinary kriging (LOK) metamodel algorithm and its application to optimizing a stochastic simulation problem. Kriging models were developed as an interpolation method in geology and have been used successfully for deterministic simulation optimization (SO) problems. In recent years, kriging metamodeling has attracted growing interest for stochastic problems, and SO researchers have begun using ordinary kriging for global optimization in stochastic systems. The goals of this study are to present the LOK metamodel algorithm and to analyze the result of its application step by step. The results show that LOK is a powerful alternative metamodel in simulation optimization when the data are highly skewed.
A Modified KS-test for Feature SelectionIOSR Journals
This document proposes a modified Kolmogorov-Smirnov (KS) test-based feature selection algorithm. It begins with an overview of feature selection and its benefits. It then discusses two common feature selection approaches: filter and wrapper models. The document proposes a fast redundancy removal filter based on a modified KS statistic that utilizes class label information to compare feature pairs. It compares the proposed algorithm to other methods like Correlation Feature Selection (CFS) and KS-Correlation Based Filter (KS-CBF). The efficiency and effectiveness of the various methods are tested on standard classifiers. In most cases, the proposed approach achieved equal or better classification accuracy compared to using all features or the other algorithms.
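The two-sample KS statistic at the heart of such a filter is the maximum gap between empirical CDFs. A plain version is shown below; the paper's modification, which uses class-label information when comparing feature pairs, is not reproduced.

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic:
    the maximum absolute gap between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for v in sorted(set(a) | set(b)):
        fa = bisect.bisect_right(a, v) / len(a)   # empirical CDF of a at v
        fb = bisect.bisect_right(b, v) / len(b)   # empirical CDF of b at v
        d = max(d, abs(fa - fb))
    return d
```

Identical samples give a statistic of 0; fully separated samples give 1, which is why the statistic is a natural redundancy/relevance score.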
OPTIMAL GLOBAL THRESHOLD ESTIMATION USING STATISTICAL CHANGE-POINT DETECTIONsipij
The aim of this paper is the reformulation of the global image thresholding problem as a well-founded statistical method known as change-point detection (CPD). The proposed CPD thresholding algorithm does not assume any prior statistical distribution of background and object grey levels. Further, the method is less influenced by outliers, due to a judicious derivation of a robust criterion function based on the Kullback-Leibler (KL) divergence measure. Experimental results show the efficacy of the proposed method compared to other popular methods for global image thresholding. The paper also proposes a performance criterion for comparing thresholding algorithms; this criterion does not depend on any ground-truth image and is used to compare the proposed algorithm with the most cited global thresholding algorithms in the literature.
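For comparison, the classic Otsu criterion illustrates what a histogram-based global threshold looks like. This is the standard between-class-variance method, one of the popular baselines such papers compare against, not the paper's KL-divergence criterion.

```python
def otsu_threshold(hist):
    """Otsu's method over a grey-level histogram: pick the threshold t
    that maximizes between-class variance (pixels <= t are one class)."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t, h in enumerate(hist):
        w0 += h                   # weight of the lower class
        sum0 += t * h             # grey-level mass of the lower class
        if w0 == 0 or w0 == total:
            continue
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / (total - w0)
        var_between = w0 * (total - w0) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```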
A combined method of fractal and glcm features for mri and ct scan images cla...sipij
Fractal analysis has been shown to be useful in image processing for characterizing shape and grey-scale complexity. The fractal feature is a compact descriptor that gives a numerical measure of the degree of irregularity of medical images, although it does not capture local image structure on its own. In this paper, we present a combination of this box-counting-based parameter with GLCM features. This combination has produced good results, especially in classifying medical texture from MRI and CT scan images of trabecular bone, and has the potential to improve clinical diagnostic tests for osteoporosis pathologies.
Brain tumor segmentation using asymmetry based histogram thresholding and k m...eSAT Publishing House
This document presents a method for segmenting brain tumors from MRI images using asymmetry-based histogram thresholding and k-means clustering. The method involves 8 steps: 1) preprocessing the MRI image using sharpening and median filters, 2) computing histograms of the left and right halves of the image, 3) calculating a threshold value using the difference between left and right histograms, 4) applying thresholding and morphological operations to extract the tumor region, 5) applying k-means clustering and using the cluster centroids to refine the segmentation. The method is tested on 30 MRI images and results show the tumor region is accurately segmented. The segmented tumors can then be used for quantification, classification, and computer-assisted diagnosis of brain tumors.
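The k-means refinement in step 5 can be sketched in one dimension over grey levels; the centroid initialization below is a hypothetical choice made for determinism, not necessarily the paper's.

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain 1-D k-means; for k=2 the centroids start at the min and max
    values so the result is deterministic."""
    centroids = [min(values), max(values)] if k == 2 else list(values[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[nearest].append(v)
        # recompute centroids; keep the old one if a cluster is empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids
```

For tumor segmentation the two centroids would separate bright (tumor) from dark (background) intensities.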
This document discusses intra-frame compression using Huffman and arithmetic entropy coding techniques. It implements both techniques in MATLAB on two test images and evaluates the results. Huffman coding performed slightly better, with lower bit rates and higher compression rates. However, arithmetic coding would likely perform better on images with larger symbol dictionaries or alphabets due to its adaptive modeling capabilities. Both techniques achieved high efficiency for the test images.
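The Huffman side of such a comparison can be sketched by deriving code lengths from symbol frequencies; the average bit rate is then the frequency-weighted sum of the lengths.

```python
import heapq

def huffman_code_lengths(freqs):
    """Huffman code length per symbol, given a {symbol: frequency} map.
    (At least two distinct symbols are assumed.)"""
    # Each heap entry: (total frequency, tiebreak counter, {symbol: depth}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # merging two subtrees pushes every contained symbol one level deeper
        merged = {s: depth + 1 for s, depth in {**c1, **c2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

Frequent symbols get short codes, which is the source of the lower bit rates reported for Huffman coding on these test images.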
An efficient hardware logarithm generator with modified quasi-symmetrical app...IJECEIAES
This paper presents a low-error, low-area FPGA-based hardware logarithm generator for digital signal processing systems which require high-speed, real time logarithm operations. The proposed logarithm generator employs the modified quasi-symmetrical approach for an efficient hardware implementation. The error analysis and implementation results are also presented and discussed. The achieved results show that the proposed approach can reduce the approximation error and hardware area compared with traditional methods.
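A classic hardware-friendly baseline for such generators is Mitchell's piecewise-linear approximation of log2; quasi-symmetrical methods refine schemes of this kind. The sketch below shows only the baseline, emulated in floating point.

```python
import math

def mitchell_log2(x):
    """Mitchell's piecewise-linear approximation of log2(x) for x > 0.
    In hardware the exponent comes from a leading-one detector and the
    mantissa is taken directly from the shifted input; here both are
    emulated in software."""
    e = math.floor(math.log2(x))   # position of the leading one bit
    m = x / (2.0 ** e) - 1.0       # normalized mantissa in [0, 1)
    return e + m                   # exact at powers of two, linear in between
```

The maximum error of this baseline is about 0.086 bits, which is what improved schemes such as the quasi-symmetrical approach aim to reduce.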
This document discusses tracking multiple objects in video using probabilistic distributions. It proposes using particle filters to represent object positions with random particles. The method initializes particles randomly, updates their positions each frame based on probabilistic distributions, and uses maximum likelihood estimation to compute the distribution parameters. It models object motion using a beta distribution and estimates the distribution's alpha and beta parameters from each frame to predict object positions. The results show this approach can effectively track multiple moving objects, especially when there are occlusions.
This document presents a novel method for recognizing two-dimensional QR barcodes using texture feature analysis and neural networks. It first extracts texture features like mean, standard deviation, smoothness, skewness and entropy from divided blocks of barcode images. These features are then used to train a neural network to classify blocks as containing a barcode or not. The trained neural network can then be used to locate barcodes in unknown images by classifying each block. The method is implemented and evaluated using MATLAB on a database of QR code images, showing satisfactory recognition results.
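The first-order texture statistics extracted per block can be sketched directly; this follows the standard definitions and is not necessarily the authors' exact normalization.

```python
import math

def texture_features(block):
    """First-order texture statistics of a grey-level block (flattened list):
    mean, standard deviation, smoothness, third central moment (an
    unnormalized skewness), and entropy."""
    n = len(block)
    mean = sum(block) / n
    var = sum((p - mean) ** 2 for p in block) / n
    std = math.sqrt(var)
    smoothness = 1 - 1 / (1 + var)          # 0 for flat regions, -> 1 for rough
    skew = sum((p - mean) ** 3 for p in block) / n
    counts = {}
    for p in block:
        counts[p] = counts.get(p, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return mean, std, smoothness, skew, entropy
```

These five numbers per block would then form the input vector to the neural-network classifier.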
BPSO&1-NN algorithm-based variable selection for power system stability ident...IJAEMSJORNAL
Due to the very high nonlinearity of the power system, traditional analytical methods take a long time to solve, causing delays in decision-making. Quickly detecting power system instability, so that the control system can make timely decisions, is therefore a key factor in ensuring stable operation. Power system stability identification faces a large-dataset problem, so representative variables must be selected as inputs to the identifier. This paper proposes a wrapper method for variable selection in which the Binary Particle Swarm Optimization (BPSO) algorithm is combined with a K-NN (K=1) identifier to search for a good set of variables; the combination is named BPSO&1-NN. Test results on the IEEE 39-bus diagram show that the proposed method achieves the goal of reducing the number of variables with high accuracy.
APPLYING DYNAMIC MODEL FOR MULTIPLE MANOEUVRING TARGET TRACKING USING PARTICL...IJITCA Journal
In this paper, we apply a dynamic model for manoeuvring targets in the SIR particle filter algorithm to improve the tracking accuracy of multiple manoeuvring targets. In the proposed approach, a colour distribution model is used to detect changes in the target's model and to control its deformation: if the deformation exceeds a predetermined threshold, the model is updated. The Global Nearest Neighbor (GNN) algorithm is used for data association. We name the proposed method Deformation Detection Particle Filter (DDPF) and compare it with the basic SIR-PF algorithm on real airshow videos. The comparison shows that the basic SIR-PF algorithm cannot track manoeuvring targets when rotation or scaling occurs in the target's model, whereas DDPF updates the target's model in those cases and is thus able to track manoeuvring targets more efficiently and accurately.
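One SIR iteration (move, weight, resample) can be sketched for a 1-D target; the weighting kernel and noise level here are hypothetical, and the colour model and GNN data association are omitted.

```python
import random

def particle_filter_step(particles, measurement, rng, noise=1.0):
    """One SIR iteration for a 1-D target position: diffuse each particle,
    weight it by closeness to the measurement (a heavy-tailed kernel chosen
    for illustration), then resample in proportion to the weights."""
    moved = [p + rng.gauss(0.0, noise) for p in particles]
    weights = [1.0 / (1.0 + (p - measurement) ** 2) for p in moved]
    return rng.choices(moved, weights=weights, k=len(moved))
```

Repeating the step concentrates the particle cloud around the measured position, which is how the filter maintains a target estimate between detections.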
Blind Image Separation Using Forward Difference Method (FDM)sipij
In this paper, blind image separation is performed, exploiting the property of sparseness to represent images. A new sparse representation called the forward difference method is proposed. It is known that most independent component analysis (ICA) basis functions extracted from images are sparse and give an unreliable sparseness measure. In the proposed method, the image mixture is first transformed into sparse images. These images are divided into blocks, and for each block the sparseness measure (the ε0 norm) is applied. The block with the highest sparseness is used to determine the separation matrix. The efficiency of the proposed method is compared with other sparse representation functions.
Cursive Handwriting Segmentation Using Ideal Distance Approach IJECEIAES
Offline cursive handwriting recognition is a major challenge due to the huge variety of handwriting: slanted writing, spacing between words, letter size and direction, letter style, and contour similarity between some letters. Cursive handwriting recognition involves several steps: preprocessing, morphology, segmentation, letter feature extraction, and recognition. Segmentation is a crucial step, since its success determines the success of recognition. This paper proposes a segmentation algorithm that segments cursive handwriting into letters, which then form words, using a method that determines the cutting points of the cursive handwriting image at an ideal image distance. The ideal distance is an ideal segmentation point chosen to avoid cutting into another letter's section, and the width and height of the image are used to determine the accurate segmentation point. A total of 999 cursive handwriting input images from 25 writers were used for this study; the images were obtained from the preprocessing step, with slope correction applied. The study used a Support Vector Machine (SVM) to recognize the cursive handwriting. The experiments show the proposed segmentation algorithm segments the images precisely, with a 97% success rate in recognizing the cursive handwriting.
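A common baseline for finding candidate cut points is the vertical projection profile; the sketch below shows only that baseline, not the paper's ideal-distance refinement.

```python
def projection_cut_columns(binary_image):
    """Candidate cut columns of a binary image (list of rows of 0/1):
    columns whose vertical projection is zero, i.e. contain no ink."""
    return [i for i, col in enumerate(zip(*binary_image)) if sum(col) == 0]
```

Refinements like the paper's ideal-distance rule are needed because touching cursive letters often leave no empty column between them.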
JARDIKNAS is the Indonesian national education network, consisting of four network zones for education offices, higher-education institutions, schools, and teachers and students. JARDIKNAS enables access to information and e-learning, as well as online transactions of educational data. It is planned that JARDIKNAS will integrate management information systems, learning content, and e-learning to support distance education throughout Indonesia.
Two Al Qaeda terrorists were sent to carry out an attack on the Sanctuary of Fátima in Portugal, but suffered a series of comic setbacks that prevented them from completing the mission, including lost luggage, communication difficulties, a robbery, food poisoning, and a beating from football fans, among others. After two weeks of misadventures, the terrorists want only to return to Afghanistan.
Performance and analysis of improved unsharp masking algorithm for imageIAEME Publication
This document presents a study on improving an unsharp masking algorithm for image enhancement. It proposes using an exploratory data analysis model that decomposes an image into a model component and a residual component. The proposed algorithm then individually processes these components to increase contrast and sharpness while reducing halo effects and out-of-range issues. It defines new log-ratio operations for a generalized linear system using concepts from vector spaces and Bregman divergence to provide a theoretical basis for the algorithm. Experimental results showed the proposed algorithm enhanced contrast and sharpness better than previous methods.
The Detection of Straight and Slant Wood Fiber through Slop Angle Fiber FeatureNooria Sukmaningtyas
Quality control is one of the important processes that cannot be avoided in industry. An image processing technique is required to distinguish the quality of wood, and it is very helpful if this can be done automatically by computer. This paper discusses the detection of straight and slant wood fiber to distinguish wood quality. It proposes an algorithm using only two features, i.e. mean (the average value of the fiber slope angle) and maximum angle (the maximum value of the fiber slope angle), with classification then performed by thresholding. The results show an accuracy of 79.2%.
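The two-feature thresholding rule can be sketched as below. The threshold values here are illustrative assumptions, not the paper's fitted values:

```python
def classify_fiber(slope_angles, mean_thresh=10.0, max_thresh=25.0):
    """Label a fiber sample from its slope angles (degrees).
    mean_thresh / max_thresh are hypothetical cutoffs for illustration."""
    mean_angle = sum(slope_angles) / len(slope_angles)
    max_angle = max(slope_angles)
    return "slant" if mean_angle > mean_thresh or max_angle > max_thresh else "straight"
```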
Texture based feature extraction and object tracking - Priyanka Goswami
This document provides a project report on texture-based feature extraction and object tracking. It discusses using various texture analysis techniques like Local Binary Pattern (LBP), Local Derivative Pattern (LDP), and Local Ternary Pattern (LTP) to extract features from images for tasks like cloud tracking. It implements these techniques in MATLAB and evaluates them on standard datasets to extract features and represent images with histograms for tasks like image recognition and analysis while reducing computational requirements compared to using raw images. The techniques are then applied to track cloud motion in weather satellite images by analyzing differences in texture histograms over time.
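A minimal version of the LBP feature mentioned above: the basic 3x3 operator compared against the centre pixel, accumulated into a 256-bin histogram (the LDP and LTP variants follow the same pattern with different comparison rules):

```python
def lbp_code(img, y, x):
    # compare the 8 neighbours (clockwise from top-left) with the centre pixel
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, v in enumerate(nbrs):
        if v >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    # 256-bin histogram of LBP codes over all interior pixels
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

Comparing such histograms frame to frame is what the cloud-tracking step reduces to.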
Solving linear equations from an image using ann - eSAT Journals
Abstract
Optical character recognition has a great impact on image processing applications. This paper combines the concepts of OCR and a feed-forward artificial neural network to solve mathematical linear equations. We implement blob analysis and feature extraction to extract the individual characters from a captured image containing mathematical equations. We construct a 39-character set comprising numbers, letters and operators, and train on this character set using a supervised learning rule. If the image satisfies the linear-equation conditions, the proposed algorithm solves the equation and generates the output. This paper aims to raise the recognition rate above 87%. The results achieved from training and testing the letter-recognition network are satisfactory.
Keywords: Artificial Neural Network, Linear Equation, Recognized rate, Optical Character Recognition.
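Once OCR has recovered the coefficients, the solving step reduces to standard linear algebra. A self-contained sketch using Gaussian elimination with partial pivoting (the paper does not specify its solver; this is a generic stand-in):

```python
def solve_linear(a, b):
    """Solve a*x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]            # row swap for stability
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n                                  # back-substitution
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x
```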
LOGNORMAL ORDINARY KRIGING METAMODEL IN SIMULATION OPTIMIZATION - orajjournal
This paper presents a lognormal ordinary kriging (LOK) metamodel algorithm and its application to optimizing a stochastic simulation problem. Kriging models were developed as an interpolation method in geology and have been used successfully for deterministic simulation optimization (SO) problems. In recent years, kriging metamodeling has attracted growing interest for stochastic problems, and SO researchers have begun using ordinary kriging for global optimization in stochastic systems. The goals of this study are to present the LOK metamodel algorithm and to analyze the results of its application step by step. The results show that LOK is a powerful alternative metamodel in simulation optimization when the data are highly skewed.
A Modified KS-test for Feature Selection - IOSR Journals
This document proposes a modified Kolmogorov-Smirnov (KS) test-based feature selection algorithm. It begins with an overview of feature selection and its benefits. It then discusses two common feature selection approaches: filter and wrapper models. The document proposes a fast redundancy removal filter based on a modified KS statistic that utilizes class label information to compare feature pairs. It compares the proposed algorithm to other methods like Correlation Feature Selection (CFS) and KS-Correlation Based Filter (KS-CBF). The efficiency and effectiveness of the various methods are tested on standard classifiers. In most cases, the proposed approach achieved equal or better classification accuracy compared to using all features or the other algorithms.
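The filter's core idea, scoring each feature by how far apart its class-conditional distributions lie, can be sketched with the plain two-sample KS statistic (the paper's modification of the statistic is not reproduced here):

```python
def ks_statistic(sample_a, sample_b):
    # two-sample KS statistic: maximum gap between the empirical CDFs
    a, b = sorted(sample_a), sorted(sample_b)
    pts = sorted(set(a) | set(b))
    cdf = lambda s, v: sum(1 for x in s if x <= v) / len(s)
    return max(abs(cdf(a, v) - cdf(b, v)) for v in pts)

def rank_features(X, y):
    # rank feature columns by class-separating power (largest KS first);
    # X is a list of rows, y a list of binary class labels
    feats = range(len(X[0]))
    score = lambda j: ks_statistic([r[j] for r, c in zip(X, y) if c == 0],
                                   [r[j] for r, c in zip(X, y) if c == 1])
    return sorted(feats, key=score, reverse=True)
```

Redundancy removal would then compare feature pairs with the same statistic, keeping only one of any near-identical pair.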
OPTIMAL GLOBAL THRESHOLD ESTIMATION USING STATISTICAL CHANGE-POINT DETECTION - sipij
The aim of this paper is the reformulation of the global image thresholding problem as a well-founded statistical problem known as change-point detection (CPD). Our proposed CPD thresholding algorithm does not assume any prior statistical distribution of the background and object grey levels. Further, the method is less influenced by outliers, owing to our judicious derivation of a robust criterion function based on the Kullback-Leibler (KL) divergence measure. Experimental results show the efficacy of the proposed method compared to other popular methods for global image thresholding. In this paper we also propose a performance criterion for comparing thresholding algorithms; this criterion does not depend on any ground-truth image. We have used it to compare the results of the proposed thresholding algorithm with the most cited global thresholding algorithms in the literature.
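The paper's robust KL-based criterion requires its full derivation. As a reference point, Otsu's method, one of the most-cited global thresholding algorithms such work is compared against, fits in a few lines:

```python
def otsu_threshold(hist):
    """Classical Otsu: pick the threshold maximising between-class variance.
    hist[i] = number of pixels with grey level i."""
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var = 0, -1.0
    w0 = cum0 = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t] / total              # background weight
        cum0 += t * hist[t] / total        # background first moment
        w1 = 1 - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = cum0 / w0, (total_mean - cum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2     # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t + 1                      # first grey level of the object class
```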
A combined method of fractal and glcm features for mri and ct scan images cla... - sipij
Fractal analysis has been shown to be useful in image processing for characterizing shape and grey-scale complexity. The fractal feature is a compact descriptor that gives a numerical measure of the degree of irregularity of medical images; this descriptor alone, however, does not capture the local image structure. In this paper, we present a combination of this Box Counting-based parameter with GLCM features. This combination has produced good results, especially in the classification of medical textures from MRI and CT scan images of trabecular bone. The method has the potential to improve clinical diagnostic tests for osteoporosis pathologies.
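The Box Counting half of the descriptor can be sketched directly on a binary image (nested lists of 0/1); the GLCM features are omitted:

```python
import math

def box_count(img, box):
    # count boxes of side `box` containing at least one foreground pixel
    h, w = len(img), len(img[0])
    count = 0
    for y0 in range(0, h, box):
        for x0 in range(0, w, box):
            if any(img[y][x]
                   for y in range(y0, min(y0 + box, h))
                   for x in range(x0, min(x0 + box, w))):
                count += 1
    return count

def fractal_dimension(img, sizes=(1, 2, 4)):
    # least-squares slope of log N(s) versus log(1/s)
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(img, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
```

A fully filled region recovers dimension 2; a sparse, irregular trabecular pattern yields a fractional value between 1 and 2.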
Brain tumor segmentation using asymmetry based histogram thresholding and k m... - eSAT Publishing House
This document presents a method for segmenting brain tumors from MRI images using asymmetry-based histogram thresholding and k-means clustering. The method involves 8 steps: 1) preprocessing the MRI image using sharpening and median filters, 2) computing histograms of the left and right halves of the image, 3) calculating a threshold value using the difference between left and right histograms, 4) applying thresholding and morphological operations to extract the tumor region, 5) applying k-means clustering and using the cluster centroids to refine the segmentation. The method is tested on 30 MRI images and results show the tumor region is accurately segmented. The segmented tumors can then be used for quantification, classification, and computer-assisted diagnosis of brain tumors.
This document discusses intra-frame compression using Huffman and arithmetic entropy coding techniques. It implements both techniques in MATLAB on two test images and evaluates the results. Huffman coding performed slightly better, with lower bit rates and higher compression rates. However, arithmetic coding would likely perform better on images with larger symbol dictionaries or alphabets due to its adaptive modeling capabilities. Both techniques achieved high efficiency for the test images.
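A compact builder for the first of the two entropy coders discussed above: a pure-Python Huffman code table constructed with a frequency-ordered heap.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table (symbol -> bit string) for `data`."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap entries: (count, tiebreak id, partial code table)
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)     # two least-frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

Frequent symbols get short codes, which is where the bit-rate advantage over fixed-length coding comes from; arithmetic coding improves on this when symbol probabilities are far from powers of 1/2 or the alphabet is large.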
An efficient hardware logarithm generator with modified quasi-symmetrical app... - IJECEIAES
This paper presents a low-error, low-area FPGA-based hardware logarithm generator for digital signal processing systems which require high-speed, real time logarithm operations. The proposed logarithm generator employs the modified quasi-symmetrical approach for an efficient hardware implementation. The error analysis and implementation results are also presented and discussed. The achieved results show that the proposed approach can reduce the approximation error and hardware area compared with traditional methods.
This document discusses tracking multiple objects in video using probabilistic distributions. It proposes using particle filters to represent object positions with random particles. The method initializes particles randomly, updates their positions each frame based on probabilistic distributions, and uses maximum likelihood estimation to compute the distribution parameters. It models object motion using a beta distribution and estimates the distribution's alpha and beta parameters from each frame to predict object positions. The results show this approach can effectively track multiple moving objects, especially when there are occlusions.
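As a closed-form stand-in for the parameter-estimation step above, the beta distribution's alpha and beta can be recovered from sample moments (the summary says maximum likelihood is used; method of moments is substituted here for brevity):

```python
def beta_mom(samples):
    """Method-of-moments estimate of Beta(alpha, beta) from samples in (0, 1)."""
    n = len(samples)
    m = sum(samples) / n                         # sample mean
    v = sum((s - m) ** 2 for s in samples) / n   # sample variance
    common = m * (1 - m) / v - 1                 # shared factor in both estimates
    return m * common, (1 - m) * common
```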
This document presents a novel method for recognizing two-dimensional QR barcodes using texture feature analysis and neural networks. It first extracts texture features like mean, standard deviation, smoothness, skewness and entropy from divided blocks of barcode images. These features are then used to train a neural network to classify blocks as containing a barcode or not. The trained neural network can then be used to locate barcodes in unknown images by classifying each block. The method is implemented and evaluated using MATLAB on a database of QR code images, showing satisfactory recognition results.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
BPSO&1-NN algorithm-based variable selection for power system stability ident... - IJAEMSJORNAL
Due to the very high nonlinearity of the power system, traditional analytical methods take a long time to solve, delaying decision-making. Quickly detecting power system instability, so that the control system can make timely decisions, is therefore the key factor in ensuring stable operation of the power system. Power system stability identification faces the problem of large data set size, so representative variables must be selected as input variables for the identifier. This paper proposes applying the wrapper method for variable selection, in which the Binary Particle Swarm Optimization (BPSO) algorithm is combined with a K-NN (K=1) identifier to search for a good set of variables; the combination is named BPSO&1-NN. Test results on the IEEE 39-bus diagram show that the proposed method achieves the goal of reducing variables while maintaining high accuracy.
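The wrapper's fitness evaluation, scoring a candidate variable subset with the 1-NN identifier, can be sketched with leave-one-out accuracy (the BPSO search loop around it is omitted):

```python
def one_nn_loo_accuracy(X, y, subset):
    """Wrapper fitness: leave-one-out accuracy of 1-NN restricted to the
    selected feature indices in `subset`."""
    def dist(a, b):
        return sum((a[j] - b[j]) ** 2 for j in subset)
    correct = 0
    for i, (xi, yi) in enumerate(zip(X, y)):
        nearest = min((j for j in range(len(X)) if j != i),
                      key=lambda j: dist(xi, X[j]))
        correct += (y[nearest] == yi)
    return correct / len(y)
```

BPSO would flip bits of the subset mask and keep the mask with the highest fitness, trading off accuracy against the number of selected variables.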
APPLYING DYNAMIC MODEL FOR MULTIPLE MANOEUVRING TARGET TRACKING USING PARTICL... - IJITCA Journal
In this paper, we apply a dynamic model for manoeuvring targets in the SIR particle filter algorithm to improve the tracking accuracy of multiple manoeuvring targets. In the proposed approach, a color distribution model is used to detect changes in the target's model, and deformation of the model is controlled: if the deformation exceeds a predetermined threshold, the model is updated. The Global Nearest Neighbor (GNN) algorithm is used for data association. We name the proposed method the Deformation Detection Particle Filter (DDPF). DDPF is compared with the basic SIR-PF algorithm on real airshow videos. The comparison shows that the basic SIR-PF algorithm cannot track manoeuvring targets when rotation or scaling occurs in the target's model, whereas DDPF updates the target's model in such cases and is therefore able to track manoeuvring targets more efficiently and accurately.
Blind Image Seperation Using Forward Difference Method (FDM) - sipij
In this paper, blind image separation is performed by exploiting the property of sparseness to represent images. A new sparse representation called the forward difference method is proposed. It is known that most of the independent component analysis (ICA) basis functions extracted from images are sparse and give an unreliable sparseness measure. In the proposed method, the image mixture is first transformed into sparse images. These images are divided into blocks, and for each block the ℓ0-norm sparseness measure is applied. The block having the greatest sparseness is used to determine the separation matrix. The efficiency of the proposed method is compared with other sparse representation functions.
Cursive Handwriting Segmentation Using Ideal Distance Approach - IJECEIAES
The document describes the expansion cards used in computers. It mentions cards such as video cards, which process graphics; network cards, which enable wired or wireless connectivity; and controller cards, which allow the connection of storage devices. It explains that these cards are inserted into slots on the motherboard to expand the computer's capabilities.
The super-cute photo series "Taking my child around the world" by a Vietnamese mom - giangcdby04
Speaking about the photo series "Taking my child around the world", Như Thủy said that her great passion is travel, and she hopes that one day she and her daughter Minh Châu can take many trips together to explore the world.
The document summarizes NATO's largest military exercise in decades called Trident Juncture. Nearly 36,000 troops from over 30 countries participated in the exercise, which was held in Spain, Italy, and Portugal. The purpose of the exercise was to certify NATO's Very High Readiness Joint Task Force (VJTF) for 2016, with Spain playing a key role by hosting many locations and contributing the most troops, around 8,000. The exercise involved multiple phases of testing command and control as well as a large live field exercise where a multinational brigade conducted joint offensive operations against opposing forces to liberate the fictional country of Tytan.
A presentation on good food habits. The slides are adapted from various sources, and I acknowledge each and every one; I am using this for educating teenagers who are studying in schools. Please forgive me if any slides were used inappropriately or inadvertently.
This document summarizes information about Bitcoin micropayments. It discusses that while Bitcoin transactions are cheap, they are not cheap enough for micropayments. It then presents a solution of using rapidly adjusted micropayments to a predetermined party. The document also notes issues with small transactions like being caught by anti-flooding algorithms and having fixed minimum transaction values. Additionally, it provides information on Bitcoin scripting and how standard transactions and multisignature transactions work.
Current ODA Allocation Across Sectors in Bangladesh and Effective Financing f... - Abdullah Al Mamun
Despite initial skepticism, Bangladesh has made significant economic and social progress since gaining independence in 1971, reducing poverty and becoming less dependent on foreign aid. However, it remains a poor country that relies on official development assistance (ODA). ODA allocation across sectors could be improved, with underfunding of education and infrastructure. To finance development goals, Bangladesh will need to mobilize more domestic resources through measures like tax reform, reduce capital flight, and encourage public-private partnerships while ensuring climate change adaptation funding is separate from ODA. Greater transparency and accountability in resource use will also help achieve development targets.
Valeant provides revised guidance for Q4 2015 and full year 2015 due to impacts from separating from Philidor and transitioning to a new partnership with Walgreens, estimating a $250M revenue impact from Philidor separation and $150M from the Walgreens transition. Valeant also provides initial guidance for 2016, estimating $12.5-12.7B in revenue and $13.25-13.75 per share in adjusted EPS, representing over 20% growth compared to updated 2015 guidance.
Adapted Branch-and-Bound Algorithm Using SVM With Model Selection - IJECEIAES
Branch-and-Bound is the basis for the majority of solution methods in mixed integer linear programming and has proven its efficiency in different fields. It gradually builds a tree of nodes by adopting two strategies: a variable selection strategy and a node selection strategy. In our previous work, we experimented with a methodology for learning branch-and-bound strategies using regression-based support vector machines twice. That methodology, firstly, exploited information from previous executions of the Branch-and-Bound algorithm on other instances; secondly, it created an information channel between the node selection strategy and the variable branching strategy; and thirdly, it gave good results in terms of running time compared to the standard Branch-and-Bound algorithm. In this work, we focus on increasing SVM performance by using cross-validation coupled with model selection.
A comparative study of three validities computation methods for multimodel ap... - IJECEIAES
The multimodel approach offers very satisfactory results in the modelling, diagnosis and control of complex systems. In the modelling case, this approach proceeds in three steps: determination of the model library, computation of the validities, and establishment of the final model. In this context, this paper presents a comparative study of three recent methods of validity computation, highlighting the method that offers the best performance in terms of precision. To achieve this goal, we apply the three methods to two simulation examples in order to compare their performance.
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE... - gerogepatton
A support vector machine (SVM) learns the decision surface from two different classes of input points; in several applications, some of the input points are misclassified. In this paper a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method for solving the bi-objective quadratic programming problem. An important contribution of the proposed model is obtaining different efficient support vectors as the weighting values change. The numerical examples give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution from the generated efficient solutions.
IRJET - Stock Market Prediction using Machine Learning Algorithm - IRJET Journal
This document discusses using machine learning algorithms to predict stock market prices. Specifically, it analyzes using Support Vector Machine (SVM) and linear regression (LR) algorithms to predict stock prices. It finds that linear regression provides more accurate predictions than SVM when tested on the same stock data. The methodology trains models on historical stock data using these algorithms and predicts future prices, achieving up to 98% accuracy when testing linear regression predictions on Google stock prices. It concludes that input data and machine learning techniques can effectively predict stock market movements.
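The LR half of the comparison reduces to ordinary least squares; a minimal sketch on synthetic values (not the study's stock data):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b
```

Predicting the next price is then just evaluating `a + b * x` at a future time index; the study's accuracy figures come from comparing such predictions against held-out prices.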
EFFICIENT USE OF HYBRID ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM COMBINED WITH N... - csandit
This research study proposes a novel method for automatic fault prediction from foundry data, introducing the so-called Meta Prediction Function (MPF). Kernel Principal Component Analysis (KPCA) is used for dimension reduction. Different algorithms are used for building the MPF, such as Multiple Linear Regression (MLR), Adaptive Neuro-Fuzzy Inference System (ANFIS), Support Vector Machine (SVM) and Neural Network (NN). We used classical machine learning methods such as ANFIS, SVM and NN for comparison with our proposed MPF. Our empirical results show that the MPF consistently outperforms the classical methods.
MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER R... - ijaia
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models use 2nd-order polynomials to model the shapes within a training set. To estimate the regression models, we extract the coefficients that describe the variations for each shape class; a least-squares method is used to estimate these models. We then train these coefficients using the Expectation-Maximization algorithm. Recognition is carried out by finding the least landmark displacement error with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
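The recognition step, picking the class whose trained curve leaves the smallest landmark displacement, can be sketched as follows, assuming one quadratic y = a + b·x + c·x² per class (the form and the model table here are illustrative):

```python
def classify_shape(landmarks, models):
    """Pick the class whose quadratic model curve best fits the landmarks.
    landmarks: list of (x, y) points; models: class name -> (a, b, c)."""
    def err(coeffs):
        a, b, c = coeffs
        return sum((y - (a + b * x + c * x * x)) ** 2 for x, y in landmarks)
    return min(models, key=lambda cls: err(models[cls]))
```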
Optimization is considered to be one of the pillars of statistical learning and also plays a major role in the design and development of intelligent systems such as search engines, recommender systems, and speech and image recognition software. Machine learning is the study that gives computers the ability to learn, and even to think, without being explicitly programmed. A computer is said to learn from experience with respect to a specified task and its performance on that task. Machine learning algorithms are applied to problems to reduce effort: they are used to manipulate data and predict outputs for new data with high precision and low uncertainty. Optimization algorithms are used to make rational decisions in an environment of uncertainty and imprecision. In this paper, a methodology is presented for using an efficient optimization algorithm as an alternative to the gradient descent algorithm used in machine learning.
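For reference, the gradient-descent baseline that an alternative optimizer would replace is only a few lines:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Plain gradient descent: repeatedly step against the gradient of the loss."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x
```

For the loss (x - 3)^2 the gradient is 2(x - 3), and the iterates converge geometrically to the minimum at x = 3.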
A parsimonious SVM model selection criterion for classification of real-world ... - o_almasi
This paper proposes and optimizes a two-term cost function, consisting of a sparseness term and a generalized v-fold cross-validation term, using a new adaptive particle swarm optimization (APSO). APSO updates its parameters adaptively based on dynamic feedback from the success rate of each particle's personal best. Since the proposed cost function is based on choosing fewer support vectors, the complexity of the SVM models decreases while the accuracy remains in an acceptable range. The testing time therefore decreases, making SVM more applicable to practical applications on real data sets. A comparative study on data sets from the UCI database is performed between the proposed cost function and a conventional cost function to demonstrate the effectiveness of the proposed cost function.
A Fuzzy Interactive BI-objective Model for SVM to Identify the Best Compromis... - ijfls
This document summarizes a research paper that proposes a fuzzy bi-objective support vector machine (SVM) model to identify infected COVID-19 patients. The model uses SVM classification with two objectives - maximizing margin between classes and minimizing misclassification errors. An α-cut transforms the fuzzy model into a classical bi-objective problem solved using weighting methods. This generates multiple efficient solutions. An interactive process then identifies the best compromise based on minimizing the number of support vectors in each class. The model constructs a utility function to measure COVID-19 infection levels based on the SVM classification.
A FUZZY INTERACTIVE BI-OBJECTIVE MODEL FOR SVM TO IDENTIFY THE BEST COMPROMIS... - ijfls
A support vector machine (SVM) learns the decision surface from two different classes of input points. In several applications, some of the input points are misclassified, and each is not fully allocated to either of the two groups. In this paper a bi-objective quadratic programming model with fuzzy parameters is utilized, and different feature quality measures are optimized simultaneously. An α-cut is defined to transform the fuzzy model into a family of classical bi-objective quadratic programming problems, and the weighting method is used to optimize each of these problems. A major contribution of the proposed fuzzy model is obtaining different efficient support vectors as the weighting values change. The experimental results show the effectiveness of the α-cut with the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution from the generated efficient solutions. The main contribution of this paper includes constructing a utility function for measuring the degree of infection with coronavirus disease (COVID-19).
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
The International Journal of Engineering and Science (The IJES)theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Support Vector Machine–Based Prediction System for a Football Match Result - iosrjce
This document describes a study that used a support vector machine (SVM) to develop a football match result prediction system. The SVM model was trained on 16 datasets from the 2014-2015 English Premier League season and tested on 15 additional matches. The SVM used a Gaussian combination kernel and was optimized with various parameters. The system achieved a prediction accuracy of 53.3%, which the study concluded was a relatively low performance. The document discusses related work on prediction systems and provides details on SVM algorithm implementation and parameters used in an effort to predict football match results.
This document discusses using artificial neural networks for network intrusion detection. Specifically, it proposes a hybrid classification model that uses entropy-based feature selection to reduce the dataset, followed by four neural network techniques (RBFN, SOM, SMO, PART) for classification. It provides details on each neural network technique and the overall methodology, which uses 10-fold cross validation to evaluate performance based on standard criteria. The goal is to build an efficient intrusion detection system with low false alarms and high detection rates.
Probabilistic Self-Organizing Maps for Text-Independent Speaker Identification - TELKOMNIKA JOURNAL
This paper introduces a novel speaker modeling technique for text-independent speaker identification using probabilistic self-organizing maps (PbSOMs). The basic motivation behind the technique is to combine the self-organizing quality of self-organizing maps with the generative power of Gaussian mixture models. Experimental results show that the introduced modeling technique significantly outperforms the traditional technique using classical GMMs and the EM algorithm or its deterministic variant: a relative accuracy improvement of roughly 39% was gained, and much less sensitivity to model-parameter initialization was exhibited.
An Automatic Clustering Technique for Optimal ClustersIJCSEA Journal
This document presents a new automatic clustering algorithm called Automatic Merging for Optimal Clusters (AMOC). AMOC is a two-phase iterative extension of k-means clustering that aims to automatically determine the optimal number of clusters for a given dataset. In the first phase, AMOC initializes a large number of clusters k using k-means. In the second phase, it iteratively merges the lowest probability cluster with its closest neighbor, recomputing metrics each time to evaluate if the merge improved clustering quality. The algorithm stops merging once no improvements are found. Experimental results on synthetic and real datasets show AMOC finds nearly optimal cluster structures in terms of number, compactness and separation of clusters.
Number of sources estimation using a hybrid algorithm for smart antennaIJECEIAES
The number of sources estimation is one of the vital key technologies in smart antenna. The current paper adopts a new system that employs a hybrid algorithm of artificial bee colony (ABC) and complex generalized Hebbian (CGHA) neural network to Bayesian information criterion (BIC) technique, aiming to enhance the accuracy of number of sources estimation. The advantage of the new system is that no need to compute the covariance matrix, since its principal eigenvalues are computed using the CGHA neural network for the received signals. Moreover, the proposed system can optimize the training condition of the CGHA neural network, therefore it can overcome the random selection of initial weights and learning rate, which evades network oscillation and trapping into local solution. Simulation results of the offered system show good responses through reducing the required time to train the CGHA neural network, fast converge speed, effectiveness, in addition to achieving the correct number of sources.
IRJET- Expert Independent Bayesian Data Fusion and Decision Making Model for ...IRJET Journal
This document proposes an expert independent Bayesian data fusion and decision making model for multi-sensor systems smart control. The model uses a Naive Bayes classifier to predict the system state based only on prior and current sensor data. Simulations of a three sensor system (soil temperature, air temperature, and moisture) achieved an overall prediction accuracy of more than 96%. However, real-world implementation of the proposed algorithm is still needed.
Similar to THE APPLICATION OF BAYES YING-YANG HARMONY BASED GMMS IN ON-LINE SIGNATURE VERIFICATION (20)
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Programming Foundation Models with DSPy - Meetup Slides
THE APPLICATION OF BAYES YING-YANG HARMONY BASED GMMS IN ON-LINE SIGNATURE VERIFICATION

International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 5, No. 6, November 2014
DOI: 10.5121/ijaia.2014.5602

Xiaosha Zhao, Mandan Liu
Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology
ABSTRACT

In this contribution, a Bayes Ying-Yang (BYY) harmony based approach for on-line signature verification is presented. In the proposed method, simple but effective Gaussian Mixture Models (GMMs) are used to represent each user's signature model based on the prior information collected. Unlike earlier works, we use the Bayes Ying-Yang machine combined with the harmony function to achieve Automatic Model Selection (AMS) during the parameter learning of the GMMs, so that a better approximation of the user model is assured. Experiments on a database from the First International Signature Verification Competition (SVC 2004) confirm that this combined algorithm yields quite satisfactory results.

KEYWORDS

Signature verification, Bayes Ying-Yang machine, Gaussian Mixture Models, AMS
1. INTRODUCTION
The on-line signature verification task can be expressed as follows: given a complete on-line signature \(U\) and a claimed user \(c\), decide whether \(c\) indeed produced the signature \(U\) [1]. To implement this procedure mathematically, a model \(\Theta_c\) for the user \(c\) and an antithetical model \(\Theta_c^-\) need to be learnt, so that a score function \(S(U, \Theta_c, \Theta_c^-)\) can be calculated and compared against a pre-set threshold \(T\) to decide the authenticity of the signature \(U\):

\[
S(U, \Theta_c, \Theta_c^-)\;
\begin{cases}
\geq T & \text{accept the claimed identity}\\
< T & \text{reject the claimed identity}
\end{cases}
\tag{1}
\]

Based on the above theory, selecting a good model is the most important step in designing a signature verification system. Instead of the most commonly used distance-based Dynamic Time Warping (DTW) [2], or the feature-based statistical method, Hidden Markov Modeling (HMM) [3][4], in this paper we propose a new BYY based GMMs to build the signature models for the users. GMM based recognizers are conceptually less complex than HMMs, which leads to significantly shorter training time as well as fewer parameters to learn [5]. Distinguished from earlier works in which the cluster number is pre-set and the same for all users [6], the BYY based GMMs can decide the optimal cluster number automatically according to the data distribution of each user during parameter learning, which improves the performance of the algorithm.
This paper is structured as follows: the introduction gives some definitions and a brief description of the main idea of this work. In Section 2, the BYY based GMMs method we focus on is detailed. Section 3 presents the feature selection and data processing used in our work, followed by the model training and similarity score computation in Section 4. The experimental results, as well as the performance evaluation of the proposed method, are presented in Section 5.
2. BYY BASED GMMS FOR SIGNATURE VERIFICATION
2.1 Gaussian Mixture Models
GMMs are well-known and widely referenced statistical models in many pattern recognition applications. Represented as a weighted linear combination of Gaussian probability functions, as shown in equation (2), they are versatile modeling tools that can approximate any probability density function (pdf) given a sufficient number of components, while imposing only minimal assumptions about the modeled random variables:

\[
p(u \mid \Theta_k) = \sum_{j=1}^{k} \alpha_j q(u \mid \theta_j),
\qquad
q(u \mid \theta_j) = G(u \mid m_j, \Sigma_j)
= \frac{1}{(2\pi)^{d/2} |\Sigma_j|^{1/2}}
\, e^{-\frac{1}{2}(u - m_j)^{T} \Sigma_j^{-1} (u - m_j)}
\tag{2}
\]

where \(u \in R^d\) are the feature vectors that represent a handwritten signature, \(\Theta_k = \alpha \cup \{\theta_j\}_{j=1}^{k}\), \(\alpha = [\alpha_1, \ldots, \alpha_k]\), \(\theta_j = (m_j, \Sigma_j)\), \(\alpha_j\) is the mixing weight of the \(j\)th component with each \(\alpha_j \geq 0\) and \(\sum_{j=1}^{k} \alpha_j = 1\), and \(G(u \mid m_j, \Sigma_j)\) denotes a Gaussian density with mean \(m_j\) and covariance matrix \(\Sigma_j\). Each \(q(u \mid \theta_j)\) is called a component, and \(k\) is the component number, i.e. the cluster number.
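As a concreteness check, equation (2) can be sketched in a few lines of numpy; the function name `gmm_pdf` and the loop structure are ours, not the paper's:

```python
import numpy as np

def gmm_pdf(u, alphas, means, covs):
    """Evaluate the GMM density of equation (2):
    p(u | Theta_k) = sum_j alpha_j * G(u | m_j, Sigma_j)."""
    d = u.shape[0]
    total = 0.0
    for alpha, m, S in zip(alphas, means, covs):
        diff = u - m
        # Gaussian normalizing constant (2*pi)^(d/2) * |Sigma_j|^(1/2)
        norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(S))
        total += alpha * np.exp(-0.5 * diff @ np.linalg.solve(S, diff)) / norm
    return total
```

For a single standard-normal component in two dimensions, `gmm_pdf(np.zeros(2), [1.0], [np.zeros(2)], [np.eye(2)])` evaluates to \(1/(2\pi)\).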
To learn the unknown parameters \(\Theta_k\), the EM algorithm has been used in an iterative mode. However, one defect of this method is that the component number \(k\) needs to be set manually; when it is not consistent with the actual data distribution, the learning falls into a local optimum, which further impacts the accuracy of the verification. Some prior works pick several promising values, run each of them, and choose the best one as \(k\). Besides the additional time cost, the discontinuity of the values tried in such a method can also miss the best choice for \(k\). The BYY approach, on the other hand, can choose the optimal cluster number automatically during parameter learning.
2.2 The Bayes Ying-Yang theory and harmony function
A BYY system describes each data vector \(u \in U \subset R^d\) and its corresponding inner representation \(y \in Y\) using two types of Bayesian decomposition of the joint density, \(p(u, y) = p(y \mid u)\,p(u)\) and \(q(u, y) = q(u \mid y)\,q(y)\), named the Yang machine and the Ying machine, respectively [7]. Given a data set \(U = \{u_t\}_{t=1}^{N}\) from the observation space, where \(N\) is the total number of samples, the task of learning on a BYY system is mainly to specify each of \(p(y \mid u)\), \(p(u)\), \(q(u \mid y)\), \(q(y)\) with a harmony learning principle implemented by maximizing the functional [8]:

\[
H(p \,\|\, q) = \int p(y \mid u)\, p(u) \ln\left[ q(u \mid y)\, q(y) \right] du\, dy \;-\; \ln z_q
\tag{3}
\]

where \(z_q\) is a regularization term.

In fact, maximization of the harmony function \(H(p \,\|\, q)\) pushes towards the best parameter match as well as the least structural complexity, thus producing the desirable property of AMS, as long as \(k\) is set larger than the true number of components in the sample data \(U\). Based on this property, the algorithm can be applied to GMM learning to strive for better accuracy.
3. FEATURE SELECTION AND DATA PROCESSING
The signature sample data employed in our work are sampled by pen tablets, which detect the horizontal position \(x_t\), vertical position \(y_t\), pressure \(p_t\) and azimuth \(a_t\) of the pen point, as well as the elevation of the pen. Besides, the sensor also records the pen-up points \(z_t\) (where \(p_t = 0\)). So the raw signature vector can be expressed as follows:

\[
\widetilde{u_t} = [x_t, y_t, p_t, a_t, z_t]
\tag{4}
\]

Referring to former experimental examples [2], as well as considering the practical application, we restricted the investigation to the horizontal position, vertical position and pressure data. Besides, to make the features more discriminative, two dynamic features, the trajectory tangent angle \(\theta_t\) and the instantaneous velocity \(v_t\), both of which are difficult to reproduce based only on visual inspection [9], are computed as follows:

\[
\theta_t = \arctan\frac{\dot{y}_t}{\dot{x}_t},
\qquad
v_t = \sqrt{\dot{x}_t^2 + \dot{y}_t^2}
\tag{5}
\]

where \(\dot{x}_t\), \(\dot{y}_t\) represent the first derivatives of \(x_t\) and \(y_t\) with respect to time. Finally, we get the basic feature vector for each sample:

\[
u_t' = [x_t, y_t, p_t, v_t, \theta_t]
\tag{6}
\]
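A minimal numpy sketch of equation (5); the finite-difference derivatives via `np.gradient` and the quadrant-aware `arctan2` (in place of the plain arctan written in the paper) are our choices:

```python
import numpy as np

def dynamic_features(x, y):
    """Trajectory tangent angle theta_t and instantaneous velocity v_t
    from sampled pen positions, per equation (5)."""
    x_dot = np.gradient(np.asarray(x, dtype=float))  # first derivative of x_t
    y_dot = np.gradient(np.asarray(y, dtype=float))  # first derivative of y_t
    theta = np.arctan2(y_dot, x_dot)  # arctan(y_dot / x_dot), quadrant-aware
    v = np.hypot(x_dot, y_dot)        # sqrt(x_dot^2 + y_dot^2)
    return theta, v
```

On a straight diagonal stroke (\(x_t = y_t = t\)) this yields a constant angle of \(\pi/4\) and a constant speed of \(\sqrt{2}\).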
Figure 1 gives an example of the signature data from one user.

To eliminate the different dynamic ranges of the features and ensure a better learning result, each individual feature \(u_{dt}' \in \{x_t, y_t, p_t, \theta_t, v_t\}\), with \(d = 1, \ldots, 5\), is transformed into a zero-mean, unit-variance normal distribution using:

\[
u_{dt} = \frac{u_{dt}' - \overline{u_d'}}{\sigma_d'}
\tag{7}
\]

where \(\overline{u_d'}\) is the mean value of the \(d\)th dimension and \(\sigma_d'\) the corresponding standard deviation. After the transformation, we get the unified signature vector \(u_t\):

\[
u_t = [\overline{x_t}, \overline{y_t}, \overline{p_t}, \overline{v_t}, \overline{\theta_t}]
\tag{8}
\]
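The per-dimension normalization of equation (7) amounts to a column-wise z-score; a short sketch, where the matrix layout (samples in rows, the five features in columns) is our assumption:

```python
import numpy as np

def normalize_features(U):
    """Transform each feature dimension (column) of an (N, 5) sample matrix
    to zero mean and unit variance, per equation (7)."""
    U = np.asarray(U, dtype=float)
    return (U - U.mean(axis=0)) / U.std(axis=0)
```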
[Figure 1: Signature data (x_t, y_t, p_t, v_t and θ_t plotted against the sample number)]

And the final complete observation comes as:
\[
U = [U_1, \ldots, U_c, \ldots, U_M]
\tag{9}
\]

where \(M\) is the total number of users.
The complete verification process is described in the following flow chart:
[Figure 2: The complete verification flow chart. The whole sample set passes through feature selection and normalization; the samples for modeling are used for GMM training (based on BYY), and the remaining samples then go through similarity score computation to produce the test results.]
4. MODEL TRAINING AND SIGNATURE SCORE COMPUTING

4.1 Model training

The main task of the model training is to maximize the harmony function \(H(p \,\|\, q)\). In our work, we chose the annealing learning algorithm proposed in [10], which sets the regularization term \(z_q\) in equation (3) to one, and \(p(u)\) to an empirical density estimation:

\[
p(u) = \frac{1}{N} \sum_{t=1}^{N} K(u - u_t)
\tag{9}
\]

where \(K(\cdot)\) is a prefixed kernel function [8], which further converges to the delta function:

\[
K(u - u_t) = \begin{cases} +\infty, & u = u_t \\ 0, & u \neq u_t \end{cases}
\tag{10}
\]
According to Bayes' law and the definition of GMMs:

\[
p(y = j \mid u_t) = \frac{\alpha_j\, q(u_t \mid \theta_j)}{q(u_t \mid \Theta_k)},
\qquad
q(u \mid \Theta_k) = \sum_{j=1}^{k} \alpha_j\, q(u \mid \theta_j)
\tag{11}
\]

where \(q(u \mid \theta_j) = q(u \mid y = j)\), \(\theta_j\) represents the unknown parameters in each component, and \(\Theta_k = \{\alpha_j, \theta_j\}_{j=1}^{k}\) represents all the parameters.
Substituting (9)-(11) into (3), we get:

\[
L(\Theta_k) = H(p \,\|\, q)
= \frac{1}{N} \sum_{t=1}^{N} \sum_{j=1}^{k}
\frac{\alpha_j\, q(u_t \mid \theta_j)}{\sum_{l=1}^{k} \alpha_l\, q(u_t \mid \theta_l)}
\ln\left[ \alpha_j\, q(u_t \mid \theta_j) \right]
\tag{12}
\]

As we assume the components to be Gaussian functions:

\[
q(u_t \mid \theta_j) = q(u_t \mid m_j, \Sigma_j)
= \frac{1}{(2\pi)^{d/2} |\Sigma_j|^{1/2}}
\, e^{-\frac{1}{2}(u_t - m_j)^{T} \Sigma_j^{-1} (u_t - m_j)}
\tag{13}
\]

with \(\Theta_k\) changed into \(\Theta_k = \{\alpha_j, m_j, \Sigma_j\}_{j=1}^{k}\), while \(p(j \mid u_t)\) is a free probability distribution under the basic probability constraints. In this situation, the harmony function can be rewritten as:

\[
L(\Theta) = \frac{1}{N} \sum_{t=1}^{N} \sum_{j=1}^{k} p(j \mid u_t)
\ln\left[ \alpha_j\, q(u_t \mid m_j, \Sigma_j) \right]
\tag{14}
\]

with the parameters \(\Theta = \{\alpha_j, m_j, \Sigma_j, p(j \mid u_t), t = 1, \ldots, N\}_{j=1}^{k}\).

However, one problem with learning directly on equation (14) is that it reduces to the hard-cut EM algorithm [11], which can easily be trapped in a local maximum when the component number \(k\) is set bigger than the true one during training, as stated earlier. To get an optimal \(k\), the annealing algorithm attaches a softening term to \(L(\Theta)\), as in (15):

\[
L_\lambda(\Theta) = \frac{1}{N} \sum_{t=1}^{N} \sum_{j=1}^{k} p(j \mid u_t)
\ln\left[ \alpha_j\, q(u_t \mid m_j, \Sigma_j) \right]
+ \lambda\, O_N\!\left(p(y \mid u)\right)
\tag{15}
\]

where

\[
O_N\!\left(p(y \mid u)\right) = -\frac{1}{N} \sum_{t=1}^{N} \sum_{j=1}^{k}
p(j \mid u_t) \ln p(j \mid u_t)
\tag{16}
\]

By controlling \(\lambda \to 0\) from \(\lambda = 1\), the maximum of \(L_\lambda(\Theta)\) can lead to the global maximum of the harmony function \(L(\Theta)\).
The annealing learning algorithm can be realized by alternately maximizing \(L_\lambda(\Theta)\) with respect to \(\Theta_1 = \{p(j \mid u_t), t = 1, \ldots, N\}_{j=1}^{k}\) and \(\Theta_2 = \Theta_k\), as follows:

\[
p(j \mid u_t) = \frac{\left[\alpha_j\, q(u_t \mid m_j, \Sigma_j)\right]^{1/\lambda}}
{\sum_{l=1}^{k} \left[\alpha_l\, q(u_t \mid m_l, \Sigma_l)\right]^{1/\lambda}}
\tag{17}
\]

\[
\alpha_j^{*} = \frac{1}{N} \sum_{t=1}^{N} p(j \mid u_t)
\tag{18}
\]

\[
m_j^{*} = \frac{1}{\sum_{t=1}^{N} p(j \mid u_t)} \sum_{t=1}^{N} p(j \mid u_t)\, u_t
\tag{19}
\]

\[
\Sigma_j^{*} = \frac{1}{\sum_{t=1}^{N} p(j \mid u_t)} \sum_{t=1}^{N} p(j \mid u_t)
\left(u_t - m_j^{*}\right)\left(u_t - m_j^{*}\right)^{T}
\tag{20}
\]

At last, we get the optimal model for each user, which will be used in the following steps.
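The alternating updates (17)-(20) can be sketched as follows. Everything beyond the four formulas is a simplified reading on our part: the initialization, the λ schedule, and the pruning of components whose mixing weight collapses (our stand-in for the automatic model selection that the harmony learning induces) are assumptions, not details given in the paper.

```python
import numpy as np

def gaussian_pdf(U, m, S):
    """Row-wise Gaussian density G(u_t | m, Sigma) for a sample matrix U."""
    d = U.shape[1]
    diff = U - m
    norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(S))
    return np.exp(-0.5 * np.einsum('ti,ij,tj->t', diff, np.linalg.inv(S), diff)) / norm

def byy_annealing(U, k, lam_schedule, eps=1e-3):
    """Annealing learning for a k-component GMM: alternate the tempered
    posterior update (17) with the parameter updates (18)-(20) while
    lambda decays from 1 toward 0. Components whose mixing weight falls
    below eps are pruned (our assumed realization of AMS)."""
    N, d = U.shape
    rng = np.random.default_rng(0)
    alphas = np.full(k, 1.0 / k)
    means = U[rng.choice(N, size=k, replace=False)].astype(float)
    covs = np.array([np.cov(U.T) + 1e-3 * np.eye(d) for _ in range(k)])
    for lam in lam_schedule:
        # (17): p(j | u_t) proportional to [alpha_j q(u_t | m_j, Sigma_j)]^(1/lam)
        dens = np.stack([a * gaussian_pdf(U, m, S)
                         for a, m, S in zip(alphas, means, covs)], axis=1)
        logp = np.log(dens + 1e-300) / lam
        logp -= logp.max(axis=1, keepdims=True)  # for numerical stability
        post = np.exp(logp)
        post /= post.sum(axis=1, keepdims=True)
        # (18)-(20): mixing weights, means, covariances
        nj = post.sum(axis=0)
        alphas = nj / N
        means = (post.T @ U) / nj[:, None]
        covs = np.array([(post[:, j, None] * (U - means[j])).T @ (U - means[j]) / nj[j]
                         + 1e-6 * np.eye(d) for j in range(k)])
        keep = alphas > eps  # drop redundant components
        alphas, means, covs = alphas[keep], means[keep], covs[keep]
        alphas /= alphas.sum()
        k = int(keep.sum())
    return alphas, means, covs
```

On toy data with two well-separated clusters and an over-large initial \(k\), the surviving weights still sum to one and the fitted means stay finite; how aggressively redundant components are pruned depends on the schedule and the eps floor.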
4.2 Signature score computation

As we have access to both genuine and forgery signature data, the test statistic we use for the signature score computation is the ratio of the posterior probabilities. Suppose the prior probability of a forgery is \(p\); then \(1 - p\) is the prior probability of a genuine signature. Let \(G(U, \Theta)\) denote the Gaussian density of the model \(\Theta\) evaluated at \(U\). The signature score can then be expressed as in equation (20):

\[
S(U, \Theta_c, \Theta_c^-) = \frac{G(U, \Theta_c)\,(1 - p)}{G(U, \Theta_c^-)\; p}
\tag{20}
\]

According to the Bayes-optimal classification rule, when \(S(U, \Theta_c, \Theta_c^-) < 1\), the probability that the test signature \(U\) belongs to \(\Theta_c^-\) is bigger than that of \(\Theta_c\), so we decide it to be a forgery, and otherwise genuine [12]. However, as the sample users are limited in our work, we adjust the threshold to 2 to get a better recognition rate.
5. EXPERIMENT RESULTS

5.1 The performance of the BYY based GMMs

The data used in our experiment are from the First International Signature Verification Competition (SVC 2004) [13]. In this experiment, we use 1600 signatures from 40 users, consisting of 20 genuine signatures and 20 forgeries for each user, which sets the prior \(p\) in equation (20) to 0.5. During the experiment, the first 5 of the 20 genuine signatures from each user were used to build the model \(\Theta_c\), and the first 5 forgeries were used for the model \(\Theta_c^-\).

As stated above, the BYY based GMMs can choose the optimal cluster number \(k\) automatically, so different users can have different component numbers in their GMMs \(\Theta_c\). Moreover, according to our experimental results, the component numbers for \(\Theta_c\) and \(\Theta_c^-\) of the same user can also differ, as shown in Table 1.
Table 1 The component numbers (k) in Θ_c and Θ_c^- of each user

User  k in Θ_c  k in Θ_c^-  |  User  k in Θ_c  k in Θ_c^-
User1 15 23 User21 8 8
User2 5 5 User22 8 8
User3 18 20 User23 26 34
User4 21 18 User24 8 8
User5 8 8 User25 18 16
8. International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 5, No. 6, November 2014
22
User6 16 16 User26 8 8
User7 30 17 User27 8 8
User8 8 8 User28 16 12
User9 24 24 User29 30 28
User10 19 12 User30 22 22
User11 20 18 User31 8 8
User12 24 23 User32 16 16
User13 32 32 User33 14 6
User14 16 16 User34 22 8
User15 24 24 User35 8 8
User16 32 32 User36 32 32
User17 16 16 User37 10 18
User18 32 32 User38 16 16
User19 5 5 User39 32 32
User20 14 12 User40 13 22
Based on the models built in Table 1, along with the chosen threshold, we can finally have the signatures recognized. To give an overall view of the recognition results, Figure 3 shows two examples of the logarithm of the similarity scores computed against the threshold in our experiment, for User 1 and User 5 respectively.
[Figure 3: The verification results of User 1 and User 5 (the logarithm of the similarity score of each signature plotted against the threshold)]
Table 2 The FAR, FRR and verification rate of each user by the BYY based GMMs

User  FAR(%)  FRR(%)  Verification Rate(%)  |  User  FAR(%)  FRR(%)  Verification Rate(%)
User1 10.0000 10.0000 80.0000 User21 0.0000 0.0000 100.0000
User2 2.5000 5.0000 92.5000 User22 0.0000 0.0000 100.0000
User3 2.5000 0.0000 97.5000 User23 0.0000 0.0000 100.0000
User4 5.0000 0.0000 95.0000 User24 0.0000 2.5000 97.5000
User5 0.0000 0.0000 100.0000 User25 2.5000 0.0000 97.5000
User6 0.0000 0.0000 100.0000 User26 0.0000 5.0000 95.0000
User7 0.0000 10.0000 90.0000 User27 15.0000 0.0000 85.0000
User8 5.0000 12.5000 82.5000 User28 0.0000 2.5000 97.5000
User9 0.0000 0.0000 100.0000 User29 0.0000 0.0000 100.0000
User10 2.5000 0.0000 97.5000 User30 0.0000 0.0000 100.0000
User11 0.0000 5.0000 95.0000 User31 0.0000 0.0000 100.0000
User12 0.0000 2.5000 97.5000 User32 20.0000 0.0000 80.0000
User13 0.0000 0.0000 100.0000 User33 12.5000 0.0000 87.5000
User14 0.0000 0.0000 100.0000 User34 0.0000 2.5000 97.5000
User15 7.5000 5.0000 87.5000 User35 7.5000 0.0000 92.5000
User16 0.0000 12.5000 87.5000 User36 0.0000 12.5000 87.5000
User17 7.5000 0.0000 92.5000 User37 15.0000 5.0000 80.0000
User18 0.0000 0.0000 100.0000 User38 5.0000 7.5000 87.5000
User19 0.0000 0.0000 100.0000 User39 0.0000 0.0000 100.0000
User20 0.0000 0.0000 100.0000 User40 0.0000 0.0000 100.0000
The False Accept Rate (FAR), the False Reject Rate (FRR) and the verification rate are listed in Table 2. From Table 2 we can see that the BYY based GMMs achieve an average verification rate of 94.5000%, with the FAR at 3.0000% and the FRR at 2.5000%.
5.2 Comparison with the traditional GMMs and DTW
To evaluate the performance of the BYY based GMMs, we apply two other recognition methods, the traditional GMMs and DTW, to the same signature database.

In the traditional GMMs experiment, the feature vector, the computation of the similarity score and the threshold are the same as in the BYY based GMMs experiment, except that the models are learnt by the EM algorithm. As the component number has to be set beforehand, we set \(k\) to 8, 16, 24 and 32, respectively, to find the best result. The average FAR, FRR and verification rate are listed in Table 3.
Table 3 The average FAR, FRR and verification rate of the 40 users by normal GMMs
k=8 k=16 k=24 k=32
FAR(%) 6.4375 7.7500 12.9375 14.9375
FRR(%) 6.5625 4.0625 5.2500 9.1875
Verification Rate(%) 87.0000 88.1875 81.8125 75.8750
From Table 3 it can be concluded that the normal GMMs achieves the best performance with an
average verification rate of 88.1875%, FAR at 7.7500% and FRR at 4.0625% when k is set to 16.
As to the method of DTW, the feature vector is extracted by way of interpolation and wavelet
function, including total sample time, the ratio of height and width, standard deviation in horizontal
and vertical direction, standard deviation of pressure, rotation and azimuth, average velocity in
horizontal and vertical direction, average pressure, azimuth and rotation, pressure, rotation and
azimuth energy extracted by wavelet function, adding up to 36 features altogether.
Among the first 5 genuine signatures, the smallest values of each of the 36 features form a new
36-dimensional feature vector V_s, and the biggest values of each of the 36 features form another 36-dimensional feature vector V_b.
V_s is used as the model, and the matching distance between V_s and V_b calculated by DTW is used as
the threshold. The average recognition rate is 68.9375%, the FAR is 17.6875% and the FRR is
13.3750%.
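For reference, the matching distance used in this baseline can be sketched with the classic dynamic-time-warping recurrence. The sequence values below are illustrative placeholders, not the 36 per-user feature values from the experiment.

```python
# Hedged sketch: classic DTW distance between two scalar feature sequences,
# of the kind used to compare the model vector with a test vector.

def dtw(a, b):
    """Dynamic time warping distance between two scalar sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best alignment cost of a[:i] against b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

v_model = [0.1, 0.5, 0.9, 0.4]   # toy stand-in for the model vector
v_test  = [0.2, 0.6, 1.0, 0.5]   # toy stand-in for a test vector
print(dtw(v_model, v_test))
```

A test signature would then be accepted when its DTW distance to the model stays below the threshold distance computed from the enrollment signatures.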
5.3 Conclusion
Comparing with the experimental results of the traditional GMMs and DTW, we can see that the BYY
based GMMs has a significantly better performance, which shows that the BYY based GMMs can
build relatively accurate models for the users, and that its application in signature verification produces
satisfactory results on these data samples. It can therefore be a promising solution in this field.
REFERENCES
[1] J. Richiardi, A. Drygajlo. Gaussian mixture models for on-line signature verification[A]. WBMA’03
Proceedings of the 2003 ACM SIGMM workshop on Biometrics methods and applications[C].New
York, USA: 2003. 115-122.
[2] Y. Sato, K. Kogure. Online signature verification based on shape, motion and writing pressure[A].
Proceedings of the 6th International Conference on Pattern Recognition[C]. 1982. 823-826.
[3] L. R. Rabiner, B. H. Juang. Fundamentals of speech recognition[M]. Englewood Cliffs, NJ : Prentice
Hall, 1993.
[4] L. Yang, B. K. Widjaja, R. Prasad. Application of hidden markov models for signature verification[J].
Pattern Recognition, 1995. 28:161-170.
[5] A. Schlapbach, H. Bunke. Off-line writer identification using Gaussian mixture models[A]. The 18th
International Conference on Pattern Recognition[C], Hong Kong, China: 2006.992-995.
[6] C. J. Xiao, C. L. Li, Y. W. Qiao, M. Zhang. Wavelet packs and Gauss model for off-line handwritten
signature recognition[J]. Computer Engineering and Applications. 2009, 46(36): 161-164.
[7] J. W. Ma. Automated model selection (AMS) on finite mixture: a new perspective for data modeling[J].
Chinese Journal of Engineering Mathematics. 2007, 24(4): 571-584.
[8] L. Xu. Best harmony, unified RPCL, and automated model selection for unsupervised and supervised
learning on Gaussian mixtures, three-layer nets and ME-RBF-SVM models[J]. International Journal of
Neural Systems. 2001, 11(1): 43-69.
[9] R. Kashi, J. Hu, W. Nelson, W. Turin. A Hidden Markov Model approach to online handwritten
signature recognition[J]. International Journal on Document Analysis and Recognition.1998,
1(2):102-109.
[10] J. Ma, T. Y. Wang, L. Xu. An annealing approach to BYY harmony learning on Gaussian mixture with
automated model selection[A]. Proceedings of the 2003 International Conference on Neural Networks and
Signal Processing (ICNNSP'03)[C], Nanjing, China. 2003: 23-28.
[11] L. Xu. Bayesian Ying-Yang machine, clustering and number of clusters[J]. Pattern Recognition Letters.
1997, 18:1167-1178.
[12] W. Nelson, W. Turin, T. Hastie. Statistical methods for on-line signature verification[J]. International
Journal of Pattern Recognition and Artificial Intelligence. 1994, 8(3):749-770.
[13] SVC: First International Signature Verification Competition[EB/OL]. http://www.cs.ust.hk/svc2004/.
Authors
Xiaosha Zhao (1989-), female, obtained her B.Eng. degree in Measurement and Control
Techniques and Instrument from East China University of Science and Technology in
2012. She is currently a Master's degree student in the Department of Automation at East China
University of Science and Technology. Her research interests include statistical learning
and clustering.
Mandan Liu (1973-), female, Professor, Doctor, vice-president of School of Information
Science and Engineering, East China University of Science and Technology, Executive
Member of China Academy of System Simulation, majoring in Intelligent Control Theory
and Application and Intelligent Optimization Algorithm and Application.