MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER... (gerogepatton)
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models use second-order polynomials to describe the shapes in a training set. To estimate the regression models, we extract the coefficients that describe the variation within each shape class; a least-squares method is used to estimate these modes. We then train these coefficients with the Expectation-Maximization algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate the approach.
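As a minimal illustration of the least-squares step this abstract describes, a second-order polynomial can be fitted to landmark coordinates with NumPy. The data below are invented for the demo, not taken from the paper:

```python
import numpy as np

# Hypothetical landmark data: positions sampled along one handwritten
# stroke, parameterized by arc length (values made up for illustration).
t = np.linspace(0.0, 1.0, 10)          # arc-length parameter of the landmarks
y = 2.0 + 3.0 * t - 4.0 * t ** 2       # noiseless quadratic for a checkable demo

# Least-squares fit of a 2nd-order polynomial, as the abstract describes.
coeffs = np.polyfit(t, y, deg=2)       # returns [a2, a1, a0]

print(np.round(coeffs, 6))             # recovers [-4., 3., 2.]
```

On noisy real landmarks the recovered coefficients would only approximate the generating curve; here the data are exact, so the fit is too.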
This document presents a new method for interpolation called weighted average interpolation (WAI). WAI uses the concepts of positive and negative effect to determine the influence of neighboring data points. Correction factors are derived from Pascal's triangle to match the results of WAI to Lagrange interpolation. The method is extended to extrapolation and unevenly spaced data using similar concepts. WAI aims to reduce the number of operations compared to Lagrange interpolation while maintaining accuracy.
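The Lagrange interpolation that WAI is benchmarked against is easy to state in code; this is the standard baseline only, not a reproduction of the WAI method itself:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                # Basis polynomial: 1 at xi, 0 at every other node.
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 2 for x in xs]                 # samples of f(x) = x^2
print(lagrange_interpolate(xs, ys, 1.5))  # 2.25, exact for a quadratic
```

Counting the multiplications in the inner loop makes concrete the operation count that WAI aims to reduce.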
This document discusses and compares different methods for solving assignment problems. It begins with an abstract that defines assignment problems as optimally assigning n objects to m other objects in an injective (one-to-one) fashion. It then provides an introduction to the Hungarian method and a new proposed Matrix Ones Assignment (MOA) method. The body of the document provides details on modeling assignment problems with cost matrices, formulations as linear programs, and step-by-step explanations of the Hungarian and MOA methods. It includes an example solved using the Hungarian method.
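The cost-matrix assignment problem described here can be solved off the shelf; SciPy's `linear_sum_assignment` implements a Hungarian-style algorithm (the matrix below is a made-up example):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 3x3 cost matrix: cost[i, j] = cost of assigning job j to worker i.
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

row_ind, col_ind = linear_sum_assignment(cost)   # optimal one-to-one assignment
total = cost[row_ind, col_ind].sum()
print(list(zip(row_ind, col_ind)), total)        # minimum total cost is 5
```

Each row index is paired with exactly one column index, matching the injective assignment the abstract defines.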
Some Engg. Applications of Matrices and Partial Derivatives (SanjaySingh011996)
This document contains a submission by three students to Dr. Sona Raj Mam regarding partial differentiation, matrices and determinants, and eigenvectors and eigenvalues. It provides examples of how these mathematical concepts are applied in fields like engineering. Partial differentiation is used in economics to analyze demand and in image processing for edge detection. Matrices and determinants allow representing linear transformations in graphics software. Eigenvalues and eigenvectors have applications in areas like computer science, smartphone apps, and modeling structures in civil engineering. The document also provides real-world examples and references textbooks and websites for further information.
Contradictory of the Laplacian Smoothing Transform and Linear Discriminant An... (TELKOMNIKA JOURNAL)
The Laplacian smoothing transform uses negative diagonal elements to generate the new space. These negative diagonal elements yield negative new spaces, which weaken the dominant characteristics. Moreover, the Laplacian smoothing matrix is usually singular, so it cannot be solved to obtain the ordered eigenvalues and corresponding eigenvectors. In this research, we propose a model that generates positive diagonal elements to obtain positive new spaces, and an approach that overcomes the matrix singularity so that eigenvalues and eigenvectors can be found. First, the method calculates the contradictory of the Laplacian smoothing matrix. Second, we compute the new-space model on this contradictory matrix. Third, we calculate the eigenvectors of the discriminant analysis. Fourth, we compute the new-space model on the discriminant analysis, then select and merge features. The proposed method has been tested on four databases: ORL, YALE, UoB, and a local database (CAI-UTM). Overall, the results indicate that the proposed method overcomes both problems and delivers higher accuracy than similar methods.
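The singularity problem mentioned here is concrete: a rank-deficient symmetric matrix has zero eigenvalues. A small diagonal shift is one generic workaround (this is a textbook regularization, not the paper's contradictory-matrix method):

```python
import numpy as np

# A singular symmetric matrix (rank 1), standing in for a Laplacian
# smoothing matrix whose eigendecomposition is ill-posed.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

eps = 1e-6
A_reg = A + eps * np.eye(2)          # small diagonal shift removes the singularity

vals, vecs = np.linalg.eigh(A_reg)   # ascending eigenvalues of a symmetric matrix
print(vals)                          # approximately [eps, 2 + eps]
```

After the shift every eigenvalue is strictly positive, so the ordered eigenpairs the abstract needs can be extracted.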
Polynomial regression model of making cost prediction in mixed cost analysis (Alexander Decker)
This document presents a study comparing different regression models for predicting costs based on production levels. It finds that a cubic polynomial regression model provides a better fit than linear regression or the high-low method. The study uses cost and production data from a company to build linear, quadratic, and cubic regression models. It finds the cubic polynomial regression has the highest R-squared value and lowest p-value, indicating it is best able to model the cost patterns in the data. The document concludes the cubic polynomial regression provides a better approach for cost prediction than traditional linear regression or high-low methods.
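The degree comparison the study performs can be sketched with NumPy; the production/cost figures below are synthetic stand-ins, not the company data from the paper:

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical production levels and costs with a cubic trend plus noise.
rng = np.random.default_rng(0)
x = np.linspace(1, 10, 30)
y = 100 + 5 * x + 0.8 * x ** 3 + rng.normal(0, 5, x.size)

r2 = {}
for deg in (1, 2, 3):                      # linear, quadratic, cubic
    fit = np.polyval(np.polyfit(x, y, deg), x)
    r2[deg] = r_squared(y, fit)
    print(deg, round(r2[deg], 4))
```

Because the models are nested, R-squared cannot decrease as the degree rises; the study's point is that for cost data with curvature the cubic gain is substantial, not marginal.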
Medical Conferences, Pharma Conferences, Engineering Conferences, Science Conferences, Manufacturing Conferences, Social Science Conferences, Business Conferences, Scientific Conferences Malaysia, Thailand, Singapore, Hong Kong, Dubai, Turkey 2014 2015 2016
Global Research & Development Services (GRDS) is a leading academic event organizer, publishing Open Access Journals and conducting several professionally organized international conferences all over the globe annually. GRDS aims to disseminate knowledge and innovation with the help of its International Conferences and open access publications. GRDS International conferences are world-class events which provide a meaningful platform for researchers, students, academicians, institutions, entrepreneurs, industries and practitioners to create, share and disseminate knowledge and innovation and to develop long-lasting networks and collaborations.
GRDS is a blend of Open Access Publications and world-wide International Conferences and Academic events. The prime mission of GRDS is to make continuous efforts in transforming the lives of people around the world through education, application of research and innovative ideas.
Global Research & Development Services (GRDS) is also active in the field of Research Funding, Research Consultancy, Training and Workshops along with International Conferences and Open Access Publications.
International Conferences 2014 – 2015
Malaysia Conferences, Thailand Conferences, Singapore Conferences, Hong Kong Conferences, Dubai Conferences, Turkey Conferences, Conference Listing, Conference Alerts
The document provides an overview of five fundamental machine learning algorithms: linear regression, logistic regression, decision tree learning, k-nearest neighbors, and neural networks. It describes the problem statement, solution, and key aspects of each algorithm. For linear regression, it discusses minimizing the squared error loss to find the optimal regression line. Logistic regression maximizes the likelihood function to find the optimal classification model. Decision tree learning uses an ID3 algorithm to greedily construct a non-parametric model by optimizing the average log-likelihood.
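For the linear regression case, minimizing the squared error loss has a closed form; a small sketch with invented data:

```python
import numpy as np

# Minimizing squared error for simple linear regression gives:
#   slope = cov(x, y) / var(x),  intercept = mean(y) - slope * mean(x).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0                       # exact line, so the answer is checkable

slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()
print(slope, intercept)                 # 2.0 1.0
```

With noisy data the same formulas return the least-squares line rather than the exact generating line.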
This document discusses various types of regression modeling and linear regression. It provides examples of linear regression analysis on fraud data and discusses assessing goodness of fit. It also briefly covers non-linear regression, problem areas like heteroskedasticity and collinearity, and model selection methods. Linear regression is presented geometrically and the assumptions and computations of ordinary least squares regression are explained.
In this paper we have described histogram expansion. Histogram expansion is a technique of histogram equalization. In this we have described three different techniques of expansion namely dynamic range expansion, linear contrast expansion and symmetric range expansion. Each of these has their specific uses and advantages. For colored images linear contrast expansion is used. These all methods help in easy study of histograms and helps in image enhancement.
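Linear contrast expansion, one of the three techniques this summary names, is a single affine rescaling; a sketch on a tiny synthetic 8-bit image:

```python
import numpy as np

# Linear contrast expansion: stretch pixel values so the darkest pixel
# maps to 0 and the brightest to 255 (illustrative 2x2 example).
img = np.array([[50, 100],
                [150, 200]], dtype=np.float64)

lo, hi = img.min(), img.max()
stretched = (img - lo) / (hi - lo) * 255.0
print(stretched.astype(np.uint8))
```

After stretching, the histogram spans the full dynamic range, which is exactly the enhancement effect the abstract describes.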
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Development modeling methods of analysis and synthesis of fingerprint deforma... (IJECEIAES)
The aim of the current study is to develop methods for modeling, analyzing, and synthesizing deformations of fingerprint images, and to apply them to problems of automatic fingerprint identification. The introduction justifies the urgency of the problem and briefly reviews the thematic publications. The study surveys modern biometric technologies and methods of biometric identification, reviews fingerprint identification systems, and investigates distorting factors. The influence of deformation is singled out and the causes of fingerprint deformation are analyzed, along with a review of current ways of accounting for and modeling deformation in automatic fingerprint identification. The scientific novelty of the work lies in the development of information technologies for the analysis and synthesis of fingerprint-image deformations; its practical value lies in applying the developed methods, algorithms, and information technologies in fingerprint identification systems.
The document presents four numerical methods for finding the nth root of a positive number: the Fixed-b method, Adaptive-b method, Simplified Adaptive-b method, and Newton-Raphson Improved (NRI) method. It analyzes the convergence properties and rate of convergence for each method. The Fixed-b and Adaptive-b methods introduce a parameter b that can improve convergence, and the Adaptive-b method allows b to adapt at each iteration. The NRI method is derived from the Simplified Adaptive-b method and shows faster convergence than Newton-Raphson in some cases.
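For reference, the plain Newton-Raphson iteration that these methods improve upon is short; this shows only the standard baseline, not the paper's Fixed-b or Adaptive-b variants:

```python
def nth_root_newton(a, n, x0=1.0, tol=1e-12, max_iter=100):
    """Standard Newton-Raphson for f(x) = x^n - a.
    The update x - f(x)/f'(x) simplifies to ((n-1)x + a/x^(n-1)) / n."""
    x = x0
    for _ in range(max_iter):
        x_new = ((n - 1) * x + a / x ** (n - 1)) / n
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(nth_root_newton(32.0, 5))   # fifth root of 32 is 2.0
```

The b-parameterized methods in the paper modify this update to accelerate convergence; the baseline above is what their iteration counts are measured against.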
This document discusses curve fitting in Matlab. It introduces the Curve Fitting tool and how to use it to fit data to functions. Key aspects covered include importing data into the tool, selecting fitting functions, viewing and analyzing fit results, and plotting residuals. Examples are provided of fitting data to a sine wave, linear function with and without weights, and examining fit confidence bounds and predictions over different ranges. The document provides a tutorial on using Matlab's Curve Fitting tool to model experimental data with functions.
The document discusses assignment problems and the Hungarian method for solving them. It begins by introducing the concept of assignment problems where the goal is to assign n jobs to n workers in a way that maximizes profit or efficiency. It then provides the mathematical formulation of an assignment problem as minimizing a cost function subject to constraints. The bulk of the document describes the Hungarian method, a multi-step algorithm for finding optimal assignments. It involves row/column reductions, finding a complete assignment of zeros, drawing lines to cover remaining zeros, and modifying the cost matrix to increase the number of zeros. An example is provided to illustrate the method.
Solving ONE’S interval linear assignment problem (IJERA Editor)
This document presents a new method called the Matrix Ones Interval Linear Assignment Method (MOILA) for solving assignment problems with interval costs. It begins with definitions of assignment problems and interval analysis concepts. Then it describes the existing Hungarian method and provides an example solved using both Hungarian and MOILA. MOILA involves creating ones in the assignment matrix and making assignments based on the ones. The document outlines algorithms for MOILA as well as extensions to unbalanced and interval assignment problems. It provides an example of applying MOILA to solve a balanced interval assignment problem and compares the solutions to Hungarian. The document introduces MOILA as a systematic alternative to Hungarian for solving a variety of assignment problem types.
Stress-Strength Reliability of type II compound Laplace distribution (IRJET Journal)
The document discusses estimating the reliability (R) of a stress-strength model where the strength (Y) and stress (X) variables follow a type II compound Laplace distribution. R is defined as the probability that stress is less than strength (Pr(X<Y)).
It first provides background on stress-strength models and reliability (R) in engineering. It then defines the type II compound Laplace distribution and derives an expression for R when X and Y follow this distribution. Maximum likelihood estimation is used to estimate the three parameters of the distribution. Simulation studies validate the estimation algorithm. Applications of stress-strength models are discussed in fields like engineering, medicine, and genetics.
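The quantity R = Pr(X < Y) can always be sanity-checked by Monte Carlo. The sketch below uses ordinary Laplace variates, since NumPy has no type II compound Laplace sampler; the estimator itself is generic:

```python
import numpy as np

# Monte Carlo estimate of R = Pr(X < Y) for a stress-strength model.
# Ordinary Laplace distributions stand in for the paper's type II
# compound Laplace; only the estimation recipe carries over.
rng = np.random.default_rng(42)
stress = rng.laplace(loc=0.0, scale=1.0, size=200_000)    # X
strength = rng.laplace(loc=2.0, scale=1.0, size=200_000)  # Y

R = np.mean(stress < strength)
print(round(R, 3))   # close to the exact value 1 - e^{-2} ≈ 0.865 for this setup
```

Such a simulation is the same kind of check the paper's simulation studies use to validate the maximum likelihood estimates.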
The differential evolution (DE) algorithm has been applied as a powerful tool to find optimum switching angles for selective harmonic elimination pulse-width modulation (SHEPWM) inverters. However, DE's performance depends strongly on its control parameters. Conventional DE generally determines appropriate values of the control parameters either by trial and error or by a tuning technique, a process that is very time consuming. In this paper, an adaptive control parameter is proposed in order to speed up the DE algorithm in optimizing SHEPWM switching angles precisely. The proposed adaptive control parameter is shown to enhance the convergence of the DE algorithm without requiring initial guesses. The results for both negative and positive modulation index (M) also indicate that the proposed adaptive DE is superior to the conventional DE in generating SHEPWM switching patterns.
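SciPy ships a differential evolution optimizer; the toy run below applies it to a simple benchmark rather than the SHEPWM angle problem, just to show the interface:

```python
from scipy.optimize import differential_evolution

# Sphere function: a standard smooth benchmark with minimum 0 at the origin.
def sphere(x):
    return sum(xi ** 2 for xi in x)

# DE needs only box bounds, no initial guess, matching the abstract's point.
result = differential_evolution(sphere, bounds=[(-5, 5)] * 3, seed=1, tol=1e-8)
print(result.x, result.fun)
```

In the SHEPWM setting the objective would instead encode the harmonic-elimination equations, with the switching angles as the decision vector; the adaptive-parameter scheme in the paper replaces SciPy's fixed mutation/recombination settings.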
This document discusses regression analysis and linear regression models. It provides examples of how regression can be used to understand relationships between variables and predict values. Specifically, it examines a case study of how sales from a home renovation company (Triple A Construction) can be predicted based on area payroll. The regression line that best fits the data is calculated and metrics like the coefficient of determination are used to evaluate how well the regression model fits the data. Assumptions of the linear regression model like independent and normally distributed errors are also covered.
This is a special type of LPP (linear programming problem) in which the objective is to find the optimum allocation of a number of tasks (jobs) to an equal number of facilities (persons). We assume that each person can perform each job, but with varying degrees of efficiency. For example, a departmental head may have four persons available for assignment and four jobs to fill; his interest is then to find the assignment that best serves the department.
Iterative methods for the solution of saddle point problem (IAEME Publication)
This document summarizes an iterative method for solving saddle point problems that arise in mixed finite element approximations of Stokes fluid flow problems. It presents classical solution methods like primal-dual conjugate gradient algorithms. It then proposes a general primal-dual algorithm that combines these classical methods to solve the coupled primal and dual problems simultaneously. The algorithm defines descent directions as solutions to preconditioned residual equations to minimize both the primal and dual residuals at each iteration.
Developed a system that generates a 3D face using a video of single frontal face as the input
Implemented Principal Component Analysis and Active Shape Model (ASM) for facial feature point detection
Applied the POSIT algorithm to estimate the head pose for realistic shading
Modified the classic ASM and critically appraised my approach by comparing it with alternative approaches
The document discusses the assignment problem and provides examples of solving it using different methods. It can be summarized as:
1) The assignment problem involves allocating tasks or jobs to workers or machines in the most efficient way, often to minimize costs. It has special characteristics as each job can only be assigned to one worker.
2) The document provides examples of solving assignment problems using methods like the least cost method, stepping stone method, and the Hungarian method.
3) It also discusses how to model assignment problems as transportation problems and how to solve maximization versions by converting to minimization problems.
The Application Of Bayes Ying-Yang Harmony Based Gmms In On-Line Signature Ve... (ijaia)
In this contribution, a Bayes Ying-Yang (BYY) harmony based approach for on-line signature verification is presented. In the proposed method, a simple but effective Gaussian Mixture Model (GMM) is used to represent each user's signature model, based on the prior information collected. Unlike earlier works, in this paper we use the Bayes Ying-Yang machine combined with the harmony function to achieve Automatic Model Selection (AMS) during parameter learning for the GMMs, so that a better approximation of the user model is assured. Experiments on a database from the First International Signature Verification Competition (SVC 2004) confirm that this combined algorithm yields quite a satisfactory result.
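The GMM parameter learning mentioned here is classically done with EM; a minimal 1-D, two-component sketch on synthetic data (this omits the BYY harmony function and automatic model selection that are the paper's contribution):

```python
import numpy as np

# Synthetic data: two well-separated Gaussian clusters.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])

mu = np.array([-1.0, 1.0])      # crude initial means
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibilities of each component for each point
    # (the 1/sqrt(2*pi) constant cancels in the normalization).
    dens = pi * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted parameter updates
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / data.size

print(np.round(np.sort(mu), 2))    # close to the true means [-3., 3.]
```

Plain EM like this needs the number of components fixed in advance; the BYY harmony criterion in the paper is what lets superfluous components be pruned automatically.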
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE... (ijaia)
A support vector machine (SVM) learns the decision surface from two different classes of input points; in several applications, some of the input points are misclassified. In this paper, a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method to solve the bi-objective quadratic programming problem. An important contribution of the proposed model is that different efficient support vectors are obtained as the weighting values change. The numerical examples give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution among the generated efficient solutions.
LEARNING SPLINE MODELS WITH THE EM ALGORITHM FOR SHAPE RECOGNITION (gerogepatton)
This paper demonstrates how cubic spline (B-spline) models can be used to recognize 2-dimensional non-rigid handwritten isolated characters. Each handwritten character is represented by a set of non-overlapping, uniformly distributed landmarks. The spline models are constructed using cubic polynomials to model the shapes under study. The approach is a two-stage process. The first stage is learning: we construct a mixture of spline class parameters to capture the variations in spline coefficients, using the Expectation-Maximization algorithm. The second stage is recognition: here we use the Fréchet distance to compute the variations between the spline models and a test spline shape. We test the approach on a set of handwritten Arabic letters.
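Fitting a cubic B-spline through landmark-like samples is a one-liner with SciPy; the sine samples below are illustrative stand-ins for the paper's Arabic-letter landmarks:

```python
import numpy as np
from scipy.interpolate import splev, splrep

# Landmark-like samples along a smooth curve (synthetic, for illustration).
t = np.linspace(0, 2 * np.pi, 20)
y = np.sin(t)

tck = splrep(t, y, k=3, s=0)        # cubic B-spline, exact interpolation (s=0)
y_mid = splev(np.pi / 2, tck)       # evaluate the spline between the knots
print(float(y_mid))                 # close to sin(pi/2) = 1.0
```

The `tck` tuple holds the knots and B-spline coefficients; those coefficients are the per-shape parameters whose variation the paper's EM mixture captures.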
A Computationally Efficient Algorithm to Solve Generalized Method of Moments ... (Waqas Tariq)
Generalized method of moment estimating function enables one to estimate regression parameters consistently and efficiently. However, it involves one major computational problem: in complex data settings, solving generalized method of moments estimating function via Newton-Raphson technique gives rise often to non-invertible Jacobian matrices. Thus, parameter estimation becomes unreliable and computationally inefficient. To overcome this problem, we propose to use secant method based on vector divisions instead of the usual Newton-Raphson technique to estimate the regression parameters. This new method of estimation demonstrates a decrease in the number of non-convergence iterations as compared to the Newton-Raphson technique and provides reliable estimates.
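The derivative-free flavor of the secant idea is easy to show in the scalar case; this is the textbook secant root-finder, not the paper's vector-division variant for estimating equations:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant root-finder: replaces Newton's derivative with a finite
    difference through the last two iterates, so no Jacobian is needed."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                       # flat secant line: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x ** 2 - 2.0, 1.0, 2.0)
print(root)   # close to sqrt(2)
```

Avoiding the derivative is exactly what sidesteps the non-invertible Jacobians the abstract complains about; the paper generalizes this update to vector-valued estimating functions.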
Symmetric quadratic tetration interpolation using forward and backward opera... (IJECEIAES)
The existing interpolation method based on the second-order tetration polynomial is asymmetric: the interpolation results for each considered region have individual characteristics. Although its interpolation performance has been better than the conventional methods, the symmetric property is also necessary for signal interpolation. In this paper, we propose symmetric interpolation formulas derived from the second-order tetration polynomial. A combination of the forward and backward operations is employed to construct two types of symmetric interpolation. Several resolutions of the fundamental signals were used to evaluate the signal-reconstruction performance. The results show that the proposed interpolations can reconstruct the fundamental signal with a peak signal-to-noise ratio (PSNR) superior to the conventional interpolation methods, except for cubic spline interpolation on the sine-wave signal; the visual results, however, show only a small difference. Moreover, our proposed interpolations converge to the steady state faster than cubic spline interpolation. In addition, increasing the number of options reinforces their sensitivity.
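The PSNR metric used for the comparisons above is computed as follows (the tiny reference/test images are invented for the demo):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.array([[10, 20], [30, 40]], dtype=np.float64)
noisy = ref + 1.0                     # uniform error of 1 gives MSE = 1
print(round(psnr(ref, noisy), 2))     # 48.13 dB
```

Higher PSNR means the reconstructed signal is closer to the original, which is the sense in which the proposed interpolations are compared against cubic splines.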
The document provides an overview of five fundamental machine learning algorithms: linear regression, logistic regression, decision tree learning, k-nearest neighbors, and neural networks. It describes the problem statement, solution, and key aspects of each algorithm. For linear regression, it discusses minimizing the squared error loss to find the optimal regression line. Logistic regression maximizes the likelihood function to find the optimal classification model. Decision tree learning uses an ID3 algorithm to greedily construct a non-parametric model by optimizing the average log-likelihood.
This document discusses various types of regression modeling and linear regression. It provides examples of linear regression analysis on fraud data and discusses assessing goodness of fit. It also briefly covers non-linear regression, problem areas like heteroskedasticity and collinearity, and model selection methods. Linear regression is presented geometrically and the assumptions and computations of ordinary least squares regression are explained.
In this paper we have described histogram expansion. Histogram expansion is a technique of histogram equalization. In this we have described three different techniques of expansion namely dynamic range expansion, linear contrast expansion and symmetric range expansion. Each of these has their specific uses and advantages. For colored images linear contrast expansion is used. These all methods help in easy study of histograms and helps in image enhancement.
Development modeling methods of analysis and synthesis of fingerprint deforma...IJECEIAES
The current study develops methods for modeling, analyzing, and synthesizing deformations in fingerprint images and applies them to problems of automatic fingerprint identification. The introduction justifies the urgency of the problem and gives a brief description of thematic publications. The study reviews modern biometric technologies and methods of biometric identification, surveys fingerprint identification systems, and investigates distorting factors. The influence of deformations is singled out, and the causes of fingerprint deformation are analyzed. A review of modern ways of accounting for and modeling deformations in problems of automatic fingerprint identification is given. The scientific novelty of the work is the development of information technologies for the analysis and synthesis of deformations of fingerprint images; its practical value lies in applying the developed methods, algorithms, and information technologies in fingerprint identification systems.
The document presents four numerical methods for finding the nth root of a positive number: the Fixed-b method, Adaptive-b method, Simplified Adaptive-b method, and Newton-Raphson Improved (NRI) method. It analyzes the convergence properties and rate of convergence for each method. The Fixed-b and Adaptive-b methods introduce a parameter b that can improve convergence, and the Adaptive-b method allows b to adapt at each iteration. The NRI method is derived from the Simplified Adaptive-b method and shows faster convergence than Newton-Raphson in some cases.
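For comparison with the four methods above, the plain Newton-Raphson baseline they improve on can be sketched in a few lines; the starting guess and stopping rule here are arbitrary choices, not those of the document.

```python
def nth_root_newton(a, n, tol=1e-12, max_iter=200):
    """Newton-Raphson on f(x) = x**n - a:
    x_{k+1} = ((n-1)*x_k + a / x_k**(n-1)) / n."""
    x = a if a >= 1 else 1.0  # crude but safe starting guess for a > 0
    for _ in range(max_iter):
        nxt = ((n - 1) * x + a / x ** (n - 1)) / n
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x

print(nth_root_newton(27, 3))  # ~3.0
```

The Fixed-b and Adaptive-b methods of the document modify this iteration with an extra parameter b to accelerate convergence.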
This document discusses curve fitting in Matlab. It introduces the Curve Fitting tool and how to use it to fit data to functions. Key aspects covered include importing data into the tool, selecting fitting functions, viewing and analyzing fit results, and plotting residuals. Examples are provided of fitting data to a sine wave, linear function with and without weights, and examining fit confidence bounds and predictions over different ranges. The document provides a tutorial on using Matlab's Curve Fitting tool to model experimental data with functions.
The document discusses assignment problems and the Hungarian method for solving them. It begins by introducing the concept of assignment problems where the goal is to assign n jobs to n workers in a way that maximizes profit or efficiency. It then provides the mathematical formulation of an assignment problem as minimizing a cost function subject to constraints. The bulk of the document describes the Hungarian method, a multi-step algorithm for finding optimal assignments. It involves row/column reductions, finding a complete assignment of zeros, drawing lines to cover remaining zeros, and modifying the cost matrix to increase the number of zeros. An example is provided to illustrate the method.
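For very small instances, the optimal assignment that the Hungarian method finds can be verified by brute force over all permutations. The sketch below is only that factorial-time baseline for intuition, not the Hungarian algorithm itself; the function name is an assumption.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustively try every worker-to-job permutation (O(n!))."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# Worker 0 -> job 1 (cost 2), worker 1 -> job 0 (cost 1): total 3.
print(min_cost_assignment([[4, 2], [1, 3]]))  # ([1, 0], 3)
```

The row/column reductions of the Hungarian method reach the same optimum in polynomial time.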
Solving ONE’S interval linear assignment problemIJERA Editor
This document presents a new method called the Matrix Ones Interval Linear Assignment Method (MOILA) for solving assignment problems with interval costs. It begins with definitions of assignment problems and interval analysis concepts. Then it describes the existing Hungarian method and provides an example solved using both Hungarian and MOILA. MOILA involves creating ones in the assignment matrix and making assignments based on the ones. The document outlines algorithms for MOILA as well as extensions to unbalanced and interval assignment problems. It provides an example of applying MOILA to solve a balanced interval assignment problem and compares the solutions to Hungarian. The document introduces MOILA as a systematic alternative to Hungarian for solving a variety of assignment problem types.
Stress-Strength Reliability of type II compound Laplace distributionIRJET Journal
The document discusses estimating the reliability (R) of a stress-strength model where the strength (Y) and stress (X) variables follow a type II compound Laplace distribution. R is defined as the probability that stress is less than strength (Pr(X<Y)).
It first provides background on stress-strength models and reliability (R) in engineering. It then defines the type II compound Laplace distribution and derives an expression for R when X and Y follow this distribution. Maximum likelihood estimation is used to estimate the three parameters of the distribution. Simulation studies validate the estimation algorithm. Applications of stress-strength models are discussed in fields like engineering, medicine, and genetics.
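The quantity R = Pr(X < Y) can also be approximated by simulation, which is what the validation studies above do. As a hedged illustration, the sketch below estimates R by Monte Carlo for an ordinary Laplace distribution (not the type II compound Laplace of the paper), sampling via the fact that the difference of two iid exponentials is Laplace; all names are assumptions.

```python
import random

def laplace_sample(mu, b, rng):
    # The difference of two iid Exp(1) variables is Laplace(0, 1).
    return mu + b * (rng.expovariate(1.0) - rng.expovariate(1.0))

def estimate_R(mu_x, b_x, mu_y, b_y, n=100_000, seed=0):
    """Monte Carlo estimate of R = Pr(X < Y)."""
    rng = random.Random(seed)
    hits = sum(
        laplace_sample(mu_x, b_x, rng) < laplace_sample(mu_y, b_y, rng)
        for _ in range(n))
    return hits / n

# Identically distributed stress and strength give R close to 0.5.
print(estimate_R(0, 1, 0, 1))
```

When the strength distribution is shifted well above the stress distribution, the estimate approaches 1, matching the reliability interpretation.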
Differential evolution (DE) algorithm has been applied as a powerful tool to find optimum switching angles for selective harmonic elimination pulse width modulation (SHEPWM) inverters. However, the DE's performance is very dependent on its control parameters. Conventional DE generally uses either a trial-and-error mechanism or a tuning technique to determine appropriate values of the control parameters. The disadvantage of this process is that it is very time consuming. In this paper, an adaptive control parameter is proposed in order to speed up the DE algorithm in optimizing SHEPWM switching angles precisely. The proposed adaptive control parameter is proven to enhance the convergence process of the DE algorithm without requiring initial guesses. The results for both negative and positive modulation index (M) also indicate that the proposed adaptive DE is superior to the conventional DE in generating SHEPWM switching patterns.
This document discusses regression analysis and linear regression models. It provides examples of how regression can be used to understand relationships between variables and predict values. Specifically, it examines a case study of how sales from a home renovation company (Triple A Construction) can be predicted based on area payroll. The regression line that best fits the data is calculated and metrics like the coefficient of determination are used to evaluate how well the regression model fits the data. Assumptions of the linear regression model like independent and normally distributed errors are also covered.
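The regression line and coefficient of determination mentioned above follow directly from the least-squares formulas. A minimal sketch with made-up data (the Triple A Construction figures are not reproduced here):

```python
def fit_line(xs, ys):
    """Least-squares line y = b0 + b1*x and coefficient of determination."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b0, b1, 1 - ss_res / ss_tot

# A perfectly linear payroll/sales relation gives r^2 = 1.
print(fit_line([1, 2, 3, 4], [3, 5, 7, 9]))  # (1.0, 2.0, 1.0)
```

The r^2 value returned is the coefficient of determination the case study uses to judge how well the model fits.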
This is a special type of LPP in which the objective is to find the optimum allocation of a number of tasks (jobs) to an equal number of facilities (persons). Here we make the assumption that each person can perform each job, but with varying degrees of efficiency. For example, a departmental head may have 4 persons available for assignment and 4 jobs to fill; his interest is then to find the best assignment, which will be in the best interest of the department.
Iterative methods for the solution of saddle point problemIAEME Publication
This document summarizes an iterative method for solving saddle point problems that arise in mixed finite element approximations of Stokes fluid flow problems. It presents classical solution methods like primal-dual conjugate gradient algorithms. It then proposes a general primal-dual algorithm that combines these classical methods to solve the coupled primal and dual problems simultaneously. The algorithm defines descent directions as solutions to preconditioned residual equations to minimize both the primal and dual residuals at each iteration.
Developed a system that generates a 3D face using a video of single frontal face as the input
Implemented Principal Component Analysis and Active Shape Model (ASM) for facial feature point detection
Applied the POSIT algorithm to estimate the head pose for realistic shading
Modified the classic ASM and critically appraised my approach by comparing it with alternative approaches
The document discusses the assignment problem and provides examples of solving it using different methods. It can be summarized as:
1) The assignment problem involves allocating tasks or jobs to workers or machines in the most efficient way, often to minimize costs. It has special characteristics as each job can only be assigned to one worker.
2) The document provides examples of solving assignment problems using methods like the least cost method, stepping stone method, and the Hungarian method.
3) It also discusses how to model assignment problems as transportation problems and how to solve maximization versions by converting to minimization problems.
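Point 3 above, converting a maximization assignment into a minimization one, amounts to subtracting every profit entry from the largest profit in the matrix; a one-function sketch (the name is an assumption):

```python
def to_minimization(profit):
    """Turn a profit (maximization) matrix into an equivalent cost matrix."""
    m = max(max(row) for row in profit)
    return [[m - p for p in row] for row in profit]

print(to_minimization([[1, 3], [2, 5]]))  # [[4, 2], [3, 0]]
```

Minimizing the resulting cost matrix with any assignment method yields the profit-maximizing assignment of the original.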
The Application Of Bayes Ying-Yang Harmony Based Gmms In On-Line Signature Ve...ijaia
In this contribution, a Bayes Ying-Yang (BYY) harmony based approach for on-line signature verification is presented. In the proposed method, a simple but effective Gaussian Mixture Model (GMM) is used to represent each user's signature model based on the prior information collected. Unlike earlier works, in this paper we use the Bayes Ying-Yang machine combined with the harmony function to achieve Automatic Model Selection (AMS) during parameter learning for the GMMs, so that a better approximation of the user model is assured. Experiments on a database from the First International Signature Verification Competition (SVC 2004) confirm that this combined algorithm yields quite a satisfactory result.
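GMM parameters of the kind described above are usually learned with EM. As a hedged illustration (plain EM with a fixed number of components, not the BYY harmony learning with automatic model selection), a one-dimensional two-component sketch:

```python
import math

def em_gmm_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture."""
    mu = [min(data), max(data)]          # spread the initial means apart
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return pi, mu, var
```

On data clustered around two centers, the estimated means converge to those centers; model selection (how many components to keep) is exactly the part BYY harmony automates.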
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...ijaia
A support vector machine (SVM) learns the decision surface from two different classes of input points; in several applications, some of the input points are misclassified. In this paper a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method for solving the bi-objective quadratic programming problem. An important contribution of the proposed bi-objective quadratic programming model is that different efficient support vectors are obtained by changing the weighting values. The numerical examples give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution from the generated efficient solutions.
LEARNING SPLINE MODELS WITH THE EM ALGORITHM FOR SHAPE RECOGNITIONgerogepatton
This paper demonstrates how cubic spline (B-spline) models can be used to recognize 2-dimensional non-rigid handwritten isolated characters. Each handwritten character is represented by a set of non-overlapping, uniformly distributed landmarks. The spline models are constructed by utilizing cubic-order polynomials to model the shapes under study. The approach is a two-stage process. The first stage is learning: we construct a mixture of spline class parameters to capture the variations in spline coefficients using the Expectation Maximization algorithm. The second stage is recognition: here we use the Fréchet distance to compute the variations between the spline models and the test spline shape for recognition. We test the approach on a set of handwritten Arabic letters.
A Computationally Efficient Algorithm to Solve Generalized Method of Moments ...Waqas Tariq
Generalized method of moments estimating functions enable one to estimate regression parameters consistently and efficiently. However, they involve one major computational problem: in complex data settings, solving a generalized method of moments estimating function via the Newton-Raphson technique often gives rise to non-invertible Jacobian matrices. Thus, parameter estimation becomes unreliable and computationally inefficient. To overcome this problem, we propose to use a secant method based on vector divisions instead of the usual Newton-Raphson technique to estimate the regression parameters. This new method of estimation demonstrates a decrease in the number of non-convergence iterations as compared to the Newton-Raphson technique and provides reliable estimates.
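In one dimension, the secant idea reduces to the classic secant update, which replaces the derivative (the Jacobian in the multivariate case) with a finite-difference slope; a scalar sketch for intuition, not the paper's vector-division method:

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant root finding: no derivative (Jacobian) needed."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                     # flat secant: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x * x - 2, 1.0, 2.0))  # ~1.41421356
```

Because no Jacobian is formed, the non-invertibility problem described above never arises, at the cost of slightly slower (superlinear rather than quadratic) convergence.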
Symmetric quadratic tetration interpolation using forward and backward opera...IJECEIAES
The existing interpolation method based on the second-order tetration polynomial has an asymmetric property: the interpolation results for each considered region have individual characteristics. Although its interpolation performance has been better than conventional methods, the symmetric property is also necessary for signal interpolation. In this paper, we propose symmetric interpolation formulas derived from the second-order tetration polynomial. A combination of the forward and backward operations is employed to construct two types of symmetric interpolation. Several resolutions of the fundamental signals were used to evaluate signal reconstruction performance. The results show that the proposed interpolations can be used to reconstruct the fundamental signal, and their peak signal-to-noise ratio (PSNR) is superior to conventional interpolation methods, except for cubic spline interpolation on the sine wave signal; however, the visual results show only a small difference. Moreover, our proposed interpolations converge to the steady state faster than cubic spline interpolation. In addition, increasing the option number reinforces their sensitivity.
Comparing the methods of Estimation of Three-Parameter Weibull distributionIOSRJM
This document compares different methods for estimating the parameters of a three-parameter Weibull distribution from sample data, including graphical methods, trial and error, Jiang/Murthy's approach, and maximum likelihood estimation. It applies these methods to simulated sample data from a Weibull distribution with known parameters to estimate the location, scale, and shape parameters. The trial and error method uses residual sum of squares to select the location parameter that provides the best fit. The maximum likelihood method derives equations that are solved numerically. The results show that all methods can estimate the parameters reasonably well, though accuracy varies depending on the full or censored sample size.
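The trial-and-error step described above, scanning candidate location parameters and keeping the one whose linearized Weibull fit has the smallest residual sum of squares, can be sketched as below. The plotting-position formula (Bernard's median-rank approximation) and all function names are assumptions for the example, not necessarily those of the document.

```python
import math

def weibull_fit_rss(times, gamma):
    """Linearized Weibull fit for a trial location gamma -> (rss, beta, eta)."""
    ts = sorted(times)
    n = len(ts)
    xs, ys = [], []
    for i, t in enumerate(ts, start=1):
        if t <= gamma:
            return None                      # location must lie below all data
        F = (i - 0.3) / (n + 0.4)            # Bernard's median-rank approximation
        xs.append(math.log(t - gamma))
        ys.append(math.log(-math.log(1 - F)))
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return rss, b, math.exp(-a / b)          # beta = slope, eta from intercept

def best_location(times, candidates):
    """Trial and error: keep the candidate location with the smallest RSS."""
    fits = [(weibull_fit_rss(times, g), g) for g in candidates]
    (rss, beta, eta), g = min((f, g) for f, g in fits if f is not None)
    return g, beta, eta
```

On data generated from a Weibull distribution with a known location, the correct candidate produces a near-perfect straight line and hence the smallest RSS.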
LOGNORMAL ORDINARY KRIGING METAMODEL IN SIMULATION OPTIMIZATIONorajjournal
This paper presents a lognormal ordinary kriging (LOK) metamodel algorithm and its application to optimize a stochastic simulation problem. Kriging models were developed as an interpolation method in geology and have been successfully used for the deterministic simulation optimization (SO) problem. In recent years, kriging metamodeling has attracted growing interest for stochastic problems, and SO researchers have begun using ordinary kriging through global optimization in stochastic systems. The goals of this study are to present the LOK metamodel algorithm and to analyze the result of the application step by step. The results show that LOK is a powerful alternative metamodel in simulation optimization when the data are too skewed.
APPROACHES IN USING EXPECTATIONMAXIMIZATION ALGORITHM FOR MAXIMUM LIKELIHOOD ...cscpconf
The EM algorithm is popular for maximum likelihood estimation of the parameters of state-space models. However, extant approaches to realizing the EM algorithm are still unable to handle the identification of systems that have external inputs and constrained parameters. In this paper, we propose new approaches for both the initial guess and the MLE of the parameters of a constrained state-space model with an external input. Using weighted least squares for the initial guess and partial differentiation of the joint log-likelihood function for the EM algorithm, we estimate the parameters and compare the estimated values with the "actual" values used to generate the simulation data. Moreover, asymptotic variances of the estimated parameters are calculated when the sample size is large, while statistics of the estimated parameters are obtained through bootstrapping when the sample size is small. The results demonstrate that the estimated values are close to the "actual" values. Consequently, our approaches are promising and can be applied in future research.
Large sample property of the bayes factor in a spline semiparametric regressi...Alexander Decker
This document summarizes a research paper about investigating the large sample property of the Bayes factor for testing the polynomial component of a spline semiparametric regression model against a fully spline alternative model. It considers a semiparametric regression model where the mean function has two parts - a parametric linear component and a nonparametric penalized spline component. By representing the model as a mixed model, it obtains the closed form of the Bayes factor and proves that the Bayes factor is consistent under certain conditions on the prior and design matrix. It establishes that the Bayes factor converges to infinity under the pure polynomial model and converges to zero almost surely under the spline semiparametric alternative model.
PREDICTION MODELS BASED ON MAX-STEMS Episode Two: Combinatorial Approachahmet furkan emrehan
This document describes prediction models that use combinations of stems from documents to make predictions, as an extension of the previous chapter's models that used individual stems. It defines the parameters and components adapted for the combinatorial approach, including combinations of stems (combo_ij) drawn from documents and the counts and probabilities associated with them. It then presents the general scheme for prediction models using the combinatorial approach and describes 5 specific models, noting that for s=1 the models are equivalent to those from the previous chapter. It introduces an application of these models to news data from a Turkish website.
On The Numerical Solution of Picard Iteration Method for Fractional Integro -...DavidIlejimi
u₀(x) = f(x),
u₁(x) = f(x) + λ ∫₀ˣ K(x, t) u₀(t) dt.   (14)
1. This document discusses the Picard iteration method for solving fractional integro-differential equations. The fractional derivative is considered in the Caputo sense.
2. The proposed Picard iteration method reduces fractional integro-differential equations to standard integral equations of the second kind.
3. Some test problems are considered to demonstrate the accuracy and convergence of the presented Picard iteration method for solving fractional integro-differential equations. Numerical results show the approach is easy and accurate.
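The reduction in point 2 leads to a fixed-point update of the form u_{k+1}(x) = f(x) + λ ∫₀ˣ K(x, t) u_k(t) dt, which can be iterated numerically. The sketch below uses an ordinary (not fractional) kernel and a trapezoid rule purely to show the shape of the iteration; the grid and function names are assumptions.

```python
def picard(f, lam, K, xs, iters=25):
    """Iterate u_{k+1}(x) = f(x) + lam * integral_0^x K(x,t) u_k(t) dt
    on a uniform grid xs, approximating the integral by the trapezoid rule."""
    h = xs[1] - xs[0]
    u = [f(x) for x in xs]                   # u_0 = f
    for _ in range(iters):
        new = []
        for i, x in enumerate(xs):
            vals = [K(x, xs[j]) * u[j] for j in range(i + 1)]
            integral = (h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
                        if i > 0 else 0.0)
            new.append(f(x) + lam * integral)
        u = new
    return u

# u(x) = 1 + integral_0^x u(t) dt has the exact solution e^x.
xs = [k * 0.01 for k in range(101)]
u = picard(lambda x: 1.0, 1.0, lambda x, t: 1.0, xs)
print(u[-1])  # close to e = 2.71828...
```

Successive iterates converge to the solution of the integral equation, mirroring the convergence the test problems demonstrate.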
Exploring Support Vector Regression - Signals and Systems ProjectSurya Chandra
Our team competed in a Kaggle competition to predict bike-share use in Washington DC's Capital Bikeshare program, using a powerful function approximation technique called support vector regression.
This document summarizes an analysis of using Support Vector Regression (SVR) to predict bike rental data from a bike sharing program in Washington D.C. It begins with an introduction to SVR and the bike rental prediction competition. It then shows that linear regression performs poorly on this non-linear problem. The document explains how SVR maps data into higher dimensions using kernel functions to allow for non-linear fits. It concludes by outlining the derivation of the SVR method using kernel functions to simplify calculations for the regression.
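To make the kernel idea concrete, a close cousin of SVR — kernel ridge regression with an RBF kernel — can be sketched in a few lines. This is not SVR's ε-insensitive loss, only the same kernel trick for non-linear fits; all names and parameter values are assumptions.

```python
import math

def rbf(x, z, gamma=1.0):
    """Gaussian (RBF) kernel: an implicit map into a higher-dimensional space."""
    return math.exp(-gamma * (x - z) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kernel_ridge_fit(xs, ys, gamma=1.0, alpha=1e-6):
    """Solve (K + alpha*I) c = y; predict f(x) = sum_i c_i k(x, x_i)."""
    K = [[rbf(xi, xj, gamma) + (alpha if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    c = solve(K, ys)
    return lambda x: sum(ci * rbf(x, xi, gamma) for ci, xi in zip(c, xs))

# A straight line cannot capture y = x^2 well; the kernel model can.
f = kernel_ridge_fit([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0])
print(f(2.0))  # close to 4.0
```

Replacing the squared loss here with the ε-insensitive loss and its dual gives SVR proper, where only a subset of points (the support vectors) carry non-zero coefficients.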
A big task often faced by practitioners is deciding the appropriate model to adopt in fitting count datasets. This paper is aimed at investigating a suitable model for fitting highly skewed count datasets. Among other models, the COM-Poisson regression model was proposed in this paper for fitting count data due to its varying normalizing constant. Some statistical models were investigated along with the proposed model; these include Poisson, Negative Binomial, Zero-Inflated, Zero-Inflated Poisson, and Quasi-Poisson models. A real-life dataset relating to visits to a doctor within a given period was equally used to test the behavior of the underlying models. From the findings, it is recommended that the COM-Poisson regression model be adopted in fitting highly skewed count datasets irrespective of the type of dispersion.
A New Method For Solving Kinematics Model Of An RA-02IJERA Editor
The kinematic model is established for a 4-DOF robotic arm. The Denavit-Hartenberg (DH) convention and the product-of-exponentials formula are used to solve the kinematics problem based on screw theory. To acquire a simple matrix for the inverse kinematics, a new simple method is derived by solving problems like robot base movement and actuator restoration. Simulations are done using MATLAB programming for the kinematic model.
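As a much-reduced illustration of forward kinematics (a 2-DOF planar arm rather than the paper's 4-DOF arm with DH parameters), the end-effector position follows from summing the joint angles along the chain:

```python
import math

def forward_2link(l1, l2, t1, t2):
    """End-effector (x, y) of a planar two-link arm with joint angles t1, t2."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

# Fully stretched along the x-axis: two unit links give reach 2.
print(forward_2link(1.0, 1.0, 0.0, 0.0))  # (2.0, 0.0)
```

The DH convention generalizes exactly this composition of rotations and translations to chains with more joints and out-of-plane axes.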
Method of Fracture Surface Matching Based on Mathematical StatisticsIJRESJOURNAL
ABSTRACT: Fracture surface matching is an important part of point cloud registration. In this paper, a method of fracture surface matching based on mathematical statistics is proposed. We reconstruct a coordinate system of the fractured surface points and analyze the characteristics of the point cloud in the new coordinate system using the theory of mathematical statistics. The general distribution of the points is determined. The method can realize the matching relation among point clouds.
Bayes estimators for the shape parameter of pareto type iAlexander Decker
This document discusses Bayesian estimators for the shape parameter of the Pareto Type I distribution under different loss functions. It begins by introducing the Pareto distribution and some classical estimators for the shape parameter, including the maximum likelihood estimator, uniformly minimum variance unbiased estimator, and minimum mean squared error estimator. It then derives the Bayesian estimators under a generalized square error loss function and quadratic loss function. Both informative priors (Exponential distribution) and non-informative priors (Jeffreys prior) are considered. The performance of the estimators is compared using Monte Carlo simulations and mean squared errors.
This document summarizes an exercise involving calculating the inverse of square matrices in three ways: analytically, using LU decomposition, and singular value decomposition. It finds that analytical calculation becomes impractically slow for matrices larger than order 13, while LU decomposition and SVD using GNU Scientific Library functions can calculate the inverse of matrices up to order 350 in under 20 seconds. SVD is found to be slightly more efficient than LU decomposition for higher order matrices. Accuracy is also compared when the input matrix is close to singular, finding SVD returns the most accurate inverse.
A comparative study of three validities computation methods for multimodel ap...IJECEIAES
The multimodel approach offers very satisfactory results in the modelling, diagnosis, and control of complex systems. In the modelling case, this approach passes through three steps: determination of the model library, computation of the validities, and establishment of the final model. In this context, this paper elaborates a comparative study between three recent methods of validity computation, highlighting the method that offers the best performance in terms of precision. To achieve this goal, we apply these three methods to two simulation examples in order to compare their performance.
Face Alignment Using Active Shape Model And Support Vector MachineCSCJournals
The document proposes improvements to the classical Active Shape Model (ASM) algorithm for face alignment. The improvements include: 1) Combining Sobel filtering and 2D profiles to build texture models, 2) Applying Canny edge detection for enhancement, 3) Using Support Vector Machines (SVM) to classify landmarks for more accurate localization, and 4) Automatically adjusting 2D profile lengths based on image size. Experimental results on two face databases show the proposed ASM-SVM method achieves better alignment accuracy than the classical ASM and other methods.
Similar to MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER RECOGNITION (20)
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
# Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
# Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
- Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Use PyCharm for remote debugging of WSL on a Windows machine shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
MIXTURES OF TRAINED REGRESSION CURVESMODELS FOR HANDRITTEN ARABIC CHARACTER RECOGNITION
International Journal of Artificial Intelligence and Applications (IJAIA), Vol.10, No.3, May 2019
DOI : 10.5121/ijaia.2019.10305
MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER RECOGNITION

Abdullah A. Al-Shaher

Department of Computer and Information Systems, College of Business Studies, Public Authority for Applied Education and Training, Kuwait
ABSTRACT
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models utilize 2nd-order polynomials to model shapes within a training set. To estimate the regression models, we need to extract the coefficients which describe the variations for each shape class. Hence, a least-squares method is used to estimate such models. We then proceed by training these coefficients using the apparatus of the Expectation Maximization algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
KEYWORDS
Shape Recognition, Arabic Handwritten Characters, Regression Curves, Expectation Maximization
Algorithm.
1. INTRODUCTION

Shape recognition has been the focus of many researchers for the past seven decades [1] and has attracted many communities in the fields of pattern recognition [2], artificial intelligence [3], signal processing [4], image analysis [5], and computer vision [6]. The difficulties arise when the shape under study exhibits a high degree of variation, as in handwritten characters [7], digits [8], face detection [9], and gesture authentication [10]. For a single datum, shape variation is limited and cannot ultimately be captured, because a single datum does not provide sufficient information and knowledge about the data; therefore, multiple instances of the data provide a better understanding of shape analysis, manifested by mixture models [11]. Because of the multivariate data under study, there is always the need to estimate the parameters which describe the data encapsulated within a mixture of shapes.

The literature demonstrates many statistical and structural approaches and various algorithms to model shape variations using supervised and unsupervised learning [12]. In particular, the powerful Expectation Maximization algorithm of Dempster [13] has widely been used for such cases. The EM algorithm revolves around a two-step procedure. The expectation (E) step estimates the parameters of a log-likelihood function and passes them to the maximization (M)
step. In the maximization (M) step, the algorithm computes parameters maximizing the expected log-likelihood found in the E step. The process iterates until all parameters remain unchanged. For instance, Jojic and Frey [14] used the EM algorithm to fit mixture models to the appearance manifolds for faces. Bishop and Winn [15] used a mixture of principal component analyzers to learn and synthesize variations in facial appearance. Vasconcelos and Lippman [16] used the EM algorithm to learn queries for content-based image retrieval. In general, several authors have used the EM algorithm to track multiple moving objects [17]. Revow et al. [18] developed a generative model which can be used for handwritten character recognition. Their method employs the EM algorithm to model the distribution of sample points.
Curves are widely used in research by the computer vision society [1][2][3][4][5]. Curvatures are mainly used to distinguish different shapes such as characters [6], digits, faces [2], and topographic maps [3]. Curve fitting [18][19] is the process of constructing a 2nd-order or higher mathematical function that best fits a series of landmark points. A related topic is regression analysis, which stresses probabilistic conclusions about how uncertainty arises when fitting a curve to a set of data landmarks with marginal errors. Regression curves are applied in data visualization [12][13], to capture the values of a function with missing data [14], and to discover relationships between multiple variables.
In this paper, we demonstrate how curves are used to recognize 2D handwritten shapes by applying a 2nd-order polynomial quadratic function to the set of landmark points representing a shape. We then train such curves to capture the optimal characteristics of multiple shapes in the training set. Handwritten Arabic characters are used and tested in this investigation.
2. REGRESSION CURVES

We would like to extract the best-fit modes that describe the shapes under study; hence, multiple image shapes are required, organized in training sets of T patterns per shape class ω drawn from the complete set of shape classes Ω. Let us assume that each training set is represented by the following 2D training patterns as a long vector
X^{\omega} = \left( (x_1^{\omega_1}, y_1^{\omega_1}), \ldots, (x_k^{\omega_1}, y_k^{\omega_1}), (x_1^{\omega_2}, y_1^{\omega_2}), \ldots, (x_k^{\omega_2}, y_k^{\omega_2}), \ldots, (x_1^{\omega_T}, y_1^{\omega_T}), \ldots, (x_k^{\omega_T}, y_k^{\omega_T}) \right) \quad (1)
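As a concrete illustration (ours, not the paper's code), the long-vector arrangement of equation (1) can be sketched in NumPy; the counts T and k below are arbitrary assumptions:

```python
import numpy as np

# Hypothetical training set for one shape class: T sample shapes,
# each with k landmarks stored as (x, y) pairs.
T, k = 4, 10
rng = np.random.default_rng(0)
shapes = rng.random((T, k, 2))   # array of shape (T, k, 2)

# Concatenating every sample of the class gives the long pattern
# vector of equation (1): (x_1, y_1, ..., x_k, y_k) per sample.
X_long = shapes.reshape(-1)      # length T * k * 2
```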
Our model here is a polynomial of higher order. In this example, we choose 2nd-order quadratic curves. Consider the following generic form for a polynomial of order j
f(x_k^{\omega_T}) = a_0 + a_1 x_k^{\omega_T} + a_2 (x_k^{\omega_T})^2 + a_3 (x_k^{\omega_T})^3 + \cdots + a_j (x_k^{\omega_T})^j = a_0 + \sum_{\tau=1}^{j} a_\tau (x_k^{\omega_T})^{\tau} \quad (2)
The nonlinear regression above requires the estimation of the coefficients that best fit the sample shape landmarks; we approach the least-squares error between the data y and f(x) in
err = \sum (d_i)^2 = \left( y_1^{\omega_T} - f(x_1^{\omega_T}) \right)^2 + \left( y_2^{\omega_T} - f(x_2^{\omega_T}) \right)^2 + \left( y_3^{\omega_T} - f(x_3^{\omega_T}) \right)^2 + \left( y_4^{\omega_T} - f(x_4^{\omega_T}) \right)^2 \quad (3)
where the goal is to minimize the error. We substitute the form of equation (3) with a general least-squares error
err = \sum_{k=1}^{T} \left( y_k^{\omega_T} - \left( a_0 + a_1 x_k^{\omega_T} + a_2 (x_k^{\omega_T})^2 + a_3 (x_k^{\omega_T})^3 + \cdots + a_j (x_k^{\omega_T})^j \right) \right)^2 \quad (4)
where T is the number of patterns in the set, k is the current data landmark point being summed, and j is the order of the polynomial equation. Rewriting equation (4) in a more readable format:
err = \sum_{k=1}^{T} \left( y_k^{\omega_T} - \left( a_0 + \sum_{\tau=1}^{j} a_\tau (x_k^{\omega_T})^{\tau} \right) \right)^2 \quad (5)
Finding the best-fitting curve is equivalent to minimizing the squared distance between the curve and the landmark points. The aim here is to find the coefficients; hence, we solve the equations by taking the partial derivative with respect to each coefficient a0, .. , ak, for k = 1 .. j, and setting each to zero in
\frac{\partial err}{\partial a_0} = \sum_{k=1}^{T} \left( y_k^{\omega_T} - \left( a_0 + \sum_{\tau=1}^{j} a_\tau (x_k^{\omega_T})^{\tau} \right) \right) = 0 \quad (6)

\frac{\partial err}{\partial a_1} = \sum_{k=1}^{T} \left( y_k^{\omega_T} - \left( a_0 + \sum_{\tau=1}^{j} a_\tau (x_k^{\omega_T})^{\tau} \right) \right) x_k^{\omega_T} = 0 \quad (7)

\frac{\partial err}{\partial a_2} = \sum_{k=1}^{T} \left( y_k^{\omega_T} - \left( a_0 + \sum_{\tau=1}^{j} a_\tau (x_k^{\omega_T})^{\tau} \right) \right) (x_k^{\omega_T})^2 = 0 \quad (8)
Rewriting the above equations in matrix form and applying linear-algebra matrix differentiation, we get
\begin{bmatrix}
T & \sum_{k=1}^{T} x_k^{\omega_T} & \sum_{k=1}^{T} (x_k^{\omega_T})^2 \\
\sum_{k=1}^{T} x_k^{\omega_T} & \sum_{k=1}^{T} (x_k^{\omega_T})^2 & \sum_{k=1}^{T} (x_k^{\omega_T})^3 \\
\sum_{k=1}^{T} (x_k^{\omega_T})^2 & \sum_{k=1}^{T} (x_k^{\omega_T})^3 & \sum_{k=1}^{T} (x_k^{\omega_T})^4
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}
=
\begin{bmatrix}
\sum_{k=1}^{T} y_k^{\omega_T} \\
\sum_{k=1}^{T} x_k^{\omega_T} y_k^{\omega_T} \\
\sum_{k=1}^{T} (x_k^{\omega_T})^2 y_k^{\omega_T}
\end{bmatrix} \quad (9)
Choosing a Gaussian elimination procedure, we rewrite the above equation in the more solvable form

Ax = B \quad (10)

where
A = \begin{bmatrix}
T & \sum_{k=1}^{T} x_k^{\omega_T} & \sum_{k=1}^{T} (x_k^{\omega_T})^2 \\
\sum_{k=1}^{T} x_k^{\omega_T} & \sum_{k=1}^{T} (x_k^{\omega_T})^2 & \sum_{k=1}^{T} (x_k^{\omega_T})^3 \\
\sum_{k=1}^{T} (x_k^{\omega_T})^2 & \sum_{k=1}^{T} (x_k^{\omega_T})^3 & \sum_{k=1}^{T} (x_k^{\omega_T})^4
\end{bmatrix}, \quad
X = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}, \quad
B = \begin{bmatrix}
\sum_{k=1}^{T} y_k^{\omega_T} \\
\sum_{k=1}^{T} x_k^{\omega_T} y_k^{\omega_T} \\
\sum_{k=1}^{T} (x_k^{\omega_T})^2 y_k^{\omega_T}
\end{bmatrix} \quad (11)
Solving for X, given A and B, yields the coefficients in

X = A^{-1} B \quad (12)
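Equations (9)-(12) amount to solving a 3x3 system of normal equations for a quadratic. A minimal NumPy sketch of this step (our illustration, not the paper's code; the function name and test data are assumptions):

```python
import numpy as np

def fit_quadratic(x, y):
    """Estimate a0, a1, a2 from the normal equations A X = B of
    equations (9)-(12). x, y are 1-D arrays of landmark data."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    T = len(x)
    # A is the 3x3 matrix of power sums, B the moment vector.
    A = np.array([
        [T,            x.sum(),      (x**2).sum()],
        [x.sum(),      (x**2).sum(), (x**3).sum()],
        [(x**2).sum(), (x**3).sum(), (x**4).sum()],
    ])
    B = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])
    return np.linalg.solve(A, B)     # X = A^{-1} B

# Sanity check on a noiseless quadratic y = 2 + 3x - 0.5x^2
x = np.linspace(0.0, 5.0, 20)
a0, a1, a2 = fit_quadratic(x, 2 + 3 * x - 0.5 * x**2)
```

In practice `np.polyfit(x, y, 2)` performs the same fit; the explicit normal-equation form above mirrors the matrices A and B of equation (11).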
The outcome is the coefficients a0, a1, a2. We follow a similar procedure to find the coefficient sets of the remaining landmark points. With these coefficient models at hand, it is possible to project them in order to generate a sample shape similar to those in the training patterns. The extracted coefficients are then plugged into the set of corresponding coordinate points, which results in a new shape in
\hat{X}^{\omega} = \Big( \big( x_1^{\omega_1},\ y_1^{\omega_1} = (x_1^{\omega_1})^2 a_{t_{11}} + x_1^{\omega_1} a_{t_{12}} + a_{t_{13}} \big), \big( x_2^{\omega_1},\ y_2^{\omega_1} = (x_2^{\omega_1})^2 a_{t_{21}} + x_2^{\omega_1} a_{t_{22}} + a_{t_{23}} \big), \ldots, \big( x_k^{\omega_1},\ y_k^{\omega_1} = (x_k^{\omega_1})^2 a_{t_{k1}} + x_k^{\omega_1} a_{t_{k2}} + a_{t_{k3}} \big) \Big) \quad (13)
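Equation (13) projects the per-landmark coefficient triples back into a sample shape. A small sketch under assumed (made-up) coefficient values:

```python
import numpy as np

# Hypothetical per-landmark coefficient triples (a_k1, a_k2, a_k3),
# one row per landmark, as extracted in the previous section.
coeffs = np.array([[0.5,  1.0, 2.0],
                   [0.2, -1.0, 3.0]])
x = np.array([1.0, 2.0])                # landmark x-coordinates

# Equation (13): regenerate each y from its own quadratic.
y_hat = coeffs[:, 0] * x**2 + coeffs[:, 1] * x + coeffs[:, 2]
shape = np.column_stack([x, y_hat])     # the projected sample shape
```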
3. LEARNING REGRESSION CURVES

It has been acknowledged that, when learning algorithms are used to train models of this kind, the outcome is trained models with performance superior to that of untrained models (Bishop [19]). In this stage, we are concerned with capturing the optimal curve coefficients which describe the pattern variations under testing; hence, training is required, thereby fitting Gaussian mixtures of curve-coefficient models to a set of shape curve patterns. Previous approaches consider producing variations in shapes in a linear fashion. To obtain more complex shape variations, we proceed by employing non-linear deformation on a set of curve coefficients. Unsupervised learning is encapsulated in the framework of the apparatus of the Expectation Maximization (EM) algorithm. The idea is borrowed from Cootes [20], who was the pioneer in constructing point distribution models; however, the algorithm introduced by Cootes [20] is transformed here to learn regression curve coefficients α_t, similar to the approach of AlShaher [21]. Suppose that α_t is the set of curve coefficients for training pattern t = (1 … T), where T is the complete set of training curves; it is represented as a long vector of coefficients:
\alpha_t = \left( a_{t_{11}}, a_{t_{12}}, a_{t_{13}}, a_{t_{21}}, a_{t_{22}}, a_{t_{23}}, \ldots, a_{t_{k1}}, a_{t_{k2}}, a_{t_{k3}} \right) \quad (14)
The mean vector of coefficient patterns is represented by
\mu = \frac{1}{T} \sum_{t=1}^{T} \alpha_t \quad (15)
The covariance matrix is then constructed by
\Sigma = \frac{1}{T} \sum_{t=1}^{T} (\alpha_t - \mu)(\alpha_t - \mu)^T \quad (16)
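Equations (15) and (16) are the sample mean and (biased, 1/T) covariance of the coefficient vectors. A minimal sketch with made-up coefficient vectors:

```python
import numpy as np

# T hypothetical coefficient vectors alpha_t, as in equation (14).
alphas = np.array([[1.0, 2.0, 3.0],
                   [1.2, 1.8, 3.1],
                   [0.8, 2.2, 2.9]])
T = alphas.shape[0]

mu = alphas.mean(axis=0)               # equation (15)
centered = alphas - mu
sigma = (centered.T @ centered) / T    # equation (16), 1/T normalisation
```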
The following approach is based on fitting a Gaussian mixture model to a set of training examples of curve coefficients. We further assume that training patterns are independent from one another; thus, they are neither flagged nor labelled to any curve class. Each curve class ω belonging to the set of curve classes Ω has its own mean µ and covariance matrix Σ. With these component elements, for each curve class we establish the likelihood function for the set of curve patterns in
p(\alpha_t) = \prod_{t=1}^{T} \sum_{w=1}^{\Omega} p(\alpha_t \mid \mu_w, \Sigma_w) \quad (17)
where the term p(\alpha_t \mid \mu_w, \Sigma_w) is the probability of drawing curve pattern α_t from the curve-class ω. Associating the above likelihood function with the Expectation Maximization algorithm, the likelihood function can be reformulated as a two-step iterative process that revolves around estimating the expected log-likelihood function in
\mathcal{Q}_L(C^{(n+1)} \mid C^{(n)}) = \sum_{t=1}^{T} \sum_{w=1}^{\Omega} P(\alpha_t, \mu_w^{(n)}, \Sigma_w^{(n)}) \ln p(\alpha_t \mid \mu_w^{(n+1)}, \Sigma_w^{(n+1)}) \quad (18)
where \mu_w^{(n)} and \Sigma_w^{(n)} are the estimated mean curve vector and variance-covariance matrix at iteration (n) of the algorithm. The quantity p(\alpha_t, \mu_w^{(n)}, \Sigma_w^{(n)}) is the a posteriori probability that the training curve pattern belongs to the curve-class ω at iteration n. The term p(\alpha_t \mid \mu_w^{(n+1)}, \Sigma_w^{(n+1)}) is the probability distribution of curve-pattern α_t belonging to curve-class ω at iteration (n + 1) of the algorithm; thus, the probability densities associating the curve-patterns α_t for (t = 1 … T) with curve-class ω are estimated through the updated mean-vector \mu_w^{(n+1)} and covariance matrix \Sigma_w^{(n+1)} at iteration n + 1. According to the EM algorithm, the expected log-likelihood function is implemented as two iterative steps. In the M, or maximization, step the aim is to maximize the curve mean-vector \mu_w^{(n+1)} and covariance matrix \Sigma_w^{(n+1)}, while in the E, or expectation, step the aim is to estimate the distribution of curve-patterns at iteration n along with the mixing proportion parameters for each curve-class ω.
In the E, or Expectation step of the algorithm, the a posteriori curve-class probability is updated
by applying the Bayes factorization rule to the curve-class distribution density at iteration n+1. The
new estimate is computed by
p(\alpha_t, \mu_w^{(n)}, \Sigma_w^{(n)}) = \frac{p(\alpha_t \mid \mu_w^{(n)}, \Sigma_w^{(n)}) \, \pi_w^{(n)}}{\sum_{w=1}^{\Omega} p(\alpha_t \mid \mu_w^{(n)}, \Sigma_w^{(n)}) \, \pi_w^{(n)}} \quad (19)
where the revised curve-class ω mixing proportion \pi_w^{(n+1)} at iteration (n + 1) is computed by
\pi_w^{(n+1)} = \frac{1}{T} \sum_{t=1}^{T} p(\alpha_t \mid \mu_w^{(n)}, \Sigma_w^{(n)}) \quad (20)
With that at hand, the curve-pattern α_t distributed to the class-curve ω is Gaussian and is classified according to
p(\alpha_t \mid \mu_w^{(n)}, \Sigma_w^{(n)}) = \frac{1}{(2\pi)^{L/2} \sqrt{\left| \Sigma_w^{(n)} \right|}} \exp\!\left[ -\frac{1}{2} (\alpha_t - \mu_w^{(n)})^T (\Sigma_w^{(n)})^{-1} (\alpha_t - \mu_w^{(n)}) \right] \quad (21)
In the M, or Maximization step, our goal is to maximize the curve-class ω parameters. The updated curve mean-vector \mu_w^{(n+1)} estimate is computed using the following

\mu_w^{(n+1)} = \sum_{t=1}^{T} p(\alpha_t, \mu_w^{(n)}, \Sigma_w^{(n)}) \, \alpha_t \quad (22)
And the new estimate of the curve-class covariance matrix is weighted by
\Sigma_w^{(n+1)} = \sum_{t=1}^{T} p(\alpha_t, \mu_w^{(n)}, \Sigma_w^{(n)}) \, (\alpha_t - \mu_w^{(n)})(\alpha_t - \mu_w^{(n)})^T \quad (23)
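A compact sketch of one EM iteration over curve-coefficient vectors, following equations (19)-(23). This is our illustration, not the paper's implementation: the responsibilities are normalised in the M-step as in standard EM, and a small ridge term is added to keep the covariances invertible.

```python
import numpy as np

def em_step(alphas, mus, sigmas, pis):
    """One EM iteration over curve-coefficient vectors alphas (T, L),
    for W curve-classes with means mus, covariances sigmas, mixes pis."""
    T, L = alphas.shape
    W = len(pis)
    # E-step: class-conditional Gaussian densities, equation (21).
    dens = np.empty((T, W))
    for w in range(W):
        diff = alphas - mus[w]
        inv = np.linalg.inv(sigmas[w])
        norm = (2 * np.pi) ** (L / 2) * np.sqrt(np.linalg.det(sigmas[w]))
        dens[:, w] = np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1)) / norm
    # Posterior responsibilities, equation (19).
    post = dens * pis
    post /= post.sum(axis=1, keepdims=True)
    # M-step: mixing proportions (20), means (22), covariances (23).
    pis_new = post.mean(axis=0)
    mus_new, sigmas_new = [], []
    for w in range(W):
        r = post[:, w] / post[:, w].sum()     # normalised weights
        mu_w = r @ alphas
        diff = alphas - mu_w
        sigmas_new.append((diff * r[:, None]).T @ diff + 1e-6 * np.eye(L))
        mus_new.append(mu_w)
    return np.array(mus_new), np.array(sigmas_new), pis_new
```

Iterating `em_step` until the parameters stop changing yields the converged \mu_w and \Sigma_w used in the recognition stage.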
Once the E and M steps have iterated to convergence, the outcome of the learning stage is a set of parameters \mu_w^{(n)} and \Sigma_w^{(n)} for each curve-class ω; hence the complete set of all curve-classes Ω is computed and ready to be used for recognition.

With the trained curve-class models at hand, our recognition method proceeds as follows.
4. RECOGNITION

In this stage, we focus on utilizing the parameters extracted in the learning phase to perform shape recognition. Here, we assume that the testing shapes are given by
f(t) = \sum_{t=1}^{X} \sum_{i=1}^{n} (x_{t_i}, y_{t_i}), \quad \text{where } (i = 1 \ldots n),\ (t = 1 \ldots X) \quad (24)
Hence, each testing pattern is represented by

\chi_t = \left( (x_{t_1}, y_{t_1}), (x_{t_2}, y_{t_2}), \ldots, (x_{t_n}, y_{t_n}) \right) \quad \text{for } (t = 1 \ldots X) \quad (25)
Such testing patterns are classified by computing the new point positions of the testing data χ after projecting the sequence of curve-coefficients by
f(x, y) = \sum_{i=1}^{n} \left( \chi_{t_{y_i}} - \left( \alpha_{t_{i1}} \chi_{t_{x_i}}^2 + \alpha_{t_{i2}} \chi_{t_{x_i}} + \alpha_{t_{i3}} \right) \right) \quad (26)
So the sample shape \chi_t is registered to the class ω which has the highest probability, using Bayes rule over the total set of curve-classes Ω, in

\arg\min_{\omega} \frac{f(x, y)}{\sum_{w=1}^{\Omega} f(x, y)} \quad (27)
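The recognition rule of equations (26)-(27) can be sketched as a least-displacement classifier; here we use a squared-error variant of the residual, which is an assumption on our part, as are the function name and test data:

```python
import numpy as np

def classify(landmarks, class_coeffs):
    """Register a test shape to the curve-class with the smallest
    landmark-displacement error, in the spirit of equations (26)-(27).
    landmarks: (n, 2) array of test (x, y) points.
    class_coeffs: (W, n, 3) per-landmark quadratic coefficients
    (a_i1, a_i2, a_i3) for each of W curve-classes."""
    x, y = landmarks[:, 0], landmarks[:, 1]
    errors = []
    for coeffs in class_coeffs:
        # Predicted y for each landmark under this class's curves.
        y_hat = coeffs[:, 0] * x**2 + coeffs[:, 1] * x + coeffs[:, 2]
        errors.append(np.sum((y - y_hat) ** 2))
    return int(np.argmin(errors))       # index of the winning class
```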
5. EXPERIMENTS

We have evaluated our approach with sets of Arabic handwritten characters. Here, we have used 23 shape-classes for different writers, each with 80 training patterns. In total, we have examined the approach with 1840 handwritten Arabic character shape patterns for training and 4600 patterns for the recognition phase. Figure 1 illustrates some training patterns used in this paper. Figure 2 demonstrates single shapes and their landmark representations.
Figure 1: Training sets sample
Figure 2: Training patterns and extracted landmarks
Figure 3: Sample visual of regression Curve-classes
Figure 3 shows sample regression curve-classes resulting from the training stage. Figure 4 shows the convergence-rate graph of the curve-classes Ω as a function of iteration number in the training phase. The graph shows how the associated distribution probabilities for the set of curve-classes Ω converge within a few iterations. Table 1 shows sample curve coefficients for different shapes under investigation.

To take this investigation further, we demonstrate how well the approach behaves in the presence of noise. In Figure 5, we show the recognition rate achieved when point-position displacement error is applied, i.e., test-shape coordinates are moved away from their original positions. The figure shows that, as coordinates are moved farther away and the displacement variance increases significantly, shapes increasingly fail to register to their correct classes and the recognition rate drops.
Figure 4: Convergence rate as a function of iteration number.
Figure 5: Recognition rate as a function of iteration number with point-position error.
Table 1: Sample curve coefficients for chosen shapes

| Shape | a_t11 | a_t12 | a_t13 | a_t21 | a_t22 | a_t23 | a_tk1 | a_tk2 | a_tk3 |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Lam | -0.01452 | 3.222 | -168.4 | -0.006319 | 1.336 | -42.01 | -0.09073 | 20.1 | -1053 |
| Ain | 0.1667 | -23.5 | 832.3 | -0.7917 | 102.1 | -3269 | -0.645 | 60.84 | -1415 |
| Baa | -0.091 | 26.74 | -1905 | 0.3 | -86 | 6390 | 0.2911 | -86 | 6390 |
| Haa | 0.334 | -22.17 | 387.1 | 0.31 | -25.3 | 539 | 0.1333 | -14.87 | 436.8 |
| Kaf | -0.1667 | 40 | -2366 | 0.25 | -53 | 2844 | 0.3665 | -69.97 | 3374 |
Table 2 shows recognition rates per curve-class. In total, we have achieved a 94% recognition rate with this approach.
Table 2: Recognition rates for sample shapes (the sample-shape glyphs of the original table are images and are not reproduced here)

| Test Size | Correct | False | Recognition Rate | Test Size | Correct | False | Recognition Rate |
|-----------|---------|-------|------------------|-----------|---------|-------|------------------|
| 200 | 191 | 9 | 95.5% | 200 | 176 | 24 | 88% |
| 200 | 193 | 7 | 96.5% | 200 | 175 | 25 | 87.5% |
| 200 | 183 | 17 | 91.5% | 200 | 172 | 28 | 86% |
| 200 | 187 | 13 | 93.5% | 200 | 181 | 19 | 90.5% |
| 200 | 196 | 4 | 98% | 200 | 190 | 10 | 95% |
| 200 | 180 | 20 | 90% | 200 | 182 | 18 | 91% |
| 200 | 178 | 22 | 89% | 200 | 193 | 7 | 96.5% |
6. CONCLUSION

In this paper, we have shown how regression curves can be utilized to model the variation of handwritten Arabic characters. 2nd-order polynomial curves are fitted along the skeleton of the shape under study, and the appropriate set of curve-coefficients which describe the shape is extracted. We then used the apparatus of the Expectation Maximization algorithm to train the extracted set of curve-coefficients within a probabilistic framework, to capture the optimal shape-variation coefficients. The best-fitted parameters are then projected to recognize handwritten shapes using the Bayes rule of factorization. The proposed approach has been evaluated on sets of handwritten Arabic shapes from multiple different writers, achieving a recognition rate of nearly 94% on correctly registered shape classes.
7. ACKNOWLEDGMENT

I would like to express my sincere gratitude to the Public Authority for Applied Education and Training (Research Department) for their full funding of this research, project number BS-17-06 under the title "Arabic Character Recognition". Their kind support, including hardware, software, books, and conference fees, allowed me to investigate and conduct this research with significant results, and permitted me to contribute a good result to the literature of the pattern recognition field, specifically in recognizing handwritten Arabic characters.
REFERENCES
[1] J. Wood, "Invariant pattern recognition: A review," Pattern Recognition, vol. 29, no. 1, 1996, 1-17.
[2] A. K. Jain, R. P. W. Duin, and J. Mao, "Statistical pattern recognition: a review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 4-37, 2000.
[3] P.F. Baldi and K. Hornik, "Learning in linear neural networks: A survey," IEEE Transactions on Neural
Networks, vol. 6, no. 4, 1995, 837-858.
[4] T.Y. Kong and A. Rosenfeld, "Digital topology: introduction and survey," Computer Vision, Graphics,
and Image Processing, vol. 48, no. 3, pp. 357-393, 1989.
[5] T.R. Reed and J.M.H. Dubuf, "A review of recent texture segmentation and feature extraction
techniques," CVGIP - Image Understanding, vol. 57, no. 3, pp. 359-372, 1993.
[6] S. Sarkar and K.L. Boyer, "Perceptual organization in computer vision - a review and a proposal for a
classifactory structure", IEEE Transactions on Systems Man and Cybernetics, vol. 23, no. 2, 1993,
382-399.
[7] L. M. Lorigo and V. Govindaraju, "Offline Arabic handwriting recognition: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 712-724, 2006.
[8] I. Patel, V. Jagtap, and O. Kale, "A survey on feature extraction methods for handwritten digits recognition," International Journal of Computer Applications, vol. 107, no. 12, pp. 11-17, 2014.
[9] M. Sharif, F. Naz, M. Yasmin, M. A. Shahid, and A. Rehman, "Face recognition: a survey," Journal of Engineering Science and Technology Review, vol. 10, no. 2, pp. 166-177, 2017.
[10] R. Z. Khan and N. A. Ibraheem, "Hand gesture recognition: a literature review," International Journal of Artificial Intelligence & Applications, vol. 3, no. 4, pp. 161-174, 2012.
[11] B. G. Lindsay and M. L. Lesperance, "A review of semiparametric mixture models," Journal of Statistical Planning and Inference, vol. 47, no. 1, pp. 29-39, 1995.
[12] C. M. Bishop, Neural Networks for Pattern Recognition, Clarendon Press, 1995, ISBN 0198538642.
[13] A. Dempster, N. Laird, D. Rubin, Maximum likelihood from incomplete data via the em algorithm, J.
Roy. Statist. Soc. Ser. 39 (1977) 1–38
[14] J. Brendan, N. Jojic, Estimating mixture models of images and inferring spatial transformations using
the em algorithm, IEEE Comput. Vision Pattern Recognition 2 (1999) 416–422.
[15] C. Bishop and J. Winn, "Non-linear Bayesian image modelling," Proceedings of the Sixth European Conference on Computer Vision, 2000.
[16] N. Vasconcelos, A. Lippman, A probabilistic architecture for content-based image retrieval,
Proceedings of International Conference on Computer Vision and Pattern Recognition, 2000, pp.
216–221.
[17] B. North, A. Blake, Using expectation-maximisation to learn dynamical models from visual data,
Image Vision Computing. 17 (8) (1999) 611–616.
[18] M. Revow, C. Williams, G.E. Hinton, Using generative models for handwritten digit recognition, IEEE
Trans. Pattern Anal. Mach. Intell. 20 (2) (1996) 592–606
[19] Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer Science, and Business
Media, 2006.
[20] T. Cootes, C. Taylor, A mixture model for representing shape variations. Image and Vision Computing,
17(1999) 403-409
[21] A. AlShaher and E. Hancock, "Learning mixtures of point distribution models with the EM algorithm," Pattern Recognition, vol. 36, 2003, pp. 2805-2818.