Cancelable biometrics, a template transformation approach, attempts to provide robust, revocable authentication services based on biometrics. Several biometric template protection techniques represent the biometric information in binary form, as this offers benefits in matching and storage. However, such transformed binary representations can often be easily compromised and breached. In this paper, we propose an efficient non-invertible template transformation approach using a random projection technique and the Discrete Fourier Transform to shield binary biometric representations. The cancelable fingerprint templates designed by the proposed technique meet the requirements of revocability, diversity, non-invertibility and performance, and their matching performance improves on state-of-the-art methods.
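As an illustrative sketch of the general idea (not the authors' exact construction): a binary template can be multiplied by a key-seeded random projection matrix and then reduced to its DFT magnitudes, which discards phase and so cannot be inverted back; changing the key revokes the template. All names and parameters below are hypothetical.

```python
import numpy as np

def cancelable_template(binary_template, user_key, out_dim=64):
    # 1. user-specific random projection, seeded by a revocable key
    rng = np.random.default_rng(user_key)
    R = rng.standard_normal((out_dim, binary_template.size))
    projected = R @ binary_template.astype(float)
    # 2. keep only DFT magnitudes: phase is discarded, so the mapping
    #    cannot be inverted back to the projected vector
    return np.abs(np.fft.fft(projected))

template = np.random.default_rng(0).integers(0, 2, 256)  # toy binary template
t1 = cancelable_template(template, user_key=42)
t2 = cancelable_template(template, user_key=43)  # new key => new template
```

Reissuing with a new `user_key` yields an unrelated template from the same biometric, which is the revocability and diversity property the abstract refers to.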
An Experiment with Sparse Field and Localized Region Based Active Contour Int...CSCJournals
This paper discusses experiments conducted on different types of Level Set interactive segmentation techniques, implemented in Matlab, on selected images. The objective is to assess their effectiveness on natural images with complex composition: mixed intensity and colour, indistinct object boundaries, low contrast, etc. Besides visual assessment, measures such as the Jaccard Index, Dice Coefficient and Hausdorff Distance have been computed between segmented and ground-truth images to quantify accuracy. The paper particularly discusses Sparse Field Matrix and Localized Region Based Active Contours, both based on Level Sets. These techniques were not found to be effective where the object boundary is indistinct and/or has low contrast with the background, and were also ineffective on images where the foreground object stretches up to the image boundary.
KNN and ARL Based Imputation to Estimate Missing Valuesijeei-iaes
Missing data are the absence of data items for a subject; they hide information that may be important. In practice, missing data are a major factor affecting data quality, so missing value imputation is needed. Methods such as hierarchical clustering and K-means clustering are not robust to missing data and may lose effectiveness even with a few missing values. In this paper, KNN-based and ARL-based imputation are introduced to impute missing values, and the accuracy of both algorithms is measured using the normalized root mean square error. The results show that ARL is the more accurate and robust method for missing value estimation.
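A minimal sketch of KNN imputation and the normalized RMSE used to score it (the ARL variant is not reproduced here; the helper names and toy data are illustrative):

```python
import numpy as np

def knn_impute(X, k=1):
    """Fill NaNs with the mean of the k nearest fully observed rows
    (Euclidean distance on the columns the incomplete row has observed)."""
    X = X.astype(float).copy()
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        obs = ~miss
        complete = X[~np.isnan(X).any(axis=1)]  # fully observed rows
        d = np.sqrt(((complete[:, obs] - row[obs]) ** 2).sum(axis=1))
        nearest = complete[np.argsort(d)[:k]]
        X[i, miss] = nearest[:, miss].mean(axis=0)
    return X

def nrmse(true, imputed):
    # normalized root mean square error over the whole matrix
    return np.sqrt(np.mean((true - imputed) ** 2)) / (true.max() - true.min())

X_true = np.array([[1.0, 2.0], [1.1, 2.1], [5.0, 6.0], [5.1, 6.1]])
X_miss = X_true.copy()
X_miss[0, 1] = np.nan
X_hat = knn_impute(X_miss, k=1)   # imputes 2.1, the nearest row's value
```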
Quite often in experimental work, situations arise where some observations are lost or become unavailable due to accidents or cost constraints. When observations are missing, desirable design properties like orthogonality, rotatability and optimality can be adversely affected. Some attention has been given in the literature to the prediction capability of response surface designs; however, little or no effort has been devoted to investigating the same for such designs when observations are missing. This work therefore investigates the impact of a single missing observation at the various design points (factorial, axial and center) on the estimation and predictive capability of Central Composite Designs (CCDs). It was observed that, for each of the designs considered, the precision of model parameter estimates and the design's prediction properties were adversely affected by the missing observation, and that the largest loss in parameter precision corresponds to a missing factorial point.
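The three kinds of CCD design points mentioned above can be sketched in coded units; `alpha` defaults to the rotatable value, and the function name is illustrative:

```python
import numpy as np
from itertools import product

def central_composite_design(k, alpha=None):
    """Coded points of a CCD in k factors: 2^k factorial points, 2k axial
    points at +/-alpha, and one center point. alpha defaults to the
    rotatable choice (2^k)**0.25."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    factorial = np.array(list(product([-1, 1], repeat=k)), dtype=float)
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((1, k))
    return np.vstack([factorial, axial, center])

pts = central_composite_design(2)   # 4 factorial + 4 axial + 1 center = 9 runs
```

Deleting any one row of `pts` gives the single-missing-observation designs the abstract studies.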
Artificial Intelligence based optimization of weld bead geometry in laser wel...IJMER
This paper reports on the modeling and optimization of laser welding of a 1.7 mm thick aluminum-magnesium alloy. Regression analysis is used for modeling, and a genetic algorithm is used to optimize the process parameters. The input values for the regression models are taken from a Taguchi-based orthogonal array. A software tool named Computer Aided Robust Parameter Genetic Algorithm (CARPGA), which combines all of these methodologies, has been developed in MATLAB 2013 and validated against published results.
An Approach for Iris Recognition Based On Singular Value Decomposition and Hi...paperpublications3
Abstract: This paper presents a new approach to iris recognition using a Hidden Markov Model (HMM) as the classifier and Singular Value Decomposition (SVD) coefficients as features. The iris is a complex multi-dimensional structure, an integral part of biometrics, and requires good computing techniques for recognition. Features extracted from an iris are processed and compared with similar irises in a database; recognition is carried out by comparing characteristics of the iris to those of known individuals. Here a seven-state HMM-based iris recognition system is proposed, with a small number of quantized SVD coefficients as features describing blocks of iris images. SVD transforms correlated variables into a set of uncorrelated ones that better expose the relationships among the original data items, which makes the system very fast. The proposed approach has been examined on the CASIA database, and the results show that it is the fastest method tested while retaining good accuracy.
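A hedged sketch of using leading singular values of image blocks as compact features (block size and coefficient count are illustrative choices, not the paper's settings):

```python
import numpy as np

def svd_block_features(image, block=8, n_coeffs=4):
    """For each non-overlapping block, keep the largest singular values
    as a compact feature vector."""
    h, w = image.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            # singular values only; returned in descending order
            s = np.linalg.svd(image[r:r + block, c:c + block], compute_uv=False)
            feats.append(s[:n_coeffs])
    return np.array(feats)

img = np.random.default_rng(1).random((16, 16))  # toy "iris" patch
F = svd_block_features(img)   # 4 blocks x 4 leading singular values
```

In the paper's pipeline these per-block coefficients would be quantized and fed as the observation sequence to the HMM.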
Application of Adomian Decomposition Method in Solving Second Order Nonlinear...inventionjournals
Flexibility Analysis In Industrial Piping Through The Finite Elements And Pho...IJERA Editor
Industry needs predictability to operate at large scale without complications; only in this way can productivity be assured. Piping flexibility analysis predicts future problems and proposes applicable solutions, with the objective of preventing pipe collapses that can impact the production process and costs, while providing safety to workers and the environment and avoiding leaks and possible contamination. The aim of this study is to analyse the flexibility of industrial piping through the finite element and photoelasticity methods. For stress analysis, using a computerized specimen, it is possible to find, through finite elements and a practical photoelasticity setup, the values of the stresses and the locations where they act. To guarantee that the computerized and practical models are consistent with reality, a mathematical model, already tested and proven, is also implemented and compared with the others, providing evidence that all the models used are reliable and can be applied in large-scale industrial projects with complex studies. A comparison of a mathematical model based on a guided beam, a finite element model using the ANSYS® software and a photoelasticity study of a resin pipe shows that the method with the best industrial applicability is the computational one, giving trustworthy stress, reaction and deformation values as well as a detailed visualization of their distribution along the object of study.
Smart Response Surface Models using Legacy Data for Multidisciplinary Optimiz...IOSRJMCE
One of the key challenges in multidisciplinary design is the integration of design and analysis methods of various systems in a design framework. To achieve the Multidisciplinary Design Optimization (MDO) goals of aircraft systems, high-fidelity analyses are required from multiple disciplines such as aerodynamics, structures and performance. High-fidelity analyses like Computer-Aided Design and Engineering (CAD/CAE) techniques, complex computer models and computation-intensive simulations are often used to accurately study system behaviour during design optimization, but their high computational cost and numerical noise limit their effective use. The use of surrogates, or Response Surface Models (RSM), is one approach in MDO to avoid the computation barrier and to handle artificial minima caused by numerical noise. This paper presents a method based on "Smart Response Surface Models" that generates surrogate models, with a validated subspace, in the design space around the point of interest using legacy data for MDO. The method has been evaluated on three test cases created from the High Speed Civil Transport (HSCT) Multidisciplinary Design Optimization Test Suite.
Fault diagnosis using genetic algorithms and principal curveseSAT Journals
Abstract: Several applications of nonlinear principal component analysis (NPCA) have appeared recently in process monitoring and fault diagnosis. In this paper a new approach is proposed for fault detection based on principal curves and genetic algorithms. The principal curve is a generalization of linear principal component analysis (PCA), introduced by Hastie as a parametric curve that passes satisfactorily through the middle of the data. Existing principal curve algorithms employ the first principal component of the data as an initial estimate of the curve; this dependence on the initial line leads to a lack of flexibility, and the final curve is satisfactory only for specific problems. In this paper we extend this work in two ways. First, we propose a new method based on genetic algorithms to find the principal curve, in which lines are fitted and connected to form polygonal lines (PL). Second, a potential application of principal curves is discussed. An example illustrates fault diagnosis of a nonlinear process using the proposed approach. Index Terms: Principal curve, Genetic Algorithm, Nonlinear principal component analysis, Fault detection.
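The baseline the paper improves on, initializing the principal curve from the first principal component, can be sketched as follows (names and the toy data are illustrative):

```python
import numpy as np

def first_pc_line(X, n_points=20):
    """Initial principal-curve estimate used by classical algorithms:
    a polyline along the first principal component spanning the data."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # leading right singular vector = first principal direction
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v = Vt[0]
    t = Xc @ v                      # projection coordinate of each sample
    ts = np.linspace(t.min(), t.max(), n_points)
    return mu + np.outer(ts, v)     # polyline along the first PC

# elongated toy cloud: the first PC aligns with the long axis
X = np.random.default_rng(2).normal(size=(100, 2)) * [3.0, 0.3]
line = first_pc_line(X)
```

The GA approach in the paper replaces this fixed straight-line initialization with a search over fitted, connected line segments (polygonal lines).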
Validation Study of Dimensionality Reduction Impact on Breast Cancer Classifi...ijcsit
A fundamental problem in machine learning is identifying the most representative subset of features from which to construct a predictive model for a classification task. This paper presents a validation study of the effect of dimensionality reduction on the classification accuracy of mammographic images. The dimensionality reduction methods studied were locality-preserving projection (LPP), locally linear embedding (LLE), Isometric Mapping (ISOMAP) and spectral regression (SR). We achieved high classification rates: in some combinations the rate was 100%, and in most cases it was about 95%. The classification rate was also found to increase with the size of the reduced space, the optimal space dimension being 60. We then validated these results by measuring validation indices such as the Xie-Beni index, the Dunn index and the Alternative Dunn index; these measurements confirm that the optimal reduced space dimension is d = 60.
BPSO&1-NN algorithm-based variable selection for power system stability ident...IJAEMSJORNAL
Due to the very high nonlinearity of the power system, traditional analytical methods take a long time to solve, delaying decision-making. Quickly detecting power system instability so that the control system can make timely decisions is therefore a key factor in ensuring stable operation. Power system stability identification faces a large data set size problem, so representative variables must be selected as inputs to the identifier. This paper proposes a wrapper method for variable selection, in which the Binary Particle Swarm Optimization (BPSO) algorithm is combined with a K-NN (K=1) identifier to search for a good set of variables; the combination is named BPSO&1-NN. Test results on the IEEE 39-bus system show that the proposed method achieves the goal of reducing the number of variables while maintaining high accuracy.
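A minimal sketch of the wrapper idea, binary PSO over feature masks with leave-one-out 1-NN accuracy as the fitness (constants, names and the toy data are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def knn1_accuracy(X, y, mask):
    """Leave-one-out accuracy of a 1-NN classifier on the selected columns."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a sample may not be its own neighbour
    return float(np.mean(y[d.argmin(axis=1)] == y))

def bpso_select(X, y, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5):
    """Binary PSO wrapper: particles are 0/1 feature masks."""
    n = X.shape[1]
    pos = (rng.random((n_particles, n)) < 0.5).astype(float)
    vel = np.zeros((n_particles, n))
    pbest = pos.copy()
    pfit = np.array([knn1_accuracy(X, y, p) for p in pos])
    gbest = pbest[pfit.argmax()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, n))
        r2 = rng.random((n_particles, n))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # sigmoid update rule of binary PSO
        pos = (rng.random((n_particles, n)) < 1 / (1 + np.exp(-vel))).astype(float)
        fit = np.array([knn1_accuracy(X, y, p) for p in pos])
        better = fit > pfit
        pbest[better] = pos[better]
        pfit[better] = fit[better]
        gbest = pbest[pfit.argmax()].copy()
    return gbest.astype(bool)

# toy data: only the first two features carry class information
X = rng.normal(size=(60, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
mask = bpso_select(X, y)
acc = knn1_accuracy(X, y, mask)
```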
A Thresholding Method to Estimate Quantities of Each ClassWaqas Tariq
Thresholding is a general tool for the classification of a population, and various thresholding methods have been proposed. However, there are cases in which existing methods are not appropriate, for example when the objective is to select a threshold for estimating the total number of data items (pixels) in each classified population. In particular, if there is a significant difference between the total numbers and/or variances of the two populations, the classification error probabilities differ excessively from each other, and the estimated quantities of each classified population can be very different from the actual ones. In this report, a new method is proposed for selecting a threshold that estimates class quantities more precisely in such cases, and the features and range of application of the proposed method are verified through sample data analysis.
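The count-balancing criterion can be sketched for two Gaussian populations with known parameters: choose the threshold where the expected number of class-1 items above it equals the expected number of class-2 items below it, so the two errors cancel in the class counts (an illustration of the principle, not the paper's algorithm):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def quantity_threshold(n1, mu1, s1, n2, mu2, s2):
    """Threshold (mu1 < mu2 assumed) where expected class-1 items above it
    equal expected class-2 items below it, balancing the count estimates."""
    ts = np.linspace(mu1, mu2, 2001)
    leak1 = n1 * (1 - np.array([norm_cdf(t, mu1, s1) for t in ts]))  # class 1 above t
    leak2 = n2 * np.array([norm_cdf(t, mu2, s2) for t in ts])        # class 2 below t
    return ts[np.abs(leak1 - leak2).argmin()]

# unequal sizes and variances: the balanced-count threshold shifts well
# past the midpoint, toward the small, tight population
t = quantity_threshold(n1=10000, mu1=0.0, s1=1.0, n2=500, mu2=4.0, s2=0.5)
```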
Modeling cassava yield a response surface approachijcsa
This paper reports on the application of the theory of experimental design, using graphical techniques in the R programming language, and on the application of a nonlinear bootstrap regression method to demonstrate the invariance property of the parameter estimates of the Inverse Polynomial Model (IPM) on a nonlinear surface.
IDENTIFICATION OF DELAMINATION SIZE AND LOCATION OF COMPOSITE LAMINATE FROM TIME DOMAIN DATA OF MAGNETOSTRICTIVE SENSOR AND ACTUATOR USING ARTIFICIAL NEURAL NETWORK.
Adaptive response surface by kriging using pilot points for structural reliab...IOSR Journals
Structural reliability analysis aims to compute the probability of failure by considering system uncertainties. However, this approach may require very time-consuming computation and becomes impracticable for complex structures, especially when complex analysis and simulation codes such as the finite element method are involved. Approximation methods are widely used to build simplified approximations, or metamodels, providing a surrogate for the original codes. The most popular surrogate is the response surface methodology, which typically employs a second-order polynomial approximation fitted by least-squares regression, and several authors have used response surface methods in reliability analysis. Another approximation method, based on the kriging approach, has been successfully applied in deterministic optimization, but few studies have treated the use of kriging approximation in reliability analysis and reliability-based design optimization. In this paper, kriging approximation is used as an alternative to the traditional response surface method to approximate the performance function in reliability analysis. The main objective of this work is to develop an efficient global approximation with accurate prediction while controlling the computational cost. A pilot point method is proposed for the kriging approximation: the predictive quality of the initial approximation is improved by adding adaptive information, called "pilot points", in areas where the kriging variance is maximum, since such points are good candidates for numerical simulation. This methodology allows efficient modeling of highly non-linear responses while reducing the number of simulations compared to a Latin Hypercube approach. Numerical examples show the efficiency and interest of the proposed method.
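The pilot-point step can be sketched with a toy one-dimensional kriging (GP) surrogate: evaluate the expensive code where the kriging variance is largest and refit (kernel, length scale and the stand-in performance function are illustrative assumptions):

```python
import numpy as np

def rbf(a, b, length=0.3):
    # unit-amplitude squared-exponential covariance
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length ** 2))

def krige(x_train, y_train, x_query, noise=1e-6):
    """Simple-kriging mean and variance with an RBF covariance."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

g = lambda x: np.sin(6 * x)            # stand-in performance function
x = np.array([0.0, 0.5, 1.0])
y = g(x)
grid = np.linspace(0.0, 1.0, 101)
_, var0 = krige(x, y, grid)

# pilot-point loop: run the "expensive" code where kriging variance peaks
for _ in range(3):
    _, var = krige(x, y, grid)
    x_new = grid[var.argmax()]
    x = np.append(x, x_new)
    y = np.append(y, g(x_new))

_, var1 = krige(x, y, grid)            # variance shrinks after pilot points
```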
Multimodal Biometrics Recognition by Dimensionality Diminution MethodIJERA Editor
A multimodal biometric system uses two or more biometric modalities, e.g., face, ear, fingerprint, signature or palmprint, to improve the recognition accuracy of conventional unimodal methods. We propose a new dimensionality reduction method called Dimension Diminish Projection (DDP). DDP not only preserves local information by capturing the intra-modal geometry, but also effectively extracts between-class structures relevant for classification. Experimental results show that our proposed method performs better than other algorithms, including PCA, LDA and MFA.
DCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITYIAEME Publication
This research paper discusses the study and analysis conducted on various techniques in the biometric domain. A close look at biometric enhancement techniques and their limitations is presented, enabling researchers to understand the research contributions in the area of DCT- and DFT-based recognition and security and to locate some crucial limitations of these notable works. The paper summarizes the different research papers applicable to our topic of research. Biometric recognition and security is an important subject of research in the area of image processing.
Construction Management (CM) has to deal with a variety of uncertainties related to time, cost, quality and safety, to name a few. Such uncertainties make the entire construction process highly unpredictable. It therefore falls under the purview of artificial neural networks (ANNs), in which hazy information can be effectively interpreted to arrive at meaningful conclusions. This paper reviews the application of ANNs in construction activities related to the prediction of costs, risk and safety, tender bids, and labor and equipment productivity. The review suggests that ANNs have been highly beneficial in correctly interpreting inadequate input information. Most investigators used the feed-forward back-propagation type of network; however, where a single ANN architecture was insufficient, hybrid modeling in association with other machine learning tools such as genetic programming and support vector machines proved useful. It is clear, however, that the authenticity of the data and the experience of the modeler are important in obtaining good results.
K-Means Segmentation Method for Automatic Leaf Disease DetectionIJERA Editor
Automatic detection of leaf disease is an essential topic in agricultural research. It can benefit field monitoring and early detection of leaf diseases through the symptoms that appear on the leaves. Defect segmentation is carried out in two steps. This paper illustrates the K-means clustering method for segmenting the diseased portion of a leaf: first, the pixels are clustered based on their color and spatial features; then the clustered blocks are merged into a specific number of regions. This approach provides a feasible, robust solution for defect segmentation of leaves.
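The clustering step can be sketched with plain k-means on toy colour vectors (the colours, centres and the guard against empty clusters are illustrative; real leaf images would also use spatial features, as the abstract notes):

```python
import numpy as np

def kmeans(X, k=2, iters=20, seed=0):
    """Minimal k-means on pixel feature vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every pixel to its nearest centre
        labels = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
        # recompute centres, keeping the old one if a cluster empties
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

# toy "image": healthy pixels near green, diseased pixels near brown
rng = np.random.default_rng(3)
healthy = rng.normal([0.2, 0.8, 0.2], 0.05, (200, 3))
diseased = rng.normal([0.5, 0.3, 0.1], 0.05, (50, 3))
pixels = np.vstack([healthy, diseased])
labels = kmeans(pixels, k=2)   # one cluster gathers the diseased pixels
```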
Architecture neural network deep optimizing based on self organizing feature ...journalBEEI
Feed-forward neural network (FNN) performance depends on the training algorithm and on architecture selection. Several parameters define the architecture of an FNN, such as the number of connections between layers, the number of hidden neurons in each hidden layer and the number of hidden layers. The exponential number of possible architectural combinations is unmanageable by hand, so a specific architecture can be designed automatically by an algorithm that builds a system with better generalization ability. The architecture of an FNN can be determined using numerous optimization algorithms. This paper proposes a new methodology in which the numbers of hidden neurons and hidden layers of the FNN are estimated using the self-organizing feature map (SOFM) training algorithm, which has the advantage of showing how the best architecture is selected automatically from test-error criteria over a population of architectures. The proposed approach is tested on four benchmark classification data sets of different sizes.
MAGNETIC RESONANCE BRAIN IMAGE SEGMENTATIONVLSICS Design
Segmentation of tissues and structures from medical images is the first step in many image analysis applications developed for medical diagnosis. With the growing research on medical image segmentation, it is essential to categorize the research outcomes and provide researchers with an overview of the existing segmentation techniques in medical images. In this paper, different image segmentation methods applied to magnetic resonance brain images are reviewed. The selection of methods includes sources from image processing journals, conferences, books, dissertations and theses. The conceptual details of the methods are explained, and mathematical details are avoided for simplicity. Both broad and detailed categorizations of the reviewed segmentation techniques are provided. The state-of-the-art research is presented with emphasis on the developed techniques and the image properties they use. The methods described are not always mutually independent, so their interrelationships are also stated. Finally, conclusions are drawn summarizing commonly used techniques and their complexities in application.
FACE RECOGNITION USING PRINCIPAL COMPONENT ANALYSIS WITH MEDIAN FOR NORMALIZA...csandit
Recognizing faces helps to name the various subjects present in an image. This work focuses on labeling faces in an image containing human faces of various age groups (a heterogeneous set). Principal component analysis finds the mean of the data set and subtracts it from the data in order to normalize them; normalization with respect to an image is the removal of common features from the data set. This work brings in the novel idea of using the median, another measure of central tendency, for normalization rather than the mean. The work was implemented using MATLAB. Results show that the median is the better measure for normalization of a heterogeneous data set, which gives rise to outliers.
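The median-instead-of-mean normalization can be sketched directly (function names and the toy data are illustrative):

```python
import numpy as np

def pca_basis(X, n_components, center='median'):
    """PCA where the data are normalized by subtracting the median rather
    than the mean (the idea above, for outlier-heavy face sets);
    center='mean' gives the classical variant for comparison."""
    c = np.median(X, axis=0) if center == 'median' else X.mean(axis=0)
    Xc = X - c
    # rows of Vt are the orthonormal principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return c, Vt[:n_components]

rng = np.random.default_rng(4)
faces = rng.random((30, 64))      # toy flattened "face" vectors
faces[0] *= 50                    # one outlier face skews the mean, not the median
c_med, basis = pca_basis(faces, n_components=5)
proj = (faces - c_med) @ basis.T  # features used for recognition
```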
MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER R...ijaia
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models use 2nd-order polynomials to model shapes within a training set. To estimate the regression models, we extract the coefficients that describe the variations of each shape class; a least-squares method is used to estimate such models. We then train these coefficients using the Expectation Maximization algorithm. Recognition is carried out by finding the least landmark displacement error with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
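A hedged sketch of one regression-curve model: fit a 2nd-order polynomial to a shape's landmarks by least squares and recognize by the smallest landmark-displacement error (the EM training stage is omitted; names and toy shapes are illustrative):

```python
import numpy as np

def fit_shape_curve(landmarks):
    """Least-squares fit of y = a + b*x + c*x^2 through landmark points."""
    x, y = landmarks[:, 0], landmarks[:, 1]
    A = np.column_stack([np.ones_like(x), x, x ** 2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def curve_error(landmarks, coeffs):
    """Total landmark displacement from the model curve; smallest error
    wins at recognition time."""
    x, y = landmarks[:, 0], landmarks[:, 1]
    pred = coeffs[0] + coeffs[1] * x + coeffs[2] * x ** 2
    return float(np.abs(y - pred).sum())

xs = np.linspace(-1, 1, 10)
arc = np.column_stack([xs, xs ** 2])        # toy bowl-shaped "character"
line = np.column_stack([xs, 0.5 * xs])      # a different toy shape
model_arc = fit_shape_curve(arc)            # the arc matches its own model
```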
Reliability and Fuzzy Logic Concepts as Applied to Slope Stability Analysis –...IJERA Editor
Considerable uncertainty exists with regard to the stability of slopes due to several factors, and recognition of these uncertainties has led designers to introduce a factor of safety. Several recent studies on analytical methods using soil properties have improved the understanding of these uncertainties. Reliability analysis of slopes can be used to represent uncertainty in mathematical models, which can be assumed to follow the characteristics of random uncertainty. The distribution of an uncertain variable is unknown, which makes its estimation difficult; hence, the concepts of fuzzy set theory appear to be quite suitable when only limited information is available. This paper reviews the slope stability problem and deals with the intricacies of the concepts of reliability and fuzzy logic as applied to slope stability analysis. It has been suggested that the FOSM algorithm provides general agreement among the different slope stability solutions.
Evaluation of 6 noded quarter point element for crack analysis by analytical...eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
The Application Of Bayes Ying-Yang Harmony Based Gmms In On-Line Signature Ve...ijaia
In this contribution, a Bayes Ying-Yang (BYY) harmony based approach for on-line signature verification is presented. In the proposed method, a simple but effective Gaussian Mixture Model (GMM) is used to represent each user's signature model, based on the prior information collected. Differing from earlier works, we use the Bayes Ying-Yang machine combined with the harmony function to achieve Automatic Model Selection (AMS) during parameter learning for the GMMs, so that a better approximation of the user model is assured. Experiments on a database from the First International Signature Verification Competition (SVC 2004) confirm that this combined algorithm yields quite satisfactory results.
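The BYY harmony criterion itself is not reproduced here; the sketch below only fits the underlying GMM with classical EM on synthetic 1-D data, with the component count fixed at K=2 rather than selected automatically as in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(3, 1.0, 200)])

mu = np.array([-1.0, 1.0]); var = np.array([1.0, 1.0]); w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each sample
    dens = w * np.exp(-(data[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances
    nk = resp.sum(axis=0)
    w = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    var = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk

print(np.sort(mu))   # close to the true means (-2, 3)
```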
Prediction of surface roughness in high speed machining a comparisoneSAT Publishing House
A Statistical study on effects of fundamental machining parameters on surface...dbpublications
Roughness consists of the irregularities of the surface texture, usually including those irregularities that result from the actions involved in the production process. Surface roughness is an important measure of the quality of a machined product and a factor that greatly determines manufacturing cost. In this work, in order to estimate surface quality and dimensional precision in advance, theoretical models are employed, making it feasible to predict them as a function of operating conditions and machining parameters such as feed speed and depth of cut. This need for prediction motivates a statistical method such as design of experiments (DOE) for studying the relationships between the machining parameters. DOE is an analysis technique that uses regression to find the relationships between the various factors in an experimental setup, based on the interactions of the predictor variables and the response variables measured in the experiments. Research in this domain will help advance further investigations into the relationship between the machining factors and the surface quality of machined components. DOE using Taguchi's method, together with a statistical study of the experimental data, helps in understanding the interaction between factors such as speed, feed and depth of cut in machining.
A Comparison of Optimization Methods in Cutting Parameters Using Non-dominate...Waqas Tariq
Since cutting conditions influence production cost, production time and the quality of the final product, determining optimal cutting parameters such as cutting speed, feed rate, depth of cut and tool geometry is one of the vital modules in the process planning of metal parts. Using experimental results and, subsequently, main effects plots, the importance of each parameter is studied. In this investigation these parameters were considered as inputs in order to simultaneously optimize surface finish and tool life, two conflicting objectives, as the process performance measures. The micro genetic algorithm (MGA) and the Non-dominated Sorting Genetic Algorithm (NSGA-II) were compared, and NSGA-II proved superior, as its results were more satisfactory than those of the micro genetic algorithm in terms of optimizing the machining parameters.
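The core of NSGA-II referenced above is non-dominated sorting. A minimal O(n²) sketch for two minimization objectives follows, on illustrative points rather than real machining data:

```python
import numpy as np

# A point dominates another if it is no worse in every objective
# and strictly better in at least one (both objectives minimized).
def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def nondominated_front(points):
    front = []
    for i, p in enumerate(points):
        if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
            front.append(i)
    return front

# Columns could stand for, e.g., surface roughness and tool wear.
pts = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0], [4.0, 4.0]])
print(nondominated_front(pts))   # [0, 1, 2]
```

NSGA-II repeats this sorting on successive fronts and adds crowding-distance ranking; this sketch shows only the first front.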
Fault diagnosis using genetic algorithms and principal curveseSAT Journals
Abstract: Several applications of nonlinear principal component analysis (NPCA) have appeared recently in process monitoring and fault diagnosis. In this paper a new approach is proposed for fault detection based on principal curves and genetic algorithms. The principal curve is a generalization of linear principal component analysis (PCA), introduced by Hastie as a parametric curve that passes satisfactorily through the middle of the data. Existing principal curve algorithms employ the first principal component of the data as an initial estimate of the curve. However, this dependence on the initial line leads to a lack of flexibility, and the final curve is satisfactory only for specific problems. In this paper we extend this work in two ways. First, we propose a new method based on genetic algorithms to find the principal curve, in which lines are fitted and connected to form polygonal lines (PL). Second, a potential application of principal curves is discussed. An example is used to illustrate fault diagnosis of a nonlinear process using the proposed approach. Index Terms: Principal curve, Genetic Algorithm, Nonlinear principal component analysis, Fault detection.
Validation Study of Dimensionality Reduction Impact on Breast Cancer Classifi...ijcsit
A fundamental problem in machine learning is identifying the most representative subset of features from which to construct a predictive model for a classification task. This paper presents a validation study of the effect of dimensionality reduction on the classification accuracy of mammographic images. The studied dimensionality reduction methods were: locality-preserving projection (LPP), locally linear embedding (LLE), Isometric Mapping (ISOMAP) and spectral regression (SR). We achieved high classification rates; in some combinations the classification rate was 100%, but in most cases it was about 95%. It was also found that the classification rate increases with the size of the reduced space, and that the optimal value of the space dimension is 60. We then validated the obtained results by measuring validation indices such as the Xie-Beni index, the Dunn index and the Alternative Dunn index. The measurement of these indices confirms that the optimal value of the reduced space dimension is d=60.
BPSO&1-NN algorithm-based variable selection for power system stability ident...IJAEMSJORNAL
Due to the very high nonlinearity of the power system, traditional analytical methods take a long time to solve, causing delays in decision-making. Quickly detecting power system instability, so that the control system can make timely decisions, is therefore a key factor in ensuring stable operation of the power system. Power system stability identification encounters the problem of large data set size, so representative variables must be selected as inputs to the identifier. This paper proposes applying a wrapper method to select variables, in which the Binary Particle Swarm Optimization (BPSO) algorithm is combined with a K-NN (K=1) identifier to search for a good set of variables; the method is named BPSO&1-NN. Test results on the IEEE 39-bus diagram show that the proposed method achieves the goal of reducing the number of variables while retaining high accuracy.
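A hedged sketch of the wrapper idea: BPSO proposes binary feature masks and a leave-one-out 1-NN score ranks them. The dataset is synthetic (features 0 and 1 informative, the rest noise), and the swarm size, inertia weight and iteration count are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6))
X[:, 0] += 3 * y                      # informative feature
X[:, 1] -= 3 * y                      # informative feature

def loo_1nn_accuracy(mask):
    # Wrapper fitness: leave-one-out 1-NN accuracy on the selected features.
    if not mask.any():
        return 0.0
    Z = X[:, mask]
    d = np.linalg.norm(Z[:, None] - Z[None, :], axis=2)
    np.fill_diagonal(d, np.inf)       # exclude each sample from its own vote
    return float(np.mean(y[d.argmin(axis=1)] == y))

swarm = rng.random((10, 6)) < 0.5     # 10 binary particles over 6 features
vel = np.zeros((10, 6))
best = swarm.copy()
best_fit = np.array([loo_1nn_accuracy(m) for m in swarm])
for _ in range(30):
    g = best[best_fit.argmax()]       # global best mask
    vel = (0.7 * vel
           + rng.random((10, 6)) * (best * 1.0 - swarm)
           + rng.random((10, 6)) * (g * 1.0 - swarm))
    swarm = rng.random((10, 6)) < 1 / (1 + np.exp(-vel))   # sigmoid BPSO update
    fit = np.array([loo_1nn_accuracy(m) for m in swarm])
    improved = fit > best_fit
    best[improved] = swarm[improved]
    best_fit[improved] = fit[improved]

best_mask = best[best_fit.argmax()]
print(best_mask.astype(int), round(best_fit.max(), 3))
```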
A Thresholding Method to Estimate Quantities of Each ClassWaqas Tariq
Thresholding is a general tool for classifying a population, and various thresholding methods have been proposed by many researchers. However, there are cases in which existing methods are not appropriate for population analysis, for example when the objective is to select a threshold in order to estimate the total number of data (pixels) in each classified population. In particular, if there is a significant difference between the total numbers and/or variances of the two populations, the classification error probabilities differ excessively from each other; consequently, the estimated quantity of each classified population can be very different from the actual one. In this report, a new method is proposed that can be applied to select a threshold estimating the quantities of the classes more precisely in the above-mentioned case. Verification of the features and the range of application of the proposed method through sample data analysis is then presented.
Modeling cassava yield a response surface approachijcsa
This paper reports on the application of the theory of experimental design, using graphical techniques in the R programming language, and the application of a nonlinear bootstrap regression method to demonstrate the invariance property of the parameter estimates of the Inverse Polynomial Model (IPM) on a nonlinear surface.
IDENTIFICATION OF DELAMINATION SIZE AND LOCATION OF COMPOSITE LAMINATE FROM TIME DOMAIN DATA OF MAGNETOSTRICTIVE SENSOR AND ACTUATOR USING ARTIFICIAL NEURAL NETWORK.
Adaptive response surface by kriging using pilot points for structural reliab...IOSR Journals
Structural reliability analysis aims to compute the probability of failure by considering system uncertainties. However, this may require very time-consuming computation and becomes impracticable for complex structures, especially when complex analysis and simulation codes, such as the finite element method, are involved. Approximation methods are widely used to build simplified approximations, or metamodels, providing a surrogate for the original codes. The most popular surrogate is the response surface methodology, which typically employs a second-order polynomial approximation using least-squares regression, and several authors have used response surface methods in reliability analysis. Another approximation method, based on the kriging approach, has been successfully applied in the field of deterministic optimization, but few studies have treated the use of kriging approximation in reliability analysis and reliability-based design optimization. In this paper, the kriging approximation is used as an alternative to the traditional response surface method to approximate the performance function in reliability analysis. The main objective of this work is to develop an efficient global approximation with accurate prediction while controlling the computational cost. A pilot point method is proposed for the kriging approximation in order to increase the prior predictivity of the approximation, where the pilot points are good candidates for numerical simulation. In other words, the predictive quality of the initial kriging approximation is improved by adding adaptive information, called "pilot points", in areas where the kriging variance is maximum. This methodology allows for efficient modeling of highly non-linear responses, while the number of simulations is reduced compared with a Latin Hypercube approach. Numerical examples show the efficiency and interest of the proposed method.
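A minimal kriging-style interpolator (a zero-mean Gaussian-process predictor with a squared-exponential covariance) can stand in for the surrogate described above; the performance function g and all hyperparameters below are illustrative, not from the paper:

```python
import numpy as np

def kriging_predict(X, yv, Xnew, length=0.5, nugget=1e-8):
    # Squared-exponential covariance between two sets of 1-D points.
    def k(a, b):
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length ** 2))
    K = k(X, X) + nugget * np.eye(len(X))   # small nugget for conditioning
    w = np.linalg.solve(K, yv)
    return k(Xnew, X) @ w

g = lambda x: np.sin(3 * x) + x          # stand-in performance function
X = np.linspace(0.0, 2.0, 8)             # design-of-experiments points
yv = g(X)
Xnew = np.array([0.5, 1.25])
pred = kriging_predict(X, yv, Xnew)
print(pred)                              # close to g(Xnew)
```

Unlike a least-squares response surface, this predictor interpolates the design points exactly (up to the nugget), which is the property that makes adding pilot points where the kriging variance is large effective.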
Multimodal Biometrics Recognition by Dimensionality Diminution MethodIJERA Editor
A multimodal biometric system utilizes two or more biometric modalities, e.g., face, ear, fingerprint, signature and palmprint, to improve the recognition accuracy of conventional unimodal methods. In this paper we propose a new dimensionality reduction method called Dimension Diminish Projection (DDP). DDP can not only preserve local information by capturing the intra-modal geometry, but also effectively extract between-class relevant structures for classification. Experimental results show that our proposed method performs better than other algorithms, including PCA, LDA and MFA.
DCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITYIAEME Publication
This research paper discusses the study and analysis conducted during this research on various techniques in the biometric domain. A close look at biometric enhancement techniques and their limitations is presented. This should enable researchers to understand the research contributions in the area of DCT- and DFT-based recognition and security, and to locate some crucial limitations of these notable works. The paper summarizes the different research papers relevant to the topic of research mentioned above. Biometric recognition and security is a most important subject of research in the area of image processing.
Construction Management (CM) has to deal with a variety of uncertainties related to time, cost, quality and safety, to name a few. Such uncertainties make the entire construction process highly unpredictable. It therefore falls under the purview of artificial neural networks (ANNs), in which the given hazy information can be effectively interpreted in order to arrive at meaningful conclusions. This paper reviews the application of ANNs in construction activities related to the prediction of costs, risk and safety, tender bids, as well as labor and equipment productivity. The review suggests that ANNs have been highly beneficial in correctly interpreting inadequate input information. Most of the investigators used the feed-forward back-propagation type of network; however, where a single ANN architecture was found to be insufficient, hybrid modeling in association with other machine learning tools, such as genetic programming and support vector machines, was useful. It was also clear that the authenticity of the data and the experience of the modeler are important in obtaining good results.
K-Means Segmentation Method for Automatic Leaf Disease DetectionIJERA Editor
Automatic detection of leaf diseases is an essential research topic in agricultural research. It can prove beneficial in monitoring fields and detecting leaf diseases early from the symptoms that appear on the leaves. Defect segmentation is carried out in two steps. This paper illustrates a K-means clustering method for segmentation of the diseased portion of a leaf. First, the pixels are clustered based on their color and spatial features. Then the clustered blocks are merged into a specific number of regions. This approach provides a feasible, robust solution for defect segmentation of leaves.
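The two-step procedure can be sketched on a synthetic 8x8 "leaf": k-means clusters the pixel colors, and the cluster closest to an assumed reference disease color becomes the defect mask. Initialization is fixed here for reproducibility; a real system would use random or k-means++ seeding:

```python
import numpy as np

def kmeans(pixels, k, init_idx, iters=10):
    centers = pixels[init_idx].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers.
        labels = np.argmin(np.linalg.norm(pixels[:, None] - centers[None, :], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

img = np.zeros((8, 8, 3)) + np.array([0.1, 0.7, 0.1])   # healthy green
img[2:5, 2:5] = [0.5, 0.3, 0.1]                         # brown diseased patch
pixels = img.reshape(-1, 3)
labels, centers = kmeans(pixels, k=2, init_idx=[0, 18])
# Cluster nearest the (assumed) brown reference color = the defect cluster.
diseased = int(np.argmin(np.linalg.norm(centers - np.array([0.5, 0.3, 0.1]), axis=1)))
mask = (labels == diseased).reshape(8, 8)
print(int(mask.sum()))   # 9 pixels flagged as diseased
```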
Architecture neural network deep optimizing based on self organizing feature ...journalBEEI
The performance of a feed-forward neural network (FNN) depends on the training algorithm and on architecture selection. Several parameters define the architecture of an FNN, such as the number of connections between layers, the number of hidden neurons in each hidden layer, and the number of hidden layers. The number of possible architectural combinations grows exponentially and cannot be managed manually, so a specific architecture should be designed automatically by an algorithm that builds a system with better generalization ability. The architecture of an FNN can be determined using numerous optimization algorithms. This paper proposes a new methodology in which the number of neurons in the hidden layers is estimated by combining a training algorithm with the self-organizing feature map (SOFM), which has the advantage of selecting the best architecture automatically, from a testing-error criterion, over a population of architectures. The proposed approach is tested on four benchmark classification datasets of different sizes.
MAGNETIC RESONANCE BRAIN IMAGE SEGMENTATIONVLSICS Design
FINGERPRINT MATCHING USING HYBRID SHAPE AND ORIENTATION DESCRIPTOR -AN IMPROV...IJCI JOURNAL
Fingerprint recognition is a promising approach for biometric identification and authentication. Fingerprints are broadly used for personal identification due to their feasibility, distinctiveness, permanence, accuracy and acceptability. This paper proposes a way to improve the Equal Error Rate (EER) of fingerprint matching techniques in the domain of the hybrid shape and orientation descriptor. This matching domain is popular due to its capability of filtering out false and spurious minutiae pairings. The EER is calculated from the FMR and FNMR to check the performance of the proposed technique.
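How the EER is read off the FMR/FNMR curves can be sketched as follows, using synthetic match scores rather than real fingerprint comparisons:

```python
import numpy as np

rng = np.random.default_rng(4)
genuine = rng.normal(0.8, 0.1, 500)    # genuine matches score high
impostor = rng.normal(0.3, 0.1, 500)   # impostor matches score low

thresholds = np.linspace(0, 1, 1001)
fmr = np.array([(impostor >= t).mean() for t in thresholds])   # false match rate
fnmr = np.array([(genuine < t).mean() for t in thresholds])    # false non-match rate

i = np.argmin(np.abs(fmr - fnmr))      # operating point where the rates cross
eer = (fmr[i] + fnmr[i]) / 2
print(round(eer, 4))                   # small, since the score distributions barely overlap
```

The EER is simply the common error rate at the threshold where FMR and FNMR are equal; a lower EER means better-separated genuine and impostor score distributions.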
Computational Intelligence Approach for Predicting the Hardness Performances ...Waqas Tariq
This paper presents a computational approach to predicting the hardness performance of the Titanium Aluminium Nitride (TiAlN) coating process. A new application predicting the hardness performance of TiAlN coatings using Support Vector Machine (SVM) and Artificial Neural Network (ANN) methods is implemented. TiAlN coatings are usually used in high-speed machining due to their excellent surface hardness and wear resistance. A Physical Vapor Deposition (PVD) magnetron sputtering process was used to produce the TiAlN coatings. Based on the experimental dataset of previous work, the SVM and ANN models are used to predict the hardness of TiAlN coatings. Three influential coating process parameters, namely substrate sputtering power, substrate bias voltage and substrate temperature, were selected as inputs, while the output parameter is the hardness. The results of the proposed SVM and ANN models are compared with the experimental results and with the hybrid RSM-Fuzzy model from previous work. The comparison of the SVM and ANN models against hybrid RSM-Fuzzy was based on predictive performance, in order to obtain the most accurate model for predicting the hardness of TiAlN coatings. For predictive performance evaluation, four performance metrics were applied: percentage error, mean square error (MSE), coefficient of determination (R²) and model accuracy. The results proved that the proposed SVM model performs better than the ANN and hybrid RSM-Fuzzy models. The good performance obtained by the SVM method shows that it can be applied to predict hardness in the TiAlN coating process with better predictive performance than ANN and hybrid RSM-Fuzzy.
Comparison of Cost Estimation Methods using Hybrid Artificial Intelligence on...IJERA Editor
Cost estimating at the schematic design stage, as the basis of project evaluation, engineering design and cost management, plays an important role in project decisions under a limited definition of scope, constraints on available information and time, and the presence of uncertainties. The purpose of this study is to compare the performance of cost estimation models built with two different hybrid artificial intelligence approaches: regression analysis-adaptive neuro-fuzzy inference system (RANFIS) and case-based reasoning-genetic algorithm (CBR-GA). The models were developed from the same 50 low-cost apartment project datasets in Indonesia. Tested on another five datasets, the models proved to perform very well in terms of accuracy. The CBR-GA model was the best performer, but it suffered from the disadvantage of needing 15 cost drivers, compared with only 4 cost drivers required by RANFIS for on-par performance.
PERFORMANCE EVALUATION OF FUZZY LOGIC AND BACK PROPAGATION NEURAL NETWORK FOR...ijesajournal
ABSTRACT
Fuzzy c-means is one of the efficient tools used in character recognition. The back-propagation (BP) neural network is another powerful tool that may be used in this field. A comparison between fuzzy c-means and BP neural network classifiers is presented in this research to obtain the performance of both classifiers. The comparison was based on recognition efficiency, evaluated as the ratio of the number of characters matched with the unknown one to the number of characters in the set related to that character. The fuzzy c-means and BP neural network algorithms were tested on a set of handwritten and machine-printed characters from the Chars74K dataset using the Matlab (2016b) programming language. The neural network classifier gave 82% recognition efficiency, while fuzzy c-means gave 78%. The neural network classifier is superior to fuzzy c-means in recognition because of the processing-time limitations of fuzzy c-means, which requires a smaller image size and eventually causes lower efficiency.
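A minimal fuzzy c-means sketch (the standard algorithm, run on synthetic 2-D points rather than character features), with the usual fuzziness exponent m = 2:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=2) + 1e-12
        # Standard membership update: u_ik proportional to d_ik^(-2/(m-1)).
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return U, centers

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
U, centers = fuzzy_c_means(X)
print(np.round(np.sort(centers[:, 0]), 1))   # near the true centers 0 and 3
```

Unlike hard k-means, every point keeps a graded membership in every cluster; defuzzifying with argmax over U recovers a crisp labeling.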
CONSISTENT AND LUMPED MASS MATRICES IN DYNAMICS AND THEIR IMPACT ON FINITE EL...IAEME Publication
There are two strategies in the finite element analysis of dynamic problems related to natural frequency determination: the consistent (coupled) mass matrix and the lumped mass matrix. Correct determination of natural frequencies is extremely important and forms the basis of any further NVH (noise, vibration and harshness) calculations and impact or crash analysis. The finite element community has held, since around 1970, that the consistent mass matrix should not be used because it leads to a higher computational cost. We are of the opinion that, in today's age of fast computers, the consistent mass matrix can be used on relatively coarse meshes, with the advantage of better accuracy, rather than resorting to finer meshes with a lumped mass matrix.
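The trade-off can be seen on a tiny worked example: the first natural frequency of a fixed-free axial bar (L = E = A = rho = 1, exact omega_1 = pi/2) discretized with two linear elements. The consistent mass matrix overestimates the frequency and the lumped one underestimates it:

```python
import numpy as np

h = 0.5                                   # element length (two elements)
# Reduced (free-DOF) matrices after fixing the left end:
K  = (1 / h) * np.array([[2.0, -1.0], [-1.0, 1.0]])   # stiffness
Mc = (h / 6) * np.array([[4.0, 1.0], [1.0, 2.0]])     # consistent mass
Ml = (h / 2) * np.array([[2.0, 0.0], [0.0, 1.0]])     # lumped mass

def first_frequency(M):
    # Generalized eigenproblem K phi = omega^2 M phi.
    lam = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sqrt(np.sort(lam.real)[0])

print(first_frequency(Mc))   # ~1.611, above the exact pi/2 = 1.5708
print(first_frequency(Ml))   # ~1.531, below the exact pi/2
```

The exact frequency is bracketed by the two discretizations, which is why averaging or refining is often discussed; the consistent matrix's extra accuracy here costs only a coupled (non-diagonal) mass matrix.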
EFFICIENT USE OF HYBRID ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM COMBINED WITH N...csandit
This research study proposes a novel method for automatic fault prediction from foundry data
introducing the so-called Meta Prediction Function (MPF). Kernel Principal Component
Analysis (KPCA) is used for dimension reduction. Different algorithms are used for building the
MPF such as Multiple Linear Regression (MLR), Adaptive Neuro Fuzzy Inference System
(ANFIS), Support Vector Machine (SVM) and Neural Network (NN). We used classical
machine learning methods such as ANFIS, SVM and NN for comparison with our proposed
MPF. Our empirical results show that the MPF consistently outperforms the classical methods.
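The abstract does not spell out how the Meta Prediction Function combines its base algorithms; one minimal interpretation, sketched below purely for illustration, is a least-squares (stacking-style) combination of base-model predictions:

```python
import numpy as np

def fit_mpf(base_preds, y):
    """Fit meta-weights over stacked base-model predictions via least squares."""
    Z = np.column_stack(list(base_preds) + [np.ones(len(y))])   # add bias column
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return w

def apply_mpf(base_preds, w):
    """Combine base predictions with the fitted meta-weights."""
    Z = np.column_stack(list(base_preds) + [np.ones(len(base_preds[0]))])
    return Z @ w
```

Here `base_preds` would hold the outputs of the MLR, ANFIS, SVM and NN base models on held-out data; the weighting scheme itself is our assumption.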
Reflectivity Parameter Extraction from RADAR Images Using Back Propagation Al...IJECEIAES
Pattern recognition has been acknowledged as one of the promising research areas and has drawn the attention of many researchers since its emergence at the beginning of the nineties. Multilayer neural networks are used in pattern recognition and classification based on features derived from the input patterns. The reflectivity information extracted from a Doppler Weather Radar (DWR) image helps identify the convective cloud type, which is strongly related to the precipitation rate. The reflectivity information is encoded in the DWR image with colors, and a color bar is provided to distinguish among different reflectivity levels. An artificial neural network predicts the color as a maximum-likelihood estimation problem. This paper identifies the best backpropagation algorithm for color identification in DWR images by comparing several variants, such as Levenberg-Marquardt, conjugate gradient, and resilient backpropagation. Pattern recognition using neural networks yields better results than standard distance measures. It is observed that the Levenberg-Marquardt backpropagation algorithm yields a regression value of approximately 99% and an accuracy of 98%.
Towards better performance: phase congruency based face recognitionTELKOMNIKA JOURNAL
Phase congruency is an edge detector and a measure of significant features in an image; it is robust against contrast and illumination variation. In this paper, two novel techniques are introduced for developing a low-cost human identification system based on face recognition. Firstly, the valuable phase congruency features, the gradient edges and their associated angles, are utilized separately for classifying 130 subjects taken from three face databases, with the motivation of eliminating the feature extraction phase; by doing this, the complexity can be significantly reduced. Secondly, the training process is modified: a new technique, called averaging-vectors, is developed to accelerate training and minimize the matching time. For a broader comparison and more accurate evaluation, three competitive classifiers are considered in this work: Euclidean distance (ED), cosine distance (CD), and Manhattan distance (MD). The system performance is very competitive, with experimental results showing promising recognition rates at a reasonable matching time.
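The three distance classifiers compared above reduce to a nearest-template rule; a minimal NumPy version (our illustration) shows how the same template set is matched under all three metrics:

```python
import numpy as np

def nearest_template(x, templates, metric="euclidean"):
    """Return the index of the best-matching template under the chosen distance."""
    T = np.asarray(templates, float)
    x = np.asarray(x, float)
    if metric == "euclidean":
        d = np.linalg.norm(T - x, axis=1)
    elif metric == "manhattan":
        d = np.abs(T - x).sum(axis=1)
    elif metric == "cosine":                      # 1 - cosine similarity
        d = 1.0 - (T @ x) / (np.linalg.norm(T, axis=1) * np.linalg.norm(x) + 1e-12)
    else:
        raise ValueError(metric)
    return int(np.argmin(d))
```

In the paper's setting each template would be an averaged phase-congruency feature vector per subject.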
Compressive Sensing in Speech from LPC using Gradient Projection for Sparse R...IJERA Editor
This paper presents a compressive sensing technique for speech reconstruction using linear predictive coding (LPC), since speech is sparser in the LPC domain. The DCT of the speech is taken and DCT points of the sparse speech are discarded at random; this is achieved by setting some points in the DCT domain to zero by multiplying with mask functions. From the incomplete points in the DCT domain, the original speech is reconstructed using compressive sensing, with Gradient Projection for Sparse Reconstruction (GPSR) as the tool. The performance is compared subjectively with direct IDCT. The experiments show that the compressive sensing reconstruction performs better than the direct DCT approach.
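GPSR itself is a MATLAB solver and is not reproduced here; as a self-contained stand-in for the sparse-recovery step, the sketch below recovers a synthetic sparse coefficient vector from incomplete random measurements using orthogonal matching pursuit (a different, greedy recoverer; the matrix sizes and coefficients are our own illustrative choices):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse recovery of x from y = A x."""
    r, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))    # most correlated atom
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)      # refit on current support
        r = y - As @ coef                                  # residual for next pick
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_true = np.zeros(n)
x_true[[3, 17, 40]] = [5.0, -3.0, 2.0]         # sparse transform-domain coefficients
y = A @ x_true                                 # incomplete measurements
x_hat = omp(A, y, k)
```

With only k = 3 nonzeros and 32 measurements, the greedy recovery is essentially exact for this well-conditioned random matrix.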
SVD BASED LATENT SEMANTIC INDEXING WITH USE OF THE GPU COMPUTATIONSijscmcj
The purpose of this article is to determine the usefulness of Graphics Processing Unit (GPU) calculations for implementing the Latent Semantic Indexing (LSI) reduction of the term-by-document matrix. The considered reduction is based on the SVD (Singular Value Decomposition). The high computational complexity of the SVD, O(n³), makes the reduction of a large indexing structure a difficult task. This article compares the time complexity and accuracy of the algorithms implemented in two different environments: the first is CPU-based, using MATLAB R2011a; the second uses graphics processors and the CULA library. The calculations were carried out on generally available benchmark matrices, which were combined into a resulting matrix of large size. For both environments, computations were performed for double- and single-precision data.
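The rank-k LSI reduction being benchmarked can be sketched on the CPU with NumPy (a minimal illustration of the SVD step, not the MATLAB or CULA code used in the article):

```python
import numpy as np

def lsi_rank_k(A, k):
    """Rank-k LSI approximation of a term-by-document matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]        # best rank-k approximation

rng = np.random.default_rng(0)
A = rng.random((40, 25))                             # toy term-by-document matrix
err = lambda k: np.linalg.norm(A - lsi_rank_k(A, k)) # Frobenius reconstruction error
```

By the Eckart-Young theorem, the error is non-increasing in k and vanishes at full rank, which is what makes the truncation a principled reduction.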
Security in our homes and offices has always been a vital issue. Remote control of a home security system offers huge advantages, such as arming or disarming alarms, video monitoring, and energy management, in addition to safeguarding the home from intruders. Security has evolved from the oldest simple method, the mechanical lock with a key as the authentication element, through universal lock types, to unique lock codes. Recent advances in communication systems have brought communication gadgets into many areas of life. This work is a real-time smart doorbell notification system for home security, as opposed to traditional security methods. It consists of a doorbell interfaced with a GSM module: pressing the doorbell triggers the module to send an SMS to the house owner, who can respond to the guest by pressing a button to open the door; otherwise, a message is displayed to the guest for appropriate action. A keypad is provided so an authorized person can enter a password to unlock the door; if multiple wrong password attempts are made, a burglary-attempt message is sent to the house owner for prompt action. The main benefit of this system is the unique combination of the password and messaging subsystems, which denies access to unauthorized persons while keeping the owner aware.
Augmented reality, the new-age technology, has widespread applications in every field imaginable. It has proven to be an inflection point in numerous verticals, improving lives and performance. In this paper, we explore the possible applications of Augmented Reality (AR) in the field of medicine. The rationale for using AR in medicine, or in any field, is that AR motivates the user, makes sessions interactive, and assists faster learning. In particular, we discuss the applicability of AR to medical diagnosis: augmented reality reinforces remote collaboration, allowing doctors to diagnose patients from a different locality. We believe a much more pronounced effect can be achieved by bringing together the cutting-edge technology of AR and the lifesaving field of medical sciences. AR is a mechanism that can be applied in the learning process too; similarly, virtual reality can be used in fields where more practical experience is needed, such as driving, sports, and neonatal care training.
Image fusion is a subfield of image processing in which two or more images are fused to create an image in which all the objects are in focus. Image fusion is performed for multi-sensor and multi-focus images of the same scene: multi-sensor images are captured by different sensors, whereas multi-focus images are captured by the same sensor. In multi-focus images, objects closer to the camera are in focus while farther objects are blurred; conversely, when the farther objects are focused, the closer objects become blurred. To obtain an image in which all objects are in focus, image fusion is performed either in the spatial domain or in a transform domain. The applications of image processing have grown immensely in recent times, yet due to the limited depth of field of optical lenses, especially at longer focal lengths, it is usually impossible to capture an image with all objects in focus. Fusion therefore plays an important role in subsequent image processing tasks such as segmentation, edge detection, stereo matching, and enhancement. Hence, a novel feature-level multi-focus image fusion technique is proposed. The results of extensive experiments highlighting the efficiency and utility of the proposed technique are presented, and the work further compares fuzzy-based image fusion with a neuro-fuzzy fusion technique using quality evaluation indices.
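A minimal spatial-domain baseline for multi-focus fusion (our illustration, much simpler than the feature-level neuro-fuzzy method described above) copies each block from whichever source image has the higher local variance, a common focus measure:

```python
import numpy as np

def fuse_blocks(img_a, img_b, block=8):
    """Block-wise multi-focus fusion: take each block from the sharper source,
    using local variance as a simple focus measure."""
    out = np.empty_like(img_a)
    h, w = img_a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = img_a[i:i + block, j:j + block]
            b = img_b[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = a if a.var() >= b.var() else b
    return out
```

Sharper regions have more high-frequency content and thus higher local variance, so the fused result keeps the in-focus half of each input.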
Graphs have become the dominant representation for many tasks, as they provide a structure for modeling entities and their relations. A powerful role of networks/graphs is to connect local features residing in vertices as they develop into patterns that help explain how nodal relations and edges produce complex effects that ripple through a graph. User clusters form as a result of interactions between entities, yet today many users can hardly categorize their contacts into groups such as "family", "friends", or "colleagues". Analyzing a user's social graph for implicit clusters therefore enables dynamic contact management. This study implements such dynamism via a comparative study of a deep neural network and a friend-suggestion algorithm. We analyze a user's implicit social graph and seek to automatically create custom contact groups using metrics that classify contacts by the user's affinity to them. Experimental results demonstrate the importance of both the implicit group relationships and the interaction-based affinity in suggesting friends.
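The interaction-based affinity mentioned above can be sketched as a recency-weighted interaction count (our simplification of the idea; the half-life parameter is an assumption):

```python
from collections import defaultdict

def affinity_scores(interactions, now, half_life=30.0):
    """Recency-weighted affinity: each (contact, timestamp) interaction
    contributes with exponential decay, so recent frequent contacts score higher."""
    scores = defaultdict(float)
    for contact, t in interactions:
        scores[contact] += 0.5 ** ((now - t) / half_life)
    return dict(scores)

def suggest_contacts(interactions, now, top_n=3):
    """Return the top-n contacts ranked by affinity score."""
    s = affinity_scores(interactions, now)
    return sorted(s, key=s.get, reverse=True)[:top_n]
```

A real system would weight different interaction types (calls, messages, shared groups) differently; the decay-sum structure stays the same.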
In this paper the Gryllidae Optimization Algorithm (GOA) is applied to solve the optimal reactive power problem. The proposed GOA approach is based on the chirping characteristics of Gryllidae (crickets). Commonly male Gryllidae chirp, though some females do as well; males attract the females with the sound they produce, and Gryllidae also warn one another of danger with it. The hearing organs of Gryllidae are housed in an expansion of their forelegs, through which they orient toward the produced sounds. The proposed GOA has been tested on the standard IEEE 14- and 30-bus test systems, and simulation results show that the projected algorithm reduces the real power loss considerably.
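The abstract does not give GOA's update equations; as a generic sketch of this family of attraction-based metaheuristics (our own illustrative construction, not the paper's algorithm), agents can be pulled toward the current best "chirper" with a small random perturbation:

```python
import numpy as np

def chirp_optimize(f, dim=2, n_agents=10, n_iter=50, seed=0):
    """Generic attraction-based metaheuristic sketch: agents move toward the
    best ('loudest') agent, plus random exploration noise."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (n_agents, dim))
    fit = np.array([f(p) for p in pop])
    best = pop[fit.argmin()].copy()
    best_f = float(fit.min())
    for _ in range(n_iter):
        pop = pop + 0.5 * (best - pop) + 0.1 * rng.standard_normal(pop.shape)
        fit = np.array([f(p) for p in pop])
        if fit.min() < best_f:                     # keep the best solution found
            best_f = float(fit.min())
            best = pop[fit.argmin()].copy()
    return best, best_f
```

In the reactive power application, `f` would be the real power loss as a function of the control variables, subject to the network constraints.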
In the wake of the sudden replacement of wood and kerosene by gas cookers for several purposes in Nigeria, gas leakage has caused several damages in homes, laboratories and elsewhere. The installation of gas leakage detection devices was globally inspired to eliminate accidents related to gas leakage. We present an alternative approach to developing a device that can automatically detect and control gas leakages and also monitor temperature. The system detects leakage of LPG (Liquefied Petroleum Gas) using a gas sensor, then triggers the control system response, which employs a ventilator system, a mobile phone alert, and an alarm when the LPG concentration in the air exceeds a certain level. The performance of two gas sensors (MQ5 and MQ6) was tested for a guided decision. Also, when the temperature of the environment poses a danger, an LED indicator, a buzzer and a 16x2 LCD display indicate the temperature and gas leakage status in degrees Celsius and PPM respectively. Attention was given to the response time of the control system, and it was ascertained that this system significantly increases the chances and efficiency of eliminating gas-leakage-related accidents.
Feature selection is one of the most important problems in the text and data mining domain. This paper presents a comparative study of feature selection methods for Arabic text classification. Five feature selection methods were selected: improved CHI square (ICHI), CHI square, Information Gain, Mutual Information, and Wrapper. They were tested with five classification algorithms: Bayes Net, Naive Bayes, Random Forest, Decision Tree, and Artificial Neural Networks. An Arabic data collection consisting of 9,055 documents was used, and the methods were compared by four criteria: precision, recall, F-measure, and time to build the model. The results showed that the improved ICHI feature selection achieved almost all the best results in comparison with the other methods.
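The CHI-square criterion used above scores how strongly a term's presence correlates with a class; a minimal binary-feature version (an illustration, not the paper's ICHI variant) can be written directly from the 2x2 contingency counts:

```python
import numpy as np

def chi_square(X, y):
    """Per-term chi-square statistic for binary term-presence features X
    (n_docs x n_terms) against a binary class label y."""
    X = np.asarray(X, float)
    y = np.asarray(y)
    N = float(len(y))
    A = X[y == 1].sum(axis=0)          # term present, class positive
    B = X[y == 0].sum(axis=0)          # term present, class negative
    C = (y == 1).sum() - A             # term absent, class positive
    D = (y == 0).sum() - B             # term absent, class negative
    num = N * (A * D - C * B) ** 2
    den = (A + C) * (B + D) * (A + B) * (C + D)
    return num / np.maximum(den, 1e-12)   # guard against zero denominators
```

Terms whose presence is independent of the class score zero and are pruned first.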
In this paper the Gentoo Penguin Algorithm (GPA) is proposed to solve the optimal reactive power problem. In GPA, the preliminary population of Gentoo penguins possesses heat radiation, and penguins attract each other through an absorption coefficient. Gentoo penguins move toward other penguins that possess lower cost (higher heat concentration) of absorption; the cost is defined by heat concentration and distance. A penguin's attraction value is calculated from the amount of heat exchanged between two penguins, and heat radiation is modeled as linear: less heat is received over a longer distance, and more heat over a shorter one. The Gentoo Penguin Algorithm has been tested on the standard IEEE 57-bus test system, and simulation results show that the projected algorithm reduces the real power loss considerably.
08 20272 academic insight on applicationIAESIJEECS
This research has thrown up many questions in need of further investigation. An expressive quantitative-qualitative design was used, based on a common investigation form. A dialogue item was also applied to discover whether the participants asserted that the media-based approach supplements their learning in academic English writing classes. Data describing academics' insights on using Skype as a supporting tool for delivering lessons, based on the chosen variables (occupation, year of education, and experience with Skype), revealed no statistically significant differences in the use of Skype units attributable to the medical academics' major knowledge. There are, however, statistically significant differences in using Skype units due to the experience-with-Skype variable, in favor of academics with no prior Skype practice. Skype as an instructional medium is a positive means of delivering academic medical writing content and assisting education. Academics who do not have enough time to participate in classes feel comfortable using the Skype-based approach to scientific writing, and those who took part in the course attributed their approval of this medium to learning innovative academic medical writing.
Cloud computing has a sweeping impact on human productivity. Today it is used for computing, storage, prediction and intelligent decision making, among other things. Intelligent decision making using machine learning has pushed cloud services to be even faster, more robust and more accurate. Security remains one of the major concerns affecting cloud computing growth, and there are further research challenges in cloud adoption such as poorly managed service level agreements (SLAs), frequent disconnections, resource scarcity, interoperability, privacy, and reliability. Tremendous work is still needed to explore the security challenges arising from the widespread use of container-based cloud deployments. We also discuss the impact of cloud computing and cloud standards. Hence, in this research paper, a detailed survey of cloud computing concepts, architectural principles, key services, and implementation, design and deployment challenges is presented, and important future research directions in the era of machine learning and data science are identified.
A notary is an official authorized to make authentic deeds regarding all deeds, agreements and stipulations required by general regulation. Activities at a notary office, such as recording client data and file data, still use traditional, largely manual systems. The resulting problem is inefficiency in data processing and in providing information to clients: clients have difficulty obtaining information on the progress of documents being handled at the notary's office and must repeatedly travel there to check on document processing. The purpose of this study is to make it easier for clients to obtain progress information and for employees to process incoming documents by implementing an administrative system. The system was developed with the waterfall method and uses Multi-Channel Access technology integrated into a website to simplify delivering and requesting information to and from clients via Telegram and an SMS gateway. Clients come to the office only when the system notifies them via Telegram or SMS that they must appear in person, saving time and avoiding excessive transportation costs. Based on alpha testing, all system functions work properly, and beta testing via a feasibility questionnaire distributed to end users showed that 96% of users agree the system is feasible to implement.
In this work the Tundra Wolf Algorithm (TWA) is proposed to solve the optimal reactive power problem. In the projected TWA, to prevent the searching agents from being trapped in local optima, the convergence toward the global optimum is divided according to two different conditions. In the proposed TWA, the omega tundra wolf is taken as a searching agent instead of being obliged to pursue the first three best candidates. Increasing the number of searching agents improves the exploration capability of the tundra wolves over a wide range. The proposed TWA has been tested on the standard IEEE 14- and 30-bus test systems, and simulation results show that the proposed algorithm reduces the real power loss effectively.
In this work the Predestination of Particles Wavering Search (PPS) algorithm is applied to solve the optimal reactive power problem. The PPS algorithm is modeled on the motion of particles in the exploration space. Normally the movement of a particle is based on gradient and swarming motion. Particles are permitted to progress at steady velocity in the gradient-based move, but when the outcome is poorer than the previous one, the particle velocity is immediately reversed at half its magnitude; this helps in reaching the local optimum and is expressed as a wavering movement. The proposed PPS algorithm is evaluated on the standard IEEE 14-, 30-, 57-, 118- and 300-bus systems, and simulation results show that PPS reduces the power loss efficiently.
In this paper, the Mine Blast Algorithm (MBA) is hybridized with the Harmony Search (HS) algorithm to solve the optimal reactive power dispatch problem. MBA is based on the explosion of landmines and HS on the improvisation process of musicians; the two are hybridized to solve the problem. In MBA, the initial distance of the shrapnel pieces is reduced gradually to let the mine bombs search the probable global minimum location, amplifying the global exploration capability. HS imitates the music improvisation process, in which musicians adjust their instruments' pitch while searching for a best state of harmony. The hybridization of MBA with HS (MH) improves the search in the solution space: the mine blast component improves exploration while the harmony search component augments exploitation. The proposed algorithm thus starts with exploration and gradually moves to the exploitation phase. The proposed hybridized MH algorithm has been tested on the standard IEEE 14- and 300-bus test systems, where it reduces the real power loss considerably. MH was then tested on the IEEE 30-bus system (considering the voltage stability index), attaining real power loss minimization, voltage deviation minimization, and voltage stability index enhancement.
Artificial neural networks have proved their efficiency in a large number of research domains. In this paper, we apply artificial neural networks to Arabic text for language modeling, text generation, and missing-text prediction. On the one hand, we adapt recurrent neural network architectures to model the Arabic language and generate correct Arabic sequences. On the other hand, convolutional neural networks are parameterized, based on some specific features of Arabic, to predict missing text in Arabic documents. We demonstrate the power of our adapted models in generating and predicting correct Arabic text compared to the standard models. The models were trained and tested on well-known free Arabic datasets, and the results were promising, with sufficient accuracy.
In present-day communications, speech signals get contaminated by various sorts of noise that degrade speech quality and adversely impact speech recognition performance. To overcome these issues, a novel approach to speech enhancement using modified Wiener filtering is developed: power spectrum computation is applied to the degraded signal to obtain the noise characteristics from the noisy spectrum. In the next phase, an MMSE technique is applied in which the Gaussian distribution of each signal, original and noisy, is analyzed. The Gaussian distribution provides spectrum estimation and spectral coefficient parameters that can be used for probabilistic model formulation. Moreover, an a-priori SNR computation is incorporated for coefficient updating and noise-presence estimation, operating similarly to conventional VAD. However, the conventional VAD scheme is based on a hard threshold that cannot deliver satisfactory performance, so a soft-decision threshold is developed to improve speech enhancement performance. An extensive simulation study is carried out in MATLAB on the NOIZEUS speech database, and a comparative study shows that the proposed approach outperforms the existing technique.
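The a-priori-SNR-driven Wiener gain at the core of such enhancers can be sketched per frequency bin as follows (a textbook decision-directed formulation, offered as an illustration rather than the paper's exact modified filter):

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, prev_clean_psd=None, alpha=0.98):
    """Per-bin Wiener gain from a decision-directed a-priori SNR estimate."""
    snr_post = np.maximum(noisy_psd / noise_psd - 1.0, 0.0)   # a-posteriori part
    if prev_clean_psd is None:
        snr_prio = snr_post                                   # first frame
    else:                                                     # decision-directed smoothing
        snr_prio = alpha * prev_clean_psd / noise_psd + (1.0 - alpha) * snr_post
    return snr_prio / (1.0 + snr_prio)                        # Wiener gain in [0, 1)
```

Bins dominated by noise get a gain near 0 and are suppressed; bins with strong speech get a gain near 1 and pass through.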
Previous research has highlighted that the neuro-signals of Alzheimer's disease patients are less complex and show lower synchronization than those of healthy, normal subjects. The changes in the EEG signals of Alzheimer's subjects start at an early stage but are not clinically observed and detected. To detect these abnormalities, three synchrony measures and wavelet-based features have been computed and studied on an experimental database. After computing these measures, it is observed that phase synchrony and coherence-based features can distinguish between Alzheimer's disease patients and healthy subjects. A Support Vector Machine classifier is used for classification, giving 94% accuracy on the experimental database used. Combining these synchrony features with other relevant features can yield a reliable system for diagnosing Alzheimer's disease.
Attenuation correction for PET/MR hybrid imaging systems, and dose planning for MR-based radiation therapy, remain challenging because of missing high-energy photon attenuation information. We present a new method that uses learned nonlinear local descriptors and feature matching to predict pseudo-CT images from T1w and T2w MRI data. The nonlinear local descriptors are obtained by projecting the linear descriptors into a nonlinear high-dimensional space using an explicit feature map and a low-rank approximation with supervised manifold regularization. The nearest neighbors of each local descriptor in the input MR images are searched within a constrained spatial range of the MR images in the training dataset. The pseudo-CT patches are then estimated through k-nearest-neighbor regression. The proposed method for pseudo-CT prediction is quantitatively analyzed on a dataset consisting of paired brain MRI and CT images from 13 subjects.
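The k-nearest-neighbor regression step at the core of the patch estimation can be sketched as follows (an illustration on toy descriptors; the real method operates on learned nonlinear descriptors and CT patches):

```python
import numpy as np

def knn_regress(train_X, train_y, query, k=3):
    """Average the targets of the k training descriptors closest to the query."""
    d = np.linalg.norm(train_X - query, axis=1)   # Euclidean descriptor distances
    idx = np.argsort(d)[:k]                        # k nearest neighbors
    return train_y[idx].mean(axis=0)               # averaged target (e.g. CT patch)
```

In the paper's setting, `train_X` would hold MR descriptors from the training set, `train_y` the corresponding CT patch intensities, and the search would additionally be restricted to a spatial neighborhood.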
The cognitive radio paradigm aims to alleviate the scarcity of spectral resources for wireless communication through intelligent sensing and quick resource allocation techniques. Secondary users (SUs) actively obtain spectrum access opportunities by supporting primary users (PUs) in cognitive radio networks (CRNs). Currently, spectrum access is enabled through the cooperative, link-level frame-based cooperation (LLC) principle, in which SUs independently act as relays for PUs to gain spectrum access opportunities. Unfortunately, the LLC approach cannot fully exploit spectrum access opportunities to enhance the throughput of CRNs, and it fails to motivate PUs to join the spectrum-sharing process. To overcome this drawback, the network-level cooperation (NLC) principle was introduced, where SUs collaborate with PUs session by session, instead of frame by frame, for spectrum access opportunities; NLC addresses the challenges faced by the LLC approach. In this paper we survey several models proposed to tackle the problems of LLC and show the relevant aspects of each model, in order to characterize the parameters that should be taken into account to achieve a spectrum access opportunity.
In this paper, the author provides insights and lessons that can be learned from colleagues at American universities about their online education experiences. The literature and previous studies of online education's gains are explored and summarized. Emerging trends in online education are discussed in detail, and strategies to implement these trends are explained. The author provides several tools and strategies that enable universities to ensure the quality of online education. At the end of the paper, the researcher gives examples from Arab universities that have successfully implemented online education and expanded their impact on society. This research provides a strategy and a model that universities in the Middle East can use as a roadmap for implementing online education in their regions.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The system includes various function programs to carry out the tasks mentioned above.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system deals with the automation of the general workflow and administration processes of the shop. The main processes of the system focus on customer requests: the system can search for the most appropriate products and deliver them to customers. It helps employees quickly identify the cosmetic products that have reached their minimum quantity, keeps track of the expiry date of each product, and helps employees find the rack number in which a product is placed. It is also a faster and more efficient way of working.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy configuration using DIP switches.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks
(CNN)s, to adversarial attacks and presents a proactive training technique designed to counter them. We
introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations.
When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves
the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach
using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10
and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing
accuracy improvements over previous techniques. The results indicate that the combination of the volumetric
input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating
adversary training.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news, and to celebrate the 13 years since the group was created, we have articles including:
A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how far the industry has come since then in adopting Digital Transformation in the Water Industry.
ISSN: 2502-4752
IJEECS Vol. 7, No. 3, September 2017: 887 – 892
reduction using principal component analysis and later applied the multiple linear regression
method for prediction of surface roughness [4]. S. C. Lin and M. F. Chang confirmed that
surface roughness strongly correlates with cutting tool vibration [5]. Shinn-Ying Ho et al. used
images of the work piece to predict surface roughness from the gray scale [6]. Lin et
al. used abductive network and regression analysis methods for prediction of cutting force
and surface roughness during machining [7]. Machine learning concepts have been introduced
in different areas such as the medical field (disease prediction), power distribution and
condition monitoring [8-9]. For instance, A. Etemad-Shahidi and J. Mahjoobi used a network tree
and the M5 model tree for prediction of wave height in a lake and stated that the M5 model tree
results were more accurate than artificial neural network results [10]. C. J. Poor and J. L. Ullman
inferred that a regression tree (RT) has higher predictability than multiple linear regression
(MLR) [11]. A.M. Handhal showed that the adaptive neuro-fuzzy inference system (ANFIS) and the
M5 model tree give similar results, but ANFIS has a more complex structure than the M5 model tree [12].
Among the various regression methods, this study set out to find the one with the
least root mean squared error (RMSE) value and the lowest computational time. The study
proposes prediction of surface roughness during a boring operation using multiple linear
regression (MLR), regression tree and M5P tree: statistical features are first extracted from the
vibration signal, and the results are then compared across four cases as follows:
Case I: the arithmetic mean of the vibration signal
Case II: the first four statistical moments of the vibration signal
Case III: all statistical features of the vibration signal
Case IV: feature reduction by principal component analysis (PCA)
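For illustration, the statistical features used in these cases can be extracted from a raw vibration signal as follows; the helper name and the sample values are ours, and `mode` (Case III) is omitted for brevity:

```python
import math
import statistics

def extract_features(signal):
    """Time-domain statistical features of a vibration signal
    (mean, SD, variance, RMS, skewness, kurtosis, median)."""
    n = len(signal)
    mean = statistics.fmean(signal)
    sd = statistics.pstdev(signal)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    # third and fourth standardized moments
    skew = sum((x - mean) ** 3 for x in signal) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in signal) / (n * sd ** 4)
    return {"mean": mean, "sd": sd, "variance": sd ** 2, "rms": rms,
            "skewness": skew, "kurtosis": kurt,
            "median": statistics.median(signal)}

features = extract_features([0.1, -0.2, 0.05, 0.3, -0.15, 0.0])
```

In Case II the feature vector would keep only mean, standard deviation, kurtosis and skewness; Case III would use all of the above.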
2. Methodology
To develop a prediction model, surface roughness (Ra) values were recorded for
every experiment conducted by varying the cutting parameters and the flank wear of the
cutting tool during a boring operation. Simultaneously, the vibration signals were acquired using
a DAQ unit. For every variation of the parameters, the surface roughness (Ra) value was measured
using a stylus-probe type (HandySurf) surface measurement instrument. A cutoff length of 2.5
mm and a measuring length of 12.5 mm were used, and the average Ra value was recorded.
Prediction models using the M5P tree, multiple linear regression and regression tree were built
using the KNIME Analytics Platform and evaluated with 10-fold cross-validation.
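The 10-fold cross-validation used here can be sketched as follows; `fit`/`predict` stand in for any of the three regressors, and the constant-mean toy model and synthetic Ra values are ours, not the paper's:

```python
import math
import random

def kfold_scores(xs, ys, fit, predict, k=10, seed=0):
    """k-fold cross-validation returning mean RMSE and mean MAE over folds,
    mirroring the 10-fold scheme described above (sketch only)."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k disjoint test folds
    rmses, maes = [], []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        errs = [predict(model, xs[i]) - ys[i] for i in fold]
        rmses.append(math.sqrt(sum(e * e for e in errs) / len(errs)))
        maes.append(sum(abs(e) for e in errs) / len(errs))
    return sum(rmses) / k, sum(maes) / k

# toy stand-in regressor: predict the mean Ra of the training fold
fit = lambda X, Y: sum(Y) / len(Y)
predict = lambda model, x: model
ra_values = [1.0 + 0.01 * i for i in range(100)]
cv_rmse, cv_mae = kfold_scores(ra_values, ra_values, fit, predict)
```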
3. Multiple Linear Regression
A linear regression analysis with two or more independent variables and one
dependent variable is called multiple linear regression (MLR). Multiple linear regression is one
of the simplest yet most effective methods for the prediction of numerical values and has
been a widely used technique in statistical applications. An MLR model develops a straight-line
(linear) relationship that best approximates all of the individual data points. The basic
regression model is of the form Y = x0 + g1x1 + g2x2 + … + gnxn, where Y is the predictand or
dependent variable and x1, x2, …, xn are the independent variables or predictors related
to Y. x0 is the value of Y when all the predictor variables are equal to zero, and g1, g2, …, gn are
the regression coefficients of the predictor variables, calculated from the training data set.
The mean square error and root mean squared error are estimated using statistical software:

RMSE = √( Σ (Yi − Ŷi)² / (N − P) )

where Yi is the measured surface roughness (Ra) value, Ŷi is the predicted value, P is the number of regression coefficients and N − P is the degrees of freedom.
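A minimal numerical illustration of such a fit and of the RMSE with N − P degrees of freedom; the one-predictor model and the sample data below are invented for the sketch:

```python
import math

# One-predictor linear fit Y = x0 + g1*x1, so P = 2 coefficients.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    g1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    x0 = my - g1 * mx          # intercept: Y when the predictor is zero
    return x0, g1

def rmse(ys, preds, p):
    # RMSE = sqrt( sum (Yi - Yhat_i)^2 / (N - P) ), N - P degrees of freedom
    n = len(ys)
    return math.sqrt(sum((y, yh)[0] * 0 + (y - yh) ** 2
                         for y, yh in zip(ys, preds)) / (n - p))

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]          # roughly Y = 2x
x0, g1 = fit_line(xs, ys)
err = rmse(ys, [x0 + g1 * x for x in xs], p=2)
```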
4. Regression Tree
A regression tree combines a decision tree with linear regression and is
free from constraints such as linearity. A regression tree (RT) gives a piecewise linear relationship
between the dependent variable (predictand) and the independent variables (predictors), and this
tree can evaluate complex data sets with many interacting predictors. A regression tree has two
types of nodes: rounded nodes, which represent splitting nodes, and square boxes, which are the leaf
Comparison of Surface Roughness Prediction with Regression and Tree… (S. Surendar)
nodes of the tree and represent predictions of the target variable (predictand) obtained by local
multiple linear regression (MLR). A prediction is obtained by dropping a test case down the tree,
following the branches selected by the values of its input variables until a square (leaf) node is
reached, which holds the predicted numerical value.
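The split/leaf structure can be sketched with a depth-1 tree on a single predictor. The full method recurses and fits local MLR models at the leaves; here the leaves simply predict the mean, and the flank-wear data are synthetic:

```python
import statistics

def best_split(xs, ys):
    """Depth-1 regression tree: pick the threshold on one predictor that
    minimizes the total squared error, with each leaf predicting the mean
    of the training targets on its side."""
    best = None
    for t in sorted(set(xs))[1:]:                      # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        ml, mr = statistics.fmean(left), statistics.fmean(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x < t else mr               # the fitted tree

# synthetic flank wear vs Ra, with a jump at wear >= 0.2
wear = [0.0, 0.0, 0.1, 0.2, 0.3, 0.4]
ra = [1.0, 1.1, 1.0, 2.0, 2.1, 2.2]
tree = best_split(wear, ra)
```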
5. M5P Tree
An M5P tree is similar to a regression tree and has numerical values at its leaf nodes. The
M5P tree is a numeric prediction algorithm in machine learning in which a linear regression model
stored at each leaf node predicts the numerical value. To determine which attribute best splits the
portion of the training data that reaches a node, the standard deviation (SD) of the target values
at the node is used as the error measure, and the expected reduction in this error is calculated
for each candidate attribute; the attribute that most reduces the error is chosen for the split.
The standard deviation reduction is R = SD(Tr) − Σ (|Tri|/|Tr|) × SD(Tri), where R is the standard
deviation reduction, Tr is the training data reaching the node and Tri (i = 1, 2, 3, …, n) are the
subsets produced by splitting the node on the candidate attribute. The linear regression models
at the leaf nodes of the M5P model tree then predict the continuous numerical attribute.
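The standard deviation reduction criterion can be computed directly; the target values below are synthetic, chosen so that a split separating the two clusters scores far higher than an interleaved split:

```python
import statistics

def sdr(parent, subsets):
    """Standard deviation reduction R = SD(Tr) - sum(|Tri|/|Tr| * SD(Tri)),
    the split-selection criterion described above."""
    n = len(parent)
    return statistics.pstdev(parent) - sum(
        len(s) / n * statistics.pstdev(s) for s in subsets)

targets = [1.0, 1.1, 1.2, 3.0, 3.1, 3.2]
good = sdr(targets, [targets[:3], targets[3:]])   # separates the clusters
bad = sdr(targets, [targets[::2], targets[1::2]])  # interleaved split
```

M5P would evaluate R for every candidate attribute and keep the split with the largest reduction.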
6. Principal Component Analysis (PCA)
PCA is a feature reduction algorithm that reduces the number of features in the training
data set (for example, from 8 to 2) to provide higher efficiency. M. Elangovan et al. explained
that the root mean square error value was lowered by dimensionality reduction when PCA was
used [4].
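A pure-Python sketch of PCA via power iteration with deflation (the paper's feature reduction was done in KNIME; this implementation and its sample data, which place most variance along the direction (1, 2, 0), are only illustrative):

```python
import math
import random

def pca_components(data, k=2, iters=500, seed=0):
    """Top-k principal directions of the column-centred feature matrix,
    found by power iteration on the covariance matrix with deflation."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    # covariance matrix C = X^T X / n
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / n for b in range(d)]
         for a in range(d)]
    rng = random.Random(seed)
    comps = []
    for _ in range(k):
        v = [rng.random() + 0.1 for _ in range(d)]
        for _ in range(iters):                      # power iteration
            w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
            norm = math.sqrt(sum(x * x for x in w)) or 1.0
            v = [x / norm for x in w]
        lam = sum(v[a] * C[a][b] * v[b]             # Rayleigh quotient
                  for a in range(d) for b in range(d))
        comps.append(v)
        # deflation: remove the found component's variance from C
        C = [[C[a][b] - lam * v[a] * v[b] for b in range(d)] for a in range(d)]
    return comps

# three correlated features; most variance lies along (1, 2, ~0)
data = [[t, 2 * t, 0.1 * ((-1) ** t)] for t in range(1, 9)]
comps = pca_components(data, k=2)
```

Projecting each feature row onto the first two components reproduces the 8-to-2 reduction described in the text.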
7. Experimental Setup
The experimental setup consists of a lathe machine (Kirloskar Turnmaster-40), a
National Instruments (NI) data acquisition device (DAQ 9234), a piezoelectric accelerometer
(PCB Piezotronics) and a computer to store the digital signals; the setup is shown in Figure 1. The
experiments were conducted by varying the cutting parameters of the boring operation and the flank
wear of the cutting insert. The input cutting parameters are spindle speed (280 rpm, 400 rpm
and 630 rpm), feed rate (0.05 mm/rev, 0.071 mm/rev and 0.09 mm/rev) and depth of cut
(0.25 mm, 0.375 mm, 0.50 mm), along with flank wear (good tool, 0.2 mm, 0.4 mm). As per ISO
standards, when the flank wear of the cutting tool is greater than 0.6 mm, it is considered
failed [3]. A total of 81 experiments were conducted using a full factorial combination of spindle
speed, feed rate, depth of cut and flank wear. After machining, each work-piece was
measured for its surface roughness using a HandySurf E-35A/B instrument.
Figure 1. Experimental setup
7.1. Tool Wear Measurement
To obtain tool tips with different flank wear, an optical measuring instrument was
used to segregate used carbide cutting inserts (CCMT090408) of the same make and grade
according to their flank wear.
7.2. Experimental procedure and data acquisition
The experimental procedure consisted of mounting a new uncoated carbide tool tip
(CCMT090408) on a boring bar clamped on the tool post. The piezoelectric accelerometer was
fixed on the boring bar using the adhesive mounting technique, as shown in Figure 1. The
accelerometer was connected to the NI DAQ 9234. The signals were recorded using LabVIEW
software and the DAQ unit, with a sampling frequency of 51 kHz. A hollow workpiece of EN8
material, with an inner diameter of 25 mm and a wall thickness of 15 mm, was held on a
self-centering three-jaw chuck. The oxidized layer on the inner wall was removed by giving a
small depth of cut; no signal was recorded during this process. The data acquisition unit
(NI DAQ 9234) was then switched on and the signals were recorded after leaving the first few
seconds for the signal to stabilize. Every signal was recorded for 20 seconds using LabVIEW,
and statistical features were extracted from the time-domain signals.
8. Results and Discussion
Multiple linear regression (MLR), regression tree (RT) and model tree (M5P) algorithms
were used to predict the surface roughness (Ra) of the inner wall of the bored hollow shaft
machined with an uncoated carbide cutting insert. The results are discussed in four cases as
shown in Table 1. The mean absolute error, the root mean squared error (RMSE) and the
computational time (CT) of the multiple linear regression, regression tree and model tree results
are compared in Table 2. The measured Ra and predicted Ra (Y-axis) against the sample row
number (X-axis) were plotted for Case I of RT to show the conformance of the predicted values
to the measured values, as shown in Figure 2 and Figure 3. The same plots for Case I of MLR
and the M5P tree (first 50 rows) are shown in Figure 4 and Figure 5.
Figure 2. Regression tree
Figure 3. Regression tree
Figure 4. MLR
Figure 5. M5P tree
Table 1. Dependent and Independent Variables

Case      Dependent variable        Independent variables
Case I    Surface roughness (Ra)    Spindle speed, feed rate, depth of cut, flank wear and mean
Case II   Surface roughness (Ra)    Spindle speed, feed rate, depth of cut, flank wear, mean, standard deviation, kurtosis and skewness
Case III  Surface roughness (Ra)    Spindle speed, feed rate, depth of cut, flank wear, mean, RMS, standard deviation, variance, kurtosis, median, mode and skewness
Case IV   Surface roughness (Ra)    Spindle speed, feed rate, depth of cut, flank wear, RMS and variance
Table 2. Mean Absolute Error (S), Root Mean Squared Error (RMSE) and Computation Time when Using the MLR, RT and M5P Algorithms

Case      Algorithm  Mean Absolute Error (S)  RMSE    Computation Time (s)
Case I    MLR        1.053                    1.39    1.2
          RT         0.02                     0.191   1.03
          M5P        0.401                    0.4088  2.05
Case II   MLR        1.056                    1.391   1.61
          RT         0.059                    0.392   1.36
          M5P        0.5596                   0.5969  3.3
Case III  MLR        1.05                     1.423   2.13
          RT         0.064                    0.404   2.41
          M5P        0.4132                   0.6232  3.95
Case IV   MLR        1.045                    1.381   0.633
          RT         0.007                    0.085   0.56
          M5P        0.386                    0.525   1.8
Table 2 shows that Case IV with RT has the lowest RMSE value (0.085) of all the
cases. Case III of RT has an RMSE value of 0.404 when the entire set of statistical features is
included, but its computational time is higher. When the statistical features are reduced
from 8 to 2 by PCA, the RT predictions become more accurate, as in Case IV. Convincingly, Case IV
has a lower computational time than Cases I, II and III of RT and also has a low 'S' value.
9. Conclusion and Potential for Future Work
Statistical features extracted from the time-domain signals were regressed using the
MLR, RT and M5P algorithms. The regression tree was found to be the best of the three
algorithms studied. The results show that the RMSE value obtained by RT is not only low but
also comes with a low computational time, as shown in Table 2. The machine learning approach
was used to enhance reliability and to reduce the RMSE and computational time. This study
demonstrates the possibility of using machine learning algorithms for the prediction of surface
roughness by a simple and cost-effective method. The study can be extended by using different
types of signals (e.g., image, sound) and by applying other algorithms. There is good potential
for future work in this direction, as different feature reduction methods may be attempted in
combination with different algorithms, choosing the best feature-algorithm pair. This study is
also a step towards making machine tools more intelligent, capable of expressing the surface
roughness quality of the workpiece as it is being machined.
References
[1] M Elangovan, KI Ramachandran, V Sugumaran. Studies on Bayes Classifier for Condition Monitoring
of Single Point Carbide Tipped Tool Based on Statistical and Histogram Features. Expert Syst.
Appl. 2010; 37(3): 2059–2065.
[2] F Kuster, PE Gygax. Cutting Dynamics and Stability of Boring Bars. CIRP Ann.-Manuf. Technol.,
1990; 39(1): 361–366.
[3] K Venkata Rao, BSN Murthy, N Mohan Rao. Prediction of Cutting Tool Wear, Surface Roughness and
Vibration of Work Piece in Boring of AISI 316 Steel with Artificial Neural Network. Meas. J. Int. Meas.
Confed. 2014; 51(1): 63–70.
[4] M Elangovan, NR Sakthivel, S Saravanamurugan, BB Nair, V Sugumaran. Machine Learning
Approach to the Prediction of Surface Roughness Using Statistical Features of Vibration Signal
Acquired in Turning. Procedia Comput. Sci. 2015; 50: 282–288.
[5] SC Lin, MF Chang. A Study on the Effects of Vibrations on the Surface Finish Using a Surface
Topography Simulation Model for Turning. Int. J. Mach. Tools Manuf. 1998; 38: 763–782.
[6] S-Y Ho, K-C Lee, S-S Chen, S-J Ho. Accurate Modeling and Prediction of Surface Roughness by
Computer Vision in Turning Operations Using an Adaptive Neuro-fuzzy Inference System. Int. J.
Mach. Tools Manuf. 2002; 42: 1441–1446.
[7] WS Lin, BY Lee, CL Wu. Modeling the Surface Roughness and Cutting Force for Turning. Journal of
Materials Processing Technology. 2001; 108(April 1999): 286–293.
[8] M Abdar, SRN Kalhori, T Sutikno, I Much, I Subroto, G Arji. Comparing Performance of Data Mining
Algorithms in Prediction Heart Diseases. International Journal of Public Health Science (IJPHS). 2015;
5(6): 1569–1577. [Online]. Available: http://iaesjournal.com/online/index.php/IJPHS/article/view/1380
[9] H Waguih. A Data Mining Approach for the Detection of Denial of Service Attack. IAES International
Journal of Artificial Intelligence (IJAI). 2013; 2(2): 99-106. [Online].
Available: http://iaesjournal.com/online/index.php/IJAI/article/view/1937
[10] A Etemad-Shahidi, J Mahjoobi. Comparison between M5′ Model Tree and Neural Networks for
Prediction of Significant Wave Height in Lake Superior. Ocean Eng. 2009; 36(15–16): 1175–1181.
[11] CJ Poor, JL Ullman. Using Regression Tree Analysis to Improve Predictions of Low-flow Nitrate and
Chloride in Willamette River Basin Watersheds. Environ. Manage. 2010; 46(5): 771–780.
[12] AM Handhal. Prediction of Reservoir Permeability from Porosity Measurements for the Upper
Sandstone Member of Zubair Formation in Super-Giant South Rumila oil field, Southern Iraq, Using
M5P Decision Tress and Adaptive Neuro-fuzzy Inference System (ANFIS): a Comparat. Model. Earth
Syst. Environ. 2016; 2(3): 111.