This paper proposes a new adjusted Hotelling's T² control chart for individual observations. To this end, a bootstrap method is employed to generate the individual observations. Both the arithmetic mean vector and the covariance matrix in the traditional Hotelling's T² chart are replaced by the trimmed mean vector and a covariance matrix based on the robust scale estimator Qn, respectively, and the performance of the resulting chart is assessed by simulation. The false alarm rate and the probability of detecting outliers are used to establish the validity of the modified chart. The findings reveal a considerable improvement in its performance.
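To make the statistic concrete, here is a minimal numpy/scipy sketch of a robust T² of the kind the abstract describes, assuming the trimmed mean replaces the arithmetic mean and a scale matrix is assembled from per-variable Qn estimates (an ordinary correlation matrix stands in for whatever robust association structure the paper actually uses); the paper's exact construction and its bootstrap control limits are not reproduced here.

```python
import numpy as np
from scipy.stats import trim_mean

def qn_scale(x):
    """Rousseeuw-Croux Qn scale estimator: a consistency constant times the
    k-th smallest pairwise absolute difference (O(n^2) reference version,
    small-sample correction factors omitted)."""
    x = np.asarray(x)
    n = len(x)
    diffs = np.abs(x[:, None] - x[None, :])[np.triu_indices(n, k=1)]
    h = n // 2 + 1
    k = h * (h - 1) // 2                      # order statistic to pick
    return 2.2219 * np.partition(diffs, k - 1)[k - 1]

def robust_t2(X, trim=0.1):
    """T^2 for each row of X using a trimmed mean vector and a scale matrix
    built from per-variable Qn scales and the sample correlation (one
    plausible reading of 'covariance matrix of the Qn estimators')."""
    m = trim_mean(X, trim, axis=0)            # robust location vector
    s = np.array([qn_scale(X[:, j]) for j in range(X.shape[1])])
    R = np.corrcoef(X, rowvar=False)          # could be a robust correlation instead
    S = np.outer(s, s) * R                    # robust covariance surrogate
    d = X - m
    Sinv = np.linalg.inv(S)
    return np.einsum('ij,jk,ik->i', d, Sinv, d)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[0] += 6.0                                   # one planted outlier
t2 = robust_t2(X)
print(t2[0], t2[1:].mean())                   # the outlier should stand out
```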
Parameter Optimisation for Automated Feature Point Detection (Dario Panada)
Parameter optimization for an automated feature point detection model was explored. Increasing the number of random displacements up to 20 improved performance but additional increases did not. Larger patch sizes consistently improved performance. Increasing the number of decision trees did not affect performance for this single-stage model, unlike previous findings for a two-stage model. Overall, some parameter tuning was found to enhance the model's accuracy but not all parameters significantly impacted results.
Application of Principal Components Analysis in Quality Control Problem (MaxwellWiesler)
Principal components analysis (PCA) can be used to monitor processes with a large number of variables more efficiently than traditional multivariate control charts. PCA transforms the original variables into a new set of uncorrelated principal components. It reduces the dimensionality of the data while retaining most of the variation. Two papers discussed how PCA can improve process monitoring when there is autocorrelation in the data, and how combining PCA with multivariate exponentially weighted moving average (MEWMA) charts can further enhance shift detection performance. An example showed the PCA-MEWMA approach had a shorter average run length than standard MEWMA alone.
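A minimal sketch of the PCA-MEWMA idea, assuming scikit-learn for the PCA step and the standard MEWMA recursion Z_t = λx_t + (1-λ)Z_{t-1} with asymptotic covariance λ/(2-λ)·Σ; the charting constants and run-length comparison from the papers are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

def mewma_t2(scores, sigma, lam=0.2):
    """MEWMA statistic for each row of `scores`, using the asymptotic
    covariance lam/(2-lam) * sigma of the EWMA vector."""
    sigma_z_inv = np.linalg.inv(lam / (2 - lam) * sigma)
    z, out = np.zeros(scores.shape[1]), []
    for x in scores:
        z = lam * x + (1 - lam) * z           # EWMA of the score vector
        out.append(z @ sigma_z_inv @ z)
    return np.array(out)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
X[150:] += 0.5                                # small sustained mean shift
pca = PCA(n_components=3).fit(X[:100])        # fit PCA on in-control data only
scores = pca.transform(X)
sigma = np.cov(scores[:100], rowvar=False)    # in-control score covariance
t2 = mewma_t2(scores, sigma)
print(t2[:100].mean(), t2[150:].mean())       # statistic rises after the shift
```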
This document discusses recursive least-squares estimation when observation data contains interval uncertainty, also known as imprecision, in addition to random variability. It introduces a recursive formulation of least-squares estimation that efficiently combines the most recent parameter estimate with new observation data. Overestimation is a key challenge for recursive formulations when working with interval data that must be rigorously avoided. The paper also presents an illustrative example of estimating the state of a damped harmonic oscillation using the proposed recursive interval least-squares approach.
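For reference, the classical point-valued recursive least-squares update looks as follows; the interval-valued extension and the overestimation control that are the paper's actual contribution are omitted.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares step: fold observation (x, y) into the
    current estimate theta and covariance-like matrix P (lam = forgetting)."""
    Px = P @ x
    k = Px / (lam + x @ Px)                  # gain vector
    theta = theta + k * (y - x @ theta)      # correct estimate by the residual
    P = (P - np.outer(k, Px)) / lam
    return theta, P

# Estimate [a, b] in y = a*t + b from streaming noisy samples.
rng = np.random.default_rng(2)
theta, P = np.zeros(2), 1e3 * np.eye(2)
for t in np.linspace(0, 1, 200):
    x = np.array([t, 1.0])
    y = 3.0 * t - 1.0 + 0.05 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
print(theta)                                 # should approach [3, -1]
```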
The document discusses quality management in the imaging sciences. It provides information on quality management forms, tools, and strategies. It also lists additional resources on quality management including free eBooks, forms, templates, key performance indicators, job descriptions, and interview questions. The document then discusses the contents of a book on quality management in the imaging sciences, which covers quality management procedures and evaluation forms for various imaging modalities. It also lists features of the book like learning objectives, regulations, practice exams, and online resources. Finally, the document describes several quality management tools: check sheets, control charts, Pareto charts, scatter plots, and Ishikawa diagrams.
Forecasting of electric consumption in a semiconductor plant using time serie... (Alexander Decker)
This document summarizes a study that used time series methods to forecast electricity consumption in a semiconductor plant. The study analyzed 36 months of historical electricity consumption data from 2010-2012 to select the best forecasting model. Single exponential smoothing was found to have the lowest Mean Absolute Percentage Error (MAPE) of 5.60% and was determined to be the best forecasting method. The selected model will be used to forecast future electricity consumption for the plant.
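A minimal sketch of single exponential smoothing and the MAPE criterion used to select it; the series below is invented, not the plant's data.

```python
import numpy as np

def ses_forecast(y, alpha):
    """Single exponential smoothing: level l_t = alpha*y_t + (1-alpha)*l_{t-1};
    the one-step-ahead forecast is the current level."""
    level = y[0]
    fitted = [level]
    for obs in y[1:]:
        fitted.append(level)                 # forecast made before seeing obs
        level = alpha * obs + (1 - alpha) * level
    return np.array(fitted), level           # in-sample forecasts, next forecast

def mape(actual, forecast):
    return 100 * np.mean(np.abs((actual - forecast) / actual))

# Toy monthly consumption series (kWh, invented).
y = np.array([310, 325, 318, 332, 340, 329, 351, 348, 360, 355], float)
fitted, next_f = ses_forecast(y, alpha=0.3)
print(f"MAPE = {mape(y[1:], fitted[1:]):.2f}%, next forecast = {next_f:.1f}")
```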
This document discusses evaluating measurement capabilities using gauge repeatability and reproducibility (gauge R&R) analysis of variance (ANOVA) methods. It begins by defining types of measurement errors like accuracy, precision, repeatability and reproducibility. It then describes how ANOVA separates variations due to operators and tools. The document provides an example gauge R&R study using ANOVA calculations in Minitab software to evaluate a measurement system. It finds that less than 1% of the total variation is due to the measurement system, indicating it is acceptable for use. In conclusion, gauge R&R ANOVA analysis can determine if a measurement system produces precise and accurate data to improve process results.
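A sketch of the balanced two-way ANOVA variance-component arithmetic behind a gauge R&R study (the computation a package like Minitab performs); the simulated data are arranged so the measurement system contributes only a small share of total variance.

```python
import numpy as np

def gauge_rr_anova(y):
    """Variance components for a balanced crossed gauge R&R study.
    y has shape (parts, operators, replicates)."""
    p, o, r = y.shape
    grand = y.mean()
    part_m = y.mean(axis=(1, 2))
    oper_m = y.mean(axis=(0, 2))
    cell_m = y.mean(axis=2)
    # Sums of squares for parts, operators, interaction, and repeatability.
    ss_p = o * r * np.sum((part_m - grand) ** 2)
    ss_o = p * r * np.sum((oper_m - grand) ** 2)
    ss_po = r * np.sum((cell_m - part_m[:, None] - oper_m[None, :] + grand) ** 2)
    ss_e = np.sum((y - cell_m[:, :, None]) ** 2)
    ms_p, ms_o = ss_p / (p - 1), ss_o / (o - 1)
    ms_po = ss_po / ((p - 1) * (o - 1))
    ms_e = ss_e / (p * o * (r - 1))
    # Variance components from expected mean squares (negatives clipped to 0).
    repeatability = ms_e
    interaction = max((ms_po - ms_e) / r, 0.0)
    reproducibility = max((ms_o - ms_po) / (p * r), 0.0) + interaction
    part_var = max((ms_p - ms_po) / (o * r), 0.0)
    gauge = repeatability + reproducibility
    return {"repeatability": repeatability,
            "reproducibility": reproducibility,
            "pct_gauge_variance": 100 * gauge / (gauge + part_var)}

rng = np.random.default_rng(3)
parts = rng.normal(0, 2.0, size=(10, 1, 1))       # real part-to-part spread
y = parts + rng.normal(0, 0.1, size=(10, 3, 2))   # small measurement error
print(gauge_rr_anova(y))                          # gauge share should be tiny
```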
The document provides information about air quality management districts, including their purpose to manage and enhance air quality through programs that oversee emissions and protect public health. It describes the functions of an air quality management district, such as processing permits, maintaining an emissions inventory, and adopting regulations to meet air quality standards. Additionally, the document lists several quality management tools that can be used, such as check sheets, control charts, Pareto charts, scatter plots, Ishikawa diagrams, and histograms.
Hellinger distance based drift detection for... (DSPG Bangaluru)
This document proposes a Hellinger distance-based method for detecting concept drift in non-stationary environments. It begins with background on concept drift and the need for algorithms to detect when the underlying data distribution changes. The proposed method, called Hellinger Distance Drift Detection Method (HDDDM), uses the Hellinger distance to monitor changes between the distribution of new data batches and a reference distribution, and performs a statistical test to detect if a significant change (drift) has occurred. It is a classifier-free, batch-based approach that analyzes raw feature distributions rather than classifier performance. The Hellinger distance is well-suited as it makes no distribution assumptions and provides a bounded measure of distributional divergence.
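The core distance computation is easy to sketch; HDDDM's adaptive threshold test on the sequence of batch-to-batch distances is omitted here.

```python
import numpy as np

def hellinger(a, b, bins=20, value_range=(0.0, 1.0)):
    """Hellinger distance between the binned distributions of two 1-D samples;
    bounded in [0, 1] and free of any distributional assumption."""
    ha, edges = np.histogram(a, bins=bins, range=value_range)
    hb, _ = np.histogram(b, bins=edges)       # same bin edges for both batches
    pa, pb = ha / ha.sum(), hb / hb.sum()
    return np.sqrt(1.0 - np.sum(np.sqrt(pa * pb)))

rng = np.random.default_rng(4)
reference = rng.beta(2, 5, 2000)              # reference batch
same = rng.beta(2, 5, 2000)                   # batch from the same distribution
drifted = rng.beta(5, 2, 2000)                # batch after concept drift
print(hellinger(reference, same))             # small
print(hellinger(reference, drifted))          # large (approaching 1)
```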
This document provides information about a quality management group including their research focus areas and tools. The quality management group's research is multidisciplinary and focuses on the physical, chemical and biological processes in aquatic ecosystems. They develop knowledge and tools for ecosystem and water quality management using quantitative models. Their tools for quality management include check sheets, control charts, Pareto charts, scatter plots, Ishikawa diagrams and histograms. Other related topics like quality management systems and standards are also listed.
The document compares two meta-heuristic algorithms, shuffled complex evolution (SCE) and a modified genetic algorithm (GA), on their ability to optimize parameters for hydrologic models. The algorithms were tested on two mathematical functions and in calibrating a rainfall-runoff model. For the mathematical functions, SCE performed best on one function while the GA was better on the other, showing the algorithms have different strengths. When calibrating the rainfall-runoff model, configurations that maintained higher population diversity performed better for both algorithms. Overall, SCE had a slightly better performance in model calibration than the GA. The paper recommends developing a catalogue of algorithm performances on test functions to help configuration for applied problems like hydrologic modeling.
Influence over the Dimensionality Reduction and Clustering for Air Quality Me... (IJAEMSJORNAL)
The current trend in industry is to analyze large data sets and apply data mining and machine learning techniques to identify patterns. The challenge with huge data sets is the high dimensionality associated with them: in some data analytics applications, large amounts of data yield worse performance, and since most data mining algorithms operate column-wise, too many columns restrict performance and slow them down. Dimensionality reduction is therefore an important step in data analysis. It converts high-dimensional data into a much lower dimension such that maximum variance is explained within the first few dimensions. This paper focuses on multivariate statistical and artificial neural network techniques for data reduction, each of which has a different rationale for preserving the relationships between input parameters during analysis. Principal Component Analysis, a multivariate technique, and the Self-Organising Map, a neural network technique, are presented. A hierarchical clustering approach is also applied to the reduced data set. A case study of air quality measurement is used to evaluate the performance of the proposed techniques.
A Mathematical Programming Approach for Selection of Variables in Cluster Ana... (IJRES Journal)
The document presents a mathematical programming approach for selecting important variables in cluster analysis. It formulates a nonlinear binary model to minimize the distance between observations within clusters, using indicator variables to select important variables. The model is applied to a sample dataset of 30 observations across 5 variables, correctly identifying variables 3, 4 and 5 as most important for clustering the observations into two groups. The results are compared to an existing variable selection heuristic, with the mathematical programming approach achieving a 100% correct classification versus 97% for the other method.
This document describes a study on enhancing the serial estimation of discrete choice models through the use of standardization, warm starting, and early stopping techniques. It first reviews literature on accelerating discrete choice model estimation and quasi-Newton optimization methods. It then details the three techniques used: standardizing variables, initializing subsequent models with previous solutions, and stopping optimization early based on log-likelihood trends. Two sequences of 100 discrete choice models are tested to evaluate the effectiveness of the techniques. Results show that warm starting parameters and the Hessian matrix from previous solutions significantly reduces estimation time compared to estimating models separately.
This document provides information about quality management courses in Ireland, including an overview of courses offered. It discusses a 1-day introduction course, a 2-day implementation course, and a 3-day lead auditor course that provide an overview of ISO 9001 requirements. Quality management tools are also summarized, including check sheets, control charts, Pareto charts, scatter plots, Ishikawa diagrams, histograms, and other related quality management topics.
This document provides information about tools and strategies for training quality management systems. It discusses six quality management tools - check sheets, control charts, Pareto charts, scatter plots, Ishikawa diagrams, and histograms. These tools can be used to collect and analyze quality data to help improve processes. The document is intended to provide comprehensive training materials to help laboratories design quality management system training workshops.
Natural convection in a differentially heated cavity plays a major role in understanding the flow physics and heat transfer aspects of many applications. Parameters such as Rayleigh number, Prandtl number, aspect ratio, inclination angle, and surface emissivity are considered to have either individual or grouped effects on natural convection in an enclosed cavity. Despite this, simultaneous studies of these parameters over a wide range are rare. Developing a correlation that captures the effect of a large number of parameters over wide ranges is challenging, because the number of simulations required to generate correlations for even a few parameters is extremely large, and to date there is no streamlined procedure for optimizing the number of simulations required for correlation development. The present study therefore aims to optimize the number of simulations using the Taguchi technique and then generate correlations by multiple-variable regression analysis. It is observed that, for a wide range of parameters, the proposed CFD-Taguchi-Regression approach drastically reduces the total number of simulations needed for correlation generation.
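A toy sketch of the regression half of the approach, assuming a power-law correlation fitted in log space; random parameter levels stand in for the Taguchi orthogonal array, and an invented power law stands in for the CFD runs.

```python
import numpy as np

# Stand-in "simulation" results: an invented power-law correlation with noise
# plays the role of the CFD outputs; the regression recovers the exponents.
rng = np.random.default_rng(5)
Ra = 10 ** rng.uniform(4, 8, 27)             # Rayleigh number levels
Pr = 10 ** rng.uniform(-1, 1, 27)            # Prandtl number levels
AR = rng.uniform(0.5, 4.0, 27)               # aspect ratio levels
Nu = 0.15 * Ra**0.28 * Pr**0.05 * AR**-0.1 * np.exp(0.02 * rng.normal(size=27))

# Multiple linear regression on log-transformed variables:
# log Nu = log c + a log Ra + b log Pr + d log AR
X = np.column_stack([np.ones(27), np.log(Ra), np.log(Pr), np.log(AR)])
coef, *_ = np.linalg.lstsq(X, np.log(Nu), rcond=None)
print(f"c = {np.exp(coef[0]):.3f}, exponents = {coef[1:].round(3)}")
```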
Defining Homogenous Climate zones of Bangladesh using Cluster Analysis (Premier Publishers)
Climate zones of Bangladesh are identified using the mathematical methodology of cluster analysis. Monthly rainfall data from 34 climate stations for 1991 to 2013 are used. Five agglomerative hierarchical clustering methods, based on the six most commonly used proximity measures, are chosen to perform the regionalization. In addition, three popular techniques (K-means, fuzzy, and density-based clustering) are applied initially to decide the most suitable method for identifying homogeneous regions. The stability of the clusters is also tested against nine validity indices. The Ward method based on Euclidean distance, K-means, and fuzzy clustering are judged the most likely to yield acceptable results in this particular case, as is often true in climatological research. The analysis identifies seven distinct climate zones in Bangladesh.
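A minimal sketch of the Ward/Euclidean regionalization step using scipy, with invented station-by-month rainfall profiles standing in for the 34-station data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy stand-in for station rainfall: rows = stations, columns = months,
# three groups of stations with distinct rainfall levels.
rng = np.random.default_rng(6)
stations = np.vstack([rng.normal(m, 1.0, size=(12, 12)) for m in (0, 4, 9)])

Z = linkage(stations, method="ward", metric="euclidean")  # Ward linkage tree
zones = fcluster(Z, t=3, criterion="maxclust")            # cut into 3 zones
print(zones)                                  # three blocks of matching labels
```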
Measurement systems analysis and a study of ANOVA method (eSAT Journals)
Abstract
Instruments and measurement systems form the base of any process improvement strategy. Widely used QC tools such as SPC depend on sample data taken from processes to track process variation, which in turn depends on the measuring system itself. The purpose of Measurement System Analysis (MSA) is to qualify a measurement system for use by quantifying its accuracy, precision, and stability, and to minimize their contribution to process variation through tools such as ANOVA. The purpose of this paper is to outline MSA and study the ANOVA method through a real-time shop floor experiment.
Keywords: SPC, Accuracy, Precision, Stability, QC, ANOVA
Calibration of Environmental Sensor Data Using a Linear Regression Technique (ijtsrd)
Linear regression is a linear approach to modeling the relationship between a scalar dependent variable y and one or more explanatory variables denoted X. A linear regression analysis is used here to improve the performance of instruments based on the light scattering method, whose raw measurements must be converted to actual PM10 concentrations using calibration factors. The findings suggest that light-scattering instruments can be used to measure and control PM10 concentrations in underground subway stations. In this study, the reliability of the light-scattering PM instrument was improved with a linear regression technique so that PM10 concentrations in subway stations could be measured continuously. In addition, the same technique was applied to calibrate a radon counter against an expensive reference radon measuring apparatus. Chungyong Kim and Gyu-Sik Kim, "Calibration of Environmental Sensor Data Using a Linear Regression Technique", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 2, Issue 1, December 2017. URL: http://www.ijtsrd.com/papers/ijtsrd7060.pdf http://www.ijtsrd.com/engineering/electrical-engineering/7060/calibration-of-environmental-sensor-data-using-a-linear-regression-technique/chungyong-kim
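The calibration step reduces to fitting a straight line between raw instrument readings and reference concentrations; a sketch with invented paired readings:

```python
import numpy as np

# Paired readings: raw light-scattering output vs. reference PM10 (ug/m^3);
# the numbers below are invented for illustration.
raw = np.array([12.0, 18.5, 25.1, 33.4, 41.0, 52.3, 60.8])
ref = np.array([15.0, 22.0, 31.0, 40.0, 50.0, 63.0, 74.0])

slope, intercept = np.polyfit(raw, ref, deg=1)   # least-squares line
calibrated = slope * raw + intercept
r = np.corrcoef(calibrated, ref)[0, 1]
print(f"PM10 ~= {slope:.3f} * raw + {intercept:.3f}, r = {r:.4f}")
```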
Accurate time series classification using shapelets (IJDKP)
Time series data are sequences of values measured over time. One of the most recent approaches to classifying time series data is to find shapelets within a data set; time series shapelets are subsequences that are representative of a class. To compare two time series sequences, existing work uses the Euclidean distance measure, whose drawback is that it requires the data to be standardized if scales differ. In this paper, we classify time series data using shapelets together with the Mahalanobis distance measure. The Mahalanobis distance is a descriptive statistic that provides a relative measure of a data point's distance (residual) from a common point, and it is used to identify and gauge the similarity of an unknown sample set to a known one. It differs from Euclidean distance in that it takes into account the correlations of the data set and is scale-invariant. We show that the Mahalanobis distance yields higher accuracy than the Euclidean distance measure.
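A small sketch of why the Mahalanobis distance behaves differently from the Euclidean distance on correlated data:

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of vector x from a distribution with the given
    mean and inverse covariance; scale-invariant, unlike Euclidean distance."""
    d = x - mean
    return np.sqrt(d @ cov_inv @ d)

rng = np.random.default_rng(7)
# Strongly correlated 2-D data: Euclidean distance ignores that structure.
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
X = rng.multivariate_normal([0, 0], cov, size=500)
mean, cov_inv = X.mean(axis=0), np.linalg.inv(np.cov(X, rowvar=False))

a = np.array([2.0, 2.0])    # along the correlation direction
b = np.array([2.0, -2.0])   # against it
print(np.linalg.norm(a), np.linalg.norm(b))   # equal Euclidean distances
print(mahalanobis(a, mean, cov_inv),
      mahalanobis(b, mean, cov_inv))          # b is much farther statistically
```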
A Threshold Fuzzy Entropy Based Feature Selection: Comparative Study (IJMER)
Feature selection is one of the most common and critical tasks in database classification. It reduces computational cost by removing insignificant and unwanted features and consequently makes the diagnosis process accurate and comprehensible. This paper presents a measurement of feature relevance based on fuzzy entropy, tested with a Radial Basis Function (RBF) network, Bagging (Bootstrap Aggregating), Boosting, and stacking on datasets from various fields. Twenty benchmark datasets available in the UCI Machine Learning Repository and KDD have been used for this work. The accuracy obtained from these classification processes shows that the proposed method is capable of producing good and accurate results with fewer features than the original datasets.
This document describes using machine learning techniques to classify products into 9 categories using data from the Otto Group. It evaluates feature selection methods like PCA and stepwise selection, and supervised learning algorithms like discriminant analysis, random forests, and neural networks. PCA performed best overall at feature selection. Random forests with 140 trees and ensembles of 50 neural networks with 42 hidden neurons each performed best, with a log loss of 0.47537 on training data and 0.58521 on test data, indicating no overfitting.
This paper proposes a novel model management technique to be applied in population-based heuristic optimization. This technique adaptively selects different computational models (both physics-based and statistical models) to be used during optimization, with the overall goal of ending with high-fidelity solutions in a reasonable time period. For example, in optimizing an aircraft wing to obtain maximum lift-to-drag ratio, one can use low-fidelity models such as the vortex lattice method, a high-fidelity finite volume model (that solves the full Navier-Stokes equations), or a surrogate model that substitutes for the high-fidelity model. The information from models with different levels of fidelity is integrated into the heuristic optimization process using a novel model-switching metric. In this context, models could be surrogate models, low-fidelity physics-based analytical models, and medium-to-high-fidelity computational models (based on grid density). The model switching technique replaces the current model with the next higher-fidelity model when a stochastic switching criterion is met at a given iteration during the optimization process. The switching criterion is based on whether the uncertainty associated with the current model output dominates the latest improvement of the fitness function. In the case of the physics-based models, the uncertainty in their output is quantified through an inverse assessment process by comparing with high-fidelity model responses or experimental data (if available). To determine the fidelity of surrogate models, the Predictive Estimation of Model Fidelity (PEMF) method is applied. The effectiveness of the proposed method is demonstrated by applying it to airfoil optimization with the objective of maximizing the lift-to-drag ratio of the wing under different flow regimes. It was found that the tuned low-fidelity model dominates the optimization process in terms of computational time and function calls.
Estimating Parameter of Nonlinear Bias Correction Method using NSGA-II in Dai... (TELKOMNIKA JOURNAL)
The nonlinear (NL) method is the most effective bias correction method for correcting statistical bias when observed precipitation data cannot be approximated by a gamma distribution. Since the NL method only adjusts the mean and variance, it does not perform well in handling bias in quantile values. This paper presents a scheme for the NL method with an additional condition aimed at mitigating bias in quantile values. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is applied to estimate the parameters of the NL method. Furthermore, to investigate the suitability of NSGA-II, a Single Objective Genetic Algorithm (SOGA) is run as a comparison. The experimental results reveal that NSGA-II is suitable when the SOGA solution produces low fitness. Applying NSGA-II can minimize the impact of daily bias correction on monthly precipitation. The proposed scheme successfully reduces biases in the mean, variance, and first and second quantiles. However, biases in the third and fourth moments cannot be handled robustly, and biases in the third quantile are reduced only during dry months.
A major challenge in hydrological modelling is identifying the optimal parameter set across different data, catchment characteristics, and objectives. Identification is difficult because conceptual hydrological models contain many parameters, and accuracy depends on all the relevant parameters influencing the model. The parameters cannot be estimated directly; they are instead determined by calibrating the model to minimize an objective function, which in turn depends on the sensitivity of the model parameters and on the calibration itself. In this paper we propose Emulator Based Optimization (EBO) to reduce the number of runs and improve the efficiency of conceptual models. Emulator models represent the response surface of the simulation model and can play a valuable role in optimization. This study evaluates EBO for calibrating the SWAT hydrological model through the steps of input design, simulation modelling, emulator modelling, convergence checking, and validation. The results show that EBO calibrates the model with high accuracy and captures the observed behaviour while consuming less time. This study supports decision making, planning, and design of water resources.
The document discusses implementing an integrated approach of the K-means clustering algorithm for prediction analysis. It begins with motivating the need to improve the accuracy and dependability of existing overlapping K-means clustering by removing its dependency on random initialization parameters. The proposed methodology determines the optimal number of clusters K based on the dataset, calculates initial centroid positions using a harmonic means method, and applies overlapping K-means clustering. The implementation and results on two large datasets show the integrated approach outperforms original overlapping K-means in terms of accuracy, F-measure, Rand index, and number of iterations.
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION (mathsjournal)
In numerical analysis, interpolation is a way of calculating unknown values of a function for any given value of the argument within the limits of the known arguments; it is essentially a means of estimating unknown data from related known data. The main goal of this research is to construct a central difference interpolation method derived from the combination of Gauss's third formula, Gauss's backward formula, and Gauss's forward formula. We also present graphical comparisons of all the existing interpolation formulas against the proposed central difference interpolation method. In these comparisons, the new method gives the best result, with the lowest error among the existing interpolation formulas.
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION (mathsjournal)
The document presents a new method of central difference interpolation that is derived from Gauss's third formula and another existing formula. It compares the new method to other common interpolation formulas through two example problems. For both examples, the new method is shown to provide results that are very close to the exact values and have lower error than the other existing interpolation formulas.
Text Skew Angle Detection in Vision-Based Scanning of Nutrition Labels (Vladimir Kulyukin)
This document presents an algorithm for detecting the text skew angle in images of nutrition labels. The algorithm applies multiple iterations of the 2D Haar Wavelet Transform to downsample the image and compute horizontal, vertical, and diagonal change matrices. It then binarizes and combines these matrices into a set of 2D change points. Finally, it uses a convex hull algorithm to find the minimum area rectangle containing all text pixels, and calculates the text skew angle as the rotation angle of this rectangle. The algorithm's performance is compared to two other text skew detection algorithms on a sample of over 600 nutrition label images, finding a median error of 4.62 degrees compared to 68.85 and 20.92 degrees for the other algorithms.
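The final rectangle-and-angle step is easy to sketch with OpenCV's minAreaRect; the paper's Haar wavelet downsampling and convex hull stages are not reproduced here, and minAreaRect's angle convention varies across OpenCV versions.

```python
import cv2
import numpy as np

def text_skew_angle(binary):
    """Skew angle of the minimum-area rectangle enclosing all foreground
    (text) pixels. minAreaRect angle conventions differ between OpenCV
    versions, so the result is folded into (-45, 45]."""
    points = cv2.findNonZero(binary)              # coordinates of text pixels
    (_, _), (_, _), angle = cv2.minAreaRect(points)
    if angle > 45:
        angle -= 90
    elif angle < -45:
        angle += 90
    return angle

# Synthetic test: a white bar rotated by 10 degrees on a black canvas.
img = np.zeros((200, 400), np.uint8)
cv2.rectangle(img, (100, 90), (300, 110), 255, -1)
M = cv2.getRotationMatrix2D((200, 100), 10, 1.0)
rotated = cv2.warpAffine(img, M, (400, 200))
print(text_skew_angle(rotated))                   # magnitude close to 10
```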
Power System State Estimation - A Review (IDES Editor)
This document provides a review of power system state estimation techniques. It discusses both static and dynamic state estimation algorithms. For static state estimation, it covers weighted least squares, decoupled, and robust estimation methods. Weighted least squares is commonly used but can have numerical instability issues. Decoupled state estimation approximates the gain matrix for faster computation. Robust estimation uses M-estimators and other techniques to handle outliers and bad data. Dynamic state estimation applies Kalman filtering, leapfrog algorithms, and other methods to continuously monitor system states over time.
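For reference, here is the weighted least-squares estimator at the heart of static state estimation, shown for a linear measurement model; real AC estimators iterate this step via Gauss-Newton on a nonlinear h(x), and the toy numbers below are invented.

```python
import numpy as np

def wls_estimate(H, z, sigma):
    """Weighted least-squares state estimate for the linear model z = H x + e,
    with weights 1/sigma^2 (the workhorse of static state estimation)."""
    W = np.diag(1.0 / sigma**2)
    G = H.T @ W @ H                      # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Toy DC power-flow-like example: 2 states, 4 meters of differing accuracy.
x_true = np.array([1.0, 0.5])
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
sigma = np.array([0.01, 0.01, 0.05, 0.05])   # meter standard deviations
rng = np.random.default_rng(8)
z = H @ x_true + sigma * rng.normal(size=4)
print(wls_estimate(H, z, sigma))             # close to [1.0, 0.5]
```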
Parametric estimation of construction cost using combined bootstrap and regre... (IAEME Publication)
The document discusses a method for estimating construction costs using a combined bootstrap and regression technique. It involves using historical project data to develop a regression model relating cost to key parameters. A bootstrap resampling method is then used to generate multiple simulated datasets from the original. Regression analysis is performed on each resampled dataset to calculate coefficients and develop a cost range estimate that captures uncertainty. This allows integrating probabilistic and parametric estimation methods while requiring fewer assumptions than traditional statistical techniques. The goal is to provide more accurate conceptual cost estimates early in projects when design information is limited.
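A minimal sketch of the combined idea: resample the historical projects with replacement, refit the regression each time, and read a cost range off the percentiles of the resulting predictions. The single-variable data below are invented; the paper's parametric model has more cost drivers.

```python
import numpy as np

# Historical projects: floor area (1000 m^2) and cost (millions), invented.
area = np.array([1.2, 2.5, 3.1, 4.0, 5.2, 6.8, 7.5, 9.0])
cost = np.array([2.1, 3.9, 4.6, 6.2, 7.8, 9.9, 11.0, 13.5])

rng = np.random.default_rng(9)
new_area, preds = 5.0, []
for _ in range(2000):                            # bootstrap resamples
    idx = rng.integers(0, len(area), len(area))  # sample projects w/ replacement
    slope, intercept = np.polyfit(area[idx], cost[idx], 1)
    preds.append(slope * new_area + intercept)   # prediction per resample

lo, mid, hi = np.percentile(preds, [5, 50, 95])
print(f"cost for {new_area}k m^2: {mid:.2f} (90% range {lo:.2f} to {hi:.2f})")
```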
On Confidence Intervals Construction for Measurement System Capability Indica... (IRJESJOURNAL)
Abstract: There are many criteria that have been proposed to determine the capability of a measurement system, all based on estimates of variance components. Some of them are the Precision to Tolerance Ratio, the Signal to Noise Ratio and the probabilities of misclassification. For most of these indicators, there are no exact confidence intervals, since the exact distributions of the point estimators are not known. In such situations, two approaches are widely used to obtain approximate confidence intervals: the Modified Large Samples (MLS) methods initially proposed by Graybill and Wang, and the construction of Generalized Confidence Intervals (GCI) introduced by Weerahandi. In this work we focus on the construction of confidence intervals by the generalized approach in the context of Gauge repeatability and reproducibility studies. Since GCI are obtained by simulation procedures, we analyze the effect of the number of simulations on the variability of the confidence limits, as well as the effect of the size of the experiment designed to collect data on the precision of the estimates. Both studies allowed deriving some practical implementation guidelines in the use of the GCI approach. We finally present a real case study in which this technique was applied to evaluate the capability of a destructive measurement system.
This document provides an overview of machine learning concepts, including feature selection; dimensionality reduction techniques such as principal component analysis and singular value decomposition; feature encoding, normalization, and scaling; dataset construction; feature engineering; data exploration; machine learning types and categories; model selection criteria; popular Python libraries; tuning techniques such as cross-validation and hyperparameters; and performance analysis metrics such as the confusion matrix, accuracy, F1 score, ROC curve, and the bias-variance tradeoff.
Performance Of A Combined Shewhart-Cusum Control Chart With Binomial Data For...IJERA Editor
The purpose of this study is to investigate the performance of a combined Shewhart-CUSUM chart for binomial data, compared to individual Shewhart and CUSUM control charts, for large shifts in the process mean. The comparison is performed by analyzing several ARL (average run length) measures; the ARL of the combined control charts is obtained through Monte Carlo simulations. The results confirm the effectiveness of the combined charts in increasing the sensitivity of CUSUM schemes: the Shewhart limits make it possible to indicate large changes in the process mean, and there is a region of process shifts where the combined chart performs better than the individual procedures. For the detection of only large alterations in the process mean, however, an individual CUSUM or Shewhart chart is recommended.
SENSITIVITY ANALYSIS IN A LIDAR-CAMERA CALIBRATIONcscpconf
In this paper, variability analysis was performed on the model calibration methodology between a multi-camera system and a LiDAR laser sensor (Light Detection and Ranging). Both sensors are used to digitize urban environments. A practical and complete methodology is presented to predict the error propagation inside the LiDAR-camera calibration. We perform a sensitivity analysis in a local and a global way: the local approach analyses the output variance with respect to the input, varying only one parameter at a time, while in the global sensitivity approach all parameters are varied simultaneously and sensitivity indexes are calculated over the total variation range of the input parameters. We quantify the uncertainty behaviour of the intrinsic camera parameters and the relationship between the noisy data of both sensors and their calibration. We calculated the sensitivity indexes with two techniques, Sobol and FAST (Fourier amplitude sensitivity test). Statistics of the sensitivity analysis are displayed for each sensor, as is the sensitivity ratio in the laser-camera calibration data.
This thesis aims to formulate a simple measurement to evaluate and compare the predictive distributions of out-of-sample forecasts between autoregressive (AR) and vector autoregressive (VAR) models. The author conducts simulation studies to estimate AR and VAR models using Bayesian inference. A measurement is developed that uses out-of-sample forecasts and predictive distributions to evaluate the full forecast error probability distribution at different horizons. The measurement is found to accurately evaluate single forecasts and calibrate forecast models.
A parsimonious SVM model selection criterion for classification of real-world ...o_almasi
This paper proposes and optimizes a two-term cost function, consisting of a sparseness term and a generalized v-fold cross-validation term, by a new adaptive particle swarm optimization (APSO). APSO updates its parameters adaptively based on dynamic feedback from the success rate of each particle's personal best. Since the proposed cost function favors choosing fewer support vectors, the complexity of the SVM models decreases while the accuracy remains in an acceptable range. Therefore, the testing time decreases, making SVM more applicable for practical applications on real data sets. A comparative study on data sets from the UCI database is performed between the proposed cost function and a conventional cost function to demonstrate the effectiveness of the proposed cost function.
Similar to Modified Hotelling's T² control charts using modified Mahalanobis distance (20)
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par...IJECEIAES
Wide application of the proportional-integral-differential (PID) regulator in industry requires constant improvement of methods for adjusting its parameters. The paper deals with optimization of PID regulator parameters using neural network methods. A methodology for choosing the architecture (structure) of the neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, and the form and type of activation function. Algorithms for training the neural network based on minimizing the mismatch between the regulated value and the target value are developed. The method of backpropagation of gradients is proposed to select the optimal training rate of the neurons of the neural network. The neural network optimizer, which is a superstructure on the linear PID controller, improves the regulation accuracy from 0.23 to 0.09, thus reducing power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed simplified modulation technique paves the way for more straightforward and efficient control of multilevel inverters, enabling their widespread adoption and integration into modern power electronic systems. Through the amalgamation of sinusoidal pulse width modulation (SPWM) with a high-frequency square wave pulse, this controlling technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a decreased count of voltage references, thereby simplifying the control algorithm.
A review on features and methods of potential fishing zoneIJECEIAES
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It examines features such as sea surface temperature (SST) and sea surface height (SSH), along with the classification methods used to classify the data. The study underscores the importance of examining potential fishing zones using advanced analytical techniques and thoroughly explores the methodologies employed by researchers, covering both past and current approaches, with a focus on data characteristics and the application of classification algorithms to identifying potential fishing zones. The prediction of potential fishing zones relies significantly on the effectiveness of classification algorithms. Previous research has assessed the performance of models like support vector machines, naïve Bayes, and artificial neural networks (ANN); in one reported result, a support vector machine (SVM) classified test data for fisheries classification with 97.6% accuracy, against 94.2% for naïve Bayes. Considering recent work in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f...IJECEIAES
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make little devices that boost productivity, with the goal of optimizing device density. Scientists are reducing interconnect delays to improve circuit performance. This led to three-dimensional integrated circuit (3D IC) concepts, which stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical coupling is a big worry with 3D integrated circuits. Researchers have developed and tested through-silicon vias (TSV) and substrates to decrease electrical wave coupling. This study illustrates a novel noise coupling reduction method using several electrical coupling models. A 22% drop in electrical coupling from wave-carrying to victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types; the advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. The paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present, and presents the proposed design diagram, which sets the priorities and requirements of the system. The proposed approach allows setups to improve their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy milk farmer support the theoretical work and highlight its benefits to existing plants. The short return on investment of the proposed approach supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results considered contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using this switching strategy, low-order harmonics in the output current and voltage are not produced, and consequently the size of the output filter is reduced. The suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in the MG test systems, and the simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and attract significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to the research trends mentioned above by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level on smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions is presented. To that effect, we searched the Web of Science (WoS) and Scopus databases, obtaining 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. Given the extraction limitation of the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented along with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences on the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, which are divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Additionally, each approach is compared and assessed based on several factors, including implementation costs, non-detected zones, declining power quality, and response times, using the analytical hierarchy process (AHP). When all criteria are compared together, the multi-criteria decision-making analysis gives overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods. Thus, it can be seen from the total weights that hybrid approaches are the least suitable choice, while signal processing-based methods are the most appropriate islanding detection methods to select and implement in a power system with respect to the aforementioned factors. The proposed hierarchy model is studied and examined using Expert Choice software.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by environmental factors. This variability hampers the control and utilization of solar cells' peak output. In this study, a single-stage grid-connected PV system is designed to enhance power quality. Our approach employs fuzzy logic in the direct power control (DPC) of a three-phase voltage source inverter (VSI), enabling seamless integration of the PV connected to the grid. Additionally, a fuzzy logic-based maximum power point tracking (MPPT) controller is adopted, which outperforms traditional methods like incremental conductance (INC) in enhancing solar cell efficiency and minimizing the response time. Moreover, the inverter's real-time active and reactive power is directly managed to achieve a unity power factor (UPF). The system's performance is assessed through MATLAB/Simulink implementation, showing marked improvement over conventional methods, particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m², the results show that the proposed method reduces the total harmonic distortion (THD) of the current injected into the grid by approximately 46% and 38%, respectively, compared to conventional methods. Furthermore, we compare the simulation results with IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that caters to the future needs of society, owing to their renewable, inexhaustible, and cost-free nature. The power output of these systems relies on solar cell radiation and temperature. In order to mitigate the dependence on atmospheric conditions and enhance power tracking, a conventional approach has been improved by integrating various methods. To optimize the generation of electricity from solar systems, the maximum power point tracking (MPPT) technique is employed. To overcome limitations such as steady-state voltage oscillations and improve transient response, two traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb and observe (P&O), have been modified. This research paper aims to simulate and validate the step size of the proposed modified P&O and FLC techniques within the MPPT algorithm using MATLAB/Simulink for efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed, the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results. Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving solar cell system status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancing their efficiency. Hence, in the present article an improved smart prototype of an internet of things (IoT) technique based on an embedded system through the NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions in Egypt, the cities of Luxor, Cairo, and El-Beheira, were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored data for solar irradiance, temperature, and humidity were visualized live by Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m² respectively during the solar day. The accuracy and rapidity of obtaining monitoring results with the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly favor Luxor and Cairo over El-Beheira as suitable locations for building a solar cell system station.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Thereby, IoT security systems must monitor and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data for training, and such complex datasets are impossible to source in a large network like IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Modified Hotelling's T² control charts using modified Mahalanobis distance
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 11, No. 1, February 2021, pp. 284~292
ISSN: 2088-8708, DOI: 10.11591/ijece.v11i1.pp284-292
Journal homepage: http://ijece.iaescore.com
Modified Hotelling's T² control charts using modified Mahalanobis distance

Firas Haddad
Department of General Courses, College of Applied Studies and Community Service, Imam Abdulrahman Bin Faisal University, Al-Dammam, Saudi Arabia
Article history: Received Apr 12, 2020; Revised Jun 13, 2020; Accepted Jul 5, 2020

ABSTRACT
This paper proposes a new adjusted Hotelling's T² control chart for individual observations. For this objective, a bootstrap method for producing the individual observations was employed. Both the arithmetic mean vector and the covariance matrix in the traditional Hotelling's T² chart were substituted by the trimmed mean vector and the covariance matrix of the robust scale estimator Qn, respectively, and the performance of the resulting chart was assessed by simulation. The false alarm rate and the probability of detecting outliers were used to determine the validity of this modified chart. The findings revealed a considerable improvement in its performance.
Keywords: Bootstrap; Hotelling's T² charts; Mahalanobis distance; Robust estimator; Trimmean

This is an open access article under the CC BY-SA license.
Corresponding Author:
Firas Haddad,
College of Applied Studies and Community Service,
Imam Abdulrahman Bin Faisal University,
Dammam, Saudi Arabia.
Email: fshaddad@iau.edu.sa
1. INTRODUCTION
Statistical process control charts employ statistical tools to monitor the performance of the production process. In practice, the overall quality of a product is defined by more than one quality feature; the quality of a certain type of product may be defined by its degree of hardness, thickness, weight, width, length, etc. Various features of a manufactured component therefore require simultaneous monitoring, for which either the multivariate Shewhart-type 𝒳² chart or the Hotelling's T² control chart might be used.
Hotelling's T² statistic is one of the most popular methods in multivariate statistical control charts [1, 2]. The Hotelling's T² statistic is the multivariate generalization of the Student's t-statistic: it is the expanded case obtained after squaring both sides of the Student's t-statistic equation. In other words, the Hotelling T² statistic equals
T² = n (X̄ − μ0)ᵀ S⁻¹ (X̄ − μ0) (1)

where X̄ and S are the sample mean vector and the p×p covariance matrix, respectively.
Outlying values among the measurements of the characteristics make the traditional Hotelling's T² chart ineffective, although Hotelling's T² control charts are effective and appropriate when the data are taken from a normal distribution. As a result, Mostajeran, Iranpanah and Noorossana [3] pointed out the use of
non-parametric bootstrap control charts for unknown distributions. Non-parametric bootstrap control charts are suitable when using a large sample size is impossible and the process parameters must be estimated from Phase I. Generally speaking, a normal distribution is required for control charts to admit an observation. For cases with non-normal distributions, non-parametric control charts, including sign control charts, are applicable. It is worth mentioning that the non-parametric bootstrap algorithm in this study is employed to calculate the control chart parameters. By and large, original observations might be employed in cases that do not entail any distribution assumption. Similarly, robust statistics were employed rather than sensitive statistics in the Hotelling T² chart. Such methods were regarded as efficient for overcoming the poor-performance issue in the presence of extreme values in product features.
Several previous studies have employed this technique, most of them aimed at improving this model; see [4]. Among the studies that employed bootstrap samples, location and scale estimators, the trimmed mean and the trimmed covariance were used. For instance, [5] used the so-called trimmed mean and trimmed covariance matrix as alternatives to the arithmetic mean and covariance matrix, respectively. Surtihadi [6] employed the median as a robust location estimator when constructing the robust bivariate sign tests of Blumen and Hodges. Alfaro and Ortega [7] suggested a new alternative chart by substituting the sample mean vector with the trimmed mean vector and the covariance matrix with the trimmed covariance matrix.
Jones [8] used the bootstrap, rather than the traditional parametric assumption, in control chart construction; bootstrap methods exploit computing power. Phaladiganon et al. [9] employed a bootstrap-based multivariate T² control chart and demonstrated its capacity to monitor a process when the data distribution is non-normal or unknown, using a simulation study to assess the chart's performance.
The method provided in [10] relied on the notion of bootstrapping: the authors bootstrapped the data and applied it to in-control state estimation. A non-parametric approach was demonstrated in [11] for estimating the cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) control limits on a given dataset. An innovative bootstrap algorithm for constructing the Hotelling's T² control chart was presented in [12]. Non-parametric bootstrap multivariate control charts |S|, W, and G, which use bootstrapped data to estimate the in-control state, were discussed in [3]; the findings reveal that the bootstrap control charts achieved reasonable performance.
Similarly, [13] presented the application of a bootstrap multivariate control chart based on the Hotelling's T² statistic. The author in [14] modified three robust Hotelling's T² charts by replacing the mean vector and the covariance matrix with trimmed estimators. The trimming was done using the modified Mahalanobis distance, where the location estimator is the median and the scale estimator is one of the robust scale estimators MADn, Sn and Tn.
Tukey and McLaughlin [15] proposed a new substitute bivariate robust Hotelling's T² chart, exchanging the arithmetic mean for the winsorized modified one-step M-estimator (wMOM) vector and substituting the sensitive covariance matrix with the covariance matrix of the robust scale estimator Qn, respectively.
The current study aims at enhancing the functioning of the Hotelling's T² chart. Thus, a novel method is suggested that reduces the chart's sensitivity to outliers. There is a wide choice of robust location and scale estimators that might be considered for this purpose. This research seeks to improve the Hotelling T² chart by replacing the mean vector and the covariance matrix with the trimmed mean and its corresponding covariance matrix, following [5], with the trimmed mean based on the robust scale estimator Qn. To appraise the performance of the new modified robust Hotelling T² control chart, outliers are inserted into data generated from the standard normal distribution, with the bootstrap method used to generate the samples; the false alarm rate and the probability of detecting outliers are then calculated to judge the performance of the modified chart. Section 2 presents the details of the research method, section 3 presents the results and discussion, and section 4 concludes.
2. RESEARCH METHOD
2.1. Trimming method
The covariance matrix is influenced by the emergence of outliers. As a result, we use the trimmed variance-covariance matrix as a replacement for the covariance matrix. The computation of this estimator is based on the winsorized covariance matrix, i.e., the variance-covariance matrix calculated from the winsorized sample. In order to produce the modified Hotelling
T² control charts, the robust location estimator (the trimmed mean) and the trimmed variance-covariance matrix are used. The trimming of outlying data is carried out with the modified Mahalanobis distance method, where the modification replaces the sample mean vector in the Mahalanobis distance formula with the median vector and replaces the variance-covariance matrix with the robust scale variance-covariance matrix of Qn [16]. The trimming, and the replacement of data to obtain the winsorized sample, are driven by the values of the modified Mahalanobis distance, with the trimming based on a percentage of 20% from each end rather than on the natural ordering of the Mahalanobis distance alone. In the studies of [5, 17], the observations corresponding to the largest two values of the Mahalanobis distance were trimmed from each end; that technique is appropriate for subgroup observations, but our case concerns bootstrap individual observations, for which a trimming percentage is more appropriate. The authors in [16] suggested that the best percentage is 20%-25% in a symmetric distribution, and the authors in [18, 19] proposed trimming 20% from each tail of the data. Consequently, the trimming percentage in this paper is 20% for each end. A minimal sketch of this modified distance is given below.
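As a concrete illustration, the modified distance of this section can be sketched in Python. This is a minimal sketch, not the author's code: the Qn implementation below is the basic O(n²) Rousseeuw-Croux form with the asymptotic constant 2.2219 and no small-sample correction, and assembling the Qn-based matrix from per-variable Qn scales plus Spearman correlations is our assumption, mirroring the Spearman construction the paper uses later for the trimmed covariance.

```python
import numpy as np
from scipy.stats import spearmanr

def qn_scale(x):
    # Basic O(n^2) Rousseeuw-Croux Qn: 2.2219 times the k-th smallest
    # pairwise distance, k = C(h, 2) with h = floor(n/2) + 1
    # (small-sample correction factors omitted).
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = np.abs(x[:, None] - x[None, :])[np.triu_indices(n, k=1)]
    h = n // 2 + 1
    k = h * (h - 1) // 2
    return 2.2219 * np.partition(diffs, k - 1)[k - 1]

def qn_cov(X):
    # Assumed construction: Qn scales on the diagonal, and
    # Qn(Xj) * Qn(Xg) * spearman(Xj, Xg) off the diagonal.
    n, p = X.shape
    q = np.array([qn_scale(X[:, j]) for j in range(p)])
    R = np.eye(p)
    for j in range(p):
        for g in range(j + 1, p):
            rho, _ = spearmanr(X[:, j], X[:, g])
            R[j, g] = R[g, j] = rho
    return np.outer(q, q) * R

def modified_mahalanobis(X):
    # Squared modified Mahalanobis distances: median vector in place of
    # the mean, Qn-based matrix in place of the covariance matrix.
    med = np.median(X, axis=0)
    S_inv = np.linalg.inv(qn_cov(X))
    D = X - med
    return np.einsum('ij,jk,ik->i', D, S_inv, D)
```

Ordering the squared distances returned by modified_mahalanobis then drives the trimming and winsorizing described in the next section.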
2.2. Constructing control charts
The arithmetic mean vector X̄ is substituted by the alternative robust location estimator, the trimmed mean X̄tQn, and the sample variance-covariance matrix S is substituted by the trimmed variance-covariance matrix based on the robust scale estimator Qn. The computation of the trimmed covariance matrix first requires calculating the winsorized covariance matrix, which is the variance-covariance matrix of the winsorized sample. The winsorized sample is obtained by trimming the outliers from the data; the Mahalanobis distance technique is employed for this in the present study. The Mahalanobis distance formula relies on the arithmetic mean vector and the variance-covariance matrix. This paper modifies the Mahalanobis distance by replacing the arithmetic mean with the robust location estimator, the median, and replacing the variance-covariance matrix with the covariance matrix of the robust scale estimator Qn [20]. The winsorized sample is thus obtained with respect to this robust scale estimator. From this winsorized sample, one trimmed mean X̄tQn and one trimmed variance-covariance matrix StQn are calculated. Based on this winsorized sample, the modified Hotelling's T² control chart is constructed as follows:
- Defining the modified Mahalanobis distance values Δ² for each vector Xi = (Xi1, …, Xip) in the individual bootstrap data sets, where i = 1, …, n and p is the number of variables.
- Ordering the values of the modified Mahalanobis distance. According to [16], the best trimming percentage is 20%-25% from each end in a symmetric distribution, and the authors of [18, 19] proposed trimming 20% from each tail of the data; thus a total of 40% (20% from each end) is employed as the trimming percentage.
- Trimming the largest 40% of the modified Mahalanobis distance values Δ², i.e., trimming the observations in the data corresponding to the 40% largest values of Δ².
- Creating the winsorized sample by substituting the trimmed observations with the observations corresponding to the next 40% of the modified Mahalanobis distance values Δ².
- Defining the trimmed mean for each bootstrap data set of individual observations by dividing the sum of the remaining individual observations by (n − k), where k is the number of trimmed observations; since there is only one bootstrap data set, one value of the trimmed mean X̄tQn follows.
- Defining the winsorized covariance matrix SwQn from the winsorized sample obtained with the robust scale estimator Qn.
- Pinpointing the trimmed variance-covariance matrix StQn for the individual bootstrap data set by applying the formula [17]:
St = ((n − 1)/(nt − 1)) × Sw (2)
where Sw is the winsorized covariance matrix and nt is the number of observations remaining after trimming. The diagonal components of the trimmed variance-covariance matrix are the sample trimmed variances, Stjj = Stj², and the other elements Stjg are the sample trimmed covariances of the pairs of vectors Xj, Xg, computed as follows:
a) Compute St(Xj), St(Xg); j = 1, …, p, g = 1, …, p, and j ≠ g.
b) Compute the Spearman rank correlation between Xj and Xg, denoted corr(Xj, Xg), because this type of correlation is robust against extreme data [21].
c) The sample covariance between the variables Xj and Xg is computed according to the following formula:

Stjg = cov(Xj, Xg) = St(Xj) St(Xg) corr(Xj, Xg) (3)
- Computing the inverse of the sample trimmed covariance matrix StQn, denoted StQn⁻¹.
- Defining the modified Hotelling's T² control chart by substituting the sample mean in the traditional Hotelling's T² control chart with the robust location estimator X̄tQn, and substituting the inverse of the sample covariance matrix in the traditional chart with the inverse of the robust scale covariance matrix StQn⁻¹. The modified Hotelling's T² control chart is then anchored on this form:

T²tQn(Xi) = (Xi − X̄tQn)ᵀ StQn⁻¹ (Xi − X̄tQn) (4)
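A compact sketch of Eqs. (2)-(4) follows, assuming the winsorized sample X_wins, the trimmed mean xbar_t, and the counts n and n_t are already available from the steps above. Reading St(Xj) in Eq. (3) as the trimmed standard deviation obtained by applying Eq. (2) variable by variable is our interpretation of the text, not a construction the paper spells out.

```python
import numpy as np
from scipy.stats import spearmanr

def trimmed_cov(X_wins, n, n_t):
    # Diagonal: winsorized variances rescaled by (n - 1)/(n_t - 1) -- Eq. (2).
    # Off-diagonal: St(Xj) * St(Xg) * spearman corr(Xj, Xg)        -- Eq. (3).
    p = X_wins.shape[1]
    s2_w = np.var(X_wins, axis=0, ddof=1)          # winsorized variances
    s_t = np.sqrt((n - 1) / (n_t - 1) * s2_w)      # trimmed std deviations
    S_t = np.diag(s_t ** 2)
    for j in range(p):
        for g in range(j + 1, p):
            rho, _ = spearmanr(X_wins[:, j], X_wins[:, g])
            S_t[j, g] = S_t[g, j] = s_t[j] * s_t[g] * rho
    return S_t

def t2_tqn(x, xbar_t, S_t):
    # Modified Hotelling statistic of Eq. (4) for one new observation x.
    d = x - xbar_t
    return float(d @ np.linalg.inv(S_t) @ d)
```

A new observation signals when t2_tqn(...) exceeds the simulated UCL of section 2.4.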
2.3. Independent and dependent variables
This study addresses the independent and dependent cases, denoted Case A and Case B, respectively. The contamination model is the following normal mixture:

(1 − ε) Np(μ0, Σ0) + ε Np(μ1, Σ1) (5)

Here ε is the percentage of outliers; Np(μ0, Σ0) is the in-control distribution, whose parameters μ0, Σ0 are called the in-control parameters; and Np(μ1, Σ1) is the out-of-control distribution, whose parameters μ1, Σ1 are called the out-of-control parameters. Two cases of the variables are considered: independent variables (Case A) and dependent variables (Case B), as the following formulas clarify.
2.3.1. Case (A):
(1 − ε) Np(0, Ip) + ε Np(μ1, Ip) (6)
Without loss of generality, the in-control mean vector μ0 is 0, while the variance-covariance matrix of both the in-control and out-of-control distributions equals the identity matrix Ip. The authors in [22, 23] indicate that Ip is a homogeneous variance-covariance matrix with 1 on the main diagonal and 0 elsewhere, which corresponds to no correlation among the variables. The value of the out-of-control parameter μ1 depends on the non-centrality parameter:

(μ1 − μ)′ Σ⁻¹ (μ1 − μ) (7)

where μ1 is a vector manifesting the amount of the shift in the mean vector; a larger value of the non-centrality parameter corresponds to more extreme outliers. Following many statisticians, such as [7, 24, 25], μ1 takes the values 0 (no alteration), 3 and 5 to obtain more extreme outliers.
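For concreteness, under Case A (Σ = Ip, μ = 0) the non-centrality parameter of Eq. (7) reduces to μ1′μ1. Placing the whole shift on a single coordinate, as in the hypothetical snippet below, is one common convention; the paper fixes only the shift sizes.

```python
import numpy as np

p, shift = 2, 5.0
mu1 = np.zeros(p)
mu1[0] = shift                                # assumed placement of the shift
ncp = mu1 @ np.linalg.inv(np.eye(p)) @ mu1    # Eq. (7); equals shift**2 here
print(ncp)                                    # 25.0
```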
2.3.2. Case (B):
(1 − ε) Np(0, Σ0) + ε Np(μ1, Σ0) (8)
Without loss of generality, the in-control mean vector μ0 equals 0, while the variance-covariance matrices of the in-control and out-of-control distributions are equal (i.e., Σ1 = Σ0). Σ0 is a homogeneous p × p variance-covariance matrix with a high level of correlation between the variables: the elements of the main diagonal of Σ0 are 1 and the other elements are 0.9 [23, 26]. The out-of-control parameter μ1 takes the values 0 (no alteration) and 5 (giving good leverage points). This case is used to judge the performance of the modified Hotelling's T² statistic for dependent variables, i.e., when there is correlation among the variables. The percentages of outliers ε are 0.1 and 0.2. Likewise, Holmes and Mergen [27] note that whenever the Hotelling's T² statistic is out of control, the correlation among the variables has changed. Since Case B has high correlation between the variables, it can be used to measure whether an alteration in the correlation between the variables affects the probability of Type I error and the probability of detecting outliers for the modified Hotelling T² statistic. A sampler covering both contamination cases is sketched below.
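Both contamination cases can be generated directly from Eq. (5). The sketch below is illustrative only; putting the whole shift on the first coordinate is an assumption, since the paper fixes the shift sizes but not their placement.

```python
import numpy as np

def contaminated_sample(n, p, eps, shift, case="A", rng=None):
    # Draw n observations from (1 - eps) Np(mu0, S) + eps Np(mu1, S), Eq. (5):
    # Case A uses S = I_p (Eq. (6)); Case B uses unit variances with 0.9
    # off-diagonal correlations (Eq. (8)).
    rng = rng or np.random.default_rng()
    if case == "A":
        Sigma = np.eye(p)
    else:
        Sigma = np.full((p, p), 0.9)
        np.fill_diagonal(Sigma, 1.0)
    mu1 = np.zeros(p)
    mu1[0] = shift                       # assumed placement of the shift
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    is_out = rng.random(n) < eps         # each row is an outlier w.p. eps
    X[is_out] += mu1                     # same covariance, shifted mean
    return X, is_out

# e.g., 20% outliers with shift 5 under the dependent case:
X, is_out = contaminated_sample(n=50, p=2, eps=0.2, shift=5.0, case="B")
```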
2.4. Upper control limits (UCL)
Since the distribution of the Hotelling's T² statistic is intractable when the sample size is small, the upper control limits are calculated by simulation. The simulation runs through two phases, I and II. In Phase I, the bootstrap data sets are generated from the standard normal distribution N(0, Ip), and the traditional and robust estimators are calculated for these data sets. In Phase II, a new additional observation is produced from the standard normal distribution N(0, Ip); the generation is repeated 5000 times, and the corresponding modified Hotelling's T² statistic is calculated for each of the 5000 new observations. The 95th percentile of these values of the adjusted Hotelling's T² statistic is then taken as the UCL.
The simulation for computing false alarms and the probability of detecting outliers likewise runs through Phases I and II. In Phase I, the bootstrap method generates individual observations from N(0, Ip), the outliers are added according to the independent and dependent cases (Case A and Case B), and the traditional and robust estimators are computed. In Phase II, the false alarm rate and the probability of detecting outliers are calculated: a false alarm is recorded when a new observation produced from the in-control distribution signals, while the probability of detecting outliers is computed from new observations produced from the out-of-control distribution. A minimal sketch of this two-phase procedure follows.
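The two-phase simulation can be summarized as below. Here chart_stat(X_ref, x) is a hypothetical stand-in for the modified T² of Eq. (4), fitted on the Phase I reference data; the 5000 replications and the 95th percentile follow the text.

```python
import numpy as np

def simulate_ucl(chart_stat, n, p, reps=5000, alpha=0.05, rng=None):
    # Phase I: bootstrap a reference sample from N(0, I_p); Phase II:
    # score `reps` fresh in-control observations and take the
    # (1 - alpha) quantile of the statistic as the UCL.
    rng = rng or np.random.default_rng()
    X0 = rng.standard_normal((n, p))
    X_ref = X0[rng.integers(0, n, size=n)]       # bootstrap resample
    t2 = np.array([chart_stat(X_ref, rng.standard_normal(p))
                   for _ in range(reps)])
    return np.quantile(t2, 1.0 - alpha)

def alarm_rates(chart_stat, X_ref, ucl, mu1, Sigma, reps=5000, rng=None):
    # False-alarm rate (new in-control draws above the UCL) and
    # detection probability (new out-of-control draws above the UCL).
    rng = rng or np.random.default_rng()
    p = len(mu1)
    t2_in = np.array([chart_stat(X_ref,
                                 rng.multivariate_normal(np.zeros(p), Sigma))
                      for _ in range(reps)])
    t2_out = np.array([chart_stat(X_ref, rng.multivariate_normal(mu1, Sigma))
                       for _ in range(reps)])
    return (t2_in > ucl).mean(), (t2_out > ucl).mean()
```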
3. RESULTS AND DISCUSSION
The following table presents the false alarm rates and the probability of detecting outliers for the new modified robust chart. As shown in Table 1, for case (A) with bivariate variables and significance level α = 0.05, in the presence of data outliers the false alarm rates of the robust chart are better than those of the traditional Hotelling's T² chart. In general, the false alarm rates are under control when the percentage of outliers is ε = 0.1; as sample sizes increase, the false alarm rates become conservative when the percentage of outliers is ε = 0.2. Regarding the probability of detection, the robust Hotelling's T² control charts detect outliers better than the traditional Hotelling's T² control charts. Particularly when the percentage of outliers is ε = 0.2, a huge difference can be seen between the detection rates of the robust chart and the traditional chart, implying that the findings for the robust charts are better than those for the traditional charts. Furthermore, for the robust Hotelling T² control chart, the probability of detecting outliers increases as the sample size increases, reaching 100% detection when the sample size reaches 100. On the contrary, the probability of detection for the traditional control charts decreases as the sample size increases. Therefore, it can be deduced that the modified robust Hotelling's T² control charts are better than the traditional Hotelling's T² control charts, particularly in the probability of detecting outliers.
With regard to case (B), concerning the dependent variables, the robust chart performs better than the traditional chart with respect to both false alarms and the probability of detection. The probabilities of detecting outliers are particularly good when the percentage of outliers is ε = 0.1, even though the false alarm values in this case are not entirely reasonable. Increasing the sample size affects the probability of detection but does not affect the false alarm values. This suggests that the modified robust Hotelling's 𝑇² control chart is better than the traditional Hotelling's 𝑇² control chart, particularly in the probability of detecting outliers.
Case (A), with independent variables, is illustrated in Figures 1-4 for p = 2 quality characteristics. The findings reveal that the newly adopted chart outperforms the conventional chart in detecting outliers, and its performance is notably strong for sample sizes of 30 and 40. For the dependent variables (case (B)), Figures 5 and 6 confirm the superiority of the new chart over the conventional one, indicating the appropriateness of the new chart for this case.
Figure 1. Rates of detected outliers versus sample size m (case A, 10% outliers, shifted mean 3)
Figure 2. Rates of detected outliers versus sample size m (case A, 20% outliers, shifted mean 3)
Figure 3. Rates of detected outliers versus sample size m (case A, 10% outliers, shifted mean 5)
Figure 4. Rates of detected outliers versus sample size m (case A, 20% outliers, shifted mean 5)
Figure 5. Rates of detected outliers versus sample size m (case B, 10% outliers, shifted mean 5)
Figure 6. Rates of detected outliers versus sample size m (case B, 20% outliers, shifted mean 5)
3.1. Empirical case study
The data set of Vargas [28] was employed so that the findings of the suggested technique could be contrasted with those of the conventional and adjusted control charts. The data comprise two random variables, X1 and X2, elicited from 30 different products of the production process; Vargas [28] used two variables from the Quesenberry data set. Table 2 reports the observations of these random variables together with the values of the new Hotelling's 𝑇² and the conventional 𝑇² chart statistics.
Simulation was used to calculate the UCL for the robust chart; the UCL was established at 9.4787 for α = 0.05. With this limit, the conventional chart detected only one outlier, namely observation 2, while the robust chart was able to detect five outliers, namely observations 2, 5, 14, 17, and 20.
Table 2. 𝑇² statistics for the 30 bivariate observations (X1, X2) of the Vargas data set [28], employing the conventional estimator (T0²) and the winsorized MOM estimator (TtQn²); the X1 and X2 readings themselves are as reported in [28]

Product No    T0²        TtQn²
1             0.807      4.8584
2             12.975     31.451
3             0.1373     1.026
4             1.8375     4.9479
5             1.5697     28.0655
6             0.33       1.4439
7             0.977      1.8793
8             0.904      2.68149
9             0.1269     0.2652
10            0.801      3.1888
11            0.7192     0.9387
12            0.910      1.84799
13            0.483      2.8319
14            5.2413     13.2413
15            0.073      0.97578
16            3.5357     8.3317
17            2.2696     10.0502
18            3.2442     8.8545
19            1.398      2.04784
20            6.8326     14.425
21            1.8978     3.8182
22            3.3564     6.66
23            0.427546   1.7246
24            1.1838     5.30574
25            1.4968     7.3952
26            0.48432    0.3187
27            0.28989    2.24539
28            2.0635     7.575022
29            1.38596    1.279114
30            0.24043    0.152033
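As a quick cross-check of Table 2 (not the authors' code), the following snippet flags every product whose tabulated 𝑇² statistic exceeds the simulated UCL of 9.4787; it reproduces the signals reported above.

```python
# T^2 values transcribed from Table 2; UCL from the simulation (alpha = 0.05)
UCL = 9.4787
t2_conv = [0.807, 12.975, 0.1373, 1.8375, 1.5697, 0.33, 0.977, 0.904,
           0.1269, 0.801, 0.7192, 0.910, 0.483, 5.2413, 0.073, 3.5357,
           2.2696, 3.2442, 1.398, 6.8326, 1.8978, 3.3564, 0.427546,
           1.1838, 1.4968, 0.48432, 0.28989, 2.0635, 1.38596, 0.24043]
t2_robust = [4.8584, 31.451, 1.026, 4.9479, 28.0655, 1.4439, 1.8793,
             2.68149, 0.2652, 3.1888, 0.9387, 1.84799, 2.8319, 13.2413,
             0.97578, 8.3317, 10.0502, 8.8545, 2.04784, 14.425, 3.8182,
             6.66, 1.7246, 5.30574, 7.3952, 0.3187, 2.24539, 7.575022,
             1.279114, 0.152033]

for name, values in [("conventional", t2_conv), ("robust", t2_robust)]:
    flagged = [i + 1 for i, v in enumerate(values) if v > UCL]
    print(name, "chart signals products:", flagged)
# conventional chart signals products: [2]
# robust chart signals products: [2, 5, 14, 17, 20]
```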
4. CONCLUSION
In this study, the trimmed mean and the trimmed covariance matrix were used in the Hotelling's 𝑇² chart as the location vector and the scale covariance matrix, respectively. The modified chart and the conventional chart were compared to reveal their performance in detecting outliers and controlling false alarms. To this end, two cases were utilized, case (A) and case (B): the former contains independent variables, while the latter contains dependent variables. The simulation results show that the modified chart was able to control the false alarm rate under most conditions. Surprisingly, this ability began to decline as the shifted mean vector µ and the proportion of outliers increased. The robust chart demonstrated a superior probability of detecting outliers compared with the conventional 𝑇² chart.
REFERENCES
[1] Alt, F. B. (Ed.), “Multivariate quality control,” Encyclopedia of Statistical Sciences, vol. 6, pp. 110-122, 1985.
[2] Montgomery, D. C. (Ed.)., “Introduction to Statistical Quality Control,” John Wiley and Sons Inc, 2005.
[3] Mostajeran A., Iranpanah N., and Noorossana R., “An evaluation of the multivariate dispersion charts with
estimated parameters under non-normality,” Applied Stochastic Models in Business and Industry, vol. 33, no. 6,
pp. 694-716, 2017.
[4] Haddad F. and Alsmadi M. K., “Improvement of The Hotelling’s T2 Charts Using Robust Location Winsorized
One Step M-Estimator (WMOM),” Punjab University, Journal of Mathematics, vol. 50, no. 1, pp. 97-112, 2018.
[5] Alloway, J. A. J., Raghavachari, M., “Multivariate Control Charts Based on Trimmed Mean,” ASQC Quality
Congress Transactions-San Francisco, pp. 449-453, 1990.
[6] Surtihadi J., “Multivariate and robust control charts for location and dispersion,” Diss. Rensselaer Polytechnic
Institute, 1994.
[7] Alfaro, J. L., and Ortega, J. F., “A Robust Alternative to Hotelling’s T2 Control Chart Using Trimmed Estimators,”
Quality and Reliability Engineering International, vol. 24, no. 5, pp. 601-611, 2008.
[8] Jones L. A. and Woodall W. H., “The performance of bootstrap control charts,” Journal of Quality Technology,
vol. 30, no. 4, pp. 362-375, 1998.
[9] P. Phaladiganon, et al., “Bootstrap-based T2 multivariate control charts,” Communications in Statistics, vol. 40,
no. 5, pp. 645-662, 2011.
[10] Gandy A. and Kvaloy J. T., “Guaranteed Conditional Performance of Control Charts via Bootstrap Methods,”
Scandinavian Journal of Statistics, vol. 40, no. 4, pp. 647-668, 2013.
[11] Ogbeide Efosa Micheal and Edopka I. W., “Bootstrap Approach Control limit for statistical quality control,”
International Journal of Engineering Science Invention, vol. 2, no. 4, pp. 2319-6734, 2013.
[12] Mostajeran A., Iranpanah N., and Noorossana R., “A New Bootstrap Based Algorithm for Hotelling’s T2
Multivariate Control Chart,” Journal of Sciences, Islamic Republic of Iran, vol. 27, no. 3, pp. 269-278, 2016.
[13] Mostajeran A., Iranpanah N., and Noorossana R., “An explanatory study on the non-parametric multivariate T2
control chart,” Journal of Modern Applied Statistical Methods, vol. 17, no. 1, 2018.
[14] Syed Yahaya S. S., et al., “Robust Hotelling’s Charts with Median Based Trimmed Estimators,” Journal of
Engineering and Applied Sciences, vol. 14, no. 24, pp. 9632-9638, 2019.
[15] Haddad F. S., “Improving Bivariate Hotelling’s T2 Chart Using New Robust Estimators (WMOM with Qn ) and
Bootstrap Data,” International Journal of Applied Engineering Research, vol. 14, no. 21, pp. 4001-4009, 2019.
[16] Rocke, D. M., Downs, G. W., and Rocke, A. J., “Are robust estimators really necessary?,” Technometrics, vol. 24,
no. 2, pp. 95-101, 1982.
[17] Tukey, J. W., and McLaughlin, D. H., “Less Vulnerable Confidence and Significance Procedures for Location
Based on a Single Sample,” Sankhyā: The Indian Journal of Statistics, Series A, vol. 25, pp. 331-352, 1963.
[18] Rosenberger, J. L., and Gasko, M., “Comparing location estimators: Trimmed means, medians, and trimean,”
Understanding Robust and Exploratory Data Analysis, pp. 297-336, 1983.
[19] Wilcox, R. R., “ANOVA: A paradigm for low power and misleading measures of effect size?,” Review of
Educational Research, vol. 65, no. 1, pp. 51-77, 1995.
[20] Croux, C., and Rousseeuw, P. J., “Time-efficient algorithms for two highly robust estimators of scale,”
Computational Statistics, vol. 2, pp. 411-428, 1992.
[21] Abdullah, M. B., “On robust correlation coefficient,” Statistician, vol. 39, pp. 455-460, 1990.
[22] Johnson, M. E. (Ed.), “Multivariate Statistical Simulation,” New York: John Wiley and Sons, 1987.
[23] Johnson, N., “A comparative simulation study of robust estimators of standard error,” Brigham Young Univ., 2007.
[24] Alfaro, J. L., and Ortega, J. F., “A comparison of robust alternatives to Hotelling’s T2 control chart,” Journal of
Applied Statistics, vol. 36, no. 12, pp. 1385-1396, 2009.
[25] Chenouri, S., Variyath, A. M., and Steiner, S. H., “A Multivariate Robust Control Chart for Individual
Observations,” Journal of Quality Technology, vol. 41, no. 3, pp. 259-271, 2009.
[26] Jensen, W. A., Birch, J. B., and Woodall, W. H., “High Breakdown Estimation Methods for Phase I Multivariate
Control Charts,” Quality and Reliability Engineering International, vol. 23, no. 5, pp. 615-629, 2007.
[27] Holmes, D. S., and Mergen, A. E., “Identifying the sources for out of control signals when the T2 control chart is
used,” Quality Engineering, vol. 8, no. 1, pp. 137-143, 1995.
[28] Vargas, J. A., “Robust Estimation in Multivariate Control Charts for Individual Observations,” Journal of Quality
Technology, vol. 35, pp. 367-376, 2003.