Identification of Manufacturing Processes Signature by a Principal Component Based Approach
Bianca M. Colosimo (1), Andrea N. Intieri (1), Massimo Pacella (2)
(1) Dipartimento di Meccanica, Politecnico di Milano, Italy.
(2) Dipartimento di Ingegneria dell’Innovazione, Università degli Studi di Lecce, Italy.
Abstract
Machined surfaces and profiles often present a systematic pattern, usually referred to as “the signature” of the process. The advantages related to the identification of the process signature have been clearly outlined in the literature. The proposed approaches are mainly based on parametric models, in which the signature is described as a combination of analytical functions (predictors) that have to be properly chosen by the analyst depending on the specific case at hand. Analytical tools that do not use a parametric model to describe profiles have also been presented in the so-called “profile monitoring” research field. In particular, Principal Component Analysis (PCA), a statistical technique used to identify patterns in multivariate data, was successfully applied in chemometrics. In this paper, the use of PCA is investigated for process signature modelling in the case of machined profiles. The goal is to describe a general-purpose approach, which relieves the analyst of the need to identify a suitable kind of analytical function for the statistical description of machined profiles. The illustration of the PCA method is based on real measurement data of circular items obtained by turning.
Keywords:
Profile, Process’ signature, Principal Component Analysis (PCA), Roundness, Turning.
1 INTRODUCTION
Machined surfaces and profiles often present a
systematic pattern and a superimposed random noise:
the first is mainly due to the process used in specific
operation conditions, the second is due to unpredictable
factors, and is usually called “natural variability”. The
systematic pattern constitutes what we will call “the
signature” of the process.
Advantages related to the identification of process’
signature have been clearly outlined in the literature.
When a model of the signature is available, it can be used
to improve quality monitoring (e.g., quickly detecting
whether the process is deviating from its natural behaviour) and quality control (e.g., deciding on appropriate corrective actions to be taken). With reference to monitoring, the machined signature can be considered similar to a profile. Hence, approaches recently proposed in the profile-monitoring literature can in principle be adopted. For example, in [1] the authors discussed the
statistical control of the functional relationship between
the temperature of a mass flow controller in the
microelectronic industry and the flow of gas released. In
[2] the curves considered were obtained from the spectral
analysis of a chemical solution when the concentration of
a mixture is of interest. In [3] the authors introduced an
approach to monitoring the density profile of engineered
wood boards. In [4] the use of profile monitoring for
calibration applications was discussed.
After the seminal work of Weckenmann et al. [5],
the advantages of identifying the signature of manufacturing processes have also been widely shown in the literature. These approaches are mainly based on
parametric models, in which the manufacturing signature
characterizing the profile is described as a linear
combination of analytical functions (predictors). These
predictors have to be properly chosen by the analyst
depending on the specific case faced.
For example in turning operations, roundness observed
on machined items is mainly due to systematic radial
spindle error motions which affect that specific machine
tool, as shown in [6]. In these cases, a common choice is to model radial deviations with periodic functions. Several researchers discussed the
modelling of roundness error by fitting a Fourier series,
i.e. by sinusoidal functions at several frequencies used as
predictors. In other applications, wavelets and splines can
be selected as predictors to model complex signatures,
as reported in [7]. For instance, wavelet functions should be used instead of sinusoidal ones to model step changes in the profile, e.g., in the presence of pockets.
As previously mentioned, when the manufacturing signature is described by means of parametric models, a cumbersome task left to the analyst is selecting the proper type of predictor functions to be used. Furthermore, measurement data are most of the time autocorrelated, because they are obtained under similar conditions of the machining process and of the measurement system. In these cases, an appropriate model should also be defined to describe the autocorrelation structure which characterizes the manufacturing signature.
Different analytical methods, which do not require a
specific parametric description of the profile under study,
were also presented in some applications of profile
monitoring. These approaches make use of Principal
Component Analysis (PCA), a statistical technique aimed
at identifying patterns in multivariate data [8][9]. PCA is particularly effective because it does not require the analyst to identify suitable kinds of predictors for the statistical description of the sampled surface at hand. In particular, PCA was successfully applied in chemometrics (a research area that combines data analysis and multivariate statistics in order to improve chemical industrial plants). In [10] the authors applied
PCA for monitoring a chemical chromatography process.
Similarly, in [2] the authors applied PCA for monitoring a
profile in a chemical process.
In this paper, the use of PCA is investigated for process’
signature modelling in the case of machined profiles. In
particular, the illustration of the PCA method is based on
real measurement data of circular items obtained by turning. The objective of this study is to investigate the advantages related to the use of PCA for process signature identification.
This paper is organized as follows. In section 2, properties of the canonical PCA are briefly discussed. Section 3 presents the experimental work carried out in this research. In section 4, PCA is applied to the measurement data, while section 5 discusses the robustness of the proposed approach with respect to filtering of the original data. Finally, the last section reports the conclusions and some final remarks.
2 PRINCIPAL COMPONENT ANALYSIS
Principal Component Analysis (PCA) is a general statistical approach that explains the variability observed in a set of multivariate data by means of a small number of components, namely the principal components, which are obtained as linear combinations of the original variables. These components are able to summarize most of the information contained in the original data and allow one to better identify the different sources of variation affecting the process. PCA is often the first step of data analysis when data reduction is required for the high-dimensional data frequently encountered in chemometrics, computer vision and other domains. An exhaustive description of PCA can be found in [11]. A rough sketch of how PCA works is reported in the following.
Assume that n profiles are collected, each consisting of p equally spaced measurement points. Let $y_{jk}$ denote the k-th point observed on the j-th profile, where $k=1,\dots,p$ and $j=1,\dots,n$. The measurements can hence be summarized in the following $n\times p$ data matrix $\mathbf{Y}$:

$$
\mathbf{Y}=\begin{bmatrix}\mathbf{y}_1^T\\ \vdots\\ \mathbf{y}_j^T\\ \vdots\\ \mathbf{y}_n^T\end{bmatrix}
=\begin{bmatrix}
y_{11} & \cdots & y_{1k} & \cdots & y_{1p}\\
\vdots & & \vdots & & \vdots\\
y_{j1} & \cdots & y_{jk} & \cdots & y_{jp}\\
\vdots & & \vdots & & \vdots\\
y_{n1} & \cdots & y_{nk} & \cdots & y_{np}
\end{bmatrix} \qquad (1)
$$

where $\mathbf{y}_j=\begin{bmatrix}y_{j1} & y_{j2} & \cdots & y_{jp}\end{bmatrix}^T$ is the (column) vector containing the j-th profile ($j=1,\dots,n$); hence each row in (1) contains the data points belonging to a specific profile.
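As an illustration, the following minimal sketch (Python with NumPy; the paper provides no code, and all variable names are illustrative) assembles n measured profiles into the n x p data matrix Y of equation (1), using simulated radial deviations in place of the real measurements.

import numpy as np

n, p = 100, 748                          # profiles and points per profile, as in the paper
rng = np.random.default_rng(0)
profiles = [rng.normal(0.0, 0.001, size=p) for _ in range(n)]   # stand-in for measured deviations

Y = np.vstack(profiles)                  # n x p data matrix, one profile per row
print(Y.shape)                           # (100, 748)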
Denote with $\bar{\mathbf{y}}$ the average profile, given by $\bar{\mathbf{y}}=\frac{1}{n}\sum_{j=1}^{n}\mathbf{y}_j$. Given $\bar{\mathbf{y}}$, a new matrix $\tilde{\mathbf{Y}}=\mathbf{Y}-\mathbf{i}_n\bar{\mathbf{y}}^T$ can be computed, where $\mathbf{i}_n$ is an n-dimensional column vector of ones, i.e. $\mathbf{i}_n=\begin{bmatrix}1 & 1 & \cdots & 1\end{bmatrix}^T$. The matrix $\tilde{\mathbf{Y}}$ is thus obtained by subtracting the average profile $\bar{\mathbf{y}}$ from each profile $\mathbf{y}_j$, and it represents the deviations of each profile from the average one. Using this new matrix, the covariance matrix of the original matrix $\mathbf{Y}$ can be rewritten as:

$$
\mathbf{S}_1=\frac{1}{n-1}\tilde{\mathbf{Y}}^T\tilde{\mathbf{Y}}=\frac{1}{n-1}\sum_{j=1}^{n}\left(\mathbf{y}_j-\bar{\mathbf{y}}\right)\left(\mathbf{y}_j-\bar{\mathbf{y}}\right)^T \qquad (2)
$$
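A minimal sketch of this centring and covariance computation (equation (2)) could read as follows; the data are again simulated and the names are not taken from the paper.

import numpy as np

n, p = 100, 748
Y = np.random.default_rng(0).normal(0.0, 0.001, size=(n, p))    # simulated n x p data matrix

y_bar = Y.mean(axis=0)                   # average profile
Y_dev = Y - y_bar                        # deviations of each profile from the average one
S1 = (Y_dev.T @ Y_dev) / (n - 1)         # p x p sample covariance matrix of equation (2)
# Equivalent built-in: np.cov(Y, rowvar=False)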
The PCA method consists in finding the matrices $\mathbf{L}$ and $\mathbf{U}$ which satisfy the following relationship:

$$
\mathbf{U}^T\mathbf{S}_1\mathbf{U}=\mathbf{L} \qquad (3)
$$

where the matrix $\mathbf{L}$ is a diagonal matrix and contains the eigenvalues of $\mathbf{S}_1$:

$$
\mathbf{L}=\begin{bmatrix}
l_1 & \cdots & 0 & \cdots & 0\\
\vdots & \ddots & & & \vdots\\
0 & & l_k & & 0\\
\vdots & & & \ddots & \vdots\\
0 & \cdots & 0 & \cdots & l_p
\end{bmatrix} \qquad (4)
$$
Without loss of generality, the eigenvalues are assumed to be ranked in decreasing order (i.e., $l_1>l_2>\dots>l_p\geq 0$). The matrix of vectors $\mathbf{U}=\begin{bmatrix}\mathbf{u}_1 & \cdots & \mathbf{u}_k & \cdots & \mathbf{u}_p\end{bmatrix}$ is instead orthonormal and is composed of the eigenvectors of $\mathbf{S}_1$ ($\mathbf{u}_k$, $k=1,\dots,p$), which form a new orthonormal basis for the space spanned by $\mathbf{Y}$. It is worth noticing that when the covariance matrix $\mathbf{S}_1$ is singular, a subset of the eigenvalues will be equal to zero. For example, in the case of high-dimensional data, where the number p of data points collected on each profile is greater than the number n of sampled profiles, the covariance matrix will have rank at most $n-1$, and hence just the first $n-1$ eigenvalues will be greater than zero (i.e., $l_1>l_2>\dots>l_{n-1}>0$) while the remaining ones will all be equal to zero (i.e., $l_n=\dots=l_p=0$). In this case, the number of principal components will be at most $n-1$.
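The eigendecomposition of equations (3)-(4), with the eigenvalues re-ranked in decreasing order, can be sketched as follows (again an illustrative NumPy snippet on simulated data, not the authors' code). Since p > n, only the first n-1 eigenvalues are expected to be numerically non-zero.

import numpy as np

n, p = 100, 748
Y = np.random.default_rng(0).normal(0.0, 0.001, size=(n, p))
S1 = np.cov(Y, rowvar=False)             # p x p covariance matrix

l, U = np.linalg.eigh(S1)                # eigenvalues (ascending) and orthonormal eigenvectors
order = np.argsort(l)[::-1]              # re-rank in decreasing order: l_1 > l_2 > ...
l, U = l[order], U[:, order]

print(int(np.sum(l > 1e-12 * l[0])))     # effective rank, at most n - 1 = 99

For data with p much larger than n, an equivalent and cheaper route is the singular value decomposition of the centred data matrix; the eigendecomposition above simply follows the formulation of equations (3)-(4) directly.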
Given the eigenvector matrix $\mathbf{U}$, each profile $\mathbf{y}_j$ can be projected onto the directions identified by the eigenvectors, i.e.:

$$
\mathbf{z}_j=\mathbf{U}^T\left(\mathbf{y}_j-\bar{\mathbf{y}}\right)=\mathbf{U}^T\tilde{\mathbf{y}}_j=\begin{bmatrix}z_{j1} & \cdots & z_{jk} & \cdots & z_{jp}\end{bmatrix}^T \qquad (5)
$$

where $j=1,\dots,n$. The new variables are called principal components, while the values assumed by these variables with reference to the j-th observation, i.e., $z_{j1},\dots,z_{jk},\dots,z_{jp}$, are called “scores” and represent the weight that each new principal component has in explaining this particular observation. In particular, the first principal component corresponds to the direction in which the projected observations have the largest variance. The second component is orthogonal to the first one and corresponds to the second direction in which the variance of the data is largest, and so on.
Given that the matrix $\mathbf{U}$ is orthonormal, one can easily show that $\mathbf{y}_j=\bar{\mathbf{y}}+\mathbf{U}\mathbf{z}_j$, for $j=1,\dots,n$. In other words, each original observation can be obtained from its scores:

$$
\begin{bmatrix}y_{j1}\\ \vdots\\ y_{jp}\end{bmatrix}
=\begin{bmatrix}\bar{y}_1\\ \vdots\\ \bar{y}_p\end{bmatrix}
+\begin{bmatrix}u_{11} & \cdots & u_{1p}\\ \vdots & & \vdots\\ u_{p1} & \cdots & u_{pp}\end{bmatrix}
\begin{bmatrix}z_{j1}\\ \vdots\\ z_{jp}\end{bmatrix},
\quad\text{i.e.}\quad
\mathbf{y}_j=\bar{\mathbf{y}}+z_{j1}\mathbf{u}_1+z_{j2}\mathbf{u}_2+\dots+z_{jp}\mathbf{u}_p \qquad (6)
$$

where $j=1,\dots,n$.
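A minimal sketch of the projection (5) and of the exact reconstruction (6), again with simulated data and illustrative names:

import numpy as np

n, p = 100, 748
Y = np.random.default_rng(0).normal(0.0, 0.001, size=(n, p))
y_bar = Y.mean(axis=0)

l, U = np.linalg.eigh(np.cov(Y, rowvar=False))
order = np.argsort(l)[::-1]
l, U = l[order], U[:, order]

Z = (Y - y_bar) @ U                      # scores: row j holds z_j1, ..., z_jp (equation (5))
Y_back = y_bar + Z @ U.T                 # reconstruction from all the scores (equation (6))
print(np.allclose(Y, Y_back))            # True: the full set of PCs reproduces the data exactly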
The principal components (PCs) obtained are statistically
independent and each PC has a variance equal to the
corresponding eigenvalue. Therefore, we can rank the
PCs, i.e. the eigenvectors, according to the associated
eigenvalue and decide to retain just the most important
PCs (the ones which correspond to greater values of the
associated variance), while discarding the less important
ones (i.e., the ones which are associated with smaller
variance).
When the whole set of p PCs is considered, the original data are obtained starting from the scores by using equation (6). When instead just the first K ($K<p$) most important PCs are considered, the original data can just be estimated as follows:

$$
\hat{\mathbf{y}}_j(K)=\bar{\mathbf{y}}+z_{j1}\mathbf{u}_1+z_{j2}\mathbf{u}_2+\dots+z_{jK}\mathbf{u}_K \qquad (7)
$$
The selection of the proper number of PCs is a critical step: when the number of PCs retained in the model is too small, the resulting model will not be able to represent all the significant variability contained in the original data; when the number of PCs retained is excessive, the resulting model will include some random variability, yielding a model that tries to explain even the random noise of the process.
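The truncated reconstruction of equation (7) and the trade-off just described can be sketched as follows; K = 10 is an arbitrary illustrative choice, not a recommendation from the paper.

import numpy as np

n, p, K = 100, 748, 10                   # K is an arbitrary number of retained PCs
Y = np.random.default_rng(0).normal(0.0, 0.001, size=(n, p))
y_bar = Y.mean(axis=0)

l, U = np.linalg.eigh(np.cov(Y, rowvar=False))
order = np.argsort(l)[::-1]
l, U = l[order], U[:, order]

Z_K = (Y - y_bar) @ U[:, :K]             # scores on the first K PCs only
Y_hat = y_bar + Z_K @ U[:, :K].T         # rank-K estimate of the profiles (equation (7))
print(np.linalg.norm(Y - Y_hat))         # residual carried by the discarded PCs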
3 EXPERIMENTAL ROUNDNESS PROFILES
The quality of mechanical components is increasingly related to geometric tolerances, e.g., roundness, flatness, etc. Among the different geometric specifications, roundness plays a relevant role in circular and cylindrical parts, where functionality is directly related to the rotation of the component. For instance, roundness is critically related to the proper functioning of rotating shafts, pistons and cylinders.
Here, PCA is applied to measurement data of roundness profiles obtained by turning. In particular, the experimental data consist of a set of $n=100$ components machined by turning C20 carbon steel cylinders (which were supplied as rolled bars of diameter $\varnothing=30$ mm). The final diameter of $\varnothing=26$ mm was obtained by performing two turning steps (cutting speed = 163 m/min, feed rate = 0.2 mm/rev), where for each step the depth of cut was equal to 1 mm.
The machined surfaces were eventually scanned on a coordinate measuring machine. According to the standard (ISO/TC 213, 2003 [13]), each roundness profile was described by $p=748$ equally spaced measurements of the radius.
As discussed in [7], data in each sample have to be pre-treated to focus just on the roundness form error. In particular, data were rescaled by subtracting the least squares estimate of the radius, and by re-centring the profile on the least squares estimate of the centre. As a matter of fact, the out-of-roundness does not depend on the centre's position or on the average radius. Secondly, a further step of profile alignment was required. This alignment was needed because the tool starts machining the profile at a point which is random in each sample. Profile alignment allows a common reference system to be identified and can be performed by minimizing phase delays.
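One possible implementation of this pre-treatment is sketched below (Python/NumPy). It uses the usual small-eccentricity (limaçon) approximation, in which the least-squares radius and centre are obtained by regressing the measured radii on a constant, cos θ and sin θ; the paper does not prescribe this exact routine, so the snippet should be read as an assumption-laden illustration.

import numpy as np

p = 748
theta = 2 * np.pi * np.arange(p) / p     # theta_k = 2*pi*(k-1)/p
rng = np.random.default_rng(0)
r = 13.0 + 0.002 * np.cos(2 * theta) + rng.normal(0.0, 1e-4, p)   # simulated radii (mm)

# Least-squares fit of r ~ R + a*cos(theta) + b*sin(theta): R estimates the radius,
# (a, b) the centre offset (limacon approximation for small eccentricity).
X = np.column_stack([np.ones(p), np.cos(theta), np.sin(theta)])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
form_error = r - X @ coef                # roundness form error: radius and centre removed

# Alignment to a common angular reference could then be done, for instance, by
# circularly shifting each profile to maximise its cross-correlation with a reference profile.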
In a polar representation, the j-th sampled profile can be described as a sequence of deviations of the measured radius from the nominal radius, $r_j(\theta_k)$, where $\theta_k=2\pi(k-1)/p$ is the angular position and $k=1,\dots,p$. A polar representation of the experimental data is given in Figure 1. It can be observed that all the roundness profiles share a common behaviour, i.e. the turning process leaves a fingerprint, or signature, on the machined components.
Even if there is a systematic behaviour characterizing all the profiles obtained, variation can be noted from profile to profile. As an example, Figure 2 depicts the difference between the average profile (i.e. the one obtained by averaging all the profiles collected, represented by a bold line) and one specific profile (in this figure, the first profile is reported and represented by the dashed line).
Figure 1: Polar diagram of 100 experimental profiles.
Figure 2: Polar diagram of average profile (bold line) and
profile no 1 (dashed line).
Indeed, even when the process is run with given control parameters, one cannot expect to observe exactly the same profile, since a profile-to-profile variability is often present.
As stated in [14]: “Common-cause variation is that
variation considered to be characteristic of the process
and that cannot be reduced substantially without
fundamental process changes. It must be determined how
much of the profile-to-profile variation is part of the
common-cause variation and should be incorporated” into
the model of the signature.
PCA helps in identifying this profile-to-profile variation because it basically describes the main ways in which the generic profile varies with respect to the average profile. Hence the average profile can be considered as a model of the “mean” or expected pattern characterizing the generic profile, while the principal components model the principal ways in which a generic profile can vary with respect to this expected behaviour.
4 APPLICATION OF PCA
In this section, the application of PCA is illustrated on the profile data set previously described. By setting $y_{jk}=r_j(\theta_k)$, with $k=1,\dots,748$ and $j=1,\dots,100$, the measurements are collected in an $n\times p$ data matrix $\mathbf{Y}$, where $n=100$ and $p=748$.
In this case of high-dimensional data, where $p>n$, the covariance matrix of $\mathbf{Y}$ has rank at most $n-1$ and hence at most $n-1$ significant principal components can be considered. Usually, among such principal components only the first few eigenvectors are associated with systematic variation in the data, while the remaining ones are associated with noise. Noise, in this case, refers to uncontrolled experimental and instrumental variations arising from random sources. PCA models are formed by retaining only the PCs which represent systematic variation in the data.
In order to select a proper number of PCs, the eigenvalue corresponding to each PC can be examined. Without loss of generality, assume that the PCs are ranked in decreasing order of the corresponding eigenvalue, i.e., $l_1>l_2>\dots>l_{n-1}>0$. Hence, the variability explained by the j-th PC can be expressed as the ratio between the corresponding eigenvalue $l_j$ and the sum $\sum_{j=1}^{n-1}l_j$. Therefore, the cumulative variability explained by the first k dominant PCs is given by:

$$
\sum_{j=1}^{k}l_j\Big/\sum_{j=1}^{n-1}l_j \qquad (8)
$$
Table 1 illustrates the variability explained by each of the
first ten PCs, as well as the cumulative variability
explained by using these PCs.
It can be noted that, by using the first 10 PCs, 52.88% of the total variability observed in the original machined profiles is described.
PC Variability Explained Cumulative Variability Explained
1 13.87% 13.87%
2 10.41% 24.28%
3 7.10% 31.38%
4 4.77% 36.15%
5 3.78% 39.92%
6 3.31% 43.24%
7 2.79% 46.02%
8 2.63% 48.65%
9 2.22% 50.87%
10 2.00% 52.88%
Table 1: Variability explained by the first ten dominant
PCs on the original data.
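Equation (8) and the entries of Table 1 can be computed with a short snippet of this kind (illustrative NumPy code on simulated data; applied to the real measurements it would yield the percentages reported in the table).

import numpy as np

n, p = 100, 748
Y = np.random.default_rng(0).normal(0.0, 0.001, size=(n, p))    # simulated stand-in for the real data

l = np.linalg.eigvalsh(np.cov(Y, rowvar=False))[::-1]           # eigenvalues in decreasing order
l = l[: n - 1]                            # only the first n-1 eigenvalues can be non-zero

explained = l / l.sum()                   # share of variability explained by each PC
cumulative = np.cumsum(explained)         # cumulative share of the first k PCs (equation (8))
print(explained[:10], cumulative[:10])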
In order to identify a proper meaning for this set of significant PCs, Figure 3 depicts the first six dominant eigenvectors $\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3,\mathbf{u}_4,\mathbf{u}_5,\mathbf{u}_6$ in polar diagrams. As can be observed from Figure 3, the first eigenvector, which describes the most important component of variability, represents a bi-lobe form error around the average profile. The second and third eigenvectors, on the other hand, present two different three-lobe form errors with different orientations. Similarly, the fourth and fifth eigenvectors present four-lobe form errors.
As discussed in Cho and Tu (2001) [6], such lobe-form errors often characterize roundness profiles obtained by turning, being the result of the spindle error motions which are very common in this process.
Figure 3: Polar diagrams of the first six dominant PCs of
original data.
5 FILTERING
Traditional methods to analyze one-dimensional
measurements usually involve applying a certain filtering
process in order to separate the roughness and waviness
components of the measured profile. After filtering, the
roughness and waviness components could be added
again to represent the original profile.
In this section, the effect of longwave-pass filtering on the
PCA of measurement data is investigated. In particular, a
linear Gaussian filter is used, since this is the current
state-of-the-art in ISO standards.
In Gaussian filtering, a series of Gaussian curves is fitted to the data at each data point by averaging over an interval, which can be specified by the stylus tip radius, the trace length, the number of data points collected, and the step size. This filter produces a mean line through the data set, or waviness component, which is then subtracted from the original curve to yield the roughness component.
According to the standard ISO/TS 12181-2:2003(E) [15], the Gaussian longwave-pass filter is defined in the frequency domain by the following attenuation function:

$$
\frac{a_1}{a_0}=\exp\left[-\pi\left(\alpha\,\frac{f}{f_c}\right)^2\right] \qquad (10)
$$

where $\alpha=\sqrt{\ln 2/\pi}=0.4697$, $a_0$ is the amplitude of the sine wave undulation before filtering, $a_1$ is the amplitude of this sine undulation after filtering, $f_c$ is the cut-off frequency (in undulations per revolution, UPR) of the longwave-pass filter, and finally $f$ is the frequency of the sine wave (in UPR).
The longwave-pass (shortwave-pass) Gaussian filter is a so-called “phase-correct filter”, since it attenuates the high-frequency (low-frequency) harmonics of the measurement data without altering their phases. With this method, the waviness (low-frequency harmonics) and roughness (high-frequency harmonics) components can be added back together to recreate the original profile.
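A frequency-domain implementation of the longwave-pass filter of equation (10) can be sketched as follows (Python/NumPy, FFT-based; this is one plausible realisation, not necessarily the one used by the authors or mandated by the standard). For a closed roundness profile the harmonic number of the discrete Fourier transform coincides with the frequency in UPR.

import numpy as np

p, fc = 748, 50                           # samples per revolution and cut-off frequency (UPR)
alpha = np.sqrt(np.log(2) / np.pi)        # 0.4697, as in equation (10)

rng = np.random.default_rng(0)
k = np.arange(p)
profile = 0.002 * np.cos(2 * 2 * np.pi * k / p) + rng.normal(0.0, 1e-4, p)   # simulated profile

F = np.fft.rfft(profile)
f = np.arange(F.size)                     # harmonic number = frequency in UPR
attenuation = np.exp(-np.pi * (alpha * f / fc) ** 2)     # equation (10); real-valued, so phases unchanged
waviness = np.fft.irfft(F * attenuation, n=p)            # longwave-pass (smoothed) profile
roughness = profile - waviness            # the two components add back to the original profile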
Figure 4 depicts the data after passing through the Gaussian filtering process. In particular, a longwave-pass filtering of the original data is considered, with cut-off frequency equal to $f_c=50$ UPR. Compared to the original data, the filtered profiles appear smoother.
Figure 4: Polar diagram of 100 experimental profiles
(longwave-pass filtered version with cut-off frequency 50
UPR).
Results obtained by applying PCA to these filtered profiles are reported in Table 2 and in Figure 5. From the table, it can be noted that, when the first 10 PCs are exploited to describe the machined profiles, the ratio of total variability explained is 74.47%. From the figure, it can be noted that the PCs are smoothed versions of the original ones.
Similar results are obtained by applying a longwave-pass filtering process to the original data with cut-off frequency equal to $f_c=15$ UPR. Figure 6 shows that the smoothing effect of this longwave-pass filter is even more evident, and Figure 7 reports the corresponding first six dominant PCs.
Table 3 shows that the variability explained by the first few PCs increases as the data are treated with a longwave-pass filter. Therefore, filtering can help in selecting the most important PCs.
PC Variability Explained Cumulative Variability Explained
1 20.93% 20.93%
2 15.41% 36.34%
3 10.30% 46.63%
4 6.62% 53.26%
5 5.24% 58.50%
6 4.27% 62.76%
7 3.36% 66.12%
8 3.29% 69.41%
9 2.64% 72.05%
10 2.42% 74.47%
Table 2: Variability explained by the first ten dominant
PCs on the longwave-pass filtered data (cut-off frequency
50 UPR).
Figure 5: Polar diagrams of the first six dominant PCs of
the longwave-pass filtered data (cut-off frequency 50
UPR).
Figure 6: Polar diagram of 100 experimental profiles
(longwave-pass filtered version with cut-off frequency 15
UPR).
PC Variability Explained Cumulative Variability Explained
1 27.61% 27.61%
2 19.65% 47.26%
3 12.77% 60.03%
4 7.62% 67.65%
5 6.07% 73.72%
6 4.23% 77.95%
7 3.57% 81.52%
8 2.36% 83.89%
9 2.10% 85.99%
10 1.69% 87.68%
Table 3: Variability explained by the first ten dominant
PCs on the longwave-pass filtered data (cut-off frequency
15 UPR).
Figure 7: Polar diagrams of the first six dominant PCs of
the longwave-pass filtered data (cut-off frequency 15
UPR).
6 SUMMARY
This work relies on the idea of identifying the “fingerprint” of the manufacturing process by means of statistical techniques. In this paper, a PCA-based method was investigated to this aim.
PCA was used to explain the variance-covariance structure of profile data through a few principal components (PCs), which are linear combinations of the original data collected on each profile. PCA was indeed applied to real profile data, representing roundness profiles obtained by turning. In this case, it was shown that the first set of most important PCs has a clear physical meaning, since these PCs can be associated with the lobe-form errors typically induced by the spindle-motion errors which leave their fingerprint on the machined roundness profiles [6]. The effect of data filtering was further investigated, and the PCA results proved to be robust to data filtering.
Further research on the application of PCA to the process signature is in order. Firstly, the PCA approach will be applied within a monitoring strategy aimed at detecting out-of-control conditions in an SPC framework.
7 ACKNOWLEDGMENTS
This work was carried out with the funding of the Italian
M.I.U.R. (Ministry of Education, University, and
Research).
8 REFERENCES
[1] Kang, L. and Albin, S. L., 2000, On line monitoring
when the process yields a linear profile, Journal of
Quality Technology, 32, 418-426.
[2] Mestek, O., Pavlìk, J. and Suchànek, M., 1994,
Multivariate control charts: control charts for
calibration curves, Fresenius’ Journal of Analytical
Chemistry, 350, 344-351.
[3] Walker, E. and Wright, S. P., 2002, Comparing curves using additive models, Journal of Quality Technology, 34, 118-129.
[4] Mahmoud, M. A. and Woodall, W. H., 2004, Phase I analysis of linear profiles with calibration applications, Technometrics, 46, 380-391.
[5] Weckenmann, A., Eitzert, H., Garmer, M., Webert, H., 1995, Functionality-oriented evaluation and sampling strategy in coordinate metrology, Precision Engineering, 17, 244-252.
[6] Cho, N. and Tu, J. F., 2001, Roundness modeling of
machined parts for tolerance analysis, Precision
Engineering, 25, 35-47.
[7] Colosimo, B. M. and Pacella, M. 2005, On the
identification of manufacturing processes’ signature,
18th International Conference on Production
Research (ICPR), Salerno, Italy.
[8] Kourti, T., Lee, J. and MacGregor, J. F., 1996,
Experiences with industrial applications of projection
methods for multivariate statistical process control,
Computers Chem. Engineering, 20, S745-S750.
[9] Qin, S. J., 2003, Statistical process monitoring:
basics and beyond, Journal of Chemometrics, 17,
480-508.
[10] Stover, F. S. and Brill, R. V., 1998, Statistical quality
control applied to Ion chromatography calibrations,
Journal of Chromatography A, 804, 37-43.
[11] Jackson, J. E., 1991, A user’s guide to principal
components, (New York: Wiley).
[12] Bharati M. H., MacGregor, J. F. and Tropper, W.
2003, Softwood lumber grading through on-line
multivariate image analysis techniques, Industrial
Engineering and Chemical Research, 42, 5345-5353.
[13] ISO/TC 213, ISO/TS 12180-2:2003 - Geometrical
product specification (GPS) – Cylindricity, part 2,
International Organization for Standardization,
Geneva, Switzerland.
[14] Woodall, W. H., Spitzner, D. J., Montgomery, D. C.
and Gupta, S., 2004, Using Control Charts to Monitor
Process and Product Quality Profiles, Journal of
Quality Technology, 36-3, 309-320.
[15] ISO/TC 213, ISO/TS 12181-2:2003(E) - Geometrical
product specification (GPS) – Roundness, part 2,
International Organization for Standardization,
Geneva, Switzerland.
More Related Content

What's hot

The Evaluation Model of Garbage Classification System Based on AHP
The Evaluation Model of Garbage Classification System Based on AHPThe Evaluation Model of Garbage Classification System Based on AHP
The Evaluation Model of Garbage Classification System Based on AHPDr. Amarjeet Singh
 
Module 4 data analysis
Module 4 data analysisModule 4 data analysis
Module 4 data analysisILRI-Jmaru
 
Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion...
Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion...Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion...
Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion...ijccmsjournal
 
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...ijccmsjournal
 
GLMM in interventional study at Require 23, 20151219
GLMM in interventional study at Require 23, 20151219GLMM in interventional study at Require 23, 20151219
GLMM in interventional study at Require 23, 20151219Shuhei Ichikawa
 
Research and Development of Algorithmic Transport Control Systems
Research and Development of Algorithmic Transport Control SystemsResearch and Development of Algorithmic Transport Control Systems
Research and Development of Algorithmic Transport Control Systemsijtsrd
 
Missing Value imputation, Poor man's
Missing Value imputation, Poor man'sMissing Value imputation, Poor man's
Missing Value imputation, Poor man'sLeonardo Auslender
 
Lecture 6 guidelines_and_assignment
Lecture 6 guidelines_and_assignmentLecture 6 guidelines_and_assignment
Lecture 6 guidelines_and_assignmentDaria Bogdanova
 
Statistical Modeling: The Two Cultures
Statistical Modeling: The Two CulturesStatistical Modeling: The Two Cultures
Statistical Modeling: The Two CulturesChristoph Molnar
 
An Introduction To Monte Carlo Simulations and Markov Chain Monte Carlo
An Introduction To Monte Carlo Simulations and Markov Chain Monte CarloAn Introduction To Monte Carlo Simulations and Markov Chain Monte Carlo
An Introduction To Monte Carlo Simulations and Markov Chain Monte CarloMax Yousif
 
Penalized Regressions with Different Tuning Parameter Choosing Criteria and t...
Penalized Regressions with Different Tuning Parameter Choosing Criteria and t...Penalized Regressions with Different Tuning Parameter Choosing Criteria and t...
Penalized Regressions with Different Tuning Parameter Choosing Criteria and t...CSCJournals
 
An Influence of Measurement Scale of Predictor Variable on Logistic Regressio...
An Influence of Measurement Scale of Predictor Variable on Logistic Regressio...An Influence of Measurement Scale of Predictor Variable on Logistic Regressio...
An Influence of Measurement Scale of Predictor Variable on Logistic Regressio...IJECEIAES
 
A delay decomposition approach to robust stability analysis of uncertain syst...
A delay decomposition approach to robust stability analysis of uncertain syst...A delay decomposition approach to robust stability analysis of uncertain syst...
A delay decomposition approach to robust stability analysis of uncertain syst...ISA Interchange
 
Ijmer 46067782
Ijmer 46067782Ijmer 46067782
Ijmer 46067782IJMER
 

What's hot (18)

The Evaluation Model of Garbage Classification System Based on AHP
The Evaluation Model of Garbage Classification System Based on AHPThe Evaluation Model of Garbage Classification System Based on AHP
The Evaluation Model of Garbage Classification System Based on AHP
 
Module 4 data analysis
Module 4 data analysisModule 4 data analysis
Module 4 data analysis
 
Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion...
Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion...Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion...
Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion...
 
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...
 
GLMM in interventional study at Require 23, 20151219
GLMM in interventional study at Require 23, 20151219GLMM in interventional study at Require 23, 20151219
GLMM in interventional study at Require 23, 20151219
 
Research and Development of Algorithmic Transport Control Systems
Research and Development of Algorithmic Transport Control SystemsResearch and Development of Algorithmic Transport Control Systems
Research and Development of Algorithmic Transport Control Systems
 
Missing Value imputation, Poor man's
Missing Value imputation, Poor man'sMissing Value imputation, Poor man's
Missing Value imputation, Poor man's
 
recko_paper
recko_paperrecko_paper
recko_paper
 
Lecture 6 guidelines_and_assignment
Lecture 6 guidelines_and_assignmentLecture 6 guidelines_and_assignment
Lecture 6 guidelines_and_assignment
 
Statistical Modeling: The Two Cultures
Statistical Modeling: The Two CulturesStatistical Modeling: The Two Cultures
Statistical Modeling: The Two Cultures
 
An Introduction To Monte Carlo Simulations and Markov Chain Monte Carlo
An Introduction To Monte Carlo Simulations and Markov Chain Monte CarloAn Introduction To Monte Carlo Simulations and Markov Chain Monte Carlo
An Introduction To Monte Carlo Simulations and Markov Chain Monte Carlo
 
Penalized Regressions with Different Tuning Parameter Choosing Criteria and t...
Penalized Regressions with Different Tuning Parameter Choosing Criteria and t...Penalized Regressions with Different Tuning Parameter Choosing Criteria and t...
Penalized Regressions with Different Tuning Parameter Choosing Criteria and t...
 
Lecture 1
Lecture 1Lecture 1
Lecture 1
 
An Influence of Measurement Scale of Predictor Variable on Logistic Regressio...
An Influence of Measurement Scale of Predictor Variable on Logistic Regressio...An Influence of Measurement Scale of Predictor Variable on Logistic Regressio...
An Influence of Measurement Scale of Predictor Variable on Logistic Regressio...
 
A delay decomposition approach to robust stability analysis of uncertain syst...
A delay decomposition approach to robust stability analysis of uncertain syst...A delay decomposition approach to robust stability analysis of uncertain syst...
A delay decomposition approach to robust stability analysis of uncertain syst...
 
Ijmer 46067782
Ijmer 46067782Ijmer 46067782
Ijmer 46067782
 
Modelling and Analysis Laboratory Manual
Modelling and Analysis Laboratory ManualModelling and Analysis Laboratory Manual
Modelling and Analysis Laboratory Manual
 
Analysis Of Attribute Revelance
Analysis Of Attribute RevelanceAnalysis Of Attribute Revelance
Analysis Of Attribute Revelance
 

Viewers also liked (16)

GNwafor_CMS Presentation
GNwafor_CMS PresentationGNwafor_CMS Presentation
GNwafor_CMS Presentation
 
Ppt 5 laminas
Ppt 5 laminasPpt 5 laminas
Ppt 5 laminas
 
Adicción a redes sociales
Adicción a redes socialesAdicción a redes sociales
Adicción a redes sociales
 
Tracing numbers 0 9
Tracing numbers 0 9Tracing numbers 0 9
Tracing numbers 0 9
 
jilani ,r (2)
jilani ,r (2)jilani ,r (2)
jilani ,r (2)
 
ABITARE Sense Hotel
ABITARE Sense HotelABITARE Sense Hotel
ABITARE Sense Hotel
 
Educacion fisica adaptada
Educacion fisica adaptadaEducacion fisica adaptada
Educacion fisica adaptada
 
Metodo Simpson
Metodo SimpsonMetodo Simpson
Metodo Simpson
 
La paternidad-y-maternidad-a-temprana-edad
La paternidad-y-maternidad-a-temprana-edadLa paternidad-y-maternidad-a-temprana-edad
La paternidad-y-maternidad-a-temprana-edad
 
Cherry Blossom
Cherry BlossomCherry Blossom
Cherry Blossom
 
C2 murgia
C2 murgiaC2 murgia
C2 murgia
 
Adiccion a las Redes Sociales y sus consecuencias
Adiccion a las Redes Sociales y sus consecuenciasAdiccion a las Redes Sociales y sus consecuencias
Adiccion a las Redes Sociales y sus consecuencias
 
embolizacion prostatica
embolizacion prostatica embolizacion prostatica
embolizacion prostatica
 
Agujeros en la ionosfera de venus
Agujeros  en la ionosfera de venusAgujeros  en la ionosfera de venus
Agujeros en la ionosfera de venus
 
Tipos de acuiferos_5reis apresentação
Tipos de acuiferos_5reis apresentaçãoTipos de acuiferos_5reis apresentação
Tipos de acuiferos_5reis apresentação
 
Slideshare
SlideshareSlideshare
Slideshare
 

Similar to ColosimoIntieriPacella_v03[1]

Abnormal Patterns Detection In Control Charts Using Classification Techniques
Abnormal Patterns Detection In Control Charts Using Classification TechniquesAbnormal Patterns Detection In Control Charts Using Classification Techniques
Abnormal Patterns Detection In Control Charts Using Classification TechniquesKate Campbell
 
A Combined Model between Artificial Neural Networks and ARIMA Models
A Combined Model between Artificial Neural Networks and ARIMA ModelsA Combined Model between Artificial Neural Networks and ARIMA Models
A Combined Model between Artificial Neural Networks and ARIMA Modelspaperpublications3
 
Investigation of Parameter Behaviors in Stationarity of Autoregressive and Mo...
Investigation of Parameter Behaviors in Stationarity of Autoregressive and Mo...Investigation of Parameter Behaviors in Stationarity of Autoregressive and Mo...
Investigation of Parameter Behaviors in Stationarity of Autoregressive and Mo...BRNSS Publication Hub
 
BPSO&1-NN algorithm-based variable selection for power system stability ident...
BPSO&1-NN algorithm-based variable selection for power system stability ident...BPSO&1-NN algorithm-based variable selection for power system stability ident...
BPSO&1-NN algorithm-based variable selection for power system stability ident...IJAEMSJORNAL
 
A Non Parametric Estimation Based Underwater Target Classifier
A Non Parametric Estimation Based Underwater Target ClassifierA Non Parametric Estimation Based Underwater Target Classifier
A Non Parametric Estimation Based Underwater Target ClassifierCSCJournals
 
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...ijistjournal
 
Application of Principal Components Analysis in Quality Control Problem
Application of Principal Components Analysisin Quality Control ProblemApplication of Principal Components Analysisin Quality Control Problem
Application of Principal Components Analysis in Quality Control ProblemMaxwellWiesler
 
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...ijccmsjournal
 
CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ...
CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ...CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ...
CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ...ijitcs
 
Cone Crusher Model Identification Using Block-Oriented Systems with Orthonorm...
Cone Crusher Model Identification Using Block-Oriented Systems with Orthonorm...Cone Crusher Model Identification Using Block-Oriented Systems with Orthonorm...
Cone Crusher Model Identification Using Block-Oriented Systems with Orthonorm...ijctcm
 
Cone crusher model identification using
Cone crusher model identification usingCone crusher model identification using
Cone crusher model identification usingijctcm
 
292741121 gauge rr_for_an_optical_micrometer_industrial_type_machine
292741121 gauge rr_for_an_optical_micrometer_industrial_type_machine292741121 gauge rr_for_an_optical_micrometer_industrial_type_machine
292741121 gauge rr_for_an_optical_micrometer_industrial_type_machinephgnome
 
Isen 614 project report
Isen 614 project reportIsen 614 project report
Isen 614 project reportVanshaj Handoo
 
IBM SPSS Statistics Algorithms.pdf
IBM SPSS Statistics Algorithms.pdfIBM SPSS Statistics Algorithms.pdf
IBM SPSS Statistics Algorithms.pdfNorafizah Samawi
 

Similar to ColosimoIntieriPacella_v03[1] (20)

Abnormal Patterns Detection In Control Charts Using Classification Techniques
Abnormal Patterns Detection In Control Charts Using Classification TechniquesAbnormal Patterns Detection In Control Charts Using Classification Techniques
Abnormal Patterns Detection In Control Charts Using Classification Techniques
 
A Combined Model between Artificial Neural Networks and ARIMA Models
A Combined Model between Artificial Neural Networks and ARIMA ModelsA Combined Model between Artificial Neural Networks and ARIMA Models
A Combined Model between Artificial Neural Networks and ARIMA Models
 
Investigation of Parameter Behaviors in Stationarity of Autoregressive and Mo...
Investigation of Parameter Behaviors in Stationarity of Autoregressive and Mo...Investigation of Parameter Behaviors in Stationarity of Autoregressive and Mo...
Investigation of Parameter Behaviors in Stationarity of Autoregressive and Mo...
 
04_AJMS_288_20.pdf
04_AJMS_288_20.pdf04_AJMS_288_20.pdf
04_AJMS_288_20.pdf
 
BPSO&1-NN algorithm-based variable selection for power system stability ident...
BPSO&1-NN algorithm-based variable selection for power system stability ident...BPSO&1-NN algorithm-based variable selection for power system stability ident...
BPSO&1-NN algorithm-based variable selection for power system stability ident...
 
I0343047049
I0343047049I0343047049
I0343047049
 
A Non Parametric Estimation Based Underwater Target Classifier
A Non Parametric Estimation Based Underwater Target ClassifierA Non Parametric Estimation Based Underwater Target Classifier
A Non Parametric Estimation Based Underwater Target Classifier
 
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...
 
Application of Principal Components Analysis in Quality Control Problem
Application of Principal Components Analysisin Quality Control ProblemApplication of Principal Components Analysisin Quality Control Problem
Application of Principal Components Analysis in Quality Control Problem
 
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...
 
012
012012
012
 
CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ...
CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ...CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ...
CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ...
 
Cone Crusher Model Identification Using Block-Oriented Systems with Orthonorm...
Cone Crusher Model Identification Using Block-Oriented Systems with Orthonorm...Cone Crusher Model Identification Using Block-Oriented Systems with Orthonorm...
Cone Crusher Model Identification Using Block-Oriented Systems with Orthonorm...
 
Cone crusher model identification using
Cone crusher model identification usingCone crusher model identification using
Cone crusher model identification using
 
292741121 gauge rr_for_an_optical_micrometer_industrial_type_machine
292741121 gauge rr_for_an_optical_micrometer_industrial_type_machine292741121 gauge rr_for_an_optical_micrometer_industrial_type_machine
292741121 gauge rr_for_an_optical_micrometer_industrial_type_machine
 
Isen 614 project report
Isen 614 project reportIsen 614 project report
Isen 614 project report
 
IBM SPSS Statistics Algorithms.pdf
IBM SPSS Statistics Algorithms.pdfIBM SPSS Statistics Algorithms.pdf
IBM SPSS Statistics Algorithms.pdf
 
2016ESWA
2016ESWA2016ESWA
2016ESWA
 
2016ESWA
2016ESWA2016ESWA
2016ESWA
 
2016ESWA
2016ESWA2016ESWA
2016ESWA
 

ColosimoIntieriPacella_v03[1]

  • 1. Identification of Manufacturing Processes Signature by a Principal Component Based Approach Bianca M. Colosimo1 , Andrea N. Intieri 1 , Massimo Pacella 2 1 Dipartimento di Meccanica, Politecnico di Milano, Italy. 2 Dipartimento di Ingegneria dell’Innovazione, Università degli Studi di Lecce, Italy. Abstract Machined surfaces and profiles often present a systematic pattern, usually referred to as “the signature” of the process. Advantages related with identification of process’ signature have been clearly outlined in the literature. The proposed approaches are mainly based on parametric models, in which the signature is described as a combination of analytical functions (predictors) that have to be properly chosen by the analyst depending on the specific case faced. Analytical tools, which do not use parametric model to describe profiles, were also presented in the so-called “profile monitoring” research field. In particular, the Principal Component Analysis (PCA), which is a statistical technique utilized to identify patterns in multivariate data, was successfully applied in chemiometrics. In this paper, the use of PCA is investigated for process’ signature modelling in the case of machined profiles. The goal is to describe a general-purpose approach, which alleviates the analyst from the need to identify a suitable kind of analytical functions for the statistical description of machined profiles. The illustration of the PCA method is based on real measurements data of circular items obtained by turning. Keywords: Profile, Process’ signature, Principal Component Analysis (PCA), Roundness, Turning. 1 INTRODUCTION Machined surfaces and profiles often present a systematic pattern and a superimposed random noise: the first is mainly due to the process used in specific operation conditions, the second is due to unpredictable factors, and is usually called “natural variability”. The systematic pattern constitutes what we will call “the signature” of the process. Advantages related to the identification of process’ signature have been clearly outlined in the literature. When a model of the signature is available, it can be used to improve quality monitoring (e.g., quickly detecting whether process is deviating from its natural behaviour) and quality control (e.g., deciding appropriate corrective actions that have to be taken). With reference to monitoring, the machined signature can be considered similar to a profile. Hence, approaches recently proposed in the literature on profile monitoring should be in principle adopted. As example, in [1] the authors discussed the statistical control of the functional relationship between the temperature of a mass flow controller in the microelectronic industry and the flow of gas released. In [2] the curves considered were obtained from the spectral analysis of a chemical solution when the concentration of a mixture is of interest. In [3] the authors introduced an approach to monitoring the density profile of engineered wood boards. In [4] the use of profile monitoring for calibration applications was discussed. After the seminal work of Weckenmann et al. [5], advantages related with identification of the signature also for manufacturing processes have been widely showed in the literature. These approaches are mainly based on parametric models, in which the manufacturing signature characterizing the profile is described as a linear combination of analytical functions (predictors). 
These predictors have to be properly chosen by the analyst depending on the specific case faced. For example in turning operations, roundness observed on machined items is mainly due to systematic radial spindle error motions which affect that specific machine tool, as shown in [6]. In these cases, a commonly appreciated possibility is to model radial deviations with periodic functions. Several researchers discussed the modelling of roundness error by fitting a Fourier series, i.e. by sinusoidal functions at several frequencies used as predictors. In other applications, wavelets and splines can be selected as predictors to model complex signatures, as reported in [7]. For instance, wavelets functions should be used instead of sinusoidal ones for modelling step changes in the profile, as for pockets. As previously mentioned, when the manufacturing signature is described by means of parametric models, a cumbersome activity required to the analyst consists in selecting the proper type of predictor functions that should be used. Furthermore, measurement data are most of the times autocorrelated because they are obtained in similar condition of the machining process and of the measurement system. In these cases, an appropriate model should be also defined to describe the autocorrelated structure which characterizes the manufacturing signature. Different analytical methods, which do not require a specific parametric description of the profile under study, were also presented in some applications of profile monitoring. These approaches make use of Principal Component Analysis (PCA), a statistical technique aimed at identifying patterns in multivariate data [8] [9]. PCA is particularly effective because it does not require to the analyst the identification of suitable kinds of predictors for the statistical description of the sampled surface faced. In particular, PCA was successfully applied in chemiometrics (a research area that combines data analysis and multivariate statistics in order to improve chemical industrial plants). In [10] the authors applied PCA for monitoring a chemical chromatography process. Similarly, in [2] the authors applied PCA for monitoring a profile in a chemical process. In this paper, the use of PCA is investigated for process’ signature modelling in the case of machined profiles. In particular, the illustration of the PCA method is based on Intelligent Computation in Manufacturing Engineering - 5
  • 2. real measurements data of circular items obtained by turning. The objective of this study is to investigate advantages related with the use of PCA for process signature identification. This paper is organized as follows. In section 2, properties of the canonical PCA are briefly discussed. Section 3 presents the experimental work faced in this research. In section 4, the PCA is applied on the measurement data, while section 5 discusses the robustness of the proposed approach with respect to filtering of the original data. Eventually, the last section reports the conclusion and some final remarks. 2 PRINCIPAL COMPONENT ANALYSIS Principal Component Analysis (PCA) is a general statistical approach that allows to explain the variability observed in a set of multivariate data by means of a small number of components, namely the principal components, which can be obtained as linear combinations of the original variables. These components are able to summarize most of the information contained in the original data and allow one to better identify the different sources of variation which are affecting the process. PCA is the first step of the data analysis, which is concerned with data reduction of high-dimensional data frequently encountered in chemometrics, computer vision and other domains. An exhaustive description of PCA can be found in [11]. A rough sketch of how PCA works is reported in the following. Assume to collect n profiles, each of them constituted of p equally-spaced measurement points. Let jky denote the k -th point observed on the j -th profile, where 1,...,k p= and 1,...,j n= . The measurements can be hence summarized in the following n p× data matrix Y : 11 1 1 1 1 1 T k p T j jk jp j Tn nk np n y y y y y y y y y ⎡ ⎤⎡ ⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥= =⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥⎣ ⎦ ⎣ ⎦ y Y y y (1) where 1 2 T j j j jpy y y⎡ ⎤= ⎢ ⎥⎣ ⎦ y is the (column) vector containing the j -th profile ( 1,...,j n= ) and hence each row in (1) contains the data points belonging to a specific profile. Denote with y the average profile, given by 1 1 n j j n = = ∑y y . Given y , a new matrix T n= −Y Y i y can be computed, where ni is an n dimensional column vector of ones, i.e. [ ]1 1 1 T n =i . The matrix Y is thus obtained by subtracting to each profile jy the average profile y and allows to represent the deviations of each profile from the average one. Using this new matrix, the covariance of the original matrix Y , can be rewritten as: ( )( )1 1 1 1 1 n T T j j j n n = = − − = − −∑ Y Y S y y y y (2) The PCA method consists in finding the matrices L and U which satisfy the following relationship: 1 T =U S U L (3) where the matrix L is a diagonal matrix and contains the eigenvalues of 1S : 1 0 0 0 0 0 0 k p l l l ⎡ ⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥= ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ L (4) Without loss of generality, the eigenvalues are supposed ranked in decreasing order (i.e., 1 2 ... 0pl l l> > > > ). The matrix of vectors 1 k p ⎡ ⎤= ⎢ ⎥⎣ ⎦ U u u u is instead orthonormal and is composed by the eigenvectors of 1S ( ku , 1,...,k p= ), which form a new orthonormal basis for the space spanned by Y . It is worth noticing that when the covariance matrix 1S is singular, a subset of the eigenvalues will be equal to zero. 
For example, in the case of high-dimensional data where the number p of data points collected on each profile is greater than the number of sampled profiles n , the covariance matrix will have at most rank equal to 1n− and hence just the first 1n− eigenvalues will be greater than zero (i.e., 1 2 1... 0nl l l −> > > > ) while the remaining ones will be all equal to zero (i.e., ... 0n pl l= = = ). In this case, the number of principal components will be at most 1n− . Given the eigenvector matrix U , each profile jy can be projected into the directions identified by the eigenvectors, i.e.: ( ) 1 TT T j j j j jk jpz z z⎡ ⎤= − = = ⎢ ⎥⎣ ⎦ z U y y U y (5) where 1,...,j n= . The new variables are called principal components, while the values assumed by these variables whit reference to the j -th observation, i.e., 1,..., ,...,j jk jpz z z , are called “scores” and represent the weight that each new principal component has in explaining this particular observation. In particular, the first principal component corresponds to the direction in which the projected observations have the largest variance. The second component is orthogonal to the first one and corresponds to the second direction in which the variance of the data is significant, etc. Given the matrix U is orthonormal, one can easily show that j j= +y y Uz , for 1,...,j n= . In other words, each original observation can be obtained from its scores: 1 11 1 11 1 1 1 2 2 i.e. ... j p j jp p p pp jp j j j jp p y u u zy y y u u z z z z ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎡ ⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥= +⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥⎣ ⎦⎣ ⎦ ⎣ ⎦ ⎣ ⎦ = + + + +y y u u u (6) where 1,...,j n= .
  • 3. The principal components (PCs) obtained are statistically independent and each PC has a variance equal to the corresponding eigenvalue. Therefore, we can rank the PCs, i.e. the eigenvectors, according to the associated eigenvalue and decide to retain just the most important PCs (the ones which correspond to greater values of the associated variance), while discarding the less important ones (i.e., the ones which are associated with smaller variance). When the whole set of p PCs is considered, the original data are obtained starting from the scores by using equation (6). When instead just the first K ( K p< ) most important PCs are considered, the original data can be just estimated as follows: ( ) 1 1 2 2ˆ ...j j j jK KK z z z= + + + +y y u u u (7) The selection of the proper number of PC is a critical step because when the number of PCs retained in the model is too small, the resulting model will not be able to represent all the significant variability contained in the original data. When the number of PCs retained is excessive, the resulting model will include some random variability inducing a model that try to explain even the random noise of the process. 3 EXPERIMENTAL ROUNDNESS PROFILES Quality of mechanical components is more and more often related to geometric tolerances, e.g., roundness, flatness, etc. Among different geometric specifications, roundness plays a relevant role in circular and cylindrical parts, where functionality is directly related with rotation of the component. For instance, roundness is critically related to the proper functioning of rotating shafts, pistons and cylinders. Here, the PCA is applied to measurement data of roundness profiles obtained by turning. In particular, the experimental data consists in a set of 100n = components machined by turning C20 carbon steel cylinders (which were supplied in 30mm∅ = rolled bars). The final diameter of 26mm∅ = was obtained by performing two turning steps (cutting speed=163 m/min, feed rate=0.2 mm/rev), where for each step the depth of cut was equal to 1 mm. The machined surfaces were eventually scanned on a coordinate measuring machine. According to the standard (ISO/TC 273, 2003 [13]) each roundness profile was described by 748p = equally distributed measurements of the radius. As discussed in [7], data in each sample have to be pre- treated to focus just on roundness form error. In particular, data were rescaled by subtracting the least squares estimation of the radius, and by re-centring the profile on the least square estimation of the centre. As a matter of fact, the out-of-roundness does not depend on the centre’s position and on the average radius. Secondly, a further step of profiles alignment was required. This alignment was needed because the tool starts machining the profile at a point which is random in each sample. Profile alignment allows identifying a common reference system and can be performed by minimizing phase delays. By a polar representation, the j -th sampled profile can be described as a sequence of deviations of the radius measured by the nominal radius, ( )j kr θ , where ( )2 1k k pθ π= − is the angle position and 1,...,k p= . A polar representation of experimental data is given in Figure 1. It can be observed that all the roundness profiles share a common behaviour, i.e. the turning process leaves a fingerprint or a signature on the machined components. Even if there is a systematic behaviour characterizing all the profiles obtained, variation can be noted from profile to profile. 
In a polar representation, the $j$-th sampled profile can be described as a sequence of deviations of the measured radius from the nominal radius, $r_j(\theta_k)$, where $\theta_k = 2\pi(k-1)/p$ is the angular position and $k = 1, \dots, p$. A polar representation of the experimental data is given in Figure 1. It can be observed that all the roundness profiles share a common behaviour, i.e., the turning process leaves a fingerprint, or signature, on the machined components. Even if there is a systematic behaviour characterizing all the profiles obtained, variation can be noted from profile to profile. As an example, Figure 2 depicts the difference between the average profile (i.e., the one obtained by averaging all the profiles collected, represented by a bold line) and one specific profile (in this figure, the first profile is reported and represented by a dashed line).

Figure 1: Polar diagram of the 100 experimental profiles.

Figure 2: Polar diagram of the average profile (bold line) and profile no. 1 (dashed line).

Indeed, even when the process is used with given control parameters, one cannot expect to observe exactly the same profile, since a profile-to-profile variability is often present. As stated in [14]: "Common-cause variation is that variation considered to be characteristic of the process and that cannot be reduced substantially without fundamental process changes. It must be determined how much of the profile-to-profile variation is part of the common-cause variation and should be incorporated" into the model of the signature. PCA helps in identifying this profile-to-profile variation because it describes the main ways in which a generic profile varies with respect to the average profile. Hence, the average profile can be considered as a model of the "mean", or expected, pattern characterizing the generic profile, while the principal components model the principal ways in which a generic profile can vary with respect to this expected behaviour.
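A comparison of the kind shown in Figure 2 can be reproduced with a short Matplotlib sketch. Since the experimental data are not distributed with the paper, synthetic stand-in profiles are generated here; the bi-lobe signature, the noise level and the radial plotting offset are arbitrary choices for illustration only.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n, p = 100, 748
theta = 2 * np.pi * np.arange(p) / p
# Synthetic stand-in for the pre-treated profiles: a bi-lobe signature plus noise
Y = 0.004 * np.cos(2 * theta) + 0.001 * rng.standard_normal((n, p))

mean_profile = Y.mean(axis=0)
offset = 0.02            # arbitrary radial offset so negative deviations plot cleanly

ax = plt.subplot(projection="polar")
ax.plot(theta, offset + mean_profile, "k-", lw=2, label="average profile")
ax.plot(theta, offset + Y[0], "k--", lw=1, label="profile no. 1")
ax.legend(loc="upper right")
plt.show()
```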
4 APPLICATION OF PCA

In this section, the application of PCA is illustrated on the profile data set previously described. By setting $y_{jk} = r_j(\theta_k)$, with $k = 1, \dots, 748$ and $j = 1, \dots, 100$, the measurements are collected in an $n \times p$ data matrix $\mathbf{Y}$, where $n = 100$ and $p = 748$. In this case of high-dimensional data, where $p > n$, the covariance matrix of $\mathbf{Y}$ has at most rank $n-1$, and hence at most $n-1$ significant principal components can be considered. Usually, among such principal components only the first few eigenvectors are associated with systematic variation in the data, while the remaining ones are associated with noise. Noise, in this case, refers to uncontrolled experimental and instrumental variations arising from random sources. PCA models are formed by retaining only the PCs that represent systematic variation in the data.

In order to select a proper number of PCs, the eigenvalue corresponding to each PC can be examined. Without loss of generality, assume that the PCs are ranked in decreasing order of the corresponding eigenvalue, i.e., $l_1 > l_2 > \dots > l_{n-1} > 0$. Hence, the variability explained by the $j$-th PC can be expressed as the ratio between the corresponding eigenvalue $l_j$ and the sum $\sum_{j=1}^{n-1} l_j$. Therefore, the cumulative variability explained by the first $k$ dominant PCs is given by:

$\sum_{j=1}^{k} l_j \Big/ \sum_{j=1}^{n-1} l_j$   (8)

Table 1 reports the variability explained by each of the first ten PCs, as well as the cumulative variability explained by using these PCs. It can be noted that the first 10 PCs describe 52.88% of the total variability observed in the original machined profiles.

PC   Variability Explained   Cumulative Variability Explained
1    13.87%                  13.87%
2    10.41%                  24.28%
3     7.10%                  31.38%
4     4.77%                  36.15%
5     3.78%                  39.92%
6     3.31%                  43.24%
7     2.79%                  46.02%
8     2.63%                  48.65%
9     2.22%                  50.87%
10    2.00%                  52.88%

Table 1: Variability explained by the first ten dominant PCs on the original data.

In order to identify the meaning of this set of significant PCs, Figure 3 depicts the first six dominant eigenvectors $\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3, \mathbf{u}_4, \mathbf{u}_5, \mathbf{u}_6$ in polar diagrams. As can be observed from Figure 3, the first eigenvector, which describes the most important component of variability, represents a bi-lobe form error around the average profile. The second and third eigenvectors present two three-lobe form errors with different orientations. Similarly, the fourth and fifth eigenvectors present four-lobe form errors. As discussed by Cho and Tu [6], such lobe form errors often characterize roundness profiles obtained by turning, being the result of spindle error motions, which are very common in this process.

Figure 3: Polar diagrams of the first six dominant PCs of the original data.
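The entries of Table 1 can be computed directly from the eigenvalues of the sample covariance matrix, following equation (8). The sketch below is one way to do this; the function name and the tolerance used to discard the numerically zero eigenvalues are illustrative choices.

```python
import numpy as np

def explained_variance_table(eigenvalues, k_max=10):
    """Explained and cumulative variance ratios, as in equation (8).

    `eigenvalues` are the sample-covariance eigenvalues ranked in decreasing
    order; only the first n-1 are non-zero when p > n.
    """
    l = np.sort(np.asarray(eigenvalues))[::-1]
    l = l[l > 1e-12]                      # keep the non-zero eigenvalues
    ratio = l / l.sum()
    cumulative = np.cumsum(ratio)
    for k in range(min(k_max, len(l))):
        print(f"PC {k + 1:2d}: {100 * ratio[k]:6.2f}%   cumulative {100 * cumulative[k]:6.2f}%")
    return ratio, cumulative

# With the eigenvalues `l` returned by the PCA sketch of Section 2:
# explained_variance_table(l, k_max=10)
```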
5 FILTERING

Traditional methods for analyzing one-dimensional measurements usually involve applying a filtering process in order to separate the roughness and waviness components of the measured profile. After filtering, the roughness and waviness components can be added back together to represent the original profile.

In this section, the effect of longwave-pass filtering on the PCA of the measurement data is investigated. In particular, a linear Gaussian filter is used, since this is the current state of the art in the ISO standards. In Gaussian filtering, a Gaussian weighting function is applied to the data at each point by averaging over an interval, which can be specified through the stylus tip radius, the trace length, the number of data points collected, and the step size. This filter produces a mean line through the data set, or waviness component, which is then subtracted from the original curve to yield the roughness component. According to the standard ISO/TS 12181-2:2003(E) [15], the Gaussian longwave-pass filter is defined in the frequency domain by the following attenuation function:

$\dfrac{a_1}{a_0} = \exp\left[-\pi \left(\dfrac{\alpha f}{f_c}\right)^2\right]$   (10)

where $\alpha = \sqrt{\ln 2/\pi} = 0.4697$, $a_0$ is the amplitude of the sine wave undulation before filtering, $a_1$ is the amplitude of this sine undulation after filtering, $f_c$ is the cut-off frequency (in undulations per revolution, UPR) of the longwave-pass filter, and $f$ is the frequency of the sine wave (in UPR).
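One possible way to apply this attenuation function to a sampled closed profile is in the discrete frequency domain, as in the NumPy sketch below. This is an illustration, not the implementation used in the paper; because the attenuation is real and non-negative, the harmonic phases are left unchanged.

```python
import numpy as np

def gaussian_longwave_pass(profile, cutoff_upr):
    """Apply the Gaussian longwave-pass filter of equation (10) to a closed
    roundness profile sampled at p equally spaced angular positions.

    The attenuation exp[-pi*(alpha*f/fc)^2] is applied harmonic by harmonic
    in the frequency domain (frequencies in undulations per revolution, UPR).
    """
    alpha = np.sqrt(np.log(2) / np.pi)            # 0.4697...
    p = len(profile)
    f = np.fft.rfftfreq(p, d=1.0 / p)             # harmonic numbers 0, 1, ..., p//2 (UPR)
    attenuation = np.exp(-np.pi * (alpha * f / cutoff_upr) ** 2)
    return np.fft.irfft(np.fft.rfft(profile) * attenuation, n=p)

# Example: waviness component at a 50 UPR cut-off (roughness = profile - waviness)
# waviness = gaussian_longwave_pass(y, cutoff_upr=50)
```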
The longwave-pass (shortwave-pass) Gaussian filter is a so-called "phase-correct" filter, since it attenuates the high-frequency (low-frequency) harmonics of the measurement data without altering their phases. With this method, the waviness (low-frequency harmonics) and roughness (high-frequency harmonics) components can be added back together to recreate the original profile.

Figure 4 depicts the data after passing through the Gaussian filtering process; in particular, a longwave-pass filtering of the original data with cut-off frequency $f_c = 50$ UPR is considered. Compared to the original data, the filtered profiles appear smoothed.

Figure 4: Polar diagram of the 100 experimental profiles (longwave-pass filtered version with cut-off frequency 50 UPR).

The results obtained by applying PCA to these filtered profiles are reported in Table 2 and in Figure 5. From the table, it can be noted that the first ten PCs now explain 74.47% of the total variability of the machined profiles. From the figure, it can be noted that the PCs are smoothed versions of the original ones. Similar results are obtained by applying a longwave-pass filter with cut-off frequency $f_c = 15$ UPR to the original data. Figure 6 shows that the smoothing effect of this longwave-pass filter is even more evident. Table 3 shows that the variability explained by the first few PCs increases as the data are treated with a longwave-pass filter. Therefore, filtering can help in selecting the most important PCs.

PC   Variability Explained   Cumulative Variability Explained
1    20.93%                  20.93%
2    15.41%                  36.34%
3    10.30%                  46.63%
4     6.62%                  53.26%
5     5.24%                  58.50%
6     4.27%                  62.76%
7     3.36%                  66.12%
8     3.29%                  69.41%
9     2.64%                  72.05%
10    2.42%                  74.47%

Table 2: Variability explained by the first ten dominant PCs on the longwave-pass filtered data (cut-off frequency 50 UPR).

Figure 5: Polar diagrams of the first six dominant PCs of the longwave-pass filtered data (cut-off frequency 50 UPR).

Figure 6: Polar diagram of the 100 experimental profiles (longwave-pass filtered version with cut-off frequency 15 UPR).

PC   Variability Explained   Cumulative Variability Explained
1    27.61%                  27.61%
2    19.65%                  47.26%
3    12.77%                  60.03%
4     7.62%                  67.65%
5     6.07%                  73.72%
6     4.23%                  77.95%
7     3.57%                  81.52%
8     2.36%                  83.89%
9     2.10%                  85.99%
10    1.69%                  87.68%

Table 3: Variability explained by the first ten dominant PCs on the longwave-pass filtered data (cut-off frequency 15 UPR).

Figure 7: Polar diagrams of the first six dominant PCs of the longwave-pass filtered data (cut-off frequency 15 UPR).
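The comparison summarized in Tables 1 to 3 can be reproduced end to end by combining the filtering and PCA steps sketched above. The following self-contained helper (an illustrative name, written for an $n \times p$ profile matrix `Y` and inlining the Gaussian attenuation for self-containment) returns the cumulative explained-variance ratios of equation (8), optionally after longwave-pass filtering at a given cut-off.

```python
import numpy as np

def cumulative_explained_variance(Y, cutoff_upr=None, k=10):
    """Cumulative variability explained by the first k PCs of the profile
    matrix Y (n x p), optionally after Gaussian longwave-pass filtering of
    each profile at `cutoff_upr` (in UPR), as in Tables 1-3.
    """
    if cutoff_upr is not None:
        alpha = np.sqrt(np.log(2) / np.pi)
        f = np.fft.rfftfreq(Y.shape[1], d=1.0 / Y.shape[1])
        attenuation = np.exp(-np.pi * (alpha * f / cutoff_upr) ** 2)
        Y = np.fft.irfft(np.fft.rfft(Y, axis=1) * attenuation, n=Y.shape[1], axis=1)
    l = np.linalg.eigvalsh(np.cov(Y, rowvar=False))[::-1]   # eigenvalues, descending
    l = l[l > 1e-12]                                        # drop numerically zero ones
    return np.cumsum(l[:k]) / l.sum()

# print(cumulative_explained_variance(Y))                  # original data (Table 1)
# print(cumulative_explained_variance(Y, cutoff_upr=50))   # 50 UPR filter  (Table 2)
# print(cumulative_explained_variance(Y, cutoff_upr=15))   # 15 UPR filter  (Table 3)
```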
6 SUMMARY

This work relies on the idea of identifying the "fingerprint" of the manufacturing process by means of statistical techniques. In this paper, a PCA-based method was investigated for this purpose. PCA was used to explain the variance-covariance structure of profile data through a few principal components (PCs), which are linear combinations of the original data collected on each profile. PCA was applied to real profile data, representing roundness profiles obtained by turning. In this case, it was shown that the set of most important PCs has a clear physical meaning, since these PCs can be associated with the lobe form errors that are typically induced by spindle motion errors, which leave their fingerprint on the machined roundness profiles [6]. The effect of data filtering was further investigated, and the PCA results turned out to be robust to filtering. Further research on the application of PCA to process' signature modelling is planned. Firstly, the PCA approach will be applied within a monitoring strategy aimed at detecting out-of-control conditions in an SPC framework.

7 ACKNOWLEDGMENTS

This work was carried out with the funding of the Italian M.I.U.R. (Ministry of Education, University, and Research).

8 REFERENCES

[1] Kang, L. and Albin, S. L., 2000, On-line monitoring when the process yields a linear profile, Journal of Quality Technology, 32, 418-426.
[2] Mestek, O., Pavlík, J. and Suchánek, M., 1994, Multivariate control charts: control charts for calibration curves, Fresenius' Journal of Analytical Chemistry, 350, 344-351.
[3] Walker, E. and Wright, S. P., 2002, Comparing curves using additive models, Journal of Quality Technology, 34, 118-129.
[4] Mahmoud, M. A. and Woodall, W. H., 2004, Phase I analysis of linear profiles with calibration applications, Technometrics, 46, 380-391.
[5] Weckenmann, A., Eitzert, H., Garmer, M. and Webert, H., 1995, Functionality-oriented evaluation and sampling strategy in coordinate metrology, Precision Engineering, 17, 244-252.
[6] Cho, N. and Tu, J. F., 2001, Roundness modeling of machined parts for tolerance analysis, Precision Engineering, 25, 35-47.
[7] Colosimo, B. M. and Pacella, M., 2005, On the identification of manufacturing processes' signature, 18th International Conference on Production Research (ICPR), Salerno, Italy.
[8] Kourti, T., Lee, J. and MacGregor, J. F., 1996, Experiences with industrial applications of projection methods for multivariate statistical process control, Computers & Chemical Engineering, 20, S745-S750.
[9] Qin, S. J., 2003, Statistical process monitoring: basics and beyond, Journal of Chemometrics, 17, 480-508.
[10] Stover, F. S. and Brill, R. V., 1998, Statistical quality control applied to ion chromatography calibrations, Journal of Chromatography A, 804, 37-43.
[11] Jackson, J. E., 1991, A User's Guide to Principal Components, Wiley, New York.
[12] Bharati, M. H., MacGregor, J. F. and Tropper, W., 2003, Softwood lumber grading through on-line multivariate image analysis techniques, Industrial & Engineering Chemistry Research, 42, 5345-5353.
[13] ISO/TC 213, 2003, ISO/TS 12180-2:2003 - Geometrical Product Specifications (GPS) - Cylindricity, Part 2, International Organization for Standardization, Geneva, Switzerland.
[14] Woodall, W. H., Spitzner, D. J., Montgomery, D. C. and Gupta, S., 2004, Using control charts to monitor process and product quality profiles, Journal of Quality Technology, 36, 309-320.
[15] ISO/TC 213, 2003, ISO/TS 12181-2:2003(E) - Geometrical Product Specifications (GPS) - Roundness, Part 2, International Organization for Standardization, Geneva, Switzerland.