Recursive Least-Squares Estimation in Case of Interval Observation Data

H. Kutterer 1) and I. Neumann 2)
1) Geodetic Institute, Leibniz University Hannover, D-30167 Hannover, Germany, kutterer@gih.uni-hannover.de
2) Institute of Geodesy - Geodetic Laboratory, University FAF Munich, D-85579 Neubiberg, Germany, ingo.neumann@unibw.de

Abstract: In the engineering sciences, observation uncertainty often consists of two main types: random variability due to uncontrollable external effects, and imprecision due to remaining systematic errors in the data. Interval mathematics is well-suited to treat this second type of uncertainty in, e.g., interval-mathematical extensions of the least-squares estimation procedure, provided that set-theoretical overestimation is avoided (Schön and Kutterer, 2005). Overestimation means that the true range of parameter values representing both a mean value and imprecision is quantified only by rough, meaningless upper bounds. If recursively formulated estimation algorithms are used for better efficiency, overestimation becomes a key problem. This is the case in state-space estimation, which is relevant in real-time applications and which is essentially based on recursions. Hence, overestimation has to be analyzed thoroughly to minimize its impact on the range of the estimated parameters. This paper is based on previous work (Kutterer and Neumann, 2009), which is extended regarding the particular modeling of the interval uncertainty of the observations. Besides a naïve approach, observation imprecision models using physically meaningful influence parameters are considered; see, e.g., Schön and Kutterer (2006). The impact of possible overestimation due to the respective models is rigorously avoided. In addition, the recursion algorithm is reformulated, yielding increased efficiency. In order to illustrate and discuss the theoretical results, a damped harmonic oscillation is presented as a typical recursive estimation example in geodesy.
Keywords: interval mathematics, imprecision, recursive parameter estimation, overestimation, least-squares, damped harmonic oscillation.

4th International Workshop on Reliable Engineering Computing (REC 2010). Edited by Michael Beer, Rafi L. Muhanna and Robert L. Mullen. Copyright © 2010 Professional Activities Centre, National University of Singapore. ISBN: 978-981-08-5118-7. Published by Research Publishing Services. doi:10.3850/978-981-08-5118-7 012

1. Introduction

State-space estimation is an important task in many engineering disciplines. It is typically based on a compact recursive reformulation of the classical least-squares estimation of the parameters which describe the system state. This reformulation reflects the optimal combination of the most recent parameter estimate and of newly available observation data; it is equivalent to a least-squares parameter estimation which uses all available data. However, the recursive formulation allows a more efficient update of the estimated values, which makes it well-suited for real-time applications. Conventionally, the real-time capability of a process or algorithm means that the results are available without any delay whenever they are required within the process. In a system-theoretical framework, physical knowledge about the dynamic system state can also be available in terms of a system of differential equations. In this case a state-space filter such as the well-known
Kalman filter is used, which extends the concept of state-space estimation as it combines predicted system information from the solution of the set of differential equations with additional, newly available observation data (Gelb, 1974). As a special case of state-space filtering, state-space estimation considers the same parameter vector through all recursion steps; nevertheless, the estimated values will vary. Moreover, time is not the relevant quantity but the observation index. This allows some convenient features such as the efficient elimination of observation data which are considered as outliers. In any case, the state space can comprise parameters which are system-immanent and not directly observable. It is common practice to assess the uncertainty of the observation data in a stochastic framework only. This means that the observation errors are modeled as random variables and vectors, respectively. This type of uncertainty is called random variability. Classical models in parameter estimation refer to expectation vectors and variance-covariance matrices as first and second moments of the random distribution of the observations. Other approaches based on Maximum-Likelihood estimation take the complete random distribution into account. In case of a non-normal distribution, numerical approximation techniques such as Monte-Carlo sampling procedures are applied for the derivation of the densities of the estimated parameters as well as of derived quantities and measures (Koch, 2007). However, there are more sources of uncertainty in the data than just random errors. Actually, depending on the particular application, unknown deterministic effects can introduce a significant level of uncertainty.
Such effects are also known as systematic errors. They are typically reduced or even eliminated by a mixture of different techniques if an adequate observation configuration was implemented: (i) modification of the observation values using physical or geometrical correction models, (ii) linear combinations of the original observations, such as observation differences, which can reduce synchronization errors or atmospherically induced run-time differences in distance observations, (iii) dedicated parameterization of the effect in the observation equations. Since none of these techniques is rigorously capable of eliminating an unknown deterministic effect completely or of determining its value, this effect has to be modeled accordingly. Here, interval mathematics is used as the theoretical background, introducing intervals and interval vectors as additional uncertain quantities. This second type of uncertainty is called imprecision. The joint assessment of random variability and imprecision of observation data in least-squares estimation has been treated in a number of publications. However, the consideration of recursive state-space estimation has to treat the overestimation problem of interval-mathematical evaluations in a more elaborate way than classical estimation. Overestimation is caused by, e.g., (hidden) dependencies between interval quantities, and it is visible in interval-mathematical properties such as sub-distributivity. A further problem is caused by the interval inclusion of the range of values of a linear mapping of a vector consisting of interval data, which usually generates additional values; see, e.g., Schön and Kutterer (2005) for a discussion of the two- and three-dimensional case. Since recursive formulations particularly exploit such dependencies for the sake of a compact and efficient notation, a significant overestimation is expected.
This study is based on previous work on the interval and fuzzy extension of the Kalman filter (Kutterer and Neumann, 2009). Here, two main differences have to be mentioned. First, the approach is simplified as the system-state parameters are considered as static quantities which do not change with time (or forces). Second, the efficiency of the derivation of the measures of the imprecision of the estimated parameters is increased due to a new formulation. The uncertainty of the observation data is formulated in a comprehensive way, referring to physically meaningful deterministic influence parameters. The paper is organized as follows. In Section 2 least-squares parameter estimation is reviewed, whereas in Section 3 the recursive formulation is introduced and discussed. In Section 4 the applied model of imprecision is motivated and described. Section 5 provides the interval-mathematical
extension of recursive least-squares state-space estimation. In Section 6 the recursive estimation of state-space parameters based on the observation of a damped harmonic oscillation is discussed as an illustrative example. Section 7 concludes the paper.

2. Least-Squares Parameter Estimation in Linear Models

Recursive least-squares state-space estimation is based on the reformulation of the least-squares estimation using all available observation data; see, e.g., Koch (1999). The model with observation equations is considered in the following. It is a typical linear model which is also known as the Gauss-Markov model. It consists of a functional part

$$E(l) = A\,x \qquad (1)$$

which relates the expectation vector $E(l)$ of the $n \times 1$ vector $l$ of the observations with a linear combination of the unknown $u \times 1$ vector of the parameters $x$, with $n \geq u$. The $n \times u$ matrix $A$ is called the configuration matrix or design matrix, respectively. Note that the matrix $A$ can be either column-regular or column-singular. The difference $r = n - u$ (or $r = n - u + d$ in case of column-singular models, with $d$ the rank deficiency) is called the redundancy; it quantifies the degree of over-determination of the linear estimation model. In case of an originally non-linear model, a linearization based on a multidimensional Taylor series expansion of the $n \times 1$ vector-valued function $f$ is derived as

$$E(l) = f(x) \approx f(x_0) + \frac{\partial f}{\partial x}\Big|_{x=x_0} (x - x_0)$$

which yields a fully analogous representation to Eq. (1) if the "$\approx$" sign is neglected:

$$E(\Delta l) = A\,\Delta x, \quad \text{with} \quad \Delta l := l - f(x_0), \quad \Delta x := x - x_0, \quad A := \frac{\partial f}{\partial x}\Big|_{x=x_0}. \qquad (2)$$

For the sake of a simpler representation, only the linear case according to Eq. (1) is discussed in the following. Typically, the functional model part is given through the residual equations

$$v = A\,x - l \quad \text{with} \quad v = E(l) - l. \qquad (3)$$
The Gauss-Markov model also comprises a second model part which refers to uncertainty in terms of the regular variance-covariance matrix (vcm) of the observations, $\Sigma_{ll}$, and of the residuals, $\Sigma_{vv}$, respectively, as

$$V(l) = \Sigma_{ll} = \Sigma_{vv} = \sigma_0^2\,Q_{ll} = \sigma_0^2\,P^{-1} \qquad (4)$$

with the (theoretical) variance of the unit weight $\sigma_0^2$, the cofactor matrix of the observations $Q_{ll}$, and the weight matrix of the observations $P = Q_{ll}^{-1}$. The unknown vector of parameters is estimated based on the principle of weighted least-squares via the normal equations system

$$A^T P A\,\hat{x} = A^T P\,l \qquad (5)$$
as

$$\hat{x} = \left(A^T P A\right)^{-1} A^T P\,l \qquad (6)$$

for a column-regular design matrix $A$. In case of a column-singular design matrix, a generalized matrix inverse is used, leading to

$$\hat{x} = \left(A^T P A\right)^{-} A^T P\,l. \qquad (7)$$

The cofactor matrix and the vcm of the estimated parameters are derived by the law of variance propagation as

$$Q_{\hat{x}\hat{x}} = \left(A^T P A\right)^{-1} \quad \text{and} \quad \Sigma_{\hat{x}\hat{x}} = \sigma_0^2\,Q_{\hat{x}\hat{x}}, \qquad (8)$$

and

$$Q_{\hat{x}\hat{x}} = \left(A^T P A\right)^{-} \quad \text{and} \quad \Sigma_{\hat{x}\hat{x}} = \sigma_0^2\,Q_{\hat{x}\hat{x}}, \qquad (9)$$

respectively. Note that there are several other quantities of interest, such as the estimated vectors of observations $\hat{l}$ and residuals $\hat{v}$, the corresponding cofactor matrices and vcms, and the estimated value of the variance of the unit weight

$$\hat{\sigma}_0^2 = \frac{\hat{v}^T P \hat{v}}{r}. \qquad (10)$$

Due to the restricted space these quantities are not treated in this paper. The discussion is limited to the recursive estimation of the parameter vector and to the determination of its vcm.

3. Recursive Parameter Estimation in Linear Models

The idea behind recursive parameter estimation is the optimal combination of the most recent estimated parameter vector and of observation data which were not included in the previous estimation due to, e.g., their later availability. This is a typical situation in continuously operating monitoring systems where the state of the considered object is observed repeatedly at defined intervals. The set of parameter vector components can be understood as a state-space representation. With each newly incoming set of observations, the estimated state of the object is updated as a basis for further analysis and possibly required decisions such as, e.g., in alarm systems. Note that the algorithms presented here rely only on the indices of the observation data, which are not necessarily related to time. Hence, by introducing negative weights it is also possible to eliminate observation data from the estimation, which is required in case of erroneous data. This combination is considered as optimal in the meaning of the least-squares principle.
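The batch estimate of Eqs. (6) and (8) can be sketched in a few lines of Python. The straight-line design matrix and the numerical values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Minimal sketch of Eqs. (5), (6) and (8): weighted least-squares estimate
# x_hat = (A^T P A)^{-1} A^T P l and its cofactor matrix Q_xx = (A^T P A)^{-1}.

def weighted_least_squares(A, P, l):
    """Return the estimate x_hat and cofactor matrix Q_xx for a column-regular A."""
    N = A.T @ P @ A              # normal equations matrix, Eq. (5)
    Q_xx = np.linalg.inv(N)      # cofactor matrix of the estimate, Eq. (8)
    x_hat = Q_xx @ A.T @ P @ l   # estimated parameters, Eq. (6)
    return x_hat, Q_xx

# Illustrative example: fit a line y = a + b*t to noise-free data (exact recovery).
t = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(t), t])   # n x u design matrix
P = np.eye(4)                               # unit weights assumed
l = 2.0 + 0.5 * t                           # observations generated with a=2, b=0.5
x_hat, Q_xx = weighted_least_squares(A, P, l)
```

With noise-free data the estimate reproduces the generating parameters exactly, which is a convenient sanity check of the normal-equations setup.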
Thus, the required equations are derived from the equations given in Section 2. The observation vector is separated into two parts, the first one containing the set of all old observations $l^{(i-1)}$ and the second one containing the new observations $l^{(i)}$. The residual vector $v$, the design matrix $A$ and the weight matrix $P$ are divided into corresponding parts according to

$$\begin{bmatrix} v^{(i-1)} \\ v^{(i)} \end{bmatrix} = \begin{bmatrix} A^{(i-1)} \\ A^{(i)} \end{bmatrix} x - \begin{bmatrix} l^{(i-1)} \\ l^{(i)} \end{bmatrix}, \qquad P = \begin{bmatrix} P^{(i-1)} & 0 \\ 0 & P^{(i)} \end{bmatrix} \qquad (11)$$
where the old and the new observation vectors are considered as uncorrelated, which leads to the $0$ matrices at the off-diagonal blocks of $P$. The least-squares solution $\hat{x}^{(i)}$ of the parameter vector can be obtained using Eq. (6) or (7), respectively. Note that the upper indices in brackets indicate the recursion step. The recursion algorithm requires the solution $\hat{x}^{(i-1)}$ and its cofactor matrix $Q_{\hat{x}\hat{x}}^{(i-1)}$, which are assumed to be derived in the previous recursion step. The existence of this solution is guaranteed in general since an initial solution $\hat{x}^{(0)}$ can always be derived - at least from a first consistent set of observations $l^{(0)}$ with $\dim l^{(0)} = n_0 \times 1$, $n_0 \geq u$. In the following, only column-regular design matrices are assumed, which yield regular normal equations matrices. Note that comparable equations can be derived for column-singular matrices. Application of the least-squares principle to Eq. (11) leads to the extended normal equations system

$$\left( A^{(i-1)T} P^{(i-1)} A^{(i-1)} + A^{(i)T} P^{(i)} A^{(i)} \right) \hat{x}^{(i)} = A^{(i-1)T} P^{(i-1)} l^{(i-1)} + A^{(i)T} P^{(i)} l^{(i)} \qquad (12)$$

and hence to the new, updated vector of estimated parameters

$$\hat{x}^{(i)} = \left( A^{(i-1)T} P^{(i-1)} A^{(i-1)} + A^{(i)T} P^{(i)} A^{(i)} \right)^{-1} \left( A^{(i-1)T} P^{(i-1)} l^{(i-1)} + A^{(i)T} P^{(i)} l^{(i)} \right) \qquad (13)$$

which is based on all available observation information. The corresponding cofactor matrix consequently reads as

$$Q_{\hat{x}\hat{x}}^{(i)} = \left( A^{(i-1)T} P^{(i-1)} A^{(i-1)} + A^{(i)T} P^{(i)} A^{(i)} \right)^{-1}. \qquad (14)$$

The recursion is introduced through the matrix identity according to, e.g., Koch (1999, p. 37),

$$\left( A + B D^{-1} C \right)^{-1} = A^{-1} - A^{-1} B \left( D + C A^{-1} B \right)^{-1} C A^{-1} \qquad (15)$$

which allows Eq. (14) and thus Eq. (13) to be reformulated. This yields the updated vector of estimated parameters

$$\hat{x}^{(i)} = \hat{x}^{(i-1)} + Q_{\hat{x}\hat{x}}^{(i-1)} A^{(i)T} \left( Q_{ww}^{(i)} \right)^{-1} w^{(i)} \qquad (16)$$

with

$$Q_{\hat{x}\hat{x}}^{(i)} = Q_{\hat{x}\hat{x}}^{(i-1)} - Q_{\hat{x}\hat{x}}^{(i-1)} A^{(i)T} \left( Q_{ww}^{(i)} \right)^{-1} A^{(i)} Q_{\hat{x}\hat{x}}^{(i-1)}, \qquad (17)$$

$$Q_{ww}^{(i)} = Q_{ll}^{(i)} + A^{(i)} Q_{\hat{x}\hat{x}}^{(i-1)} A^{(i)T}, \qquad (18)$$

$$w^{(i)} = l^{(i)} - A^{(i)} \hat{x}^{(i-1)}. \qquad (19)$$

The vector $w^{(i)}$ quantifies the discrepancy between the new observations and the observation values which can be predicted from the available parameter values $\hat{x}^{(i-1)}$.
In total, Eqs. (16) to (19) are very compact as they avoid calculating the inverse of the complete normal equations matrix. Instead, the inverse of the cofactor matrix $Q_{ww}^{(i)}$ is needed, which has the same dimension as the number of new observations $l^{(i)}$. If the number of new observations in each step is rather small, the recursion sequence is quite efficient and hence well-suited for real-time applications. Computation time can be saved additionally if the matrix product $Q_{\hat{x}\hat{x}}^{(i-1)} A^{(i)T}$ is stored in an auxiliary matrix.
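The update of Eqs. (16) to (19) can be sketched as follows; the line-fit example, unit weights and numbers are illustrative assumptions chosen only to verify that one recursion step reproduces the batch solution:

```python
import numpy as np

# Sketch of the recursive update, Eqs. (16)-(19): fold new observations l_i
# with design A_i and cofactor Q_ll_i into the previous estimate (x_prev, Q_prev).

def recursive_update(x_prev, Q_prev, A_i, l_i, Q_ll_i):
    w = l_i - A_i @ x_prev                    # innovation vector, Eq. (19)
    Q_ww = Q_ll_i + A_i @ Q_prev @ A_i.T      # its cofactor matrix, Eq. (18)
    K = Q_prev @ A_i.T @ np.linalg.inv(Q_ww)  # stored auxiliary product / gain
    x_new = x_prev + K @ w                    # updated estimate, Eq. (16)
    Q_new = Q_prev - K @ A_i @ Q_prev         # updated cofactor, Eq. (17)
    return x_new, Q_new

# Batch ("all-at-once") solution with all four observations, unit weights ...
t = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(t), t])
l = np.array([1.9, 2.6, 2.9, 3.6])
x_batch = np.linalg.solve(A.T @ A, A.T @ l)

# ... versus an initial solution from the first three observations plus one update.
A0, l0 = A[:3], l[:3]
Q0 = np.linalg.inv(A0.T @ A0)
x0 = Q0 @ A0.T @ l0
x_rec, Q_rec = recursive_update(x0, Q0, A[3:], l[3:], np.eye(1))
```

Since only a single observation is added per step, the inverse in `recursive_update` is that of a 1x1 matrix, which illustrates why the recursion is cheap when the blocks of new data are small.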
In order to summarize the derivations in this section, it can be stated that the recursive formulation of least-squares estimation in a linear model is equivalent to the "all-at-once" estimation using the completely available observation data. The obtained algorithm is thoroughly based on the efficient update of a matrix inverse. Besides the recursive update of least-squares estimates in sequential observation procedures, the recursive elimination of incorrect observations from the estimation process is possible as well. In combination, the two techniques can be applied for polynomial filtering as a generalization of the moving-average technique.

4. Observation Imprecision

The algorithm for recursive parameter estimation derived in Section 3 relies on observation uncertainty of the random variability type only. If, however, imprecision has to be taken into account, the estimation equations have to be extended in a proper way; see, e.g., Kutterer and Neumann (2009) for the Kalman filter. The starting point is the reinterpretation of the observation vector $l$ as

$$l = y + g(s) \approx y + g(s_0) + \frac{\partial g}{\partial s}\Big|_{s=s_0} (s - s_0) = \bar{y} + G\,\Delta s, \quad \text{with} \quad \bar{y} := y + g(s_0), \quad G := \frac{\partial g}{\partial s}\Big|_{s=s_0} \qquad (20)$$

and with $y$ the random vector of originally obtained observations, which have to be reduced regarding physical or geometrical effects. These reductions are considered as additive; they are described as a function $g$ of basic influence parameters $s$ such as temperature or air pressure. The numerical values $s_0$ of these influence parameters are based on, e.g., actual observation, long-term experience, convention, experts' opinion or just rough estimates. As the values $s_0$ are fixed through all calculations, their influence on the estimation is deterministic. Remaining deviations $\Delta s = s - s_0$ are to be expected; this effect is comprised in the linear approximation $G\,\Delta s$ of the relation between the basic influence parameters and the observation values which are used in the model. Eq. (20) allows the separate introduction of random variability and imprecision.
Random variability is associated with the random vector $y$, mainly through the vcm $\Sigma_{yy} = \Sigma_{ll}$. Imprecision refers to the deviation $\Delta s$, in the following simply written $s$; it is modeled by means of a real interval vector

$$[s] = [-s_r, s_r] = \langle 0, s_r \rangle$$

with $s_r$ the interval radius as a measure of imprecision. Note that the term in brackets denotes the interval representation with lower and upper bounds, whereas the term in angle brackets denotes the midpoint-radius representation. From the viewpoint of applications it is reasonable to assume $s_m = 0$ for the interval midpoint, since justified knowledge about any deviation would imply more refined corrections, leading to the consequent validity of the assumption. This separation allows the identification of the corrected observation values $y$ with the midpoint of the interval vector, $l_m = y$, and of the remaining deterministic errors with $G\,s$, which are bounded by $[s]$. The total range of the observation vector $l$ with respect to $s$ is given as

$$\mathcal{L}_l = \left\{ \, l = y + G\,s \; : \; s \in [s] \, \right\}. \qquad (21)$$

This convex polyhedron is generally a true subset of the interval vector $[l] = \langle l_m, l_r \rangle = \langle y, |G|\,s_r \rangle$. The operator $|\cdot|$ applied to a matrix converts the matrix coefficients to their absolute values. Due to the
construction procedure, the interval vector $[l]$ represents the closest interval inclusion of $\mathcal{L}_l$ which is exact component by component. More information on intervals, interval vectors, arithmetic rules, etc., can be found in standard textbooks such as Alefeld and Herzberger (1983) or Jaulin et al. (2000). For a better understanding of possible models for the basic influence parameters, three examples for Eq. (20) are given here which are also relevant for the application example in Section 6. One possibility is the modeling of an individual additive parameter for each observation in terms of

$$\begin{bmatrix} l_1 \\ l_2 \\ \vdots \\ l_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} + \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{bmatrix}. \qquad (22)$$

An alternative is the modeling of one common additive parameter as an unknown observation offset as

$$\begin{bmatrix} l_1 \\ l_2 \\ \vdots \\ l_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} s. \qquad (23)$$

As a second alternative, a common multiplicative parameter can be modeled, describing the effect of an unknown drift with time $t$ or step index $i$ as

$$\begin{bmatrix} l_1 \\ l_2 \\ \vdots \\ l_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} + \begin{bmatrix} t_1 - t_0 \\ t_2 - t_0 \\ \vdots \\ t_n - t_0 \end{bmatrix} s. \qquad (24)$$

It is also possible to refer the multiplicative parameter to the magnitude of the observed value $y_i$, which can be required in case of an unknown scale factor in distance observations with respect to a reference length, such as

$$\begin{bmatrix} l_1 \\ l_2 \\ \vdots \\ l_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} + \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} s. \qquad (25)$$

In addition, all models can be composed for joint use. Many other models can be meaningful; see, e.g., Schön and Kutterer (2006) for a study on a refined interval modeling of observation and parameter uncertainty in GPS (Global Positioning System) data analysis. Note that the modeling of observation imprecision in terms of real intervals can be extended to fuzzy numbers and intervals, respectively, in a straightforward manner. It is well-known that due to the convexity of fuzzy intervals the respective $\alpha$-cuts can be identified as real intervals; see, e.g., Möller and Beer (2004). The technique of $\alpha$-cut discretization exploits this property.
For this reason the present discussion can easily be seen as a special case of a fuzzy approach, which is discussed here as an interval approach for the sake of simplicity but without loss of generality.
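The influence matrices $G$ of Eqs. (22) to (25) and the exact component-wise radius $l_r = |G|\,s_r$ of Eq. (21) can be sketched as follows; the observation values, epochs and radii are illustrative assumptions:

```python
import numpy as np

# Sketch of the imprecision models of Eqs. (22)-(25): each model is encoded by a
# matrix G mapping the influence-parameter radii s_r to the observation radii
# |G| s_r, cf. Eq. (21).

n = 4
y = np.array([1.0, 0.5, -0.2, 0.8])   # assumed observed values
t = np.array([0.0, 1.0, 2.0, 3.0])    # assumed observation times, t_0 = t[0]

G_individual = np.eye(n)              # Eq. (22): one parameter per observation
G_offset = np.ones((n, 1))            # Eq. (23): one common additive offset
G_drift = (t - t[0]).reshape(-1, 1)   # Eq. (24): drift proportional to time
G_scale = y.reshape(-1, 1)            # Eq. (25): scale proportional to y

def interval_radius(G, s_r):
    """Component-wise exact radius of the interval inclusion [l], Eq. (21)."""
    return np.abs(G) @ s_r

l_r_offset = interval_radius(G_offset, np.array([1e-3]))
```

Composed models in the sense of "joint use" simply stack the individual $G$ matrices column-wise and the corresponding radii in one vector $s_r$.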
5. Interval Extension of Recursive Estimation

If recursive estimation as introduced in Section 3 is applied to interval observation data as defined in Section 4, overestimation is the key problem which has to be solved. Overestimation arises from several causes. A first one was indicated in the discussion of Eq. (21), since the range of values of a linear mapping

$$\mathcal{R}_z = \left\{ \, z = F\,x \; : \; x \in [x] \, \right\} \qquad (26)$$

is a convex polyhedron in general but usually not an interval vector. Hence, interval mathematics is not closed with respect to a linear mapping. Moreover, the sub-distributivity property

$$(M F)\,[x] \subseteq M\,(F\,[x]) \qquad (27)$$

holds, which reflects the lacking associativity in case of matrix multiplications with interval vectors. Finally, already for single intervals the range of values can be overestimated, such as, e.g.,

$$[x] - [x] = [-2 x_r, 2 x_r] = \langle 0, 2 x_r \rangle \neq [0, 0] = \{ \, x - x \; : \; x \in [x] \, \} \qquad (28)$$

in case of dependencies between the intervals. This shows that the naïve application of the fundamental rules of interval arithmetic is not a proper way of evaluating the range of parameter values in recursive estimation, since it is crucial to avoid any possible cause of overestimation. Actually, the tightest interval inclusion of the actual range of values $\mathcal{R}_z = \{ M F\,x : x \in [x] \}$ is always given as

$$[z] = \langle z_m, z_r \rangle = \langle M F\,x_m, \, |M F|\,x_r \rangle. \qquad (29)$$

In case of Eq. (28) this yields the correct range of values, since with $M = \begin{bmatrix} 1 & -1 \end{bmatrix}$ and $F = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ one obtains

$$M F = 0 \quad \Rightarrow \quad [z] = \langle 0, |M F|\,x_r \rangle = [0, 0]. \qquad (30)$$

If recursive least-squares estimation as described by Eqs. (16) to (19) is considered, the extension is straightforward for the interval midpoints $\hat{x}_m$ and $l_m$, which yields

$$\hat{x}_m^{(i)} = \hat{x}_m^{(i-1)} + Q_{\hat{x}\hat{x}}^{(i-1)} A^{(i)T} \left( Q_{ww}^{(i)} \right)^{-1} w_m^{(i)} \qquad (31)$$

in a compact and efficient representation with

$$w_m^{(i)} = l_m^{(i)} - A^{(i)} \hat{x}_m^{(i-1)}, \qquad (32)$$

which is possible because of the symmetry of the intervals with respect to the midpoints.
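The overestimation of Eq. (28) and its removal via Eq. (29) can be illustrated numerically; the radius value below is an illustrative assumption:

```python
import numpy as np

# Numerical illustration of Eqs. (28)-(30): evaluating x - x term by term with
# naive interval arithmetic inflates the radius, while combining the matrices
# first (Eq. (29)) recovers the exact range [0, 0].

def naive_difference_radius(x_r):
    # [x] - [x] evaluated term by term: the radii add, giving <0, 2 x_r>
    return x_r + x_r

def exact_radius(M, F, x_r):
    # tightest inclusion of z = M F x, x in [x]: radius |M F| x_r, Eq. (29)
    return np.abs(M @ F) @ x_r

M = np.array([[1.0, -1.0]])   # forms the difference of the two copies of [x]
F = np.array([[1.0], [1.0]])  # maps the single interval into both copies
x_r = np.array([0.5])         # assumed interval radius

r_naive = naive_difference_radius(0.5)  # overestimated radius: 1.0
r_exact = exact_radius(M, F, x_r)       # exact radius: 0.0, since M F = 0
```
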
However, for the calculation of the interval radius $\hat{x}_r$ an alternative method is required, because in Eq. (31) overestimation occurs: the true range of values

$$\mathcal{R}_{\hat{x}}^{(i)} = \left\{ \, \hat{x}^{(i)} = \hat{x}^{(i-1)} + Q_{\hat{x}\hat{x}}^{(i-1)} A^{(i)T} \left( Q_{ww}^{(i)} \right)^{-1} \left( l^{(i)} - A^{(i)} \hat{x}^{(i-1)} \right) \; : \; l^{(i)} \in [l^{(i)}], \; \hat{x}^{(i-1)} \in [\hat{x}^{(i-1)}] \, \right\} \qquad (33)$$

is a convex polyhedron which is included by an interval vector. Through this inclusion the set $\mathcal{R}_{\hat{x}}^{(i)}$ is enlarged, and the additional values are taken into account in the next recursion step. Thus, the effect of overestimation accumulates very quickly. This problem is overcome effectively if the recursion is resolved by referring the recursion equations to the complete set of observations which are available at the respective recursion step. In order to explain and reduce the effect of dependencies, the observations on their part are referred to the original, independent values of the basic influence parameters $s$.
An equivalent result is available if Eq. (13) and Eq. (14) are used directly. This is possible because of the formal identity of the least-squares solution presented in Section 2, which uses all observations at once, and the recursive solution given in Section 3. Starting with

$$\hat{x}^{(i)} = Q_{\hat{x}\hat{x}}^{(i)} \left( A^{(i-1)T} P^{(i-1)} l^{(i-1)} + A^{(i)T} P^{(i)} l^{(i)} \right) \qquad (34)$$

the recursion is only needed for the update of the cofactor matrix $Q_{\hat{x}\hat{x}}^{(i)}$. If an identical vector $s$ is assumed for all recursion steps, Eq. (34) can be rewritten as

$$\hat{x}^{(i)} = Q_{\hat{x}\hat{x}}^{(i)} \left( A^{(i-1)T} P^{(i-1)} \left( y^{(i-1)} + G^{(i-1)} s \right) + A^{(i)T} P^{(i)} \left( y^{(i)} + G^{(i)} s \right) \right) \qquad (35)$$

using Eq. (20). Note that the matrix $G^{(i)}$ relates the new observations in the $i$-th recursion step to the constant vector of basic influence parameters, whereas the matrix $G^{(i-1)}$ recursively compiles the respective matrices $G$ of all previous steps. Reordering of Eq. (35) yields

$$\hat{x}^{(i)} = Q_{\hat{x}\hat{x}}^{(i)} \left( A^{(i-1)T} P^{(i-1)} y^{(i-1)} + A^{(i)T} P^{(i)} y^{(i)} + A^{(i-1)T} P^{(i-1)} G^{(i-1)} s + A^{(i)T} P^{(i)} G^{(i)} s \right) \qquad (36)$$

and

$$\hat{x}^{(i)} = Q_{\hat{x}\hat{x}}^{(i)} \left( A^{(i-1)T} P^{(i-1)} y^{(i-1)} + A^{(i)T} P^{(i)} y^{(i)} \right) + Q_{\hat{x}\hat{x}}^{(i)} \left( A^{(i-1)T} P^{(i-1)} G^{(i-1)} + A^{(i)T} P^{(i)} G^{(i)} \right) s \qquad (37)$$

and finally

$$\hat{x}^{(i)} = \hat{x}_m^{(i)} + Q_{\hat{x}\hat{x}}^{(i)} \left( A^{(i-1)T} P^{(i-1)} G^{(i-1)} + A^{(i)T} P^{(i)} G^{(i)} \right) s. \qquad (38)$$

Thus, the interval vector radius of the estimated parameters in the $i$-th recursion step is efficiently derived as

$$\hat{x}_r^{(i)} = \left| Q_{\hat{x}\hat{x}}^{(i)} \left( A^{(i-1)T} P^{(i-1)} G^{(i-1)} + A^{(i)T} P^{(i)} G^{(i)} \right) \right| s_r \qquad (39)$$

or

$$\hat{x}_r^{(i)} = \left| Q_{\hat{x}\hat{x}}^{(i)} M^{(i)} \right| s_r, \qquad (40)$$

respectively, with the recursively calculated matrix

$$M^{(i)} := A^{(i-1)T} P^{(i-1)} G^{(i-1)} + A^{(i)T} P^{(i)} G^{(i)}. \qquad (41)$$

6. Application Example

Recursive estimation is always relevant if the values of the estimated parameters are needed in real time or if the available data storage is limited. In order to demonstrate efficient recursive estimation in case of both random variability and imprecision of the data using interval mathematics, the observation of a damped harmonic oscillation is presented and discussed exemplarily. The principal observation configuration is shown in Figure 1.
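Before turning to the example, the radius recursion of Eqs. (39) to (41) can be sketched as follows. The line-fit design matrix, unit weights, common-offset model (Eq. (23)) and radius value are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Sketch of the overestimation-free radius computation, Eqs. (39)-(41): the
# recursion only updates M^(i) = M^(i-1) + A_i^T P_i G_i; the parameter radii
# then follow as |Q_xx M| s_r, exact component by component.

def radius_step(M_prev, A_i, P_i, G_i):
    """Recursive update of the matrix M^(i), Eq. (41)."""
    return M_prev + A_i.T @ P_i @ G_i

def parameter_radius(Q_xx, M, s_r):
    """Interval radii of the estimated parameters, Eqs. (39)-(40)."""
    return np.abs(Q_xx @ M) @ s_r

# Two recursion steps for a line fit with a common additive offset, unit weights.
t = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(t), t])
G = np.ones((4, 1))                                          # Eq. (23) model
M = radius_step(np.zeros((2, 1)), A[:3], np.eye(3), G[:3])   # initial solution
M = radius_step(M, A[3:], np.eye(1), G[3:])                  # one update step
Q_xx = np.linalg.inv(A.T @ A)
x_r = parameter_radius(Q_xx, M, np.array([1e-3]))
```

In this small sketch the common-offset imprecision is absorbed entirely by the intercept parameter, while the slope radius is exactly zero; a naive interval evaluation of Eq. (31) would instead produce a non-zero slope radius.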
The mathematical model is defined as
$$y(t) = y_0 + A \exp(-\delta t)\,\sin\!\left( \frac{2\pi}{T}\,t + \varphi \right) \qquad (42)$$

with $y(t)$ the spring length at time $t$, $A$ the oscillation amplitude, $\varphi$ the oscillation phase, $\delta$ the damping parameter, $T$ the oscillation period, and $y_0$ an offset parameter. The approximately known parameters $A$, $\varphi$, $\delta$, $T$, and $y_0$ have to be estimated from the observations $y_i$ at discrete times $t_i$. The parameter $T$ is functionally related to the spring constant.

Figure 1. Spring-damping model.

The values of the parameters chosen for the simulations in this section are presented in Table I; $\sigma$ denotes the constant individual standard deviation of all single observations $y_i$. The resulting oscillation is shown in Figure 2. It is identical for all following three simulations.

Table I. A priori values for the simulation of a damped harmonic oscillation

  $A = 1$, $\varphi = -\pi/10$, $\delta = 0.01$, $T = 10$, $y_0 = 0$, $\sigma = 0.001$

Figure 2. Damped spring oscillation observed with 100 points over 10 periods.
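The observation model of Eq. (42) with the a priori values of Table I can be sketched as follows; the symbols for phase and damping follow the reconstruction above, and the observation noise is omitted:

```python
import numpy as np

# Sketch of the damped harmonic oscillation of Eq. (42) with the Table I values
# A = 1, phi = -pi/10, delta = 0.01, T = 10, y0 = 0 (noise-free).

def spring_length(t, A=1.0, phi=-np.pi / 10, delta=0.01, T=10.0, y0=0.0):
    """Spring length y(t) according to Eq. (42)."""
    return y0 + A * np.exp(-delta * t) * np.sin(2.0 * np.pi * t / T + phi)

# 100 observation epochs over 10 periods, as in Figure 2
t = np.linspace(0.0, 100.0, 100)
y = spring_length(t)
```

Gaussian noise with standard deviation $\sigma = 0.001$ would be added to these values to reproduce the simulated observations used in the three runs below.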
Table II gives the numerical parameters of the imprecision models for the three simulations of the damped harmonic oscillation. The simulations were calculated based on recursive estimation with interval data. The differences between the simulations lie in the modeling of imprecision. Model I assumes individual interval radii of identical size for all observations; cf. Eq. (22). Model II assumes two interval components which are common for all observations: an (additive) offset reflecting the uncertainty about the zero reference (cf. Eq. (23)) and a (multiplicative) factor proportional to the observed spring length $y$, which refers to the epistemic uncertainty with respect to an etalon or a different length reference (cf. Eq. (25)). There are no individual terms as in Model I. Model III is based on Model II but comprises an additional (multiplicative) factor proportional to time $t$, which represents a drift; cf. Eq. (24).

Table II. Imprecision models for the simulations

  Model I:   Individual imprecision terms for all observations, $s_r = 10^{-4}$.
  Model II:  Two common imprecision terms for all observations, no individual terms: additive term $s_r = 10^{-3}$ and term proportional to the spring length $y$, $s_r = 10^{-4}$.
  Model III: As imprecision Model II, with an additional factor proportional to time $t$: $s_r = 10^{-4}$.

Figure 3 shows the results of the recursive estimation using Model I, Figure 4 for Model II, and Figure 5 for Model III. In each case the first ten epochs were combined for the estimation of the initial solution of the recursion. 100 observations were used in total; they are indicated in Figure 2. Based on the initial solution, the next observation was introduced to the estimation, and the estimated parameters and their cofactor matrix were updated using Eqs. (31), (32), (17) and (18). The interval radii of the estimated parameters were calculated using Eq. (39).
Random noise was added to each observation value as indicated in Table I; for all three simulations the same noisy observation data were used. In all three figures the recursively estimated parameters are indicated by light gray diamonds. The standard deviations of the estimated parameters are shown with dark gray diamonds for all epochs, symmetric to 0. The interval radii of the estimated parameters are shown with black diamonds, symmetric to 0. All figures show the decrease of the standard deviations of all estimated parameters tending towards zero with increasing number of observations and epochs, respectively. Like the estimated parameters, these values are identical for all three simulations, since the model for the standard deviations of the observations was identical as well. Thus, they confirm the general expectation of successively improved information about the non-observable system state. In contrast to the decrease of the standard deviations, there are several effects which reflect the systematic, deterministic character of the modeled imprecision terms. All given values are exact component by component, as explained in Section 5, meaning that they represent the correct range of values. In Figure 3 the imprecision of the damping parameter and of the oscillation period is reduced when more observations are available. However, this does not hold for the amplitude, the phase and the offset parameter. Due to the individually modeled observation interval radii there is a remaining epistemic uncertainty: $\hat{A}_r = 2.5 \cdot 10^{-4}$, $\hat{\varphi}_r = 2.5 \cdot 10^{-4}$, $\hat{y}_{0,r} = 1 \cdot 10^{-4}$. Looking at Figure 4, the situation changes completely. Phase, damping and period do not suffer from the modeled interval data uncertainty, which reflects two systematic effects: an unknown offset and scale
of the observation of the spring length. Obviously, this type of uncertainty is already eliminated in total in the initial solution of these three parameters. The modeled imprecision is absorbed by the amplitude, $\hat{A}_r = 1 \cdot 10^{-4}$, and by the offset, $\hat{y}_{0,r} = 1 \cdot 10^{-3}$. A possible explanation is that these two parameters represent absolute information, whereas for the other three parameters only relative information is required. In such a case effect differences are relevant, which are eliminated in case of identical effect magnitudes. Of course, a similar effect also occurs in case of Model I, where observation differences lead to identical interval radii different from 0. This reasoning is also supported by the results shown in Figure 5. Here, an additional scale imprecision component is modeled with respect to time. Hence, the observation interval radii additionally increase linearly with time. This is directly propagated to the offset imprecision. All other parameter imprecision measures show periodic effects, which indicate that, depending on the particular time of estimation with respect to the completeness of a period, the modeled systematic components are more or less eliminated - or not. Obviously, for the determination of the parameters there are better and worse conditions, which have to be known when imprecision is considered according to Model III. In any case, the presented methods provide a mathematical and algorithmic framework which allows adequate decisions.

7. Conclusions

Recursive parameter estimation based on the least-squares principle is an important task in the engineering disciplines if, e.g., the parameters have to be estimated and updated in real time. Although well known and well established as a classical estimation technique, problems arise in case of recursively propagating data uncertainty comprising both effects of random variability and imprecision due to remaining systematic errors.
Imprecise data can be effectively modeled using real intervals. Hence, if intervals are given for the original observations, the determination of the corresponding intervals of the estimated parameters can be considered as the task of calculating the range of values. If standard interval-mathematical rules are applied, the problem of overestimation is relevant. It can be overcome if the computations are referred to independent basis influence parameters which are assumed to cause the imprecision.

In this paper a method was introduced which allows recursive estimation using interval data in a very efficient way. The actual observation values are used as midpoints of symmetric intervals. Hence, this yields the same results as classical least-squares estimation. For the computation of the interval radii of the estimated parameters, the recursion based on the observation data is resolved. Instead, all observation intervals are introduced simultaneously into the algorithm. The recursion is referred to the update of the cofactor matrix of the estimated parameters and of a matrix product. Both can be computed efficiently, so that the final derivation of the interval radii is straightforward.

The method was demonstrated for the application of a damped harmonic oscillation. Based on three simulation runs with different models of data imprecision, the ways of propagating random variability and imprecision were shown and discussed.

Future work has to extend the presented algorithm to state-space filtering such as the Kalman filter. Although such an extension is already available through the resolution of the recursion, it is far from being efficient. Besides, the use of asymmetric observation intervals has to be considered, which is more appropriate than symmetric intervals for a realistic modeling of systematic errors due to, e. g., atmospheric refraction.
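The scheme outlined in the conclusions – a recursive update of the estimate and its cofactor matrix for the midpoints, combined with a non-recursive, simultaneous propagation of all observation radii – can be sketched as follows. This is a simplified stand-in, not the paper's exact algorithm: it uses a diffuse initialization, a straight-line model instead of the oscillation, and the naïve imprecision model with independent observation intervals, so the radii follow from x_r = |Q AᵀP| y_r:

```python
import numpy as np

def rls_update(x, Q, a, y, p=1.0):
    """One recursive least-squares step folding observation y = a^T x + noise
    (weight p) into the estimate x and its cofactor matrix Q."""
    q_a = Q @ a
    k = q_a / (1.0 / p + a @ q_a)            # gain vector
    return x + k * (y - a @ x), Q - np.outer(k, q_a)

def parameter_radii(A, P_diag, Q, y_radii):
    """Interval radii of the estimates via x_r = |Q A^T P| y_r, introducing all
    observation intervals simultaneously (overestimation-free only for
    independent interval observations)."""
    F = Q @ (A * np.asarray(P_diag)[:, None]).T  # F = Q A^T P, so x_hat = F y
    return np.abs(F) @ y_radii

# Illustrative straight-line model y = x0 + x1*t with noise-free midpoints
t = np.arange(10, dtype=float)
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 3.0 * t
x, Q = np.zeros(2), 1e6 * np.eye(2)          # diffuse initialization
for a_i, y_i in zip(A, y):
    x, Q = rls_update(x, Q, a_i, y_i)        # midpoint recursion
x_r = parameter_radii(A, np.ones_like(t), Q, y_radii=0.01 * np.ones_like(t))
```

The rank-one cofactor update in rls_update is the standard Sherman-Morrison form; the radii computation deliberately bypasses the recursion, mirroring the paper's resolved-recursion idea.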
Figure 3. Uncertainty propagation for the five model parameters over 100 epochs – imprecision model I (black: imprecision measures, dark gray: standard deviations, light gray: parameter variations due to simulated observation values)
Figure 4. Uncertainty propagation for the five model parameters over 100 epochs – imprecision model II (black: imprecision measures, dark gray: standard deviations, light gray: parameter variations due to simulated observation values)
Figure 5. Uncertainty propagation for the five model parameters over 100 epochs – imprecision model III (black: imprecision measures, dark gray: standard deviations, light gray: parameter variations due to simulated observation values)
References

Alefeld, G. and J. Herzberger. Introduction to Interval Computations. Academic Press, Boston, San Diego & New York, 1983.
Gelb, A. Applied Optimal Estimation. MIT Press, Cambridge, MA, 1974.
Jaulin, L., E. Walter, O. Didrit and M. Kieffer. Applied Interval Analysis. Springer, Berlin, 2000.
Koch, K. R. Parameter Estimation and Hypothesis Testing in Linear Models. Springer, Berlin & New York, 1999.
Koch, K. R. Introduction to Bayesian Statistics. Springer, Berlin, 2007.
Kutterer, H. and I. Neumann. Fuzzy extensions in state-space filtering. Proc. ICOSSAR 2009, Taylor and Francis Group, London, ISBN 978-0-415-47557-0, 1268-1275, 2009.
Möller, B. and M. Beer. Fuzzy Randomness. Springer, Berlin & New York, 2004.
Schön, S. and H. Kutterer. Using zonotopes for overestimation-free interval least-squares - some geodetic applications. Reliable Computing 11(2):137-155, 2005.
Schön, S. and H. Kutterer. Uncertainty in GPS networks due to remaining systematic errors: the interval approach. Journal of Geodesy 80(3):150-162, 2006.