COVARIANCE MODELS FOR GEODETIC APPLICATIONS OF COLLOCATION
Carlo Iapige De Gaetani
Politecnico di Milano, Department of Civil and Environmental Engineering (DICA)
Piazza Leonardo da Vinci, 32 20133 Milan, Italy
carloiapige.degaetani@polimi.it
KEY WORDS: Local Gravity Field, Data Integration, Collocation, Covariance Functions, Linear programming, Simplex Method
ABSTRACT:
The recent gravity mission GOCE aims at measuring the global gravity field of the Earth with unprecedented accuracy. An improved description of gravity means improved knowledge of e.g. ocean circulation, climate and sea-level change, with implications in areas such as geodesy and surveying. Through GOCE products, the low-medium frequency spectrum of the gravity field is properly estimated. This is enough to detect the main gravimetric structures, but local applications are still questionable. GOCE data can be integrated with other kinds of observations, having different features, frequency content, spatial coverage and resolution. Gravity anomalies (∆g) and geoid undulations (N) derived from radar-altimetry data (as well as GOCE T_rr) are all linear(ized) functionals of the anomalous gravity potential (T). For local modeling of the gravity field, this useful connection can be used to integrate the information of different observations, in order to obtain a better representation of the high frequencies, otherwise difficult to recover. The usual methodology is based on Collocation theory. The nodal problem of this approach is the correct modeling of the empirical covariance of the observations. Proper covariance models have been proposed by many authors. However, there are problems in fitting the empirical values when different functionals of T are combined. The problem of modeling covariance functions has been dealt with through an innovative methodology based on Linear Programming and the Simplex Algorithm. The results obtained during the test phase of this new methodology of covariance function modeling for local applications show improvements with respect to the software packages available until now.
1. INTRODUCTION
1.1 The Usual Procedure for Gravity Data Integration
The usual procedure for integrated local gravity field modeling is based on the combination of the Remove-Restore technique (RR) and Collocation theory. The RR principle is one of the most well-known strategies used for regional and local gravity field determination. It is based on the idea that the gravity field signal can be divided into long, medium and short wavelength components. Global models approximate the low-frequency part of the spectrum of the global gravitational signal and are suitable for a general representation of gravity at global scale. In the RR procedure, the effect of the gravity field sources outside the local area of interest is removed by subtracting a suitable global model from the observed local data. This corresponds to a high-pass filter: the result is a residual signal in which the long wavelengths, due to the masses of the deep Earth interior and upper mantle and to the contribution of a very smooth topography, have been removed. After reduction for a global model, high frequency components are still present in the local data in addition to the medium frequencies. They are essentially due to the signal contribution of the local topography, which is particularly strong at short wavelengths over rough terrain, so that global models are not able to represent them properly (Forsberg et al., 1994). This residual topographic signal is called Residual Terrain Correction (RTC). The residual data obtained by applying both the reduction of the global model and the corresponding RTC contain only the intermediate wavelengths to be used for local geoid modeling. All trends and systematic effects are removed from the original data, and the local residual gravity observations, related for the most part to local features, can be used in Collocation to estimate the medium-wavelength gravity signal. Subsequently, the geopotential model and the RTC-derived effects are restored to this residual component, obtaining the final local estimate (see Figure 1).
Figure 1. The Remove-Restore concept
Collocation is a statistical-mathematical theory applied to gravity field modeling problems. It is based on the assumption that the gravity observations can be considered as realizations of a weakly stationary and ergodic stochastic process. This theory has become more and more important because it allows combining many different kinds of gravity field observables in order to obtain a better estimate of the gravity field. With the great amount of heterogeneous data nowadays available, this approach has been fully accepted as the standard methodology for integrated gravity field modeling. In this framework the concept of spatial correlation, expressed as a covariance function, is introduced. The key is represented by the fact that quantities such as ∆g, N or T_rr are all linear(ized) functionals of the anomalous gravity potential (T), so that covariance functions can be propagated from one to another by applying the proper linear operators to the well-known analytical model of the covariance of the anomalous potential:
C_{TT}(P,Q) = \frac{(GM)^2}{R^2} \sum_{n=2}^{\infty} \left(\frac{R}{r_P}\right)^{n+1} \left(\frac{R}{r_Q}\right)^{n+1} \sigma_n^2 \, P_n(\cos \psi_{PQ})    (1)
where: R is the mean Earth radius,
G is the gravitational constant,
M is the total mass of the Earth,
r_P and r_Q are the radii of the points P and Q,
σ_n² are a set of adimensional degree variances,
P_n are the usual normalized Legendre polynomials,
ψ_PQ is the spherical distance between P and Q.
Degree variances are related to the coefficients a_nm and b_nm of the spherical harmonic expansion of T as:

\sigma_n^2 = \sum_{m=0}^{n} \left( a_{nm}^2 + b_{nm}^2 \right)    (2)
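As a minimal illustration of (2), the following sketch computes degree variances from a triangular array of spherical harmonic coefficients; the coefficients here are random mock values, not those of any real geopotential model:

```python
import numpy as np

nmax = 60
rng = np.random.default_rng(5)
# mock coefficient arrays a[n, m] and b[n, m]; entries with m > n are zeroed
a = np.tril(rng.standard_normal((nmax + 1, nmax + 1))) * 1e-7
b = np.tril(rng.standard_normal((nmax + 1, nmax + 1))) * 1e-7

# equation (2): sigma_n^2 = sum over m of (a_nm^2 + b_nm^2)
sigma2 = (a ** 2 + b ** 2).sum(axis=1)
print(sigma2[:3])
```

By construction the degree variances are non-negative, which is the property exploited later when constraining their scaling.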
The general formula of Least Squares Collocation (LSC) for
geodetic application (Barzaghi and Sansò, 1984) can be written
as:
\widehat{L_P(T)} = \sum_{i,j=1}^{N} L_P L_i (C_{TT}) \left\{ \left[ L_i L_j (C_{TT}) + \sigma_n^2 I \right]^{-1} \right\}_{ij} \left( L_j(T) + n_j \right)    (3)

where: L_i is the linear(ized) operator applied in P_i,
C_{TT} is the anomalous potential covariance matrix of the observations,
σ_n² is the noise variance of the observations.
In equation (3) the covariance expressed by the local data plays a fundamental role, and a correct modeling of the covariance functions is then required.
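The structure of the collocation estimate (3) can be sketched as follows. This is a minimal illustration with synthetic scalar data and a simple exponential covariance model standing in for the covariances propagated from C_TT; it is not the paper's actual computation:

```python
import numpy as np

def cov(d, c0=1.0, corr=0.5):
    # simple isotropic covariance model C(d) = C0*exp(-d/corr), standing in
    # for the propagated covariance functions appearing in equation (3)
    return c0 * np.exp(-d / corr)

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 2.0, size=(30, 2))                  # observation points
obs = np.sin(pts[:, 0]) + 0.05 * rng.standard_normal(30)   # noisy observations
sigma_n2 = 0.05 ** 2                                       # noise variance

# distances among observations and from the prediction point P
d_obs = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
p = np.array([1.0, 1.0])                                   # prediction point P
d_p = np.linalg.norm(pts - p, axis=1)

C_ll = cov(d_obs) + sigma_n2 * np.eye(30)   # [L_i L_j C_TT + sigma_n^2 I]
C_pl = cov(d_p)                             # L_P L_j C_TT
est = C_pl @ np.linalg.solve(C_ll, obs)     # collocation estimate at P
print(est)
```

In practice C_ll and C_pl are built by covariance propagation from the model of C_TT, and a matrix solve replaces the explicit inverse of (3).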
2. GRAVITY COVARIANCE FUNCTION MODELING
2.1 The State of the Art in Covariance Functions
Estimation
The application of LSC allows gravity field estimation by integration of different kinds of observations. This requires defining the covariance function of the anomalous potential as the kernel function. The covariance functions of the other linear functionals are obtained by covariance propagation. Generally the exact structure of the covariance function of T is unknown, and the way to proceed consists in fitting suitable analytical models to the empirical covariances of the available data. In local areas, the stationarity and ergodicity of the local field required by collocation theory are generally guaranteed by the removal from the data of the systematic effects related to the long wavelengths of the signal and to the topographic effect. In fact, local covariance functions are special cases of global covariance functions (Knudsen, 1987), in which the information content at wavelengths longer than the extent of the local area has been removed, and the information outside the area is assumed to vary in a way similar to the information within the area. In practice the observations are given at discrete points in the computation area, so that the so-called empirical covariance function is calculated by computing the products of all the possible combinations of available data and clustering them as a function of reciprocal distance. The mean value of each cluster represents the covariance value. Hence, the generic covariance at distance ψ is given by:
covariance at distance 𝜓 is given by:
𝐶 𝜓 = 𝐶 𝑘
∆!
!
=
!
!
!
!
𝑙! 𝑙!
!!
!!!
!
!!! (4)	
  	
  
where n and m are the numbers of l and l′ observations with reciprocal distance (k − 1)∆ψ ≤ ψ ≤ k∆ψ and k = 1, …, s. If l = l′, equation (4) is referred to as the auto-covariance of l. In this case, at zero distance the computed covariance is nothing else than the sum of the signal variance σ_signal² and the noise variance σ_noise².
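A minimal sketch of the empirical covariance computation (4), assuming scalar observations on a plane and Euclidean rather than spherical distances for simplicity:

```python
import numpy as np

def empirical_covariance(xy, l, dpsi, s):
    """Empirical covariance following (4): the products of all data pairs are
    clustered by reciprocal distance; the mean of each cluster is C(psi_k)."""
    i, j = np.triu_indices(len(l))              # all pairs, including i == j
    d = np.linalg.norm(xy[i] - xy[j], axis=1)   # reciprocal distances
    prod = l[i] * l[j]
    cov = np.zeros(s + 1)
    cov[0] = prod[d == 0].mean()                # C(0) = signal + noise variance
    for k in range(1, s + 1):                   # bin k: (k-1)*dpsi < d <= k*dpsi
        m = (d > (k - 1) * dpsi) & (d <= k * dpsi)
        cov[k] = prod[m].mean() if m.any() else np.nan
    return cov

rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(500, 2))
l = rng.standard_normal(500)                    # pure white noise as test data
C = empirical_covariance(xy, l, dpsi=1.0, s=5)
print(C)
```

For white-noise data, C(0) approaches the total data variance while the binned values at ψ > 0 stay close to zero: the decomposition into signal and noise variance discussed above.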
This information is useful for a proper estimate of the empirical covariance values. In fact, the width of the sample spacing ∆ψ must be properly selected, and a simple criterion for optimizing ∆ψ is to maximize the signal-to-noise ratio. If the noise of the observations is uncorrelated, for k ≥ 1 the empirical covariance represents, with some approximation, the covariance due only to the signal component of the observations. Hence the empirical covariance calculated at a distance close to zero (e.g. for k = 1) can be considered approximately equal to σ_signal². In this way the optimal ∆ψ can be estimated as the one that minimizes the difference C(0) − C(∆ψ/2) ≅ σ_noise². For a fair spatial distribution of data, the optimal ∆ψ is close to twice the mean spacing of the observations (Mussio, 1984). Once the local empirical covariance is computed, a theoretical model is fitted to the empirical values. Based on the global analytical formula (1), the usual model for the local covariance function is expressed as:
C(\psi_{PQ}) = \alpha \sum_{n=2}^{N_{GM}} e_n^2 \left( \frac{R^2}{r_P r_Q} \right)^{n+1} P_n(\cos \psi_{PQ}) + \sum_{n=N_{GM}+1}^{N_{max}} \sigma_n^2 \left( \frac{R^2}{r_P r_Q} \right)^{n+1} P_n(\cos \psi_{PQ})    (5)
Such a covariance model is the sum of two parts. The first comes from the commission error of the global model removed from the observations. It is given in terms of the sum of the error degree variances e_n² up to the maximum degree of the global model subtracted in the remove phase; the e_n² are computed as the sum of the variances of the estimated model coefficients. The second part represents the residual signal contained in the local data, expressed as a sum of degree variances. The error degree variances depend on the global model used; the coefficient α allows weighing their influence. Suitable models for the σ_n² in (5) have been proposed by Tscherning and Rapp (Tscherning and Rapp, 1974) as:
\sigma_n^2 = \frac{A}{(n-1)(n+B)}    (6)

\sigma_n^2 = \frac{A}{(n-1)(n-2)(n+B)}
By covariance propagation, these covariance models of the anomalous potential T can be used to get models for any functional of T. By tuning the model constants (e.g. A and α), these model functions can be fitted to the empirical covariances of the available data. The selected model covariance function can then be propagated to the other observed functionals using once again the covariance propagation formula. This is the usual procedure of covariance function modeling. The main drawbacks are connected to mismodeling: the proposed models are frequently unable to properly fit the empirical covariances, particularly when different functionals are considered. In a previously proposed approach (Barzaghi et al., 2009), regularized least squares adjustment has been applied for integrated covariance function modeling. Although more flexible than the Tscherning and Rapp model, it frequently runs into problems with the non-negativity condition on σ_n², which must hold since these are sums of squared quantities, as (2) shows.
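As an illustration of how a model of the form (5) with degree variances (6) is evaluated, the following sketch computes the covariance on the sphere (r_P = r_Q = R, so the radial factor reduces to one) using the Legendre recurrence. The constants A, B, α and the flat error degree variances are illustrative placeholders, not values of any specific global model:

```python
import numpy as np

def legendre_sum(coeff, t):
    """Sum of coeff[n] * P_n(t) for n >= 2, via the recurrence
    (n+1) P_{n+1}(t) = (2n+1) t P_n(t) - n P_{n-1}(t)."""
    nmax = len(coeff) - 1
    p_prev, p = np.ones_like(t), t          # P_0 and P_1
    total = np.zeros_like(t)
    for n in range(1, nmax):
        p_next = ((2 * n + 1) * t * p - n * p_prev) / (n + 1)
        p_prev, p = p, p_next
        total += coeff[n + 1] * p_next      # adds coeff[2..nmax] * P_n
    return total

N_GM, N_MAX = 180, 720
A, B, alpha = 4.0e-10, 24.0, 1.0            # illustrative constants
n = np.arange(N_MAX + 1)
sigma2 = np.zeros(N_MAX + 1)
sigma2[3:] = A / ((n[3:] - 1) * (n[3:] - 2) * (n[3:] + B))  # second model in (6)
e2 = np.zeros(N_MAX + 1)
e2[2:N_GM + 1] = 1.0e-14                    # flat mock error degree variances

psi = np.radians(np.linspace(0.0, 4.0, 100))
t = np.cos(psi)
# model (5) on the sphere: commission-error part + residual-signal part
C = alpha * legendre_sum(e2, t) + legendre_sum(np.where(n > N_GM, sigma2, 0.0), t)
print(C[0])
```

Since all degree variances are non-negative and |P_n| ≤ 1, the model attains its maximum at ψ = 0, as a covariance function must.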
2.2 An Introduction to Linear Programming
In order to overcome these limits, the proposed covariance
fitting method is based on linear programming. To describe this
method, we can consider the following system of linear
inequalities:
3x − 4y ≤ 15
x + y ≤ 12
x ≥ 0
y ≥ 0    (7)
An ordered pair (X, Y) such that all the inequalities are satisfied when (x = X, y = Y) is a solution of the system of linear inequalities (7). As an example, (2, 4) is one possible solution. Such problems in two variables can easily be sketched in a graph showing the set of all pairs (X, Y) solving the system (Figure 2).
Figure 2. Graphical sketch of a system of linear inequalities
There are several solutions to (7). Finding one of particular interest is an optimization process (Press et al., 1989). When the sought solution is the one maximizing (or minimizing) a given linear combination of the variables (called the objective function) subject to constraints expressed as in (7), the optimization process is called Linear Programming (LP). An example of linear programming could be finding the minimum value of the objective function:

z = x + y    (8)

with the constraints imposed in (7). Each of the inequalities in (7) separates the (x, y) plane into two parts, identifying the region containing all the pairs (X, Y) satisfying it. Outside this region at least one of the constraints is not satisfied, so the solution to (8) must be chosen among the pairs (X, Y) located inside it. This region, called the feasible region, contains all the possible solutions (feasible solutions) of the constraint system (Figure 3). One of them is the optimal solution that solves the LP problem. The fundamental theorem of linear programming assures that if a solution exists, it occurs at one of the vertices of the feasible region.
Figure 3. Two dimensional feasible region of constraints
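The toy problem (7)-(8) can be checked numerically. The sketch below uses SciPy's linprog routine, a library LP solver used here as a stand-in for a hand-rolled simplex implementation:

```python
from scipy.optimize import linprog

# minimize z = x + y  subject to  3x - 4y <= 15, x + y <= 12, x >= 0, y >= 0
res = linprog(c=[1, 1],
              A_ub=[[3, -4], [1, 1]],
              b_ub=[15, 12],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimum at the vertex (0, 0), z = 0
```

The optimum lies at a vertex of the feasible region, in agreement with the fundamental theorem of linear programming.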
For applications involving a large number of constraints or variables, numerical methods must be applied. One of them is the Simplex method. It provides a systematic way of examining the vertices of the feasible region to determine the optimal value of the objective function. The simplex method consists of elementary row operations on a particular matrix corresponding to the LP problem, called the tableau. The initial version of the tableau changes its form through iterative optimality checks. This operation is called pivoting: to form the improved solution, Gauss-Jordan elimination is applied to the pivot column using the pivot element (at the crossing of the pivot row and column). After improving the solution, the simplex algorithm starts a new iteration checking for further improvements. Each iteration changes the tableau, and the conditions of optimality or infeasibility of the solution to the proposed LP problem stop the algorithm. Based on the fundamental theorem of linear programming, the simplex method is able to verify the existence of at least one solution to the proposed LP problem. If one exists, the algorithm is also able to find the best numerical solution in a finite time.
2.3 A New Methodology of Covariance Functions Modeling
A new procedure, based on the simplex method and on analytical covariance function models similar to (5), has been devised for integrated covariance function modeling. It applies the simplex method to estimate suitable parameters of the model covariance functions in such a way as to fit simultaneously the empirical covariances of all the available local data. The main problem of standard local covariance modeling is the propagation of the covariance between functionals. Often, an estimated model covariance function showing a good agreement with the empirical values of one kind of data (typically gravity anomalies), when propagated to the covariances of other data observed in the same local area, shows a poor fit to the corresponding empirical covariances. Improvements are thus possible by taking into account all the available empirical covariances and finding a suitable combination of error degree variances and degree variances giving the best possible overall agreement. Other aspects that require attention are the non-negativity of the degree variances of the covariance model and the fact that the data can refer to different heights. The first can be handled through proper constraints in applying the simplex method, while the second requires an approximation: the empirical covariance estimation procedure does not take this height variation into account. Nevertheless, since reduced values are considered, one can assume that the data refer to a mean height sphere, either the mean Earth sphere or a sphere at satellite altitude when satellite data are considered. The basic model covariance function of the anomalous potential observed in two points thus becomes:
C_{TT}(\psi) = \frac{(GM)^2}{R^2} \left[ \sum_{n=2}^{N_{GM}} \left( \frac{R}{R+h} \right)^{2(n+1)} e_n^2 \, P_n(\cos\psi) + \sum_{n=N_{GM}+1}^{N_{max}} \left( \frac{R}{R+h} \right)^{2(n+1)} \sigma_n^2 \, P_n(\cos\psi) \right]    (9)
where h is the mean height of the data. This equation is a linear combination of the adimensional error degree variances e_n², up to the degree N_GM of reduction of the data, and of the N_max − N_GM adimensional degree variances σ_n² that complete the spectrum up to degree N_max and that have to be estimated.
As explained before, the simplex method requires the definition of a set of constraints expressed as linear inequalities. For a given ψ_i, there is a set of e_n² and σ_n² such that:

C_{TT}(\psi_i) = C_{TT}^{emp}(\psi_i)    (10)
Referring to more functionals and to the distance sampling of the empirical covariance, there are slightly different sets of e_n² and σ_n² such that:

C_{TT}(\psi_i) \cong C_{TT}^{emp}(\psi_i)
C_{NN}(\psi_i) \cong C_{NN}^{emp}(\psi_i)
C_{\Delta g \Delta g}(\psi_i) \cong C_{\Delta g \Delta g}^{emp}(\psi_i)    (11)
(11) can be translated into inequalities that the simplex method has to take into account. Each of them can be expressed in the following form:

C(\psi_i) \le C^{emp}(\psi_i) + \Delta C_i
C(\psi_i) \ge C^{emp}(\psi_i) - \Delta C_i    (12)

A collection of constraints like (12) can be used in the simplex method to handle the optimization problem, taking into account that the model and empirical covariances have to be as close as possible (Figure 4).
Figure 4. Constraints applied on model covariance function
The same can be done on multiple empirical covariance functions. With these constraints given on the estimated covariance functions, the simplex method is forced to find a unique set of adapted e_n² and σ_n², suitable for all the available empirical covariances (Figure 5).
Figure 5. Multiple constrains on model covariance functions
The adapted e_n² and σ_n² are obtained by applying suitable scaling factors to some reference models of e_n² and σ_n². Estimating each of them individually would in principle be possible, but such a problem would require at least N_max − 1 constraints, and a simplex tableau of such dimensions could be difficult to manage (some tests have been done up to degree 2190). For this reason the chosen model covariance function has the form:
C_{TT}(\psi) = \frac{(GM)^2}{R^2} \left[ \alpha \sum_{n=2}^{N_{GM}} \left( \frac{R}{R+h} \right)^{2(n+1)} e_n^2 \, P_n(\cos\psi) + \beta \sum_{n=N_{GM}+1}^{N_{max}} \left( \frac{R}{R+h} \right)^{2(n+1)} \sigma_n^2 \, P_n(\cos\psi) \right]    (13)
Once the constraints, which are linear functions of the unknown scale factors α and β, have been defined, a suitable objective function must be chosen in order to generate the tableau and apply the simplex algorithm. A possible condition is to impose that the model function be close to a proper feasible model, e.g. selecting e_n² and σ_n² so that they are close to the values implied by a global geopotential model. If the global sets are able to reproduce the empirical covariance values, this implies obtaining scale factors close to unity. It has therefore been decided to minimize an objective function that is simply the sum of these two adaptive coefficients. Many different objective functions have been tested, but this solution proved to be the most adequate. Therefore the linear programming problem that the simplex method has to solve is simply:

\min(\alpha + \beta)    (14)
with 𝛼 and 𝛽 subject to 𝑚 constraints expressed as:
C_{L(T)L'(T)}(\psi_i; \alpha, \beta) \le C_{L(T)L'(T)}^{emp}(\psi_i) + \Delta C_i
C_{L(T)L'(T)}(\psi_i; \alpha, \beta) \ge C_{L(T)L'(T)}^{emp}(\psi_i) - \Delta C_i    (15)
with i = 1, …, m, where L(T) and L′(T) are linear functionals of the anomalous potential T (such as ∆g, N or T_rr) and C_{L(T)L'(T)} the relative covariance functions propagated using (13). As explained previously, if all the m constraints are consistent and the proposed optimization problem has a solution, there is a feasible region of the solution space where different combinations of α and β are suitable. This covariance fitting methodology is numerically implemented through an iterative procedure. While the objective condition (14) is fixed, the conditions (15) are tuned in order to get the best possible fit with the empirical covariances. Referring to the feasible region, this procedure identifies an initial large feasible region (soft constraints, poor fit) and reduces this area until the vertices of the optimal solutions practically coincide with one another (strongest constraints, best fit). This process is sketched in Figure 6.
Figure 6. Impact of iterative constraints adjustment
on feasible region
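The core LP step of (14)-(15) can be sketched as follows. The vectors u and v stand in for the propagated commission-error and residual-signal parts of the model covariance evaluated at the sampled distances ψ_i; all numbers are synthetic illustrations, not the paper's data:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m = 40                                    # number of sampled distances psi_i
u = rng.uniform(0.5, 1.5, m)              # commission-error part of (13) at psi_i
v = rng.uniform(0.5, 1.5, m)              # residual-signal part of (13) at psi_i
C_emp = 1.3 * u + 0.8 * v                 # synthetic empirical covariances

dC = 0.05 * np.abs(C_emp)                 # tolerance band of the constraints (15)
# two-sided constraints: alpha*u_i + beta*v_i within C_emp_i -/+ dC_i
A_ub = np.vstack([np.column_stack([u, v]), -np.column_stack([u, v])])
b_ub = np.concatenate([C_emp + dC, -(C_emp - dC)])

# objective (14): min(alpha + beta), with alpha, beta >= 0
res = linprog(c=[1, 1], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
alpha, beta = res.x
print(alpha, beta)
```

In the full iterative procedure, the band ∆C would then be tightened while the problem stays feasible, shrinking the feasible region around a unique pair (α, β).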
Thus, in this procedure, the simplex method has been applied in a quite different way with respect to standard applications of linear programming. While in the usual application of the simplex algorithm the focus is on the objective function and the constraints are fixed, the devised procedure proceeds in the reverse way: the objective function loses most of its importance and the focus is on suitable constraints allowing the best possible agreement between the model covariance functions and the empirical values.
3. ASSESSMENT OF THE PROCEDURE
3.1 Preliminary Tests on Covariance Functions Modeling
with Simplex Method
Covariance function modeling with the simplex method has been implemented through a number of Fortran 95 subroutines, based on the concepts explained above. For brevity, the entire program will henceforth be called SIMPLEXCOV. The procedure is mainly composed of three steps:
1. analysis of the input data to assess the best sampling of the empirical covariance functions;
2. computation of the empirical covariances;
3. iterative computation of the best-fit model covariance functions with the simplex method.
The third step is an iterative procedure composed of two nested loops. In the external loop a set of suitable constraints on the covariance functions is defined. Based on these constraints, in the internal loop many optimization problems are generated and solved by the simplex method. In each iteration, the starting e_n²-σ_n² model (derived e.g. from global model coefficients) is slightly modified and a simplex algorithm solution is searched. If more than one set of varied degree variances is able to satisfy the constraints, an improved fit can be obtained by tightening the constraints in the external loop, and so on. On the contrary, if all the LP problems turn out to have no feasible solution in the internal loop, the constraints have to be softened in the external loop. The final iteration corresponds to a unique combination of sets of adapted degree variances that allows obtaining the best possible fit between the empirical covariances and the model covariance functions. Figure 7 and Figure 8 show an intermediate iteration: two different sets of adapted degree variances (Figure 7) produce very similar model covariance functions, both satisfying the same constraints on the empirical values (Figure 8).
Figure 7. Degree variances adapted in intermediate solution
Figure 8. Model covariance functions in intermediate solution
Covariance function modeling can be further improved by adding constraints or by making them more stringent, until only one solution is found (Figure 9).
Figure 9. Iterative covariance function modeling process
with Simplex method
This procedure has been tested both with simulated data and with real datasets. A fast test has been implemented on gravity data in
the NW part of Italy. In this test, a dataset of 4840 observed Free Air gravity anomalies ∆g_FA (hereafter denoted simply ∆g), homogeneously distributed in a 5°×5° area (with SW corner at coordinates 42°N, 7°E), has been reduced for the long wavelength component by removing the global model EGM2008 (Pavlis et al., 2008) up to degree 360 and the corresponding Residual Terrain Correction. Similarly, in the same area, 430 observations of geoid undulation N obtained by GPS-leveling have been reduced correspondingly. In a standard local geoid computation, N values are estimated based on ∆g values: the empirical covariance function of ∆g is computed and a suitable model covariance function is estimated so as to best represent the spatial correlation shown by the empirical values. The devised methodology has been applied following this estimation procedure. As a benchmark, the standard fitting method based on the COVFIT program of the GRAVSOFT package has been adopted (Tscherning, 2004). In Figure 10a the empirical covariance of the reduced gravity values and the COVFIT3 best-fit model are plotted. As one can see, the empirical values and the model covariance function are in suitable agreement. However, the model function is not able to reproduce the oscillating features of the empirical covariance. As a further cross-check, the model function of the residual geoid undulations, as derived from the gravity covariance model, has been compared with the N empirical covariance values (Figure 10b). The agreement is quite poor, as the model function displays a correlation pattern which, at short distance, is stronger than the one implied by the empirical covariance.
Figure 10. ∆𝑔 model covariance functions obtained by
COVFIT3 (a) and the same model propagated on 𝑁 (b)
SIMPLEXCOV has then been applied, in order to verify the results obtained by the new procedure and compare them with those of the standard covariance function modeling procedure. The result is plotted in Figure 11. The agreement between the model and the empirical covariance values is remarkable: the model is now able to fully reproduce the oscillations of the empirical values in a very detailed way (Figure 11a). As done before, the undulation model covariance implied by the gravity model covariance has been derived and compared with the empirical undulation covariance. The model seems to fit the empirical values a little better, even though the overall agreement is quite poor in this case too (Figure 11b).
Figure 11. ∆𝑔 model covariance functions obtained by
SIMPLEXCOV (a) and the same model propagated on 𝑁 (b)
The SIMPLEXCOV procedure has then been applied to both empirical functions in order to get a set of scaled error degree/degree variances allowing a common improved fit. Thus both the empirical covariances of the residual undulations and of the residual gravity have been considered in the fitting procedure. The results are summarized in Figure 12.
Figure 12. Integrated model covariance functions of ∆g (a)
and N (b) obtained by SIMPLEXCOV
As expected, a remarkable improvement is obtained. The main features of both empirical covariances are properly reproduced using a unique set of scaled error degree/degree variances. Figure 13 shows the comparison between the starting EGM08 error degree/degree variances and the final scaled solution. In order to fit the empirical values properly, larger error degree variances must be considered; on the other hand, a smaller scale factor is applied to the degree variances. This can be explained by saying that the formal error degree variances are too optimistic or, equivalently, that the commission error estimate is not realistic.
Figure 13. Integrated adapted degree variances obtained by
SIMPLEXCOV
This test shows the reliability of the solution obtained when the simplex method is applied to covariance function modeling problems. The adapted degree variances are able to reproduce the empirical covariance functions of the observed data better than the usual methodologies. In local applications, the integration of different data kinds is very useful, and suitable covariance functions are needed for computing proper collocation solutions. Thus the devised procedure seems to be a suitable tool for improving covariance function modeling.
3.2 Integrated Local Estimation of Gravity Field
The RR technique and LSC, based on the covariance functions computed by SIMPLEXCOV, have been combined in a unique computing procedure, in order to obtain local predictions of gravity field functionals. This procedure is able to integrate observations of ∆g_FA, N, T and T_rr for the local prediction of each one of them. The main steps of the procedure are illustrated in the following.
• REMOVE PHASE:
a) computation of the long wavelength component (global models);
b) computation of the short wavelength component (RTC);
c) computation of the residual values of all available data.
• COVARIANCE FUNCTION MODELING PHASE:
d) computation of the empirical covariance functions (for all available data);
e) computation of the integrated model covariance functions based on SIMPLEXCOV.
• LEAST SQUARES COLLOCATION PREDICTION PHASE:
f) selection and down-sampling of data if needed (to avoid numerical problems);
g) assembling of the covariance matrices to be used in the collocation formula;
h) inversion of this covariance matrix;
i) computation of the collocation estimate for the selected functional of the anomalous potential.
• RESTORE PHASE:
j) restoring of the long and short wavelengths to the predicted residuals.
The collocation formula (3) is applied as many times as the number of prediction points. For each computation a down-sampling process is applied to the input observed residuals. Many reasons have led to this choice. Covariance functions express the spatial correlation of the data, and the impact of the data on the estimate is tuned by the covariance correlation length, defined as the distance at which the covariance is half of its value at the origin. Hence, from the model covariance function it is possible to determine this correlation length, in such a way as to select only the most significant data, i.e. those close to the current prediction point. To this aim, only data at a distance from the estimation point smaller than the correlation length are considered (Figure 14).
Figure 14. Window selection and down sampling of data
This helps in selecting the most significant data, also reducing the computation time. In fact, in (3), the covariance matrix to be inverted has a dimension equal to the total amount of used data. The proposed selection criterion speeds up the computation without reducing precision, because only relevant (i.e. significantly correlated) data are used. However, for very dense datasets a further down-sampling process could be necessary. Once the significant data have been selected, if necessary the total amount is decreased taking into account the homogeneity of the spatial coverage around the prediction point. To maintain a homogeneous coverage of the prediction area, the selected data are subdivided into three rings centered on the prediction point, and the data in each ring are randomly down-sampled in such a way as to maintain an isotropic and homogeneous distribution of the input data. With this down-sampling procedure the computation speeds up, because the size of the matrix to be inverted decreases, and an effective estimation procedure is set up.
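The selection and down-sampling step described above can be sketched as follows; the three equal-width rings and the per-ring target count are assumptions for illustration, not the paper's actual settings:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.uniform(0.0, 5.0, size=(20000, 2))     # observation coordinates (deg)
p = np.array([2.5, 2.5])                          # prediction point
corr_len = 0.5                                    # correlation length from the model

# keep only data closer to the prediction point than the correlation length
d = np.linalg.norm(data - p, axis=1)
near = np.where(d < corr_len)[0]

# subdivide the selected data into three rings and down-sample each ring,
# keeping the angular distribution roughly isotropic and homogeneous
edges = np.array([0.0, corr_len / 3, 2 * corr_len / 3, corr_len])
per_ring = 100                                    # illustrative target per ring
keep = []
for k in range(3):
    ring = near[(d[near] >= edges[k]) & (d[near] < edges[k + 1])]
    if len(ring) > per_ring:
        ring = rng.choice(ring, per_ring, replace=False)
    keep.extend(ring.tolist())
print(len(keep))
```

The covariance matrix in (3) is then assembled only for the retained points, which bounds its dimension regardless of the density of the original dataset.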
3.3 Pre-Processing of GOCE Data
Before being integrated with the other available datasets, the
recent GOCE T_rr data freely distributed by ESA needed a
pre-processing step. In order to be used in the integrated
computing procedure, these data have been reduced for the long
wavelengths by subtracting a global model up to a given maximum
degree: the GOCE Direct Release 3 (GDIR3) global model (Bruinsma
et al., 2010) up to degree 180 has been subtracted from the
observed T_rr. These residuals cannot be used directly and need a
further processing step, since they are completely dominated by
corrupting high-frequency noise. Thus, a suitable filter must be
applied to the residual T_rr, so as to remove the noisy
high-frequency components. This can be done by low-pass filtering
the data with a suitable moving average. The cut-off frequency is
directly related to the amplitude of this spatial average, which
must therefore be set so as to remove mainly the noise component.
Generally, if the desired cut-off frequency is expressed in terms
of the degree n of a spherical harmonic expansion, the amplitude
∆ψ of the moving average can be determined as ∆ψ(°) = 180°/n.
Under this assumption, if a moving average of 0.7° is used,
residual signal components up to about degree 260 are not
filtered. This empirical filtering procedure has been applied to
the T_rr dataset, both track-wise (i.e. along each track) and
spatial-wise (i.e. with a moving average in space). Two filtered
T_rr datasets have therefore been used in the following analysis:
the spatially filtered data will be denoted as T_rr^sp, those
filtered along track as T_rr^tr.
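A minimal sketch of the ∆ψ = 180°/n rule and of an along-track moving average is given below. The function names and the brute-force windowing are illustrative assumptions, not the implementation actually used for the GOCE tracks.

```python
import numpy as np

def cutoff_degree(window_deg):
    """Spherical-harmonic degree associated with a moving average of
    amplitude window_deg, via the rule  delta_psi [deg] = 180 / n."""
    return 180.0 / window_deg

def moving_average_filter(values, positions_deg, window_deg):
    """Replace each sample by the mean of all samples of the same
    track lying within +/- window_deg/2 of it (brute force, O(n^2))."""
    v = np.asarray(values, float)
    s = np.asarray(positions_deg, float)
    half = window_deg / 2.0
    return np.array([v[np.abs(s - si) <= half].mean() for si in s])
```

With window_deg = 0.7 the rule gives n ≈ 257, consistent with the statement above that signal components up to about degree 260 survive the filter.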
3.4 The Calabrian Arc Area Test
A first test of the integration procedure was performed in an area
centered on the Calabrian Arc. This area has been chosen as test
area because the gravity signal shows high variability here, due to
the geophysical phenomenon of subduction in the Ionian Sea.
Furthermore, an aerogravimetric campaign sponsored by ENI S.p.A.
was carried out in this area in 2004. The main objective of this
test is the assessment of the new procedure in a situation where
dense satellite observations are integrated with other existing
data in order to obtain a local modeling of the gravimetric signal.
As mentioned, the benchmark for the predictions is a dataset of
1157 free-air gravity anomalies ∆g_fa acquired by aerogravimetry.
In the 9°×9° area with SW corner at (λ = 12°E, φ = 34°N), 22932
available ERS1 Geodetic Mission radar-altimetry observations and
16370 values of GOCE radial second derivatives have been selected.
The GOCE data have been processed as explained previously (§3.3).
All available data have been reduced with the GOCE Direct Release 3
global model up to degree 180. Furthermore, a corresponding
Residual Terrain Correction, computed with the TC program of the
GRAVSOFT package (Forsberg and Tscherning, 2008) and a detailed
3′′×3′′ DTM,
have been removed from the ∆g_fa and N data. The T_rr data have
been reduced only for the global model component, because the
topographic signal has proved not to be relevant at satellite
altitude (about 260 km) and is in any case filtered out in the
pre-processing phase. Different combinations of the residual N and
T_rr have been used in order to predict ∆g_aer at the
aerogravimetry points; the T_rr values have been considered both as
T_rr^sp and T_rr^tr. The predicted values have been compared with
the gravity anomalies observed by aerogravimetry, in order to
assess the computing procedure and to identify the combination that
best reproduces the observed data. Different model covariance
functions have been estimated by SIMPLEXCOV, based on the empirical
covariance values of the residual data. When the model covariance
function is computed taking into account only one functional, the
empirical values are well reproduced by the simplex algorithm; an
example is shown in Figure 15.
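The min-max fitting idea behind SIMPLEXCOV can be sketched as a linear program: model the covariance as a nonnegative combination of Legendre polynomials, C(ψ) = Σ_n σ_n² P_n(cos ψ), and minimize the largest absolute misfit to the empirical values. The sketch below is a deliberate simplification (the actual software works with Tscherning–Rapp-type degree-variance models and the cross-covariances of several functionals); the function name is an assumption, and SciPy's linprog stands in for the simplex solver.

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.optimize import linprog

def fit_covariance_lp(psi_deg, emp_cov, degrees):
    """Fit C(psi) = sum_n s_n P_n(cos psi), with s_n >= 0, to the
    empirical covariance values by minimising the maximum absolute
    misfit t.  The min-max problem is linear in (s, t), so it can be
    solved by linear programming."""
    x = np.cos(np.radians(psi_deg))
    P = np.column_stack([eval_legendre(n, x) for n in degrees])
    m, k = P.shape
    # variables: [s_0 .. s_{k-1}, t]; objective: minimise t
    c = np.r_[np.zeros(k), 1.0]
    # constraints:  P s - t <= emp_cov   and   -P s - t <= -emp_cov,
    # i.e.  |P s - emp_cov| <= t  row by row
    A = np.block([[P, -np.ones((m, 1))],
                  [-P, -np.ones((m, 1))]])
    b = np.r_[emp_cov, -emp_cov]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(0, None)] * (k + 1), method="highs")
    return res.x[:k], res.x[-1]   # degree variances, max misfit
```

The nonnegativity bounds on the s_n guarantee that the fitted model is a legitimate (positive-definite) covariance function, which is exactly the constraint that makes plain least-squares fitting unsuitable here.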
Figure 15. Model covariance function estimated with 𝑁 only
However, problems concerning the propagation of the model
covariances to the other functionals are still evident (Figure 16).
Figure 16. Model covariance function estimated with T_rr^sp only
and then propagated on N
On the other hand, if the new proposed integration procedure (§2.3)
is applied to jointly fit all the available empirical covariances,
there are sharp improvements which prove the effectiveness of this
approach. An example of the obtained results is presented in
Figure 17. In this case, i.e. the combination of the two
functionals N + T_rr^sp, the agreement between empirical values and
model functions is worse than that of model covariance functions
computed by fitting the covariance of a single functional (e.g.
Figure 15). However, if all the empirical covariances are used, an
overall suitable fit is reached.
Figure 17. Integrated model covariance function estimated
with N + T_rr^sp, propagated on N (a) and T_rr^sp (b)
This is indeed the procedure to be followed because, as proved
above, if the covariance function of only one observation type is
considered, the remarkable fit obtained does not carry over to the
covariances of the other existing data. Thus the integrated
estimate of a covariance model for T, to be then propagated to any
linear functional of T, seems to be the proper method. Once the
model covariance functions had been estimated, different tests were
performed: five combinations of data have been tested and the
corresponding predictions of ∆g estimated, with tests also on
parameters such as the amplitude of the selection window.
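The propagation invoked above can be written explicitly. If the covariance of T is expressed through degree variances c_n (cf. Tscherning and Rapp, 1974), each linear functional simply rescales the terms of the series; the standard formulas below use γ for normal gravity, R for a mean Earth radius and r_P, r_Q for the radial coordinates of the two points:

```latex
% covariance of T between points P and Q, with degree variances c_n
C_{TT}(P,Q)=\sum_{n} c_n\left(\frac{R^2}{r_P r_Q}\right)^{n+1}P_n(\cos\psi_{PQ})
% with N = T/\gamma,\; \Delta g = -\partial_r T - 2T/r,\; T_{rr} = \partial_r^2 T:
C_{NN}=\frac{1}{\gamma_P\,\gamma_Q}\,C_{TT}
C_{\Delta g\,\Delta g}=\sum_{n}\frac{(n-1)^2}{r_P r_Q}\,
  c_n\left(\frac{R^2}{r_P r_Q}\right)^{n+1}P_n(\cos\psi_{PQ})
C_{T_{rr}T_{rr}}=\sum_{n}\frac{(n+1)^2(n+2)^2}{r_P^2 r_Q^2}\,
  c_n\left(\frac{R^2}{r_P r_Q}\right)^{n+1}P_n(\cos\psi_{PQ})
```

Cross-covariances (e.g. between N and T_rr) take one rescaling factor from each functional; this is why a single covariance model for T, fitted jointly, remains mutually consistent across all the observation types.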
The results of test A have been obtained with a selection window of
0.7° width and a maximum of 4000 observations for each computation
point. Due to their spatial resolution, on average about 500 values
of T_rr and 600 values of N at sea have been used for each
estimation point. The results are summarized in Table 1, where the
notation ∆g_{N+Trr^sp}, for example, denotes the gravity anomalies
predicted from the combination of N and T_rr^sp data, and ∆g_aer
are the benchmark residuals obtained by aerogravimetry.
                          # points   E [mGal]   σ [mGal]
∆g_aer                      1157      -1.12      21.34
∆g_aer − ∆g_{Trr^sp}        1157      -9.36      23.52
∆g_aer − ∆g_{Trr^tr}        1157      -8.66      19.80
∆g_aer − ∆g_N               1157       8.31      15.64
∆g_aer − ∆g_{N+Trr^sp}      1157       4.72      14.97
∆g_aer − ∆g_{N+Trr^tr}      1157      -2.16      33.23
Table 1. Statistics of the differences between benchmark values
and predictions obtained in test A
The results obtained using the N data are only slightly improved by
the combined N + T_rr^sp estimation procedure; in particular, the
mean value of the residuals is nearly half the one obtained by
considering N only. On the other hand, no significant estimate can
be obtained using T_rr only (it seems, however, that slightly less
poor estimates are computed when the T_rr^tr data are taken as
input). This is an expected result: at a mean altitude of 260 km
the gravity signal is strongly attenuated, and the downward
continuation from the observed T_rr to ∆g_fa close to the Earth's
surface is an unstable procedure. As a general remark, one can also
state that biases are always present, and most of them can be
considered significant, the noise in the data being of the order of
2 mGal, as derived from cross-over statistics. Finally, an
unexpected anomalous result is obtained when combining
N + T_rr^tr, the anomaly being related to the standard deviation of
these residuals, which should be compared with the one derived
using the T_rr^tr data alone. At the moment this behavior has no
reasonable explanation.
To check for possible improvements, in test B it has been decided
to remove, during the estimation procedure, the mean value of each
windowed dataset for each estimation point. The results of this
test are shown in Table 2:
                          # points   E [mGal]   σ [mGal]
∆g_aer                      1157      -1.12      21.34
∆g_aer − ∆g_{Trr^sp}        1157     -10.72      22.49
∆g_aer − ∆g_{Trr^tr}        1157     -17.64      22.54
∆g_aer − ∆g_N               1157      -0.04      17.01
∆g_aer − ∆g_{N+Trr^sp}      1157      -1.19      17.46
∆g_aer − ∆g_{N+Trr^tr}      1157      -7.87      16.18
Table 2. Statistics of the differences between benchmark values
and predictions obtained in test B
The results of test B are similar to those obtained previously. We
only remark that, in this second case, the anomalous behavior of
the ∆g_{N+Trr^tr} residuals is not present, which supports the
previous statement on this anomaly. As before, only the combined
N + T_rr estimates lead to a significant reduction of the observed
aerogravimetry-derived signal.
Test C concerned the amplitude of the windowed selection of data
around each prediction point: a further test has been performed to
verify whether a wider window improves the accuracy. In this third
case only combinations involving the T_rr^sp data have been
considered. The selection window has been set to 1° width, with a
maximum of 4000 observations for each computation point. With these
settings, about 600 values of T_rr and 800 values of N at sea have
been considered for each prediction point. After selection, the
mean value of the selected data has been removed, as was done in
the second test. The summary of the results is shown in Table 3:
                          # points   E [mGal]   σ [mGal]
∆g_aer                      1157      -1.12      21.34
∆g_aer − ∆g_{Trr^sp}        1157      -9.38      20.37
∆g_aer − ∆g_N               1157      -0.04      17.01
∆g_aer − ∆g_{N+Trr^sp}      1157       2.26      15.94
Table 3. Statistics of the differences between benchmark values
and predictions obtained in test C
The residuals show the same patterns, and the statistics are
practically equivalent to those in Table 2. Thus, it can be
concluded that changing the window amplitude with respect to the
one chosen as a function of the correlation length of the data has
no significant impact on the estimates, but only on the computation
speed. Furthermore, generally speaking, it seems that no
significant improvements are obtained when using N + T_rr with
respect to the solution based on N only. This is certainly true
when the residuals are considered globally. However, by analyzing
the residuals track by track, one can see that slight improvements
are reached on the aerogravimetry tracks over land, where no
altimeter data are available and the N-based estimates are poorer
than those based on N + T_rr. This can be seen in the differences
between ∆g_N and ∆g_{N+Trr^sp}: the larger discrepancies are over
land areas (Figure 18). These discrepancies reflect the
contribution of T_rr to the ∆g_{N+Trr^sp} estimates, which are
closer to the aerogravimetric ∆g_aer, as shown by the slightly
better statistics of the residuals (see Table 3). This is indeed an
indication in favor of merging all the available data, even though
they are satellite derived.
Figure 18. Differences between the predicted values ∆g_{N+Trr^sp}
and ∆g_N obtained in test C
One interesting result has not yet been discussed. High-degree
global models (e.g. EGM08) allow the gravity signal to be computed
with high overall accuracy and resolution. Table 4 shows the
statistics of the differences obtained with EGM08 synthesized up to
different maximum degrees on the aerogravimetry tracks. When
computed up to its maximum degree 2160, EGM08 is practically able
to recover all the signal observed by aerogravimetry; in this case
the aerogravimetric campaign could be replaced by the global model,
if the required accuracy permits it. The prediction obtained with
the procedure developed in this work, combining radar-altimetry and
filtered GOCE data, is comparable with EGM08 computed up to degree
650 (see Table 3 and Table 4).
                            E [mGal]   σ [mGal]
∆g_aer − ∆g_EGM08(2160)      -0.28       3.74
∆g_aer − ∆g_EGM08(…)         -0.72       4.49
∆g_aer − ∆g_EGM08(…)          0.34       9.58
∆g_aer − ∆g_EGM08(…)          1.09      12.74
∆g_aer − ∆g_EGM08(650)        2.50      16.79
Table 4. Statistics of the differences between benchmark values
and EGM2008 computed up to different maximum degrees
This is an interesting result. In the central part of the
Mediterranean Sea gravity data are generally very dense and of high
quality, so it is not surprising that global models such as EGM08
represent the gravity field in this area with a resolution
comparable to aerogravimetry. However, in some areas of central
Africa or South America the availability of high-quality data is
much poorer; there, with such a data-integration procedure, and
especially with the great number of satellite observations now
available, a local computation could still obtain better results
than the simple evaluation of a global model. In conclusion,
reliable and meaningful results have been obtained in the discussed
tests using the proposed procedure.
4. CONCLUSIONS
The aim of this work was to set up a procedure able to combine
different functionals of the anomalous potential in order to obtain
predictions of any other functional of T. A computing procedure
based on the remove-restore technique and least squares collocation
has been devised. In particular, this procedure relies on an
innovative approach to covariance function modeling: a new
methodology based on the simplex algorithm and linear programming
theory made it possible to obtain model covariance functions in
good agreement with the empirical covariance values computed from
the reduced data. Remarkable results were obtained in the
integrated estimate of the model covariance functions when the
empirical values of the different available functionals were used.
The new methodology proved to be flexible and able to properly
reproduce all the main features of the given empirical covariances.
This result is only a part of a general procedure able to combine
functionals of the anomalous potential, such as gravity anomalies,
geoid undulations and second radial derivatives of T, for local
gravity field estimation. Another important conclusion has been
reached in the evaluation of the feasibility of a windowed
collocation estimate. Reliable results were obtained with the
windowed collocation procedure implemented in this work. In
particular, it has been shown that the window amplitude can be
selected on the basis of the correlation length of the covariance.
Tests have also been devised for reducing the number of data while
preserving homogeneity and isotropy of the data distribution around
each computation point. Preliminary tests have been performed to
validate this procedure from an algorithmic point of view. The
covariance models, through the least squares collocation procedure,
were able to give reliable predictions of ∆g. In the presented
tests, local predictions of ∆g have been compared with observed
values coming from aerogravimetry. These predictions have been
obtained with different combinations of radar-altimetry data and
GOCE data, in order to verify the recovery of the medium-high
frequencies contained in the gravity signal measured with this
technique. The best fit between empirical covariance values and
model covariances was reached with the combined estimation
procedure. This overcomes the fitting problem that frequently
occurs when only one empirical covariance is used to tune the model
covariance; as a matter of fact, it was proved that the joint
estimation option leads to a suitable fit of the selected
covariance models for the other available functionals as well. The
collocation estimates derived by combining the different datasets
according to this procedure proved to be consistent with the
observed gravity data. Further tests have been performed in
different areas and with different combinations of data; they are
not presented here for brevity, but they confirm the results
illustrated in this paper. As a final comment, one can say that the
devised method for covariance fitting, together with a windowed
collocation procedure, is able to give reliable standard estimates.
Moreover, the method for covariance fitting is effective and can
remarkably improve the coherence between empirical and model
covariances. Therefore, it can be considered a valuable tool in
further developments and applications of collocation.
5. REFERENCES
Barzaghi R., Tselfes N., Tziavos I.N., Vergos G.S., 2009. Geoid
and high resolution sea surface topography modelling in the
Mediterranean from gravimetry, altimetry and GOCE data: evaluation
by simulation. Journal of Geodesy, No. 83.
Barzaghi R., Sansò F., 1984. La collocazione in geodesia fisica.
Bollettino di Geodesia e Scienze Affini, anno XLIII.
Borghi A., 1999. The Italian geoid estimate: present state and
future perspectives. Ph.D. thesis, Politecnico di Milano.
Bruinsma S.L., Marty J.C., Balmino G., Biancale R., Foerste C.,
Abrikosov O., Neumayer H., 2010. GOCE Gravity Field
Recovery by Means of the Direct Numerical Method. presented
at the ESA Living Planet Symposium, 28 June-2 July 2010,
Bergen, Norway.
Forsberg R., 1994. Terrain effect in geoid computation. Lecture
notes, International School of the Determination and Use of the
Geoid, IGeS, Milano.
Heiskanen W.A., Moritz H., 1967. Physical Geodesy. Institute
of Physical Geodesy, Technical University, Graz, Austria.
Knudsen P., 1987. Estimation and modeling of the local
empirical covariance function using gravity and satellite
altimeter data. Bulletin Géodésique, No. 61.
Moritz H., 1980. Advanced Physical Geodesy. Wichmann,
Karlsruhe.
Mussio L., 1984. Il metodo della collocazione minimi quadrati e
le sue applicazioni per l’analisi statistica dei risultati delle
compensazioni. Ricerche di Geodesia Topografia e
Fotogrammetria, No. 4, CLUP.
Pavlis N.K., Holmes S.A, Kenyon S.C., Factor J.K., 2012. The
development and evaluation of the Earth Gravitational Model
2008 (EGM2008). Journal of Geophysical Research, Vol. 117.
Press W.H., Flannery B.P., Teukolsky S.A., Vetterling W.T.,
1989. Numerical Recipes: The Art of Scientific Computing.
Cambridge University Press.
Reguzzoni M., 2004. GOCE: the space-wise approach to
gravity field determination by satellite gradiometry. Ph.D.
thesis, Politecnico di Milano.
Tscherning C.C., Rapp R.H., 1974. Closed Covariance
Expressions for Gravity Anomalies, Geoid Undulations, and
Deflections of the Vertical Implied by Anomaly Degree-
Variance Models. Reports of the Department of Geodetic
Science, No. 208, The Ohio State University.
Tscherning C.C., 2004. Geoid determination by least squares
collocation using GRAVSOFT. Lecture notes, International
School of the Determination and Use of the Geoid, IGeS,
Milano.
Tselfes N., 2008. Global and local geoid modelling with GOCE
data and collocation. Ph.D. thesis, Politecnico di Milano.
6. ACKNOWLEDGMENTS
Prof. Riccardo Barzaghi and Dr. Noemi Emanuela Cazzaniga made a
fundamental contribution to this work and to my entire Ph.D.
course. My gratitude goes above all to them. Thanks.
 
Accuracy improvement of gnss and real time kinematic using egyptian network a...
Accuracy improvement of gnss and real time kinematic using egyptian network a...Accuracy improvement of gnss and real time kinematic using egyptian network a...
Accuracy improvement of gnss and real time kinematic using egyptian network a...
 

Recently uploaded

Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems S.M.S.A.
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
danishmna97
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
Neo4j
 
20240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 202420240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 2024
Matthew Sinclair
 
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
Safe Software
 
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
Neo4j
 
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
Edge AI and Vision Alliance
 
RESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for studentsRESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for students
KAMESHS29
 
Presentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of GermanyPresentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of Germany
innovationoecd
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
名前 です男
 
GenAI Pilot Implementation in the organizations
GenAI Pilot Implementation in the organizationsGenAI Pilot Implementation in the organizations
GenAI Pilot Implementation in the organizations
kumardaparthi1024
 
HCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAUHCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAU
panagenda
 
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
SOFTTECHHUB
 
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfUnlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Malak Abu Hammad
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
mikeeftimakis1
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
DianaGray10
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
Neo4j
 
Artificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopmentArtificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopment
Octavian Nadolu
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
Matthew Sinclair
 

Recently uploaded (20)

Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
 
20240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 202420240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 2024
 
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy Survey
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
 
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
 
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
 
RESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for studentsRESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for students
 
Presentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of GermanyPresentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of Germany
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
 
GenAI Pilot Implementation in the organizations
GenAI Pilot Implementation in the organizationsGenAI Pilot Implementation in the organizations
GenAI Pilot Implementation in the organizations
 
HCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAUHCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAU
 
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
 
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfUnlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
 
Artificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopmentArtificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopment
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
 

Covariance models for geodetic applications of collocation brief version

COVARIANCE MODELS FOR GEODETIC APPLICATIONS OF COLLOCATION

Carlo Iapige De Gaetani
Politecnico di Milano, Department of Civil and Environmental Engineering (DICA)
Piazza Leonardo da Vinci, 32, 20133 Milan, Italy
carloiapige.degaetani@polimi.it

KEY WORDS: Local Gravity Field, Data Integration, Collocation, Covariance Functions, Linear Programming, Simplex Method

ABSTRACT:

The recent gravity mission GOCE aims at measuring the global gravity field of the Earth with unprecedented accuracy. An improved description of gravity means improved knowledge of, e.g., ocean circulation, climate and sea-level change, with implications in areas such as geodesy and surveying. Through GOCE products, the low-to-medium frequency spectrum of the gravity field is properly estimated. This is enough to detect the main gravimetric structures, but local applications remain questionable. GOCE data can be integrated with other kinds of observations having different features, frequency content, spatial coverage and resolution. Gravity anomalies ($\Delta g$) and geoid undulations ($N$) derived from radar-altimetry data, as well as the GOCE second radial derivatives $T_{rr}$, are all linear(ized) functionals of the anomalous gravity potential ($T$). For local modeling of the gravity field, this connection can be used to integrate the information of different observations in order to obtain a better representation of the high frequencies, otherwise difficult to recover. The usual methodology is based on collocation theory. The nodal problem of this approach is the correct modeling of the empirical covariance of the observations. Proper covariance models have been proposed by many authors. However, there are problems in fitting the empirical values when different functionals of $T$ are combined. Here the problem of modeling covariance functions has been addressed with an innovative methodology based on Linear Programming and the Simplex Algorithm.
The results obtained during the test phase of this new methodology of modeling covariance functions for local applications show improvements with respect to the software packages available so far.

1. INTRODUCTION

1.1 The Usual Procedure for Gravity Data Integration

The usual procedure for integrated local gravity field modeling is based on the combination of the Remove-Restore technique (RR) and collocation theory. The RR principle is one of the best-known strategies for regional and local gravity field determination. It is based on the idea that the gravity field signal can be divided into long-, medium- and short-wavelength components. Global models approximate the low-frequency part of the spectrum of the global gravitational signal and are suitable for a general representation of gravity at global scale. In the RR procedure, the effect of the gravity field sources outside the local area of interest is removed by subtracting a suitable global model from the observed local data. This corresponds to a high-pass filter, and the result is a residual signal from which the long wavelengths, due to the masses in the deep Earth interior and upper mantle and to the contribution of a very smooth topography, have been removed. After reduction for a global model, high-frequency components are still present in the local data in addition to the medium frequencies. They are essentially due to the signal contribution of the local topography, which is particularly strong at short wavelengths for rough terrain, so that global models are not able to represent it properly (Forsberg et al., 1994). This residual topographic signal is called the Residual Terrain Correction (RTC). The residual data obtained by applying both the reduction of the global model and the corresponding RTC contain only the intermediate wavelengths to be used for local geoid modeling.
All trends and systematic effects are thus removed from the original data, and the local residual gravity observations, related mostly to local features, can be used in collocation to estimate the medium-wavelength gravity signal. Subsequently, the geopotential model and RTC effects are restored to this residual component, yielding the final local estimate (see Figure 1).

Figure 1. The Remove-Restore concept

Collocation is a statistical-mathematical theory applied to gravity field modeling problems. It is based on the assumption that the gravity observations can be considered as realizations of a weakly stationary and ergodic stochastic process. This theory has become more and more important because it allows combining many different kinds of gravity field observables in order to obtain a better estimate of the gravity field. With the great amount of heterogeneous data available nowadays, this approach has been fully accepted as the standard methodology for integrated gravity field modeling. In this framework the concept of spatial correlation, expressed as a covariance function, is introduced. The key point is that quantities such as $\Delta g$, $N$ or $T_{rr}$ are all linear(ized) functionals of the anomalous gravity potential ($T$), so that covariance functions can be propagated from one to another by applying the proper linear operators to the well-known analytical model of the covariance of the anomalous potential:

$$C_{TT}(P,Q) = \frac{(GM)^2}{R^2} \sum_{n=2}^{\infty} \left(\frac{R}{r_P}\right)^{n+1} \left(\frac{R}{r_Q}\right)^{n+1} \sigma_n^2 \, P_n(\cos\psi_{PQ}) \qquad (1)$$
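The series (1) is straightforward to evaluate numerically once a set of degree variances is fixed. The following Python sketch is purely illustrative (it is not part of the Fortran implementation described later, and the $(GM)^2/R^2$ factor is assumed to be absorbed into the degree variances); it uses the standard three-term Legendre recurrence:

```python
import math

def legendre(nmax, x):
    """Legendre polynomials P_0..P_nmax at x via the three-term recurrence."""
    p = [1.0, x]
    for n in range(2, nmax + 1):
        p.append(((2 * n - 1) * x * p[n - 1] - (n - 1) * p[n - 2]) / n)
    return p[:nmax + 1]

def cov_TT(psi, sigma2, r_p, r_q, R=6371000.0):
    """Covariance of T between points at radii r_p, r_q separated by the
    spherical distance psi (radians), per equation (1); sigma2[n] holds
    the degree variances (degrees 0 and 1 are ignored)."""
    p = legendre(len(sigma2) - 1, math.cos(psi))
    return sum((R / r_p) ** (n + 1) * (R / r_q) ** (n + 1) * sigma2[n] * p[n]
               for n in range(2, len(sigma2)))
```

At $\psi = 0$ and $r_P = r_Q = R$ the sum reduces to $\sum_n \sigma_n^2$, i.e. the signal variance, which is a quick sanity check for any implementation.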
where:
$R$ is the mean Earth radius,
$G$ is the gravitational constant,
$M$ is the total mass of the Earth,
$r_P$ and $r_Q$ are the radii of the points $P$ and $Q$,
$\sigma_n^2$ are a set of adimensional degree variances,
$P_n$ are the usual normalized Legendre polynomials,
$\psi_{PQ}$ is the spherical distance between $P$ and $Q$.

The degree variances are related to the coefficients $a_{nm}$ and $b_{nm}$ of the spherical harmonic expansion of $T$ as:

$$\sigma_n^2 = \sum_{m=0}^{n} \left(a_{nm}^2 + b_{nm}^2\right) \qquad (2)$$

The general formula of Least Squares Collocation (LSC) for geodetic applications (Barzaghi and Sansò, 1984) can be written as:

$$\widehat{L_P(T)} = \sum_{i,j=1}^{N} L_P L_i C_{TT} \left[L_i L_j C_{TT} + \sigma_\nu^2 I\right]^{-1}_{ij} \left(L_j(T) + n_j\right) \qquad (3)$$

where:
$L_i$ is the linear(ized) operator applied at $P_i$,
$C_{TT}$ is the anomalous potential covariance matrix of the observations,
$\sigma_\nu^2$ is the noise variance of the observations.

In equation (3) the covariance expressed by the local data plays a fundamental role, and a correct modeling of their covariance functions is therefore required.

2. GRAVITY COVARIANCE FUNCTION MODELING

2.1 The State of the Art in Covariance Function Estimation

The application of LSC allows gravity field estimation by integration of different kinds of observations. This requires defining the covariance function of the anomalous potential as the kernel function. The covariance functions of the other linear functionals are obtained by covariance propagation. Generally the exact structure of the covariance function of $T$ is unknown, and one proceeds by fitting suitable analytical models to the empirical covariances of the available data. In local areas, the stationarity and ergodicity of the local field required by collocation theory are generally guaranteed by the removal from the data of the systematic effects related to the long wavelengths of the signal and to the topographic effect.
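As a minimal numerical illustration of the estimator (3), the sketch below predicts a functional value from observed ones, assuming a simple one-dimensional exponential covariance kernel chosen only for demonstration (the kernel, solver and point layout are not from the paper):

```python
import math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting, for small systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def lsc_predict(obs_pts, obs_vals, pred_pt, cov, noise_var=0.0):
    """LSC estimate: C_po (C_oo + noise_var * I)^(-1) y, as in equation (3)."""
    n = len(obs_pts)
    Coo = [[cov(obs_pts[i], obs_pts[j]) + (noise_var if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    w = solve(Coo, obs_vals)
    return sum(cov(pred_pt, obs_pts[j]) * w[j] for j in range(n))
```

With zero noise variance, the prediction at an observation point returns the observed value exactly, a classical property of collocation.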
In fact, local covariance functions are a special case of global covariance functions (Knudsen, 1987) in which the information content at wavelengths longer than the extent of the local area has been removed, and the information outside the area is assumed to vary in a way similar to the information within the area. In practice the observations are given at discrete points in the computation area, so that the calculation of the so-called empirical covariance function is done by computing the products of all possible combinations of the available data and clustering them as a function of reciprocal distance. The mean value of each cluster represents the covariance value. Hence, the generic covariance at distance $\psi$ is given by:

$$C(\psi) = C\!\left(\left(k - \tfrac{1}{2}\right)\Delta\psi\right) = \frac{1}{n\,m}\sum_{i=1}^{n}\sum_{j=1}^{m} l_i\, l'_j \qquad (4)$$

where $n$ and $m$ are the numbers of observations $l$ and $l'$ with reciprocal distance $(k-1)\Delta\psi \le \psi \le k\Delta\psi$ and $k = 1, \dots, s$. If $l = l'$, equation (4) gives the auto-covariance of $l$. In this case, at zero distance the computed covariance is nothing else than the sum of the signal variance $\sigma_{signal}^2$ and the noise variance $\sigma_{noise}^2$. This information is useful for a proper estimate of the empirical covariance values. In fact the width of the sampling interval $\Delta\psi$ must be properly selected, and a simple criterion for optimizing $\Delta\psi$ is to maximize the signal-to-noise ratio. If the observation noise is uncorrelated, for $k \ge 1$ the empirical covariance represents, with some approximation, the covariance due only to the signal component of the observations. Hence the empirical covariance at distances close to zero (e.g. for $k = 1$) can be considered approximately equal to $\sigma_{signal}^2$. In this way the optimal $\Delta\psi$ can be estimated as the one that minimizes the difference $C(0) - C(\Delta\psi/2) \cong \sigma_{noise}^2$. For a fair spatial distribution of the data, the optimal $\Delta\psi$ is close to twice the mean spacing of the observations (Mussio, 1984).
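Equation (4) amounts to binning the pairwise products by distance. A possible Python sketch follows; the planar coordinates and the 0-based bin convention (bin $k$ collecting pairs with $k\Delta\psi \le d < (k+1)\Delta\psi$, slightly different from the 1-based classes in the text) are illustrative assumptions:

```python
import math

def empirical_cov(points, dpsi, nbins):
    """Empirical auto-covariance per equation (4): the products l_i*l_j of
    all pairs of point values are clustered by distance, and each bin mean
    is the covariance estimate (bin 0 therefore contains C(0))."""
    sums = [0.0] * nbins
    counts = [0] * nbins
    for x1, y1, v1 in points:
        for x2, y2, v2 in points:
            d = math.hypot(x2 - x1, y2 - y1)
            k = int(d // dpsi)
            if k < nbins:
                sums[k] += v1 * v2
                counts[k] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]
```

Bin 0 then estimates $C(0) = \sigma_{signal}^2 + \sigma_{noise}^2$, while the first nonzero-distance bins approximate the signal covariance, as discussed above.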
Once the local empirical covariance is computed, a theoretical model is fitted to the empirical values. Based on the global analytical formula (1), the usual model for the local covariance function is expressed as:

$$C(\psi_{PQ}) = \alpha \sum_{n=2}^{N_{max}} e_n^2 \left(\frac{R^2}{r_P r_Q}\right)^{n+1} P_n(\cos\psi_{PQ}) + \sum_{n=N_{max}+1}^{N_{MAX}} \sigma_n^2 \left(\frac{R^2}{r_P r_Q}\right)^{n+1} P_n(\cos\psi_{PQ}) \qquad (5)$$

This covariance model is the sum of two parts. The first comes from the commission error of the global model removed from the observations. It is given in terms of the error degree variances $e_n^2$, summed up to the maximum degree $N_{max}$ of the global model subtracted in the remove phase; the $e_n^2$ are computed as the sums of the variances of the estimated model coefficients. The second part represents the residual part of the signal contained in the local data and is expressed as a sum of degree variances up to a maximum degree $N_{MAX}$. The error degree variances depend on the global model used; the coefficient $\alpha$ allows weighing their influence. Suitable models for the $\sigma_n^2$ in (5) have been proposed by Tscherning and Rapp (Tscherning and Rapp, 1974), e.g.:

$$\sigma_n^2 = \frac{A}{(n-1)(n-2)} \qquad\qquad \sigma_n^2 = \frac{A}{(n-1)(n-2)(n+B)} \qquad (6)$$

By covariance propagation, these covariance models of the anomalous potential $T$ can be used to obtain models for any functional of $T$. By tuning the model constants (e.g. $A$ and $\alpha$), these model functions can be fitted to the empirical covariances of the available data. The selected model covariance function can then be propagated to the other observed functionals using, once again, the covariance propagation formula. This is the usual procedure of covariance function modeling. Its main drawbacks are connected to mismodeling: the proposed models are frequently unable to properly fit the empirical covariances, particularly when
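A hypothetical helper for the second model in (6), with $B = 24$ used only as a common illustrative value:

```python
def tr_degree_variance(n, A, B=24.0):
    """Tscherning-Rapp type degree variance, second model in equation (6):
    sigma_n^2 = A / ((n-1)(n-2)(n+B)), defined for n >= 3 since the
    denominator vanishes at n = 1 and n = 2."""
    if n < 3:
        raise ValueError("model defined for n >= 3")
    return A / ((n - 1) * (n - 2) * (n + B))
```

The roughly $n^{-3}$ decay of these degree variances keeps series such as (1) convergent while leaving realistic power at high degrees.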
different functionals are considered. In a previously proposed approach (Barzaghi et al., 2009), regularized least squares adjustment was applied for integrated covariance function modeling. Although more flexible than the Tscherning and Rapp model, it frequently runs into problems with the non-negativity condition on the $\sigma_n^2$, which, being sums of squared quantities as (2) shows, must be non-negative.

2.2 An Introduction to Linear Programming

In order to overcome these limits, the proposed covariance fitting method is based on linear programming. To describe this method, consider the following system of linear inequalities:

$$3x - 4y \le 15, \qquad x + y \le 12, \qquad x \ge 0, \qquad y \ge 0 \qquad (7)$$

An ordered pair $(X, Y)$ such that all the inequalities are satisfied for $(x = X, y = Y)$ is a solution of the system of linear inequalities (7); for example, $(2, 4)$ is one possible solution. Such two-variable problems can easily be sketched in a graph showing the set of all the pairs $(X, Y)$ solving the system (Figure 2).

Figure 2. Graphical sketch of a system of linear inequalities

There are several solutions to (7). Finding one of particular interest is an optimization process (Press et al., 1989). When this solution is the one maximizing (or minimizing) a given linear combination of the variables (called the objective function) subject to constraints expressed as in (7), the optimization process is called Linear Programming (LP). An example of linear programming is finding the minimum value of the objective function:

$$z = x + y \qquad (8)$$

subject to the constraints imposed in (7). Each of the inequalities in (7) separates the $(x, y)$ plane into two parts, identifying the region containing all the pairs $(X, Y)$ satisfying it. Outside this region at least one of the constraints is not satisfied, so the solution to (8) must be chosen among the pairs $(X, Y)$ located inside it.
This region, called the feasible region, contains all the possible solutions (feasible solutions) allowed by the constraint system (Figure 3). One of them is the optimal solution that solves the LP problem. The fundamental theorem of linear programming assures that if a solution exists, it occurs at one of the vertices of the feasible region.

Figure 3. Two-dimensional feasible region of constraints

For applications involving a large number of constraints or variables, numerical methods must be applied. One of them is the simplex method. It provides a systematic way of examining the vertices of the feasible region to determine the optimal value of the objective function. The simplex method consists of elementary row operations on a particular matrix corresponding to the LP problem, called the tableau. The initial version of the tableau changes form through an iterative optimality check. This operation is called pivoting: to form the improved solution, Gauss-Jordan elimination is applied to the pivot column using the pivot element (at the crossing of the pivot row and column). After improving the solution, the simplex algorithm starts a new iteration checking for further improvements. Each iteration changes the tableau, and the algorithm stops when the conditions of optimality or of unfeasibility of the proposed LP problem are reached. Based on the fundamental theorem of linear programming, the simplex method is able to verify the existence of at least one solution of the proposed LP problem. If one exists, the algorithm is also able to find the best numerical solution in a finite time.

2.3 A New Methodology of Covariance Function Modeling

A new procedure, based on the simplex method and on analytical covariance function models similar to (5), has been devised for integrated covariance function modeling. It applies the simplex method to estimate suitable parameters of the model covariance functions in such a way as to fit simultaneously the empirical covariances of all the available local data.
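For the two-variable toy problem (7)-(8), the fundamental theorem can be checked by brute force: intersect all pairs of constraint boundaries, keep the feasible intersections, and scan the objective over them. This is an illustrative check, not the simplex method itself:

```python
from itertools import combinations

# each constraint as a*x + b*y <= c; x >= 0 and y >= 0 are rewritten
# as -x <= 0 and -y <= 0 to share the same form
CONSTRAINTS = [(3.0, -4.0, 15.0), (1.0, 1.0, 12.0),
               (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]

def feasible_vertices(cons, tol=1e-9):
    """Intersections of pairs of boundary lines satisfying all constraints."""
    verts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < tol:
            continue  # parallel boundaries never intersect
        x = (c1 * b2 - c2 * b1) / det  # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + tol for a, b, c in cons):
            verts.append((x, y))
    return verts
```

Scanning $z = x + y$ over the resulting vertices gives the minimum 0 at (0, 0) and the maximum 12, attained at both (0, 12) and (9, 3), i.e. along a whole edge of the feasible region.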
The main problem of standard local covariance modeling is the propagation of the covariance between functionals. Often, an estimated model covariance function showing a good agreement with the empirical values of one kind of data (typically gravity anomalies), when propagated to the other covariances of the data observed in the same local area, shows a poor fit to the corresponding empirical covariances. Improvements are thus possible by taking into account all the available empirical covariances and finding a suitable combination of error degree variances and degree variances giving the best possible overall agreement. Other aspects that must be taken into account are the non-negativity of the degree variances of the covariance model and the fact that the data can refer to different heights. The first can be handled through proper constraints when applying the simplex method, while the second requires an approximation: the empirical covariance estimation procedure does not take these height variations into account. Nevertheless, since reduced values are considered, one can assume that the data refer to a mean-height sphere, either the mean Earth sphere or a sphere at satellite altitude when satellite data are considered. The basic model covariance function of the anomalous potential observed at two points thus becomes:
$$C_{TT}(\psi) = \frac{(GM)^2}{R^2}\left[\sum_{n=2}^{N_{max}} \left(\frac{R}{R+h}\right)^{2(n+1)} e_n^2\, P_n(\cos\psi) + \sum_{n=N_{max}+1}^{N_{MAX}} \left(\frac{R}{R+h}\right)^{2(n+1)} \sigma_n^2\, P_n(\cos\psi)\right] \qquad (9)$$

where $h$ is the mean height of the data. This equation is a linear combination of the $N_{max} - 2$ adimensional error degree variances $e_n^2$ up to the degree $N_{max}$ of reduction of the data, and of the $N_{MAX} - N_{max}$ adimensional degree variances $\sigma_n^2$ completing the spectrum up to degree $N_{MAX}$, which have to be estimated. As explained before, the simplex method requires the definition of a set of constraints expressed as linear inequalities. For a given $\psi_k$, there is a set of $e_n^2$ and $\sigma_n^2$ such that:

$$C_{TT}(\psi_k) = C_{TT}^{emp}(\psi_k) \qquad (10)$$

Referring to more functionals and to the distance sampling of the empirical covariance, there are slightly different sets of $e_n^2$ and $\sigma_n^2$ such that:

$$C_{NN}(\psi_k) \cong C_{NN}^{emp}(\psi_k), \qquad C_{T_{rr}T_{rr}}(\psi_k) \cong C_{T_{rr}T_{rr}}^{emp}(\psi_k), \qquad C_{\Delta g \Delta g}(\psi_k) \cong C_{\Delta g \Delta g}^{emp}(\psi_k) \qquad (11)$$

The relations (11) can be translated into inequalities that the simplex method has to take into account. Each of them can be expressed in the following form:

$$C(\psi_k) \le C^{emp}(\psi_k) + \Delta C_k, \qquad C(\psi_k) \ge C^{emp}(\psi_k) - \Delta C_k \qquad (12)$$

A collection of constraints like (12) can be fed to the simplex method to handle the optimization problem, taking into account that model and empirical covariances have to be as close as possible (Figure 4).

Figure 4. Constraints applied on a model covariance function

The same can be done with multiple empirical covariance functions. With these constraints imposed on the estimated covariance functions, the simplex method is forced to find a unique set of adapted $e_n^2$ and $\sigma_n^2$ suitable for all the available empirical covariances (Figure 5).

Figure 5. Multiple constraints on model covariance functions

The adapted $e_n^2$ and $\sigma_n^2$ are obtained by applying suitable scale factors to some reference models of $e_n^2$ and $\sigma_n^2$. Estimating each of them individually would be possible, but such a problem would require at least $N_{MAX} - 1$ constraints, and a simplex tableau of such dimensions could be difficult to manage (some tests have been done up to degree 2190).
For this reason the chosen model covariance function has the form:

$$C_{TT}(\psi) = \frac{(GM)^2}{R^2}\left[\sum_{n=2}^{N_{max}} \left(\frac{R}{R+h}\right)^{2(n+1)} \alpha\, e_n^2\, P_n(\cos\psi) + \sum_{n=N_{max}+1}^{N_{MAX}} \left(\frac{R}{R+h}\right)^{2(n+1)} \beta\, \sigma_n^2\, P_n(\cos\psi)\right] \qquad (13)$$

Once the constraints, linear functions of the unknown scale factors $\alpha$ and $\beta$, have been defined, a suitable objective function must be chosen in order to generate the tableau and apply the simplex algorithm. A possible condition is to impose that the model function be close to a proper feasible model, e.g. selecting $e_n^2$ and $\sigma_n^2$ close to the values implied by a global geopotential model. If these global sets are able to reproduce the empirical covariance values, scale factors close to unity are obtained. It has therefore been decided to minimize an objective function that is simply the sum of the two adaptive coefficients. Many different objective functions have been tested, but this solution proved to be the most adequate. Therefore the linear programming problem that the simplex method has to solve is simply:

$$\min(\alpha + \beta) \qquad (14)$$

with $\alpha$ and $\beta$ subject to $m$ constraints expressed as:

$$C_{L(T)L'(T)}(\psi_i; \alpha, \beta) \le C_{L(T)L'(T)}^{emp}(\psi_i) + \Delta C_i^{L(T)L'(T)}$$
$$C_{L(T)L'(T)}(\psi_i; \alpha, \beta) \ge C_{L(T)L'(T)}^{emp}(\psi_i) - \Delta C_i^{L(T)L'(T)} \qquad (15)$$

with $i = 1, \dots, m$, $L(T)$ and $L'(T)$ linear functionals of the anomalous potential $T$ (such as $\Delta g$, $N$ or $T_{rr}$) and $C_{L(T)L'(T)}$ the corresponding covariance functions propagated using (13). As
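Since (13) leaves only the two unknowns $\alpha$ and $\beta$, problem (14)-(15) is a two-variable LP, which can be sketched without a full simplex tableau by scanning the feasible vertices (the actual implementation uses the simplex algorithm; the basis curves and tolerances here are synthetic, chosen only for illustration):

```python
from itertools import combinations

def solve_two_var_lp(cons, tol=1e-9):
    """min (alpha + beta) s.t. a*alpha + b*beta <= c for (a, b, c) in cons,
    by scanning the feasible vertices (fundamental theorem of LP)."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < tol:
            continue
        al = (c1 * b2 - c2 * b1) / det
        be = (a1 * c2 - a2 * c1) / det
        if all(a * al + b * be <= c + tol for a, b, c in cons):
            if best is None or al + be < best[0] + best[1]:
                best = (al, be)
    return best  # None means the constraint set is infeasible

def band_constraints(f1, f2, c_emp, dc):
    """Translate (15) into <= form: |alpha*f1_k + beta*f2_k - C_k| <= dC,
    where f1_k and f2_k are the two summation terms of (13) at psi_k."""
    cons = [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]  # alpha >= 0, beta >= 0
    for f1k, f2k, ck in zip(f1, f2, c_emp):
        cons.append((f1k, f2k, ck + dc))        # model <= empirical + dC
        cons.append((-f1k, -f2k, -(ck - dc)))   # model >= empirical - dC
    return cons
```

Feeding in empirical values synthesized from known scale factors, the LP recovers them once the band $\Delta C$ is tight, while an inconsistent empirical curve makes the constraint set infeasible and the solver returns None.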
explained previously, if all the $m$ constraints are consistent and the proposed optimization problem has a solution, there is a feasible region of the solution space where different combinations of $\alpha$ and $\beta$ are suitable. This covariance fitting methodology is numerically implemented through an iterative procedure. While the objective condition (14) is fixed, the conditions (15) are tuned in order to get the best possible fit to the empirical covariances. Referring to the feasible region, this procedure identifies an initial large feasible region (soft constraints, poor fit) and reduces this area until the vertices of the optimal solution practically coincide with one another (strongest constraints, best fit). This process is sketched in Figure 6.

Figure 6. Impact of iterative constraint adjustment on the feasible region

In this procedure, therefore, the simplex method is applied in a quite different way with respect to standard applications of linear programming. While in the usual application of the simplex algorithm the focus is on the objective function and the constraints are fixed, the devised procedure proceeds in the reverse way: the objective function loses most of its importance, and the focus is on suitable constraints allowing the best possible agreement between the model covariance functions and the empirical values.

3. ASSESSMENT OF THE PROCEDURE

3.1 Preliminary Tests on Covariance Function Modeling with the Simplex Method

Covariance function modeling with the simplex method has been implemented through many Fortran 95 subroutines, based on the concepts explained above. For brevity, the entire program will henceforth be called SIMPLEXCOV. The procedure is mainly composed of three steps:

1. analysis of the input data to assess the best sampling of the empirical covariance functions;
2. computation of the empirical covariances;
3. iterative computation of the best-fit model covariance functions with the simplex method.

The third step is an iterative procedure composed of two nested loops.
In the external loop a set of suitable constraints on the covariance functions is defined. Based on these constraints, in the internal loop many optimization problems are generated and solved by the simplex method. In each iteration, the starting $e_n^2$-$\sigma_n^2$ model (derived e.g. from global model coefficients) is slightly modified and a simplex algorithm solution is searched for. If more than one set of varied degree variances is able to satisfy the constraints, an improved fit can be obtained by tightening the constraints in the external loop, and so on. On the contrary, if all the LP problems in the internal loop prove to have no feasible solution, the constraints have to be softened in the external loop. The final iteration corresponds to a unique set of adapted degree variances that allows the best possible fit between empirical covariances and model covariance functions. (Figure 7) and (Figure 8) show an intermediate iteration. In this computation two different sets of adapted degree variances (Figure 7) produce very similar model covariance functions, both satisfying the same constraints on the empirical values (Figure 8).

Figure 7. Degree variances adapted in intermediate solution

Figure 8. Model covariance functions in intermediate solution

Covariance function modeling can be further improved by adding constraints or making them more stringent until only one solution is found (Figure 9).

Figure 9. Iterative covariance function modeling process with Simplex method

This procedure has been tested both with simulated data and with a real dataset. A fast test has been implemented on gravity data in
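The external tightening loop can be sketched as follows: starting from a soft constraint band, the band is shrunk while more than one perturbed degree-variance set remains admissible, and the iteration stops as soon as a unique set survives (or just before the band would become infeasible for all of them). This is a toy illustration with precomputed candidate model curves, not the actual Fortran implementation:

```python
def n_admissible(candidates, emp, tol):
    """Count the candidate model curves lying within +/- tol of the
    empirical covariance values at every sampled distance."""
    return sum(all(abs(m - e) <= tol for m, e in zip(cand, emp))
               for cand in candidates)

def tighten(candidates, emp, tol=1.0, shrink=0.7):
    """Outer loop of the fitting scheme: shrink the constraint band
    while more than one perturbed degree-variance set stays feasible;
    stop when a unique set survives, or keep the current band if the
    next shrink would leave no feasible candidate at all."""
    while True:
        nxt = tol * shrink
        n = n_admissible(candidates, emp, nxt)
        if n == 0:          # next band would be infeasible: soften back
            return tol
        tol = nxt
        if n == 1:          # unique best-fitting set found
            return tol

# empirical covariance samples and three perturbed degree-variance models
emp = [3.00, 2.00, 1.00]
candidates = [
    [3.01, 2.02, 1.00],   # close perturbation, survives the tightening
    [3.20, 2.10, 1.05],   # moderate perturbation, dropped late
    [2.50, 2.00, 1.00],   # poor perturbation, dropped early
]
band = tighten(candidates, emp)
```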
the NW part of Italy. In this test, a dataset of 4840 observed free-air gravity anomalies $\Delta g_{FA}$ (hereafter denoted simply $\Delta g$), homogeneously distributed in a 5°×5° area (with SW corner at 42°N, 7°E), has been reduced for the long wavelength component by removing the global model EGM2008 (Pavlis et al., 2012) up to degree 360 and the corresponding Residual Terrain Correction. Similarly, in the same area, 430 observations of geoid undulation $N$ obtained by GPS-leveling have been reduced correspondingly. In a standard local geoid computation, $N$ values are estimated based on $\Delta g$ values: the empirical covariance function of $\Delta g$ is computed and a suitable model covariance function is estimated so as to best represent the spatial correlation given by the empirical values. Thus, the devised methodology has been applied following this estimation procedure. As a benchmark, the standard fitting method based on the COVFIT program of the GRAVSOFT package has been adopted (Tscherning, 2004). In (Figure 10a) the empirical covariance of the reduced gravity values and the COVFIT3 best fit model are plotted. As one can see, the empirical values and the model covariance function are in suitable agreement. However, the model function is not able to reproduce the oscillating features of the empirical covariance. As a further cross-check, the model function of the residual geoid undulations, as derived from the gravity covariance model, has been compared with the empirical covariance values of $N$ (Figure 10b). The agreement is quite poor, as the model function displays a correlation pattern which, at short distance, is stronger than the one implied by the empirical covariance.

Figure 10. $\Delta g$ model covariance functions obtained by COVFIT3 (a) and the same model propagated on $N$ (b)

SIMPLEXCOV has then been applied, in order to verify the results obtained by the new procedure and compare them with the standard covariance function modeling procedure.
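The empirical covariance computation underlying this comparison amounts to averaging the products of residual values over spherical-distance classes. A plain-numpy sketch (with a small-area planar distance approximation; a production code would use the proper spherical distance and handle anisotropy and outliers):

```python
import numpy as np

def empirical_covariance(lon, lat, vals, bin_deg=0.1, n_bins=40):
    """Empirical covariance of residual data: average the products
    v_i * v_j over distance classes of width bin_deg degrees."""
    n = len(vals)
    sums = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    for i in range(n):
        # planar small-area approximation of the spherical distance (deg)
        dlon = (lon - lon[i]) * np.cos(np.radians(lat[i]))
        psi = np.hypot(dlon, lat - lat[i])
        k = (psi / bin_deg).astype(int)
        ok = k < n_bins
        np.add.at(sums, k[ok], vals[i] * vals[ok])
        np.add.at(counts, k[ok], 1)
    psi_mid = (np.arange(n_bins) + 0.5) * bin_deg
    return psi_mid, sums / np.maximum(counts, 1)

# demo: a constant residual field must give a flat covariance equal to
# its (co)variance value in every populated distance class
g = np.arange(0.0, 1.0, 0.25)
glon, glat = np.meshgrid(g, g)
psi_mid, cov = empirical_covariance(glon.ravel(), glat.ravel(),
                                    np.full(16, 2.0))
```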
The result is plotted in (Figure 11). The agreement between model and empirical covariance values is remarkable: the model is now able to reproduce the oscillations of the empirical values in a very detailed way (Figure 11a). As done before, the undulation model covariance implied by the gravity model covariance has been derived and compared with the empirical undulation covariance. The model seems to fit the empirical values a little better, even though the overall agreement is quite poor also in this case (Figure 11b).

Figure 11. $\Delta g$ model covariance functions obtained by SIMPLEXCOV (a) and the same model propagated on $N$ (b)

The SIMPLEXCOV procedure has then been applied to both empirical functions in order to get a single set of scaled error degree/degree variances allowing a common improved fit. Thus both the empirical covariances of the residual undulation and of the residual gravity have been considered in the fitting procedure. The results are summarized in (Figure 12).

Figure 12. Integrated model covariance functions of $\Delta g$ (a) and $N$ (b) obtained by SIMPLEXCOV

As expected, a remarkable improvement is obtained: the main features of both empirical covariances are properly reproduced using a unique set of scaled error degree/degree variances. In (Figure 13) the comparison between the starting EGM08 error degree/degree variances and the final scaled solution is shown. In order to fit the empirical values properly, larger error degree variances must be considered; on the other hand, a smaller scale factor is applied to the signal degree variances. This can be explained by saying that the formal error degree variances are too optimistic or, equivalently, that the commission error is not realistic.

Figure 13. Integrated adapted degree variances obtained by SIMPLEXCOV

This test shows the reliability of the solution obtained when the simplex method is applied to covariance function modeling problems.
The adapted degree variances are able to reproduce the empirical covariance functions of the observed data better than the usual methodologies. In local applications, the integration of different kinds of data is very useful, and suitable covariance functions are needed for computing proper collocation solutions. Thus the devised procedure seems to be a suitable tool for improving covariance function modeling.
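The model covariances compared above are, like (13), Legendre series in $\cos\psi$ weighted by (adapted) degree variances. A minimal sketch of evaluating such a series, where for simplicity the $GM/R$ scaling and the error/signal split of (13) are folded into a single coefficient set:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def model_cov(psi_deg, degvar, damp=1.0):
    """Evaluate a covariance series C(psi) = sum_n c_n * d^(2(n+1)) * P_n(cos psi),
    with degvar the degree variances c_n and damp the (R/(R+h)) altitude
    attenuation factor of (13); a sketch, not the SIMPLEXCOV code."""
    n = np.arange(len(degvar))
    coeffs = np.asarray(degvar, dtype=float) * damp ** (2 * (n + 1))
    # legval sums coeffs[n] * P_n(x) via the Clenshaw recurrence
    return legval(np.cos(np.radians(psi_deg)), coeffs)
```

For instance, a single degree-2 variance gives $C(0°) = P_2(1) = 1$ and $C(90°) = P_2(0) = -0.5$, and raising the evaluation altitude (damp < 1) attenuates the covariance by $d^{6}$ for that degree.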
3.2 Integrated Local Estimation of the Gravity Field

The RR technique and LSC, based on covariance functions computed by SIMPLEXCOV, have been combined into a single computing procedure in order to obtain local predictions of gravity field functionals. The procedure is able to integrate observations of $\Delta g_{FA}$, $N$, $T$ and $T_{rr}$ for the local prediction of any one of them. The main steps of the procedure are the following.

• REMOVE PHASE: a) computation of the long wavelength component (global models); b) computation of the short wavelength component (RTC); c) computation of the residual values of all available data.
• COVARIANCE FUNCTION MODELING PHASE: d) computation of the empirical covariance functions (for all available data); e) computation of the integrated model covariance functions based on SIMPLEXCOV.
• LEAST SQUARES COLLOCATION PREDICTION PHASE: f) selection and down sampling of the data if needed (to avoid numerical problems); g) assembling of the covariance matrices to be used in the collocation formula; h) inversion of this covariance matrix; i) computation of the collocation estimate for the selected functional of the anomalous potential.
• RESTORE PHASE: j) restoring of the long and short wavelengths to the predicted residuals.

The collocation formula (3) is applied as many times as the number of prediction points. For each computation a down sampling process is applied to the input observed residuals. Several reasons have led to this choice. Covariance functions express the spatial correlation of the data, and the impact of the data on the estimate is tuned by the covariance correlation length, defined as the distance at which the covariance is half of its value at the origin. Hence from the model covariance function it is possible to determine this correlation length and thereby select only the most significant data, i.e. those close to the current prediction point. To this aim, only data at a distance from the estimation point smaller than the correlation length are considered (Figure 14).

Figure 14.
Window selection and down sampling of data

This helps in selecting the most significant data while also reducing the computation time. In fact, in (3), the covariance matrix to be inverted has a dimension equal to the total amount of data used. The proposed selection criterion speeds up the computation without reducing precision, because only relevant (i.e. significantly correlated) data are used. However, for very dense datasets, a further down sampling process could be necessary. Once only the significant data have been selected, if necessary their total amount is decreased taking into account the homogeneity of the spatial coverage around the prediction point. To maintain a homogeneous coverage of the prediction area, the selected data are subdivided into three rings centered on the prediction point, and the data in each ring are randomly down sampled so as to maintain an isotropic and homogeneous distribution of the input data. With this down sampling procedure the computation speeds up, because the size of the matrix to be inverted decreases, and an effective estimation procedure is set up.

3.3 Pre-Processing of GOCE Data

Before being integrated with the other available datasets, the recent GOCE $T_{rr}$ data freely distributed by ESA needed a pre-processing step. In order to be used in the integrated computing procedure, these data have been reduced for the long wavelengths by subtracting a global model up to a given maximum degree: the GOCE Direct Release 3 (Bruinsma et al., 2010) global model (GDIR3) up to degree 180 has been subtracted from the observed $T_{rr}$. These residuals cannot be used directly and need a further processing step, since they are completely dominated by a corrupting high frequency noise. Thus, a suitable filter must be applied to the residual $T_{rr}$, so as to remove the noisy high frequency components. This can be done by low-pass filtering the data using a suitable moving average.
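A minimal sketch of such a moving-average low pass along a track, assuming the window amplitude rule $\Delta\psi(°) = 180°/n$ discussed next (an illustrative stand-in, not the actual processing code):

```python
import numpy as np

def track_lowpass(values, spacing_deg, cutoff_degree):
    """Along-track moving-average low pass: the window amplitude follows
    delta_psi = 180/n, so a cutoff at spherical-harmonic degree n = 260
    gives a window of roughly 0.7 degrees."""
    width_deg = 180.0 / cutoff_degree
    half = max(1, int(round(width_deg / (2 * spacing_deg))))
    kernel = np.ones(2 * half + 1) / (2 * half + 1)
    # mode='same' keeps the track length; edge samples mix in zero padding
    return np.convolve(values, kernel, mode='same')

# demo: a constant signal plus sample-to-sample alternating noise;
# the average must keep the constant and suppress the alternation
t = np.arange(200)
signal = 5.0 + (-1.0) ** t
smooth = track_lowpass(signal, 0.1, 260)
flat = track_lowpass(np.full(50, 3.0), 0.1, 260)
```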
The cut-off frequency is directly related to the amplitude of this spatial average, so this amplitude must be set in such a way as to remove mainly the noise component. Generally, if the desired cut-off frequency is expressed in terms of the degree $n$ of the spherical harmonic expansion, the amplitude of the moving average can be determined as $\Delta\psi(°) = 180°/n$. Under this assumption, if a moving average of 0.7° is used, the residual signal components up to about degree 260 are not filtered out. This empirical filtering procedure has been applied to the $T_{rr}$ dataset, both track-wise (i.e. along each track) and spatial-wise (i.e. considering a moving average in space). Two filtered $T_{rr}$ datasets have therefore been used in the following analysis: the spatially filtered data will be denoted as $T_{rr}^{sp}$, while those filtered along track as $T_{rr}^{tr}$.

3.4 The Calabrian Arc Area Test

A first test of the integration procedure was performed in the area centered on the Calabrian Arc. This area has been chosen as test area because the gravity signal here presents a high variability, due to the geophysical phenomenon of subduction in the area of the Ionian Sea. Furthermore, in this area an aerogravimetric campaign sponsored by ENI S.p.A. was performed in 2004. The main objective of this test is the assessment of the new procedure in a situation where dense satellite observations are integrated with other existing data in order to obtain a local modeling of the gravimetric signal. As mentioned, the benchmark for the predictions is a dataset of 1157 free-air gravity anomalies $\Delta g_{FA}$ acquired by aerogravimetry. In the 9°×9° area with SW corner of coordinates ($\lambda$ = 12°E, $\varphi$ = 34°N), 22932 available ERS1 Geodetic Mission radar-altimetry observations and 16370 values of GOCE radial second derivatives have been selected. The GOCE data have been processed according to what is explained previously (§3.3). All available data have been reduced with the GOCE Direct Release 3 global model up to degree 180.
Furthermore, a corresponding Residual Terrain Correction, computed with the GRAVSOFT package's TC program (Forsberg and Tscherning, 2008) and a detailed 3′′×3′′ DTM,
has been removed from the $\Delta g_{FA}$ and $N$ data. The $T_{rr}$ data have been reduced only for the global model component, because the topographic signal has proved to be not relevant at satellite altitude (about 260 km) and was in any case filtered out in the pre-processing phase. Different combinations of residual $N$ and $T_{rr}$ have been used in order to predict $\Delta g$ at the aerogravimetry points, with the $T_{rr}$ values considered both as $T_{rr}^{sp}$ and as $T_{rr}^{tr}$. The predicted values have been compared with the observed gravity anomalies from aerogravimetry, in order to assess the computing procedure and to check for the best possible combination in reproducing the observed data. Different model covariance functions have been estimated by SIMPLEXCOV, based on the empirical covariance values of the residual data. When the model covariance functions are computed taking into account only one functional, the empirical values are well reproduced by the simplex algorithm. An example is shown in (Figure 15).

Figure 15. Model covariance function estimated with $N$ only

However, problems concerning the propagation of the model covariances to other functionals are still evident (Figure 16).

Figure 16. Model covariance function estimated with filtered $T_{rr}$ only and then propagated on $N$

On the other hand, if the new proposed integrating procedure (§2.3) is applied to jointly fit all the available empirical covariances, there are sharp improvements which prove the effectiveness of this approach. An example of the obtained results is presented in (Figure 17). In this case of the combination of the two functionals $N$ + $T_{rr}^{tr}$, the agreement between the empirical values and the model functions is worse than the one obtained by fitting the covariance of a single functional (e.g. Figure 15). However, if all the empirical covariances are used, an overall suitable fit is reached.

Figure 17. Integrated model covariance function estimated with $N$ + $T_{rr}^{tr}$, propagated on $N$ (a) and $T_{rr}^{tr}$
(b)

This is indeed the procedure to be followed because, as it was proved, if the covariance function of only one observation type is considered, the remarkable fit obtained does not reflect into the covariances of the other existing data. Thus the integrated estimate of a covariance model for $T$, to be then propagated to any linear functional of $T$, seems to be the proper method. Once the model covariance functions had been estimated, different tests were performed. Five combinations of data have been tested and the corresponding predictions of $\Delta g$ have been estimated, with tests also on parameters such as the amplitude of the selection window. The results of test A have been obtained with a selection window of 0.7° width and a maximum of 4000 observations for each computation point. Due to their spatial resolution, on average 500 values of $T_{rr}$ and 600 of $N$ at sea have been used for each estimation point. The results are summarized in (Table 1), where the notation, for example $\Delta g_{N+T_{rr}^{sp}}$, corresponds to the gravity anomalies predicted by a combination of $N$ and $T_{rr}^{sp}$ data, and $\Delta g_{obs}$ are the benchmark residuals obtained by aerogravimetry.

                                              # points   E [mGal]   σ [mGal]
$\Delta g_{obs}$                                1157      -1.12      21.34
$\Delta g_{obs} - \Delta g_{T_{rr}^{sp}}$       1157      -9.36      23.52
$\Delta g_{obs} - \Delta g_{T_{rr}^{tr}}$       1157      -8.66      19.80
$\Delta g_{obs} - \Delta g_{N}$                 1157       8.31      15.64
$\Delta g_{obs} - \Delta g_{N+T_{rr}^{sp}}$     1157       4.72      14.97
$\Delta g_{obs} - \Delta g_{N+T_{rr}^{tr}}$     1157      -2.16      33.23

Table 1. Statistics of the differences between benchmark values and predictions obtained in test A

The results obtained using the $N$ data alone are only slightly improved in the combined $N$ + $T_{rr}^{sp}$ estimation procedure; in particular, the mean value of the residuals is nearly half the one obtained by considering $N$ only. Moreover, no significant estimate can be obtained using $T_{rr}$ only (it seems, however, that less poor estimates are computed when the $T_{rr}^{tr}$ data are taken as input). This is an expected result because at the mean altitude of 260 km the gravity signal is strongly attenuated, and the downward continuation from the observed $T_{rr}$ to $\Delta g_{FA}$
close to the Earth's surface is an unstable procedure. As a general remark, one can also state that biases are always present; most of them can be considered significant, the noise in the data, as derived from cross-over statistics, being of the order of 2 mGal. Finally, an unexpected anomalous result is obtained when combining $N$ + $T_{rr}^{tr}$, the anomaly being related to the standard deviation of these residuals, which is to be compared with the one derived using the $T_{rr}^{tr}$ data alone. At the moment this behavior has no reasonable explanation.
To check for possible improvements, in test B it has been decided to remove, during the estimation procedure, the mean value of each windowed dataset for each estimation point. The results of this test are shown in (Table 2):

                                              # points   E [mGal]   σ [mGal]
$\Delta g_{obs}$                                1157      -1.12      21.34
$\Delta g_{obs} - \Delta g_{T_{rr}^{sp}}$       1157     -10.72      22.49
$\Delta g_{obs} - \Delta g_{T_{rr}^{tr}}$       1157     -17.64      22.54
$\Delta g_{obs} - \Delta g_{N}$                 1157      -0.04      17.01
$\Delta g_{obs} - \Delta g_{N+T_{rr}^{sp}}$     1157      -1.19      17.46
$\Delta g_{obs} - \Delta g_{N+T_{rr}^{tr}}$     1157      -7.87      16.18

Table 2. Statistics of the differences between benchmark values and predictions obtained in test B

The results of test B are similar to those obtained previously. We only remark that, in this second case, the anomalous behavior of the $\Delta g_{N+T_{rr}^{tr}}$ residuals is not present; this reinforces the previous statement on this anomaly. As previously stated, only the combined $N$ + $T_{rr}$ estimate leads to a significant reduction of the residuals with respect to the observed aerogravimetry derived data. Test C has concerned the amplitude of the windowed selection of data for each prediction point: a further test has been performed to verify whether a wider windowing improves the accuracy. In this third case, only combinations with the $T_{rr}^{tr}$ data have been considered. The selection window has been set to 1° width, with a maximum of 4000 observations for each computation point. With these settings, about 600 values of $T_{rr}$ and 800 of $N$ at sea have been considered for each prediction point. After the selection, the mean value of the selected data has been removed, as was done in the second test. The summary of the results is shown in (Table 3):

                                              # points   E [mGal]   σ [mGal]
$\Delta g_{obs}$                                1157      -1.12      21.34
$\Delta g_{obs} - \Delta g_{T_{rr}^{tr}}$       1157      -9.38      20.37
$\Delta g_{obs} - \Delta g_{N}$                 1157      -0.04      17.01
$\Delta g_{obs} - \Delta g_{N+T_{rr}^{tr}}$     1157       2.26      15.94

Table 3. Statistics of the differences between benchmark values and predictions obtained in test C

The residuals show the same patterns, and the statistics are practically equivalent to those in (Table 2).
Thus, it can be concluded that changing the windowing amplitude with respect to the one chosen as a function of the correlation length of the data has no significant impact on the estimation, but only on the computation speed. Furthermore, generally speaking, it seems that no significant improvements are obtained when using $N$ + $T_{rr}$ with respect to the solution based on $N$ only. This is certainly true when the residuals are considered globally. However, by analyzing the residuals track by track, one can see that some slight improvements can be reached when aerogravimetry tracks over land areas are considered: there, no altimeter data are available and the $N$-based estimates are poorer than those based on $N$ + $T_{rr}$. This can be seen in the differences between $\Delta g_N$ and $\Delta g_{N+T_{rr}^{tr}}$: the larger discrepancies are over land areas (Figure 18). These discrepancies are related to the $T_{rr}$ contribution to the $\Delta g_{N+T_{rr}^{tr}}$ estimates, which are closer to the aerogravimetry $\Delta g_{obs}$, as can be seen in the slightly better statistics of the residuals (see Table 3). This is indeed an indication in favor of merging all the available data, even though they are satellite derived.

Figure 18. Differences between the predicted values $\Delta g_{N+T_{rr}^{tr}}$ and $\Delta g_N$ obtained in test C

An interesting result has not yet been discussed. High degree global models (e.g. EGM08) allow computing the gravity signal with high overall accuracy and resolution. In (Table 4) the statistics of the differences obtained by EGM08, synthesized up to different maximum degrees on the aerogravimetry tracks, are shown. When computed up to its maximum degree 2160, EGM08 is practically able to recover all the signal observed by aerogravimetry; in this case the aerogravimetry campaign could be replaced by global models, if the requested accuracy permits it. The prediction obtained by the procedure developed in this work, combining filtered GOCE data and radar altimetry, is comparable with EGM08 computed up to degree 650 (see Table 3 and Table 4).
Max degree of EGM2008                         E [mGal]   σ [mGal]
2160                                           -0.28       3.74
…                                              -0.72       4.49
…                                               0.34       9.58
…                                               1.09      12.74
650                                             2.50      16.79

Table 4. Statistics of the differences between benchmark values and EGM2008 computed up to different maximum degrees

This is an interesting result: in the central part of the Mediterranean Sea gravity data are generally very dense and of high quality, and it is not surprising that global models such as EGM08 are able to represent the gravity field in this area with a resolution comparable to aerogravimetry. However, in some areas of central Africa or South America the availability of high quality data is much poorer; there, such a data integration procedure, especially with the great number of satellite observations now available, could obtain better local results than those achievable by simply computing a global model. In conclusion, reliable and meaningful results have been obtained in the discussed tests using the proposed procedure.
4. CONCLUSIONS

The aim of this work was to set up a procedure able to combine different functionals of the anomalous potential in order to obtain the prediction of any other functional of $T$. A computing procedure based on the remove-restore technique and least squares collocation has been devised. In particular, this procedure relies on an innovative approach to covariance function modeling: a new methodology based on the simplex algorithm and linear programming theory allowed obtaining model covariance functions in good agreement with the empirical covariance values computed from the reduced data. Remarkable results were obtained in the integrated estimate of the model covariance functions when the empirical values of different available functionals were used. The new methodology proved to be flexible and able to properly reproduce all the main features of the given empirical covariances. This result is only a part of a general procedure able to combine functionals of the anomalous potential, such as gravity anomalies, geoid undulations and second radial derivatives of $T$, for local gravity field estimation. Another important conclusion has been reached in the evaluation of the feasibility of a windowed collocation estimate. Reliable results were obtained with the windowed collocation procedure implemented in this work. In particular, it has been shown that the window amplitude can be selected on the basis of the covariance correlation length. Also, tests have been devised for reducing the number of data while preserving homogeneity and isotropy of the data distribution around each computation point. Preliminary tests have been performed to validate this procedure from an algorithmic point of view: the covariance models, through the least squares collocation procedure, were able to give reliable predictions of $\Delta g$. In the presented test, local predictions of $\Delta g$ have been compared with the observed values coming from aerogravimetry.
These predictions have been obtained with different combinations of radar altimetry data and GOCE data, in order to verify the recovery of the medium-high frequencies contained in the gravity signal measured with this technique. The best fit between empirical covariance values and model covariances was reached with the combined estimation procedure. This allows overcoming the fitting problem that frequently occurs when only one empirical covariance is used to tune the model covariance. As a matter of fact, it was proved that the joint estimation option leads to an optimal fit of the selected covariance models for the other available functionals as well. The collocation estimates derived from combining the different datasets according to this procedure proved to be consistent with the observed gravity data. Further tests have been performed on different areas and with different combinations of data; they are not presented for brevity, but they confirm the results illustrated in this paper. As a final comment, one can say that the devised method for covariance fitting, together with a windowed collocation procedure, is able to give reliable standard estimates. Moreover, there are promising improvements particularly related to the proposed covariance fitting procedure: the method is effective and can remarkably improve the coherence between empirical and model covariances. Therefore, it can be considered a valuable tool in further developments and applications of collocation.

5. REFERENCES

Barzaghi R., Tselfes N., Tziavos I.N., Vergos G.S., 2009. Geoid and high resolution sea surface topography modelling in the Mediterranean from gravimetry, altimetry and GOCE data: evaluation by simulation. Journal of Geodesy, No. 83.

Barzaghi R., Sansò F., 1984. La collocazione in geodesia fisica. Bollettino di Geodesia e Scienze Affini, anno XLIII.

Borghi A., 1999. The Italian geoid estimate: present state and future perspectives. Ph.D. thesis, Politecnico di Milano.
Bruinsma S.L., Marty J.C., Balmino G., Biancale R., Foerste C., Abrikosov O., Neumayer H., 2010. GOCE gravity field recovery by means of the direct numerical method. Presented at the ESA Living Planet Symposium, 28 June - 2 July 2010, Bergen, Norway.

Forsberg R., 1994. Terrain effects in geoid computation. Lecture notes, International School for the Determination and Use of the Geoid, IGeS, Milano.

Heiskanen W.A., Moritz H., 1967. Physical Geodesy. Institute of Physical Geodesy, Technical University, Graz, Austria.

Knudsen P., 1987. Estimation and modelling of the local empirical covariance function using gravity and satellite altimeter data. Bulletin Géodésique, No. 61.

Moritz H., 1980. Advanced Physical Geodesy. Wichmann, Karlsruhe.

Mussio L., 1984. Il metodo della collocazione minimi quadrati e le sue applicazioni per l'analisi statistica dei risultati delle compensazioni. Ricerche di Geodesia, Topografia e Fotogrammetria, No. 4, CLUP.

Pavlis N.K., Holmes S.A., Kenyon S.C., Factor J.K., 2012. The development and evaluation of the Earth Gravitational Model 2008 (EGM2008). Journal of Geophysical Research, Vol. 117.

Press W.H., Flannery B.P., Teukolsky S.A., Vetterling W.T., 1989. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press.

Reguzzoni M., 2004. GOCE: the space-wise approach to gravity field determination by satellite gradiometry. Ph.D. thesis, Politecnico di Milano.

Tscherning C.C., Rapp R.H., 1974. Closed Covariance Expressions for Gravity Anomalies, Geoid Undulations, and Deflections of the Vertical Implied by Anomaly Degree-Variance Models. Reports of the Department of Geodetic Science, No. 208, The Ohio State University.

Tscherning C.C., 2004. Geoid determination by least squares collocation using GRAVSOFT. Lecture notes, International School for the Determination and Use of the Geoid, IGeS, Milano.

Tselfes N., 2008. Global and local geoid modelling with GOCE data and collocation. Ph.D. thesis, Politecnico di Milano.

6.
ACKNOWLEDGMENTS

Prof. Riccardo Barzaghi and Ph.D. Noemi Emanuela Cazzaniga gave a fundamental contribution to this work and to my entire Ph.D. course. My gratitude goes mainly to them. Thanks.