This document summarizes a new methodology for modeling covariance functions for integrated gravity field modeling using collocation. The methodology uses linear programming and the simplex method to estimate parameters of analytical covariance function models to best fit empirical covariance functions from multiple gravity observations. This allows all available empirical covariances to be considered simultaneously, improving over standard methods that model covariances separately and propagate between observations. The results from testing this new methodology show improvements over existing software packages for modeling covariance functions for local gravity applications.
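As an illustration of the linear-programming idea, the hedged sketch below fits non-negative weights of candidate analytical covariance shapes to an empirical covariance sequence by minimizing the maximum misfit, posed as a linear program; the basis functions, distances, and data are invented placeholders, not the methodology's actual model (SciPy's "highs" solver, which includes a dual simplex method, stands in for the simplex implementation).

```python
# Hypothetical sketch: fit non-negative weights of candidate covariance
# basis functions to an empirical covariance sequence by minimizing the
# maximum misfit (a Chebyshev fit), solved as a linear program.
import numpy as np
from scipy.optimize import linprog

d = np.linspace(0, 100, 21)                   # spherical distances (km), assumed
c_emp = 120.0 * np.exp(-d / 30.0)             # stand-in empirical covariances (mGal^2)

lengths = np.array([10.0, 20.0, 40.0, 80.0])  # candidate correlation lengths (km)
B = np.exp(-d[:, None] / lengths[None, :])    # basis: exponential covariance shapes

m, n = B.shape
# Variables: [w_1..w_n, t]; minimize t subject to -t <= B w - c_emp <= t, w >= 0.
c_lp = np.r_[np.zeros(n), 1.0]
A_ub = np.block([[B, -np.ones((m, 1))],
                 [-B, -np.ones((m, 1))]])
b_ub = np.r_[c_emp, -c_emp]
res = linprog(c_lp, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (n + 1), method="highs")
w, t = res.x[:n], res.x[n]
print("weights:", w, "max misfit:", t)
```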
Titan’s Topography and Shape at the End of the Cassini Mission (Sérgio Sacani)
With the conclusion of the Cassini mission, we present an updated topographic map of Titan, including all the altimetry, SARtopo, and stereophotogrammetry topographic data sets available from the mission. We use radial basis functions to interpolate the sparse data set, which covers only ∼9% of Titan’s global area. The most notable updates to the topography include higher coverage of the poles of Titan, improved fits to the global shape, and a finer resolution of the global interpolation. We also present a statistical analysis of the error in the derived products and perform a global minimization on a profile-by-profile basis to account for observed biases in the input data set. We find a greater flattening of Titan than previously measured, identify additional topographic rises in Titan’s southern hemisphere, and better constrain the possible locations of past and present liquids on Titan’s surface.
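The interpolation step lends itself to a short sketch. The following is a minimal example of RBF interpolation of sparse samples onto a global grid using SciPy; the coordinates, kernel choice, and smoothing value are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of sparse-to-grid interpolation with radial basis functions.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(-180, 180, size=(500, 2))   # sparse (lon, lat) samples, assumed
z = np.sin(np.radians(pts[:, 0])) * 100.0     # stand-in elevations (m)

interp = RBFInterpolator(pts, z, kernel="thin_plate_spline", smoothing=1.0)

# Evaluate the interpolant on a coarse global grid.
lon, lat = np.meshgrid(np.linspace(-180, 180, 91), np.linspace(-90, 90, 46))
grid = interp(np.column_stack([lon.ravel(), lat.ravel()])).reshape(lon.shape)
```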
Estimation of global solar radiation by using machine learning methods (Mehmet Şahin)
In this study, global solar radiation (GSR) was estimated for 53 locations using the ELM, SVR, k-NN, LR, and NU-SVR methods. The methods were trained with a two-year data set, and their accuracy was tested with a one-year data set; each yearly data set consisted of 12 monthly values. Month, altitude, latitude, longitude, vapour pressure deficit, and land surface temperature were used as inputs for developing the models, and GSR was obtained as the output. The vapour pressure deficit and land surface temperature values were taken from NOAA-AVHRR satellite radiometry. The estimated solar radiation data were compared with actual data obtained from meteorological stations. According to the statistical results, the most successful method was NU-SVR: its RMSE and MBE values were 1.4972 MJ/m² and 0.2652 MJ/m², respectively, and its R value was 0.9728. The worst-performing method was LR. For the other methods, RMSE values ranged between 1.7746 MJ/m² and 2.4546 MJ/m². The statistical results indicate that the ELM, SVR, k-NN, and NU-SVR methods can be used for estimating GSR.
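For readers who want to reproduce the general setup, here is a minimal NU-SVR sketch with the six inputs named above; the synthetic data and hyperparameters are placeholders, not the study's measurements or tuning.

```python
# Illustrative NU-SVR regression setup with six placeholder input features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Columns: month, altitude, latitude, longitude, vapour pressure deficit, LST
X_train = rng.normal(size=(53 * 24, 6))                    # two years, 53 sites
y_train = rng.normal(loc=15.0, scale=3.0, size=53 * 24)    # stand-in GSR (MJ/m^2)
X_test = rng.normal(size=(53 * 12, 6))                     # one test year
y_test = rng.normal(loc=15.0, scale=3.0, size=53 * 12)

model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(X_train, y_train)
rmse = float(np.sqrt(mean_squared_error(y_test, model.predict(X_test))))
```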
This document describes a testbed for image synthesis developed at Cornell University. The testbed was designed to facilitate research on new light reflection models, global illumination algorithms, and rendering of complex scenes. It uses a modular structure with hierarchical levels of functionality. The lowest level contains utility modules, the middle level contains object modules that work across primitive types, and the highest level contains image synthesis modules. The testbed uses a modeler-independent description format to represent environments independently of modeling programs. Renderers can then generate images from this common description.
1) Thermal waves in Saturn's atmosphere were analyzed using infrared observations from 2003-2013.
2) Maps were compiled from multiple instruments and analyzed using power spectral analysis to detect thermal waves.
3) Waves with different wavelengths were found to trace chemical species at different altitudes in Saturn's atmosphere. Large wave trains were detected in late 2003 and 2004.
1) The document presents a technique for automatically correcting for ion travel time when mass calibrating a single quadrupole mass spectrometer. This allows a single calibration to be used over all mass ranges and scan speeds.
2) By deriving an equation for ion transmission time as a function of mass and scan speed, the mass shift due to varying scan speeds can be calculated and subtracted from acquired data.
3) Empirical testing showed the technique reduced initial mass shifts by at least 85% for all masses and scan speeds, with no residual shift over 0.1 m/z. The combined calibration provides effective mass calibration across an instrument's operating ranges; a sketch of the correction idea follows below.
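A hedged sketch of the correction idea, assuming ion transit time scales as sqrt(m/z) for constant-energy ions over a fixed flight path; the path length and ion energy are invented constants, and the paper's actual derived equation may differ.

```python
# Toy model: ions of mass-to-charge mz accelerated through ACCEL_VOLTS travel
# FLIGHT_PATH_M before detection; the peak therefore appears late by one
# transit time multiplied by the scan rate, which we subtract back out.
import math

FLIGHT_PATH_M = 0.20          # assumed effective path length (m)
ACCEL_VOLTS = 5.0             # assumed ion energy per unit charge (V)
AMU_KG = 1.660539e-27
E_COULOMB = 1.602177e-19

def transit_time_s(mz: float) -> float:
    v = math.sqrt(2.0 * E_COULOMB * ACCEL_VOLTS / (mz * AMU_KG))
    return FLIGHT_PATH_M / v

def corrected_mass(observed_mz: float, scan_rate_mz_per_s: float) -> float:
    # Undo the scan-speed-dependent shift: shift = scan rate x transit time.
    return observed_mz - scan_rate_mz_per_s * transit_time_s(observed_mz)

print(corrected_mass(100.0, 10000.0))   # ~0.6 m/z correction in this toy model
```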
This document discusses methods for dynamically calculating daylight glare over the course of a year. It presents three methods: 1) A timestep-by-timestep RADIANCE simulation that serves as a reference method but is very computationally intensive. 2) A simplified daylight glare probability (DGPs) method based only on vertical eye illuminance, similar to average luminance methods. 3) An enhanced simplified DGP method that also considers simplified images to account for peak glare sources in addition to vertical eye illuminance. The enhanced method is validated against full-year RADIANCE simulations using different shading systems. A histogram analysis and glare rating classification is proposed to evaluate dynamic glare results over a year.
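Method 2 reduces to a single linear relation. Assuming the published simplified DGP formula (DGPs = 6.22e-5 · Ev + 0.184, with the vertical eye illuminance Ev in lux), a minimal sketch is:

```python
# Simplified daylight glare probability from vertical eye illuminance alone.
def dgp_simplified(ev_lux: float) -> float:
    """DGPs = 6.22e-5 * Ev + 0.184, with Ev in lux (assumed published form)."""
    return 6.22e-5 * ev_lux + 0.184

# Example: an annual series of Ev values maps directly to a DGPs profile.
example_dgps = [dgp_simplified(ev) for ev in (1000.0, 2500.0, 6000.0)]
```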
Implementation of Inverse Distance Weighting, Local Polynomial Interpolation, a... (Sachin Mehta)
The general purpose of this project is to discuss the interpolation of a set of points to create four predicted surfaces. The points used represent pollution samples taken along the Maas River, measured in parts per million (ppm). The four surfaces will be created in ArcMap using the tools found in the Geostatistical Analyst. The created surfaces will then be used to predict the occurrence of a specified pollutant along the flood plain of the Maas River. For this exercise I chose to look at the spatial variation of mercury along the flood plain of the Maas River.
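As a generic illustration of the first of those surfaces, the sketch below implements plain inverse distance weighting in NumPy; it is not the ArcMap Geostatistical Analyst tool, and the sample points and power parameter are made up.

```python
# Inverse distance weighting: predictions are distance-weighted sample means.
import numpy as np

def idw(xy_samples, values, xy_query, power=2.0, eps=1e-12):
    """Predict values at query points from samples with weights 1/d^power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_samples[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
ppm = np.array([0.3, 0.8, 0.5])                 # hypothetical mercury samples (ppm)
grid = np.array([[0.5, 0.5], [0.2, 0.1]])
print(idw(samples, ppm, grid))
```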
The document describes GRASP, a versatile algorithm for characterizing atmospheric properties from remote sensing observations. GRASP can retrieve a variety of properties including aerosol optical thickness, single scattering albedo, vertical aerosol profiles, and surface reflectance from sensors on satellites, ground-based radiometers, and aircraft. It is based on generalized principles to develop a rigorous, efficient, and accessible algorithm. GRASP uses a statistical optimization fitting approach and flexible forward model to simulate observations and retrieve multiple parameters simultaneously. It is being applied to process data from various satellite instruments to improve aerosol retrievals over land and develop synergistic retrievals using non-coincident observations.
Interferogram Filtering Using Gaussian Scale Mixtures in Steerable Wavelet D... (CSCJournals)
An interferogram filtering method is presented in this paper. The main concern of the proposed scheme is to lower the residue count while preserving the location and jump height of the lines of phase discontinuity. The proposed method is based on a statistical model of the coefficients of a multi-scale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. Under this model, the Bayesian least-squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. The method substantially reduces the number of residues without affecting the lines of height discontinuity.
Adaptive optics are used in ground-based telescopes to directly image extrasolar planets and overcome atmospheric turbulence. Atmospheric turbulence causes distortions that blur planetary images. Adaptive optics systems measure wavefront distortions using a wavefront sensor and correct for them using a deformable mirror in a closed-loop system. This results in sharper, diffraction-limited images that help verify exoplanets. Future extremely large telescopes will use many more actuators on deformable mirrors to provide substantial correction, aiding the search for Earth-like exoplanets.
Contribution to the investigation of wind characteristics and assessment of w... (Université de Dschang)
M. Bawe Gerard Nfor, Jr. defended his doctoral (PhD) thesis in Physics, specializing in Mechanics and Energetics, on 19 May 2016 in the conference room of the Université de Dschang. At the close of the defense, the jury, chaired by Prof. Anaclet Fomethe, unanimously awarded him the distinction "très honorable".
Here is the PowerPoint presentation he gave for this defense.
- Christian Baker analyzed Herschel PACS spectroscopy data of dwarf galaxy NGC 5195 to determine characteristics of its cold gas and dust using a PDR model.
- Key emission lines were detected including [C II], [N II], [O I]63, and [O I]145. [O III] was below the detection threshold.
- Ratios of emission lines to total infrared flux suggested a heating efficiency about an order of magnitude lower than NGC 5195's companion galaxy M51.
- Analysis of line ratios indicated gas densities of log(n/cm⁻³) between 2.0 and 3.0 and radiation field intensities of log G₀ between 2.5 and 3.3.
Mapping spiral structure on the far side of the Milky Way (Sérgio Sacani)
Little is known about the portion of the Milky Way lying beyond the Galactic center at distances of more than 9 kiloparsec from the Sun. These regions are opaque at optical wavelengths because of absorption by interstellar dust, and distances are very large and hard to measure. We report a direct trigonometric parallax distance of 20.4 (+2.8, −2.2) kiloparsec, obtained with the Very Long Baseline Array, to a water maser source in a region of active star formation. These measurements allow us to shed light on Galactic spiral structure by locating the Scutum-Centaurus spiral arm as it passes through the far side of the Milky Way, and to validate a kinematic method for determining distances in this region on the basis of transverse motions.
The document summarizes a comparison of large eddy simulations (LES) of methane plume dispersion from a point source with measurements taken during a field campaign. The LES simulation was able to reproduce key meteorological conditions on the measurement day and capture the spatial evolution and statistical properties of the measured plumes over time. While the LES matched measurements well given the simple terrain and boundary layer meteorology, the study concludes more comprehensive meteorological measurements are still needed to fully validate dispersion models against real-world field experiments.
A Solution to Land Area Calculation for Android Phone using GPS (Luwei Yang)
This document proposes an Android application to calculate land area using GPS. It records a user's path using GPS as they walk around a land area. It then calculates the total land area using the trapezoid method, which breaks the irregular shape into smaller trapezoids. The Kalman filter is used to improve GPS accuracy and reduce errors, achieving an average error of 3.64% in tests. Key aspects include using Gauss-Kruger projection to convert GPS coordinates to plane coordinates before area calculation, and employing the trapezoid method and Kalman filtering to accurately calculate irregular land shapes and compensate for GPS errors, respectively.
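The area step is easy to sketch. Assuming the GPS track has already been projected to plane (x, y) coordinates (for example by a Gauss-Kruger projection) and smoothed, the trapezoid method reduces to summing signed trapezoid slices; the Kalman filtering stage is omitted here.

```python
# Area of a closed walked path from successive trapezoids (shoelace form).
def trapezoid_area(xy):
    area = 0.0
    n = len(xy)
    for i in range(n):
        x0, y0 = xy[i]
        x1, y1 = xy[(i + 1) % n]          # wrap around to close the polygon
        area += (x1 - x0) * (y1 + y0) / 2.0   # signed trapezoid slice
    return abs(area)

print(trapezoid_area([(0, 0), (30, 0), (30, 20), (0, 20)]))  # 600.0 m^2
```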
This document presents a Bayesian methodology for retrieving soil parameters like moisture from SAR images. It begins by introducing the importance of soil moisture monitoring and the opportunity provided by Argentina's upcoming SAOCOM SAR satellite. It then discusses limitations of traditional retrieval models in accounting for speckle noise and terrain heterogeneity. The document proposes a Bayesian approach using a multiplicative speckle model within a likelihood function to estimate soil moisture and roughness from SAR backscatter measurements. Simulation results show the Bayesian method retrieves soil moisture across the full measurement space and provides error estimates, with improved precision at higher numbers of looks.
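A conceptual sketch of the Bayesian step is shown below: a gamma speckle likelihood for L-look intensity is evaluated over a soil-moisture grid through a stand-in linear forward model. The forward model and all numbers are invented for illustration and are not the document's scattering model.

```python
# Grid-based Bayesian retrieval with a multiplicative (gamma) speckle model.
import numpy as np
from scipy.stats import gamma

looks = 8
sigma0_obs = 0.12                                  # observed backscatter (linear units)
mv_grid = np.linspace(0.05, 0.45, 200)             # volumetric soil moisture candidates
sigma0_model = 0.04 + 0.3 * mv_grid                # hypothetical forward model

# L-look multilooked intensity ~ Gamma(shape=L, scale=sigma0/L)
like = gamma.pdf(sigma0_obs, a=looks, scale=sigma0_model / looks)
post = like / np.trapz(like, mv_grid)              # posterior under a flat prior
mv_map = mv_grid[np.argmax(post)]                  # maximum a posteriori estimate
```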
Ozone and Aerosol Index as radiation amplification factor of UV index over Eg... (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Air pollution measurements give important, quantitative information about ambient concentrations and deposition, but they can only describe air quality at specific locations and times, without giving clear guidance on the identification of the causes of the air quality problem.
AUTOMATIC IDENTIFICATION OF CLOUD COVER REGIONS USING SURF (ijcseit)
Weather forecasting has become an indispensable application for predicting the state of the atmosphere at a future time based on cloud cover identification, but it generally needs the experience of a well-trained meteorologist. In this paper, a novel method is proposed for automatic cloud cover estimation tailored to Indian territory. The Speeded Up Robust Features (SURF) transform is applied to the satellite images to obtain affine-corrected images. The cloud regions extracted from the affine-corrected images by Otsu thresholding are superimposed on artificial grids representing latitude and longitude over India. The segmented cloud and grid composition drive a look-up table mechanism to identify the cloud cover regions. Owing to its simplicity, the proposed method processes the test images quickly and provides accurate segmentation of cloud cover regions.
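The Otsu stage of the pipeline is simple to sketch with OpenCV; the SURF registration step is omitted here (SURF lives in OpenCV's non-free contrib module), and the input path is a placeholder.

```python
# Otsu thresholding to separate bright cloud pixels from the background.
import cv2

img = cv2.imread("satellite_ir.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
img = cv2.GaussianBlur(img, (5, 5), 0)                        # suppress speckle
# Otsu picks the global threshold that best separates the two pixel classes.
_, cloud_mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
coverage = cloud_mask.mean() / 255.0                          # cloudy-pixel fraction
```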
Self-organizing maps (SOMs) can be used to classify seismic attributes in an unsupervised manner and reveal geological patterns that improve seismic interpretation. SOMs reduce a large set of seismic attributes into a smaller set of clusters that relate to geologic features of interest. The classified seismic data can then be analyzed using computer vision algorithms like convolutional neural networks to automatically identify depositional sequences, seismic facies, play types, leads, and prospects. This facilitates a more robust and timely seismic interpretation that can help identify hydrocarbon traps and reduce exploration risk and costs.
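A minimal sketch of the clustering step, using the third-party minisom package on random stand-in attributes rather than a real survey:

```python
# Unsupervised clustering of per-sample seismic attributes with a small SOM.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(2)
attributes = rng.normal(size=(10000, 8))     # 8 stand-in attributes per sample

som = MiniSom(4, 4, input_len=8, sigma=1.0, learning_rate=0.5, random_seed=2)
som.train_random(attributes, num_iteration=5000)

# Each sample maps to one of 16 prototype clusters for interpretation.
clusters = np.array([som.winner(a) for a in attributes])
```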
The document summarizes a study that evaluated EUMETSAT's Multi-sensor Precipitation Estimate (MPE) products from Meteosat-8 and Meteosat-9 satellites in comparison with ground-based radar data over Northwestern Europe. Categorical and continuous verification statistics were used to assess differences in spatial distribution and rainfall values between the MPE products and radar data. The results showed that MPE from Meteosat-9 had higher accuracy scores and was better at estimating rainfall values, though both products tended to overestimate during heavy rainfall events. Recommendations included further validation of MPE products over different regions and timescales.
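The categorical verification mentioned above boils down to a rain/no-rain contingency table between the MPE product and radar; a toy sketch with invented values:

```python
# Contingency-table verification scores for a satellite rain product vs radar.
import numpy as np

mpe = np.array([0.0, 1.2, 0.4, 3.0, 0.0, 0.8])    # mm/h, toy values
radar = np.array([0.1, 1.0, 0.0, 2.5, 0.0, 1.1])
thr = 0.2                                          # rain/no-rain threshold

hits = np.sum((mpe >= thr) & (radar >= thr))
misses = np.sum((mpe < thr) & (radar >= thr))
false_alarms = np.sum((mpe >= thr) & (radar < thr))
pod = hits / (hits + misses)                       # probability of detection
far = false_alarms / (hits + false_alarms)         # false alarm ratio
```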
This document presents a method for approximating illumination from polygonal area lights in real-time rendering. It projects the polygon onto the plane of the illuminated point to calculate diffuse illumination. It approximates local occlusion by the illuminated surface. Finding the optimal projection basis to avoid vertices appearing behind the plane is challenging. Results show promise for real-time rendering with area lights, though oscillations in performance need explanation. Future work could improve the projection, occlusion calculation, add specular lighting, and research global illumination effects.
An INS/GPS integrated navigation system for a hypersonic UAV is studied in this paper, both to satisfy the precise guidance requirements of the hypersonic UAV and to address the shortcomings of the inertial navigation system (INS) and the global positioning system (GPS) when applied separately. After generating a reference trajectory, the UAV's position, velocity, and attitude are obtained using INS and GPS respectively. The corresponding errors of the two navigation systems are obtained by comparing their navigation information. A Kalman filter is designed to estimate the navigation errors, and the INS navigation information is then corrected. The non-equivalence between the platform misalignment angle and the attitude error angle is taken into account, further improving navigation accuracy. Simulink simulation results show that the INS/GPS integrated navigation system achieves higher accuracy and better anti-interference ability than an INS-only system and satisfies the navigation accuracy requirements of a hypersonic UAV.
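A deliberately simplified sketch of the fusion idea follows: a Kalman filter estimates the INS error from INS-minus-GPS differences and the estimate is fed back to correct the INS output. Real designs use a full error-state model (position, velocity, attitude, and sensor biases); all noise values here are assumptions.

```python
# 1-D error-state Kalman filter: estimate INS position/velocity error from
# the INS-minus-GPS position residual, then correct the INS output.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # error state: [position err, velocity err]
H = np.array([[1.0, 0.0]])                 # the residual observes the position error
Q = np.diag([1e-4, 1e-3])                  # assumed process noise
R = np.array([[4.0]])                      # assumed GPS measurement variance (m^2)

x, P = np.zeros(2), np.eye(2)
for ins_pos, gps_pos in [(100.2, 100.0), (101.4, 101.0), (102.6, 102.0)]:
    x, P = F @ x, F @ P @ F.T + Q          # propagate the error estimate
    z = np.array([ins_pos - gps_pos])      # measured INS-minus-GPS residual
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + (K @ (z - H @ x)).ravel()      # update with the residual
    P = (np.eye(2) - K @ H) @ P
    print("corrected position:", ins_pos - x[0])
```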
Evaluation of procedures to improve solar resource assessments presented WREF... (Gwendalyn Bender)
This document evaluates two methods for improving solar resource assessments by combining long-term satellite data with short-term ground observations: 1) Correcting for bias in satellite-derived irradiance time series using local aerosol optical depth data, and 2) Applying a statistical Model Output Statistics correction using on-site observations. The methods are tested at a site in Israel, with the aerosol optical depth correction significantly reducing annual bias in direct and global irradiance estimates. A minimum of 9 months of on-site observations is found to be needed to positively impact the Model Output Statistics correction applied to satellite data.
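The second method can be illustrated with the simplest possible MOS-style correction: a linear fit of coincident on-site observations against the satellite series, then applied to the full satellite record. Real MOS schemes use richer predictors, and the numbers below are invented.

```python
# Linear Model Output Statistics style correction of a satellite series.
import numpy as np

sat = np.array([5.1, 6.0, 7.2, 6.5, 4.9])     # satellite GHI (kWh/m^2/day), toy
obs = np.array([4.8, 5.7, 7.0, 6.1, 4.6])     # coincident ground observations

slope, intercept = np.polyfit(sat, obs, deg=1)
corrected = slope * sat + intercept            # apply to the long-term record
```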
Understanding climate model evaluation and validation (Puneet Sharma)
The document discusses model evaluation and validation. It introduces key concepts like evaluation, validation, and the apple-orange problem when directly comparing models and observations. It describes using a satellite simulator like COSP to facilitate apple-to-apple comparisons by simulating what satellites would observe from the model. The document also notes issues with observations like errors and uncertainties that must be considered during evaluation.
Development of Methodology for Determining Earth Work Volume Using Combined S... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment, among many others.
Mapping the anthropic backfill of the historical center of Rome (Italy) by us... (Beniamino Murgante)
Mapping the anthropic backfill of the historical center of Rome (Italy) by using Intrinsic Random Functions of order k (IRF-k)
Ciotoli Giancarlo, Francesco Stigliano, Fabrizio Marconi, Massimiliano Moscatelli, Marco Mancini, Gian Paolo Cavinato - Institute of Environmental Geology and Geo-engineering (I.G.A.G.), National Research Council, Italy
This document summarizes the application of Marchenko imaging to a 2D ocean-bottom cable dataset from the North Sea Volve field. Marchenko redatuming estimates the full wavefield from virtual sources inside the medium using only surface reflection measurements and a smooth velocity model. The authors processed the field data to obtain an estimate of the reflection response required by Marchenko and used it to iteratively estimate focusing functions and retrieve up-going and down-going Green's functions. They performed target-oriented imaging at different depth levels using the redatumed reflection responses, revealing structures not visible in standard reverse-time migration. Marchenko imaging provides a way to obtain high-resolution images of target zones without needing detailed overburden models.
This document describes a fast and reliable method for surface wave tomography to estimate 2-D models of isotropic and azimuthally anisotropic velocity variations from regional or global surface wave data. The method inverts surface wave group or phase velocity measurements to produce tomographic maps in a spherical geometry. It allows for spatial smoothing and model amplitude constraints to be applied simultaneously. Examples applying this technique globally and regionally in Eurasia and Antarctica are presented.
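The flavor of an inversion with simultaneous smoothing and amplitude constraints can be sketched as an augmented least-squares system; the operator, weights, and sizes below are toy assumptions, not the paper's spherical-geometry formulation.

```python
# Regularized inversion of travel-time-like data d = G m, with smoothing
# and norm damping applied by augmenting the least-squares system.
import numpy as np

n = 50                                     # 1-D stand-in for map cells
rng = np.random.default_rng(3)
G = rng.normal(size=(120, n))              # toy path/sensitivity matrix
m_true = np.sin(np.linspace(0, 3 * np.pi, n))
d = G @ m_true + 0.05 * rng.normal(size=120)

L = np.diff(np.eye(n), axis=0)             # first-difference smoothing operator
alpha, beta = 5.0, 0.5                     # smoothing and damping weights
A = np.vstack([G, alpha * L, beta * np.eye(n)])
b = np.r_[d, np.zeros(n - 1), np.zeros(n)]
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```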
OBIA on Coastal Landform Based on Structure Tensor (csandit)
This paper presents an OBIA method based on the structure tensor to identify complex coastal landforms. A Hessian matrix is developed by Gabor filtering and a multiscale structure tensor is calculated. Edge information is extracted from the trace of the structure tensor, and a watershed segmentation of the image is performed. Textons are then developed and a texton histogram is created. Finally, the classification results are obtained by maximum likelihood classification with KL divergence as the similarity measure. The study findings show that the structure tensor can capture multiscale, all-direction information with little data redundancy, and the method described here achieves high classification accuracy.
This document summarizes research on direct non-linear inversion of 1D acoustic media using the inverse scattering series. Key points:
- A method is derived for directly inverting 1D acoustic media with varying velocity and density, without requiring an estimate of properties above reflectors or assuming linear relationships between property changes and reflection data.
- Testing on a single reflector case showed improved estimates of property changes beyond the reflector compared to linear methods, for a wider range of angles.
- A special parameter related to velocity change was identified that has the correct sign in the linear inversion and is not affected by issues like "leakage" that complicate inversions.
This document describes a Kriging component for spatial interpolation of climatological variables in the OMS modeling framework. Kriging is a geostatistical technique that interpolates values based on measured data and the spatial autocorrelation between data points. The component implements ordinary and detrended Kriging algorithms using 10 semivariogram models. It can interpolate both raster and point data and outputs the interpolated climatological variable values. Links are provided for downloading the component code, data, and OMS project files needed to run the interpolation.
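A compact ordinary-kriging sketch using one of the classic semivariogram models (spherical) is given below; the nugget, sill, and range are assumed values, and the OMS component itself is not reproduced here.

```python
# Ordinary kriging: solve the semivariogram system with a Lagrange multiplier.
import numpy as np

def spherical(h, nugget=0.0, sill=1.0, rng_=50.0):
    """Spherical semivariogram model (assumed parameters)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, g, sill)

def ordinary_krige(xy, z, q):
    n = len(xy)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d)
    A[n, n] = 0.0                          # Lagrange multiplier row/column
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(xy - q, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z                       # interpolated value at q

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
print(ordinary_krige(xy, np.array([5.0, 7.0, 6.0]), np.array([4.0, 4.0])))
```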
This document summarizes a study that used sigmoidal parameterization and Metropolis-Hasting (MH) inversion to estimate seismic velocity models from traveltime data. The key points are:
1) Sigmoidal functions were used to parameterize discontinuous velocity fields, allowing for sharp variations while maintaining continuity.
2) Ray tracing and the MH algorithm were used to invert traveltime data and estimate model parameters.
3) Tests on synthetic models showed the MH method produced higher-resolution velocity models that better fit the observed traveltime data, compared to other global optimization methods like very fast simulated annealing (see the sketch after this list).
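The two ingredients can be sketched as follows: a sigmoid-parameterized velocity profile (sharp but continuous jumps) and a Metropolis-Hastings accept/reject step on a travel-time misfit. The crude vertical travel-time forward model and all numbers are toy stand-ins for the study's ray tracing.

```python
# Sigmoid velocity parameterization plus a Metropolis-Hastings step.
import numpy as np

def velocity(z, v0, dv, z0, k=50.0):
    """Smooth step: about v0 below z0, v0 + dv above, with steepness k."""
    return v0 + dv / (1.0 + np.exp(-k * (z - z0)))

def log_likelihood(params, z_obs, t_obs, sigma=0.01):
    v = velocity(z_obs, *params)
    t_pred = np.cumsum(np.gradient(z_obs) / v)     # crude vertical travel times
    return -0.5 * np.sum((t_obs - t_pred) ** 2) / sigma**2

def mh_step(params, logL, z_obs, t_obs, rng, scale=0.05):
    prop = params + scale * rng.normal(size=len(params))
    logL_prop = log_likelihood(prop, z_obs, t_obs)
    if np.log(rng.uniform()) < logL_prop - logL:
        return prop, logL_prop                     # accept proposal
    return params, logL                            # reject, keep current model

rng = np.random.default_rng(5)
z = np.linspace(0.0, 2.0, 50)
t_obs = np.cumsum(np.gradient(z) / velocity(z, 2.0, 1.5, 1.0))  # synthetic data
params, logL = np.array([1.8, 1.0, 0.8]), -np.inf
for _ in range(2000):
    params, logL = mh_step(params, logL, z, t_obs, rng)
```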
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR (cscpconf)
The progressive development of Synthetic Aperture Radar (SAR) systems has diversified the exploitation of the images generated by these systems in different geoscience applications. The detection and monitoring of surface deformations produced by various phenomena has benefited from this evolution and has been realized with interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelations of the interferometric pairs used strongly limit the precision of the results of these techniques. In this context, we propose a methodological approach for detecting and analyzing surface deformation with differential interferograms, to show the limits of this technique as a function of noise quality and level. The detectability model is generated from deformation signatures by simulating a linear fault merged into image pairs from the ERS-1/ERS-2 sensors acquired over a region of southern Algeria.
Evaluation of the Sensitivity of Seismic Inversion Algorithms to Different St... (IJERA Editor)
This document evaluates the sensitivity of seismic inversion algorithms to wavelets estimated using different statistical methods. It summarizes two wavelet estimation techniques - the Hilbert transform method and smoothing spectra method. It also describes two inversion methods - Narrow-band inversion and a Bayesian approach. Numerical experiments were conducted to analyze the performance of the wavelet estimation methods and sensitivity of the inversion algorithms to estimated wavelets. The smoothing spectra method produced better wavelet estimates. The Bayesian approach yielded superior inversion results and more robust impedance estimates compared to Narrow-band inversion in all tests.
A study used numerical simulations to retrieve sea surface heights from low grazing angle radar Doppler measurements. Simulations showed good correlation between true and retrieved heights in visible and shadowed regions. Analysis of Doppler spectra versus range found discrete behavior correlated to orbital velocity statistics. Less Doppler spectrum dispersion was observed in vertical polarization than horizontal. Comparison of full wave model and physical optics approximations found little multi-path effect on retrievals in visible regions, and Doppler centroids still provided height information in shadowed regions.
This document summarizes research on improving ambiguity resolution in GPS positioning using an ionospheric differential correction model. Data was collected from two stations in Malaysia's equatorial region over a short baseline of 33 km. Applying corrections from an ionospheric model led to ambiguities being resolved faster, in under an hour, compared to uncorrected data which took over 2.5 hours. The model also produced smaller standard errors in baseline positioning and increased the variance ratio and decreased reference variance indicators of successful ambiguity resolution. The findings show that an ionospheric differential correction model can improve ambiguity resolution for single frequency GPS over short baselines.
The document discusses formulas for calculating the gravitational effects of topographic-isostatic masses on airborne and satellite gravity gradiometry measurements. It derives integral formulas in ellipsoidal approximation for computing the gravitational potential, gradients, and tensor due to various topographic-isostatic models. The formulas separate the computations into spherical and ellipsoidal components. They are applied to calculate the gravitational tensor at GOCE satellite altitude using a 5-arcminute digital elevation model. The approach uses mass-lines to approximate ellipsoidal volume elements for numerical evaluation.
First M87 Event Horizon Telescope Results. IX. Detection of Near-horizon Circ... (Sérgio Sacani)
This document summarizes the key findings of a study analyzing circular polarization (CP) in observations of the supermassive black hole in M87 (M87*) made by the Event Horizon Telescope (EHT) in 2017. The study provides evidence for resolved CP across the image of M87* at a moderate level below 3.7%. Multiple imaging methods produced consistent results within this upper limit, despite variations in the morphology. An analysis of general relativistic magnetohydrodynamic simulations found most models naturally produce a low level of CP consistent with this limit, suggesting Faraday conversion is likely the dominant production mechanism for CP in M87* at 230 GHz.
Sparsity-based Joint Direction-of-Arrival and Offset Frequency Estimator (Jason Fernandes)
- The document proposes a method to jointly estimate direction-of-arrival (DoA) and offset frequency of signals impinging on an antenna array using sparse representation.
- It builds on previous work by extending the estimation to include both spatial (DoA) and temporal (offset frequency) dimensions. This is done by constructing a joint dictionary as the Kronecker product of discrete spatial and temporal steering vector grids.
- Sparse recovery algorithms can then be applied to estimate the sparse coefficients and jointly infer the DoAs and offset frequencies of impinging signals from compressed measurements of the antenna array output over multiple time snapshots; a dictionary-construction sketch follows below.
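The dictionary construction is compact enough to sketch: the Kronecker product of a spatial steering-vector grid and a temporal offset-frequency grid gives joint (DoA, frequency) atoms. The array geometry and grid sizes below are assumptions for illustration.

```python
# Joint spatial-temporal dictionary as a Kronecker product of steering grids.
import numpy as np

M, N = 8, 16                                # sensors, time snapshots (assumed)
d, fs = 0.5, 1.0                            # element spacing (wavelengths), sample rate

thetas = np.deg2rad(np.linspace(-90, 90, 61))
A_sp = np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(thetas)[None, :])

freqs = np.linspace(-0.1, 0.1, 21)          # candidate offset frequencies
A_tm = np.exp(2j * np.pi * np.arange(N)[:, None] * freqs[None, :] / fs)

D = np.kron(A_sp, A_tm)                     # (M*N) x (61*21) joint dictionary
# A vectorized space-time snapshot y (length M*N) is then fit as y ~= D s
# with s sparse; the support of s indexes (DoA, offset frequency) pairs.
```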
Seismic Modeling ASEG 082001 (Andrew Long)
This document discusses tools for modeling elastic wave propagation to aid in seismic survey planning. It summarizes three main modeling techniques: recursive reflectivity methods, ray tracing methods, and full wavefield methods using finite-differencing. Ray tracing is useful for optimizing survey geometry but not reflectivity studies, while reflectivity and finite-difference methods model full wavefields and are better for amplitude studies like AVO. Integrating these modeling tools with real data and rock physics analysis allows comprehensive understanding of wave propagation for effective survey planning addressing all acquisition parameters and seismic phenomena.
A Study of Non-Gaussian Error Volumes and Nonlinear Uncertainty Propagation f... (Justin Spurbeck)
The ever-growing resident space object population poses a continual threat, in that a hypervelocity impact is likely to be catastrophic to an active satellite. To avoid these scenarios, space operators compute a probability of collision metric for each potential conjunction. Uncertainty trends are studied in the conjunction plane, and operational decisions to mitigate any high-risk situations are made based on this information. There are many methods of uncertainty propagation and probability-of-collision formulation, and knowledge of their realism is required to maintain a sustainable space environment. Thus, this research studies the Chan, Alfano, Foster, Gaussian mixture, and Monte Carlo probability of collision calculations and their correlation to uncertainty realism metrics. The linear, unscented transform, entropy-based, and Monte Carlo propagation techniques are utilized alongside the collision calculations, and it is shown that there are important correlations any space operator should be aware of to support maintenance of a healthy spacecraft.
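Of the formulations named, the Monte Carlo one is the easiest to sketch: sample the combined relative-position uncertainty in the conjunction plane and count samples falling inside the combined hard-body radius. The covariance and radius below are toy values.

```python
# Monte Carlo probability of collision in the 2-D conjunction plane.
import numpy as np

rng = np.random.default_rng(4)
mean_miss = np.array([120.0, 40.0])          # relative miss vector (m), toy
P_comb = np.array([[900.0, 200.0],           # combined position covariance (m^2)
                   [200.0, 400.0]])
hbr = 20.0                                   # combined hard-body radius (m)

samples = rng.multivariate_normal(mean_miss, P_comb, size=1_000_000)
pc = np.mean(np.linalg.norm(samples, axis=1) < hbr)   # fraction inside the HBR
print(f"Monte Carlo Pc ~ {pc:.2e}")
```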
Accuracy improvement of GNSS and real time kinematic using Egyptian network a... (Alexander Decker)
1) The document discusses improving the accuracy of differential GNSS and real-time kinematic (RTK) using the Egyptian network as a case study.
2) It investigates an integrated system to reduce orbital, ionospheric, and tropospheric errors affecting GNSS measurements.
3) The results of the study include an analysis of the improved accuracy achieved by the integrated system using precise ephemerides, ionosphere modeling, and troposphere modeling, as well as a comparison of DGPS and RTK solutions for the Egyptian network coordinates.
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Covariance models for geodetic applications of collocation brief version
COVARIANCE MODELS FOR GEODETIC APPLICATIONS OF COLLOCATION
Carlo Iapige De Gaetani
Politecnico di Milano, Department of Civil and Environmental Engineering (DICA)
Piazza Leonardo da Vinci, 32 20133 Milan, Italy
carloiapige.degaetani@polimi.it
KEY WORDS: Local Gravity Field, Data Integration, Collocation, Covariance Functions, Linear Programming, Simplex Method
ABSTRACT:
The recent gravity mission GOCE aims at measuring the global gravity field of the Earth with unprecedented accuracy. An improved description of gravity means improved knowledge in e.g. ocean circulation, climate and sea-level change, with implications in areas such as geodesy and surveying. Through GOCE products, the low-medium frequency spectrum of the gravity field is properly estimated. This is enough to detect the main gravimetric structures, but local applications are still questionable. GOCE data can be integrated with other kinds of observations having different features, frequency content, spatial coverage and resolution. Gravity anomalies (Δg) and geoid undulations (N) derived from radar-altimetry data (as well as GOCE second radial derivatives T_rr) are all linear(ized) functionals of the anomalous gravity potential (T). For local modeling of the gravity field, this useful connection can be exploited to integrate the information of different observations, in order to obtain a better representation of the high frequencies, otherwise difficult to recover. The usual methodology is based on collocation theory. The nodal problem of this approach is the correct modeling of the empirical covariance of the observations. Proper covariance models have been proposed by many authors; however, there are problems in fitting the empirical values when different functionals of T are combined. The problem of modeling covariance functions has here been dealt with by an innovative methodology based on Linear Programming and the Simplex Algorithm. The results obtained during the test phase of this new covariance function modeling methodology for local applications show improvements with respect to the software packages available so far.
1. INTRODUCTION
1.1 The Usual Procedure for Gravity Data Integration
The usual procedure for integrated local gravity field modeling is based on the combination of the Remove-Restore technique (RR) and collocation theory. The RR principle is one of the best-known strategies used for regional and local gravity field determination. It is based on the idea that the gravity field signal can be divided into long, medium and short wavelength components. Global models are used as an approximation of the low frequency part of the entire spectrum of the global gravitational signal, and they are suitable for a general representation of gravity at global scale. In the RR procedure, the effect due to the gravity field sources outside the local area of interest is removed by subtracting a suitable global model from the observed local data. This corresponds to a high-pass filter, and the result is a residual signal in which the long wavelengths, due to the masses in the deep Earth interior and upper mantle and to the contribution of a very smooth topography, have been removed. After the reduction for a global model, in addition to the medium frequencies, high frequency components are still present in the local data. They are essentially due to the signal contribution of the local topography, which is particularly strong at short wavelengths for a rough terrain, so that global models are not able to represent them properly (Forsberg, 1994). This residual topographic signal is called Residual Terrain Correction (RTC). The residual data obtained by applying both the reduction of the global model and a corresponding RTC contain only the intermediate wavelengths to be used for local geoid modeling. All trends and systematic effects are removed from the original data, and the local residual gravity observations, related for the most part to local features, can be used in collocation to estimate the medium-wavelength gravity signal. Subsequently, the geopotential model and RTC derived effects can be restored to this residual component, obtaining the final local estimate (see Figure 1).
Figure 1. The Remove-Restore concept
Collocation is a statistical-mathematical theory applied to gravity field modeling problems. It is based on the assumption that the gravity observations can be considered as realizations of a weakly stationary and ergodic stochastic process. This theory has become more and more important because it allows combining many different kinds of gravity field observables in order to obtain a better estimate of the gravity field. With the great amount of heterogeneous data nowadays available, this approach has been fully accepted as the standard methodology for integrated gravity field modeling. In this framework the concept of spatial correlation, expressed as a covariance function, is introduced. The key is represented by the fact that quantities such as Δg, N or T_rr are all linear(ized) functionals of the anomalous gravity potential T, and their covariance functions can be propagated from one to another by applying the proper linear operators to the well-known analytical model of the covariance of the anomalous potential:
C_{T_P T_Q}(\psi_{PQ}) = \frac{(GM)^2}{R^2} \sum_{n=2}^{\infty} \left(\frac{R}{r_P}\right)^{n+1} \left(\frac{R}{r_Q}\right)^{n+1} \sigma_n^2 \, P_n(\cos\psi_{PQ})   (1)
where: R is the mean Earth radius,
G is the gravitational constant,
M is the total mass of the Earth,
r_P and r_Q are the geocentric radii of the points P and Q,
σ_n² are a set of adimensional degree variances,
P_n are the usual normalized Legendre polynomials,
ψ_PQ is the spherical distance between P and Q.
Degree variances are related to the coefficients a_nm and b_nm of the spherical harmonic expansion of T as:
\sigma_n^2 = \sum_{m=0}^{n} \left( a_{nm}^2 + b_{nm}^2 \right)   (2)
The general formula of Least Squares Collocation (LSC) for geodetic applications (Barzaghi and Sansò, 1984) can be written as:
\hat{L}_P(T) = \sum_{i,j=1}^{N} L_P L_i C_{TT} \left[ L_i L_j C_{TT} + \sigma_n^2 I \right]^{-1}_{ij} \left( L_j(T) + n_j \right)   (3)
where: L_i is the linear(ized) operator applied at the point P_i,
C_{TT} is the covariance of the anomalous potential propagated between the observations,
σ_n² is the noise variance of the observations.
In equation (3) the covariance expressed by the local data plays a fundamental role, and a correct modeling of the covariance functions is therefore required.
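Purely as an illustrative aid (not part of the original implementation), the collocation estimate (3) reduces to a single linear solve once the covariance matrices have been propagated. The following minimal Python sketch shows this; all names are hypothetical and the matrices are assumed to be given:

import numpy as np

def lsc_predict(C_sl, C_ll, noise_var, obs):
    # Least Squares Collocation estimate, cf. equation (3):
    #   s_hat = C_sl (C_ll + sigma_n^2 I)^-1 (L(T) + n)
    # C_sl: cross-covariances between the predicted functional and the observations
    # C_ll: auto-covariance matrix of the observations
    # noise_var: noise variance sigma_n^2 of the observations
    # obs: vector of residual observations
    n = C_ll.shape[0]
    # solve the linear system instead of forming an explicit inverse
    x = np.linalg.solve(C_ll + noise_var * np.eye(n), obs)
    return C_sl @ x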
2. GRAVITY COVARIANCE FUNCTION MODELING
2.1 The State of the Art in Covariance Function Estimation
The application of LSC allows gravity field estimation by integration of different kinds of observations. This requires defining the covariance function of the anomalous potential as the kernel function. The covariance functions of the other linear functionals are obtained by covariance propagation. Generally the exact structure of the covariance function of T is unknown, and the way to proceed consists in fitting suitable analytical models to the empirical covariances of the available data. In local areas, the stationarity and ergodicity of the local field required by collocation theory are generally guaranteed by the removal from the data of the systematic effects related to the long wavelengths of the signal and to the topographic effect. In fact, local covariance functions are a special case of global covariance functions (Knudsen, 1987), where the information content in wavelengths longer than the extent of the local area has been removed, and the information outside the area is assumed to vary in a way similar to the information within the area. In practice the observations are given at discrete points in the computation area, so that the calculation of the so-called empirical covariance function is done by computing the products of all the possible combinations of available data and clustering them as a function of their reciprocal distance. The mean value of each cluster represents the covariance value. Hence, the generic covariance at distance ψ is given by:
C(\psi) = C\!\left(\frac{(2k-1)\,\Delta\psi}{2}\right) = \frac{1}{n\,m} \sum_{i=1}^{n} \sum_{j=1}^{m} l_i \, l'_j   (4)
where n and m are the numbers of l and l′ observations with reciprocal distance (k − 1)Δψ ≤ ψ ≤ kΔψ and k = 1, …, s. If l = l′, equation (4) gives the auto-covariance of l. In this case, the covariance computed at zero distance is nothing else than the sum of the signal variance σ²_signal and the noise variance σ²_noise.
This information is useful for a proper estimate of the empirical covariance values. In fact, the width of the sample spacing Δψ must be properly selected, and a simple criterion for optimizing Δψ is the maximization of the signal-to-noise ratio. If the noise of the observations is uncorrelated, for k ≥ 1 the empirical covariance represents, with some approximation, the covariance due only to the signal component of the observations. Hence the empirical covariance calculated at distances close to zero (e.g. for k = 1) can be considered approximately equal to σ²_signal. In this way the optimal Δψ can be estimated as the one minimizing the difference C(0) − C(Δψ/2) ≅ σ²_noise. For a fair spatial distribution of data, the optimal Δψ is close to twice the mean spacing of the observations (Mussio, 1984).
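As a numerical illustration of equation (4) and of the class width Δψ, the following hypothetical Python sketch bins the products of all observation pairs by distance (planar distance is used for brevity; the actual procedure works with spherical distances):

import numpy as np

def empirical_covariance(points, values, dpsi, n_classes):
    # Products of all observation pairs are clustered by reciprocal
    # distance into classes of width dpsi; the class mean is the
    # empirical covariance value, cf. equation (4).
    sums = np.zeros(n_classes)
    counts = np.zeros(n_classes, dtype=int)
    n = len(values)
    for i in range(n):
        for j in range(i, n):
            d = np.linalg.norm(points[i] - points[j])
            k = int(d / dpsi)
            if k < n_classes:
                sums[k] += values[i] * values[j]
                counts[k] += 1
    cov = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    psi = (np.arange(n_classes) + 0.5) * dpsi  # class midpoints
    return psi, cov

Note that the first class contains the zero-distance products, i.e. approximately σ²_signal + σ²_noise, consistent with the discussion above.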
Once the local empirical covariance is computed, a theoretical model is fitted to the empirical values. Based on the global analytical formula (1), the usual model for a local covariance function is expressed as:
C(\psi_{PQ}) = \alpha \sum_{n=2}^{N_{GM}} e_n^2 \left(\frac{R^2}{r_P r_Q}\right)^{n+1} P_n(\cos\psi_{PQ}) + \sum_{n=N_{GM}+1}^{N_{max}} \sigma_n^2 \left(\frac{R^2}{r_P r_Q}\right)^{n+1} P_n(\cos\psi_{PQ})   (5)
Such a covariance model is the sum of two parts. The first comes from the commission error of the global model removed from the observations. It is given in terms of the error degree variances e_n², summed up to the maximum degree N_GM of the global model subtracted in the remove phase; the e_n² are computed as the sums of the variances of the estimated model coefficients. The second part represents the residual part of the signal contained in the local data, expressed as a sum of degree variances up to degree N_max. The error degree variances depend on the global model used, and the coefficient α allows weighing their influence. Suitable models for σ_n² in (5) have been proposed by Tscherning and Rapp (Tscherning and Rapp, 1974) as:
\sigma_n^2 = \frac{A}{(n-1)(n-2)}, \qquad \sigma_n^2 = \frac{A}{(n-1)(n-2)(n+B)}   (6)
By covariance propagation, these covariance models of the anomalous potential T can be used to get models for any functional of T. By tuning the model constants (e.g. A and α), these model functions can be used to properly fit the empirical covariances of the available data. The selected model covariance function can then be propagated to the other observed functionals using once again the covariance propagation formula. This is the usual procedure of covariance function modeling. Its main drawbacks are connected to mismodeling: the proposed models are frequently unable to properly fit the empirical covariances, particularly when different functionals are considered. In a previously proposed approach (Barzaghi et al., 2009), regularized least squares adjustment was applied to integrated covariance function modeling. Although more flexible than the Tscherning and Rapp models, it frequently runs into problems with the non-negativity condition on σ_n², which must hold since the degree variances are sums of squared quantities, as (2) shows.
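For illustration only, the second Tscherning-Rapp style model in (6) and the corresponding Legendre series can be evaluated as in the following sketch; conventional (unnormalized) Legendre polynomials are assumed, and the global scale factors are omitted, so this is a simplified reading of (5)-(6) rather than the authors' code:

import numpy as np
from numpy.polynomial import legendre

def tr_degree_variances(A, B, n_max):
    # Degree variances of the second model in equation (6):
    # sigma_n^2 = A / ((n - 1)(n - 2)(n + B)), defined for n >= 3
    n = np.arange(3, n_max + 1)
    return n, A / ((n - 1.0) * (n - 2.0) * (n + B))

def model_covariance(psi_deg, A, B, n_max):
    # Evaluate sum_n sigma_n^2 P_n(cos psi); legval computes a
    # Legendre series from its coefficient array
    n, sig2 = tr_degree_variances(A, B, n_max)
    coeffs = np.zeros(n_max + 1)
    coeffs[n] = sig2
    return legendre.legval(np.cos(np.radians(psi_deg)), coeffs)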
2.2 An Introduction to Linear Programming
In order to overcome these limits, the proposed covariance
fitting method is based on linear programming. To describe this
method, we can consider the following system of linear
inequalities:
3𝑥 − 4𝑦 ≤ 15
𝑥 + 𝑦 ≤ 12
𝑥 ≥ 0
𝑦 ≥ 0
(7)
An ordered pair (X, Y) such that all the inequalities are satisfied when (x = X, y = Y) is a solution of the system of linear inequalities (7); as an example, (2, 4) is one possible solution. Such problems in two variables can be easily sketched in a graph that shows the set of all the pairs (X, Y) solving the system (Figure 2).
Figure 2. Graphical sketch of a system of linear inequalities
There are several solutions to (7). Finding one of particular interest is an optimization process (Press et al., 1989). When this solution is the one maximizing (or minimizing) a given linear combination of the variables (called the objective function) subject to constraints expressed as in (7), the optimization process is called Linear Programming (LP). An example of linear programming could be finding the minimum value of the objective function:
z = x + y   (8)
with the constraints imposed in (7). Each of the inequalities of (7) separates the (x, y) plane into two parts, identifying the region containing all the pairs (X, Y) satisfying it. Outside this region at least one of the constraints is not satisfied, so the solution to (8) must be chosen among the pairs (X, Y) located inside it. This region, called the feasible region, contains all the possible solutions (feasible solutions) of the constraint system (Figure 3). One of them is the optimal solution that solves the LP problem. The fundamental theorem of linear programming assures that if a solution exists, it occurs at one of the vertices of the feasible region.
Figure 3. Two dimensional feasible region of constraints
For applications involving a large number of constraints or variables, numerical methods must be applied. One of them is the Simplex method. It provides a systematic way of examining the vertices of the feasible region to determine the optimal value of the objective function. The simplex method consists of elementary row operations on a particular matrix corresponding to the LP problem, called the tableau. The initial version of the tableau changes its form through iterative optimality checks. This operation is called pivoting: to form the improved solution, Gauss-Jordan elimination is applied to the pivot column, using the pivot element at the crossing of the pivot row and pivot column. After improving the solution, the simplex algorithm starts a new iteration, checking for further improvements. Each iteration changes the tableau, and the algorithm stops when the conditions of optimality or of infeasibility of the proposed LP problem are met. Based on the fundamental theorem of linear programming, the simplex method is able to verify the existence of at least one solution of the proposed LP problem; if it exists, the algorithm is also able to find the best numerical solution in finite time.
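As a concrete illustration, using an off-the-shelf LP solver rather than the tableau mechanics described above (scipy is assumed to be available), the sample problem (7)-(8) can be solved in a few lines of Python:

from scipy.optimize import linprog

# Minimize z = x + y subject to the constraints (7):
#   3x - 4y <= 15,  x + y <= 12,  x >= 0,  y >= 0
res = linprog(c=[1.0, 1.0],
              A_ub=[[3.0, -4.0], [1.0, 1.0]],  # inequality left-hand sides
              b_ub=[15.0, 12.0],               # inequality right-hand sides
              bounds=[(0, None), (0, None)])   # x >= 0, y >= 0
print(res.x)  # the optimal vertex, here the origin (0, 0), with z = 0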
2.3 A New Methodology of Covariance Functions Modeling
A new procedure, based on the simplex method and on analytical covariance function models similar to (5), has been devised for integrated covariance function modeling. It applies the simplex method to estimate suitable parameters of the model covariance functions in such a way as to fit simultaneously the empirical covariances of all the available local data. The main problem of standard local covariance modeling is the propagation of the covariance between functionals: often an estimated model covariance function showing a good agreement with the empirical values of one kind of data (typically gravity anomalies), when propagated to the other covariances of data observed in the same local area, shows a poor fit to the corresponding empirical covariances. Improvements are thus possible by taking into account all the available empirical covariances and finding a suitable combination of error degree variances and degree variances giving the best possible overall agreement. Other aspects that require attention are the non-negativity of the degree variances of the covariance model and the fact that the data can refer to different heights. The first one can be handled through proper constraints in applying the simplex method, while the second one requires an approximation. The empirical covariance estimation procedure does not take these height variations into account. Nevertheless, since reduced values are considered, one can assume that the data are referred to a mean height sphere, either the mean Earth sphere or a sphere at satellite altitude when satellite data are considered. So the basic model covariance function of the anomalous potential observed at two points becomes:
C_{TT}(\psi) = \frac{(GM)^2}{R^2} \left[ \sum_{n=2}^{N_{GM}} \left(\frac{R}{R+h}\right)^{2(n+1)} e_n^2 \, P_n(\cos\psi) + \sum_{n=N_{GM}+1}^{N_{max}} \left(\frac{R}{R+h}\right)^{2(n+1)} \sigma_n^2 \, P_n(\cos\psi) \right]   (9)
where h is the mean height of the data. This equation is a linear combination of N_GM − 2 adimensional error degree variances e_n², up to the degree N_GM used in the reduction of the data, and of N_max − N_GM adimensional degree variances σ_n² that complete the spectrum up to degree N_max and that have to be estimated.
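To fix ideas, the two bracketed sums of (9) (and later of (13)) are linear in any scale factors applied to e_n² and σ_n². A hypothetical sketch of their evaluation, with the (GM)²/R² factor omitted and all names assumed, could look as follows:

import numpy as np
from numpy.polynomial import legendre

def covariance_contributions(psi_deg, e2, sig2, n_gm, n_max, h, R=6371000.0):
    # Commission-error part (error degree variances e2[n], degrees 2..n_gm)
    # and residual-signal part (degree variances sig2[n], degrees
    # n_gm+1..n_max), both attenuated by (R/(R+h))^(2(n+1)) for data
    # referred to a mean height sphere of height h; cf. equation (9).
    c_err = np.zeros(n_max + 1)
    c_sig = np.zeros(n_max + 1)
    for n in range(2, n_gm + 1):
        c_err[n] = (R / (R + h)) ** (2 * (n + 1)) * e2[n]
    for n in range(n_gm + 1, n_max + 1):
        c_sig[n] = (R / (R + h)) ** (2 * (n + 1)) * sig2[n]
    x = np.cos(np.radians(psi_deg))
    return legendre.legval(x, c_err), legendre.legval(x, c_sig)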
As explained before, the simplex method requires the definition of a set of constraints, expressed as linear inequalities. For a given ψ_k, there is a set of e_n² and σ_n² such that:
C_{TT}(\psi_k) = C_{TT}^{emp}(\psi_k)   (10)
Referring to more functionals and to the distance sampling of the empirical covariances, there are slightly different sets of e_n² and σ_n² such that:
C_{TT}(\psi_k) \cong C_{TT}^{emp}(\psi_k)
C_{NN}(\psi_k) \cong C_{NN}^{emp}(\psi_k)
C_{\Delta g \Delta g}(\psi_k) \cong C_{\Delta g \Delta g}^{emp}(\psi_k)   (11)
(11) can be translated into inequalities that the simplex method has to take into account. Each of them can be expressed in the following form:
C(\psi_k) \le C^{emp}(\psi_k) + \Delta C_k
C(\psi_k) \ge C^{emp}(\psi_k) - \Delta C_k   (12)
A collection of constraints such as (12) can be used in the simplex method to handle the optimization problem, taking into account that model and empirical covariances have to be as close as possible (Figure 4).
Figure 4. Constraints applied on model covariance function
The same thing can be done with multiple empirical covariance functions. With these constraints given on the estimated covariance functions, the simplex method is forced to find a unique set of adapted e_n² and σ_n², suitable for all the available empirical covariances (Figure 5).
Figure 5. Multiple constrains on model covariance functions
The adapted e_n² and σ_n² are obtained by applying suitable scaling factors to some reference models of e_n² and σ_n². Estimating each of them individually would also be possible, but such a problem would require at least N_max − 1 constraints, and a simplex tableau of such dimensions could be difficult to manage (some tests have been done up to degree 2190). For this reason the chosen model covariance function has the form:
C_{TT}(\psi) = \frac{(GM)^2}{R^2} \left[ \alpha \sum_{n=2}^{N_{GM}} \left(\frac{R}{R+h}\right)^{2(n+1)} e_n^2 \, P_n(\cos\psi) + \beta \sum_{n=N_{GM}+1}^{N_{max}} \left(\frac{R}{R+h}\right)^{2(n+1)} \sigma_n^2 \, P_n(\cos\psi) \right]   (13)
Once the constraints, which are linear functions of the unknown scale factors α and β, have been defined, a suitable objective function must be chosen in order to generate the tableau and apply the simplex algorithm. A possible condition is to impose that the model function be close to a proper feasible model, e.g. selecting e_n² and σ_n² so that they are close to the values implied by a global geopotential model. If the global sets are able to reproduce the empirical covariance values, this implies obtaining scale factors close to unity. So it has been decided to minimize an objective function that is simply the sum of the two adaptive coefficients. Many different objective functions have been tested, but this solution proved to be the most adequate. Therefore the linear programming problem that the simplex method has to solve is simply:
\min (\alpha + \beta)   (14)
with α and β subject to m constraints expressed as:
C_{L(T)L'(T)}(\psi_i; \alpha, \beta) \le C_{L(T)L'(T)}^{emp}(\psi_i) + \Delta C_i
C_{L(T)L'(T)}(\psi_i; \alpha, \beta) \ge C_{L(T)L'(T)}^{emp}(\psi_i) - \Delta C_i   (15)
with i = 1, …, m, where L(T) and L'(T) are linear functionals of the anomalous potential T (such as Δg, N or T_rr) and C_{L(T)L'(T)} are the corresponding covariance functions propagated using (13).
As explained previously, if all the m constraints are consistent and the proposed optimization problem has a solution, there is a feasible region of the solution space where different combinations of α and β are suitable. This covariance fitting methodology is numerically implemented through an iterative procedure. While the objective condition (14) is kept fixed, the conditions (15) are tuned in order to get the best possible fit with the empirical covariances. In terms of the feasible region, this procedure identifies an initial large feasible region (soft constraints, poor fit) and then reduces this area until the vertices of the optimal solutions practically coincide with one another (strongest constraints, best fit). In Figure 6 this process is sketched.
Figure 6. Impact of iterative constraints adjustment
on feasible region
Thus in this procedure the simplex method has been applied in a quite different way with respect to standard applications of linear programming. While in the usual application of the simplex algorithm the focus is on the objective function and the constraints are fixed, the devised procedure proceeds in the reverse way: the objective function loses most of its importance, and the focus is on suitable constraints allowing the best possible agreement between the model covariance functions and the empirical values.
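The iteration just described can be summarized, under strong simplifications (a single functional, a uniform tolerance band shrunk by a constant factor, and contribution vectors F_err and F_sig computed as in the previous sketch), by the following hypothetical Python fragment built on a standard LP solver rather than the original Fortran implementation:

import numpy as np
from scipy.optimize import linprog

def fit_scale_factors(F_err, F_sig, C_emp, tol, shrink=0.8, max_iter=50):
    # F_err[i], F_sig[i]: model covariance contributions at distance psi_i
    # for unit scale factors; the model covariance is the linear function
    # C(psi_i; alpha, beta) = alpha * F_err[i] + beta * F_sig[i], so the
    # band |C - C_emp| <= dC yields the linear constraints (15).
    dC = np.full(len(C_emp), tol, dtype=float)
    A = np.column_stack([F_err, F_sig])
    best = None
    for _ in range(max_iter):
        A_ub = np.vstack([A, -A])                        # C <= C_emp + dC
        b_ub = np.concatenate([C_emp + dC, dC - C_emp])  # and C >= C_emp - dC
        res = linprog(c=[1.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None), (0, None)])     # objective (14)
        if not res.success:
            break          # infeasible: the previous band was the tightest
        best = res.x       # (alpha, beta) satisfying the current band
        dC *= shrink       # tighten the constraints and iterate
    return best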
3. ASSESSMENT OF THE PROCEDURE
3.1 Preliminary Tests on Covariance Function Modeling with the Simplex Method
Covariance function modeling with the simplex method has been implemented through many Fortran 95 subroutines, based on the concepts explained before. For brevity, the entire program will henceforth be called SIMPLEXCOV. The procedure is mainly composed of three steps:
1. analysis of the input data for the assessment of the best sampling of the empirical covariance functions;
2. computation of the empirical covariances;
3. iterative computation of the best-fit model covariance functions with the simplex method.
The third step is an iterative procedure composed of two nested loops. In the external loop a set of suitable constraints on the covariance functions is defined. Based on these constraints, in the internal loop many optimization problems are generated and solved by the simplex method. In each iteration the starting e_n²-σ_n² model (derived e.g. from global model coefficients) is slightly modified and a simplex algorithm solution is searched for. If more than one set of varied degree variances is able to satisfy the constraints, an improved fit can be obtained by modifying the constraints in the external loop, and so on. On the contrary, if all the LP problems prove to have no feasible solution in the internal loop, the constraints have to be softened in the external loop. The final iteration corresponds to a unique combination of adapted degree variances that allows obtaining the best possible fit between the empirical covariances and the model covariance functions. Figure 7 and Figure 8 show an intermediate iteration: in this computation two different sets of adapted degree variances (Figure 7) produce very similar model covariance functions, both satisfying the same constraints on the empirical values (Figure 8).
Figure 7. Degree variances adapted in intermediate solution
Figure 8. Model covariance functions in intermediate solution
The covariance function modeling can be further improved by adding constraints or making them more stringent, until only one solution is found (Figure 9).
Figure 9. Iterative covariance function modeling process
with Simplex method
This procedure has been tested both with simulated data and with real datasets. A first fast test has been implemented on gravity data in the NW part of Italy.
[Plot data of Figures 7 and 8: degree (0-900) versus adimensional degree variances for solutions I and II; spherical distance (0-4 deg) versus C(ΔgΔg) [mGal²] for the empirical values and solutions I and II.]
In this test, a dataset of 4840 observed free-air gravity anomalies Δg_FA (hereafter denoted simply Δg), homogeneously distributed in a 5°×5° area (with SW corner at coordinates 42°N, 7°E), has been reduced for the long wavelength component by removing the global model EGM2008 (Pavlis et al., 2012) up to degree 360 and the corresponding Residual Terrain Correction. Similarly, in the same area 430 observations of geoid undulation N obtained by GPS-leveling have been reduced correspondingly. In a standard local geoid computation, the N values are estimated from the Δg values: the empirical covariance function of Δg is computed, and a suitable model covariance function is estimated so as to best represent the spatial correlation given by the empirical values. The devised methodology has thus been applied following this estimation procedure. As a benchmark, the standard fitting method based on the COVFIT program of the GRAVSOFT package has been adopted (Tscherning, 2004). In Figure 10a the empirical covariance of the reduced gravity values and the COVFIT3 best-fit model are plotted. As one can see, the empirical values and the model covariance function are in suitable agreement; however, the model function is not able to reproduce the oscillating features of the empirical covariance. As a further cross-check, the model function of the residual geoid undulations, as derived from the gravity covariance model, has been compared with the empirical covariance values of N (Figure 10b). The agreement is quite poor, as the model function displays a correlation pattern which, at short distance, is stronger than the one implied by the empirical covariance.
Figure 10. Δg model covariance function obtained by COVFIT3 (a) and the same model propagated on N (b)
SIMPLEXCOV has then been applied, in order to verify the results obtained by the new procedure and to compare them with the standard covariance function modeling procedure. The result is plotted in Figure 11. The agreement between model and empirical covariance values is remarkable: the model is now able to fully reproduce the oscillations of the empirical values in a very detailed way (Figure 11a). As done before, the undulation model covariance implied by the gravity model covariance has been derived and compared with the empirical undulation covariance. The model seems to fit the empirical values a little better, even though the overall agreement is quite poor in this case too (Figure 11b).
Figure 11. Δg model covariance function obtained by SIMPLEXCOV (a) and the same model propagated on N (b)
The SIMPLEXCOV procedure has then been applied to both empirical functions, in order to get a single set of scaled error degree/degree variances allowing a common improved fit. Thus both the empirical covariances of the residual undulations and of the residual gravity have been considered in the fitting procedure. The results are summarized in Figure 12.
Figure 12. Integrated model covariance functions of Δg (a) and N (b) obtained by SIMPLEXCOV
As expected, a remarkable improvement is obtained. The main features of both empirical covariances are properly reproduced using a unique set of scaled error degree/degree variances. In Figure 13 the comparison between the starting error degree/degree variances adopted from EGM08 and the final scaled solution is shown. In order to fit the empirical values properly, larger error degree variances must be considered; on the other hand, a smaller scale factor is applied to the degree variances. This can be explained by saying that the formal error degree variances are too optimistic or, equivalently, that the commission error is not realistic.
Figure 13. Integrated adapted degree variances obtained by
SIMPLEXCOV
This test shows the reliability of the solution obtained when the simplex method is applied to covariance function modeling problems. The adapted degree variances are able to reproduce the empirical covariance functions of the observed data better than the usual methodologies. In local applications the integration of different kinds of data is very useful, and suitable covariance functions are needed for computing proper collocation solutions. Thus the devised procedure seems to be a suitable tool for improving covariance function modeling.
[Plot data of Figures 10-12: spherical distance (0-4 deg) versus C(ΔgΔg) [mGal²] and C(NN) [m²], empirical values versus models; and of Figure 13: degree (0-900) versus adimensional adapted degree variances, EGM08 error degree variances and EGM08 degree variances.]
3.2 Integrated Local Estimation of the Gravity Field
The RR technique and LSC, based on the covariance functions computed by SIMPLEXCOV, have been combined in a unique computing procedure, in order to obtain local predictions of gravity field functionals. This procedure is able to integrate observations of Δg_FA, N, T and T_rr for the local prediction of each one of them. The main steps of the procedure are illustrated in the following.
• REMOVE PHASE:
a) computation of the long wavelength component (global models);
b) computation of the high frequency component (RTC);
c) computation of the residual values of all the available data.
• COVARIANCE FUNCTION MODELING PHASE:
d) computation of the empirical covariance functions (for all the available data);
e) computation of the integrated model covariance functions based on SIMPLEXCOV.
• LEAST SQUARES COLLOCATION PREDICTION PHASE:
f) selection and down-sampling of the data, if needed (to avoid numerical problems);
g) assembling of the covariance matrices to be used in the collocation formula;
h) inversion of this covariance matrix;
i) computation of the collocation estimate for the selected functional of the anomalous potential.
• RESTORE PHASE:
j) restoring of the long and short wavelengths to the predicted residuals.
The collocation formula (3) is applied as many times as the number of prediction points. For each computation a down-sampling process is applied to the input observed residuals. Several reasons have led to this choice. Covariance functions express the spatial correlation of the data, and the impact of the data on the estimate is tuned by the covariance correlation length, defined as the distance at which the covariance is half of its value at the origin. Hence from the model covariance function it is possible to determine this correlation length, in such a way as to select only the most significant data, i.e. those close to the current prediction point. To this aim, only data having a distance from the estimation point smaller than the correlation length are considered (Figure 14).
Figure 14. Window selection and down sampling of data
This helps in selecting the most significant data, also reducing the computation time: in (3) the covariance matrix to be inverted has a dimension equal to the total amount of data used. The proposed selection criterion speeds up the computation without reducing the precision, because only relevant (i.e. significantly correlated) data are used. However, for very dense datasets a further down-sampling process could be necessary. Once only the significant data have been selected, if necessary their total amount is decreased, taking into account the homogeneity of the spatial coverage around the prediction point. To maintain a homogeneous coverage of the prediction area, the selected data are subdivided into three rings centered on the prediction point, and the data in each ring are randomly down-sampled in such a way as to maintain an isotropic and homogeneous distribution of the input data. With this down-sampling procedure the computation process speeds up, because the size of the matrix to be inverted decreases, and an effective estimation procedure is set up.
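A hypothetical sketch of the windowed selection and ring-wise down-sampling, composed with the lsc_predict fragment given after equation (3), could read as follows (all names and the planar-distance simplification are assumptions of this illustration, not the original Fortran code):

import numpy as np

def select_window(points, p0, corr_len, max_obs, rng=np.random):
    # Keep only observations within one correlation length of the
    # prediction point p0, then randomly thin each of three concentric
    # rings so the spatial distribution stays roughly homogeneous.
    d = np.linalg.norm(points - p0, axis=1)
    keep = np.where(d < corr_len)[0]
    if len(keep) <= max_obs:
        return keep
    per_ring = max_obs // 3
    edges = [0.0, corr_len / 3.0, 2.0 * corr_len / 3.0, corr_len]
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = keep[(d[keep] >= lo) & (d[keep] < hi)]
        if len(ring) > 0:
            take = min(len(ring), per_ring)
            chosen.append(rng.choice(ring, size=take, replace=False))
    return np.concatenate(chosen)

def predict_point(p0, points, residuals, corr_len, max_obs,
                  cov_model, cross_cov, noise_var):
    # One windowed collocation estimate: steps f) to i) of the procedure.
    idx = select_window(points, p0, corr_len, max_obs)      # step f)
    pts, obs = points[idx], residuals[idx]
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C_ll = cov_model(D)                                     # step g)
    C_sl = cross_cov(np.linalg.norm(pts - p0, axis=1))
    return lsc_predict(C_sl, C_ll, noise_var, obs)          # steps h)-i)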
3.3 Pre-Processing of GOCE Data
Before being integrated with the other available datasets, the recent GOCE T_rr data freely distributed by ESA needed a pre-processing step. In order to be used in the integrated computing procedure, these data have been reduced for the long wavelengths by subtracting a global model up to a given maximum degree: the GOCE Direct Release 3 global model (GDIR3; Bruinsma et al., 2010) up to degree 180 has been subtracted from the observed T_rr. These residuals cannot be used directly and need a further processing step, since they are completely dominated by corrupting high frequency noise. Thus a suitable filter must be applied to the residual T_rr, so as to remove the noisy high frequency components. This can be done by low-pass filtering the data using a suitable moving average. The cut-off frequency is directly related to the amplitude of this spatial average, so this must be set in such a way as to remove mainly the noise component. Generally, if the desired cut-off frequency is expressed in terms of the degree n of the spherical harmonic expansion, the amplitude of the moving average Δψ can be determined as Δψ(°) = 180°/n. Under this assumption, if a moving average of 0.7° is used, the residual signal components up to degree 260 are not filtered out. This empirical filtering procedure has been applied to the T_rr dataset, both track-wise (i.e. along each track) and spatial-wise (i.e. considering a moving average in space). Two filtered T_rr datasets have been used in the following analysis: the spatially filtered data will be denoted as T_rr^sp, while those filtered along track as T_rr^tr.
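A hypothetical sketch of the track-wise version of this moving-average low-pass filter (the names and the one-dimensional along-track coordinate are assumptions of this illustration) could be:

import numpy as np

def moving_average_filter(psi, values, window_deg=0.7):
    # For each sample of a track, average all samples whose along-track
    # position psi (in degrees) lies within +/- window_deg/2; with
    # window_deg = 0.7 the cut-off corresponds to about degree
    # 180/0.7 ~ 260 of the spherical harmonic expansion.
    psi = np.asarray(psi, dtype=float)
    values = np.asarray(values, dtype=float)
    half = window_deg / 2.0
    out = np.empty_like(values)
    for i in range(len(psi)):
        mask = np.abs(psi - psi[i]) <= half
        out[i] = values[mask].mean()
    return out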
3.4 The Calabrian Arc Area Test
A first test of the integration procedure was performed in the area centered on the Calabrian Arc. This area has been chosen as test area because here the gravity signal presents a high variability, due to the geophysical phenomenon of subduction in the area of the Ionian Sea. Furthermore, in this area an aerogravimetric campaign sponsored by ENI S.p.A. was performed in 2004. The main objective of this test is the assessment of the new procedure in a situation where dense satellite observations are integrated with other existing data in order to obtain a local model of the gravimetric signal. As mentioned, the benchmark for the predictions is a dataset of 1157 free-air gravity anomalies Δg_FA acquired by aerogravimetry. In the 9°×9° area with SW corner at coordinates (λ = 12°E, φ = 34°N), 22932 available ERS1 Geodetic Mission radar-altimetry observations and 16370 values of GOCE radial second derivatives have been selected. The GOCE data have been processed according to what was explained previously (§3.3). All the available data have been reduced with the GOCE Direct Release 3 global model up to degree 180; furthermore, a corresponding Residual Terrain Correction, computed with the GRAVSOFT package's TC program (Forsberg and Tscherning, 2008) and a detailed 3″×3″ DTM, has been removed from the Δg_FA and N data.
[Plot data of Figure 14: spherical distance (0-4 deg) versus C(NN) [m²], empirical values versus model, with the correlation length indicated.]
The T_rr data have been reduced only for the global model component, because the topographic signal has proved to be not relevant at satellite altitude (about 260 km) and is in any case filtered out in the pre-processing phase. Different combinations of the residual N and T_rr have been used in order to predict Δg on the aerogravimetry points. The T_rr values have been considered both as T_rr^sp and T_rr^tr. The predicted values have been compared with the observed gravity anomalies from aerogravimetry, in order to assess the computing procedure and to check for the best possible combination in reproducing the observed data. Different model covariance functions have been estimated by SIMPLEXCOV, based on the empirical covariance values of the residual data. When the model covariance functions are computed taking into account only one functional, the empirical values are well reproduced by the simplex algorithm. An example is shown in Figure 15.
Figure 15. Model covariance function estimated with N only
However, problems concerning the propagation of the model covariances to the other functionals are still evident (Figure 16).
Figure 16. Model covariance function estimated with T_rr only and then propagated on N
On the other hand, if the newly proposed integrated procedure (§2.3) is applied to jointly fit all the available empirical covariances, there are sharp improvements, which proves the effectiveness of this approach. An example of the obtained results is presented in Figure 17. In this combined fit of the two functionals N and T_rr, the agreement between the empirical values and the model functions is worse than that of a model covariance function computed by fitting the covariance of a single functional (e.g. Figure 15). However, since all the empirical covariances are used, an overall suitable fit is reached.
Figure 17. Integrated model covariance function estimated with N + T_rr, propagated on N (a) and T_rr (b)
This is indeed the procedure to be followed because, as was shown, if the covariance function of one observable only is considered, the remarkable fit obtained does not carry over to the covariances of the other existing data. Thus the integrated estimate of a covariance model for T, to be then propagated to any linear functional of T, seems to be the proper method. Once the model covariance functions have been estimated, different tests have been performed. Five combinations of data have been tested, and the corresponding different predictions of Δg have been estimated, with tests also on parameters such as the amplitude of the selection window.
The results of test A have been obtained with a selection window of 0.7° width and a maximum of 4000 observations for each computation point. Due to their spatial resolution, on average 500 values of T_rr and 600 of N at sea have been used for each estimation point. The results are summarized in Table 1, where a notation such as Δg_(N+Trr^sp) corresponds to the gravity anomalies predicted by a combination of N and T_rr^sp data, and Δg_aer denotes the benchmark residuals obtained by aerogravimetry.
                         # points   E [mGal]   σ [mGal]
Δg_aer                       1157      -1.12      21.34
Δg_aer − Δg_(Trr^sp)         1157      -9.36      23.52
Δg_aer − Δg_(Trr^tr)         1157      -8.66      19.80
Δg_aer − Δg_N                1157       8.31      15.64
Δg_aer − Δg_(N+Trr^sp)       1157       4.72      14.97
Δg_aer − Δg_(N+Trr^tr)       1157      -2.16      33.23
Table 1. Statistics of the differences between benchmark values
and predictions obtained in test A
The results obtained using the N data are only slightly improved by the combined N + T_rr^sp estimation procedure; in particular, the mean value of the residuals is nearly half of the one obtained by considering N only. On the other hand, no significant estimate can be obtained using T_rr only (it seems, however, that less poor estimates are computed when the T_rr^tr data are taken as input). This is an expected result, because at the mean altitude of 260 km the gravity signal is strongly attenuated, and the downward continuation from the observed T_rr to Δg_FA close to the Earth surface is an unstable procedure. As a general remark, one can also state that biases are always present; most of them can be considered significant, the noise in the data, as derived from cross-over statistics, being of the order of 2 mGal. Finally, an unexpected anomalous result is obtained when combining N + T_rr^tr, the anomaly being related to the standard deviation of these residuals, which should be compared with the one derived using the T_rr^tr data alone. At the moment this behavior has no reasonable explanation.
[Plot data of Figures 15-17: spherical distance (0-4 deg) versus C(NN) [m²] and C(TrrTrr) [mE²], empirical values versus models.]
To check for possible improvements, in test B it has been decided to remove, during the estimation procedure, for each estimation point and for each windowed dataset, the corresponding mean value. The results of this test are shown in Table 2:
                         # points   E [mGal]   σ [mGal]
Δg_aer                       1157      -1.12      21.34
Δg_aer − Δg_(Trr^sp)         1157     -10.72      22.49
Δg_aer − Δg_(Trr^tr)         1157     -17.64      22.54
Δg_aer − Δg_N                1157      -0.04      17.01
Δg_aer − Δg_(N+Trr^sp)       1157      -1.19      17.46
Δg_aer − Δg_(N+Trr^tr)       1157      -7.87      16.18
Table 2. Statistics of the differences between benchmark values
and predictions obtained in test B
The results of test B are similar to those obtained previously. We only remark that, in this second case, the anomalous behavior of Δg_(N+Trr^tr) is not present, which reinforces the previous statement about this anomaly. As previously stated, only the combined N + T_rr estimate leads to a significant reduction of the residuals with respect to the observed aerogravimetry-derived data.
Test C concerned the amplitude of the windowed selection of data for each prediction point: a further test has been performed to verify whether a wider windowing improves the accuracy. In this third case, only combinations with the T_rr^tr data have been considered. The selection window has been set to 1° width, with a maximum of 4000 observations for each computation point. With these settings, nearly 600 values of T_rr and 800 of N at sea have been considered for each prediction point. After the selection, the mean value of the selected data has been removed, as was done in the second test. A summary of the results is shown in Table 3:
                         # points   E [mGal]   σ [mGal]
Δg_aer                       1157      -1.12      21.34
Δg_aer − Δg_(Trr^tr)         1157      -9.38      20.37
Δg_aer − Δg_N                1157      -0.04      17.01
Δg_aer − Δg_(N+Trr^tr)       1157       2.26      15.94
Table 3. Statistics of the differences between benchmark values
and predictions obtained in test C
The residuals show the same patterns, and the statistics are practically equivalent to those in Table 2. Thus it can be concluded that changing the windowing amplitude with respect to the one chosen as a function of the correlation length of the data has no significant impact on the estimation, but only on the computation speed. Furthermore, generally speaking, it seems that no significant improvements are obtained when using N + T_rr with respect to the solution based on N only. This holds, however, only when the residuals are considered globally. By analyzing the residuals track by track, one can see that some slight improvements are reached when the aerogravimetry tracks on land areas are considered: there no altimeter data are available, and the N-based estimates are poorer than those based on N + T_rr. This can be seen in the differences between Δg_N and Δg_(N+Trr^tr), whose larger discrepancies are in the land areas (Figure 18). These discrepancies are related to the T_rr-driven improvements in the Δg_(N+Trr^tr) estimates, which are closer to the aerogravimetry Δg_aer, as can be seen in the slightly better statistics of the residuals (see Table 3). This is indeed an indication in favor of merging all the available data, even when they are satellite derived.
Figure 18. Differences between the predicted values Δg_(N+Trr^tr) and Δg_N obtained in test C
An interesting result has not yet been discussed. High degree global models (e.g. EGM08) allow computing the gravity signal with high overall accuracy and resolution. Table 4 shows the statistics of the differences between the aerogravimetry observations and EGM08 synthesized up to different maximum degrees on the aerogravimetry tracks. When it is computed up to its maximum degree 2160, EGM08 is practically able to recover all the signal observed by aerogravimetry; in this case the aerogravimetric campaign could be replaced by the global model, if the requested accuracy permits it. The prediction obtained by the procedure developed in this work, combining filtered GOCE data and radar-altimetry data, is comparable with EGM08 computed up to degree 650 (see Table 3 and Table 4).
                            E [mGal]   σ [mGal]
Δg_aer − Δg_EGM08(2160)        -0.28       3.74
Δg_aer − Δg_EGM08(…)           -0.72       4.49
Δg_aer − Δg_EGM08(…)            0.34       9.58
Δg_aer − Δg_EGM08(…)            1.09      12.74
Δg_aer − Δg_EGM08(650)          2.50      16.79
Table 4. Statistics of the differences between benchmark values
and EGM2008 computed up to different maximum degrees
This is an interesting result: in the central part of the Mediterranean Sea the gravity data are generally very dense and of high quality, and it is therefore expected that global models such as EGM08 are able to represent the gravity field in this area with a resolution comparable to aerogravimetry. However, in some areas of central Africa or South America the availability of high quality data is much poorer, and with such a data integration procedure, especially given the great number of satellite observations now available, a local computation could obtain better results than those achievable by simply evaluating a global model. In conclusion, in the discussed tests reliable and meaningful results have been obtained using the proposed procedure.
4. CONCLUSIONS
The aim of this work was to set up a procedure able to combine different functionals of the anomalous potential in order to obtain predictions of any other functional of T. A computing procedure based on the remove-restore technique and least squares collocation has been devised. In particular, this procedure was based on an innovative approach to covariance function modeling. A new methodology based on the simplex algorithm and linear programming theory made it possible to obtain model covariance functions in good agreement with the empirical covariance values computed from the reduced data. Remarkable results were obtained in the integrated estimate of the model covariance functions when the empirical values of different available functionals were used. The new methodology proved to be flexible and able to properly reproduce all the main features of the given empirical covariances. This result is only a part of a general procedure able to combine functionals of the anomalous potential, such as gravity anomalies, geoid undulations and second radial derivatives of T, for local gravity field estimation. Another important conclusion has been reached in the evaluation of the feasibility of a windowed collocation estimate. Reliable results were obtained with the windowed collocation procedure implemented in this work. In particular, it has been shown that the window amplitude can be selected on the basis of the covariance correlation length. Also, tests have been devised for reducing the number of data while preserving the homogeneity and isotropy of the data distribution around each computation point. Preliminary tests have been performed to validate this procedure from an algorithmic point of view. The covariance models, through the least squares collocation procedure, were able to give reliable predictions of Δg. In the presented test, local predictions of Δg have been compared with observed values coming from aerogravimetry. These predictions have been obtained with different combinations of radar-altimetry data and GOCE data, in order to verify the recovery of the medium-high frequencies contained in the gravity signal measured with this technique. The best fit between the empirical covariance values and the model covariances was reached in the combined estimation procedure. This allows overcoming the fitting problem that frequently occurs when only one empirical covariance is used to tune the model covariance. As a matter of fact, it was proved that the joint estimation option leads to an optimal fit of the selected covariance models for the other available functionals. The collocation estimates derived from combining the different data sets according to this procedure proved to be consistent with the observed gravity data. Further tests have been performed on different areas and with different combinations of data; they are not presented for brevity, but they confirm the results illustrated in this paper. As a final comment, one can say that the devised method for covariance fitting, together with a windowed collocation procedure, is able to give reliable standard estimates. Moreover, there are promising improvements, particularly related to the proposed covariance fitting procedure. The method for covariance fitting is effective and can remarkably improve the coherence between empirical and model covariances. Therefore, it can be considered a valuable tool in further developments and applications of collocation.
5. REFERENCES
Barzaghi R., Tselfes N., Tziavos I.N., Vergos G.S., 2009. Geoid and high resolution sea surface topography modelling in the Mediterranean from gravimetry, altimetry and GOCE data: evaluation by simulation. Journal of Geodesy, No. 83.
Barzaghi R., Sansò F., 1984. La collocazione in geodesia fisica. Bollettino di Geodesia e Scienze Affini, anno XLIII.
Borghi A., 1999. The Italian geoid estimate: present state and
future perspectives. Ph.D. thesis, Politecnico di Milano.
Bruinsma S.L., Marty J.C., Balmino G., Biancale R., Foerste C.,
Abrikosov O., Neumayer H., 2010. GOCE Gravity Field
Recovery by Means of the Direct Numerical Method. presented
at the ESA Living Planet Symposium, 28 June-2 July 2010,
Bergen, Norway.
Forsberg R., 1994. Terrain effect in geoid computation. Lecture
notes, International School of the Determination and Use of the
Geoid, IGeS, Milano.
Heiskanen W.A., Moritz H., 1967. Physical Geodesy. Institute
of Physical Geodesy, Technical University, Graz, Austria.
Knudsen P., 1987. Estimation and modelling of the local empirical covariance function using gravity and satellite altimeter data. Bulletin Géodésique, No. 61.
Moritz H., 1980. Advanced Physical Geodesy. Wichmann,
Karlsruhe.
Mussio L., 1984. Il metodo della collocazione minimi quadrati e
le sue applicazioni per l’analisi statistica dei risultati delle
compensazioni. Ricerche di Geodesia Topografia e
Fotogrammetria, No. 4, CLUP.
Pavlis N.K., Holmes S.A., Kenyon S.C., Factor J.K., 2012. The development and evaluation of the Earth Gravitational Model 2008 (EGM2008). Journal of Geophysical Research, Vol. 117.
Press W.H., Flannery B.P., Teukolsky S.A., Vetterling W.T., 1989. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press.
Reguzzoni M., 2004. GOCE: the space-wise approach to
gravity field determination by satellite gradiometry. Ph.D.
thesis, Politecnico di Milano.
Tscherning C.C., Rapp R.H., 1974. Closed Covariance Expressions for Gravity Anomalies, Geoid Undulations, and Deflections of the Vertical Implied by Anomaly Degree-Variance Models. Reports of the Department of Geodetic Science, No. 208, The Ohio State University.
Tscherning C.C., 2004. Geoid determination by least squares
collocation using GRAVSOFT. Lecture notes, International
School of the Determination and Use of the Geoid, IGeS,
Milano.
Tselfes N., 2008. Global and local geoid modelling with GOCE
data and collocation. Ph.D. thesis, Politecnico di Milano.
6. ACKNOWLEDGMENTS
Prof. Riccardo Barzaghi and Ph.D. Noemi Emanuela Cazzaniga gave a fundamental contribution to this work and to my entire Ph.D. course. My gratitude goes mainly to them. Thanks.