This document describes algorithms for generating Level 3 (L3) data products from the NASA-ISRO Synthetic Aperture Radar (NISAR) mission to validate Level 2 (L2) requirements related to measuring secular velocities, coseismic displacements, and transient deformations of the solid Earth. It outlines two approaches: 1) comparing InSAR time series to continuous GPS measurements, and 2) examining autocorrelation of noise in negligibly deforming areas. The L3 products will decompose InSAR time series into basis functions to isolate secular, coseismic, and transient signals for validation against the accuracy requirements over length scales of 0.1-50 km.
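The time-series decomposition mentioned above can be illustrated as a linear least-squares fit to secular, coseismic, and transient basis functions. This is a minimal sketch on synthetic data, not the mission's actual L3 algorithm; the function name, basis choice, and decay timescale are assumptions:

```python
import numpy as np

def decompose_timeseries(t, d, t_eq, tau=0.5):
    """Least-squares fit of a displacement time series d(t) to an offset,
    a secular rate, a coseismic step at t_eq, and a post-event exponential
    transient with decay timescale tau (years). Returns the 4 amplitudes."""
    step = (t >= t_eq).astype(float)                  # Heaviside at the event
    trans = step * (1.0 - np.exp(-np.maximum(t - t_eq, 0.0) / tau))
    G = np.column_stack([np.ones_like(t), t, step, trans])  # design matrix
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m                                # offset, rate, step, transient

# Synthetic series: 2 mm/yr secular rate plus a 10 mm coseismic step at 3 yr
t = np.linspace(0.0, 6.0, 60)
d = 2.0 * t + 10.0 * (t >= 3.0)
offset, rate, step_amp, trans_amp = decompose_timeseries(t, d, t_eq=3.0)
```

On noise-free synthetic data the fit recovers the input rate and step amplitude; in practice the same design matrix would be solved per pixel against real InSAR displacements.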
The document describes seismic interpretation workflows, including conventional and unconventional techniques. Conventional techniques involve horizon interpretations, fault picking, and tying seismic data to well logs to understand subsurface geology. Unconventional techniques analyze seismic attribute variations like amplitudes to identify hydrocarbon indicators. The workflow includes generating synthetics from well logs, interpreting horizons on seismic sections, identifying structures like faults and gas chimneys, and determining direct hydrocarbon indicators.
This study develops empirical correlations between cumulative absolute velocity (CAV) and spectral accelerations (Sa) using ground motion records from the NGA database. CAV-Sa correlations are influenced by rupture distance and presence of velocity pulses. Piecewise linear fitting equations are provided to quantify the correlations for various periods from 0.01 to 10 seconds. The correlations provide a useful way to characterize the joint occurrence of CAV and Sa, which can be applied in ground motion selection.
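Two ingredients of such a correlation model can be sketched briefly: the Pearson correlation coefficient between paired intensity measures, and piecewise-linear interpolation of the correlation across log-period. The breakpoint periods and correlation values below are made up for illustration, not the paper's coefficients:

```python
import numpy as np

def pearson(x, y):
    """Sample Pearson correlation coefficient of two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))

def rho_of_period(T, breakpoints, values):
    """Piecewise-linear correlation model in log10(period).
    Breakpoint periods/values here are purely illustrative."""
    return np.interp(np.log10(T), np.log10(breakpoints), values)

# Illustrative breakpoints spanning the 0.01-10 s period range
bp = np.array([0.01, 0.1, 1.0, 10.0])
rho = np.array([0.7, 0.6, 0.5, 0.4])
r_1s = rho_of_period(1.0, bp, rho)
```

`np.interp` requires the breakpoint abscissae to be increasing, which log-period breakpoints naturally are.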
This document describes a fast and reliable method for surface wave tomography to estimate 2-D models of isotropic and azimuthally anisotropic velocity variations from regional or global surface wave data. The method inverts surface wave group or phase velocity measurements to produce tomographic maps in a spherical geometry. It allows for spatial smoothing and model amplitude constraints to be applied simultaneously. Examples applying this technique globally and regionally in Eurasia and Antarctica are presented.
APPLICATION OF SPATIOTEMPORAL ASSOCIATION RULES ON SOLAR DATA TO SUPPORT SPAC... (IJDKP)
This summarizes a research paper that proposes a new algorithm called MiTSAI to extract Thematic Spatiotemporal Association Rules (TSARs) from solar Satellite Image Time Series (SITS) in order to better understand solar data and support space weather forecasting. MiTSAI considers both the visual features and semantic information of solar images. Experimental results validated that MiTSAI can extract new and interesting patterns compared to existing algorithms.
A Fully Automated System for Monitoring Pit Wall Displacements (JOSE ESPEJO VASQUEZ)
ABSTRACT.
Automated monitoring of steep slopes, excavations, and high embankments allows early detection of instability and can be used to avoid or mitigate potential slope failures.
Systems using multiple, different types of sensors have been developed and successfully tested at the Highland Valley Copper Mine in British Columbia. These systems use robotic total stations (RTS) as the primary measurement sensors, with surveys repeated at predefined intervals selected to optimize operational efficiency.
This methodology has been developed to improve system accuracy and reliability by reducing the effects of systematic errors created by atmospheric refraction and by unstable instrument and reference-point positions. The inclusion of GPS sensors to monitor the RTS positions creates additional operational flexibility and maintains system integrity when the available reference stations are insufficient.
Seismic Modeling ASEG 082001 (Andrew Long)
This document discusses tools for modeling elastic wave propagation to aid in seismic survey planning. It summarizes three main modeling techniques: recursive reflectivity methods, ray tracing methods, and full wavefield methods using finite-differencing. Ray tracing is useful for optimizing survey geometry but not reflectivity studies, while reflectivity and finite-difference methods model full wavefields and are better for amplitude studies like AVO. Integrating these modeling tools with real data and rock physics analysis allows comprehensive understanding of wave propagation for effective survey planning addressing all acquisition parameters and seismic phenomena.
This document summarizes the concept and uses of response spectra for structural engineers. Response spectra provide a way to quantify the demands of earthquake ground motion on structures of varying natural periods of vibration. They have been incorporated into building codes since the 1950s and help establish seismic design forces. Actual recorded response spectra are jagged, but design response spectra are smoothed curves. Response spectra can be used for rapid evaluation of building inventories, performance-based design, evaluation of seismic vulnerability, and post-earthquake damage estimates. They provide a useful tool for earthquake-resistant design.
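A response spectrum of the kind described above can be computed by integrating a damped single-degree-of-freedom oscillator over the ground-motion record for each natural period and recording the peak response. A minimal sketch using the central-difference scheme on a made-up sine-pulse "ground motion" (not a real record):

```python
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Peak relative displacement Sd(T) of damped SDOF oscillators,
    x'' + 2*zeta*w*x' + w^2*x = -ag, integrated by central differences."""
    Sd = []
    for T in periods:
        w = 2.0 * np.pi / T                 # natural circular frequency
        c0 = 1.0 / dt**2 + zeta * w / dt    # coefficient of x_{n+1}
        x_prev = x = 0.0                    # at-rest initial conditions
        peak = 0.0
        for a in ag:
            x_next = (-a + (2.0 / dt**2 - w**2) * x
                      + (zeta * w / dt - 1.0 / dt**2) * x_prev) / c0
            x_prev, x = x, x_next
            peak = max(peak, abs(x))
        Sd.append(peak)
    return np.array(Sd)

# Made-up excitation: a 1 s sine pulse at 0.3 g, then 3 s of free vibration
dt = 0.01
tt = np.arange(0.0, 1.0, dt)
ag = np.concatenate([0.3 * 9.81 * np.sin(2 * np.pi * tt), np.zeros(300)])
Sd = response_spectrum(ag, dt, periods=[0.2, 0.5, 1.0, 2.0])
```

The explicit central-difference scheme is conditionally stable (it requires dt < T/pi), which the step size above satisfies for every period in the list; plotting Sd against period gives the jagged curve that design spectra smooth.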
Underwater localization and node mobility estimation (IJECEIAES)
In this paper, localization of a moving node in underwater wireless sensor networks (UWSNs) is considered. Most existing algorithms have been designed to work with a static node in the network; in practice, however, the node is dynamic due to relative motion between the transmitter and receiver. The main idea is to record the time-of-arrival (ToA) message stamp and estimate the drift in the sampling frequency accordingly. Channel conditions such as multipath, delay spread, and ambient noise are considered to make the system pragmatic. Node mobility and speed are jointly predicted from the sampling-frequency-offset estimate, and this offset is detected by correlating an anticipated window in the orthogonal frequency-division multiplexing (OFDM) structure of the received packet. The range and distance of the mobile node are predicted from the speed estimated at the received packet and reused in the position-estimation algorithm. An underwater acoustic channel with 8 paths and a maximum delay spread of 48 ms is simulated to represent a pragmatic case. Performance is evaluated for different node speeds in two scenarios, expansion and compression. The results show that the proposed algorithm has a stable profile in the presence of severe channel conditions, that the maximum speed it can handle is 9 km/h, and that the expansion profile is more stable than the compression scenario. In addition, a comparison with a dynamic triangular algorithm (DTN) is presented to evaluate the proposed system.
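The link between sampling-frequency drift and node motion described above is essentially a Doppler relation: relative motion at speed v rescales the received waveform, and hence the apparent sampling frequency, by roughly 1 + v/c, where c is the underwater sound speed. A simplified sketch of that idea (not the paper's algorithm; all values and names are illustrative):

```python
C_WATER = 1500.0   # nominal underwater sound speed in m/s (illustrative)

def speed_from_offset(f_nominal_hz, f_estimated_hz, c=C_WATER):
    """Relative transmitter-receiver speed implied by an apparent sampling
    frequency vs. the nominal one: f_rx = f_tx * (1 + v/c).
    Positive = closing (compression), negative = opening (expansion)."""
    return c * (f_estimated_hz - f_nominal_hz) / f_nominal_hz

def range_after(r0_m, v_ms, dt_s):
    """Propagate the node range over dt_s using the estimated closing speed."""
    return r0_m - v_ms * dt_s

# A +0.1% sampling-frequency offset at 48 kHz implies a 1.5 m/s closing speed
v = speed_from_offset(48000.0, 48048.0)   # 1.5 m/s
```

The propagated range would then feed the position-estimation step, as the abstract describes.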
Earthquake ground motion and response spectra (Bijan Mohraz, Fahim Sadek) (TheJamez)
This chapter surveys the state-of-the-art work in strong motion seismology and ground motion
characterization. Methods of ground motion recording and correction are first presented, followed by a
discussion of ground motion characteristics including peak ground motion, duration of strong motion, and
frequency content. Factors that influence earthquake ground motion such as source distance, site geology,
earthquake magnitude, source characteristics, and directivity are examined. The chapter presents
probabilistic methods for evaluating seismic risk at a site and development of seismic maps used in codes
and provisions. Earthquake response spectra and factors that influence their characteristics such as soil
condition, magnitude, distance, and source characteristics are also presented and discussed. Earthquake
design spectra proposed by several investigators and those recommended by various codes and provisions
through the years to compute seismic base shears are described. The latter part of the chapter discusses
inelastic earthquake spectra and response modification factors used in seismic codes to reduce the elastic
design forces and account for energy absorbing capacity of structures due to inelastic action. Earthquake
energy content and energy spectra are also briefly introduced. Finally, the chapter presents a brief discussion
of artificially generated ground motion.
Velocity analysis is one of the prime aspects of seismic data processing. It is an iterative process, and the subsurface velocity field is progressively improved at different stages of processing. These analyses require an initial velocity field, but in a virgin area this must be estimated from the seismic data itself, by employing CVS (Constant Velocity Stack) or t²-x² methods. In the present context, we demonstrate using field data that t²-x²-based velocities are more reliable than CVS-based velocities for subsequent velocity analysis purposes.
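The t²-x² method exploits the hyperbolic moveout relation t² = t0² + x²/v²: squared reflection traveltimes plotted against squared offsets fall on a straight line whose slope is 1/v² and whose intercept is t0². A minimal sketch of that fit on synthetic picks (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def t2x2_velocity(offsets, times):
    """Estimate stacking velocity and zero-offset time from reflection
    traveltime picks via a linear fit of t^2 against x^2:
        t^2 = t0^2 + x^2 / v^2
    """
    slope, intercept = np.polyfit(offsets**2, times**2, 1)
    v = 1.0 / np.sqrt(slope)       # stacking velocity
    t0 = np.sqrt(intercept)        # zero-offset two-way time
    return v, t0

# Synthetic picks for a reflector at t0 = 1.0 s, v = 2000 m/s
x = np.linspace(0.0, 3000.0, 13)            # offsets in metres
t = np.sqrt(1.0**2 + x**2 / 2000.0**2)      # exact hyperbolic moveout
v_est, t0_est = t2x2_velocity(x, t)         # ~2000 m/s, ~1.0 s
```

With real picks the points scatter about the line, and the fit's robustness across offsets is what makes the method attractive for building an initial velocity field.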
Fracture prediction using low coverage seismic data in area of complicated st... (Mario Prince)
This document presents a workflow for predicting fractures in a limestone reservoir using 3D seismic data with low fold coverage in an area with complicated structures in Colombia. Key steps included: 1) applying interpolation and azimuthal division to overcome data limitations, 2) performing PSTM on azimuthal volumes to maintain structure while enhancing image quality, and 3) using relative impedance attributes to detect anisotropy and predict fracture orientation and intensity, with two dominant orientations identified. Comparison to well data showed excellent agreement between seismic-derived and FMI-measured fracture orientations, validating the technique for reliable fracture prediction with low coverage seismic data.
Delineating faults using multi-trace seismic attributes: Example from offshor... (iosrjce)
1) The document describes a workflow for delineating faults in a 3D seismic dataset from offshore Niger Delta using multi-trace seismic attributes.
2) Dip-steering and multi-trace similarity attributes were computed to highlight discontinuities and improve fault detection. This revealed a major NE-SW trending strike-slip fault separating compressional deformation to the north and extensional deformation to the south.
3) Vertical cross-sections show faults and fault zones are more clearly resolved when computing multi-trace similarity along structural dips using dip-steering, rather than directly from seismic reflectivity alone.
Estimation of bridge pier scour for clear water & live bed scour condition (IAEME Publication)
1) The document analyzes and compares several equations for estimating bridge pier scour depth under clear water and live bed conditions.
2) Statistical tests are used to validate the equations against experimental laboratory data from previous studies. The tests analyzed include Theil's Coefficient, Mean Absolute Error, and Root Mean Square Error.
3) The results show that for both clear water and live bed scour conditions, the Richardson equation generally provides the most reasonable estimates of scour depth compared to other common methods, according to the statistical test values.
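The statistical tests named above can be sketched as follows (a generic illustration; the sample numbers are made up, not data from the study, and the Theil formulation shown is one common form of the inequality coefficient):

```python
import numpy as np

def validation_metrics(predicted, observed):
    """Error statistics of the kind used to rank scour-depth equations
    against laboratory data."""
    p, o = np.asarray(predicted, float), np.asarray(observed, float)
    err = p - o
    mae = np.mean(np.abs(err))                        # Mean Absolute Error
    rmse = np.sqrt(np.mean(err**2))                   # Root Mean Square Error
    # One common form of Theil's inequality coefficient (0 = perfect fit)
    theil = rmse / (np.sqrt(np.mean(p**2)) + np.sqrt(np.mean(o**2)))
    return {"MAE": mae, "RMSE": rmse, "Theil": theil}

# Hypothetical scour-depth predictions vs. measured values (metres)
m = validation_metrics([0.42, 0.55, 0.31], [0.40, 0.50, 0.35])
```

Ranking equations then amounts to comparing these statistics across candidate predictors on the same laboratory dataset.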
Development of Methodology for Determining Earth Work Volume Using Combined S... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed, online journal that serves as an international archival forum for scholarly research related to engineering and science education, covering all fields of engineering and science.
The document discusses several types of seismic velocity models including 1D layered models, community velocity models based on direct measurements, unified community models, and 3D tomography models derived from active and passive seismic data. It provides details on numerous global and regional reference models for the crust, mantle, and specific tectonic provinces.
This document describes a Kriging component for spatial interpolation of climatological variables in the OMS modeling framework. Kriging is a geostatistical technique that interpolates values based on measured data and the spatial autocorrelation between data points. The component implements ordinary and detrended Kriging algorithms using 10 semivariogram models. It can interpolate both raster and point data and outputs the interpolated climatological variable values. Links are provided for downloading the component code, data, and OMS project files needed to run the interpolation.
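Ordinary kriging, as in the component described above, predicts a value at an unsampled point as a weighted average of observations, with weights obtained by solving a linear system built from a semivariogram model of the spatial autocorrelation. A minimal sketch using a spherical semivariogram (the parameter defaults and function names are illustrative, not those of the OMS component):

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=50.0, nugget=0.0):
    """Ordinary-kriging prediction at point xy0 from observations z at
    locations xy, using a spherical semivariogram."""
    def gamma(h):
        h = np.asarray(h, float)
        g = np.where(h < rng,
                     nugget + (sill - nugget) * (1.5 * h / rng
                                                 - 0.5 * (h / rng) ** 3),
                     sill)
        return np.where(h == 0.0, 0.0, g)   # gamma(0) = 0 by definition

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))             # kriging system + Lagrange row
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)[:n]           # weights; they sum to one
    return float(w @ z)

pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
vals = np.array([5.0, 7.0, 6.0])
est = ordinary_kriging(pts, vals, np.array([2.0, 2.0]))
```

With a zero nugget, ordinary kriging is an exact interpolator: predicting at an observation location reproduces the observed value.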
Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+,
and EO-1 ALI sensors
Gyanesh Chander a,⁎, Brian L. Markham b, Dennis L. Helder c
a SGT, Inc., contractor to the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center, Sioux Falls, SD 57198-0001, USA
b National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), Greenbelt, MD 20771, USA
c South Dakota State University (SDSU), Brookings, SD 57007, USA
ANALYSIS OF VORTEX INDUCED VIBRATION USING IFS (IJCI JOURNAL)
Interaction of fluid structure (IFS) is one of the emerging fields in the calculation and simulation of multiphysics problems. IFS plays an important role in calculating offshore-structure deformations caused by vortex-induced loads. The complex nature of the fluid's interaction with solid geometries poses difficulties for the analysis, but the IFS analysis technique overcomes these challenges. In this paper, the analysis considers a cylindrical member similar to part of an offshore platform, using the commercial package ANSYS 14.0. Vortex-induced load simulation with IFS is strongly mesh-dependent, so many cases must be simulated to obtain the optimum grid size. A computational fluid dynamics (CFD) analysis of a two-dimensional model was performed, and the results were validated against literature findings. CFD analysis was then performed on an extruded version of the two-dimensional mesh, and the results were compared with the previously obtained two-dimensional results. A preliminary IFS analysis was carried out by coupling the structural and fluid solvers at small time steps, and the dynamic response of the structural member to the periodically varying vortex-induced vibration (VIV) loads was observed and studied.
The document provides an overview of principles of seismic data interpretation. It discusses fundamentals of seismic acquisition and processing such as seismic response, phase, polarity, reflections, and resolution. It also covers topics like structural interpretation pitfalls, seismic interpretation workflows involving building databases and time-depth relationships, and structural styles. The document includes sections on depth conversion, subsurface mapping techniques, and different types of velocities.
In the first part of the talk, we will present a sensitivity analysis of a novel sea ice model. neXtSIM is a continuous Lagrangian numerical model that uses an elastobrittle rheology to simulate the ice response to external forces. The response of the model is evaluated in terms of simulated ice drift distances from its initial position and from the mean position of the ensemble. The simulated ice drift is decomposed into advective and diffusive parts that are characterized separately, both spatially and temporally, and compared to what is obtained with a free-drift model, i.e. when the ice rheology does not play any role. Overall, the large-scale response of neXtSIM is correlated with the ice thickness and wind velocity fields, while the free-drift model response is mostly correlated with the wind velocity pattern only. The seasonal variability of the model sensitivity shows the role of ice compactness and rheology at both local and Arctic scales. Indeed, the ice drift simulated by neXtSIM in summer is close to the free-drift model, while the more compact and solid ice pack shows a significantly different mechanical and drift behavior in winter. In contrast to the free-drift model, neXtSIM reproduces the sea ice Lagrangian diffusion regimes found in observed trajectories. The forecast capability of neXtSIM is also evaluated using a large set of real buoy trajectories. We found that neXtSIM performs better in simulating sea ice drift, both in terms of forecast error and as a tool to assist search-and-rescue operations. Adaptive meshes, such as the one used in neXtSIM, are used to model a wide variety of physical phenomena. Some of these models, in particular those of sea ice movement, use a remeshing process to remove and insert mesh points at various points in their evolution.
This represents a challenge in developing compatible data assimilation schemes, as the dimension of the state space we wish to estimate can change over time when these remeshings occur.
In the second part of the talk, we highlight the challenges that such a modeling framework represents for data assimilation setup. We then describe a remeshing scheme for an adaptive mesh in one dimension. The development of advanced data assimilation methods that are appropriate for such a moving and remeshed grid is presented. Finally we discuss the extension of these techniques to two-dimensional models, like neXtSIM.
A COMPARATIVE STUDY OF VARIOUS METHODS TO EVALUATE IMPEDANCE FUNCTION FOR SHA... (Samirsinh Parmar)
Keywords: impedance function, foundation vibration, dynamic soil-structure interaction; methods of Barkan, Dominguez, Dobry, and Gazetas for the evaluation of impedance functions for various modes of vibration of shallow foundations.
Greetings all,
Nowadays, several datasets are, or will soon be, available to improve operational forecasting in most aspects, such as ocean-dynamics modeling and assimilation efficiency, which now aims to optimize the combination of in situ temperature/salinity profiles, drifter velocities, and sea surface height deduced from altimeter data and the GRACE or future Goce geoid. These datasets will also strengthen forecasting-system applications, such as climate monitoring. For all these issues, optimal use of ocean data, always too sparse and too few, is mandatory.
Such studies are at the heart of this Newsletter issue. It begins with a review by Rio M.H. and Hernandez F. of the Goce mission, dedicated to focusing on and documenting the shortest scales of the Earth's gravity field; the Goce satellite is due to fly in December 2007. In the next article, Guinéhut S. and Larnicol G. investigate the influence of in situ temperature-profile sampling on the thermosteric sea-level estimate; they show that the impact is not negligible and can introduce large errors into the estimation. In the second article, Benkiran M. and Greiner E. evaluate the benefits of assimilating drifter velocities in the Mercator Océan 1/3° Tropical and North Atlantic operational system, and describe the assimilation-scheme upgrade that takes velocity control into account. Castruccio F. et al. describe in the third article the performance of an improved MDT reference for altimetric data assimilation, concentrating on the Tropical Pacific Ocean. Finally, the Newsletter ends with an article by Benkiran M., in which, based on the 1/3° Mercator system, the impact of several altimeters' data on assimilation performance is assessed.
Have a good read.
Quantitative and Qualitative Seismic Interpretation of Seismic Data (Haseeb Ahmed)
This document discusses quantitative and qualitative seismic interpretation techniques used to analyze seismic data and map subsurface geology. It compares traditional qualitative techniques to more modern quantitative techniques. It then focuses on unconventional seismic interpretation techniques used for unconventional reservoirs with low permeability, including AVO analysis, seismic inversion, seismic attributes, and forward seismic modeling. These techniques can help identify tight gas, shale gas, and gas hydrate reservoirs that conventional methods cannot easily detect. The document provides details on how each technique works and its advantages.
Wide aperture reflection refraction profiling uses wide-angle reflected and diving wave energy to develop velocity models of seismic sections. It exploits long offset data to observe diving waves and wide-angle reflections that penetrate deeper than conventional methods. The technique involves first break tomography to obtain an initial velocity model, which is then refined through iterative forward modeling and matching of observed and calculated arrival times and amplitudes.
2010 rock slope risk assessment based on geostructural anna... (Masagus Azizi)
The document describes a study on assessing rock slope stability along a highway in North Malaysia. Laser scanning and traditional surveying techniques were used to characterize discontinuities in eight rock slopes. Discontinuity orientations and positions were derived from laser scanning point clouds. Stability analyses using key block analysis identified potential failure mechanisms. A relative hazard index was developed based on slope geometry, stability, water presence, and protections to assess hazard levels and inform mitigation recommendations. The study provides a methodology for integrating advanced scanning with traditional surveys to evaluate rock slope stability.
Numerical Investigation of Turbulent Flow over a Rotating Circular Cylinder u... (IJERA Editor)
Recent advancements in computational fluid mechanics, and the availability of high-performance computing for rotating cylinders (RCs), have drawn attention to the field of flow-accelerated corrosion (FAC). The present study aims to numerically predict the turbulent flow characteristics around a rotating cylinder and the concomitant effects on wall shear stresses and on the local mass fraction of inhibitors, which are directly related to corrosion rate. This 3-D numerical investigation was carried out using the commercial CFX package, with the SST turbulence model selected to compute the unknown Reynolds-stress term in the incompressible, viscous form of the Navier-Stokes equations. The effect of three different cylinder rotation speeds and three brine temperatures on wall shear stress and brine mixing is reported. The simulations revealed that both cylinder rotation speed and brine temperature significantly affect wall shear stress and inhibitor mixing, which in turn affect corrosion rate.
NASA Overcoming the 50-Year Ban: History of the Ban on Commercial Supersonic F... (Dr. Pankaj Dhussa)
Similar to NISAR Solid Earth Sciences Algorithm Theoretical Basis and Validation Plan
2 method based velocities are more reliable than CVS based velocities for the
subsequent velocity analysis purposes.
Fracture prediction using low coverage seismic data in area of complicated st...Mario Prince
This document presents a workflow for predicting fractures in a limestone reservoir using 3D seismic data with low fold coverage in an area with complicated structures in Colombia. Key steps included: 1) applying interpolation and azimuthal division to overcome data limitations, 2) performing PSTM on azimuthal volumes to maintain structure while enhancing image quality, and 3) using relative impedance attributes to detect anisotropy and predict fracture orientation and intensity, with two dominant orientations identified. Comparison to well data showed excellent agreement between seismic-derived and FMI-measured fracture orientations, validating the technique for reliable fracture prediction with low coverage seismic data.
Delineating faults using multi-trace seismic attributes: Example from offshor...iosrjce
1) The document describes a workflow for delineating faults in a 3D seismic dataset from offshore Niger Delta using multi-trace seismic attributes.
2) Dip-steering and multi-trace similarity attributes were computed to highlight discontinuities and improve fault detection. This revealed a major NE-SW trending strike-slip fault separating compressional deformation to the north and extensional deformation to the south.
3) Vertical cross-sections show faults and fault zones are more clearly resolved when computing multi-trace similarity along structural dips using dip-steering, rather than directly from seismic reflectivity alone.
Estimation of bridge pier scour for clear water & live bed scour conditionIAEME Publication
1) The document analyzes and compares several equations for estimating bridge pier scour depth under clear water and live bed conditions.
2) Statistical tests are used to validate the equations against experimental laboratory data from previous studies. The tests analyzed include Theil's Coefficient, Mean Absolute Error, and Root Mean Square Error.
3) The results show that for both clear water and live bed scour conditions, the Richardson equation generally provides the most reasonable estimates of scour depth compared to other common methods, according to the statistical test values.
Development of Methodology for Determining Earth Work Volume Using Combined S...IJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment…. And many more.
The document discusses several types of seismic velocity models including 1D layered models, community velocity models based on direct measurements, unified community models, and 3D tomography models derived from active and passive seismic data. It provides details on numerous global and regional reference models for the crust, mantle, and specific tectonic provinces.
This document describes a Kriging component for spatial interpolation of climatological variables in the OMS modeling framework. Kriging is a geostatistical technique that interpolates values based on measured data and the spatial autocorrelation between data points. The component implements ordinary and detrended Kriging algorithms using 10 semivariogram models. It can interpolate both raster and point data and outputs the interpolated climatological variable values. Links are provided for downloading the component code, data, and OMS project files needed to run the interpolation.
Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+,
and EO-1 ALI sensors
Gyanesh Chander a,⁎, Brian L. Markham b, Dennis L. Helder c
a SGT, Inc. 1 contractor to the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center, Sioux Falls, SD 57198-0001, USA
b National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), Greenbelt, MD 20771, USA
c South Dakota State University (SDSU), Brookings, SD 57007, USA
Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+,
and EO-1 ALI sensors
Gyanesh Chander a,⁎, Brian L. Markham b, Dennis L. Helder c
a SGT, Inc. 1 contractor to the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center, Sioux Falls, SD 57198-0001, USA
b National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), Greenbelt, MD 20771, USA
c South Dakota State University (SDSU), Brookings, SD 57007, USA
ANALYSIS OF VORTEX INDUCED VIBRATION USING IFSIJCI JOURNAL
Interaction of fluid structure (IFS) is one of the upcoming field in calculation and simulation of multiphysics problems. IFS play an important role in calculating offshore structures deformations caused by the vortex induced loads. The complexity interaction nature of fluid around the solid geometries pose the difficulties in the analysis, but IFS analysis technique overshadow the challenges. In this paper, Analysis is done by considering a cylindrical member which is similar to the part of offshore platform. The IFS analysis is done by using the commercial package ANSYS 14.0. The Vortex induced loads simulation with IFS is purely a mesh dependent, for that we have to simulate many problems for getting optimum grid size. Computational Fluid Dynamics (CFD) analysis of a two dimensional model have been done and the obtained results were validated with the literature findings. CFD analysis is performed on the extruded version of the two dimensional mesh and the results were compared with the previously obtained two dimensional results. Preliminary IFS analysis is done by coupling the structural and fluid solvers together at smaller time steps and the dynamic response of the structural member to the periodically varying Vortex induced vibrations (VIV) loads were observed and studied.
The document provides an overview of principles of seismic data interpretation. It discusses fundamentals of seismic acquisition and processing such as seismic response, phase, polarity, reflections, and resolution. It also covers topics like structural interpretation pitfalls, seismic interpretation workflows involving building databases and time-depth relationships, and structural styles. The document includes sections on depth conversion, subsurface mapping techniques, and different types of velocities.
In the first part of the talk, we will present a sensitivity analysis of a novel sea ice model. neXtSIM is a continuous Lagrangian numerical model that uses an elastobrittle rheology to simulate the ice response to external forces. The response of the model is evaluated in terms of simulated ice drift distances from its initial position and from the mean position of the ensemble. The simulated ice drift is decomposed into advective and diffusive parts that are characterized separately both spatially and temporally and compared to what is obtained with a free-drift model, i.e. when the ice rheology does not play any role. Overall the large-scale response of neXtSIM is correlated to the ice thickness and the wind velocity fields while the free-drift model response is mostly correlated to the wind velocity pattern only. The seasonal variability of the model sensitivity shows the role of the ice compactness and rheology at both local and Arctic scales. Indeed, the ice drift simulated by neXtSIM in summer is close to the free-drift model, while the more compact and solid ice pack is showing a significantly different mechanical and drift behavior in winter. In contrast of the free-drift model, neXtSIM reproduces the sea ice Lagrangian diffusion regimes as found from observed trajectories. The forecast capability of neXtSIM is also evaluated using a large set of real buoy’s trajectories. We found that neXtSIM performs better in simulating sea ice drift, both in terms of forecast error and as a tool to assist search-and-rescue operations. Adaptive meshes, as the one used in neXtSIM, are used to model a wide variety of physical phenomena. Some of these models, in particular those of sea ice movement, use a remeshing process to remove and insert mesh points at various points in their evolution. 
This represents a challenge in developing compatible data assimilation schemes, as the dimension of the state space we wish to estimate can change over time when these remeshings occur.
In the second part of the talk, we highlight the challenges that such a modeling framework represents for data assimilation setup. We then describe a remeshing scheme for an adaptive mesh in one dimension. The development of advanced data assimilation methods that are appropriate for such a moving and remeshed grid is presented. Finally we discuss the extension of these techniques to two-dimensional models, like neXtSIM.
A COMPARATIVE STUDY OF VARIOUS METHODS TO EVALUATE IMPEDANCE FUNCTION FOR SHA...Samirsinh Parmar
Impedance function, Foundation Vibration, dynamic soil-structure interaction, Barkan, Dominguez, Dobry and Gazetas for evaluation of impedance functions for various modes of vibration of shallow foundation
Greetings all,
Nowadays, several datasets are -or will be- available in a near future to improve operational forecasting in most aspects, like the
ocean dynamics modeling, and the assimilation efficiency, that aims now to optimize the combination of temperature/salinity in
situ profiles, drifter's velocities, and sea surface height deduce from altimeter's data and GRACE or future Goce geoid. But also
strengthen forecasting system's applications, like the climate monitoring. For all these issues, an optimal use of ocean data,
always too sparse and not enough numerous, is mandatory.
Such studies are at the heart of this Newsletter issue. It begins with a Rio M.H. and Hernandez F. review of the Goce Mission,
dedicated to focus and document the shortest scales of the Earth's gravity field. Goce satellite is due to fly in December 2007.
With the next article Guinéhut S. and Larnicol G. investigate the influence of the in situ temperature profiles sampling on the
thermosteric sea level estimation. They show that the impact is not negligible, and can introduce large errors in the estimation. In
the second article, Benkiran M. and Greiner E. are evaluating the benefits of the drifter's velocities assimilation in the Mercator
Océan 1/3° Tropical and North Atlantic operational system. A description of the assimilation scheme upgrade to take into account
velocity control is given. Castruccio F. & al. describe in the third article the performance of an improved MDT reference for
altimetric data assimilation. They concentrate their study on the Tropical Pacific Ocean. Finally, the Newsletter comes to an end
with the Benkiran M. article. In his study, based on the 1/3° Mercator system, the impact of several altimeters data on the
assimilation performance is assessed
Have a good read
Quantitative and Qualitative Seismic Interpretation of Seismic Data Haseeb Ahmed
This document discusses quantitative and qualitative seismic interpretation techniques used to analyze seismic data and map subsurface geology. It compares traditional qualitative techniques to more modern quantitative techniques. It then focuses on unconventional seismic interpretation techniques used for unconventional reservoirs with low permeability, including AVO analysis, seismic inversion, seismic attributes, and forward seismic modeling. These techniques can help identify tight gas, shale gas, and gas hydrate reservoirs that conventional methods cannot easily detect. The document provides details on how each technique works and its advantages.
Wide aperture reflection refraction profiling uses wide-angle reflected and diving wave energy to develop velocity models of seismic sections. It exploits long offset data to observe diving waves and wide-angle reflections that penetrate deeper than conventional methods. The technique involves first break tomography to obtain an initial velocity model, which is then refined through iterative forward modeling and matching of observed and calculated arrival times and amplitudes.
2010 rock slope risk assesment based on geostructural annaMasagus Azizi
The document describes a study on assessing rock slope stability along a highway in North Malaysia. Laser scanning and traditional surveying techniques were used to characterize discontinuities in eight rock slopes. Discontinuity orientations and positions were derived from laser scanning point clouds. Stability analyses using key block analysis identified potential failure mechanisms. A relative hazard index was developed based on slope geometry, stability, water presence, and protections to assess hazard levels and inform mitigation recommendations. The study provides a methodology for integrating advanced scanning with traditional surveys to evaluate rock slope stability.
Numerical Investigation of Turbulent Flow over a Rotating Circular Cylinder u...IJERA Editor
Recent advancements in the field of computational fluid mechanics and the availability of high performance with regard to rotating software computing cylinders (RCs) have drawn attention to the field of flow accelerated corrosion. (FAC). Current studies aim to numerically predict turbulent flow characteristics around the rotating cylinder and the concomitant effects on the wall shear stresses and local mass fraction of inhibitors that are directly related to corrosion rate. This 3-D numerical investigation was carried out using the commercial CFX package from which the where SST turbulence model was selected to compute the unknown Reynolds stresses term in the incompressible and viscid form of the Navier-Stokes equation. The effect of three different cylinder rotation speeds and three brine temperatures on the wall shear stress and on brine mixing is reported. Results of the simulations revealed that both cylinder rotation speed and the temperature of the brine significantly affect wall shear stress and mixing of the inhibitor that in turn affects corrosion rate
Similar to NISAR Solid Earth Sciences Algorithm Theoretical Basis and Validation Plan (20)
NASA Overcoming the 50-Year Ban History of the Ban on Commercial Supersonic F...Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA Overcoming the 50-Year Ban
History of the Ban on Commercial Supersonic Flight Over Land
By
Dr. Pankaj Dhussa
NASA Carbon Fiber Composites (CFC) Recycling Technology Enabled by the TuFF T...Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA Carbon Fiber Composites (CFC) Recycling
Technology Enabled by the TuFF Technology
By
Dr. Pankaj Dhussa
NASA Urban Air Mobility Fatigue Prediction Aaron Crawford – NASA ULI Innovati...Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA Urban Air Mobility Fatigue Prediction
Aaron Crawford – NASA ULI Innovative Manufacturing, Operation, and Certification of Advanced
By
Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA Multiscale Analysis Tool (NASMAT)
Robust, Integrated, Physics-based, Non-linear, Variable Fidelity Modeling of
Multi-phased Materials and Structures
By
Dr. Pankaj Dhussa
NASA Aviary Open-source software tool for optimizing next-generation aircraft...Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA Aviary
Open-source software tool for optimizing next-generation aircraft designs
By
Dr. Pankaj Dhussa
NASA Aircraft Certification by Analysis (CbA) 20-year Vision for Virtual Flig...Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA Aircraft Certification by Analysis (CbA)
20-year Vision for Virtual Flight Testing
By
Dr. Pankaj Dhussa
NASA DECO CAS Research into a New Operating Mode for Digitally Enabled Cooper...Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA DECO
CAS Research into a New Operating Mode for Digitally Enabled Cooperative Operations (DECO)
By
Dr. Pankaj Dhussa
NASA Advanced Exploration of Reliable Operation at low Altitudes: meteorology...Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA Advanced Exploration of Reliable Operation at low Altitudes: meteorology, Simulation,
and Technology (AEROcAST)
By
Dr. Pankaj Dhussa
NASA Self-Adaptation of Loosely Coupled Systems across a System of Small Uncr...Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA Self-Adaptation of Loosely Coupled Systems across a
System of Small Uncrewed Aerial Vehicles
By
Dr. Pankaj Dhussa
NASA RealWindDroneSim (RWDS) Enhancing Simulation Fidelity of small Uncrewed ...Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA RealWindDroneSim (RWDS)
Enhancing Simulation Fidelity of small Uncrewed Aerial Vehicles in Windy Conditions
By
Dr. Pankaj Dhussa
NASA ARMD Test Data Portal (ATDP) Agile Development Method Introduction of an...Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA ARMD Test Data Portal (ATDP) Agile Development Method
Introduction of an Agile Systems Engineering Process to the
NASA Armstrong Flight Research Center
By
Dr. Pankaj Dhussa
NASA ARMD Test Data Portal (ATDP) he ARMD Test Data Portal (ATDP) is a secure...Dr. Pankaj Dhussa
NASA
National Aeronautics and Space Administration
NASA The ARMD Test Data Portal (ATDP) is a secure data archive that allows NASA personnel to remotely upload and register ARMD test data, as well as to quickly search, access, extract, time slice, and download test data for further analysis.
Associated metadata [ATDP Metadata Specification (AMS)] is compliant with government and industry metadata standards and requirements, and provides capabilities for cataloging, search, and data discovery.
By
Dr. Pankaj Dhussa
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
A Comprehensive Guide to DeFi Development Services in 2024Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
The NISAR Algorithm Theoretical Basis Documents (ATBDs) provide the physical and
mathematical descriptions of algorithms used in the generation of NISAR science data products.
The ATBDs include descriptions of variance and uncertainty estimates and considerations of
calibration and validation, exception control and diagnostics. Internal and external data flows are
also described.
Table of Contents
1 INTRODUCTION
1.1 Motivation: Coseismic Displacements
1.2 Motivation: Secular Velocities
1.3 Motivation: Transient Deformation
2 THEORETICAL BASIS OF ALGORITHM
2.1 Requirements
2.2 Approach to validating the L2 requirements
2.2.1 L2 Requirement 658 - Secular Deformation Rate
2.2.2 L2 Requirement 660 - Coseismic Displacements
2.2.3 L2 Requirement 663 - Transient Displacements
2.3 Generalized Time Series Analysis
3 IMPLEMENTED APPROACH FOR GENERATING THE L3 DATA PRODUCTS
3.1 Generation of time series from sets of interferograms
3.1.1 Stack preparation
3.1.2 Timeseries estimation and parameterization
3.1.3 Optional Corrections
4 APPROACH FOR VALIDATING L3 DATA PRODUCTS
4.1 Decomposition of InSAR time series into basis functions
4.2 NISAR Validation Procedure
5 ASSUMPTIONS AND REQUIRED INPUT
6 PLANNED DELIVERABLES
7 EXPECTED ACTIVITIES IN PHASE C
8 REFERENCES
1 INTRODUCTION
This document is the Algorithm Theoretical Basis Document (ATBD) for the Level 3
NISAR data products necessary to validate the Solid Earth Science (SES) L2
requirements.
NISAR will measure ground displacements over time through the well-established
technique of repeat-pass interferometry (e.g., Goldstein et al., 1993; Massonnet et al.,
1993; Zebker et al., 1994; Bamler and Hartl, 1998; Massonnet and Feigl, 1998;
Bürgmann et al., 2000; Rosen et al., 2000; Hanssen, 2001; Simons and Rosen, 2007),
wherein radar images acquired from nearly the same vantage point over time are
compared interferometrically, with the phase difference between images representing (in
part) measures of ground displacement. The NISAR Project will focus on producing
Level 1 and 2 products for distribution to the science community, using a common set of
algorithms.
NISAR will acquire near-global coverage of the surface of the solid Earth, including
many areas known to be less active with regard to tectonic processes, but where other
active deformation is occurring. For example, studies as diverse as energy and water
resources, degradation of permafrost in the arctic, and remote assessment of the national
transportation infrastructure will benefit from high resolution maps of surface
displacements with time. Some, but not all of these targets are included in the list of 2000
transient targets (see section 1.3). Rather than devising individual calibration and
validation exercises for every potential application, we note that the observation of these
various motions can be considered equivalent to comparable measurements in the solid
Earth science context, and so it suffices to validate the instrument performance using
three specific data products describing tectonic displacements. We describe these in
detail here and make only passing reference to their broader applicability.
This ATBD describes the algorithms used to generate the necessary SES L3 products that
the project will generate specifically for the purpose of validating SES L2 requirements
658, 660 and 663 (Section 3). While described quantitatively later, these three
requirements encapsulate measurement needs for measuring coseismic ground offsets,
steady secular movement, and transient deformation. The ATBD also summarizes the
algorithms that will be used for the validation of these L3 products (Section 4). Before
further describing the requirements and the associated algorithms that we will use to
produce and eventually validate NISAR’s ability to satisfy the L2 requirements, we begin
with an overview of the motivation for each of these requirements.
1.1 Motivation: Coseismic Displacements
The first requirement addresses the ability of NISAR to map large, essentially
instantaneous displacements at the time of an earthquake using a small number of
repeated interferograms. This requirement also covers other processes which result in
surface displacements with a characteristic time shorter than the satellite revisit time.
Measuring displacements associated with earthquakes is essential for describing which
parts of a fault have ruptured and which have not (but may have been brought closer to
failure) and for constraining estimates of the distribution of fault slip in the subsurface.
Beyond what can be done solely with InSAR data, estimates of other rupture
characteristics such as the speed at which rupture propagates along the fault and the rate
at which slip occurs at a given point on the fault are also best constrained by combining
coseismic displacement information such as will be provided by NISAR with seismic
data, as opposed to using seismic data alone (e.g., Pritchard et al., 2006; 2007; Duputel
et al., 2015). These estimates of fault slip parameters then provide key input into
mechanical models of faults and the surrounding crust and upper mantle, estimates of
stress change on neighboring faults, and inform our basic understanding of regional
seismic hazards.
It is generally important to have sensitivity to both vertical and horizontal displacements
to constrain strike and dip components of the slip vector. By measuring at least two
components of the relative displacement vector (through ascending and descending
acquisitions), NISAR will achieve this sensitivity to vertical and horizontal displacements
within the context of a physical model.
1.2 Motivation: Secular Velocities
Measurements of secular velocities play an essential role in our understanding of
fundamental processes associated with tectonic deformation and postglacial rebound,
along with many other gradual and steady processes that deform the Earth’s surface.
Secular velocities in the context of NISAR also refer to approximately constant velocity
fields for other applications with time scales exceeding the operational lifetime of the
mission.
Tectonic deformation: Measurements of secular velocities within tectonic plate boundary
regions place important constraints on models of deformation, including our basic
understanding of fault physics, and thereby contribute to estimates of long-term seismic
hazard. The average rate of strain accumulation or long term seismic potential of a fault is
proportional to the product of the rate at which the fault is loaded by motions of the plates
(i.e., the long term slip rate), and the area of the fault that is locked in the times between
large earthquakes. The average slip rate is typically equivalent to the relative velocity
across the fault measured over a 50 km distance perpendicular to the fault. The area of
the fault that is locked can be estimated using the velocity gradient over shorter distances.
At least two of the three components of the vector field of relative velocity are needed to
distinguish the strike and the dip components of the slip vector on the fault and to infer
along-strike variations in locking. NISAR will image most of the subaerial portions of
Earth’s plate boundary zones, allowing sampling of the range of different tectonic styles,
capturing plate boundaries at different stages of the earthquake cycle, and informing
regional assessments of seismic hazard.
Postglacial rebound: The Earth is continuously readjusting to the redistribution of water and
ice masses associated with the retreat of the Pleistocene ice sheets and ongoing melting of
remaining glaciers, ice caps, and ice sheets. The readjustment, also known as “glacial-
isostatic adjustment” (GIA), includes both an instantaneous elastic response to any
changes in load as well as an ongoing viscoelastic response associated with past changes
in ice loading, from the most recent to thousands of years ago. The resulting surface
deformation from glacial-isostatic adjustment has important implications for our ability to
predict sea level rise (SLR). Improving this understanding depends upon the history of
ice loading, with competing models of ice loading predicting vector velocities that differ
by 2 mm/yr over 50 km length scales. Accurate SLR predictions are also tied to our
understanding of the rheological (viscoelastic) structure of the mantle, with different
structural models predicting different patterns of surface deformation.
For both of the above applications, it is generally important to make separate
measurements of the horizontal and vertical components of the vector displacement field.
NISAR will measure at least two components of the vector fields through ascending and
descending acquisitions.
Other longer time-scale processes: For some studies, characterizing slower and relatively
constant velocity fields as separate from time-variable velocities is key to characterizing
the physics of the underlying phenomenon. One example of this arises in measurements
of displacement over aquifers, in which it is necessary to separate the inelastic subsidence
that permanently reduces the storage capacity of an aquifer from the annual
subsidence/inflation due to water use patterns. Since proper management of the aquifer
system depends on maintaining the long term storage of the system, NISAR must be able
to resolve these components. This measurement is equivalent to the need for secular
velocities, except that the horizontal component of displacement is small and so the
emphasis needs to be on accurate determination of the vertical component.
1.3 Motivation: Transient Deformation
Detecting and quantifying transient deformation plays an essential role in improving our
understanding of fundamental processes associated with tectonics, subsurface movement
of magma and volcanic eruptions, landslides, response to changing surface loads and a
wide variety of anthropogenic phenomena. Aseismic and postseismic fault slip
transients, volcanic and landslide deformation, and local subsidence and uplift due to
migration of crustal fluids, occur globally over a large range of temporal (sub-daily to
multi-year) and spatial (tens of meters to hundreds of kilometers) scales. We are targeting observations from
nearly 2000 sites to cover the catalogue of approximately 1400 active volcanoes,
postseismic deformation from dozens of earthquakes, and to obtain a representative
sample of landslides, groundwater aquifers, hydrocarbon reservoirs, and geothermal
reservoirs, as well as the rapid response to changing surface loads such as melting
glaciers.
Anthropogenic and climate-sensitive signals both require adequate sampling to resolve
annual or shorter components of the time history of displacement. Annual cycles often
result from water withdrawal and recharge in aquifer systems, or from climate induced
patterns such as the freezing and thawing of the active layer overlying permafrost in the
arctic and subarctic regions. NISAR-targeted sites include those with human-induced
deformations such as oil/gas extraction and geothermal energy production, which can
vary over arbitrary time scales. Observations of these transients are best supported by
resolving deformation at the shortest time scale possible, since it is not known in advance
how a particular phenomenon might evolve. In the context of NISAR, we will be able to
uniquely constrain the temporal evolution of transient displacements in the line of sight
direction at time scales of twice the repeat acquisition time in a given overflight direction
(ascending or descending). The effective temporal resolution for obtaining two-
component displacements will be determined by the repeat timing of both ascending and
descending passes over the area of interest.
The detection and quantification of transients is perhaps the most exciting frontier of
solid earth deformation science at present. The greatest challenge to robust detection of
transient deformation is the unknown temporal and spatial behavior of many of the
associated processes and the need to isolate transient signals in the presence of other
processes such as discrete events (e.g., earthquakes), quasiperiodic seasonal signals
(which may not necessarily be a pure single annual sinusoid), and secular trends. Any
transient phenomenon can be considered to be a specific case of the general transient
problem, so that addressing the needs of the SES community largely ensures the
applicability of the instrument to a wider audience.
2 THEORETICAL BASIS OF ALGORITHM
We describe here the three SES requirements that will need to be validated and the
underlying physical (forward) model that is assumed for validating these requirements.
As noted above, satisfying these three specific criteria will ensure that NISAR can be
applied to many other applications covering the spectrum of crustal deformation
phenomena.
2.1 Requirements
The three primary NISAR Solid Earth L2 requirements that drive the L3 products needed
for calibration and validation are:
L2 Requirement 658 - Secular Deformation Rates: Over three years, the NISAR
project shall measure at least two components of the spatially and temporally averaged
relative vector velocities over active regions of Earth’s land surface with accuracy of 2
mm/yr or better, over length scales 0.1 km < L < 50 km, over 70% of these regions. Here,
active regions are defined as areas where relative displacement rates are expected to be 1
mm/yr or greater over distances of 50 km, and the temporal average is the time-
covariance-weighted average of all individual displacements used over the full three
years of observations to form the result.
L2 Requirement 660 - Coseismic Deformation: Over three years, the NISAR project
shall measure at least two vector components of the point-to-point vector coseismic
displacements of at least 80% of regions where earthquakes with sufficient magnitude to
generate surface displacements of 100 mm or greater occur, with accuracy of 4(1+L^1/2)
mm or better, over length scales 0.1 km < L < 50 km, at 100 m spatial resolution over at
least 70% of these regions.
L2 Requirement 663 - Deformation Transients: The NISAR project shall measure at
least two components of the point-to-point vector displacements over at least 70% of
targeted sites with accuracy of 3(1+ L^1/2) mm or better, over length scales 0.1 km < L <
50 km, at 100 m resolution, and over 12-day time scales. Here, target sites include all
active volcanoes above sea-level, regions surrounding earthquakes where postseismic
deformation is expected, areas of rapid glacial mass changes, selected deforming
reservoirs of water, oil, gas, CO2 and steam, and landslide-prone areas, as well as sites
where selected disaster-related events have occurred.
2.2 Approach to validating the L2 requirements
We use two separate approaches for validating the NISAR solid earth L2 requirements,
both of which require the generation of a standard set of NISAR L3 data products
consisting of surface displacement time series for selected areas that sample a range of
vegetation types, topographic relief, and strain rates. Generation of these products, as
discussed in Section 3, requires a set of temporally contiguous/overlapping SAR
interferograms over all time periods of interest (see description of inputs and potential
preprocessing steps in Sections 3 and 5).
In the first approach, we compare InSAR-derived surface displacements with point
observations of surface motion from collocated continuous GPS/GNSS stations (we will
use GPS and continuous GPS, or cGPS, interchangeably in this document). Since all
requirements are written in terms of relative displacements (sampling the deformation
field at individual points), comparisons are done on the differences of observed surface
motion (from both InSAR and GPS) between GPS station locations within the scene. For
a GPS station network of N stations, this will yield N(N-1)/2 distinct observations for
comparison, distributed across a range of length scales. As we discuss below, the
methodology differs slightly depending on whether we perform our comparison directly on
interferograms (Requirement 663) versus basis functions derived from sets of
interferograms (Requirements 658 and 660), but the underlying premise is the same: that GPS
provides a sufficiently high-quality time series to validate InSAR observations. This
approach is appropriate where measurable displacement is occurring across the calval
region and the GPS/GNSS network is sufficiently dense to capture most of the expected
spatial variability of the signal.
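The station-pair bookkeeping described above (N(N-1)/2 differences, restricted to the 0.1-50 km length scales in the requirements) can be sketched in a few lines. This is an illustrative helper under our own simplifying assumptions (planar station coordinates in km; function and variable names are ours, not NISAR code):

```python
from itertools import combinations

def pairwise_relative_motion(values, coords, min_sep_km=0.1, max_sep_km=50.0):
    """Form station-pair differences of a scalar observable (e.g., LOS velocity).

    values: dict mapping station name -> observable (e.g., mm/yr)
    coords: dict mapping station name -> (x_km, y_km) planar position
    Returns a list of ((sta_a, sta_b), separation_km, value_a - value_b)
    for every pair whose separation falls in [min_sep_km, max_sep_km].
    """
    pairs = []
    for a, b in combinations(sorted(values), 2):
        dx = coords[a][0] - coords[b][0]
        dy = coords[a][1] - coords[b][1]
        sep = (dx * dx + dy * dy) ** 0.5
        if min_sep_km <= sep <= max_sep_km:
            pairs.append(((a, b), sep, values[a] - values[b]))
    return pairs
```

A network of N stations yields at most N(N-1)/2 usable pairs, distributed across a range of length scales.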
In the second approach, which is appropriate for negligibly deforming regions, we
examine the autocorrelation of noise in NISAR interferograms without comparison to
GPS/GNSS, under the assumption that surface deformation is essentially zero at all
relevant spatial scales. This method involves differencing InSAR displacement
observations between a large set of randomly chosen pixel pairs and confirming that the
estimates are statistically consistent with there being no deformation within the scene.
2.2.1 L2 Requirement 658 - Secular Deformation Rate
To validate relative secular deformation rates (or velocities) from NISAR, we use Line-
of-Sight (LOS) velocity data for each pixel in a target region. We generate separate LOS
velocities for ascending and descending passes to meet the requirement for two
components of motion over each target location. Although the requirement specifies that
the validation span 3 years of data, we can perform the validation for periods shorter than
3 years provided we mitigate annual effects by using data that span multiples of 1 year, or
by explicitly modeling and removing the seasonal displacements. The relative vector
velocity between any two points in the scene will be taken as the difference in the LOS
velocity at those points.
In validation approach #1, we use the LOS velocity product to calculate the relative
InSAR velocity between each pair of GPS stations within the SAR footprint that are less
than 50 km apart. For subsequent comparison, we generate the accompanying GPS
velocity differences by taking the 3-component GPS position time series, projecting them
into the InSAR LOS direction, estimating the GPS LOS velocities, and differencing the
GPS LOS velocities between all station pairs. To test NISAR’s fulfillment of the 2
mm/y specification, we difference the InSAR and GPS relative velocity estimates for
each pair, calculate the mean and standard deviation of all residuals, and perform a t-test
to check whether the mean error is statistically consistent with a value ≤ 2 mm/y.
Validation approach #2 is identical to approach #1 except that the relative velocities are
determined for random pairs of InSAR pixels within a scene, and the statistics are
calculated directly from the InSAR estimates. The calval regions to be used for both
approaches will be defined by the NISAR Science Team and listed in the NISAR calval
document.
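The LOS projection and residual test described above might look like the following sketch. This is a deliberately simplified illustration (function names are ours; an operational test would propagate full GPS and InSAR uncertainties and compare against the appropriate Student-t quantile rather than a bare t statistic):

```python
import math

def los_velocity(v_enu, unit_los):
    """Project a 3-component (east, north, up) GPS velocity onto the radar
    line-of-sight unit vector, giving the LOS velocity in the same units."""
    return sum(v * u for v, u in zip(v_enu, unit_los))

def mean_residual_t_stat(residuals, threshold=2.0):
    """Given InSAR-minus-GPS relative-velocity residuals (mm/yr) over station
    pairs, return (mean, t), where t tests H0: mean error <= threshold mm/yr."""
    n = len(residuals)
    mean = sum(residuals) / n
    var = sum((r - mean) ** 2 for r in residuals) / (n - 1)  # sample variance
    t = (mean - threshold) / math.sqrt(var / n)
    return mean, t
```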
2.2.2 L2 Requirement 660 - Coseismic Displacements
To validate NISAR’s ability to recover relative coseismic displacements of 100 mm and
larger within a scene, we estimate step functions in surface displacements at the time of
the earthquake from the InSAR and GPS time series. The simplest version of the InSAR
estimate is a coseismic interferogram spanning the earthquake, assuming negligible
postseismic deformation. Greater accuracy can be obtained by modeling the time series
using appropriate basis functions (e.g. a secular displacement rate, a Heaviside time
function at the time of the earthquake, and an exponential postseismic response) and
using the offset thus obtained. A similar analysis can be done for the GPS time series.
This is the methodology we implement here.
In validation approach #1, we calculate the relative displacements between each pair of
GPS stations within the SAR footprint and less than 50 km apart. To do the comparison,
we estimate the GPS coseismic displacements by estimating the amplitude of a Heaviside
basis function at the time of the earthquake for the 3-component GPS positions, and the
InSAR displacements in the same way. The GPS 3-component displacements are then
projected into the InSAR line of sight and differenced to obtain the relative GPS
displacements between all station pairs. To test NISAR’s fulfillment of the 4(1+L^1/2)
mm specification, we difference the InSAR and GPS relative displacement estimates for
each pair of GPS station locations, calculate the distance L between stations, calculate the
mean and standard deviation of all residuals, and perform a t-test to check whether the
mean error is statistically less than 4(1+L^1/2) mm over length scales 0.1 km < L < 50
km (e.g. ≤ 5 mm at 0.1 km and ≤ 32 mm at 50 km).
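The length-dependent accuracy specification reduces to a one-line function (the helper name and range check are ours). Evaluating 4(1+L^1/2) mm gives roughly 5 mm at L = 0.1 km and roughly 32 mm at L = 50 km, matching the examples above; coeff=3.0 gives the analogous Requirement 663 threshold:

```python
import math

def accuracy_threshold_mm(L_km, coeff=4.0):
    """Accuracy specification coeff*(1 + sqrt(L)) mm over 0.1 km < L < 50 km.

    coeff=4.0 corresponds to Requirement 660 (coseismic displacements);
    coeff=3.0 corresponds to Requirement 663 (transient displacements)."""
    if not 0.1 <= L_km <= 50.0:
        raise ValueError("length scale outside the 0.1-50 km requirement range")
    return coeff * (1.0 + math.sqrt(L_km))
```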
Validation approach #2 is similar to approach #1 except that the relative displacements
are determined for random pairs of InSAR pixels within a scene that does not include a
significant earthquake, and the statistics are calculated directly from the InSAR estimates.
All the SES requirements call for a minimum spatial coverage component. Validation of
this component will rely on a combination of assessing the coverage of basic InSAR-
quality data and ensuring that the required measurement accuracy is achieved in a suite of
selected but comprehensive regions. Many of these regions will be automatically
evaluated as part of the targeted sites for the transient deformation requirement.
2.2.3 L2 Requirement 663 - Transient Displacements
To validate the L2 requirements on transients, we will produce 12-day interferograms
from both descending and ascending tracks over diverse target sites where GPS
observations are available. The two components of vector displacement, ascending and
descending, will be validated separately.
For approach #1, we will use unwrapped interferograms at 100-m-resolution to produce
point-to-point relative LOS measurements (and their associated uncertainties) between
GPS sites. Position observations from the same set of GPS sites and at the InSAR
acquisition times will be projected into the LOS direction and differenced pairwise.
These will be compared to the point-to-point InSAR LOS measurements using a
methodology similar to that described in Section 2.2.2, except that the accuracy
specification is 3(1+ L^1/2) mm over 0.1 km < L < 50 km. To validate the noise in
individual interferograms in Approach #2, we will utilize interferograms over the set of
non-deforming sites discussed in Section 2.2.1. In practice, characterization of transient
deformation will usually be improved by examining longer time series of interferograms -
the approach described here validates the requirement that short timescale or temporally
complex transients can be characterized with a single interferogram.
Comprehensive validation requires transient sites possessing different deformation
characteristics (e.g., volcanoes, landslides, aquifers, hydrocarbons, etc.), vegetation
covers (forest, shrub, bare surface, etc), seasonality (leaf on/off, snow, etc.), and terrain
slopes. The NISAR Science Team will select a set of calval regions to be used for this
requirement and will list those sites in the NISAR calval document.
2.3 Generalized Time Series Analysis
The InSAR and cGPS comparisons described in Sections 2.2.1 and 2.2.2 will be
performed in the framework of generalized time series analysis (Section 3), whereby
information in each time series is characterized by one or more underlying basis
functions. The problem is cast as an overdetermined least squares (LSQ) estimation
problem, from which we infer parameters for the simultaneous fit of various components
to the time series, on a station-by-station or pixel-by-pixel basis. We describe our
implementation of this approach in Section 4.
These components (secular velocities, seasonal sinusoids, temporal
offsets, and postseismic exponential decay) represent much of the non-stochastic
variance in the time series and are well-suited to the specific validation targets. For
instance, for Requirement 658 (secular deformation) we will use the velocity component
of these fits, while for Requirement 660 (coseismic deformation) we will use the velocity,
Heaviside (instantaneous step), and exponential/logarithmic components. To perform the
validations, estimates of the fit parameters for these functions (rather than the raw time
series themselves) will be used for the statistical comparisons of InSAR and GPS
outlined in Section 2.2.
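The basis-function fit described here amounts to linear least squares against a design matrix whose columns are the chosen temporal functions. A minimal sketch (the column choices, decay time constant, and names are illustrative; the operational GIAnT parameterization may differ):

```python
import numpy as np

def design_matrix(t, t_eq=None, tau=1.0):
    """Build the temporal design matrix for one pixel or station.

    t: observation epochs in decimal years (numpy array).
    Columns: constant offset, secular rate, annual sine and cosine, and,
    if an earthquake time t_eq is given, a Heaviside step plus an
    exponential postseismic decay with time constant tau (years)."""
    cols = [np.ones_like(t), t,
            np.sin(2.0 * np.pi * t), np.cos(2.0 * np.pi * t)]
    if t_eq is not None:
        dt = np.clip(t - t_eq, 0.0, None)
        cols.append((t >= t_eq).astype(float))  # coseismic Heaviside step
        cols.append(1.0 - np.exp(-dt / tau))    # postseismic exponential decay
    return np.column_stack(cols)

# Fit per pixel: m, *_ = np.linalg.lstsq(design_matrix(t, t_eq), d, rcond=None)
```

For Requirement 658 the coefficient of the secular-rate column is used; for Requirement 660 the secular, Heaviside, and decay coefficients are used.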
3 IMPLEMENTED APPROACH FOR GENERATING THE L3 DATA
PRODUCTS
3.1 Generation of time series from sets of interferograms
The time series analysis will be performed using the Generic InSAR Analysis Toolbox
(GIAnT) (Hetland et al. 2012, Agram et al., 2013), which is openly downloadable from
http://earthdef.caltech.edu. This toolbox has been used in many studies including
interseismic deformation along the San Andreas Fault (Jolivet et al., 2014) and will
continue to be updated (with separate documentation) and openly released on a regular
basis.
GIAnT is distributed with implementations of SBAS (Berardino et al., 2002, Doin et al.,
2011) as well as the TimeFun and MInTS (Hetland et al., 2012) techniques. The approach
that will be used for the generation of NISAR L3 products is akin to the TimeFun
technique (Hetland et al., 2012) implemented in GIAnT; it allows for the explicit
inclusion of key basis functions (e.g., Heaviside functions, secular rate, etc.) in the
InSAR inversion, and we describe it further in Section 4. Figure 1 describes the
workflow that will be followed for L3 product generation. Modifications to this algorithm
may be identified and implemented in response to NISAR Phase C activities.
Figure 1: NISAR L3 product generation workflow.
As shown in Figure 1, the L3 product generation workflow includes the following
consecutive steps:
3.1.1 Stack preparation
In this initial processing step, all the necessary Level-2 unwrapped interferogram
products are gathered, organized and reduced to a common grid for analysis with GIAnT.
For operational NISAR processing, the following information from the Level-2 products
is used in the stack preparation step:
● Unwrapped interferograms (either in radar or ground coordinates) prepared using
the InSAR Scientific Computing Environment (ISCE) software (Rosen et al.,
2012) following the L1 and L2 product descriptions summarized HERE.
● Corresponding coherence layers (also generated using ISCE).
● Perpendicular baseline associated with the interferograms.
● A radar simulation file containing the pixels’ elevation.
● A file containing radar incidence angles.
● Shadow, layover and land/water mask layers corresponding to the interferograms.
● A processing configuration file that includes processing parameters such as
coherence thresholds, flags for applying phase corrections etc. to allow for region-
specific customization.
● Optional: Atmospheric delay metadata layers
In the current concept, L2 data will be provided as coregistered stacks of unwrapped
interferograms. Hence, no separate coregistration is planned during stack preparation.
Changes to this approach may be decided during Phase C. The output of the stack
preparation step is a self-contained HDF5 product that is handed off for further
processing.
3.1.2 Timeseries estimation and parameterization
The timeseries (i.e., the unfiltered displacement of each pixel vs. time) is estimated from
the processed stack using an SBAS or similar approach, and then parameterized using the
approach described in Section 4. In practice, GIAnT combines the two steps of SBAS and
model-based parameterization. As we expect high-quality orbital control for NISAR, we
anticipate that the set of interferograms will typically include all nearest-neighbor (i.e.,
~12-day pairs) and skip-1 interferograms, so the SBAS step will often be somewhat
trivial.
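The SBAS step amounts to a small linear inversion relating each interferogram to the displacement difference between its two epochs. A minimal, unweighted sketch for one pixel (the function name is ours; GIAnT's implementation adds weighting and quality control):

```python
import numpy as np

def sbas_invert(pairs, ifg_values, n_epochs):
    """Least-squares inversion of an interferogram network for a displacement
    time series, with the first epoch fixed to zero.

    pairs: list of (i, j) epoch indices per interferogram, with i < j.
    ifg_values: interferogram measurements (displacement units) per pair.
    Returns the displacement at each of the n_epochs acquisition dates."""
    G = np.zeros((len(pairs), n_epochs - 1))
    for row, (i, j) in enumerate(pairs):
        if j > 0:
            G[row, j - 1] = 1.0   # each ifg = d(t_j) - d(t_i)
        if i > 0:
            G[row, i - 1] = -1.0
    m, *_ = np.linalg.lstsq(G, np.asarray(ifg_values, float), rcond=None)
    return np.concatenate([[0.0], m])
```

With all nearest-neighbor and skip-1 pairs present, the network is well connected and the inversion is close to trivial, as noted above.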
3.1.3 Optional Corrections
Phase distortions related to solid earth and ocean tidal effects as well as those due to
temporal variations in the vertical stratification of the atmosphere can be mitigated using
the approaches described below. At this point, it is expected that these corrections will
not be needed to validate the mission requirements, but they may be used to produce the
highest quality data products. Typically, these are applied to the estimated time series
product rather than to the individual interferograms, since they are a function of the time
of each radar acquisition.
Optional atmospheric correction utilizes the PyAPS (Jolivet et al., 2011, Jolivet and
Agram, 2012) module within GIAnT for implementing weather model-based
interferometric phase delay corrections. PyAPS is well documented and maintained, and can
be freely downloaded (http://pyaps.googlecode.com; PyAPS is included in the GIAnT
distribution). PyAPS currently includes support for ECMWF’s ERA-Interim, NOAA’s
NARR and NASA’s MERRA weather models. A final selection of atmospheric models
to be used for operational NISAR processing will be done during Phase C.
Following Doin et al. (2009) and Jolivet et al. (2011), tropospheric delay maps are
produced from atmospheric data provided by Global Atmospheric Models. This method
aims to correct differential atmospheric delay correlated with the topography in
interferometric phase measurements. Global Atmospheric Models (hereafter GAMs),
such as ERA-Interim (European Center for Medium-Range Weather Forecast), MERRA
(Modern-Era Retrospective Analysis, Goddard Space Flight Center, NASA) or regional
models such as NARR (North American Regional Reanalysis, National Oceanographic
and Atmospheric Administration) provide estimates of the air temperature, the
atmospheric pressure and the humidity as a function of elevation on a coarse resolution
latitude/longitude grid. In PyAPS, we use this 3D distribution of atmospheric variables to
determine the atmospheric phase delay on each pixel of each interferogram.
For a given GAM dataset, we select grid points overlapping with the spatial coverage of
the SAR scene. Atmospheric variables are provided at specified pressure levels. We
vertically interpolate these values to a regular grid between the surface and a reference
altitude, $z_{ref}$, above which the delay is assumed to be nearly unchanged with time
(~30,000 m). Then, the delay function on each of the selected grid points of the GAM is
computed as a function of height. The LOS single-path delay $\delta L_{LOS}^{s}(z)$ at an elevation $z$
is given by (Doin et al., 2009; Jolivet et al., 2011):

$$\delta L_{LOS}^{s}(z) = \frac{10^{-6}}{\cos\theta}\left\{\frac{k_1 R_d}{g_m}\left(P(z)-P(z_{ref})\right)+\int_{z}^{z_{ref}}\left[\left(k_2-\frac{R_d}{R_v}k_1\right)\frac{e}{T}+k_3\frac{e}{T^2}\right]dz\right\}$$

(1)
where $\theta$ is the local incidence angle, $R_d = 287.05\ \mathrm{J\,kg^{-1}\,K^{-1}}$ and
$R_v = 461.495\ \mathrm{J\,kg^{-1}\,K^{-1}}$ are the dry air and water vapor specific gas constants, $g_m$ is a
weighted average of the gravity acceleration between $z$ and $z_{ref}$, $P$ is the dry air partial
pressure in Pa, $e$ is the water vapor partial pressure in Pa, and $T$ is the temperature in K.
The constants are $k_1 = 0.776\ \mathrm{K\,Pa^{-1}}$, $k_2 = 0.716\ \mathrm{K\,Pa^{-1}}$, and
$k_3 = 3.75\times10^{3}\ \mathrm{K^2\,Pa^{-1}}$.
The absolute atmospheric delay is computed at each SAR acquisition date. For a pixel $a$
at an elevation $z$ at acquisition date $i$, the four surrounding grid points are selected and
the delays for their respective elevations are computed. The resulting delay at the pixel $a$
is then the bilinear interpolation between the delays at the four grid points. Finally, we
combine the absolute delay maps of the InSAR partner images to produce the differential
delay maps used to correct the interferograms. Details and validation of the PyAPS
approach are available in Doin et al. (2009) and Jolivet et al. (2012).
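The bilinear interpolation step can be sketched as follows; this is a minimal illustration with hypothetical function and argument names, not the PyAPS implementation:

```python
import numpy as np

def bilinear_delay(lon, lat, grid_lons, grid_lats, corner_delays):
    """Bilinear interpolation of the delays computed at the four GAM grid
    points surrounding a pixel.  grid_lons and grid_lats hold the two grid
    longitudes/latitudes bounding the pixel; corner_delays is a 2x2 array
    of delays ordered [lat index, lon index]."""
    x = (lon - grid_lons[0]) / (grid_lons[1] - grid_lons[0])
    y = (lat - grid_lats[0]) / (grid_lats[1] - grid_lats[0])
    d = corner_delays
    return (d[0, 0] * (1 - x) * (1 - y) + d[0, 1] * x * (1 - y)
            + d[1, 0] * (1 - x) * y + d[1, 1] * x * y)
```

Differencing two such absolute delay maps for the partner images of an interferogram then gives the differential delay correction described above.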
Optional corrections for solid Earth and ocean-tide loading will be done using the
SPOTL model (Agnew, 2012). To facilitate an accurate representation of ocean tides,
SPOTL provides access to a collection of global and regional ocean models and allows
for an easy combination of these models. It also includes methods to convert computed
loads into harmonic constants, and to compute the tide in the time domain from these
constants. Optimal configurations for ocean tide modeling will be studied in Phase C.
4 APPROACH FOR VALIDATING L3 DATA PRODUCTS
4.1 Decomposition of InSAR time series into basis functions
Given a time series of InSAR LOS displacements, the observations for a given pixel,
𝑈(𝑡), can be parameterized as:
$$U(t) = a + vt + c_1\cos(\omega_1 t - \phi_1) + c_2\cos(\omega_2 t - \phi_2) + \sum_{j=1}^{n_{eq}}\left[h_j + f_j F_j(t - t_j)\right]H(t - t_j) + \frac{B_\perp(t)}{R\sin\theta}\,\Delta z + \mathrm{residual}$$

(2)
which includes a constant offset ($a$), velocity ($v$), and amplitudes ($c_j$) and phases ($\phi_j$)
of annual ($\omega_1$) and semiannual ($\omega_2$) sinusoidal terms. Where needed we can include
additional complexity, such as coseismic and postseismic processes parameterized by
Heaviside (step) functions $H$ and postseismic functions $F_j$ (the latter typically exponential
and/or logarithmic). $B_\perp(t)$, $R$, $\theta$, and $\Delta z$ are, respectively, the perpendicular component
of the interferometric baseline relative to the first date, slant range distance, incidence
angle and topography error correction (e.g., Fattahi and Amelung, 2013) for the given
pixel.
This parameterization of ground deformation has a long heritage in geodesy, particularly
in analysis of GPS time series as well as more recently with InSAR data (e.g., Blewitt,
2007, Hetland et al., 2012, Agram et al., 2013). For validation purposes, we would
perform the same parameterization on any lowpass-filtered cGPS time series used in the
analysis, after projecting the GPS into the InSAR line of sight.
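To illustrate how the basis functions in Equation (2) translate into a linear estimation problem, the sketch below builds a design matrix with offset, velocity, seasonal, and Heaviside columns. The amplitude/phase form $c\cos(\omega t - \phi)$ is rewritten as the linear combination $A\cos\omega t + B\sin\omega t$ so the problem stays linear; function and variable names are ours, and the postseismic and topography-error columns are omitted for brevity.

```python
import numpy as np

def design_matrix(t, t_eq=None):
    """Design matrix G for (a subset of) the parameterization in Eq. (2),
    evaluated at acquisition epochs t (in years).  The seasonal terms are
    linearized as A cos(w t) + B sin(w t); amplitude and phase are
    recovered afterwards as c = hypot(A, B), phi = arctan2(B, A).
    t_eq is an optional list of event times, each adding a Heaviside
    step column H(t - t_eq)."""
    w1, w2 = 2 * np.pi, 4 * np.pi          # annual and semiannual (rad/yr)
    cols = [np.ones_like(t), t,
            np.cos(w1 * t), np.sin(w1 * t),
            np.cos(w2 * t), np.sin(w2 * t)]
    for te in (t_eq or []):
        cols.append((t >= te).astype(float))   # coseismic step
    return np.column_stack(cols)
```

For SBAS output, G is evaluated at the SAR dates as shown; for individual interferograms, each row would instead be the difference of two such rows, taken between the dates of the pair.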
Thus, given either an ensemble of interferograms or the output of SBAS (displacement
vs. time), we can write the LSQ problem as
$$Gm = d$$

(3)
where G is the design matrix (constructed out of the different functional terms in
Equation 2 evaluated either at the SAR image dates for SBAS output, or between the
dates spanned by each pair for interferograms), m is the vector of model parameters (the
coefficients in Equation 2) and d is the vector of observations. For GPS time series, G,
d, and m, are constructed using values evaluated at single epochs corresponding to the
GPS solution times, as for SBAS InSAR input. For comparison with InSAR
observations, we project the 3D GPS time series to the radar LOS using the appropriate
LOS vector. Equation 3 can be solved as a conventional weighted LSQ problem for the
maximum likelihood model, where we minimize the L2 norm of the weighted misfit
(e.g., Aster et al., 2013):
$$\min\ \varphi(m) = (d - Gm)^T C_d^{-1} (d - Gm)$$

(4)
Here, the data covariance matrix, Cd, is constructed using the empirical estimate of
correlation from each contributing interferogram over the appropriate subset of pixels
(i.e., masking out water bodies and regions that are decorrelated, such as agricultural
fields) and superscript T denotes matrix transpose. Only pixels that are coherent in most
interferograms are used as input to the construction of Cd. The solution for this
overdetermined minimization problem can be written as
$$m_{est} = G^{-g} d$$

(5)

where

$$G^{-g} = \left[G^T C_d^{-1} G\right]^{-1} G^T C_d^{-1}$$

(6)
The full covariance on the estimated parameters, $C_m$, can be estimated from

$$C_m = G^{-g} C_d \left(G^{-g}\right)^T$$

(7)
With this formulation, we can obtain GPS and InSAR velocity estimates and their formal
uncertainties (including in areas where the expected answer is zero).
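A minimal sketch of the weighted LSQ machinery in Equations (4)-(7), assuming a dense, invertible data covariance (function name is ours):

```python
import numpy as np

def weighted_lsq(G, d, Cd):
    """Maximum-likelihood solution of Gm = d with data covariance Cd.
    Implements Eqs. (5)-(7): m_est = G^-g d with
    G^-g = [G^T Cd^-1 G]^-1 G^T Cd^-1, and Cm = G^-g Cd (G^-g)^T."""
    Cd_inv = np.linalg.inv(Cd)
    N = G.T @ Cd_inv @ G                      # normal matrix
    Gg = np.linalg.solve(N, G.T @ Cd_inv)     # generalized inverse, Eq. (6)
    m_est = Gg @ d                            # Eq. (5)
    Cm = Gg @ Cd @ Gg.T                       # Eq. (7); equals N^-1 here
    return m_est, Cm
```

Note that when $C_d$ is the true data covariance, Equation (7) algebraically reduces to $[G^T C_d^{-1} G]^{-1}$, the usual posterior covariance.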
4.2 NISAR Validation Procedure
Once we derive displacement parameters from cGPS (mest,cGPS) and InSAR (mest,InSAR)
via (2) – (7), we use two complementary approaches (here referred to as A and B) to
validate the L2 requirements discussed in this document. Both approaches are needed to
understand the limits of performance as completely as possible given existing limitations
on resources and the distribution of cGPS networks.
A: cGPS-InSAR direct comparison: Here we compare parameterized time series from
InSAR and GPS, across the length scales described in the L2 requirements. We calculate
gradients of the relevant time series parameters (i.e., velocity, v) between all possible
pairs of cGPS locations within a validation region, resulting in the vectors Δmest,cGPS and
Δmest,InSAR. For all these pairs, we will perform unpaired two-sample t-tests (Snedecor &
Cochran, 1989) to test the null hypothesis that the two estimates with their respective
errors are from the same population. We will perform these tests at the 95% confidence
level.
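The hypothesis test in approach A can be sketched as follows, assuming the gradient estimates have been collected into two sample vectors. We use the unequal-variance (Welch) form of the unpaired two-sample t-test as implemented in SciPy; the wrapper function and its name are ours.

```python
import numpy as np
from scipy import stats

def same_population(dm_gps, dm_insar, alpha=0.05):
    """Unpaired two-sample t-test on the vectors of parameter gradients
    (e.g., velocity differences between cGPS station pairs) from GPS and
    InSAR.  Returns True if the null hypothesis that both samples come
    from the same population is NOT rejected at the 1 - alpha level."""
    t_stat, p_val = stats.ttest_ind(dm_gps, dm_insar, equal_var=False)
    return bool(p_val > alpha)
```

With alpha = 0.05 this corresponds to the 95% confidence level quoted above.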
B: InSAR Residual analysis: Using only InSAR data, we analyze the residuals w, the
difference between the displacements predicted by the estimated model mest,InSAR and
the observations d,

$$w = G m_{est,InSAR} - d$$

(8)
We calculate empirical structure functions, Sw, from the residuals w for a subsequent
analysis of signal noise as a function of spatial scale. This approach is broadly similar to
how the Performance Tool has been validated for SES requirements (Hensley et al.,
2016). We define the semivariogram S as the variance of the difference between two
points separated by distance 𝑟
$$S(r) = E\left[\left(f(x) - f(x - r)\right)^2\right]$$

(9)
such that the covariance between two points corresponds to:

$$C_w(r) = \sigma^2 - \frac{S(r)}{2},$$

(10)

where $\sigma^2$ is the variance of the noise within the data set (Williams et al., 1998).
To calculate 𝐶u(𝑟) for a residual, w, we first detrend w at the scale of the full imaging
swath (~240 km) to meet the stationarity assumption inherent to covariance theory. To
detrend, we fit and remove a linear plane from the data. Subsequently, we calculate the
structure function Sw according to (Lohman & Simons, 2005)
$$S_{w,dx,dy} = \frac{1}{n_v}\sum_{i=dx}^{n_x}\sum_{j=dy}^{n_y}\left(w_{i,j} - w_{i-dx+1,\,j-dy+1}\right)^2$$

(11)

where $dx = [1{:}n_x]$ and $dy = [1{:}n_y]$ are the sampling intervals of $w$ in the two geographic
directions, $n_x$ and $n_y$ are the maximum distances covered by the matrix $w$ in $x$ and $y$,
and $n_v$ is the number of valid values within the overlapping region at each shift $(dx, dy)$.
$n_v$ is not necessarily equal to $n_x$ times $n_y$, due to water bodies and other regions that
are decorrelated in most interferograms.
While, in general, noise in w is anisotropic, here we neglect this anisotropy and assume
that the directional average of Sw versus distance is a good approximation of 𝐶u(𝑟).
Given Sw, we extract values at scales 𝐿 = [5, 10, 20, 30, 40, 50] 𝑘𝑚 from Sw and compare
them to the L2 requirements at these scales for validation.
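The detrending and structure-function steps can be sketched as follows. This is a minimal illustration of Equation (11) on a gridded residual: function names are ours, masked pixels are marked with NaN, and the shift convention is zero-based rather than the one-based indexing of Eq. (11).

```python
import numpy as np

def detrend_plane(w):
    """Fit and remove a best-fit plane a + b*x + c*y (the detrending step
    applied before Eq. (11) to satisfy the stationarity assumption)."""
    ny_, nx_ = w.shape
    y, x = np.mgrid[0:ny_, 0:nx_]
    A = np.column_stack([np.ones(w.size), x.ravel(), y.ravel()])
    mask = ~np.isnan(w.ravel())
    coef, *_ = np.linalg.lstsq(A[mask], w.ravel()[mask], rcond=None)
    return w - (A @ coef).reshape(w.shape)

def structure_function(w, dx, dy):
    """Empirical structure function S_w at a pixel shift (dx, dy): the
    mean squared difference between residual values separated by that
    shift.  NaNs (water, decorrelated areas) are excluded, so the divisor
    is n_v, the count of valid overlapping pairs."""
    a = w[dx:, dy:]                                  # shifted copy
    b = w[:w.shape[0] - dx, :w.shape[1] - dy]        # reference copy
    diff2 = (a - b) ** 2
    valid = ~np.isnan(diff2)
    return diff2[valid].sum() / valid.sum()
```

Averaging structure-function values over all shifts of a common length then gives the directionally averaged estimate used for comparison with the L2 requirements.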
5 ASSUMPTIONS AND REQUIRED INPUT
The calval activities covered by this ATBD assume:
❏ The project will provide a set of fully coregistered unwrapped L2
interferograms (an InSAR “stack”) over regions of interest listed in the NISAR
Solid Earth calval document. For the purpose of testing calval algorithms
prior to NISAR launch, interferogram stacks will be made using SAR data
from complementary missions (e.g. Sentinel-1 or ALOS-2). These stacks will
include at a minimum nearest-neighbor and skip-1 interferograms to mimic
the planned standard L2 data product from NISAR, and will span a minimum
of 2 years to support full testing of the validation algorithms for all three L2
requirements (including the fitting of seasonal basis functions to the InSAR
time series). A more complete set of interferograms, including pairs
spanning longer periods, may be requested for regions with higher
vegetation cover, soil moisture and/or snow cover variability. The format of
these interferograms will be consistent with the GIAnT analysis package
(Agram et al., 2013), which will be used to generate L3 time series data
products.
❏ As part of L2 processing, the project may choose to calculate and apply
optional corrections to minimize errors due to non-geophysical sources. An
example of this kind of correction would be the removal of ionospheric
propagation delays using split-band processing (e.g., Rosen et al., 2010;
Gomba et al., 2016; Liang and Fielding, 2017; Fattahi et al., 2017).
❏ The project may also choose to apply spatial filtering and/or masking of
regions that are decorrelated during the process of downlooking to the
required resolution (e.g., 100 meters) (e.g., Lyons and Sandwell, 2003).
❏ The project will have access to L2 position data for continuous GPS/GNSS
stations in third-party networks such as NSF’s Plate Boundary Observatory, the
HVO network for Hawaii, GEONET-Japan, and GEONET-New Zealand, located
in target regions for NISAR solid earth calval. Station data will be post-
processed by one or more analysis centers, will be freely available, and will
have latencies of several days to weeks, as is the case with positions
currently produced by the NSF’s GAGE Facility and separately by the
University of Nevada Reno. Networks will contain one or more areas of
high-density station coverage (2–20 km nominal station spacing over 100 x 100
km or more) to support validation of L2 NISAR requirements at a wide range
of length scales.
6 PLANNED DELIVERABLES
L3 products will include:
● Maps of locations where the InSAR and GPS data are being compared
● LOS displacement vs. time plots showing:
o InSAR time series using a standard SBAS approach (Berardino et al.,
2002, Hooper, 2006)
o The parameterized LSQ solution to the InSAR data
o The corresponding time series of the LOS component of the GPS time
series
o The corresponding LSQ solution to the LOS component of the GPS
time series
● Tables and/or figures of comparisons showing LSQ solutions and error
estimates of velocities and offsets as a function of baseline length from both
InSAR and GPS observations.
7 EXPECTED ACTIVITIES IN PHASE C
It is expected that this ATBD will be modified during Phase C. Anticipated
modifications include:
● A demonstration of the approaches described here using available SAR and
GPS data.
● An exploration of the importance of split-band processing for ionospheric
corrections and of the role of corrections using atmospheric weather models.
8 REFERENCES
Agnew, D. C., SPOTL: Some Programs for Ocean-Tide Loading, SIO Technical Report,
Scripps Institution of Oceanography, http://escholarship.org/uc/item/954322pg, 2012.
Agram, P.S. , R. Jolivet, B. Riel, Y. N. Lin, M. Simons et al., New Radar Interferometric
Time Series Analysis Toolbox Released, EOS Transactions, 94, 7, 69-70, 2013.
Aster, R.C., B. Borchers, and C.H. Thurber, Parameter Estimation and Inverse Problems,
2nd edition, Elsevier Academic Press, 2013.
Bamler, R. and P. Hartl, Synthetic aperture radar interferometry, Inverse Problems, 14,
R1—R54, 1998.
Berardino, P., G. Fornaro, R. Lanari, and E. Sansosti, A new algorithm for surface
deformation monitoring based on small baseline differential SAR interferograms,
IEEE Trans. on Geosci. and Rem. Sens., 40, 2375–2383, 2002.
Blewitt, G., GPS and space based geodetic methods, in Treatise on Geophysics, vol. 3,
edited by T. Herring, pp. 351– 390, Academic, Oxford, U.K., 2007.
Bürgmann, R., Rosen, P.A. and E.J. Fielding, Synthetic aperture radar interferometry to
measure Earth's surface topography and its deformation, Annual Review of Earth and
Planetary Sciences, 28(1), pp.169-209, 2000.
Doin, M-P., C. Lasserre, G. Peltzer, O. Cavalié, and C. Doubre, Corrections of stratified
tropospheric delays in SAR interferometry: Validation with global atmospheric
models, Journal of Applied Geophysics, 69(1), pp. 35-50, 2009.
Doin, M. P., S. Guillaso, R. Jolivet, C. Lasserre, F. Lodge, G. Ducret, and R. Gradin,
Presentation of the small baseline NSBAS processing chain on a case example: The
Etna deformation monitoring from 2003 to 2010 using Envisat data. In FRINGE 2011
ESA Conference, Frascati, Italy, September 2011. ESA.
Duputel, Z., J. Jiang, R. Jolivet, M. Simons et al., The Iquique earthquake sequence of
April 2014: Bayesian modeling accounting for prediction uncertainty, Geophys. Res.
Lett., doi:10.1002/2015GL065402, 2015.
Fattahi, H., & Amelung, F., DEM error correction in InSAR time series. IEEE
Transactions on Geoscience and Remote Sensing, 51(7), 4249-4259, 2013.
Fattahi, H., Simons, M. & Agram, P., 2017. InSAR Time-Series Estimation of the
Ionospheric Phase Delay: An Extension of the Split Range-Spectrum Technique,
IEEE Transactions on Geoscience and Remote Sensing, 55, 5984-5996.
Goldstein, R. M., H. Engelhardt, B. Kamb and R. M. Frolich, Satellite Radar
Interferometry for Monitoring Ice Sheet Motion: Application to an Antarctic Ice
Stream, Science, 262, 1525-1530, 1993.
Gomba, G., A. Parizzi, F. De Zan, M. Eineder and R. Bamler, Toward Operational
Compensation of Ionospheric Effects in SAR Interferograms: The Split-Spectrum
Method, IEEE Trans. Geosci. Rem. Sens., 54, doi:10.1109/TGRS.2015.2481079,
1446-1461, 2016.
Hanssen, R. F., Radar Interferometry: Data Interpretation and Error Analysis, Springer,
New York, 2001.
Hensley, S.H., P. Agram, S. Buckley, H. Ghaemi, E. Gurrola, L. Harcke, C.
Veeramachaneni and S-H Yun, NISAR Performance Modeling and Error Budget, Jet
Propulsion Laboratory, Interoffice Memorandum, January 26, 2016.
Hetland, E. A., P. Musé, M. Simons, Y. N. Lin, P. S. Agram, and C. J. DiCaprio,
Multiscale InSAR time series (MInTS) analysis of surface deformation, Journal of
Geophysical Research: Solid Earth, 117(B2), 2012.
Hooper, A., Persistent Scatterer Radar Interferometry for Crustal Deformation Studies
and Modeling of Volcanic Deformation, PhD Thesis, Stanford University, 2006.
Jolivet, R., R. Grandin, C. Lasserre, M.- P. Doin, and G. Peltzer, Systematic InSAR
tropospheric phase delay corrections from global meteorological reanalysis data,
Geophysical Research Letters, 38(17), 2011.
Jolivet, R. and P. S. Agram, Python-based atmospheric phase screen mitigation using
atmospheric re-analysis, 2012. URL: http://pyaps.googlecode.com.
Jolivet, R., C. Lasserre, M.-P. Doin, S. Guillaso, G. Peltzer, R. Dailu, J. Sun, Z.-K. Shen,
and X. Xu, Shallow creep on the Haiyuan fault (Gansu, China) revealed by SAR
interferometry, Journal of Geophysical Research: Solid Earth, 117(B6), 2012.
Jolivet, R., P. Agram, Y. N. Lin, M. Simons, M.-P. Doin, G. Peltzer, and Z. Li, Improving
InSAR geodesy using Global Atmospheric Models, J. Geophys. Res., doi:
10.1002/2013JB010588, 2014.
Liang, C., & Fielding, E. J. (2017). Interferometry with ALOS-2 full-aperture ScanSAR
data. IEEE Transactions on Geoscience and Remote Sensing, 55(5), 2739-2750.
Lohman, R, and M. Simons, Some thoughts on the use of InSAR data to constrain models
of surface deformation: Noise structure and data downscaling, Geochemistry,
Geophysics, Geosystems, 6(1), doi:10.1029/2004GC000841, 2005.
Lyons, S., and D. Sandwell. Fault creep along the southern San Andreas from
interferometric synthetic aperture radar, permanent scatterers, and stacking. Journal
of Geophysical Research: Solid Earth 108.B1, doi:10.1029/2002JB001831, 2003.
Massonnet D., M. Rossi, C. Carmona, F. Adragna, G. Peltzer, K. Feigl and T. Rabaute,
The displacement field of the Landers earthquake mapped by radar interferometry,
Nature, 364, 138-142, doi:10.1038/364138a0, 1993.
Massonnet, D. and K. L. Feigl, Radar interferometry and its application to changes in the
earth’s surface, Review of Geophysics, 36, 441-500, 1998.
Pritchard, M. E., C. Ji, and M. Simons, Distribution of slip from 11 Mw > 6 earthquakes
in the northern Chile subduction zone, J. Geophys. Res., 111, doi:
10.1029/2005JB004013, 2006.
Pritchard, M. E., E. O. Norobuena, C. Ji, R. Boroscheck, D. Comte, M. Simons, T. H.
Dixon, and P. A. Rosen, Geodetic, teleseismic, and strong motion constraints on slip
from recent southern Peru subduction zone earthquakes, J. Geophys. Res., 112,
B03307, doi:10.1029/2006JB004294, 2007.
Rosen, P.A., S. Hensley, I.R. Joughin, F.K. Li, S.N. Madsen, E. Rodriguez and R.M.
Goldstein, Synthetic aperture radar interferometry, Proceedings of the IEEE, 88(3),
pp.333-382, 2000.
Rosen, P. A., S. Hensley, and C. Chen, Measurement and mitigation of the ionosphere in
L-band interferometric SAR data, In Radar Conference, IEEE, 1459-1463, 2010.
Rosen, P. A., E. Gurrola, G. F. Sacco, and H. Zebker, The InSAR scientific computing
environment, In Synthetic Aperture Radar, 2012. EUSAR. 9th European Conference
on, pp. 730-733. VDE, 2012.
Simons, M. and P.A. Rosen, Interferometric Synthetic Aperture Radar Geodesy, in:
Treatise on Geophysics - Geodesy. Vol.3, Elsevier , Amsterdam, pp. 391-446. ISBN
9780444527486, 2007.
Snedecor, G.W. and W.G. Cochran, Statistical Methods, Eighth Edition, Iowa State
University Press, 1989.
Williams, S., Y. Bock, and P. Fang, Integrated satellite interferometry: Tropospheric
noise, GPS estimates, and implication for interferometric synthetic aperture radar
products, J. Geophys. Res., 103, 27,051 – 27,067, 1998.
Zebker, H. A., P. A. Rosen, R. M. Goldstein, A. Gabriel and C. L. Werner, On the
derivation of coseismic displacement fields using differential radar interferometry:
The Landers earthquake, J. Geophys. Res., 99, 19617-19634, 1994.