This document summarizes a study that integrated seismic velocity and electrical resistivity data to probabilistically evaluate rock quality designation (RQD) between boreholes. The study used indicator kriging to create an initial RQD distribution from borehole observations. It then improved this distribution by incorporating the geophysical data using a permanence ratio, which calculates the probability of an event based on multiple information sources. The final probabilistic RQD distribution allowed for a more quantitative rock quality assessment and informed decision making for tunnel design and safety.
This document discusses using digitized outcrop images and forward modeling to simulate ground-penetrating radar (GPR) data under different water saturation scenarios. Petrophysical models are used to estimate electrical properties for different lithological elements in an outcrop image from a gravel quarry site. GPR simulations are performed for the outcrop model under three water saturation states: uniformly drained, nonuniformly saturated, and fully saturated. Comparisons of the synthetic GPR sections with field data show that the occurrence of reflections depends on the presence and distribution of pore water. The modeling approach allows investigation of GPR sensitivity to different soil types and conditions.
Estimating the Probability of Earthquake Occurrence and Return Period Using Generalized Linear Models
In this paper, the relationship between earthquake occurrence frequency and magnitude is modeled with generalized linear models for a set of earthquake data from Nepal. Goodness-of-fit tests are applied to the candidate generalized linear models, and on the basis of the model selection information criteria, the Akaike information criterion and the Bayesian information criterion, the generalized Poisson regression model is selected as the most suitable model for the study. The objective of the study is to determine the parameters (a and b values) and to estimate the probability of earthquake occurrence and the corresponding return period using the Poisson regression model, comparing the results with the Gutenberg-Richter model. The study suggests that the occurrence probabilities and return periods estimated by the two models are relatively close to each other, with the return periods from the generalized Poisson regression model being somewhat shorter than those from the Gutenberg-Richter model.
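The Gutenberg-Richter recurrence law that the paper compares against can be sketched directly. The snippet below fits log10 N(M) = a - b*M to synthetic annual exceedance counts, then converts the fitted rate into a return period and a Poisson occurrence probability; the function names and data are illustrative, not the Nepal catalog or code used in the paper.

```python
import math

def gutenberg_richter_fit(mags, counts):
    """Least-squares fit of log10 N(M) = a - b*M to annual exceedance counts."""
    ys = [math.log10(n) for n in counts]
    n = len(mags)
    xbar = sum(mags) / n
    ybar = sum(ys) / n
    # slope of log10 N vs. M is -b
    b = -sum((x - xbar) * (y - ybar) for x, y in zip(mags, ys)) / \
        sum((x - xbar) ** 2 for x in mags)
    a = ybar + b * xbar
    return a, b

def return_period(a, b, m):
    """Mean return period (years) of events with magnitude >= m."""
    return 1.0 / (10 ** (a - b * m))

def poisson_prob(t, T):
    """Probability of at least one exceedance in t years under a Poisson model."""
    return 1.0 - math.exp(-t / T)
```

With a = 4 and b = 1, for example, magnitude 6 events have a 100-year return period, and the chance of at least one in 50 years is 1 − e^(−0.5) ≈ 0.39.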
This document summarizes and compares methods for evaluating the performance of ground motion prediction equations (GMPEs) against observed ground motion databases. It evaluates several central and eastern North America (CENA) GMPEs using the NGA-East database, applying residual analysis and other statistical tests to check for bias and normality. It then ranks the GMPEs using log-likelihood and Euclidean distance-based methods to determine the most appropriate models for the target region.
APPLICATION OF GENE EXPRESSION PROGRAMMING IN FLOOD FREQUENCY ANALYSIS
This document discusses different methods for flood frequency analysis, including Gumbel's method, artificial neural networks (ANN), and gene expression programming (GEP). Gumbel's method is widely used in India to predict flood peaks. ANN and GEP are artificial intelligence techniques that have been applied to hydraulic engineering problems in recent decades. The document focuses on applying GEP to flood frequency analysis of the Ganga River at Hardwar, India. GEP is implemented to derive a relationship between peak flood discharge and return period. The results of GEP are promising and suggest it is a useful alternative to more conventional flood frequency analysis methods.
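Gumbel's method, mentioned above as the conventional baseline, reduces to a frequency-factor formula on the annual peak series. The sketch below is a textbook implementation using the large-sample reduced-variate constants; the peak discharges are made up, not the Ganga at Hardwar record.

```python
import math
from statistics import mean, stdev

def gumbel_flood(peaks, T, yn=0.5772, sn=1.2825):
    """Gumbel (EV-I) flood estimate for return period T (years).

    peaks: annual peak discharges; yn, sn: reduced mean and reduced
    standard deviation (large-sample limits used by default).
    """
    yT = -math.log(math.log(T / (T - 1.0)))   # reduced variate for T
    K = (yT - yn) / sn                        # frequency factor
    return mean(peaks) + K * stdev(peaks)
```

For three illustrative peaks of 100, 120, and 140 m³/s, the 100-year estimate comes out near 183 m³/s.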
The document discusses methods for integrating imprecise or biased secondary data from multiple sources into mineral resource estimates. It examines cokriging (CK), multicollocated cokriging (MCCK), and ordinary kriging with variance of measurement error (OKVME). Where abundant imprecise but unbiased secondary data are available, the document recommends OKVME, as it improves estimation precision over using the primary data alone or pooling the data under ordinary kriging. For abundant imprecise and biased secondary data, CK is recommended, as it provides an unbiased estimate and some improvement in precision relative to using only the primary data. The document evaluates these techniques using an iron ore case study.
A geostatistical approach to coal reserve classification
This document describes a geostatistical approach to classifying coal reserves. It begins by outlining the traditional Queensland and New South Wales coal classification codes, which categorize reserves as measured, indicated, or inferred based on sampling density. The document then proposes a geostatistical algorithm for classification that assigns blocks to one of three classes (A, B, C) based on estimated error variance from kriging and maximum allowable block size, corresponding to the traditional categories. The algorithm works by iteratively enlarging blocks and checking if estimated error and size satisfy restrictions for the most restrictive Class A, before moving to less restrictive Classes B and C. This provides a standardized, quantitative approach while still adhering to statutory requirements.
This document discusses the derivation of new empirical magnitude conversion relationships for earthquakes in Turkey and surrounding regions between 1900-2012. It uses an improved earthquake catalog containing 12,674 events of magnitude 4.0 and greater. The catalog includes events reported in different magnitude scales. The study derives conversion equations to relate moment magnitude (Mw) to local magnitude (ML), duration magnitude (Md), body wave magnitude (mb), and surface wave magnitude (Ms) using 489 events that have reported Mw values. Both orthogonal regression and ordinary least squares methods are used and compared. The new relationships are meant to make the catalog more homogeneous by converting all magnitudes to the common Mw scale.
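The contrast the study draws between ordinary least squares and orthogonal regression can be sketched as follows: orthogonal (total least squares) regression minimizes perpendicular distances, which is preferable when both magnitude scales carry measurement error. The function names and magnitudes below are illustrative, not the Turkish catalog.

```python
import math

def ols_fit(x, y):
    """Ordinary least squares: errors assumed only in y."""
    n = len(x); xb = sum(x) / n; yb = sum(y) / n
    sxy = sum((a - xb) * (b - yb) for a, b in zip(x, y))
    sxx = sum((a - xb) ** 2 for a in x)
    m = sxy / sxx
    return m, yb - m * xb

def orthogonal_fit(x, y):
    """Orthogonal (total least squares) fit: errors in both variables."""
    n = len(x); xb = sum(x) / n; yb = sum(y) / n
    sxx = sum((a - xb) ** 2 for a in x)
    syy = sum((b - yb) ** 2 for b in y)
    sxy = sum((a - xb) * (b - yb) for a, b in zip(x, y))
    # closed-form slope of the orthogonal fit (requires sxy != 0)
    m = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return m, yb - m * xb
```

On noise-free data the two fits coincide; they diverge once scatter appears in both scales, which is why orthogonal regression is often preferred for magnitude conversion.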
The document compares earthquake wave propagation analysis results from SHAKE2000 and Plaxis v8 against the Indonesian standard SNI 03-1726-2012 for North, Central, and South Jakarta. The analysis models soil at five locations using synthetic ground motions for a 2500-year earthquake. The results show that site-specific spectra are generally higher than the standard. Linear Elastic modeling in Plaxis yields values far too high for seismic analysis, and Mohr-Coulomb modeling is likewise unsuitable; Hardening Soil with Small Strain modeling is the most suitable. SHAKE2000's equivalent-linear modeling best matches the standard, but further study of the soil parameters is needed.
Prediction of scour depth at bridge abutments in cohesive bed using gene expression programming
Scour modelling in cohesive beds is considerably more complex than in sandy beds, and consequently only a limited number of studies on local scour at bridge abutments in cohesive sediment are available. Recently, good progress has been made in the development of data-driven techniques based on artificial intelligence (AI). AI-based inductive modelling techniques are frequently used to model complex processes because of their powerful non-linear model structures and their capability to capture cause-and-effect relationships in such processes. Gene Expression Programming (GEP) is one such AI technique; it has emerged as a powerful tool for encoding complex phenomena in a simple chromosomal architecture, and it has proved more accurate and much simpler to apply than other AI tools. In the present study, GEP is implemented to develop a scour depth prediction model for bridge abutments in cohesive sediments, using laboratory data available in the literature. The study reveals that GEP outperforms a nonlinear regression model in predicting scour depth at abutments in cohesive beds.
Statistical Tuning Chart for Mapping Porosity Thickness: a Case Study of Channel Sand Reservoirs in the Kutei Basin
Reservoir assessment is controlled not only by the structural framework but also by stratigraphic features. Stratigraphic interpretation, which relies on seismic amplitude interpretation, is used to describe petrophysical aspects of channel sand reservoirs such as net porosity and thickness. This paper aims to map the porosity thickness for a case study of a channel sand body reservoir in the Kutei basin. The study area is a complex channel reservoir system that occupies a specific part of the depositional system. The geometry of the sediment channel, which thins toward the channel margins, resembles a wedge model and may therefore be influenced by tuning effects. Tuning effects introduce a pitfall in interpreting high-quality reservoir intervals marked by contrasts in acoustic impedance. To distinguish high-amplitude responses caused by tuning effects from those caused by acoustic properties, the amplitude responses must be correlated with reservoir thickness. The statistical tuning chart is one technique for correlating amplitude responses with reservoir thickness. Applying this technique to real data sets yields a net porosity thickness map over the targeted reservoir, so that high-quality reservoir characterization can be performed to delineate the geometric framework of the reservoir.
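The tuning effect discussed above sets in near the quarter-wavelength (Widess) limit. The statistical tuning chart itself is data-driven, but the rule of thumb behind it is a one-line calculation; the velocity and frequency below are illustrative values, not Kutei basin parameters.

```python
def tuning_thickness(velocity_mps, dominant_freq_hz):
    """Quarter-wavelength tuning thickness (Widess criterion), in meters.

    Below this bed thickness, top and base reflections interfere and
    amplitude no longer tracks thickness directly.
    """
    return velocity_mps / (4.0 * dominant_freq_hz)
```

For a 2500 m/s interval velocity and a 25 Hz dominant frequency, the tuning thickness is 25 m.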
Introduction of Lateral Decision Conveying Approach Based Multi Criteria Assessment
Outlining potentiality is the foundation step toward the conservation and management of any resource. Continuous, substantial extraction of groundwater has already diminished the quantity of this vital, life-supporting flow resource. Delineation of groundwater potentiality (GWP) is essential not only to carry the present population through the looming water crisis but also to improve the sustainability of water resource development for generations to come. In the present study, the GWP of the selected study area is delineated scientifically through multi-criteria assessment (MCA) of several hydro-geomorphic data sets. In determining the individual weights and sub-weights for the MCA, a new approach is developed to strike a balance between accuracy and intricacy.
This document discusses modeling groyne placement on river bends based on sedimentation analysis using numerical simulation with the finite difference method. The goal is to determine optimal groyne placement by considering sediment accumulation volumes in groyne fields.
The study plans to simulate 450 cases combining various groyne positions, lengths, flow velocities, bend radii, and suspended sediment concentrations. Conservation equations for mass and momentum will be used to develop the mathematical model. Validation and verification processes will evaluate the agreement between experimental data and model predictions.
Regression analysis of the simulation results will determine suitability coefficients and relationships between parameters such as Froude number and groyne length, providing guidance on the best spacing between groynes.
The document summarizes an empirical ground motion model developed as part of the PEER Next Generation Attenuation (NGA) project. Key points:
- The model predicts peak ground acceleration, velocity, displacement, and response spectra for shallow crustal earthquakes in active tectonic regions.
- It is based on over 1,500 recordings from 64 earthquakes ranging in magnitude from 4.3 to 7.9 and distances from 0.1 to 199 km.
- The model accounts for magnitude, distance, faulting style, depth, directivity, site conditions, and variability between events and recordings.
Applying the “abcd” monthly water balance model for some regions in the United States
This document describes applying the "abcd" monthly water balance model to three catchment regions in the United States to assess the model's feasibility in different climate regions. The model was able to adequately simulate streamflow for two catchments in warm, humid regions but was not able to simulate a catchment dominated by snowfall. Model parameters were calibrated for one catchment and applied successfully to another similar catchment, demonstrating potential for regionalization. However, the model requires modifications to account for snow dynamics to be effective in snow-dominated regions.
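The "abcd" model itself is compact: each month, available water is partitioned by the four parameters a (runoff propensity), b (soil moisture capacity), c (recharge fraction), and d (baseflow recession constant). The step below follows the standard Thomas (1981) formulation as a sketch; the function name and the inputs in the test are illustrative, not the paper's calibrated values.

```python
import math

def abcd_step(P, PE, S_prev, G_prev, a, b, c, d):
    """One monthly step of the 'abcd' water balance model.

    P: precipitation, PE: potential evapotranspiration (mm),
    S_prev / G_prev: soil moisture and groundwater storage from last month,
    a in (0, 1], b > 0 (mm), c and d in [0, 1].
    Returns (streamflow Q, soil moisture S, groundwater G).
    """
    W = P + S_prev                              # available water
    u = (W + b) / (2.0 * a)
    Y = u - math.sqrt(u * u - W * b / a)        # evapotranspiration opportunity
    S = Y * math.exp(-PE / b)                   # end-of-month soil moisture
    direct = (1.0 - c) * (W - Y)                # direct runoff
    G = (G_prev + c * (W - Y)) / (1.0 + d)      # groundwater store after recharge
    Q = direct + d * G                          # streamflow = runoff + baseflow
    return Q, S, G
```

The snow limitation noted in the summary is visible here: precipitation enters the balance immediately, with no store that holds it as snowpack for later melt.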
This document discusses a method for classifying atmospheric circulation patterns (CPs) based on their links to extreme wave events. It proposes using an entropy measure to objectively evaluate the quality of CP classifications and determine the optimal number of CP classes. The method is applied to wave data from Durban, South Africa to classify CPs driving extreme wave heights over 3.5m. The results indicate that 15-20 CP classes are needed for a good quality classification but one persistent class explains a large proportion of extreme events regardless of the number of classes.
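One way to read the entropy measure described above: a CP classification is good if knowing the class reduces the uncertainty of the extreme/non-extreme event indicator. The information-gain sketch below is a minimal interpretation with a toy event series, not the Durban data or the paper's exact statistic.

```python
import math

def shannon(probs):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def classification_entropy_skill(events, labels):
    """Relative entropy reduction of the event indicator (0/1) achieved
    by conditioning on circulation-pattern class labels. 1 = perfect,
    0 = classes carry no information about events."""
    n = len(events)
    p1 = sum(events) / n
    H0 = shannon([p1, 1 - p1])                  # unconditional entropy
    Hc = 0.0
    for c in set(labels):
        evs = [e for e, l in zip(events, labels) if l == c]
        w = len(evs) / n                        # class frequency
        pc = sum(evs) / len(evs)                # event rate within class
        Hc += w * shannon([pc, 1 - pc])         # weighted conditional entropy
    return (H0 - Hc) / H0 if H0 > 0 else 0.0
```

A classification whose classes perfectly separate event days scores 1; one whose classes have the same event rate as the climatology scores 0.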
Gaddam et al. (2017), Journal of Earth System Science
1) The study develops seasonal sensitivity characteristics (SSCs) for four glaciers in the Western Himalaya to quantify changes in specific mass balance caused by monthly temperature and precipitation variations.
2) Using the SSCs and climate reanalysis data, the study reconstructs the specific mass balance of the glaciers from 1900-2010, finding that they experienced both positive and negative balances, except Naradu glacier, which only lost mass.
3) A cumulative mass loss of -133 ± 21.5 meters water equivalent was estimated for the four glaciers over the observation period, making this the first century-scale record of Himalayan glacier mass balances.
This paper presents a joint analysis of surface wave and microgravity surveys to estimate S-wave velocity and density models of the subsurface. Surface wave testing and microgravity surveys were conducted along a 205m survey line to map a buried channel filled with soft sediments. An S-wave velocity model was derived from surface wave data using cross-correlation analysis. Gravity data was then analyzed based on the S-wave model, converting it to a density model using soil properties. A least squares method was used to modify the density model to reduce residuals between calculated and observed gravity, resulting in a clear low-density area matching the low S-wave velocity channel.
This document presents a data-driven approach to establish relationships between the microstructure and hydraulic conductivity of packed soil particles. Soil particle packings were generated numerically and their microstructures characterized using 2-point statistics. Hydraulic conductivity was estimated using finite volume simulations. Principal component analysis was used to reduce the microstructural data, and regression analysis was employed to correlate hydraulic conductivity with the principal components, establishing a structure-property relationship. Leave-one-out cross validation was used to assess the regression models.
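The reduce-then-regress pipeline described above can be sketched without numerical libraries: extract the leading principal component of the mean-centred microstructure statistics by power iteration, then regress the property on the resulting score. Everything below (function names, data) is a synthetic illustration of that pipeline, not the study's 2-point statistics workflow.

```python
def pca_first_component(X, iters=200):
    """Leading principal component of the rows of X, via power iteration
    on the mean-centred data (the covariance matrix is never formed)."""
    n, p = len(X), len(X[0])
    mu = [sum(r[j] for r in X) / n for j in range(p)]
    Xc = [[r[j] - mu[j] for j in range(p)] for r in X]
    v = [1.0] * p
    for _ in range(iters):
        s = [sum(row[j] * v[j] for j in range(p)) for row in Xc]   # scores
        w = [sum(Xc[i][j] * s[i] for i in range(n)) for j in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    scores = [sum(row[j] * v[j] for j in range(p)) for row in Xc]
    return v, scores

def ols_1d(x, y):
    """Regress property y on principal-component score x."""
    n = len(x); xb = sum(x) / n; yb = sum(y) / n
    m = sum((a - xb) * (b - yb) for a, b in zip(x, y)) / \
        sum((a - xb) ** 2 for a in x)
    return m, yb - m * xb
```

In practice one keeps several components and cross-validates the regression, as the study does with leave-one-out validation.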
Two competing hydrological models (M1 and M2) that performed equally well when calibrated on streamflow data alone showed important differences once microgravity data were added. Time-lapse relative microgravity surveys conducted over multiple field campaigns in the Vermigliana catchment in the Italian Alps provided spatially distributed estimates of subsurface water storage changes to inform the hydrological modeling. The shape of the Pareto fronts obtained from multi-objective calibration on both streamflow and microgravity data gave useful insight into model limitations and demonstrated the value of geophysical data for better constraining the inversion procedure.
This document discusses applying a novel approach combining multi-criterion decision analysis (MCDA) with the generalized likelihood uncertainty estimation (GLUE) method to quantify uncertainty in hydrological modeling, specifically in the SLURP hydrological model. Rather than relying on a single overall Nash-Sutcliffe efficiency (NSE), the approach considers NSE values for different flow magnitudes simultaneously, and the TOPSIS MCDA method is used to compute predictive intervals from these flow-period NSE values. The Kootenay catchment case study demonstrates the MCDA-GLUE approach.
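TOPSIS itself is straightforward to sketch: vector-normalize the decision matrix, apply criterion weights, and score each alternative by its closeness to the ideal solution relative to the anti-ideal. The matrix in the test is a toy example, not the SLURP/Kootenay data.

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS closeness scores in [0, 1]; higher is better.

    matrix[i][j]: value of alternative i on criterion j,
    weights[j]: criterion weight, benefit[j]: True if larger is better.
    """
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[matrix[i][j] / norms[j] * weights[j] for j in range(n)] for i in range(m)]
    best = [max(V[i][j] for i in range(m)) if benefit[j]
            else min(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) if benefit[j]
             else max(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        dp = math.sqrt(sum((V[i][j] - best[j]) ** 2 for j in range(n)))
        dm = math.sqrt(sum((V[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(dm / (dp + dm))           # closeness coefficient
    return scores
```

In the MCDA-GLUE setting, each "alternative" would be a parameter set and each "criterion" an NSE value for one flow period.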
NPP site selection (Andreev and Varbanov)
This document summarizes a study comparing four proposed nuclear power plant sites in Bulgaria based on local seismic effects. Detailed site-specific seismic response analyses were conducted to obtain free-field ground motions at each site, accounting for uncertainty in geotechnical data using Latin Hypercube sampling. Input ground motions were selected from regional seismic sources. Response spectra at each site were compared to each other and to the uniform hazard response spectrum to evaluate local effects and rank the sites. The site with the lowest seismic hazard was nominated as preferred.
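Latin Hypercube sampling, used above to propagate geotechnical uncertainty, guarantees exactly one sample per equal-probability stratum in each input dimension, which is what makes it more efficient than plain Monte Carlo for small sample counts. A minimal stdlib sketch (the function name is mine, not from the study):

```python
import random

def latin_hypercube(n, dims, rng=None):
    """n samples in [0, 1)^dims with one sample per equal-probability
    stratum in every dimension (strata are shuffled independently)."""
    rng = rng or random.Random(0)
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)                       # random stratum order
        cols.append([(k + rng.random()) / n for k in strata])
    return [tuple(c[i] for c in cols) for i in range(n)]
```

The unit-cube samples are then mapped through the inverse CDFs of the uncertain soil parameters.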
Mineral potential projects in the Southern New England Orogen: a pilot study conducted by the Geological Survey of NSW and Kenex to provide a comprehensive account of the mineral resource potential of the region
This document describes a study that develops expressions for ice particle mass (m) and projected area (A) as a function of maximum dimension (D), for use in atmospheric models and remote sensing. It combines measurements of individual ice particle m and D from ground studies with estimates of m and A from aircraft probe measurements in ice clouds. The resulting m-D and A-D expressions are functions of temperature and cloud type (synoptic vs. anvil), and agree well with other field studies for temperatures between −60 and −20 °C. The expressions allow ice particle properties to be estimated over a wider size range than single power laws, and provide uncertainty estimates for parameterizing the relationships as power laws over limited ranges.
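The single power laws that the study generalizes are fitted in log-log space. The sketch below recovers m = a·D^b from synthetic mass-dimension pairs; the coefficients in the test are made up, not the paper's temperature-dependent parameterizations.

```python
import math

def fit_power_law(D, m):
    """Fit m = a * D**b by ordinary least squares in log-log space."""
    x = [math.log(d) for d in D]
    y = [math.log(v) for v in m]
    n = len(x)
    xb = sum(x) / n
    yb = sum(y) / n
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
        sum((xi - xb) ** 2 for xi in x)
    a = math.exp(yb - b * xb)
    return a, b
```

The study's point is that one (a, b) pair only holds over a limited size range, so such fits are best done piecewise with uncertainty estimates.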
Defining Homogeneous Climate Zones of Bangladesh using Cluster Analysis
Climate zones of Bangladesh are identified using the mathematical methodology of cluster analysis. Monthly rainfall data from 34 climate stations for 1991 to 2013 are used. Five agglomerative hierarchical clustering measures based on six commonly used proximity measures are chosen to perform the regionalization; in addition, three popular techniques (K-means, fuzzy, and density-based clustering) are applied initially to decide the most suitable method for identifying homogeneous regions. The stability of the clusters is tested against nine validity indices. The Ward method based on Euclidean distance, K-means, and fuzzy clustering are judged most likely to yield acceptable results in this case, as is often true in climatological research. The analysis identifies seven distinct climate zones in Bangladesh.
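Of the techniques listed, K-means is the simplest to sketch. The toy below clusters scalar station statistics (think of a single rainfall summary per station) with Lloyd's algorithm; the real study works on multivariate monthly records under several proximity measures, so treat this only as an illustration of the mechanism.

```python
def kmeans_1d(values, k, iters=50):
    """Lloyd's k-means on scalar values. Returns (centers, clusters)."""
    # spread initial centers across the sorted data
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each value to its nearest center
            j = min(range(k), key=lambda i: (v - centers[i]) ** 2)
            clusters[j].append(v)
        # move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters
```

Validity indices such as the nine used in the study then score how well-separated and stable the resulting clusters are.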
Cellular Automata are used in many disciplines to model complex system behaviour; their inherent simplicity and natural parallelism make them a very efficient tool for simulating large-scale physical phenomena. We explore the Cellular Automata framework to develop a physically based model for the spatial and temporal prediction of shallow landslides. Particular weight is given to modeling the hydrological processes, in order to investigate the hydrological triggering mechanisms and the importance of continuous water balance modeling for detecting the timing and location of soil-slip occurrences. Specifically, the 3D flow of water and the resulting water balance in the unsaturated and saturated zones are modeled, taking into account important phenomena such as hydraulic hysteresis and evapotranspiration. In this poster the hydrological component of the model is presented and tested against well-established benchmark experiments [Vauclin et al., 1975; Vauclin et al., 1979]. Furthermore, we investigate incorporating it into a hydrological catchment model for the temporal and spatial prediction of rainfall-triggered shallow landslides.
This document presents a study that uses Bayesian Regularized Neural Networks (BRNN) to model groundwater levels in the Mahabad aquifer in Iran. The study area and data collection process are described. Five factors (precipitation, evaporation, temperature, streamflow, and the previous month's groundwater level) are used as inputs to the BRNN model to estimate current groundwater levels. The results show that the BRNN model performs very well, with low errors and high accuracy and coefficient-of-determination values. The previous month's groundwater level and streamflow are found to be the most important predictors of current groundwater levels.
This document presents a method for predicting stream flow distributions based on climatic and geomorphic data alone, without discharge measurements. It combines a physically-based stream flow model with water balance and geomorphic recession flow models. Key parameters of the stream flow model are estimated from rainfall, potential evapotranspiration, and digital elevation model data. The method was tested on calibration and test catchments. While offering a unique approach, the method has limitations including additional assumptions and reduced accuracy of parameter estimates and flow regime predictions.
This document provides instructions for analyzing the distribution of earthquakes by magnitude, time, and location, with a focus on clustering characteristics. It discusses the Gutenberg-Richter law, which describes the relationship between earthquake magnitude and frequency, examines methods for calculating the b-value coefficient, and considers the effects of aftershocks on magnitude distributions. The document proposes a model relating the magnitude distributions of main shocks, aftershocks, and all earthquakes based on b-values and the degree of aftershock activity.
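b-value calculation of the kind the document describes is commonly done with Aki's maximum-likelihood estimator, with Utsu's half-bin correction for binned magnitudes. The sketch below assumes that standard formula (the document's exact method may differ), with synthetic magnitudes:

```python
import math

def b_value_aki(mags, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965, Utsu binning correction).

    mags: catalog magnitudes, mc: completeness magnitude,
    dm: magnitude bin width. Only events with M >= mc are used.
    """
    above = [m for m in mags if m >= mc]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

Because aftershock sequences can have b-values that differ from the background, the document's proposal to treat main shocks and aftershocks separately amounts to applying such an estimator to each subpopulation.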
Gaddam et al-2017-journal_of_earth_system_science (1)Vinay G
1) The study develops seasonal sensitivity characteristics (SSCs) for four glaciers in Western Himalaya to quantify changes in specific mass balance from monthly temperature and precipitation variations.
2) Using the SSCs and climate reanalysis data, the study reconstructs the specific mass balance of the glaciers from 1900-2010, finding they experienced both positive and negative balances, except Naradu glacier which only lost mass.
3) A cumulative mass loss of -133 ± 21.5 meters water equivalent was estimated for the four glaciers over the observation period, making this the first record of Himalayan glacier mass balances over a century scale.
This paper presents a joint analysis of surface wave and microgravity surveys to estimate S-wave velocity and density models of the subsurface. Surface wave testing and microgravity surveys were conducted along a 205m survey line to map a buried channel filled with soft sediments. An S-wave velocity model was derived from surface wave data using cross-correlation analysis. Gravity data was then analyzed based on the S-wave model, converting it to a density model using soil properties. A least squares method was used to modify the density model to reduce residuals between calculated and observed gravity, resulting in a clear low-density area matching the low S-wave velocity channel.
This document presents a data-driven approach to establish relationships between the microstructure and hydraulic conductivity of packed soil particles. Soil particle packings were generated numerically and their microstructures characterized using 2-point statistics. Hydraulic conductivity was estimated using finite volume simulations. Principal component analysis was used to reduce the microstructural data, and regression analysis was employed to correlate hydraulic conductivity with the principal components, establishing a structure-property relationship. Leave-one-out cross validation was used to assess the regression models.
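The reduce-then-regress workflow described above (2-point statistics, then PCA, then regression) can be sketched as follows. The arrays are random stand-ins for microstructure descriptors and hydraulic conductivity, and the five-component truncation is an arbitrary choice, not the study's:

```python
import numpy as np

# Sketch of reduce-then-regress: PCA (via SVD) on microstructure descriptors,
# then linear regression of a property on the leading principal components.
# The arrays are random stand-ins for 2-point statistics and conductivity.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 200))                 # 40 packings x 200 descriptors
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.1, 40)   # hypothetical property

Xc = X - X.mean(axis=0)                        # center before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T                         # first 5 principal components

design = np.column_stack([np.ones(len(y)), scores])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
```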
Two concurring hydrological models (M1 and M2) that performed equally well when calibrated using only streamflow data showed important differences when microgravity data was added. The shape of the Pareto fronts obtained from multi-objective calibration using both streamflow and microgravity data provided useful insights to identify model limitations and indicated the value of including geophysical data to better constrain the inversion procedure. Time-lapse, relative microgravity surveys conducted over multiple field campaigns in the Vermigliana catchment in the Italian Alps allowed obtaining spatially distributed estimates of subsurface water storage changes to inform hydrological modeling.
This document discusses applying a novel approach using multi-criterion decision analysis (MCDA) with the generalized likelihood uncertainty estimation (GLUE) method to quantify uncertainty in hydrological modeling. Specifically, it examines uncertainty in the SLURP hydrological model. Rather than considering overall Nash-Sutcliffe efficiency, the approach considers NSE values for different flow magnitudes simultaneously. The TOPSIS MCDA method is used to compute predictive intervals by considering NSE values for different flow periods simultaneously. The Kootenay Catchment case study is used to demonstrate the MCDA-GLUE approach.
Npp site selection_andreev_varbanov_submitStoyan Andreev
This document summarizes a study comparing four proposed nuclear power plant sites in Bulgaria based on local seismic effects. Detailed site-specific seismic response analyses were conducted to obtain free-field ground motions at each site, accounting for uncertainty in geotechnical data using Latin Hypercube sampling. Input ground motions were selected from regional seismic sources. Response spectra at each site were compared to each other and to the uniform hazard response spectrum to evaluate local effects and rank the sites. The site with the lowest seismic hazard was nominated as preferred.
Mineral potential projects in the Southern New England Orogen: a pilot study conducted by the Geological Survey of NSW and Kenex to provide a comprehensive account of the mineral resource potential of the region.
This document describes a study that develops expressions for ice particle mass (m) and projected area (A) as a function of maximum dimension (D), for use in atmospheric models and remote sensing. It combines measurements of individual ice particle m and D from ground studies with estimates of m and A from aircraft probe measurements in ice clouds. The resulting m-D and A-D expressions are functions of temperature and cloud type (synoptic vs. anvil), and agree well with other field studies for temperatures between −60 and −20 °C. The expressions allow ice particle properties to be estimated over a wider size range than single power laws, and provide uncertainty estimates for parameterizing the relationships as power laws over limited ranges.
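Power-law m-D relationships of the kind parameterized here are typically fit in log-log space. A minimal sketch on noise-free synthetic data with an assumed prefactor and exponent:

```python
import numpy as np

# Fit a power law m = a * D**b by least squares in log-log space.
# D and m are noise-free synthetic stand-ins for ice-particle data.
D = np.linspace(0.1, 10.0, 50)      # maximum dimension (assumed units: mm)
m = 0.005 * D**2.1                  # synthetic mass with known a and b

b, log_a = np.polyfit(np.log(D), np.log(m), 1)
a = np.exp(log_a)                   # recovers a = 0.005, b = 2.1 here
```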
Defining Homogenous Climate zones of Bangladesh using Cluster AnalysisPremier Publishers
Climate zones of Bangladesh are identified using the mathematical methodology of cluster analysis. Monthly rainfall data from 34 climate stations covering 1991 to 2013 are used. Five agglomerative hierarchical clustering measures, based on six commonly used proximity measures, are chosen to perform the regionalization. In addition, three popular techniques, K-means, fuzzy, and density-based clustering, are applied to decide the most suitable method for identifying homogeneous regions. Cluster stability is also tested against nine validity indices. Ward's method based on Euclidean distance, K-means, and fuzzy clustering are judged the most likely to yield acceptable results in this case. The analysis identifies seven distinct climate zones in Bangladesh.
The use of Cellular Automata is extended in various disciplines for the modeling of complex system procedures. Their inherent simplicity and their natural parallelism make them a very efficient tool for the simulation of large scale physical phenomena. We explore the framework of Cellular Automata to develop a physically based model for the spatial and temporal prediction of shallow landslides. Particular weight is given to the modeling of hydrological processes in order to investigate the hydrological triggering mechanisms and the importance of continuous modeling of water balance to detect timing and location of soil slips occurrences. Specifically, the 3D flow of water and the resulting water balance in the unsaturated and saturated zone is modeled taking into account important phenomena such as hydraulic hysteresis and evapotranspiration. In this poster the hydrological component of the model will be presented and tested against well established benchmark experiments [Vauclin et al, 1975; Vauclin et al, 1979]. Furthermore, we investigate the applicability of incorporating it in a hydrological catchment model for the prediction (temporal and spatial) of rainfall-triggered shallow landslides.
This document presents a study that uses Bayesian Regularized Neural Networks (BRNN) to model groundwater levels in the Mahabad aquifer in Iran. The study area and data collection process are described. Five factors - precipitation, evaporation, temperature, streamflow, and previous month's groundwater level - are used as inputs to the BRNN model to estimate current groundwater levels. The results show the BRNN model performs excellently with low errors and high accuracy and determination values. Previous month's groundwater level and streamflow are found to be the most important predictors of current groundwater levels.
This document presents a method for predicting stream flow distributions based on climatic and geomorphic data alone, without discharge measurements. It combines a physically-based stream flow model with water balance and geomorphic recession flow models. Key parameters of the stream flow model are estimated from rainfall, potential evapotranspiration, and digital elevation model data. The method was tested on calibration and test catchments. While offering a unique approach, the method has limitations including additional assumptions and reduced accuracy of parameter estimates and flow regime predictions.
This document provides instructions for analyzing the distribution of earthquakes by magnitude, time, and location, with a focus on clustering characteristics. It discusses the Gutenberg-Richter law, which describes the relationship between earthquake magnitude and frequency. It also examines methods for calculating the b-value coefficient and considers the effects of aftershocks on magnitude distributions. The document proposes a model relating the magnitude distributions of main shocks, aftershocks, and all earthquakes based on b-values and the degree of aftershock activity.
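b-value estimation of the kind discussed above is commonly done with Aki's (1965) maximum-likelihood formula, b = log10(e) / (mean(M) - Mc). A minimal sketch on a synthetic catalogue; the completeness threshold and catalogue are assumptions:

```python
import numpy as np

# Maximum-likelihood b-value (Aki, 1965): b = log10(e) / (mean(M) - Mc),
# using magnitudes M at or above the completeness magnitude Mc.
def b_value_mle(magnitudes, mc):
    mags = np.asarray(magnitudes)
    mags = mags[mags >= mc]
    return np.log10(np.e) / (mags.mean() - mc)

# Synthetic catalogue consistent with Gutenberg-Richter and b = 1
rng = np.random.default_rng(42)
mc = 4.0
beta = 1.0 * np.log(10.0)                        # b * ln(10)
mags = mc + rng.exponential(1.0 / beta, size=20000)
b_hat = b_value_mle(mags, mc)                    # close to 1.0
```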
This document summarizes a case study that used geophysical methods to characterize focused seepage through an earthfill dam. Resistivity and self-potential tomography were used to investigate anomalous seepage. The self-potential signals provide information about groundwater flow patterns because the source current density responsible for the SP signals is proportional to the Darcy velocity. However, the resistivity distribution also influences the SP distribution, so resistivity and SP data need to be used together. The study conducted resistivity and SP surveys at a dam in Colorado experiencing anomalous seepage at the toe. The data revealed SP and resistivity anomalies that were used to delineate three anomalous seepage zones and estimate the source of localized seepage.
This document provides an overview of digital rock technology and its industrial applications. Digital rock technology uses imaging and modeling to obtain a digital representation of a rock's pore structure and simulate fluid flow processes. It can be used to determine properties like porosity, permeability, and relative permeability. This provides similar data as traditional core analysis but with additional benefits like modeling multiple properties of the same sample simultaneously. While imaging capabilities have improved, fully modeling complex rock samples remains challenging due to limitations in resolving pore structures and simulating complex fluid-rock interactions. However, digital rock technology is becoming increasingly integrated into industry workflows for reservoir modeling and understanding fluid transport at pore scales.
This document describes the implementation of a Gaussian Markov random field sampler for forward uncertainty quantification in the Ice-sheet and Sea-level System Model (ISSM). The sampler generates realizations of Gaussian random fields with Matérn covariance to characterize spatially varying uncertain input parameters in ice sheet models as random fields. It is based on representing such random fields as solutions to a stochastic partial differential equation, which is then discretized using finite elements. This provides a computationally efficient way to generate random field samples on complex ice sheet model meshes. The implementation is tested on synthetic problems and applied to assess uncertainties in projections of Pine Island Glacier retreat.
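For intuition, a Gaussian random field with a Matérn-family covariance can be sampled directly via Cholesky factorization on small problems; the SPDE discretization described above is what makes this tractable on large ice-sheet meshes. The grid, correlation length, and choice of the nu = 1/2 (exponential) kernel below are assumptions:

```python
import numpy as np

# Dense sampling of a 1D Gaussian random field with a Matern nu = 1/2
# (exponential) covariance via Cholesky; feasible only for small grids,
# which is why the SPDE route is preferred on real ice-sheet meshes.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
sigma, ell = 1.0, 0.2                            # assumed std dev, corr. length
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)
L = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))   # jitter for stability
sample = L @ rng.standard_normal(x.size)         # one field realization
```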
Linear inversion of absorptive/dispersive wave field measurements: theory and...Arthur Weglein
The use of inverse scattering theory for the inversion of viscoacoustic wave field
measurements, namely for a set of parameters that includes Q, is by its nature very
different from most current approaches for Q estimation. In particular, it involves an
analysis of the angle- and frequency-dependence of amplitudes of viscoacoustic data
events, rather than the measurement of temporal changes in the spectral nature of
events. We consider the linear inversion for these parameters theoretically and with
synthetic tests. The output is expected to be useful in two ways: (1) on its own it
provides an approximate distribution of Q with depth, and (2) higher order terms in
the inverse scattering series as it would be developed for the viscoacoustic case would
take the linear inverse as input.
We will begin, following Innanen (2003) by casting and manipulating the linear
inversion problem to deal with absorption for a problem with arbitrary variation of
wavespeed and Q in depth, given a single shot record as input. Having done this, we
will numerically and analytically develop a simplified instance of the 1D problem. This
simplified case will be instructive in a number of ways, first of all in demonstrating
that this type of direct inversion technique relies on reflectivity, and has no interest in
or ability to analyse propagation effects as a means to estimate Q. Secondly, through
a set of examples of slightly increasing complexity, we will demonstrate how and where
the linear approximation causes more than the usual levels of error. We show how
these errors may be mitigated through use of specific frequencies in the input data,
or, alternatively, through a layer-stripping based, or bootstrap, correction. In either
case the linear results are encouraging, and suggest the viscoacoustic inverse Born
approximation may have value as a standalone inversion procedure.
AGN Feeding and Feedback in M84: From Kiloparsec Scales to the Bondi RadiusSérgio Sacani
We present the deepest Chandra observation to date of the galaxy M84 in the Virgo Cluster, with over 840 kiloseconds of data provided by legacy observations and a recent 730 kilosecond campaign. The increased signal-to-noise allows us to study the origins of the accretion flow feeding the supermassive black hole in the center of M84 from the kiloparsec scales of the X-ray halo to the Bondi radius, R_B. Temperature, metallicity, and deprojected density profiles are obtained in four sectors about M84's AGN, extending into the Bondi radius. Rather than being dictated by the potential of the black hole, the accretion flow is strongly influenced by the AGN's bipolar radio jets. Along the jet axis, the density profile is consistent with n_e ∝ r^(−1); however, the profiles flatten perpendicular to the jet. Radio jets produce a significant asymmetry in the flow, violating a key assumption of Bondi accretion. Temperature in the inner kiloparsec is approximately constant, with only a slight increase from 0.6 to 0.7 keV approaching R_B, and there is no evidence for a temperature rise imposed by the black hole. The Bondi accretion rate Ṁ_B exceeds the rate inferred from AGN luminosity and jet power by over four orders of magnitude. In sectors perpendicular to the jet, Ṁ_B measurements agree; however, the accretion rate is > 4σ lower in the North sector along the jet, likely due to cavities in the X-ray gas. Our measurements provide unique insight into the fueling of AGN responsible for radio-mode feedback in galaxy clusters.
The atacama cosmology_telescope_measuring_radio_galaxy_bias_through_cross_cor...Sérgio Sacani
The cosmic microwave background points to invisible dark matter, marking the spot where jets of material travel at speeds close to the speed of light, according to an international team of astronomers. The study's lead author, Rupert Allison of the University of Oxford, presented the results on 6 July 2015 at the National Astronomy Meeting at Venue Cymru in Llandudno, Wales.
At present, no one knows for certain what dark matter is made of, but it accounts for about 26% of the energy content of the universe, with massive galaxies forming in dense regions of dark matter. Although invisible, dark matter reveals itself through its gravitational effect: a large clump of dark matter pulls normal matter (such as electrons, protons, and neutrons) in through its own gravity, eventually packing it together to create stars and entire galaxies.
Many of the largest of these are active galaxies with supermassive black holes at their centers. Some of the gas falling toward the black hole is ejected as jets of particles and radiation. Observations with radio telescopes show that these jets sometimes extend millions of light-years from the galaxy, farther even than the extent of the galaxy itself.
Scientists expect the jets to live in regions with an above-average concentration of dark matter. But because dark matter is invisible, testing this idea is not straightforward.
This study used elastic impedance inversion and pre-stack attribute analysis on 3D seismic data with limited well control to identify productive zones in an offshore Iranian reservoir. Poisson dampening factor and Lame parameters extracted from pre-stack simultaneous inversion effectively delineated hydrocarbon-bearing areas, validated by crossplots of elastic impedance volumes. The attribute analysis results at well locations were generalized to the full seismic volume since the reservoir was considered laterally homogeneous.
Jeremie Giraud's PhD research, conducted at the Centre for Exploration Targeting, University of Western Australia, investigates the use of probabilistic geological models and statistical distributions of petrophysical properties to constrain joint potential-field inversion.
This document describes a new robust fixed rank kriging (R-FRK) method for improving the spatial completeness and accuracy of satellite sea surface temperature (SST) products. The R-FRK method addresses two key issues: 1) it allows for dimension reduction kriging to be applied to satellite SST data over irregular regions, and 2) it incorporates a data-driven bias correction model to address systematic biases in the satellite SST measurements. The method is applied to Moderate Resolution Imaging Spectroradiometer (MODIS) SST data from 2003 and 2010. Validation using drifting buoy observations shows the method produces spatially complete SST fields with high accuracy.
This document summarizes recent results from the STAR experiment regarding correlations and fluctuations in heavy ion collisions at RHIC. It discusses measurements of elliptic and directed flow that provide evidence for local equilibration and pressure gradients in the quark-gluon plasma. HBT interferometry measurements indicate a source elongated perpendicular to the reaction plane, consistent with initial collision geometry. Charge-dependent number correlations reveal modified hadronization in the quark-gluon plasma compared to pp collisions, suggesting local charge conservation effects during hadronization. Overall, the results provide insights into the equilibration and relevant degrees of freedom in the quark-gluon plasma.
1) NREL is a national laboratory operated by the Alliance for Sustainable Energy, LLC that focuses on energy efficiency and renewable energy.
2) The presentation discusses options for quantifying solar resource from measurements including horizontal and inclined surfaces, and methods for transposing horizontal irradiance data to plane of array irradiance.
3) It notes that isotropic models used to approximate this transposition can underestimate plane-of-array irradiance by 5-20% compared to anisotropic physics-based models that better represent cloud conditions and the distribution of sky radiance.
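The isotropic transposition referred to in point 3 combines beam, sky-diffuse, and ground-reflected terms. A minimal sketch with illustrative irradiance values (not NREL's code):

```python
import math

# Isotropic-sky transposition of irradiance components to plane of array.
# All inputs are illustrative assumptions (W/m^2, degrees).
def poa_isotropic(dni, dhi, ghi, tilt_deg, aoi_deg, albedo=0.2):
    tilt = math.radians(tilt_deg)
    aoi = math.radians(aoi_deg)
    beam = dni * max(math.cos(aoi), 0.0)                    # direct on tilt
    sky = dhi * (1.0 + math.cos(tilt)) / 2.0                # isotropic diffuse
    ground = ghi * albedo * (1.0 - math.cos(tilt)) / 2.0    # ground-reflected
    return beam + sky + ground

poa = poa_isotropic(dni=800.0, dhi=100.0, ghi=600.0, tilt_deg=30.0, aoi_deg=20.0)
```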
This document summarizes a study estimating geo-mechanical properties of reservoir rocks from well log data. The study presents a method to predict shear wave velocity from compressional wave velocity, porosity, and shale content when direct shear wave measurements are unavailable. Elastic properties including Poisson's ratio, shear modulus, bulk modulus, and Young's modulus are then calculated. These properties allow evaluation of formation strength and prediction of safe production rates without sand production. The results show shear and compressional wave velocities are linearly related. Calculated combined modulus of strength and shear modulus to compressibility ratio values indicate the formations can generally be produced safely below an optimum flow rate without significant sand production risks.
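The elastic properties listed above follow from standard isotropic relations between compressional velocity, shear velocity, and bulk density. A sketch with assumed input values, not the study's well logs:

```python
# Standard isotropic elastic moduli from Vp, Vs, and bulk density.
# Input values are assumptions for illustration, not the paper's logs.
def elastic_moduli(vp, vs, rho):
    """vp, vs in m/s, rho in kg/m^3; returns (poisson, shear, bulk, young) in Pa."""
    g = rho * vs**2                                        # shear modulus
    k = rho * (vp**2 - 4.0 * vs**2 / 3.0)                  # bulk modulus
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))   # Poisson's ratio
    e = 2.0 * g * (1.0 + nu)                               # Young's modulus
    return nu, g, k, e

nu, g, k, e = elastic_moduli(vp=3500.0, vs=2000.0, rho=2400.0)
```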
Estimating geo mechanical strength of reservoir rocks from well logs for safe...Alexander Decker
This document summarizes a study that estimated geo-mechanical properties of reservoir rocks from well log data in order to determine safety limits for sand-free hydrocarbon production. The study used well logs to predict shear wave velocity and then calculate elastic moduli, which can indicate a formation's mechanical strength. The results showed that the combined modulus of strength and shear modulus to compressibility ratio for the formations were relatively low, suggesting sand production should not occur below certain flow rates. This information on a formation's mechanical properties can help minimize risks during hydrocarbon exploration and production.
1. The study area located in the Songliao Basin contains thin and laterally discontinuous reservoirs that are difficult to predict using conventional seismic inversion methods.
2. Coherent and frequency division attributes were used to qualitatively predict reservoir distribution, showing the sand bodies are concentrated in central bands with different orientations for different formations.
3. Geostatistical inversion was used to quantitatively predict reservoir distribution, generating multiple equally probable impedance models that match well logging data and seismic data, accurately depicting the reservoir distribution rules in the study area.
This document summarizes a numerical investigation into the effects of roughness on near-bed turbulence characteristics in oscillatory flows. Direct numerical simulations were performed for two particle sizes corresponding to large gravel and small sand particles. A double-averaging technique was used to study the wake-field spatial inhomogeneities introduced by the roughness. Preliminary results showed additional production and transport terms in the double-averaged Reynolds stress budgets, indicating alternate turbulent energy transfer pathways. Budgets of the normal Reynolds stress components revealed redistribution of energy from the streamwise to the other components due to pressure work. The large gravel particles significantly modulated near-bed flow structures and isotropization, while elongated horseshoe structures formed for the sand case due to high shear.
A precise water_abundance_measurement_for_the_hot_jupiter_wasp_43bSérgio Sacani
This document presents a precise measurement of the water abundance in the atmosphere of the exoplanet WASP-43b using transmission and thermal emission spectroscopy from the Hubble Space Telescope. The key findings are:
1) The water content of WASP-43b's atmosphere is consistent with solar composition at planetary temperatures, ranging from 0.4 to 3.5 times the solar water abundance.
2) This metallicity measurement extends the trend seen in the solar system of lower metal enrichment for higher mass planets.
3) Measuring a planet's water content constrains its formation location in the protoplanetary disk and provides insight into planetary formation models.
This study develops empirical correlations between cumulative absolute velocity (CAV) and spectral accelerations (Sa) using ground motion records from the NGA database. CAV-Sa correlations are influenced by rupture distance and presence of velocity pulses. Piecewise linear fitting equations are provided to quantify the correlations for various periods from 0.01 to 10 seconds. The correlations provide a useful way to characterize the joint occurrence of CAV and Sa, which can be applied in ground motion selection.
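CAV itself is simply the time integral of absolute acceleration over the record. A minimal sketch on a synthetic accelerogram; the pulse shape and sampling are assumptions:

```python
import numpy as np

# Cumulative absolute velocity: CAV = integral over the record of |a(t)| dt.
# The decaying sine accelerogram is a synthetic stand-in for a real record.
dt = 0.005                                   # sampling interval (s)
t = np.arange(0.0, 10.0, dt)
acc = 0.5 * np.sin(2.0 * np.pi * t) * np.exp(-0.3 * t)   # acceleration (m/s^2)

cav = float(np.sum(np.abs(acc)) * dt)        # rectangle-rule integral (m/s)
```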
This document examines how the assumption of homogeneous vs heterogeneous radioactive contamination in soil/sediment impacts the external radiation dose rates of fauna. It analyzes contamination profiles from sediment samples in Canada and soil samples in Austria involving various radionuclides. Dose conversion coefficients are calculated using a dosimetry model for different organisms, locations within the contaminated media, and exposure scenarios. The results show dose rates can vary by three orders of magnitude depending on the specific situation. The assumption of homogeneous contamination is not always conservative.
You can watch the videos below for the Gravimetry course.
Link 01: https://www.youtube.com/watch?v=HTyjVaVGx0k
Link 02: https://www.youtube.com/watch?v=fUkfgI8XaOE
The document discusses gravity anomalies and density variations in different regions based on gravity data. It shows how gravity maps reveal details about crustal thickness, tectonic features like faults and volcanic zones, and plate boundaries. Specific examples discussed include the Tibetan Plateau, Central America subduction zone, an area in Chugoku, Japan, and the state of Florida in the US. Regional gravity data can be used to model density changes associated with plate tectonics, crustal evolution, and volcanic and tectonic activity.
The USF team reviewed a geophysical investigation of the Kar Kar region conducted by WesternGeco in 2011. They found that WesternGeco's magnetotelluric (MT) data and models were of high quality. Both the WesternGeco and USF MT models identified a low resistivity zone at 300m depth that correlates with a water-bearing zone found in Borehole 4. USF performed gravity modeling which identified a north-south trending basin reaching 1500m depth, consistent with mapped faults. A preliminary hydrothermal model suggested observed temperatures could result from deep circulation of meteoric waters in the basin without needing a localized heat source. Additional geophysical data is recommended around the Jermaghbyur hot springs to
This document summarizes a study that used gravity data to delineate underground structure in the Beppu geothermal field in Japan. Analysis of Bouguer anomaly maps revealed high anomalies in the southern and northern parts of the study area that correspond to known geological formations. Edge detection filtering of the gravity data helped identify subsurface faults, including the northern edge of the high southern anomaly corresponding to the Asamigawa Fault. Depth modeling of the gravity basement showed differences between the southern and northern hot spring areas, with steep basement slopes along faults in the south and uplifted basement in the north.
This document summarizes the development of a new ultra-high resolution model of Earth's gravity field called GGMplus. Key points:
- GGMplus combines satellite gravity data from GOCE and GRACE with terrestrial gravity data and topography to achieve unprecedented 200m spatial resolution globally.
- It provides gridded estimates of gravity, horizontal and radial field components, and quasi-geoid heights at over 3 billion points covering 80% of the Earth's land.
- GGMplus reveals new details of small-scale gravity variations and identifies locations of minimum and maximum gravity, suggesting peak-to-peak variations are 40% larger than previous estimates. The model will benefit scientific and engineering applications.
Gravity measurements were taken in a region of China covering the south-north earthquake belt in 1998, 2000, 2002, and 2005. Researchers noticed significant gravity changes in the region surrounding Wenchuan and suggested in 2006 that a major earthquake could occur there in 2007 or 2008. While gravity changes were significant at some locations, more research is needed to determine if they could be considered a precursor. Uncertainties exist from measurement errors, hydrologic effects, and crustal movements. Improved data collection and analysis could enhance using gravity monitoring for earthquake research.
The document provides guidelines for implementing the H/V spectral ratio technique using ambient vibration measurements to evaluate site effects. It recommends procedures for experimental design, data processing, and interpretation. The key recommendations include measuring for sufficient duration depending on expected frequency, using multiple measurement points, avoiding disturbances, and interpreting H/V peaks in context with geological and geophysical data. Reliable H/V peaks are defined as having a clear maximum within expected frequency ranges and uncertainties. The guidelines aim to help apply the technique while accounting for its limitations.
Geopsy is a widely used professional program. Experience with professional software is a much sought-after skill in new graduates. One of my students plans to use it in a study.
ORIGINAL ARTICLE
Geostatistical integration of seismic velocity and resistivity data
for probabilistic evaluation of rock quality
Seokhoon Oh
Received: 13 June 2011 / Accepted: 1 September 2012 / Published online: 12 September 2012
© Springer-Verlag 2012
Abstract This study applied a geostatistical approach to integrate various geophysical results for the probabilistic evaluation of rock quality designation (RQD) in regions between boreholes. Two geophysical survey results, electrical resistivity and seismic velocity, were transformed into probabilistic distributions using the RQD values directly observed at the boreholes and an indicator value method. The initial spatial distribution of RQD, inferred by indicator kriging of the borehole observations, was then improved with the support of the geophysical results through integration by a permanence ratio. The integration compensated for the deficiencies of each individual exploration method. Moreover, the probabilistic character of the final RQD distribution allowed a more quantitative rock quality evaluation and better decision making for safety design.
Keywords Geostatistics · RQD · Permanence ratio · Integration · Decision making
Introduction
Rock quality designation (RQD) represents the degree of jointing or fracturing in a rock mass, measured as the percentage of drill core recovered in pieces 10 cm or longer. High-quality rock is generally taken to have an RQD above 75 %, and low-quality rock below 50 %. RQD has considerable value for estimating the support of rock tunnels, and it forms a basic element of the major rock mass classification systems. In general, RQD is determined only at boreholes.
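As a minimal numerical illustration of the definition above (the function name and the sample piece lengths are hypothetical, not from the study):

```python
def rqd(piece_lengths_cm, run_length_cm):
    """Rock quality designation: percentage of a core run made up of
    intact pieces at least 10 cm long."""
    intact = sum(l for l in piece_lengths_cm if l >= 10)
    return 100.0 * intact / run_length_cm

# Hypothetical 100-cm core run recovered in pieces of 25, 8, 14, 5, 30, and 12 cm:
print(rqd([25, 8, 14, 5, 30, 12], 100))  # 81.0 -> high-quality rock (> 75 %)
```

Only the pieces of 25, 14, 30, and 12 cm count toward the total, giving RQD = 81 %.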
In regions between boreholes, RQD must be determined indirectly; one of the simplest ways is to interpolate the observed RQD values over the rest of the region. This may be suitable for a site with densely distributed boreholes, but in most cases it is not valid. The problem is a natural fit for the geostatistical setting, in which expensive, exact primary data (RQD) are estimated with the help of inexpensive, easily obtained but approximate secondary data. The secondary information used in this study is the result of geophysical exploration. Geophysical data constrain the physical properties of subsurface structure indirectly, which makes them well suited as secondary information to complement the hard borehole data. Traditionally, geostatistics has played an important role in integrating various sources of information (Haas and Olivier 1994; Torres-Verdin et al. 1999; Oh and Kwon 2001; Oh et al. 2004). Its major advantage is that it produces a newly integrated product reflecting the common element of each source through the analysis of spatial relations. In the process, uncertainty analyses based on probabilistic approaches give decision makers a degree of confidence in the final products. Among the various geostatistical processes, those used for the interpretation of geophysical exploration are the most important here. Geophysical exploration has become increasingly integrated (i.e., two or more surveys applied together) over the years (Chakravarthi et al. 2007; Fregoso and Gallardo 2009; Lawton and Isaac 2007). This tendency has arisen because the risks stemming from the uncertainty of interpretation must be minimized; integration therefore enhances the decision maker's confidence level.
S. Oh (✉)
Department of Energy and Resources Engineering, Kangwon National University, 192-1 Hyoja 2-dong, Chuncheon, Kangwon 200-701, South Korea
e-mail: gimul@kangwon.ac.kr
Environ Earth Sci (2013) 69:939–945
DOI 10.1007/s12665-012-1978-3
In this study, electrical resistivity and seismic velocity were integrated to produce an RQD distribution in a probabilistic way. One of the difficult aspects of dealing with geophysical data probabilistically is how to describe the geophysical result with probability (Oh and Kwon 2001). This problem is solved by adopting geostatistical indicator kriging and the soft indicator value (Deutsch and Journel 1997).
A previous study (Barton 2006) stated that seismic velocity and RQD may be correlated; however, the correlation also varies strongly with location and class of rock. For example, the correlation between the two physical properties is generally strong in gneiss regions, but the pattern changes sharply in sedimentary regions, depending on the degree of porosity and cementation. Seismic velocities depend on a variety of parameters, first of all the composition and type of rock. Not only do larger cracks or fractures lower the P-wave velocity, but microcracks resulting from weathering also have a significant effect on the seismic velocity, Vp. Microcracks, however, may have little effect on RQD, so that the rock still classifies as high quality. This peculiarity makes it difficult to classify rock quality from seismic velocity alone. Nevertheless, seismic velocity is believed to represent the stiffness of rock accurately, considering the propagation of seismic waves. In this study, the seismic velocity Vp is used to estimate the subsurface status. Although Vs is known to correlate better with rock stiffness than Vp, practical aspects such as source generation, mode conversion, and late arrival limit the use of Vs surveys in situ. Beyond these factors, seismic velocity is still affected by further parameters, especially at the micro scale; such factors are treated here as uncertainty, and a macroscopic approach is adopted.
Another way of estimating RQD geophysically is through electrical resistivity. The electrical resistivity of rock is sensitive to porous media (Oh 2012), and it represents the state of the rock, which is believed to reflect its structural characteristics. For example, a fracture zone along a fault line or shear zone is highly likely to contain porous media and would therefore show low resistivity values. However, electrical resistivity also depends on a variety of parameters in the rock, such as fluid content and fluid resistivity, and it sometimes produces obscure or uncertain results. Considering the problems that can occur in the independent interpretation of each method, a combination of the two different results can improve RQD estimation through geostatistical processing.
Geostatistical data integration
Journel (2002) proved that the probability of estimating event A based on information from B and C is given by

P(A|B,C) = a / (a + bc)    (1)

where a, b, and c are, respectively,

a = (1 − P(A)) / P(A),  b = (1 − P(A|B)) / P(A|B),  c = (1 − P(A|C)) / P(A|C).    (2)
This formula is called the permanence ratio (Journel 2002). It simply means that the probability of event A, given the evidence of B and C, can be calculated from the prior probability of the event itself [P(A)] and the probabilities of A given evidence B [P(A|B)] and evidence C [P(A|C)]. The permanence ratio model originally proposed by Journel (2002) was later extended to the tau model by Krishnan et al. (2005). Unlike the traditional Bayesian combination approach, which rests on an assumption of conditional independence, the permanence ratio model works with logistic-type ratios of the prior and posterior probabilities. The model conceptually accounts for redundancy between the data sources; if applied directly, however, the equation is equivalent to combination under conditional independence, so tau weights should be considered to account for data redundancy (although deriving the tau values remains very difficult). Because this article aims only to apply the approach to field integration, a detailed description is omitted. In geophysical data interpretation, event A might be an estimate made by a geophysical survey, for example the probability of ground subsidence, the existence of a subsurface cavity, or the probability of an RQD over 60 %. The information B and C might be supporting data, such as a geophysical survey or a borehole test. The probability P(A|B,C) is then an estimate of the primary parameter A based on the secondary information B and C via the permanence ratio. According to Journel (2002), the rule can be extended to three or more sources of secondary information, as the integration of related information requires. In contrast to Bayesian integration, which demands conditional or perfect independence, this approach improves the integrated information according to the contribution of each source, providing a more effective way of combining information from different sources.
In this study, event A is defined as the probability of RQD over or under a specified value (e.g., an RQD over 60 % or under 30 %) at an arbitrary point, and event A is estimated from electrical resistivity [P(A|B)] and seismic velocity [P(A|C)] using the permanence ratio.
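The combination rule of Eqs. (1) and (2) can be sketched numerically. The function below is an illustrative implementation (the function name and the example probabilities are hypothetical, not values from the study):

```python
def permanence_ratio(p_a, p_ab, p_ac):
    """Combine the conditional probabilities P(A|B) and P(A|C) with the
    prior P(A) into P(A|B,C) via Journel's permanence-of-ratios rule."""
    a = (1.0 - p_a) / p_a      # prior odds against A
    b = (1.0 - p_ab) / p_ab    # odds against A given B
    c = (1.0 - p_ac) / p_ac    # odds against A given C
    return a / (a + b * c)     # Eq. (1)

# If resistivity and velocity each support "RQD > 60 %" more than the prior,
# the combined probability is reinforced beyond either single source:
print(round(permanence_ratio(0.5, 0.7, 0.8), 4))  # 0.9032
```

Note that when one source is uninformative (P(A|B) = P(A)), the combined result reduces to the other source's probability, which is the permanence property.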
Probabilistic estimation of geophysical results
Many researchers have attempted to treat geophysical problems probabilistically; however, it is always difficult to describe geophysical results in probabilistic terms (Oh and Kwon 2001). The difficulty mainly arises because geophysical results cannot easily be cast as frequencies, the most common approach in other fields. The geostatistical approach resolves this impartially by considering the correlation between the spatial distribution of the data and the point to be estimated. Although the final interpretation of data from different sources may still depend on experienced geophysicists, this approach also provides the expert with useful information.
Prior distribution P(A)
The RQD values obtained at the boreholes form the primary information to be estimated by geostatistical integration. All core drilling was conducted at NX size, and only complete cores were used for RQD measurement. Except for a small number of cores obtained near the surface, most of the samples were complete, and measuring RQD was not difficult. Geologically, the study area is mainly composed of metamorphic and volcanic rocks, apart from weathered sedimentary rocks at the surface. The prior distribution for RQD, P(A), was made by indicator kriging (Deutsch and Journel 1997) of the RQD observed at each borehole. Figure 1 shows the estimated probability of an RQD value over 60 % from P(A); a larger estimated probability indicates a higher probability of an RQD over 60 %. As can be seen, the spatial variation of the distribution is limited to the area around the boreholes, because of the lack of data support in the region without boreholes. This distribution depends solely on direct observation of the primary value, RQD, with no relation to the geophysical results, and can therefore be selected as the prior information, P(A).
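The indicator coding behind this prior can be sketched as follows. This is a simplified stand-in: the actual kriging weights come from a fitted variogram model, whereas here an inverse-distance weighting is used purely for illustration, and all names and coordinates are hypothetical:

```python
import numpy as np

def indicator(values, threshold):
    """Hard indicator transform: 1 where the observed RQD exceeds the
    threshold, 0 otherwise."""
    return (np.asarray(values, dtype=float) > threshold).astype(float)

def local_probability(borehole_xy, rqd_values, target_xy, threshold=60.0, power=2.0):
    """Probability that RQD exceeds the threshold at an unsampled point,
    as a distance-weighted average of the borehole indicators (a stand-in
    for indicator-kriging weights)."""
    ind = indicator(rqd_values, threshold)
    d = np.linalg.norm(np.asarray(borehole_xy, float) - np.asarray(target_xy, float), axis=1)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return float(np.dot(w, ind) / w.sum())

# Two boreholes with RQD 75 and 40; a point midway between them gets
# probability 0.5 of exceeding RQD = 60:
print(local_probability([[0, 0], [10, 0]], [75, 40], [5, 0]))  # 0.5
```

Because the estimate is a weighted average of 0/1 indicators, it is automatically bounded in [0, 1] and can be read directly as a probability.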
Secondary information: P(A|B) and P(A|C)
The supporting data, P(A|B) and P(A|C), were derived from geophysical surveys of electrical resistivity and seismic velocity. Figures 2 and 3 show the survey results for electrical resistivity and seismic velocity, respectively, including the borehole locations. Zones that appear weak in the resistivity section are marked A to H so that their variation after integration can be checked. The survey project was originally planned to assess the rock quality through a mountain, information needed to design a tunnel; the location of the projected tunnel is also displayed in the figures. The electrical resistivity result in Fig. 2 was obtained with a dipole–dipole array and its 2D inversion. The survey was conducted with a SuperSting R8 (AGI, Inc.), using a total of 128 electrodes at 20-m spacing. The dipole–dipole array is known to be sensitive to horizontal variation in highly resistive areas. The seismic velocity section in Fig. 3 resulted from a P-wave tomography survey conducted with an explosive source, so that the first arrival could be clearly distinguished from background noise. Three 64-channel geophone profiles were overlapped to cover the entire area, with five shots fired per profile. Inspection of the electrical resistivity results revealed two representative zones believed to be weak (a low resistivity value can indicate weak rock properties), located at x = 20,100 (marked A) and x = 20,900 (marked C); both showed relatively low resistivity values. In contrast, the seismic velocity section showed no significantly isolated low-velocity anomaly, only a gradual increase of velocity with depth. These characteristics of each survey clearly illustrate the tendency mentioned earlier; the integrated results will be examined later to assess the contribution of each survey.
To convert these geophysical results into a probabilistic distribution reflecting their relation with the primary information, each geophysical result was compared with the RQD values observed at the boreholes, where the geophysical and RQD data share locations. That is, threshold values were set for the geophysical parameter, and the RQD values corresponding to each range of geophysical results were collected. The collected RQD values for each threshold range were used to express the secondary information as a non-parametric cumulative distribution function (CDF) built from the cumulative frequency table. The bicalib routine of
Fig. 1 Prior probability distribution of RQD over 60 % from indicator kriging of direct observations at the boreholes
GSLib (Deutsch and Journel 1997) aligns the group of RQD values that belong to a given range of geophysical results in indicator order, producing a probabilistic distribution for each threshold. The threshold range was set in steps of 0.04 from 2.3 to 2.82 in log resistivity. The choice of threshold ranges can strongly affect the final result; that is, the calibration depends heavily on how the RQD values are grouped by range of geophysical results. The probabilistic structure for each threshold of the geophysical results was then expanded to the rest of the region without boreholes. This process substitutes the geophysical results with a probabilistic distribution supported by the primary data. Figure 4 shows the procedure for making secondary information from geophysical data with the primary data obtained at boreholes.
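The calibration step can be sketched as follows, in the spirit of GSLIB's bicalib: collocated RQD values are grouped by log-resistivity bin, and an empirical CDF over the RQD thresholds is built per bin. The function name and the tiny example data are hypothetical, not the study's calibration values:

```python
import numpy as np

def calibrate(secondary_log_res, rqd, bin_edges, thresholds):
    """For each bin of the secondary variable (log resistivity), compute the
    empirical probability that collocated RQD falls at or below each
    threshold, i.e. a non-parametric CDF per bin."""
    s = np.asarray(secondary_log_res, dtype=float)
    r = np.asarray(rqd, dtype=float)
    table = {}
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        group = r[(s >= lo) & (s < hi)]
        if group.size:
            table[(lo, hi)] = [float((group <= z).mean()) for z in thresholds]
    return table

# Three collocated samples, two resistivity bins, RQD thresholds 30 and 70:
print(calibrate([2.31, 2.35, 2.45], [20, 50, 80], [2.3, 2.4, 2.5], [30, 70]))
```

Each list in the output is a cumulative frequency row: the probability of an RQD at or below each threshold, given that log resistivity falls in the corresponding bin.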
Figures 5 and 6 depict the secondary information made by the above process, where Zk represents the threshold value of RQD. The figures show, at each location, the probability that RQD is less than or equal to the threshold Zk. Considering the case of Zk = 20 in Fig. 6, the probability that RQD is less than 20 is under 50 % in most of the region, except near the surface, where there is a high probability of low RQD values. The opposite holds for Zk = 70, also shown in Fig. 6, where most of the region has a high probability (75–100 %) of an RQD less than 70, although some regions may have RQD values above 70. These two figures represent the most important step in this kind of geostatistical integration, which can be applied to a variety of other problems.
Application of permanence ratio to integration
Figures 5 and 6 illustrate the differences between the two geophysical surveys. Electrical resistivity appears to have been effective at detecting structural variations (such as isolated low-resistivity anomalies linked to locally weak zones), whereas seismic velocity describes well the variation of physical properties with stratigraphy. The integrated result is expected to compensate for this discrepancy by accepting, from each source, the part of the information that is more accountable to the primary data.
Figure 7 displays the integrated RQD estimate obtained by applying the permanence ratio to electrical resistivity and seismic velocity, mapping the probability of an RQD over 60 %. Regions with a high probability of an RQD above 60 are believed to be safe for geotechnical applications or tunnel engineering. If an interpreter wants a more rigorous RQD criterion, the threshold values can be changed to reflect it. After the integration, zones B and D in Fig. 2 still show low RQD values extending to depth. For comparison, Fig. 8 shows the probability map for an RQD under 30 %, which can be used to distinguish somewhat weak zones within the overall region. As seen in the figure, the probability of a low RQD value is high in the near-surface region. Comparison of the integrated results with each independent source of geophysical data indicates a noticeable improvement, as expected.
[Fig. 2 graphic: electrical resistivity section with projected tunnel; color scale 100–700 Ohm-m; zones A and B marked]
Fig. 2 The survey results of electrical resistivity, including the location of boreholes
[Fig. 3 graphic: seismic velocity section with projected tunnel; color scale 1000–4500 m/s]
Fig. 3 The survey results of seismic velocity, including the location of boreholes
Confidence analysis of integration
The probabilistic analysis adopted in this study also provides the integrated estimate with a new approach to confidence analysis. Confidence analysis offers the opportunity to check the reliability of the obtained result and helps to quantify the errors embedded in the estimation, giving the decision maker assistance criteria related to cost estimation (Goovaerts 1997).
In Fig. 8, which shows the probability map of RQD under 30 %, regions with especially high probability may be designated as unsafe. For example, zones with a probability higher than 60 % may be classified as hazardous with confidence. Such a region can be defined as follows:
Prob{Z(u) ≤ zc | (n)} > pc    (3)

where Z(u) is the estimated value, zc is the threshold value, n is the number of supporting data, and pc is the predetermined probability. That is, if the probability that the estimated value Z(u) is less than the threshold zc at a certain point exceeds the predetermined probability pc, the point is classified as hazardous. In Fig. 9, the
[Fig. 4 graphic: RQD vs. log-resistivity scattergram, cumulative frequency table, resulting CDFs, and resistivity map]
Fig. 4 Making a probabilistic distribution from supporting secondary data. A group of primary values (here, RQD) within the predetermined range of secondary data (electrical resistivity) was collected to make the cumulative frequency table. The table was converted to a non-parametric cumulative distribution function for the range, and it was substituted with resistivity values at any point corresponding to the range
Fig. 5 Cumulative distribution of P(A|B) for each threshold value Zk from electrical resistivity
regions (except those colored gray) indicate a hazardous classification by the criterion with zc = 30 and pc = 60 %. However, this classification carries another kind of uncertainty: the possibility of incorrect classification. The following probabilistic formula defines such an incorrect classification:

α(u) = Prob{Z(u) > zc | [z*L(u) ≤ zc], (n)}    (4)

where Z(u) is the real value at point u and z*L(u) is the estimated value. The formula covers the case in which, even though the estimate was classified as hazardous ([z*L(u) ≤ zc]), the real value was larger than the threshold (Z(u) > zc). This probability can be computed simply by subtraction, as the complement of the inferred probability. The result is given in Fig. 9, which shows that α(u) is very low in the near-surface region, indicating a hazardous zone with high confidence, whereas some regions have relatively high α(u) values, indicating that the classification may be wrong. The information displayed in Fig. 9 demonstrates a very useful aspect of the probabilistic approach.
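The classification rule of Eq. (3) and the complementary misclassification probability of Eq. (4) can be sketched together. The function name and the example probability are illustrative only:

```python
def classify(p_at_most_zc, pc=0.60):
    """Flag a cell hazardous when the estimated probability Prob{Z(u) <= zc}
    exceeds the predetermined probability pc (Eq. 3). For a hazardous cell,
    the misclassification probability alpha(u) of Eq. (4) is simply the
    complement of that estimated probability."""
    hazardous = p_at_most_zc > pc
    alpha = (1.0 - p_at_most_zc) if hazardous else None
    return hazardous, alpha

# A cell with a 75 % chance of RQD at or below zc is flagged hazardous,
# with a 25 % chance that this designation is wrong:
print(classify(0.75))  # (True, 0.25)
```

A low α(u) thus means the hazardous designation is made with high confidence, matching the near-surface behavior described for Fig. 9.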
Fig. 6 Cumulative distribution of P(A|C) for each threshold value Zk from seismic velocity
Fig. 7 The integrated result for RQD estimation by application of the permanence ratio to electrical resistivity and seismic velocity, which maps
the probability of RQD over 60 %
Fig. 8 The probability map for RQD estimated under 30 %, evaluated from direct borehole, resistivity, and seismic tomography data
Fig. 9 Probability α(u) of wrong classification as hazardous for zc = 30 and pc = 60 %. The α(u) is very low in the near-surface region, indicating a hazardous zone designation with high confidence, whereas some regions appear to have relatively high α(u) values, which indicates that the classification may be wrong
Conclusion
An improved way of inferring physical properties from geophysical surveys via geostatistical integration has been proposed. In this study, electrical resistivity, seismic velocity, and direct observations of RQD at boreholes were used together to estimate RQD values in regions without boreholes by means of a probabilistic approach. To treat the geophysical data probabilistically, a dedicated method was devised, and prior information was obtained by indicator kriging of the direct RQD observations. The secondary information, made by comparing the geophysical results with the primary data, described well the characteristics of each survey. The integrated result provides more useful information than any independent geophysical survey, and the advantage of the probabilistic approach appeared in the various confidence analyses. Although only the estimation of rock quality was addressed here, wider application of the method is recommended for the integrated analysis of geophysical data, for example in problems of cavity detection or ground subsidence.
A good integration of various sources of information, such as the seismic velocity and electrical resistivity used in this study, should be consistent with the geology or geosciences. The success or failure of an integration therefore depends strongly on a preliminary, detailed interpretation of each physical property; ignoring this, the result may be far from the truth and may even lead to a wrong final decision. Statistical integration or interpretation is not magic, but simply an explanatory method based on observed data.
Acknowledgments This study was funded by the Korea Meteorological Administration Research and Development Program under Grant CATER 2012-8020.
References
Barton N (2006) Rock quality, seismic velocity, attenuation and anisotropy. Taylor & Francis, Leiden, p 729
Chakravarthi V, Shankar GBK, Muralidharan D, Harinarayana T, Sundararajan N (2007) An integrated geophysical approach for imaging subbasalt sedimentary basins: case study of Jam River Basin, India. Geophysics 72:B141. doi:10.1190/1.2777004
Deutsch CV, Journel AG (1997) Geostatistical software library and user's guide, 2nd edn. Oxford University Press, New York, p 384
Fregoso E, Gallardo LA (2009) Cross-gradients joint 3D inversion with applications to gravity and magnetic data. Geophysics 74:L31. doi:10.1190/1.3119263
Goovaerts P (1997) Geostatistics for natural resources evaluation. Oxford University Press, New York, p 483
Haas A, Olivier D (1994) Geostatistical inversion—a sequential method of stochastic reservoir modeling constrained by seismic data. First Break 12:561–569
Journel AG (2002) Combining knowledge from diverse sources: an alternative to traditional data independence hypothesis. Math Geol 34:573–596
Krishnan S, Boucher A, Journel AG (2005) Evaluating information redundancy through the tau model. Geostatistics Banff 2004. Quant Geol Geostat 14:1037–1046
Lawton DC, Isaac JH (2007) Integrated gravity and seismic interpretation in the Norman Range, Northwest Territories, Canada. Geophysics 72:B112–B117
Oh S (2012) Safety assessment of dams by analysis of the electrical properties of the embankment material. Eng Geol 129:76–90
Oh S, Kwon B (2001) Geostatistical approach to Bayesian inversion of geophysical data: Markov chain Monte Carlo method. Earth Planets Space 53:777–791
Oh S, Lee DK, Chung H (2004) Geostatistical integration of MT and borehole data for RMR evaluation. Environ Geol 46:1070–1078
Torres-Verdin C, Victoria M, Merletti G, Pendrel J (1999) Trace-based and geostatistical inversion of 3-D seismic data for thin-sand delineation: an application in San Jorge Basin, Argentina. Lead Edge 18:1070–1077