Climate Extremes Workshop - Employing a Multivariate Spatial Hierarchical Model to Characterize Extremes with Application to US Gulf Coast Precipitation - Brook Russell, May 16, 2018
Over a seven-day period in August 2017, Hurricane Harvey brought extreme levels of rainfall to the Houston area, resulting in catastrophic flooding that caused loss of human life and damage to personal property and public infrastructure. In the wake of this event, there is growing interest in understanding the degree to which this event was unusual and in estimating the probability of experiencing a similar event in other locations. Additionally, we investigate the degree to which the sea surface temperature in the Gulf of Mexico is associated with extreme precipitation along the US Gulf Coast. This talk addresses these issues through the development of an extreme value model.
We assume that the annual maximum precipitation values at Gulf Coast locations approximately follow the Generalized Extreme Value (GEV) distribution. Because the observed precipitation record in this region is relatively short, we borrow strength across spatial locations to improve GEV parameter estimates. We model the GEV parameters at US Gulf Coast locations using a multivariate spatial hierarchical model based on coregionalization; for inference, a two-stage approach is utilized. Spatial interpolation is used to estimate GEV parameters at unobserved locations, allowing us to characterize precipitation extremes throughout the region. Nearby locations may experience extreme precipitation from the same event, resulting in dependence between annual maxima that previous spatial models of this sort have ignored. Our model incorporates dependence of this type and uses the nonparametric bootstrap to estimate its effect.
Analysis indicates that Harvey was highly unusual as a seven-day event, and that Gulf of Mexico sea surface temperature appears to be more strongly linked to extreme precipitation in the western part of the region.
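As a rough, purely illustrative sketch of the kind of first-stage computation described above (not the authors' implementation; the data array, return period, and bootstrap size are all assumed), one can fit a GEV at each site by maximum likelihood and resample whole years in a nonparametric bootstrap, so that the spatial dependence induced by shared storms is preserved:

```python
# Illustrative sketch only: site-wise GEV fits to annual maxima plus a
# year-resampling nonparametric bootstrap (data layout and sizes are assumed).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
n_years, n_sites = 40, 25
annual_max = rng.gumbel(loc=100.0, scale=30.0, size=(n_years, n_sites))  # placeholder data (mm)

def fit_sites(block):
    """Maximum-likelihood GEV fit at each site; returns (shape, loc, scale) per site."""
    return [genextreme.fit(block[:, j]) for j in range(block.shape[1])]

def return_level(params, period=100):
    """Estimated `period`-year return level at each site."""
    return np.array([genextreme.isf(1.0 / period, c, loc=loc, scale=scale)
                     for c, loc, scale in params])

rl_hat = return_level(fit_sites(annual_max))

# Resample entire years so that within-year spatial dependence is kept intact.
boot = [return_level(fit_sites(annual_max[rng.integers(0, n_years, n_years)]))
        for _ in range(200)]
rl_se = np.std(boot, axis=0)
print(rl_hat[:3].round(1), rl_se[:3].round(1))
```

In the talk's actual model the site-wise GEV parameters are instead tied together through the multivariate spatial hierarchy and interpolated to unobserved locations.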
This document discusses methods for estimating earthquake recurrence parameters when observation periods are unequal for different magnitude earthquakes. It generalizes previous methods to account for magnitudes being grouped into classes, observation periods varying by magnitude, and an imposed maximum magnitude. The maximum likelihood estimation approach leads to an equation that can be solved iteratively to estimate the recurrence parameter β. Confidence intervals for β and the annual earthquake rate can be approximated using normal or chi-square distributions depending on the number of events. Sample calculations for zones in western Canada show compatible results between methods when data is well-constrained but different results when data is less well-defined.
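To make the iterative estimation concrete, here is a minimal sketch of a Weichert-style maximum-likelihood condition for β with magnitude classes, per-class counts, and per-class observation periods (all numbers invented for illustration; the maximum-magnitude truncation discussed in the document is omitted for simplicity):

```python
# Sketch of a Weichert-type ML estimate of beta when each magnitude class has
# its own observation period (illustrative values only, not from the document).
import numpy as np
from scipy.optimize import brentq

m = np.array([4.25, 4.75, 5.25, 5.75, 6.25])   # class-center magnitudes (assumed)
n = np.array([120,   45,   18,    6,    2])    # event counts per class (assumed)
t = np.array([30.0, 60.0, 90.0, 120.0, 120.0]) # observation periods in years (assumed)

def weichert_condition(beta):
    """Root of this function gives the maximum-likelihood estimate of beta."""
    w = t * np.exp(-beta * m)
    return np.sum(w * m) / np.sum(w) - np.sum(n * m) / np.sum(n)

beta_hat = brentq(weichert_condition, 0.1, 5.0)  # solved iteratively by root finding
b_value = beta_hat / np.log(10.0)                # convert beta to a Gutenberg-Richter b
print(round(beta_hat, 3), round(b_value, 3))
```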
A Novel Technique in Software Engineering for Building Scalable Large Paralle... - Eswar Publications
Parallel processing is the only alternative for meeting the computational demands of scientific and technological advancement. Yet the first few parallelized versions of a large application code - in the present case, a meteorological Global Circulation Model - are not usually optimal or efficient. The large size and complexity of the code make changes for efficient parallelization, and their subsequent validation, difficult. The paper presents some novel techniques to enable a change of parallelization strategy while keeping the correctness of the code under control throughout the modification.
Prediction of the daily global solar irradiance received on a horizontal surf... - irjes
This document presents a new approach for predicting the daily global solar irradiance received on a horizontal surface as a function of local daytime and the maximum daily value. An exponential distribution function is suggested and compared to experimental data from several locations. The maximum daily value (qmax) is estimated theoretically in terms of the solar constant adjusted for earth-sun distance variation. Computed values using the new approach show good agreement with experimental data, within 16% error except for some extreme points.
Pratik Tarafdar is investigating the application of analogue gravity techniques to model primordial black hole accretion. He plans to apply these techniques used to model astrophysical black hole accretion to primordial black holes. This will help understand primordial black hole accretion phenomena from the perspective of analogue gravity. He has focused on calculating the analogue surface gravity and has obtained expressions for it in both adiabatic and isothermal cases for different accretion disk models. Future work will extend this to model radiation accretion onto primordial black holes and study effects of fluid dispersion on the analogue Hawking temperature.
This document discusses using additional data from the TAMDAR sensor network to improve forecasts of cold air damming events in the Southeast United States. Cold air damming occurs when cold air becomes trapped against mountain slopes, causing winter weather. The study aims to enhance existing detection algorithms by combining TAMDAR data with radiosonde data to better predict the life cycle and severity of cold air damming events. It is hoped that the increased data availability from TAMDAR will allow forecast models to more accurately predict cold air damming frequencies, timing, and associated weather patterns. The methodology will evaluate forecasts from the last 10 years with and without TAMDAR data to determine if the additional observations significantly improve cold air damming forecasts.
The document discusses complex models for analyzing random variables using fractional moments and complex Hurst exponents. It begins with background on variance, covariance, and Hurst exponents. It then explains how complex Hurst exponents allow calculating fractional moments and higher-order information to better analyze relationships between variables. Applications discussed include analyzing gas emissions in coal mines, vegetation changes over time using NDVI maps, stock market fluctuations, and autonomous vehicle networks. Several academic papers and theses utilizing these complex models are also summarized.
This document describes how to derive a required time (T) unit hydrograph from a given time (D) unit hydrograph when T is not a multiple of D using the S-curve method. It explains that an S-curve hydrograph is generated by continuous, uniform effective rainfall and rises continuously in the shape of an S until equilibrium is reached. The ordinates of the S-curve can be calculated using the equation S(t) = U(t) + S(t-D), where S(t) is the ordinate of the S-curve at time t, U(t) is the ordinate of the given unit hydrograph at time t, and S(t-D) is the ordinate of the S-curve at time t-D.
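A minimal sketch of that recursion and of the subsequent lag-and-subtract step (assuming ordinates tabulated at a one-hour interval; the example unit hydrograph below is invented):

```python
# Illustrative S-curve construction from a D-hour unit hydrograph (toy ordinates,
# sampled every hour; not taken from the document itself).
def s_curve(u, d):
    """S(t) = U(t) + S(t - D), with S = 0 before the rainfall begins."""
    s = []
    for i, u_t in enumerate(u):
        s.append(u_t + (s[i - d] if i >= d else 0.0))
    return s

def t_hour_unit_hydrograph(u, d, t):
    """Lag the S-curve by T hours, subtract, and rescale the ordinates by D/T."""
    s = s_curve(u, d)
    s_lagged = [0.0] * t + s[:len(s) - t]
    return [(a - b) * d / t for a, b in zip(s, s_lagged)]

# Example: a simple 2-hour unit hydrograph converted to a 3-hour unit hydrograph.
u2 = [0, 10, 30, 25, 15, 8, 3, 1, 0]
print(t_hour_unit_hydrograph(u2, d=2, t=3))
```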
This document is a master's paper analyzing surface wind modeling and forecasting in North Slope Alaska. It uses minute-by-minute wind speed and direction data from 1998 to 2003 at multiple heights to develop statistical models. A fractionally differenced vector ARMA model describes the correlation structure in the data. A multivariate GARCH model is then used to account for heteroscedasticity in the innovations. Specifically, a Dynamic Conditional Correlation GARCH model is adopted to model the time-varying conditional covariance matrix. Prediction intervals for minute-ahead forecasts are then calibrated using a best linear predictor approach.
This document discusses the distribution of slip along earthquake faults based on analyses of five major earthquake slip models. It finds that the distribution follows a piecewise Gutenberg-Richter law, with different b-values above and below a transition point. For smaller slips, b is near 1, while for larger slips b is greater than 1. It analyzes the slip distributions using rank-ordering analysis to overcome data limitations. This verifies the existence of power laws with different scaling constants in the two slip regimes identified.
This document proposes a modification to the Gutenberg-Richter law to describe the cumulative distribution of earthquake magnitudes using concepts from nonextensive statistical mechanics. It introduces a new "q-stretched exponential" form for the modified Gutenberg-Richter law and fits this form to seismic data from California and Iran. The empirical data fits extremely well with the proposed modification over the entire range of magnitudes. Nonextensive statistical mechanics is applied to derive a q-exponential distribution for the surface size of fragments produced during earthquakes. A new hypothetical relationship is also proposed between the surface size of fragments and the released energy.
This document discusses methods for calculating the standard error of the b-value parameter in the Gutenberg-Richter magnitude-frequency relationship. It presents two key formulas:
1) For large samples where b can be treated as constant over time, the standard error of b is given by σ(b) = 2.30 b² σ(M)/√n, where n is the sample size and σ²(M) is the sample variance of magnitudes.
2) When b varies slowly over time, its standard error has two components - the time-averaged variance of b at different times, plus the variance of the mean b averaged over the entire time period.
The document also provides tables to determine confidence intervals for the estimated b-value.
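As a quick numerical check of the large-sample formula above (all numbers invented for illustration):

```python
# Worked check of the large-sample standard-error formula (invented numbers).
import math
b, n, sigma_M = 1.0, 400, 0.5          # b-value, sample size, std. dev. of magnitudes
sigma_b = 2.30 * b**2 * sigma_M / math.sqrt(n)
print(round(sigma_b, 3))               # roughly 0.058
```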
1) The document discusses the maximum likelihood estimator of b-value for mainshocks versus all events.
2) It shows that mainshocks do not entirely satisfy the Gutenberg-Richter law since their magnitude distribution depends on factors other than b-value alone.
3) Analyzing earthquake data from southern California, it demonstrates that the difference between b-values for mainshocks and all events suggested by the commonly used maximum likelihood estimator becomes statistically insignificant when a more appropriate estimator is used that accounts for the non-exponential distribution of mainshock magnitudes.
This document discusses different methods for computing average rainfall over a basin including arithmetic average, Thiessen polygon, and isohyetal methods. It provides examples of calculating average rainfall using each method. It also discusses presenting rainfall data through mass curves and hyetographs. The arithmetic average method simply takes the mean of recorded rainfall values at stations. Thiessen polygon method weights values based on each station's representative area. Isohyetal mapping involves contouring equal rainfall and calculating weighted averages between contours.
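A small numerical sketch of the first two methods (station rainfall values and Thiessen polygon areas are invented for illustration):

```python
# Illustrative comparison of arithmetic-average and Thiessen-weighted basin rainfall.
rain_mm = [82.0, 101.0, 67.0, 95.0]        # rainfall recorded at four stations (assumed)
area_km2 = [120.0, 80.0, 150.0, 50.0]      # Thiessen polygon area of each station (assumed)

arithmetic_avg = sum(rain_mm) / len(rain_mm)
thiessen_avg = sum(r * a for r, a in zip(rain_mm, area_km2)) / sum(area_km2)
print(round(arithmetic_avg, 1), round(thiessen_avg, 1))
```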
1) The document examines the frequency-magnitude relationship for small earthquakes recorded at a borehole seismograph station in the Newport-Inglewood fault zone.
2) It finds a clear departure from the expected linear relationship for magnitudes below 3, with frequencies of M=0.5 earthquakes almost 10 times lower than expected.
3) This provides evidence that the frequency-magnitude relationship departs from self-similarity below about magnitude 3, coinciding with the observed departure of corner frequency from self-similarity. This supports the interpretation that the upper limit of large earthquake spectra (fmax) is related to fault zone size.
Hargreaves Class A method, physical example, Christiansen method, estimation of evapotranspiration, PET, methods of irrigation, surface irrigation, free flooding irrigation method
This document discusses position and time plots used to analyze waves. A position plot shows displacement over position at a fixed time, allowing determination of amplitude and wavelength. A time plot shows displacement over time at a fixed position, allowing determination of time period. The document provides an example problem where given position and time plots, the wave equation is determined by extracting amplitude, wavelength, period, and phase constant.
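The quantities read off the two plots combine into the standard sinusoidal travelling-wave form (a generic textbook expression, not one quoted from the document): y(x, t) = A sin((2π/λ)x − (2π/T)t + φ), where the amplitude A and wavelength λ come from the position plot, the period T comes from the time plot, and the phase constant φ is fixed by the displacement at x = 0 and t = 0.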
This document summarizes a study that analyzes earthquake statistics in a mean-field model of a heterogeneous fault zone. The model examines the interplay between disorder, dynamical effects, and driving mechanisms. It finds a two-parameter phase diagram depending on the amplitude of dynamical weakening effects and the normal distance of driving forces from the fault. Small weakening effects and driving forces produce Gutenberg-Richter type power law distributions, while large weakening effects and driving forces lead to characteristic earthquakes that rupture the entire fault. For some parameters, the behavior can switch between these two phases, resembling observations of real faults.
This document discusses using machine learning to help develop subgrid parameterizations for climate models based on high-resolution simulations. It notes that while high-resolution models have progressed faster than parameterizations, humans still develop parameterizations, which is slow. The document explores using machine learning on comprehensive training datasets from high-resolution models to relate coarse-grid variables to subgrid-scale quantities needed by climate models. Challenges include making schemes stochastic, handling data outside the training range, and instability in global models. Past work applying neural networks to a cloud-resolving model dataset showed promise but has not been used prognostically. Overall, machine learning may help break the parameterization bottleneck if technical challenges can be overcome.
1. The SBAS-DInSAR algorithm was used to analyze ERS and ENVISAT SAR data from 1992-2010 over the Yellowstone Caldera, revealing a complex deformation field.
2. An optimal distribution of interferometric pairs was identified to limit the impact of temporal decorrelation, exploiting pairs with small baselines.
3. The analysis revealed accelerated uplift in the Yellowstone caldera between 2004-2007, with rates consistent with inflation of a subsurface sill at a rate of 0.1 km3 per year.
AGU 2012 Bayesian analysis of non-Gaussian LRD processes - Nick Watkins
Contributed talk at American Geophysical Union Fall Meeting, San Francisco, 2012.
Part of work now submitted to Bayesian Analysis, 2014, see eprint: http://arxiv.org/abs/1403.2940
Cambridge 2014 Complexity, tails and trends - Nick Watkins
This document discusses two types of complexity that can affect trend detection in time series data: long range dependence and heavy tails.
Long range dependence, if present in a system, implies the presence of low frequency "slow" fluctuations that can complicate trend detection. Heavy tails in a probability distribution are a source of "wild" fluctuations due to more frequent extreme events.
The document reviews several examples of long range dependence and heavy tails observed in real-world datasets like financial data and space weather data. Statistical models like linear fractional stable motion (LFSM) and autoregressive fractionally integrated moving average (ARFIMA) processes are discussed for modeling systems with both properties. Better statistical inference methods are also needed to distinguish true trends from these forms of stochastic variability.
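As a toy illustration of the long-range-dependent ingredient, the ARFIMA(0, d, 0) building block can be simulated by truncated fractional integration of Gaussian innovations (the memory parameter and series length below are assumed; this is not the talk's own code):

```python
# Sketch: simulate ARFIMA(0, d, 0) noise; d = 0.3 gives long-range dependence.
import numpy as np
from scipy.special import gammaln

def arfima0d0(n, d, rng):
    """x_t = sum_k psi_k * eps_{t-k}, with psi_k = Gamma(k + d) / (Gamma(d) Gamma(k + 1))."""
    k = np.arange(n)
    psi = np.exp(gammaln(k + d) - gammaln(d) - gammaln(k + 1))
    eps = rng.standard_normal(n)
    return np.convolve(eps, psi)[:n]

x = arfima0d0(5000, d=0.3, rng=np.random.default_rng(1))
# Sample autocorrelations decay slowly (roughly a power law) rather than exponentially.
acf = [np.corrcoef(x[:-lag], x[lag:])[0, 1] for lag in (1, 10, 100)]
print([round(a, 3) for a in acf])
```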
Detection of a supervoid aligned with the cold spot of the cosmic microwave b... - Sérgio Sacani
This study uses infrared galaxy data from WISE and 2MASS surveys matched with optical data from the Pan-STARRS1 survey to search for a supervoid in the direction of the cosmic microwave background cold spot. Radial galaxy density profiles centered on the cold spot show a large underdensity extending over tens of degrees. Counts in photometric redshift bins within radii of 5 and 15 degrees show significantly low galaxy densities, at 5-6 sigma detection levels. This is consistent with a large 220 Mpc supervoid with an average density contrast of -0.14, centered at a redshift of 0.22. Such a supervoid could plausibly explain the observed cold spot in the cosmic microwave background.
Vertical wind speed measurements from Doppler LIDAR were analyzed to characterize wind speed extremes. Gaussian process models were fit to capture the nonseparability of the spatial and temporal covariance structure. A spectral-in-time covariance function was developed that includes a frequency-dependent spatial coherence function. Fast fitting methods were used to approximate the likelihood for large datasets. Zero crossing statistics were also examined to analyze wind speed thresholds over time. Future work will include incorporating cyclostationary models and additional climate variables as covariates.
This document describes methods for modeling spatial extremes using semiparametric approaches. It introduces a spatial skew-t process model that exhibits asymptotic dependence while maintaining computational tractability. It then extends this to a semiparametric Dirichlet process mixture of spatial skew-t processes that can flexibly model both the bulk distribution and tails without requiring a threshold. This flexible model is shown to perform well compared to parametric alternatives in simulations for both spatial prediction and modeling extremes.
This document presents an approach to linear prestack seismic inversion to detect hydrocarbons using amplitude variation with offset (AVO) information. The inversion is based on modeling reflection coefficients using a three-term linearized approximation to the Zoeppritz equations. One term models background reflectivity based on a priori relationships between compressional velocity, shear velocity, and density. Two additional terms describe perturbations from the background and can detect hydrocarbons as anomalies. The method is applied to a real seismic data set from a shallow marine environment. Hydrocarbons are detected as deviations in the model parameters from the expected background. The data are well reproduced with hydrocarbons detected as anomalies in the model parameters related to the prediction errors.
Gwendolyn Eadie: A New Method for Time Series Analysis in Astronomy with an A... - JeremyHeyl
1. The document describes a new method called Multitaper + Lomb-Scargle (MTLS) for estimating the power spectrum of unevenly sampled time series data, such as that from astronomical observations.
2. MTLS combines the Multitaper method, which reduces spectral leakage and variance, with the Lomb-Scargle method, which can handle uneven sampling times.
3. The document provides an example application of MTLS to estimate the power spectrum of a red giant star using Kepler telescope lightcurve data, finding an improvement over the standard Lomb-Scargle method.
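A minimal sketch of the Lomb-Scargle ingredient on irregularly sampled data, using astropy's implementation (the sampling times, signal, and period below are invented, and the multitaper step is not shown):

```python
# Sketch: Lomb-Scargle periodogram of an unevenly sampled sinusoid (illustrative only).
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 100.0, 300))          # uneven observation times (days)
y = 0.5 * np.sin(2 * np.pi * t / 7.3) + 0.1 * rng.standard_normal(t.size)

frequency, power = LombScargle(t, y).autopower()
print(1.0 / frequency[np.argmax(power)])           # recovered period, close to 7.3 days
```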
The document discusses methods for characterizing the global environment using satellite data to help overcome challenges posed by weather effects on missile defense sensors. It describes adjusting infrared imagery thresholds to approximate radar observations, extracting weather event boundaries, projecting 3D shapes onto a model Earth, and using an existing satellite constellation to provide continuous coverage. The goal is to determine visibility and sensor performance to optimize sensor selection and placement for missile defense.
This document discusses challenges in analyzing large spatiotemporal environmental datasets and presents hierarchical Gaussian process models as a framework to address these challenges. It describes a joint NASA and Forest Service initiative applying these models to map forest variables in Interior Alaska using sparse ground plots and airborne/spaceborne LiDAR data. The key challenges are incorporating diverse spatial data sources, handling missingness/misalignment, accounting for spatial dependence, propagating uncertainty, and scaling to massive datasets.
A statistical analysis of the accuracy of the digitized magnitudes of photome... - Sérgio Sacani
We present a statistical analysis of the accuracy of the digitized magnitudes of photometric plates on the time scale of decades. In our examination of archival Johnson B photometry from the Harvard DASCH archive, we find a median RMS scatter of lightcurves of order 0.15 mag over the range B ≈ 9-17 for all calibrations. Slight underlying systematics (trends or flux discontinuities) are on a level of ≲ 0.2 mag per century (1889-1990) for the majority of constant stars. These historic data can be unambiguously used for processes that happen on scales of magnitudes, and need to be carefully examined in cases approaching the noise floor. The characterization of these limits in photometric stability may guide future studies in their use of plate archives. We explain these limitations for the example case of KIC 8462852, which has been claimed to dim by 0.16 mag per century, and show that this trend cannot be considered as significant.
The BICEP2 collaboration reports detecting excess B-mode polarization at degree angular scales between 30 < l < 150 that is inconsistent with the lensed-ΛCDM model at over 5σ significance. Jackknife tests and simulations show the signal is not due to systematic contamination. Available foreground models predict signals considerably smaller than observed and show no significant cross-correlation. Cross-correlating with BICEP1 confirms the excess at 3σ and its spectral index is consistent with the CMB, disfavoring synchrotron or dust origins. The data are well-fit by a lensed-ΛCDM + tensor model with tensor-to-scalar ratio r = 0.20 (+0.07, −0.05).
This document describes a Kriging component for spatial interpolation of climatological variables in the OMS modeling framework. Kriging is a geostatistical technique that interpolates values based on measured data and the spatial autocorrelation between data points. The component implements ordinary and detrended Kriging algorithms using 10 semivariogram models. It can interpolate both raster and point data and outputs the interpolated climatological variable values. Links are provided for downloading the component code, data, and OMS project files needed to run the interpolation.
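For a sense of what ordinary kriging of scattered station data looks like in practice, here is a hedged sketch using the third-party pykrige package (station locations, values, and the variogram choice are all assumed; this is not the OMS component itself):

```python
# Sketch: ordinary kriging of a climatological variable at scattered stations.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(3)
x, y = rng.uniform(0, 50, 40), rng.uniform(0, 50, 40)      # station coordinates (km)
temp = 15.0 + 0.1 * x - 0.05 * y + rng.normal(0, 0.5, 40)  # observed values (deg C)

ok = OrdinaryKriging(x, y, temp, variogram_model="spherical")
grid_x = np.arange(0.0, 50.0, 5.0)
grid_y = np.arange(0.0, 50.0, 5.0)
z_hat, krig_var = ok.execute("grid", grid_x, grid_y)        # interpolated field + kriging variance
print(z_hat.shape)
```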
Wavelet estimation for a multidimensional acoustic or elastic earth - Arthur Weglein
A new and general wave theoretical wavelet estimation method is derived. Knowing the seismic wavelet is important both for processing seismic data and for modeling the seismic response. To obtain the wavelet, both statistical (e.g., Wiener-Levinson) and deterministic (matching surface seismic to well-log data) methods are generally used. In the marine case, a far-field signature is often obtained with a deep-towed hydrophone. The statistical methods do not allow obtaining the phase of the wavelet, whereas the deterministic method obviously requires data from a well. The deep-towed hydrophone requires that the water be deep enough for the hydrophone to be in the far field and, in addition, that the reflections from the water bottom and structure do not corrupt the measured wavelet. None of the methods address the source array pattern, which is important for amplitude-versus-offset (AVO) studies.
A data-driven approach to identifying key regions of change associated with fut... - Zachary Labe
Labe, Z.M., T.L. Delworth, N.C. Johnson, and W.F. Cooke. A data-driven approach to identifying key regions of change associated with future climate scenarios, 23rd Conference on Artificial Intelligence for Environmental Science, Baltimore, MD (Jan 2024). https://ams.confex.com/ams/104ANNUAL/meetingapp.cgi/Paper/431300
1) The document examines trends in climate data and precipitation extremes using statistical methods.
2) It analyzes temperature and sea surface temperature data from the Gulf Coast region to identify trends, finding statistically significant increasing trends in both.
3) Spatial analysis is used to interpolate the trends across the region, indicating that the strongest increasing trends are concentrated near Houston and correlate more with sea surface temperatures than linear trends alone.
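A minimal sketch of the kind of trend test involved (the annual series below is a synthetic placeholder, not the study's data):

```python
# Sketch: least-squares trend and its significance for an annual series (synthetic data).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(4)
years = np.arange(1970, 2018)
sst = 25.0 + 0.02 * (years - years[0]) + 0.3 * rng.standard_normal(years.size)  # deg C

fit = linregress(years, sst)
print(round(fit.slope * 10, 3), "deg C per decade, p =", round(fit.pvalue, 4))
```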
Using explainable machine learning to evaluate climate change projections - Zachary Labe
5 October 2023…
Atmosphere and Ocean Climate Dynamics Seminar (Presentation): Using explainable machine learning to evaluate climate change projections, Yale University, New Haven, CT. Remote Presentation.
References...
Labe, Z.M., E.A. Barnes, and J.W. Hurrell (2023). Identifying the regional emergence of climate patterns in the ARISE-SAI-1.5 simulations. Environmental Research Letters, DOI:10.1088/1748-9326/acc81a, https://iopscience.iop.org/article/10.1088/1748-9326/acc81a
This document presents a Bayesian methodology for retrieving soil parameters like moisture from SAR images. It begins by introducing the importance of soil moisture monitoring and the opportunity provided by Argentina's upcoming SAOCOM SAR satellite. It then discusses limitations of traditional retrieval models in accounting for speckle noise and terrain heterogeneity. The document proposes a Bayesian approach using a multiplicative speckle model within a likelihood function to estimate soil moisture and roughness from SAR backscatter measurements. Simulation results show the Bayesian method retrieves soil moisture across the full measurement space and provides error estimates, with improved precision at higher numbers of looks.
Similar to Climate Extremes Workshop - Employing a Multivariate Spatial Hierarchical Model to Characterize Extremes with Application to US Gulf Coast Precipitation - Brook Russell, May 16, 2018 (20)
Recently, the machine learning community has expressed strong interest in applying latent variable modeling strategies to causal inference problems with unobserved confounding. Here, I discuss one of the big debates that occurred over the past year, and how we can move forward. I will focus specifically on the failure of point identification in this setting, and discuss how this can be used to design flexible sensitivity analyses that cleanly separate identified and unidentified components of the causal model.
I will discuss paradigmatic statistical models of inference and learning from high dimensional data, such as sparse PCA and the perceptron neural network, in the sub-linear sparsity regime. In this limit the underlying hidden signal, i.e., the low-rank matrix in PCA or the neural network weights, has a number of non-zero components that scales sub-linearly with the total dimension of the vector. I will provide explicit low-dimensional variational formulas for the asymptotic mutual information between the signal and the data in suitable sparse limits. In the setting of support recovery these formulas imply sharp 0-1 phase transitions for the asymptotic minimum mean-square-error (or generalization error in the neural network setting). A similar phase transition was analyzed recently in the context of sparse high-dimensional linear regression by Reeves et al.
Many different measurement techniques are used to record neural activity in the brains of different organisms, including fMRI, EEG, MEG, lightsheet microscopy and direct recordings with electrodes. Each of these measurement modes have their advantages and disadvantages concerning the resolution of the data in space and time, the directness of measurement of the neural activity and which organisms they can be applied to. For some of these modes and for some organisms, significant amounts of data are now available in large standardized open-source datasets. I will report on our efforts to apply causal discovery algorithms to, among others, fMRI data from the Human Connectome Project, and to lightsheet microscopy data from zebrafish larvae. In particular, I will focus on the challenges we have faced both in terms of the nature of the data and the computational features of the discovery algorithms, as well as the modeling of experimental interventions.
1) The document presents a statistical modeling approach called targeted smooth Bayesian causal forests (tsbcf) to smoothly estimate heterogeneous treatment effects over gestational age using observational data from early medical abortion regimens.
2) The tsbcf method extends Bayesian additive regression trees (BART) to estimate treatment effects that evolve smoothly over gestational age, while allowing for heterogeneous effects across patient subgroups.
3) The tsbcf analysis of early medical abortion regimen data found the simultaneous administration to be similarly effective overall to the interval administration, but identified some patient subgroups where effectiveness may vary more over gestational age.
Difference-in-differences is a widely used evaluation strategy that draws causal inference from observational panel data. Its causal identification relies on the assumption of parallel trends, which is scale-dependent and may be questionable in some applications. A common alternative is a regression model that adjusts for the lagged dependent variable, which rests on the assumption of ignorability conditional on past outcomes. In the context of linear models, Angrist and Pischke (2009) show that the difference-in-differences and lagged-dependent-variable regression estimates have a bracketing relationship. Namely, for a true positive effect, if ignorability is correct, then mistakenly assuming parallel trends will overestimate the effect; in contrast, if the parallel trends assumption is correct, then mistakenly assuming ignorability will underestimate the effect. We show that the same bracketing relationship holds in general nonparametric (model-free) settings. We also extend the result to semiparametric estimation based on inverse probability weighting.
We develop sensitivity analyses for weak nulls in matched observational studies while allowing unit-level treatment effects to vary. In contrast to randomized experiments and paired observational studies, we show for general matched designs that over a large class of test statistics, any valid sensitivity analysis for the weak null must be unnecessarily conservative if Fisher's sharp null of no treatment effect for any individual also holds. We present a sensitivity analysis valid for the weak null, and illustrate why it is conservative if the sharp null holds through connections to inverse probability weighted estimators. An alternative procedure is presented that is asymptotically sharp if treatment effects are constant, and is valid for the weak null under additional assumptions which may be deemed reasonable by practitioners. The methods may be applied to matched observational studies constructed using any optimal without-replacement matching algorithm, allowing practitioners to assess robustness to hidden bias while allowing for treatment effect heterogeneity.
This document discusses difference-in-differences (DiD) analysis, a quasi-experimental method used to estimate treatment effects. The author notes that while widely applicable, DiD relies on strong assumptions about the counterfactual. She recommends approaches like matching on observed variables between similar populations, thoughtfully specifying regression models to adjust for confounding factors, testing for parallel pre-treatment trends under different assumptions, and considering more complex models that allow for different types of changes over time. The overall message is that DiD requires careful consideration and testing of its underlying assumptions to draw valid causal conclusions.
We present recent advances and statistical developments for evaluating Dynamic Treatment Regimes (DTR), which allow the treatment to be dynamically tailored according to evolving subject-level data. Identification of an optimal DTR is a key component for precision medicine and personalized health care. Specific topics covered in this talk include several recent projects with robust and flexible methods developed for the above research area. We will first introduce a dynamic statistical learning method, adaptive contrast weighted learning (ACWL), which combines doubly robust semiparametric regression estimators with flexible machine learning methods. We will further develop a tree-based reinforcement learning (T-RL) method, which builds an unsupervised decision tree that maintains the nature of batch-mode reinforcement learning. Unlike ACWL, T-RL handles the optimization problem with multiple treatment comparisons directly through a purity measure constructed with augmented inverse probability weighted estimators. T-RL is robust, efficient and easy to interpret for the identification of optimal DTRs. However, ACWL seems more robust against tree-type misspecification than T-RL when the true optimal DTR is non-tree-type. At the end of this talk, we will also present a new Stochastic-Tree Search method called ST-RL for evaluating optimal DTRs.
A fundamental feature of evaluating causal health effects of air quality regulations is that air pollution moves through space, rendering health outcomes at a particular population location dependent upon regulatory actions taken at multiple, possibly distant, pollution sources. Motivated by studies of the public-health impacts of power plant regulations in the U.S., this talk introduces the novel setting of bipartite causal inference with interference, which arises when 1) treatments are defined on observational units that are distinct from those at which outcomes are measured and 2) there is interference between units in the sense that outcomes for some units depend on the treatments assigned to many other units. Interference in this setting arises due to complex exposure patterns dictated by physical-chemical atmospheric processes of pollution transport, with intervention effects framed as propagating across a bipartite network of power plants and residential zip codes. New causal estimands are introduced for the bipartite setting, along with an estimation approach based on generalized propensity scores for treatments on a network. The new methods are deployed to estimate how emission-reduction technologies implemented at coal-fired power plants causally affect health outcomes among Medicare beneficiaries in the U.S.
Laine Thomas presented information about how causal inference is being used to determine the cost/benefit of the two most common surgical treatments for women - hysterectomy and myomectomy.
We provide an overview of some recent developments in machine learning tools for dynamic treatment regime discovery in precision medicine. The first development is a new off-policy reinforcement learning tool for continual learning in mobile health to enable patients with type 1 diabetes to exercise safely. The second development is a new inverse reinforcement learning tool which enables use of observational data to learn how clinicians balance competing priorities for treating depression and mania in patients with bipolar disorder. Both practical and technical challenges are discussed.
The method of differences-in-differences (DID) is widely used to estimate causal effects. The primary advantage of DID is that it can account for time-invariant bias from unobserved confounders. However, the standard DID estimator will be biased if there is an interaction between history in the after period and the groups. That is, bias will be present if an event besides the treatment occurs at the same time and affects the treated group in a differential fashion. We present a method of bounds based on DID that accounts for an unmeasured confounder that has a differential effect in the post-treatment time period. These DID bracketing bounds are simple to implement and only require partitioning the controls into two separate groups. We also develop two key extensions for DID bracketing bounds. First, we develop a new falsification test to probe the key assumption that is necessary for the bounds estimator to provide consistent estimates of the treatment effect. Next, we develop a method of sensitivity analysis that adjusts the bounds for possible bias based on differences between the treated and control units from the pretreatment period. We apply these DID bracketing bounds and the new methods we develop to an application on the effect of voter identification laws on turnout. Specifically, we focus on estimating whether the enactment of voter identification laws in Georgia and Indiana had an effect on voter turnout.
This document summarizes a simulation study evaluating causal inference methods for assessing the effects of opioid and gun policies. The study used real US state-level data to simulate the adoption of policies by some states and estimated the effects using different statistical models. It found that with fewer adopting states, type 1 error rates were too high, and most models lacked power. It recommends using cluster-robust standard errors and lagged outcomes to improve model performance. The study aims to help identify best practices for policy evaluation studies.
We study experimental design in large-scale stochastic systems with substantial uncertainty and structured cross-unit interference. We consider the problem of a platform that seeks to optimize supply-side payments p in a centralized marketplace where different suppliers interact via their effects on the overall supply-demand equilibrium, and propose a class of local experimentation schemes that can be used to optimize these payments without perturbing the overall market equilibrium. We show that, as the system size grows, our scheme can estimate the gradient of the platform’s utility with respect to p while perturbing the overall market equilibrium by only a vanishingly small amount. We can then use these gradient estimates to optimize p via any stochastic first-order optimization method. These results stem from the insight that, while the system involves a large number of interacting units, any interference can only be channeled through a small number of key statistics, and this structure allows us to accurately predict feedback effects that arise from global system changes using only information collected while remaining in equilibrium.
We discuss a general roadmap for generating causal inference based on observational studies used to generate real-world evidence. We review targeted minimum loss estimation (TMLE), which provides a general template for the construction of asymptotically efficient plug-in estimators of a target estimand for realistic (i.e., infinite-dimensional) statistical models. TMLE is a two-stage procedure that first involves using ensemble machine learning termed super-learning to estimate the relevant stochastic relations between the treatment, censoring, covariates and outcome of interest. The super-learner allows one to fully utilize all the advances in machine learning (in addition to more conventional parametric model based estimators) to build a single most powerful ensemble machine learning algorithm. We present the Highly Adaptive Lasso as an important machine learning algorithm to include.
In the second step, the TMLE involves maximizing a parametric likelihood along a so-called least favorable parametric model through the super-learner fit of the relevant stochastic relations in the observed data. This second step bridges the state of the art in machine learning to estimators of target estimands for which statistical inference is available (i.e., confidence intervals, p-values, etc.). We also review recent advances in collaborative TMLE in which the fit of the treatment and censoring mechanism is tailored w.r.t. performance of the TMLE. We also discuss asymptotically valid bootstrap-based inference. Simulations and data analyses are provided as demonstrations.
We describe different approaches for specifying models and prior distributions for estimating heterogeneous treatment effects using Bayesian nonparametric models. We make an affirmative case for direct, informative (or partially informative) prior distributions on heterogeneous treatment effects, especially when treatment effect size and treatment effect variation is small relative to other sources of variability. We also consider how to provide scientifically meaningful summaries of complicated, high-dimensional posterior distributions over heterogeneous treatment effects with appropriate measures of uncertainty.
Climate change mitigation has traditionally been analyzed as some version of a public goods game (PGG) in which a group is most successful if everybody contributes, but players are best off individually by not contributing anything (i.e., “free-riding”)—thereby creating a social dilemma. Analysis of climate change using the PGG and its variants has helped explain why global cooperation on GHG reductions is so difficult, as nations have an incentive to free-ride on the reductions of others. Rather than inspire collective action, it seems that the lack of progress in addressing the climate crisis is driving the search for a “quick fix” technological solution that circumvents the need for cooperation.
This document discusses various types of academic writing and provides tips for effective academic writing. It outlines common academic writing formats such as journal papers, books, and reports. It also lists writing necessities like having a clear purpose, understanding your audience, using proper grammar and being concise. The document cautions against plagiarism and not proofreading. It provides additional dos and don'ts for writing, such as using simple language and avoiding filler words. Overall, the key message is that academic writing requires selling your ideas effectively to the reader.
Machine learning (including deep and reinforcement learning) and blockchain are two of the most noticeable technologies in recent years. The first is the foundation of artificial intelligence and big data, and the second has significantly disrupted the financial industry. Both technologies are data-driven, and thus there is rapidly growing interest in integrating them for more secure and efficient data sharing and analysis. In this paper, we review the research on combining blockchain and machine learning technologies and demonstrate that they can collaborate efficiently and effectively. In the end, we point out some future directions and expect more research on deeper integration of the two promising technologies.
In this talk, we discuss QuTrack, a Blockchain-based approach to track experiment and model changes primarily for AI and ML models. In addition, we discuss how change analytics can be used for process improvement and to enhance the model development and deployment processes.
Climate Extremes Workshop - Employing a Multivariate Spatial Hierarchical Model to Characterize Extremes with Application to US Gulf Coast Precipitation - Brook Russell, May 16, 2018
1. Employing a multivariate spatial hierarchical model to characterize extremes with application to US Gulf Coast precipitation
Brook T. Russell, CU Department of Mathematical Sciences
2. Outline
Introduction
Hurricane Harvey
Motivating Questions
Modeling Procedure
MV Spatial Model
Inference
Analysis
Precipitation Data and Covariate
Estimating Spatial Fields
Estimating Quantities of Interest
Discussion
4. Motivating Questions
1. How unusual was this event?
2. What is the probability of observing another event of this
magnitude in the US GC region?
3. What is the nature of the relationship between GoM SST and
precipitation extremes in the US GC region?
4. How can we account for “storm” level dependence using a
relatively simple spatial model?
(Image credits: NASA; Richard Carson, Reuters)
5. Outline
Introduction
Hurricane Harvey
Motivating Questions
Modeling Procedure
MV Spatial Model
Inference
Analysis
Precipitation Data and Covariate
Estimating Spatial Fields
Estimating Quantities of Interest
Discussion
6. MV Spatial Model
Capture spatial signal via MV spatial model
Spatially model GEV parameters
Use pointwise MLEs and covariance information as input
Use two-stage inference procedure
Approach similar to Holland et al. (2000), Tye and Cooley
(2015)
Setup:
Yt(s) – seasonal 7-day max precip at time t for s ∈ D ⊂ R²
Assume Yt(s) approximately follows GEV(µt(s), σt(s), ξ(s))
Idea: incorporate GoM SST into location and scale parameters
Goal: estimate parameters ∀ s ∈ D, observed and unobserved
7. MV Spatial Model
At location s and time t, define the GEV parameters via
µt(s) = θ1(s) + SSTtθ2(s)
log σt(s) = θ3(s) + SSTtθ4(s)
ξ(s) = θ5(s)
For θ(s) = (θ1(s), θ2(s), θ3(s), θ4(s), θ5(s))T at location s,
assume
θ(s) = β + η(s)
Mean parameter values over region:
β = (β1, β2, β3, β4, β5)T
Spatially correlated random effects:
η(s) = (η1(s), η2(s), η3(s), η4(s), η5(s))T
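To make this parameterization concrete, here is a minimal sketch (not the authors' code; the function names, starting values, and optimizer choice are my own assumptions) of the single-station GEV fit used later in stage one of inference, with the location and log-scale linear in the SST covariate:

```python
import numpy as np
from scipy.optimize import minimize

def gev_negloglik(theta, y, sst):
    """Negative log-likelihood for Y_t ~ GEV(mu_t, sigma_t, xi), where
    mu_t = theta1 + SST_t*theta2, log(sigma_t) = theta3 + SST_t*theta4, xi = theta5."""
    t1, t2, t3, t4, xi = theta
    mu = t1 + sst * t2
    sigma = np.exp(t3 + sst * t4)
    z = (y - mu) / sigma
    if abs(xi) < 1e-8:                       # Gumbel limit as xi -> 0
        return np.sum(np.log(sigma) + z + np.exp(-z))
    arg = 1.0 + xi * z
    if np.any(arg <= 0.0):                   # observation outside the GEV support
        return np.inf
    return np.sum(np.log(sigma) + (1.0 + 1.0 / xi) * np.log(arg) + arg ** (-1.0 / xi))

def fit_station(y, sst):
    """Pointwise MLE theta_hat(s_l) from seasonal 7-day maxima y and the SST covariate."""
    start = np.array([np.mean(y), 0.0, np.log(np.std(y)), 0.0, 0.1])
    res = minimize(gev_negloglik, start, args=(y, sst), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x
```

Applying such a fit at each station gives the MLEs θ̂(sl) that enter stage one of the inference (slide 9).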
8. Spatial Model
Use coregionalization model (Wackernagel, 2003)
η(s) = A δ(s)
for δ(s) = (δ1(s), δ2(s), δ3(s), δ4(s), δ5(s))T
A: lower triangular matrix (Finley et al., 2008)
δi : indep. second-order stationary GPs with mean 0 and
covariance function
Cov(δi(s), δi(s′)) = exp(−||s − s′|| / ρi)
Assumes stationarity and isotropy
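The coregionalization structure implies a closed-form cross-covariance, Cov(ηj(s), ηk(s′)) = Σi A[j,i] A[k,i] exp(−||s − s′||/ρi). A minimal sketch (names are my own assumptions) of the resulting covariance matrix ΩA,ρ for the stacked parameter vector, using the parameter-major ordering of slide 10:

```python
import numpy as np

def coregionalization_cov(coords, A, rho):
    """Omega_{A,rho} for the stacked vector (theta_1(s_1..s_L), ..., theta_5(s_1..s_L)).
    coords: (L, 2) station locations; A: 5x5 lower-triangular matrix; rho: 5 range parameters.
    Cov(theta_j(s_l), theta_k(s_m)) = sum_i A[j,i]*A[k,i]*exp(-||s_l - s_m||/rho[i])."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)  # L x L distance matrix
    L = coords.shape[0]
    omega = np.zeros((5 * L, 5 * L))
    for i in range(5):
        a_i = A[:, i]
        omega += np.kron(np.outer(a_i, a_i), np.exp(-d / rho[i]))  # (a_i a_i^T) kron R_i
    return omega
```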
9. Inference – Stage One
First stage of inference:
Obtain MLEs θ̂(sl) = (θ̂1(sl), θ̂2(sl), θ̂3(sl), θ̂4(sl), θ̂5(sl))T at station l ∈ {1, ..., L}
Assume
θ̂(sl) = θ(sl) + ε(sl)
Estimation error (indep. of η):
ε(sl) = (ε1(sl), ε2(sl), ε3(sl), ε4(sl), ε5(sl))T
Further assume
(ε1(s1), ..., ε1(sL), ..., ε5(s1), ..., ε5(sL))T ∼ N(0, W)
10. Resulting Hierarchical Model
Define:
Θ = (θ1(s1), ..., θ1(sL), ..., θ5(s1), ..., θ5(sL))T
Θ̂ = (θ̂1(s1), ..., θ̂1(sL), ..., θ̂5(s1), ..., θ̂5(sL))T
Hierarchical model:
Θ̂ | Θ ∼ N(Θ, W) and Θ ∼ N(β ⊗ 1L, ΩA,ρ)
Marginal model:
Θ̂ ∼ N(β ⊗ 1L, ΩA,ρ + W)
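Stage two (next slide) maximizes the likelihood of this marginal model in β, A, and ρ with W held fixed. A minimal sketch of that objective (names are my own assumptions, and the Ω construction repeats the coregionalization covariance sketched above):

```python
import numpy as np
from scipy.stats import multivariate_normal

def marginal_negloglik(beta, A, rho, theta_hat, coords, W):
    """Stage-two objective: -log N(theta_hat; beta kron 1_L, Omega_{A,rho} + W),
    with the station MLEs theta_hat stacked in parameter-major order."""
    L = coords.shape[0]
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    omega = sum(np.kron(np.outer(A[:, i], A[:, i]), np.exp(-d / rho[i])) for i in range(5))
    mean = np.kron(beta, np.ones(L))               # beta kron 1_L
    return -multivariate_normal.logpdf(theta_hat, mean=mean, cov=omega + W)
```

This could be passed to a generic optimizer, or optimized sequentially over subsets of the parameters as described on the next slide.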
11. Inference – Stage Two
Use MLEs and W as input
Estimate β, A, and ρ via sequential ML
Results in estimates β̃, Ã, and ρ̃
W is held fixed during this estimation
12. Estimating W
First Approach: Wbd
Assume ε(sl) is independent of ε(sl′) for all l ≠ l′
Results in banded W (sparse); ignores “storm” level
dependence
Could use ML based covariance estimates at each location
Second Approach: Wbs
No assumptions on W – allow for “storm” level dependence
Estimate W via NP block bootstrap; seasons are blocks
Obtain Wbs via empirical covariance matrix of BS ests (see the sketch below)
Third Approach: Regularize Wbs
Wbs is noisy, consider regularization methods
Two methods:
Schäfer and Strimmer (2005) method
Covariance tapering (Furrer et al., 2006; Katzfuss et al., 2016)
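A minimal sketch of the second approach (the seasonal block bootstrap for Wbs); it assumes a fit_station(y, sst) routine returning the five GEV MLEs for one station, and the array shapes and names are my own:

```python
import numpy as np

def bootstrap_W(Y, sst, fit_station, n_boot=500, seed=None):
    """Nonparametric block bootstrap estimate Wbs of the covariance of the stacked
    station MLEs. Y: (T seasons x L stations) seasonal 7-day maxima; sst: length-T
    covariate. Whole seasons are resampled, so within-season ("storm" level)
    dependence across stations is preserved in the bootstrap."""
    rng = np.random.default_rng(seed)
    T, L = Y.shape
    ests = np.empty((n_boot, 5 * L))
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)            # resample whole seasons with replacement
        for l in range(L):
            # parameter-major stacking: positions l, l+L, ..., l+4L hold theta_1..theta_5 at s_l
            ests[b, l::L] = fit_station(Y[idx, l], sst[idx])
    return np.cov(ests, rowvar=False)               # empirical covariance of bootstrap MLEs
```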
13. Potential Regularization Methods
1. Regularize via Schäfer and Strimmer (2005)
Wreg = αTtarg + (1 − α)Wbs for α ∈ [0, 1]
Ttarg is a known “target” covariance matrix
Wreg is convex combination of Ttarg and Wbs
2. Regularize via covariance tapering (Furrer et al., 2006; Katzfuss
et al., 2016)
Wtap = Wbs ◦ Ttap
◦ is Hadamard (Schur) product
Ttap is a correlation matrix based on covariance function with
property
Cov(Z(s), Z(s′)) = 0 ∀ s, s′ such that ||s − s′|| > λ > 0
Ttap is sparse ⇒ Wtap is sparse
14. Using Covariance Tapering
Covariance Functions considered in Furrer et al. (2006):
[Figure: taper correlation functions (Wendland 1, Wendland 2, Spherical) plotted as correlation vs. distance]
Could define Ttap = (1_5 1_5ᵀ) ⊗ ΣW2(λ), where ΣW2(λ) is the correlation matrix among stations obtained from the Wendland 2 function with taper range λ
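A minimal sketch of this taper, assuming one common form of the Wendland 2 function from Furrer et al. (2006); names and the exact taper form are my own assumptions and should be checked against that paper:

```python
import numpy as np

def wendland2(d, lam):
    """A Wendland-2 taper (one common form): equals 1 at distance 0 and is
    exactly zero for distances beyond the taper range lam."""
    r = np.minimum(d / lam, 1.0)
    return (1.0 - r) ** 6 * (35.0 * r ** 2 + 18.0 * r + 3.0) / 3.0

def taper_W(W_bs, coords, lam):
    """Wtap = Wbs o Ttap with Ttap = (1_5 1_5^T) kron Sigma_W2(lam): the same
    station-distance taper is applied to every pair of GEV parameters, so covariances
    between stations farther apart than lam are set to zero (and Wtap is sparse)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)  # station distances
    T_tap = np.kron(np.ones((5, 5)), wendland2(d, lam))
    return W_bs * T_tap                              # Hadamard (Schur) product
```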
15. Illustrative Example
Data for exploratory analysis
Different than data used for analysis presented later
Smaller number of stations
Different definition of season
Different criteria for excluding seasons/stations
Use model with SST in location only
Estimate fields via co-kriging
Consider Wbd , Wbs, and Wtap with several values of λ
22. Thoughts Regarding W
ML based estimates of covariance matrices seem to result in
estimated fields that are too smooth
Unconstrained BS based estimate of W seems to result in
rough estimated fields
How does a non-banded estimate of W impact the spatial
model?
Could we work on correlation scale and transform back later?
Perhaps using banded W with BS based covariances is
enough...
23. Outline
Introduction
Hurricane Harvey
Motivating Questions
Modeling Procedure
MV Spatial Model
Inference
Analysis
Precipitation Data and Covariate
Estimating Spatial Fields
Estimating Quantities of Interest
Discussion
25. Incorporating SST
Monthly TS of avg GoM SST via Hadley Centre Sea Ice and
Sea Surface Temperature data set (Rayner et al., 2003)
between 83°–97°W and 21°–29°N
Exploratory analysis at each station suggests using avg SST
March–June
Centered and scaled SST covariate:
[Figure: time series (1950–2010) and histogram of the centered and scaled GoM SST covariate]
26. Inference and Spatial Interpolation
Two-step Inference Procedure:
1. Assume Yt(s) approximately follows GEV(θ1(s) + SSTt θ2(s), θ3(s) + SSTt θ4(s), θ5(s)); use precip data and the SST series to get station MLEs Θ̂
2. Assume the marginal model Θ̂ ∼ N(β ⊗ 1L, ΩA,ρ + W); use Θ̂ and W to get estimates β̃, Ã, and ρ̃ (via sequential likelihood)
Spatial Interpolation:
Goal: At s0 ∈ D (observed or unobserved), estimate θ(s0)
Use co-kriging and model output (β̃, Ã, and ρ̃) to obtain the estimate θ̃(s0) (see the sketch below)
Estimate spatial fields by interpolating over a grid of points
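A minimal sketch of the plug-in kriging (co-kriging) predictor under the fitted Gaussian model, using the standard conditional-mean formula θ̃(s0) = β̃ + Ω0 (ΩA,ρ + W)⁻¹ (Θ̂ − β̃ ⊗ 1L); the names and parameter-major ordering are assumptions carried over from the earlier sketches, not necessarily the exact implementation used:

```python
import numpy as np

def cokrige(s0, coords, theta_hat, beta, A, rho, W):
    """Predict theta(s0) = (theta_1(s0), ..., theta_5(s0)) at a new location s0 from the
    stacked station MLEs theta_hat (parameter-major order) and stage-two estimates."""
    L = coords.shape[0]
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)  # station-station distances
    d0 = np.linalg.norm(coords - s0[None, :], axis=-1)                    # s0-to-station distances
    omega = np.zeros((5 * L, 5 * L))
    cross = np.zeros((5, 5 * L))
    for i in range(5):
        aa = np.outer(A[:, i], A[:, i])
        omega += np.kron(aa, np.exp(-d / rho[i]))                 # Omega_{A,rho}
        cross += np.kron(aa, np.exp(-d0 / rho[i]).reshape(1, L))  # cross-covariance Omega_0
    resid = theta_hat - np.kron(beta, np.ones(L))
    return beta + cross @ np.linalg.solve(omega + W, resid)
```

Repeating this over a grid of locations s0 gives the interpolated parameter fields.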
27. Characterizing Precipitation Extremes
Using Parameter Estimates:
θ̃(s0) can be used to estimate quantities of interest at s0 (see the sketch below)
Return levels – consider 100 yr. RLs
Exceedance probabilities
Observed return periods
Consider three SST scenarios
“Low” SST = −1
“High” SST = 1
“2017” SST ≈ 1.71
Methods for quantifying uncertainty
Delta method – simple but has known problems
Profile likelihood – challenges using at unobserved locations
Bootstrapping – parametric vs NP
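Given θ̃(s0) and an SST value, return levels and exceedance probabilities follow from the standard GEV formulas. A minimal sketch (assuming ξ ≠ 0; function names are my own):

```python
import numpy as np

def gev_params(theta, sst):
    """GEV parameters at one location for a given centered-and-scaled SST value."""
    mu = theta[0] + sst * theta[1]
    sigma = np.exp(theta[2] + sst * theta[3])
    return mu, sigma, theta[4]

def return_level(theta, sst, T=100.0):
    """T-year return level (exceeded in a season with probability 1/T, for xi != 0):
    mu - (sigma/xi) * (1 - (-log(1 - 1/T))**(-xi))."""
    mu, sigma, xi = gev_params(theta, sst)
    return mu - sigma / xi * (1.0 - (-np.log(1.0 - 1.0 / T)) ** (-xi))

def exceed_prob(theta, sst, level):
    """P(seasonal 7-day max > level) = 1 - exp(-(1 + xi*(level - mu)/sigma)**(-1/xi))."""
    mu, sigma, xi = gev_params(theta, sst)
    arg = 1.0 + xi * (level - mu) / sigma
    if arg <= 0.0:
        return 1.0 if xi > 0 else 0.0   # below the lower endpoint / above the upper endpoint
    return 1.0 - np.exp(-arg ** (-1.0 / xi))
```

For example, return_level(theta_tilde, sst=1.71) would give a location's 100-year return level under the "2017" SST scenario.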
33. 100 Year RL Ests
Pointwise 90% CIs for “2017” SST (based on parametric
bootstrap)
[Maps: pointwise 90% CI lower limits (color scale 20–50) and upper limits (color scale 30–80) for the 100-year return level under "2017" SST]
34. Comparing 100 Year RL Ests
Ratio: “High” SST versus “Low” SST
[Map: ratio of 100-year return levels, "High" SST vs "Low" SST (color scale 0.7–1.3)]
35. Comparing 100 Year RL Ests
Pointwise 90% CIs for ratio of 100 yr RLs (“High” SST versus
“Low” SST)
[Maps: pointwise 90% CI lower and upper limits for the ratio of 100-year RLs, "High" vs "Low" SST (color scale 0.7–1.3)]
36. 100 Year RL Ests
Ratio: “2017” SST versus “Low” SST
[Map: ratio of 100-year return levels, "2017" SST vs "Low" SST (color scale 0.7–1.3)]
37. Comparing 100 Year RL Ests
Pointwise 90% CIs for ratio of 100 yr RLs (“2017” SST versus
“Low” SST)
[Maps: pointwise 90% CI lower and upper limits for the ratio of 100-year RLs, "2017" vs "Low" SST (color scale 0.4–1.6)]
38. 2017 Event in Houston Area
Approximately 100cm in Houston area
Observed return periods for this amount in downtown
Houston based on spatial model
SST    Observed Return Period    CI Lower    CI Upper
Low    8512.24                   2079.44     70123.68
High   3471.28                   1059.97     20659.49
2017   2549.66                   404.20      6435.60
39. 2017 Event in Houston Area
Approximately 100cm in Houston area
Estimate probability of exceeding this amount in downtown
Houston based on spatial model
SST    Est. Exceedance Prob.    RR vs "Low"    RR CI Lower    RR CI Upper
Low    0.000117                 –              –              –
High   0.000288                 2.45           1.47           4.66
2017   0.000392                 3.34           2.30           24.61
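(Up to rounding, these exceedance probabilities are the reciprocals of the observed return periods on the previous slide; e.g., 1/8512.24 ≈ 0.000117.)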
40. Chances of observing another event of this magnitude
Idea: use annual avg precip as baseline
Houston: 70 cm is 53% of the annual avg., and 48 cm is 36.5% of the annual avg. (implying an annual average of roughly 132 cm)
PRISM annual avg. precip map¹
[Map: PRISM annual average precipitation (color scale 50–200)]
¹ PRISM Climate Group, Oregon State University, http://prism.oregonstate.edu
41. Estimated Exceedance Probabilities
Based on “Low” SST
[Maps: estimated probability of exceeding 53% of annual avg (scale 0–0.001) and 36.5% of annual avg (scale 0–0.015)]
42. Estimated Exceedance Probabilities
Based on “High” SST
[Maps: estimated probability of exceeding 53% of annual avg (scale 0–0.001) and 36.5% of annual avg (scale 0–0.015)]
43. Estimated Exceedance Probabilities
Based on “2017” SST
[Maps: estimated probability of exceeding 53% of annual avg (scale 0–0.001) and 36.5% of annual avg (scale 0–0.015)]
44. Closer Look at Five Locations
[Map: five locations marked: Houston, New Orleans, San Antonio, Tallahassee, Atlanta]
45. Houston, TX
[Plot (Houston): estimated 7-day exceedance probability vs. proportion of average total annual precipitation (0.025–0.225, i.e., 3.29–29.58 cm) for Low, High, and 2017 SST]
46. New Orleans, LA
[Plot (New Orleans): estimated 7-day exceedance probability vs. proportion of average total annual precipitation (0.025–0.225, i.e., 4.01–36.1 cm) for Low, High, and 2017 SST]
47. San Antonio, TX
[Plot (San Antonio): estimated 7-day exceedance probability vs. proportion of average total annual precipitation (0.025–0.225, i.e., 1.9–17.09 cm) for Low, High, and 2017 SST]
48. Tallahassee, FL
[Plot (Tallahassee): estimated 7-day exceedance probability vs. proportion of average total annual precipitation (0.025–0.225, i.e., 3.54–31.85 cm) for Low, High, and 2017 SST]
49. Atlanta, GA
[Plot (Atlanta): estimated 7-day exceedance probability vs. proportion of average total annual precipitation (0.025–0.225, i.e., 3.27–29.47 cm) for Low, High, and 2017 SST]
50. Outline
Introduction
Hurricane Harvey
Motivating Questions
Modeling Procedure
MV Spatial Model
Inference
Analysis
Precipitation Data and Covariate
Estimating Spatial Fields
Estimating Quantities of Interest
Discussion
51. Incorporating GoM SST Projections
SST projections from Alexander et al. (2018):
GoM: 0.2–0.4 °C per decade (1976–2099)
Below 60° Lat.: little change in year-to-year variability
“The shift in the mean was so large in many regions that SSTs
during the last 30 years of the 21st century will always be
warmer than the warmest year in the historical period." (1976–2005)
[Figure: Gulf of Mexico mean SST (Mar–Jun), 1950–2010, in °C]
52. Additional Thoughts and Future Work
Quantifying the degree to which Harvey was unusual:
accounting for spatial extent
Impact of accounting for storm level dependence in W
Regularization methods for W – choice of λ
(Image credits: M. Yam, LA Times, Getty; J. Raedle, Getty Images)
53. References I
Alexander, M. A., Scott, J. D., Friedland, K. D., Mills, K. E., Nye,
J. A., Pershing, A. J., and Thomas, A. C. (2018). Projected sea
surface temperatures over the 21st century: Changes in the
mean, variability and extremes for large marine ecosystem
regions of Northern Oceans. Elem Sci Anth, 6(1).
Finley, A. O., Banerjee, S., Ek, A. R., and McRoberts, R. E.
(2008). Bayesian multivariate process modeling for prediction of
forest attributes. Journal of Agricultural, Biological, and
Environmental Statistics, 13(1):60–83.
Furrer, R., Genton, M. G., and Nychka, D. (2006). Covariance
tapering for interpolation of large spatial datasets. Journal of
Computational and Graphical Statistics, 15(3):502–523.
Holland, D. M., De Oliveira, V., Cox, L. H., and Smith, R. L. (2000).
Estimation of regional trends in sulfur dioxide over the eastern
United States. Environmetrics, 11(4):373–393.
54. References II
Katzfuss, M., Stroud, J. R., and Wikle, C. K. (2016).
Understanding the ensemble Kalman filter. The American
Statistician, 70(4):350–357.
Rayner, N., Parker, D., Horton, E., Folland, C., Alexander, L.,
Rowell, D., Kent, E., and Kaplan, A. (2003). Global analyses of
sea surface temperature, sea ice, and night marine air
temperature since the late nineteenth century. Journal of
Geophysical Research: Atmospheres, 108(D14).
Schäfer, J. and Strimmer, K. (2005). A shrinkage approach to
large-scale covariance matrix estimation and implications for
functional genomics. Statistical Applications in Genetics and
Molecular Biology, 4(1).
Tye, M. R. and Cooley, D. (2015). A spatial model to examine
rainfall extremes in Colorado’s Front Range. Journal of
Hydrology, 530(Supplement C):15 – 23.
55. References III
Wackernagel, H. (2003). Multivariate Geostatistics. Springer
Science & Business Media.