Kriging is an optimal interpolation technique that estimates values at unmeasured locations based on measured data from nearby locations. It assigns weights to surrounding data points based on their distances and spatial covariance, accounting for clustering of data points. Simple kriging assumes a known mean and estimates values as a weighted average of residuals from the mean at nearby locations, with weights chosen to minimize the estimation variance. The document provides an example of applying simple kriging to estimate porosity using six nearby data points.
Kriging is a spatial prediction method that provides the "best linear unbiased estimator" (BLUE). It determines weights for a weighted linear combination of sample points that minimizes the prediction error variance. Ordinary kriging focuses on minimizing this error variance based on a variogram model that describes the spatial correlation between sample points. The kriging system is solved to determine the weights, and the predicted values and kriging variance can then be estimated at desired locations.
The document provides an introduction to geostatistics and variogram analysis. It defines key concepts in geostatistics such as variograms, covariance, correlation, and semivariance. It discusses how these statistics are used to characterize the spatial correlation and continuity of natural phenomena. The document also presents an example analysis of porosity data from an oil field, including exploring the data distribution, computing sample variograms, and fitting theoretical variogram models for use in kriging and stochastic simulation.
This document provides an overview of geostatistics and variogram analysis. It discusses how the variogram describes the spatial correlation of a phenomenon through parameters like the nugget effect and range. Experimental variograms are calculated from data and theoretical models like spherical, exponential, and power models are fitted. The variogram can identify different correlation scales through nested models. Components at different scales can be extracted through kriging. As an example, fertility data from France is analyzed to filter its large-scale spatial structure.
Spatial autocorrelation refers to the similarity of objects near each other in space. It is measured using global and local statistics. Global measures like Moran's I provide a single value for the whole data set, indicating if nearby things tend to be more similar. Local measures like LISA (Local Indicators of Spatial Association) calculate a statistic for each observation to identify local clusters, or "hot spots" and "cold spots". Moran's I is a correlation between a value and the average of its neighbors, while Geary's C is based on squared differences between neighboring values. LISA extends these global concepts to detect significant spatial autocorrelation for individual locations.
This document provides an agenda for a presentation on geostatistics for mineral deposits. The presentation will cover topics such as sampling, geostatistics part 1 and 2, and estimations. It will include breaks between sessions and conclude with a discussion period. Sampling topics include an overview of sampling theory and nomographs, while geostatistics sessions will cover variograms, kriging, and simulations. Estimation methods like inverse distance, kriging, and recoverable resources will also be discussed.
Geophysical logging techniques used to support environmental site remediation. Also referred to as borehole geophysical logging and geophysical well logging, the techniques support site characterization and remedial design efforts at properties with groundwater impacted by discharges of contaminants, including Superfund (CERCLA) and other sites where dense, non-aqueous phase liquids (DNAPL) may be present in fractured bedrock and unconsolidated aquifers. The logging methods described include: Natural Gamma; Caliper; Acoustic Televiewer (ATV), also known as Acoustic Borehole Imager (ABI); Optical Televiewer (OTV), also known as Optical Borehole Imager (OBI); Electrical Resistivity, also known as Normal Resistivity; Single Point Resistance; Fluid Resistivity; Fluid Temperature; Heat Pulse Flow Meter (HPFM). Information is provided about how geophysical logs can assist in developing a Conceptual Site Model (CSM), with attention to structural geologic constraints on groundwater monitoring system design, referencing example conditions from the dipping sedimentary rocks of the Newark Basin and unconsolidated sediments of the New Jersey Coastal Plain. The presentation is meant to be of use to a wide range of investigators, including geologists, hydrogeologists, engineers and regulators responsible for site remediation.
Spheroid, datum, projection, and coordinate systems are used to locate positions on Earth. A spheroid is a mathematical model that approximates the Earth's shape as an oblate spheroid. A datum defines the reference frame for latitude and longitude coordinates and relates the spheroid to the Earth's center. Projections transform 3D spheroid coordinates onto a 2D surface like a map, introducing some distortion of shapes, areas, distances or directions. Common projections include transverse Mercator, UTM, and Lambert conformal conic. Coordinate systems then allow measurement of positions on the projected 2D surface. Understanding these concepts is important for accurately locating geographic features.
This document provides an introduction to geographic information systems (GIS). It begins by defining some basic map concepts like features, scale, and symbology. It then discusses what GIS is, how it works, and what makes it special. GIS allows users to capture, store, manipulate, analyze and visualize spatial data. It integrates data from different sources into interactive maps. Users can perform tasks like querying attributes, analyzing networks, modeling 3D surfaces, interpolating between data points, and complex spatial analysis. Overall, the document outlines the core components and capabilities of GIS as a tool for visualizing and solving real-world problems involving geographic data.
Application of Basic Remote Sensing in Geology - Uzair Khan
Application of basic remote sensing in geology. This presentation tries to discriminate the lithology in a Landsat-7 scene located in Karachi West. Although other, more advanced methodologies are available to discriminate the rock types, here just band ratios and simple band combinations are used for lithology identification.
Spatial interpolation techniques are used to estimate values at unsampled locations based on known sample points. There are two main types of interpolation methods: global methods which apply a single mathematical function to all points, and local methods which apply functions to subsets of points. Specific interpolation methods covered include Thiessen polygons, triangulated irregular networks, spatial moving average, trend surfaces, inverse distance weighting, and kriging. Kriging is similar to inverse distance weighting but chooses its weights by minimizing the estimation variance.
Optical remote sensing uses visible, near infrared, and shortwave infrared sensors to form images of the Earth's surface by detecting solar radiation reflected from targets. Different materials reflect and absorb light differently at different wavelengths, allowing targets to be differentiated by their spectral signatures. Optical remote sensing systems are classified as panchromatic, multispectral, hyperspectral, or superspectral depending on the number of spectral bands measured.
This document discusses remote sensing raster data formats and image processing. It introduces three common raster data formats: band sequential (BSQ), band interleaved by pixel (BIP), and band interleaved by line (BIL). It then provides examples and explanations of how data is organized for each format. The document also discusses image quality issues like clouds, noise, and describes common image processing techniques like radiometric and geometric correction, enhancement, and classification. Univariate and multivariate image statistics are introduced for analyzing remote sensing images.
The document discusses different types of scanning systems used to collect remote sensing data. It describes whiskbroom scanners that use rotating mirrors to scan perpendicular to the flight path, building up images line-by-line. Pushbroom scanners use linear detector arrays that collect entire lines of pixels simultaneously as the sensor moves. Circular scanners employ rotating mirrors to scan in circular patterns, while side-scanning uses active radar to illuminate terrain to one side of the flight path. The characteristics of Landsat, SPOT, and sensor technologies are also overviewed.
This document provides an overview of basic statistical concepts and terms used in 3D geological modeling and uncertainty analysis, including:
1. It defines key terms like variability, measures of variability (range, interquartile range, standard deviation), and types of data (discrete, continuous).
2. It describes statistical measures like the mean, median, mode, and different types of averages (arithmetic, geometric, harmonic).
3. It covers types of variables, measurement scales, correlational studies, descriptive and inferential statistics, and sampling error.
4. It discusses probability, probability distributions, and introduces basic geostatistical concepts like variograms and kriging which are used in 3D geological modeling.
1. Geostatistical analysis uses spatial statistical techniques to analyze geospatial datasets and create continuous predictive surfaces. It characterizes the spatial dependence and variability in the datasets.
2. Key steps include exploring the statistical properties of the data, analyzing spatial dependencies through semivariograms and covariance clouds, selecting and validating a surface model, and assessing the output surface and uncertainty.
3. Semivariograms quantify the spatial autocorrelation in the data by plotting the semivariance between point pairs against their separation distance and direction. The range, sill, and nugget parameters describe the autocorrelation structure.
Image classification involves sorting pixels into categories based on their spectral signatures. There are two main approaches: unsupervised classification automatically groups similar pixels into spectral classes which the analyst then labels, while supervised classification uses training sites of known land cover to classify pixels based on their likelihood of belonging to a class. The document provides details on processes, techniques, advantages and disadvantages of each approach.
Understanding Coordinate Systems and Projections for ArcGIS - John Schaeffer
Everything you need to know to work with coordinate systems and projecting data in ArcGIS. The presentation starts by explaining the terminology, and then discusses the details you need to know to actually work successfully with coordinate systems, use the proper projections, and geographic transformations. This is a very practical look at a complex subject.
This document discusses band ratioing, image differencing, and principal and canonical component analysis techniques in remote sensing. Band ratioing involves dividing pixel values in one band by another band to enhance spectral differences. Image differencing calculates differences between images after alignment. Principal component analysis transforms correlated spectral data into fewer uncorrelated bands retaining most information, while canonical component analysis aims to maximize separability of user-defined features. These techniques can help analyze multispectral and hyperspectral remote sensing data.
A Digital Elevation Model (DEM) is the digital representation of the land surface elevation with respect to a reference datum. DEM is frequently used to refer to any digital representation of a topographic surface, and it is the simplest form of digital representation of topography. Today, GIS applications depend heavily on DEMs.
Three fundamental vector types exist in GISs: points, lines, and polygons. Points contain a single coordinate pair and represent discrete features. Lines are composed of multiple connected points and represent linear features. Polygons are created by multiple lines that loop back and represent two-dimensional features bounded by closed lines.
The gravity method involves measuring variations in the Earth's gravitational field to determine subsurface density variations. Gravity surveys measure differences in gravitational attraction at surface locations. After collecting data at regular intervals, corrections are applied for drift, elevation, tides and topography. The corrected anomalies are analyzed to infer subsurface geology, locating structures like faults, voids or buried valleys. Common applications include engineering, environmental and geothermal studies.
This document outlines the syllabus for a course on Geographic Information Systems (GIS). The course is divided into 5 units that cover fundamentals of GIS, spatial data models, data input and topology, data analysis, and applications of GIS. The objectives are to introduce GIS fundamentals and processes of data management, analysis, and output. Students will learn about spatial data structures, data quality standards, and tools for data input, analysis, and management. The course aims to provide knowledge of GIS concepts and techniques.
Supervised and unsupervised classification techniques for satellite imagery i... - gaup_geo
This document compares supervised and unsupervised classification techniques for satellite imagery analysis of land cover in the Porto Alegre region of Brazil. Supervised classification involved collecting over 500 training sites to create signatures for 8 land cover classes. Unsupervised classification used ISOcluster to generate 36 spectral classes which were grouped into the 8 informational classes. Both classifications underwent post-processing including majority filtering and polygon elimination to produce final 1-hectare minimum mapping unit vector maps. Accuracy assessments found the supervised classification to be more accurate at 76% compared to 48% for the unsupervised method.
This document discusses stereoscopy and parallax measurement in aerial photography. Stereoscopy uses two photographs of the same ground area taken from separate positions to create a stereo pair that enables three-dimensional viewing. Parallax is the displacement of an object caused by a change in the point of observation. Stereoscopic parallax occurs when photographs are taken of the same object from different positions, allowing measurement of differences in elevation.
The document discusses the application of kriging in groundwater studies. Kriging is a geostatistical technique used to interpolate the value of a random field between known data points. It provides the best linear unbiased prediction and honors the observed spatial structure of the data. Two case studies are summarized that demonstrate how kriging can be used to generate groundwater level contour maps and correlate declining water levels with land cover changes detected from satellite images. The studies show that kriging produces more accurate representations of spatial variability in groundwater compared to other methods.
The document discusses issues with calculating sample variograms when trends are present in data. It explains that trends can make variograms appear messy with no sill. One way to deal with trends is to filter them out by standardizing the data to remove trends in both the mean and variance. This creates residual values with a mean of 0 and variance of 1. Calculating the variogram on these residuals using the correlogram method filters out the trends and estimates the spatial continuity of the underlying residuals. The correlogram provides a robust estimator that works whether trends are present or not.
Slides from the social statistics course of Prof. Giorgio Garau, for the degree program in Social Work. Indices of position, variability, and mutual variability: concentration and heterogeneity.
The role of statistics in information, by Lucia Schirru.
This document presents an introduction to the basic concepts of statistics and geostatistics. It covers topics such as classical statistics versus geostatistics, descriptive measures such as the mean, median, and variance, and probability distributions such as the normal and lognormal. The objective is to familiarize students with these tools for solving geological problems and estimating resources in mineral deposits.
Phase behavior of natural gas - Carla Quispe
This document describes the phase behavior of hydrocarbons in oil and natural gas reservoirs. It explains key concepts such as phase, temperature, pressure, and phase diagrams for single-component, binary, and ternary systems. It also analyzes how the liquid and gas phases vary with changes in pressure and temperature, and how this affects oil and gas production.
This document presents an introduction to linear geostatistics. It explains key concepts such as regionalized variables, field, and support. It details the objectives of the theory of regionalized variables: to express the structural characteristics of a variable through a mathematical model and to solve the estimation problem from sampling data. Finally, it introduces the mathematical model of random functions to model the locally random but globally structured behavior of regionalized variables.
The document discusses using nearest neighbor analysis in Matlab to find soda recommendations based on attributes like sweetness, carbonation, and price. It explains how to use the dsearchn and knnsearch functions to find the single nearest neighbor or multiple nearest neighbors of input soda preferences. It also describes using a kd-tree and the rangesearch function to efficiently find all sodas within a specified distance of input ideals.
This document describes the Kumaraswamy generalized (Kw-G) distribution, a new family of continuous probability distributions defined on the interval (0,1). The Kw-G distribution is constructed by applying the Kumaraswamy distribution to an existing parent distribution with cumulative distribution function G(x). Properties of the Kw-G distribution such as its probability density function, moments, order statistics, and L-moments are expressed in terms of the parent distribution G(x). Several special cases of the Kw-G distribution are also discussed, including the Kw-normal, Kw-Weibull, and Kw-gamma distributions.
This document presents results from a lattice QCD calculation of the proton isovector scalar charge (gs) at two light quark masses. The calculation uses domain-wall fermions and Iwasaki gauge actions on a 323x64 lattice with a spacing of 0.144 fm. Ratios of three-point to two-point correlation functions are formed and fit to a plateau to extract gs. Values of gs are obtained for quark masses of 0.0042 and 0.001, and all-mode averaging is used for the lighter mass. Chiral perturbation theory will be used to extrapolate gs to the physical quark mass. Preliminary results for gs at the unphysical quark masses are reported in lattice units.
The document discusses a new family of generalized distributions called the Kumaraswamy distributions (Kw-G distributions). These distributions extend common distributions like the normal, Weibull, and gamma distributions by introducing additional shape parameters. The key properties of the Kw-G distributions are:
1) They are defined on the interval (0,1) and are derived by applying a quantile function to a parent continuous distribution G(x).
2) Important special cases are the Kw-normal, Kw-Weibull, and Kw-gamma distributions.
3) Moments and probability weighted moments of the Kw-G distributions can be written as infinite weighted sums of the moments of the parent G distribution.
This first lecture describes what EMT is, its history of evolution, and the main personalities who discovered theories related to it. It covers applications of EMT; scalars and vectors and their algebra; coordinate systems; fields; Coulomb's law and electric field intensity; volume charge distribution; electric flux density; and Gauss's law and divergence.
The document summarizes key concepts from Chapter 1 of the textbook "Engineering Electromagnetics - 8th Edition" by William H. Hayt, Jr. & John A. Buck. It introduces scalar and vector quantities, describes vector algebra including addition, subtraction and multiplication. It also discusses various coordinate systems used to describe the location and direction of vectors including rectangular, cylindrical and spherical coordinate systems. Transformations between Cartesian and other coordinate systems are shown.
This document discusses subspace clustering with missing data. It summarizes two algorithms for solving this problem: 1) an EM-type algorithm that formulates the problem probabilistically and iteratively estimates the subspace parameters using an EM approach. 2) A k-means form algorithm called k-GROUSE that alternates between assigning vectors to subspaces based on projection residuals and updating each subspace using incremental gradient descent on the Grassmannian manifold. It also discusses the sampling complexity results from a recent paper, showing subspace clustering is possible without an impractically large sample size.
Principal components analysis (PCA) is an algorithm that identifies the subspace where data approximately lies in a way that requires only an eigenvector calculation. PCA works by finding the directions, called principal components, that maximize the variance of the projected data points. It does this by computing the eigenvectors of the covariance matrix of the data, with the top eigenvectors providing the principal components that best explain the variability in the data using a reduced dimensional space. PCA has applications in data compression, dimensionality reduction, noise reduction, and data visualization.
This document discusses fractional order Sallen-Key and KHN filters. It presents an analysis of allocating system poles to control stability for these fractional order filters. The stability analysis considers two different fractional order transfer functions with two different fractional order elements. The number and locations of system poles depends on the fractional orders and transfer function parameters. Numerical, circuit simulation, and experimental results are used to test proposed stability contours.
This document discusses concepts related to estimation variance in resources estimation. It defines extension variance as the variance when extending a value from a known sample or block to an unknown area, and estimation variance as the variance when extending values from multiple known samples or blocks. It provides formulas for calculating extension and estimation variance based on variogram models and the geometry of samples and blocks. Nomograms are presented to aid in calculating variances for different sample/block configurations in 1D, 2D and 3D using spherical variogram models. The concept of global estimation variance, which sums the individual variances, is also introduced.
Linear regression models the relationship between two variables, where one variable is considered the dependent variable and the other is the independent variable. The linear regression line minimizes the sum of the squared distances between the observed dependent variable values and the predicted dependent variable values. This line can be used to predict the dependent variable value based on new independent variable values. Multiple linear regression extends this to model the relationship between a dependent variable and two or more independent variables. Other types of regression models include nonlinear, generalized linear, and exponential regression.
This paper reviews the fundamental concepts and the basic theory of polarization mode dispersion (PMD) in optical fibers. It introduces a unified notation and methodology to link the various views and concepts in Jones space and Stokes space. The discussion includes the relation between Jones vectors and Stokes vectors and how they are used in formulating the Jones matrix as a unitary system matrix.
Surveillance refers to the task of observing a scene, often for lengthy periods, in search of particular objects or particular behaviour. This task has many applications, foremost among them security (monitoring for undesirable behaviour such as theft or vandalism), but increasing numbers of others in areas such as agriculture also exist. Historically, closed-circuit TV (CCTV) surveillance has been mundane and labour intensive, involving personnel scanning multiple screens, but the advent of reasonably priced fast hardware means that automatic surveillance is becoming a realistic task to attempt in real time. Several attempts at this are underway.
This document describes methods for determining whether a time series reflects linear or nonlinear processes. It compares the forecasting performance of linear versus nonlinear models. The two-step procedure involves using simplex projection to identify the best embedding dimension, and then using that dimension in the S-map procedure to assess nonlinearity. Both methods evaluate out-of-sample forecast skill by dividing the time series in half, using one half to build the model and the other to evaluate forecasts.
This document discusses nonparametric density estimation techniques. It begins by noting limitations of histograms for estimating densities and then introduces the kernel density estimator. The kernel density estimator smooths the empirical density by placing a kernel (such as the normal density) over each data point. The smoothing parameter determines the width of the kernels and impacts the tradeoff between bias and variance. Optimal choices minimize the integrated mean squared error. Kernel density estimators can be used to estimate multivariate densities and hazard rates without parametric assumptions about the underlying distributions. Nonparametric regression similarly estimates relationships without specifying a functional form.
The document discusses the Nyquist stability criterion for analyzing the stability of sampled-data control systems. It begins by defining the Nyquist criterion and contour, and how they can be used to determine the number of closed-loop poles outside the unit circle (Z). It then provides an example showing how to apply the Nyquist criterion by plotting the loop gain and counting encirclements of the critical point. The document also discusses modifications to the Nyquist contour when open-loop poles are on the unit circle and defines the Nyquist criterion theorem.
The document proposes a new infinity differentiable weight function, Wε(r), for smoothed particle hydrodynamics (SPH) methods. Wε(r) is based on a smeared-out Heaviside function and satisfies properties required of weight functions, including compact support, continuity, and asymptotic unity. The consistency and estimation errors of SPH interpolation using Wε(r) are analyzed, showing it provides C1 consistency and O(h4) order accuracy. The new weight function is developed to address limitations of existing weight functions like cubic and quartic splines in SPH approaches.
1. The document discusses differential kinematics and how it can be used to examine the pose velocities of robot frames. It also covers Jacobians and how they map between joint and Cartesian space velocities.
2. Specific topics covered include differential transformations, screw velocity matrices, relating velocities in different frames, determining Jacobians for serial and parallel robots, and examining robot singularities.
3. Singularities occur when the mechanism loses mobility in certain directions and joint rates will increase near these configurations. Workarounds include tooling design, changing to joint space control near singularities, and carefully planning paths.
1. KRIGING
C&PE 940, 19 October 2005
Geoff Bohling
Assistant Scientist
Kansas Geological Survey
geoff@kgs.ku.edu
864-2093
Overheads and other resources available at:
http://people.ku.edu/~gbohling/cpe940
2. What is Kriging?
Optimal interpolation based on regression against observed z
values of surrounding data points, weighted according to spatial
covariance values.
Pronunciation: Hard “g” (as in Danie Krige) or soft “g” (à la
Georges Matheron), take your pick
What is interpolation? Estimation of a variable at an unmeasured
location from observed values at surrounding locations. For
example, estimating porosity at u = (2000 m, 4700 m) based on
porosity values at nearest six data points in our Zone A data:
It would seem reasonable to estimate $f(\mathbf{u})$ by a weighted average
$\sum_\alpha \lambda_\alpha f_\alpha$, with weights $\lambda_\alpha$ given by some decreasing function of the
distance, $d_\alpha$, from $\mathbf{u}$ to data point $\alpha$.
3. All interpolation algorithms (inverse distance squared, splines,
radial basis functions, triangulation, etc.) estimate the value at a
given location as a weighted sum of data values at surrounding
locations. Almost all assign weights according to functions that
give a decreasing weight with increasing separation distance.
Kriging assigns weights according to a (moderately) data-driven
weighting function, rather than an arbitrary function, but it is still
just an interpolation algorithm and will give very similar results to
others in many cases (Isaaks and Srivastava, 1989). In particular:
- If the data locations are fairly dense and uniformly distributed
throughout the study area, you will get fairly good estimates
regardless of interpolation algorithm.
- If the data locations fall in a few clusters with large gaps in
between, you will get unreliable estimates regardless of
interpolation algorithm.
- Almost all interpolation algorithms will underestimate the highs
and overestimate the lows; this is inherent to averaging and if
an interpolation algorithm didn’t average we wouldn’t
consider it reasonable
Some advantages of kriging:
- Helps to compensate for the effects of data clustering, assigning
individual points within a cluster less weight than isolated
data points (or, treating clusters more like single points)
- Gives estimate of estimation error (kriging variance), along with
estimate of the variable, Z, itself (but error map is basically a
scaled version of a map of distance to nearest data point, so
not that unique)
- Availability of estimation error provides basis for stochastic
simulation of possible realizations of Z(u)
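As a concrete reference point for the weighted-sum framing above, here is a minimal inverse-distance-squared interpolator (an illustrative sketch, not part of the original notes; the function name is hypothetical). Kriging differs from it only in how the weights are chosen:

```python
import numpy as np

def idw_estimate(d, z, power=2):
    """Inverse-distance weighted estimate of a value at one location.

    d -- distances from the estimation point to the data points (all nonzero)
    z -- data values at those points
    Weights depend only on distance, with no account taken of
    redundancy (clustering) among the data points."""
    w = 1.0 / np.asarray(d, dtype=float) ** power
    w /= w.sum()                 # normalize so the weights sum to 1
    return w @ np.asarray(z)
```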
4. Kriging approach and terminology
Goovaerts, 1997: “All kriging estimators are but variants of the
basic linear regression estimator Z*(u) defined as
$$Z^*(\mathbf{u}) - m(\mathbf{u}) = \sum_{\alpha=1}^{n(\mathbf{u})} \lambda_\alpha(\mathbf{u})\,\bigl[Z(\mathbf{u}_\alpha) - m(\mathbf{u}_\alpha)\bigr]$$”
with
$\mathbf{u}$, $\mathbf{u}_\alpha$: location vectors for the estimation point and one of the neighboring data points, indexed by $\alpha$
$n(\mathbf{u})$: number of data points in the local neighborhood used for estimation of $Z^*(\mathbf{u})$
$m(\mathbf{u})$, $m(\mathbf{u}_\alpha)$: expected values (means) of $Z(\mathbf{u})$ and $Z(\mathbf{u}_\alpha)$
$\lambda_\alpha(\mathbf{u})$: kriging weight assigned to datum $z(\mathbf{u}_\alpha)$ for estimation location $\mathbf{u}$; the same datum will receive a different weight for a different estimation location
$Z(\mathbf{u})$ is treated as a random field with a trend component, $m(\mathbf{u})$, and a residual component, $R(\mathbf{u}) = Z(\mathbf{u}) - m(\mathbf{u})$. Kriging estimates the residual at $\mathbf{u}$ as a weighted sum of residuals at surrounding data points. Kriging weights, $\lambda_\alpha$, are derived from the covariance function or semivariogram, which should characterize the residual component. Distinction between trend and residual somewhat arbitrary; varies with scale.
Development here will follow that of Pierre Goovaerts, 1997,
Geostatistics for Natural Resources Evaluation, Oxford University
Press.
5. We will continue working with our example porosity data,
including looking in detail at results in the six-data-point region
shown earlier:
6. Basics of Kriging
Again, the basic form of the kriging estimator is
$$Z^*(\mathbf{u}) - m(\mathbf{u}) = \sum_{\alpha=1}^{n(\mathbf{u})} \lambda_\alpha(\mathbf{u})\,\bigl[Z(\mathbf{u}_\alpha) - m(\mathbf{u}_\alpha)\bigr]$$
The goal is to determine weights, $\lambda_\alpha$, that minimize the variance of
the estimator,
$$\sigma_E^2(\mathbf{u}) = \mathrm{Var}\{Z^*(\mathbf{u}) - Z(\mathbf{u})\},$$
under the unbiasedness constraint $E\{Z^*(\mathbf{u}) - Z(\mathbf{u})\} = 0$.
The random field (RF) $Z(\mathbf{u})$ is decomposed into residual and trend
components, $Z(\mathbf{u}) = R(\mathbf{u}) + m(\mathbf{u})$, with the residual component
treated as an RF with a stationary mean of 0 and a stationary
covariance (a function of lag, $\mathbf{h}$, but not of position, $\mathbf{u}$):
$$E\{R(\mathbf{u})\} = 0$$
$$\mathrm{Cov}\{R(\mathbf{u}), R(\mathbf{u}+\mathbf{h})\} = E\{R(\mathbf{u}) \cdot R(\mathbf{u}+\mathbf{h})\} = C_R(\mathbf{h})$$
The residual covariance function is generally derived from the
input semivariogram model, $C_R(\mathbf{h}) = C_R(0) - \gamma(\mathbf{h}) = \mathrm{Sill} - \gamma(\mathbf{h})$.
Thus the semivariogram we feed to a kriging program should
represent the residual component of the variable.
The three main kriging variants, simple, ordinary, and kriging with
a trend, differ in their treatments of the trend component, m(u).
7. Simple Kriging
For simple kriging, we assume that the trend component is a
constant and known mean, m(u) = m, so that
$$Z^*_{SK}(\mathbf{u}) = m + \sum_{\alpha=1}^{n(\mathbf{u})} \lambda_\alpha^{SK}(\mathbf{u})\,\bigl[Z(\mathbf{u}_\alpha) - m\bigr]$$
This estimate is automatically unbiased, since $E[Z(\mathbf{u}_\alpha) - m] = 0$, so
that $E[Z^*_{SK}(\mathbf{u})] = m = E[Z(\mathbf{u})]$. The estimation error $Z^*_{SK}(\mathbf{u}) - Z(\mathbf{u})$
is a linear combination of random variables representing residuals
at the data points, $\mathbf{u}_\alpha$, and the estimation point, $\mathbf{u}$:
$$Z^*_{SK}(\mathbf{u}) - Z(\mathbf{u}) = \bigl[Z^*_{SK}(\mathbf{u}) - m\bigr] - \bigl[Z(\mathbf{u}) - m\bigr] = \sum_{\alpha=1}^{n(\mathbf{u})} \lambda_\alpha^{SK}(\mathbf{u})\,R(\mathbf{u}_\alpha) - R(\mathbf{u}) = R^*_{SK}(\mathbf{u}) - R(\mathbf{u})$$
Using rules for the variance of a linear combination of random
variables, the error variance is then given by
$$\sigma_E^2(\mathbf{u}) = \mathrm{Var}\{R^*_{SK}(\mathbf{u})\} + \mathrm{Var}\{R(\mathbf{u})\} - 2\,\mathrm{Cov}\{R^*_{SK}(\mathbf{u}), R(\mathbf{u})\}$$
$$= \sum_{\alpha=1}^{n(\mathbf{u})} \sum_{\beta=1}^{n(\mathbf{u})} \lambda_\alpha^{SK}(\mathbf{u})\,\lambda_\beta^{SK}(\mathbf{u})\,C_R(\mathbf{u}_\alpha - \mathbf{u}_\beta) + C_R(0) - 2 \sum_{\alpha=1}^{n(\mathbf{u})} \lambda_\alpha^{SK}(\mathbf{u})\,C_R(\mathbf{u}_\alpha - \mathbf{u})$$
To minimize the error variance, we take the derivative of the above
expression with respect to each of the kriging weights and set each
derivative to zero. This leads to the following system of equations:
$$\sum_{\beta=1}^{n(\mathbf{u})} \lambda_\beta^{SK}(\mathbf{u})\,C_R(\mathbf{u}_\alpha - \mathbf{u}_\beta) = C_R(\mathbf{u}_\alpha - \mathbf{u}), \qquad \alpha = 1, \ldots, n(\mathbf{u})$$
8. Because of the constant mean, the covariance function for $Z(\mathbf{u})$ is
the same as that for the residual component, $C_R(\mathbf{h}) = C(\mathbf{h})$, so that
we can write the simple kriging system directly in terms of $C(\mathbf{h})$:
$$\sum_{\beta=1}^{n(\mathbf{u})} \lambda_\beta^{SK}(\mathbf{u})\,C(\mathbf{u}_\alpha - \mathbf{u}_\beta) = C(\mathbf{u}_\alpha - \mathbf{u}), \qquad \alpha = 1, \ldots, n(\mathbf{u})$$
This can be written in matrix form as
$$\mathbf{K}\,\boldsymbol{\lambda}_{SK}(\mathbf{u}) = \mathbf{k}$$
where $\mathbf{K}$ is the matrix of covariances between data points, with
elements $K_{i,j} = C(\mathbf{u}_i - \mathbf{u}_j)$, $\mathbf{k}$ is the vector of covariances between
the data points and the estimation point, with elements given by
$k_i = C(\mathbf{u}_i - \mathbf{u})$, and $\boldsymbol{\lambda}_{SK}(\mathbf{u})$ is the vector of simple kriging weights
for the surrounding data points. If the covariance model is licit
(meaning the underlying semivariogram model is licit) and no two
data points are colocated, then the data covariance matrix is
positive definite and we can solve for the kriging weights using
$$\boldsymbol{\lambda}_{SK} = \mathbf{K}^{-1}\mathbf{k}$$
Once we have the kriging weights, we can compute both the
kriging estimate and the kriging variance, which is given by
$$\sigma_E^2(\mathbf{u}) = C(0) - \sum_{\alpha=1}^{n(\mathbf{u})} \lambda_\alpha(\mathbf{u})\,C(\mathbf{u}_\alpha - \mathbf{u}) = C(0) - \boldsymbol{\lambda}^T\mathbf{k}$$
after substituting the kriging weights into the error variance
expression above.
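In code, the whole procedure is a covariance-matrix construction followed by one linear solve. The sketch below (a minimal NumPy illustration with hypothetical function and argument names, not code from the notes) implements the estimator and variance exactly as written above:

```python
import numpy as np

def simple_kriging(K, k, z, m, sill):
    """Simple kriging estimate and variance at one location.

    K    -- (n, n) data-to-data covariances, K[i, j] = C(u_i - u_j)
    k    -- (n,)   data-to-estimation-point covariances, k[i] = C(u_i - u)
    z    -- (n,)   data values at the n neighboring points
    m    -- known, constant mean
    sill -- C(0), the covariance at zero lag
    """
    lam = np.linalg.solve(K, k)      # kriging weights: lambda = K^-1 k
    estimate = m + lam @ (z - m)     # mean plus weighted sum of residuals
    variance = sill - lam @ k        # sigma_E^2(u) = C(0) - lambda^T k
    return estimate, variance
```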
9. So what does all this math do? It finds a set of weights for
estimating the variable value at the location u from values at a set
of neighboring data points. The weight on each data point
generally decreases with increasing distance to that point, in
accordance with the decreasing data-to-estimation covariances
specified in the right-hand vector, k. However, the set of weights
is also designed to account for redundancy among the data points,
represented in the data point-to-data point covariances in the
matrix K. Multiplying k by $\mathbf{K}^{-1}$ (on the left) will downweight
points falling in clusters relative to isolated points at the same
distance.
We will apply simple kriging to our porosity data, using the
spherical semivariogram that we fit before, with zero nugget, a sill
of 0.78, and a range of 4141 m:
10. Since we are using a spherical semivariogram, the covariance
function is given by
$$C(h) = C(0) - \gamma(h) = 0.78 \times \bigl(1 - 1.5\,(h/4141) + 0.5\,(h/4141)^3\bigr)$$
for separation distances, $h$, up to 4141 m, and 0 beyond that range.
The plot below shows the elements of the right-hand vector,
$\mathbf{k} = [0.38, 0.56, 0.32, 0.49, 0.46, 0.37]^T$, obtained from plugging the
data-to-estimation-point distances into this covariance function:
The matrix of distances between the pairs of data points (rounded
to the nearest meter) is given by
Point 1 Point 2 Point 3 Point 4 Point 5 Point 6
Point 1 0 1897 3130 2441 1400 1265
Point 2 1897 0 1281 1456 1970 2280
Point 3 3130 1281 0 1523 2800 3206
Point 4 2441 1456 1523 0 1523 1970
Point 5 1400 1970 2800 1523 0 447
Point 6 1265 2280 3206 1970 447 0
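As a check, the covariance matrix shown on the next slide can be reproduced directly from this distance table. The following sketch (illustrative, not from the notes) applies the spherical covariance function given above to these distances:

```python
import numpy as np

def spherical_cov(h, sill=0.78, a=4141.0):
    """Spherical covariance with zero nugget:
    C(h) = sill * (1 - 1.5*(h/a) + 0.5*(h/a)**3) for h < a, 0 beyond."""
    h = np.asarray(h, dtype=float)
    c = sill * (1.0 - 1.5 * (h / a) + 0.5 * (h / a) ** 3)
    return np.where(h < a, c, 0.0)

# Distances between the six data points (m), from the table above.
D = np.array([[   0, 1897, 3130, 2441, 1400, 1265],
              [1897,    0, 1281, 1456, 1970, 2280],
              [3130, 1281,    0, 1523, 2800, 3206],
              [2441, 1456, 1523,    0, 1523, 1970],
              [1400, 1970, 2800, 1523,    0,  447],
              [1265, 2280, 3206, 1970,  447,    0]])

K = spherical_cov(D)
print(np.round(K, 2))   # reproduces the covariance matrix shown next
```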
11. This translates into a data covariance matrix of
$$\mathbf{K} = \begin{bmatrix} 0.78 & 0.28 & 0.06 & 0.17 & 0.40 & 0.43 \\ 0.28 & 0.78 & 0.43 & 0.39 & 0.27 & 0.20 \\ 0.06 & 0.43 & 0.78 & 0.37 & 0.11 & 0.06 \\ 0.17 & 0.39 & 0.37 & 0.78 & 0.37 & 0.27 \\ 0.40 & 0.27 & 0.11 & 0.37 & 0.78 & 0.65 \\ 0.43 & 0.20 & 0.06 & 0.27 & 0.65 & 0.78 \end{bmatrix}$$
(rounded to two decimal places). Note in particular the relatively
high correlation between points 5 and 6, separated by 447 m. The
resulting vector of kriging weights is
$$\boldsymbol{\lambda} = \mathbf{K}^{-1}\mathbf{k} = \begin{bmatrix} \lambda_1 \\ \lambda_2 \\ \lambda_3 \\ \lambda_4 \\ \lambda_5 \\ \lambda_6 \end{bmatrix} = \begin{bmatrix} 0.1475 \\ 0.4564 \\ -0.0205 \\ 0.2709 \\ 0.2534 \\ -0.0266 \end{bmatrix}$$
Notice that data point 6 is assigned a very small weight relative to
data point 1, even though they are both about the same distance
from the estimation point and have about the same data-point-to-estimation-point
covariance ($k_1 = 0.38$, $k_6 = 0.37$). This is because
data point 6 is effectively “screened” by the nearby data point 5.
Data points 5 and 6 are fairly strongly correlated with each other
and 5 has a stronger correlation with the estimation point, so data
point 6 is effectively ignored. Note that the covariances and thus
the kriging weights are determined entirely by the data
configuration and the covariance model, not the actual data values.
The mean porosity value for the 85 wells is 14.70%, and the
porosity values at the six example wells are 13.84%, 12.15%,
12.87%, 12.68%, 14.41%, and 14.59%. The estimated residual
from the mean at u is given by the dot product of the kriging
weights and the vector of residuals at the data points:
$$R^*(\mathbf{u}) = \boldsymbol{\lambda}^T\mathbf{R} = \begin{bmatrix} 0.15 & 0.46 & -0.02 & 0.27 & 0.25 & -0.03 \end{bmatrix} \begin{bmatrix} -0.86 \\ -2.55 \\ -1.83 \\ -2.01 \\ -0.28 \\ -0.10 \end{bmatrix} = -1.87$$
Adding the mean back into this estimated residual gives an
estimated porosity of $\hat{Z}(\mathbf{u}) = R^*(\mathbf{u}) + m = -1.87 + 14.70 = 12.83\%$.
Similarly, plugging the kriging weights and the vector k into the
expression for the estimation variance gives a variance of 0.238
(squared %). Given these two pieces of information, we can
represent the porosity at u = (2000 m, 4700 m) as a normal
distribution with a mean of 12.83% and a standard deviation of
0.49%. Note that, like the kriging weights, the variance estimate
depends entirely on the data configuration and the covariance
function, not on the data values themselves. The estimated kriging
variance would be the same regardless of whether the actual
porosity values in the neighborhood were very similar or highly
variable. The influence of the data values, through the fitting of
the semivariogram model, is quite indirect.
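The whole worked example fits in a few lines of Python. This sketch reuses K from the snippet above and the numbers quoted in the text; small differences from the quoted weights reflect the two-decimal rounding of K:

```python
import numpy as np

k = np.array([0.38, 0.56, 0.32, 0.49, 0.46, 0.37])  # data-to-estimation covariances
lam = np.linalg.solve(K, k)      # kriging weights; avoids forming K^{-1} explicitly
print(np.round(lam, 4))          # approx. [0.15, 0.46, -0.02, 0.27, 0.25, -0.03]

m = 14.70                                                  # global mean porosity (%)
z = np.array([13.84, 12.15, 12.87, 12.68, 14.41, 14.59])   # porosities at the 6 wells (%)

z_hat = m + lam @ (z - m)        # simple kriging estimate: mean plus weighted residuals
var_sk = 0.78 - lam @ k          # estimation variance: C(0) - lam . k
print(round(z_hat, 2))                           # approx. 12.83
print(round(var_sk, 3), round(var_sk**0.5, 3))   # approx. 0.238 and 0.49
```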
13. Here are the simple kriging estimates and standard deviations on a
100x80 grid with 100-meter spacing, using the spherical
semivariogram model and estimating each grid value from the 16
nearest neighbor data points (well locations).
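A gridding loop along these lines is straightforward. The sketch below is ours, not the original code, and the well coordinates and porosities are random placeholders standing in for the 85 real wells; it assumes the spherical_cov function from earlier:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
wells_xy = rng.uniform([0, 0], [9900, 7900], (85, 2))  # placeholder well locations (m)
poro = 14.7 + rng.normal(0.0, 1.0, 85)                 # placeholder porosities (%)

tree = cKDTree(wells_xy)
m = poro.mean()
nx, ny, dx = 100, 80, 100.0
est = np.zeros((ny, nx))
sd = np.zeros((ny, nx))

for iy in range(ny):
    for ix in range(nx):
        u = np.array([ix * dx, iy * dx])
        _, idx = tree.query(u, k=16)          # 16 nearest wells to this grid node
        pts = wells_xy[idx]
        K_nb = spherical_cov(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))
        k_nb = spherical_cov(np.linalg.norm(pts - u, axis=1))
        lam = np.linalg.solve(K_nb, k_nb)
        est[iy, ix] = m + lam @ (poro[idx] - m)            # simple kriging estimate
        sd[iy, ix] = np.sqrt(max(0.78 - lam @ k_nb, 0.0))  # kriging standard deviation
```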
14. Some characteristics to note:
Smoothness: The kriged surface will basically be as smooth as
possible given the constraints of the data; in many cases, probably
smoother than the "true" surface.
Bullseyes: Because kriging averages between data points, local
extremes will usually occur at well locations, so bullseyes are
inevitable. This is true of almost all interpolation algorithms. The
extreme form of this is artifact discontinuities at well locations
when the semivariogram model includes a significant nugget.
Error map reflects data locations, not data values: The map of
kriging standard deviation depends entirely on the data configuration
and the covariance function; it is essentially a map of distance to
the nearest well location, scaled by the covariance function.
15. Ordinary Kriging
For ordinary kriging, rather than assuming that the mean is
constant over the entire domain, we assume that it is constant in
the local neighborhood of each estimation point, that is, that
$m(u_\alpha) = m(u)$ for each nearby data value, $Z(u_\alpha)$, that we are using
to estimate $Z(u)$. In this case, the kriging estimator can be written

$$Z^{*}(u) = m(u) + \sum_{\alpha=1}^{n(u)} \lambda_\alpha(u)\,\left[Z(u_\alpha) - m(u)\right] = \sum_{\alpha=1}^{n(u)} \lambda_\alpha(u)\,Z(u_\alpha) + \left[1 - \sum_{\alpha=1}^{n(u)} \lambda_\alpha(u)\right] m(u)$$
and we filter the unknown local mean by requiring that the kriging
weights sum to 1, leading to an ordinary kriging estimator of
$$Z^{*}_{OK}(u) = \sum_{\alpha=1}^{n(u)} \lambda^{OK}_{\alpha}(u)\,Z(u_\alpha) \quad\text{with}\quad \sum_{\alpha=1}^{n(u)} \lambda^{OK}_{\alpha}(u) = 1.$$
In order to minimize the error variance subject to the unit-sum
constraint on the weights, we actually set up the system to minimize
the error variance plus an additional term involving a Lagrange
parameter, $\mu_{OK}(u)$:
$$L = \sigma^{2}_{E}(u) + 2\mu_{OK}(u)\left[\sum_{\alpha=1}^{n(u)} \lambda_\alpha(u) - 1\right]$$
so that minimization with respect to the Lagrange parameter forces
the constraint to be obeyed:
$$\frac{1}{2}\,\frac{\partial L}{\partial \mu_{OK}(u)} = \sum_{\alpha=1}^{n(u)} \lambda_\alpha(u) - 1 = 0$$
16. In this case, the system of equations for the kriging weights turns
out to be

$$\begin{cases} \displaystyle\sum_{\beta=1}^{n(u)} \lambda^{OK}_{\beta}(u)\,C_R(u_\alpha - u_\beta) + \mu_{OK}(u) = C_R(u_\alpha - u), & \alpha = 1, \ldots, n(u) \\[2ex] \displaystyle\sum_{\beta=1}^{n(u)} \lambda^{OK}_{\beta}(u) = 1 \end{cases}$$
where $C_R(h)$ is once again the covariance function for the residual
component of the variable. In simple kriging, we could equate
$C_R(h)$ and $C(h)$, the covariance function for the variable itself,
due to the assumption of a constant mean. That equality does not
hold here, but in practice the substitution is often made anyway, on
the assumption that the semivariogram, from which $C(h)$ is
derived, effectively filters the influence of large-scale trends in the
mean.
In fact, the unit-sum constraint on the weights allows the ordinary
kriging system to be stated directly in terms of the semivariogram
(in place of the $C_R(h)$ values above). In a sense, ordinary kriging
is the interpolation approach that follows naturally from a
semivariogram analysis, since both tools tend to filter trends in the
mean.
Once the kriging weights (and Lagrange parameter) are obtained,
the ordinary kriging error variance is given by
$$\sigma^{2}_{OK}(u) = C(0) - \sum_{\alpha=1}^{n(u)} \lambda^{OK}_{\alpha}(u)\,C(u_\alpha - u) - \mu_{OK}(u)$$
17. In matrix terms, the ordinary kriging system is an augmented
version of the simple kriging system. For our six-point example it
would be:

$$\begin{bmatrix}
0.78 & 0.28 & 0.06 & 0.17 & 0.40 & 0.43 & 1.00 \\
0.28 & 0.78 & 0.43 & 0.39 & 0.27 & 0.20 & 1.00 \\
0.06 & 0.43 & 0.78 & 0.37 & 0.11 & 0.06 & 1.00 \\
0.17 & 0.39 & 0.37 & 0.78 & 0.37 & 0.27 & 1.00 \\
0.40 & 0.27 & 0.11 & 0.37 & 0.78 & 0.65 & 1.00 \\
0.43 & 0.20 & 0.06 & 0.27 & 0.65 & 0.78 & 1.00 \\
1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 0.00
\end{bmatrix}
\begin{bmatrix} \lambda_1 \\ \lambda_2 \\ \lambda_3 \\ \lambda_4 \\ \lambda_5 \\ \lambda_6 \\ \mu \end{bmatrix}
=
\begin{bmatrix} 0.38 \\ 0.56 \\ 0.32 \\ 0.49 \\ 0.46 \\ 0.37 \\ 1.00 \end{bmatrix}$$
to which the solution is
$$\begin{bmatrix} \lambda_1 \\ \lambda_2 \\ \lambda_3 \\ \lambda_4 \\ \lambda_5 \\ \lambda_6 \\ \mu \end{bmatrix} = K^{-1}k = \begin{bmatrix} 0.1274 \\ 0.4515 \\ -0.0463 \\ 0.2595 \\ 0.2528 \\ -0.0448 \\ 0.0288 \end{bmatrix}$$
The ordinary kriging estimate at u = (2000 m, 4700 m) turns out to
be 12.93% with a standard deviation of 0.490%, only slightly
different from the simple kriging values of 12.83% and 0.488%.
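Numerically, the augmented system is solved just like the simple kriging one. This sketch reuses K, k, and z from the earlier snippets; small differences from the quoted values reflect the two-decimal rounding of K:

```python
import numpy as np

n = K.shape[0]
K_aug = np.ones((n + 1, n + 1))   # border of ones enforces the unit-sum constraint
K_aug[:n, :n] = K
K_aug[n, n] = 0.0
rhs = np.append(k, 1.0)

sol = np.linalg.solve(K_aug, rhs)
lam_ok, mu = sol[:n], sol[n]
print(np.round(lam_ok, 4), round(mu, 4))  # weights sum to 1; mu approx. 0.0288

z_ok = lam_ok @ z                          # no mean term needed: weights sum to 1
var_ok = 0.78 - lam_ok @ k - mu            # C(0) - lam . k - mu
print(round(z_ok, 2), round(var_ok**0.5, 3))  # approx. 12.93 and 0.49
```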
18. Again using 16 nearest neighbors for each estimation point, the
ordinary kriging porosity estimate and standard deviation look very
much like those from simple kriging.
19. Kriging with a Trend
Kriging with a trend (the method formerly known as universal
kriging) is much like ordinary kriging, except that instead of fitting
just a local mean in the neighborhood of the estimation point, we
fit a linear or higher-order trend in the (x,y) coordinates of the data
points. A local linear (a.k.a., first-order) trend model would be
given by
$$m(u) = m(x, y) = a_0 + a_1 x + a_2 y$$
Including such a model in the kriging system involves the same
kind of extension as we used for ordinary kriging, with the addition
of two more Lagrange parameters and two extra columns and rows
in the K matrix whose (non-zero) elements are the x and y
coordinates of the data points. Higher-order trends (quadratic,
cubic) could be handled in the same way, but in practice it is rare
to use anything higher than a first-order trend. Ordinary kriging is
kriging with a zeroth-order trend model.
If the variable of interest does exhibit a significant trend, a typical
approach would be to attempt to estimate a “de-trended”
semivariogram using one of the methods described in the
semivariogram lecture and then feed this into kriging with a first-order
trend. However, Goovaerts (1997) warns against this
approach and instead recommends performing simple kriging of
the residuals from a global trend (with a constant mean of 0) and
then adding the kriged residuals back into the global trend.
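A bare-bones sketch of that recommended workflow, reusing the placeholder wells_xy and poro arrays from the gridding snippet above (the estimation point u and all other names here are ours, and the covariance model would in practice be refitted to the residuals):

```python
import numpy as np

u = np.array([2000.0, 4700.0])                            # one estimation point

# Fit a global first-order trend m(x, y) = a0 + a1*x + a2*y by least squares.
A = np.column_stack([np.ones(len(wells_xy)), wells_xy])   # design matrix [1, x, y]
coef, *_ = np.linalg.lstsq(A, poro, rcond=None)           # trend coefficients a0, a1, a2
resid = poro - A @ coef                                   # de-trended data, mean ~ 0

# Simple kriging of the residuals with a known mean of 0.
D = np.linalg.norm(wells_xy[:, None] - wells_xy[None, :], axis=-1)
lam = np.linalg.solve(spherical_cov(D),
                      spherical_cov(np.linalg.norm(wells_xy - u, axis=1)))
resid_hat = lam @ resid

# Add the kriged residual back onto the trend value at u.
z_hat = np.array([1.0, *u]) @ coef + resid_hat
```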
20. A few of the many topics I have skipped or glossed over
Cokriging: Kriging using information from one or more
correlated secondary variables, or multivariate kriging in general.
Requires development of models for cross-covariance – covariance
between two different variables as a function of lag.
Indicator Kriging: Kriging of indicator variables, which
represent membership in a set of categories. Used with naturally
categorical variables like facies or continuous variables that have
been thresholded into categories (e.g., quartiles, deciles).
Especially useful for preserving connectedness of high- and low-permeability
regions. Direct application of kriging to permeability will
almost always wash out extreme values.
Artifact discontinuities: Kriging using a semivariogram model
with a significant nugget will create discontinuities, with the
interpolated surface leaping up or down to grab any data point that
happens to correspond with a grid node (estimation point).
Solutions: factorial kriging (filtering out the nugget component) or
some other kind of smoothing (as opposed to exact) interpolation,
such as smoothing splines. Or, if you really want to do exact
interpolation, use a semivariogram model without a nugget.
Search algorithm: The algorithm for selecting neighboring data
points can have at least as much influence on the estimate as the
interpolation algorithm itself. I have used a simple nearest
neighbor search. A couple of alternatives are quadrant and
octant searches, which look for up to a given number of data points
within a certain distance in each quadrant or octant surrounding
the estimation point; a minimal sketch follows.
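As an illustration of the quadrant idea, here is one possible implementation (the function name and defaults are ours, chosen for illustration):

```python
import numpy as np

def quadrant_search(wells_xy, u, per_quadrant=4, max_dist=5000.0):
    """Return indices of up to `per_quadrant` nearest wells in each of the
    four quadrants around estimation point u, within max_dist meters."""
    d = wells_xy - u
    dist = np.linalg.norm(d, axis=1)
    quad = 2 * (d[:, 0] >= 0) + (d[:, 1] >= 0)   # quadrant label 0-3 for each well
    chosen = []
    for q in range(4):
        in_q = np.where((quad == q) & (dist <= max_dist))[0]
        chosen.extend(in_q[np.argsort(dist[in_q])][:per_quadrant])
    return np.array(chosen, dtype=int)

# Example: neighbors = quadrant_search(wells_xy, np.array([2000.0, 4700.0]))
```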