This document provides tutorial notes on using the S-GeMS software to analyze and model porosity data from the fictional Zone A area of the Big Bean Oil Field. It summarizes how to load the data, examine histograms and variograms, perform ordinary kriging and sequential Gaussian simulation to generate multiple realizations of porosity. It also briefly describes how to perform sequential indicator simulation of facies data from the Green Goblin Gas Field as an illustrative example.
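The variogram analysis step mentioned above can be sketched compactly. Below is a minimal pure-Python illustration of an experimental semivariogram, gamma(h) = mean of 0.5*(z_i - z_j)^2 over point pairs whose separation falls in the lag bin; the 1-D sample locations and porosity-like values are assumptions for the example, not the Big Bean data.

```python
# Experimental semivariogram sketch (pure Python, 1-D for brevity).
# Sample locations/values below are illustrative assumptions.

def semivariogram(points, lag, tol):
    """points: list of (x, z). Returns gamma for pairs with |h - lag| <= tol."""
    sq = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            h = abs(points[i][0] - points[j][0])
            if abs(h - lag) <= tol:
                sq.append(0.5 * (points[i][1] - points[j][1]) ** 2)
    return sum(sq) / len(sq) if sq else None

pts = [(0, 0.12), (1, 0.15), (2, 0.11), (3, 0.18), (4, 0.16)]
print(semivariogram(pts, lag=1, tol=0.5))  # gamma at lag 1
```

Repeating this over several lags gives the experimental variogram that a model (e.g. spherical) is then fitted to before kriging.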
Petroleum Geology of Pakistan by I. B. Qadri (shared by akbar-ali-asif and Usama Shah)
I am not the owner of this document, but any geophysics or geology student who needs to prepare an assignment, or wants to learn about the basins and petroleum geology of Pakistan, should see it.
Thanks!
This document presents introductory concepts in geostatistics and spatial interpolation. It defines spatial interpolation as the process of estimating values at unsampled locations based on known data points. It discusses types of interpolators, such as exact vs. inexact, global vs. local, and deterministic vs. stochastic, and explores the Inverse Distance Weighting interpolation method.
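The Inverse Distance Weighting method summarized above fits in a few lines. This is a minimal pure-Python sketch; the sample points and power parameter are assumptions for illustration, not values from the document.

```python
# Minimal inverse-distance-weighting (IDW) sketch.
# Sample points and the power parameter are illustrative assumptions.
import math

def idw(x, y, samples, power=2.0):
    """Estimate a value at (x, y) from (xi, yi, zi) samples.

    Weights are 1 / distance**power; an exact hit on a sample point
    returns that sample's value (IDW is an exact interpolator).
    """
    num = den = 0.0
    for xi, yi, zi in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return zi  # exactly at a sample point
        w = d ** -power
        num += w * zi
        den += w
    return num / den

samples = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
print(idw(0.5, 0.5, samples))  # equidistant samples, so their mean (20.0)
```

Note the behavior at a sample location illustrates the "exact interpolator" distinction the document draws.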
This document discusses rock metamorphism, defining it as the process by which one rock is transformed into another under new pressure and temperature conditions. It details the factors that influence metamorphism, such as the nature of the protolith, temperature, pressure, and fluids, as well as the types of metamorphism and the characteristics of metamorphic rocks.
Geo-referencing is a GIS-based spatial analysis technique, which is discussed in this presentation. For a video, see the following link:
https://www.youtube.com/watch?v=h559lOsvOU8&feature=youtu.be&fbclid=IwAR3PB9YB4i86zrYyzxbiz_g2-4_ujowdO1gfm4Lz5E3vGf56Fn5DAzeUA_8
Geographical Information System (GIS) Georeferencing and Digitization, Bihar ... by Kamlesh Kumar
This work is an effort to share Geographical Information System: Georeferencing, digitization and map making steps through QGIS 2.0.1
Georeferencing
Digitization of Topographical sheet
Point
Line
Area
Bihar Map
District Headquarters
Railway of Bihar
District Boundaries
Thematic Maps (Literacy & Sex Ratio)
Modern oil and gas field management is increasingly reliant on detailed and precise 3D reservoir characterization, and timely areal monitoring. Borehole seismic techniques bridge the gap between remote surface-seismic observations and downhole reservoir evaluation: Borehole seismic data provide intrinsically higher-resolution, higher-fidelity images than surface-seismic data in the vicinity of the wellbore, and unique access to properties of seismic wavefields to enhance surface-seismic imaging. With the advent of new, operationally-efficient very large wireline receiver arrays; fiber-optic recording using Distributed Acoustic Sensing (DAS); the crosswell seismic reflection technique; and advanced seismic imaging algorithms such as Reverse Time Migration, a new wave of borehole seismic technologies is revolutionizing 3D seismic reservoir characterization and on-demand reservoir surveillance. New borehole seismic technologies are providing deeper insights into static reservoir architecture and properties, and into dynamic reservoir performance for conventional water-flood production, EOR, and CO2 sequestration – in deepwater, unconventional, full-field, and low-footprint environments. This lecture will begin by illustrating the wide range of borehole seismic solutions for reservoir characterization and monitoring, using a diverse set of current and recent case-study examples, through which the audience will gain an understanding of the appropriate use of borehole seismic techniques for field development and management. The lecture will then focus on DAS, explaining how the technique works; its capability to deliver conventional borehole seismic solutions (with key advantages over geophones); then describing DAS's dramatic impact on field monitoring applications and business-critical decisions. New and enhanced borehole seismic techniques – especially with DAS time-lapse monitoring – are ready to deliver critical reservoir management solutions for your fields.
Play-Based Exploration (PBE) has become a trendy term in the oil and gas industry, championed by companies like ExxonMobil, Shell and BP.
PBE methods vary significantly: some firms create ‘traffic light’ maps of overall relative profitability, while others partition probability into play/shared and local/prospect-specific probability with assigned values.
Play-based exploration reflects a shift in focus: plays, not prospects, are the basic unit of exploration.
Individual prospect probabilities are balanced and calibrated with the play maps, and prospect volumes are validated against the field size distribution (FSD) for the play.
Exploration evaluation experts use play-based exploration (PBE) to build a full understanding of the geological basins that should be explored and to reassess exploration strategies.
This document provides an overview and introduction to analyzing spatial data using Python. It discusses what spatial data is, popular Python libraries for working with spatial data like Fiona, Shapely, GeoPy, and Mapnik, and how to perform spatial analysis tasks in Python such as geocoding, data conversion and visualization. Jupyter notebooks are presented as an interactive environment for exploring spatial data and libraries like Geopandas and PySAL are introduced for performing spatial analysis. Examples analyze Colombian location and point of interest data.
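As a flavor of the spatial predicates that libraries like Shapely provide, here is a pure-Python ray-casting sketch of the point-in-polygon test (roughly what `Polygon.contains(Point)` answers; the square polygon below is an assumed example, not data from the document).

```python
# Ray-casting point-in-polygon test, a pure-Python sketch of the kind
# of predicate Shapely exposes. The square polygon is an assumption.

def point_in_polygon(x, y, polygon):
    """Return True if (x, y) lies inside the closed ring `polygon`.

    Casts a horizontal ray from the point and counts edge crossings:
    an odd count means the point is inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True  (inside)
print(point_in_polygon(5, 2, square))  # False (outside)
```

In practice one would use Shapely directly, which also handles edge cases (points exactly on a boundary, holes, invalid geometries) that this sketch does not.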
Foraminifera, or forams, have several applications in petroleum geology, most notably for biostratigraphy and paleoenvironmental analysis. Biostratigraphy uses the fossils found in different rock layers to determine the age of the rock and help locate the specific rock unit a well has penetrated. Paleoenvironmental analysis interprets the depositional environment based on the fossils present. Additionally, forams indicate environmental factors like depth and salinity that are important for source rock analysis. They also aid in correlating rock strata, determining relative ages, and recognizing unconformities.
Short course discussing a practical approach to Sequence Stratigraphy and attempting to clarify some of the terminological muddle that has accumulated over the past few decades.
Note: Originally presented as in-house short course for Pioneer Natural Resources Company. All material is public domain and/or original sketches/figures by author.
The document discusses different types of spatial data used in geographic information systems (GIS). It describes vector data, which represents geographic features as points, lines, and polygons, and raster data, which divides the landscape into a grid of cells. It outlines some common vector data formats like shapefiles, coverages, and geodatabases, and raster formats like grids and digital elevation models. It provides details on how vector data structures represent points, lines, polygons, and their topological relationships in ArcGIS.
Mud logging involves monitoring drilling parameters and the lithology, gas content, and oil shows of downhole formations in real-time. A mud logger collects and examines rock cuttings for geological description and oil shows, monitors hydrocarbon gases and hydrogen sulfide levels, and tracks mud volume, drilling parameters, and trip sheets. Issues like losses or gains in mud volume and stuck pipe can occur due to pressure imbalances or poor hole cleaning and require solutions like adjusting mud weight or using a jar tool.
Learn how Cloudera Impala empowers you to:
- Perform interactive, real-time analysis directly on source data stored in Hadoop
- Interact with data in HDFS and HBase at the “speed of thought”
- Reduce data movement between systems & eliminate double storage
This document discusses the use of geographic information systems (GIS) for various purposes. It begins by outlining six principles for GIS use, including thinking spatially, using data appropriately, and believing in data sharing. It then covers what GIS is, its components and applications for fields like environmental impact assessment, social sciences, natural resource management, disaster risk reduction, participatory planning, and decision support/public policy. Specific examples discussed include using GIS for flood mapping, natural resource management, understanding disease spread, and evaluating government programs for tribal communities. The document emphasizes how GIS can help improve decision-making by integrating spatial data from various sources.
This document provides an overview of geographical information systems (GIS) and their concepts. It discusses that GIS allows for the integration of spatial and non-spatial data in a digital format to aid decision making. Key points include that GIS represents geographic features as vector or raster data, integrates data from different sources by georeferencing to a common coordinate system, and can perform spatial analysis and modeling to answer questions about patterns and relationships. GIS is a useful tool for tasks like natural resource management, precision agriculture, and land use planning.
Concept of isostatic adjustment and isostatic models, by Parag Sonwane
This document discusses the geological concept of isostasy, which refers to the equilibrium between Earth's crust and mantle such that the crust "floats" at an elevation that depends on its thickness and density. It presents several theories of isostasy, including Airy's theory which proposes that thicker crustal areas sink deeper into the mantle, and Pratt's theory which suggests areas of lower crustal density project higher. The document also discusses isostatic effects from processes like deposition, erosion, and past ice sheets, as well as concepts like phase changes and Heiskanen's modification of Airy's theory.
Ground Penetrating Radar, also known as GPR, is a tool used to locate underground utilities, underground storage tanks (USTs), and, in some cases, graves. Depth and accuracy depend on a number of variables, such as soil density, moisture content, and antenna frequency. We use a 350 MHz antenna, which can reach depths of up to 35 ft (under ideal conditions).
This document is a thesis submitted by Gagandeep Singh for his M.Tech degree in RS & GIS from NIT Warangal in 2013-2015. It discusses various topics related to digital terrain modeling including contour lines, grid DTMs, TINs, the differences between DSMs and DEMs, data acquisition methods, processing techniques, and applications of digital terrain data. It also evaluates different data sources for terrain modeling like SRTM, topographic maps, and Google Earth imagery and assesses their accuracy through statistical analysis and visual inspection.
This document describes the geology of the island of Timor-Leste, located in Southeast Asia where tectonic plates collide. The island consists of mountain ranges formed by this collision over millions of years, and contains fossil evidence indicating uplift of the land at rates of up to 0.4 meters per thousand years. The climate is tropical, with wet and dry seasons, which is favorable for hydrocarbon exploration.
The document discusses different geodatabase formats and their benefits. It explains that geodatabases store geospatial and attribute data together, unlike shapefiles. The main geodatabase types are file geodatabases, personal geodatabases, and ArcSDE geodatabases. Feature datasets are used to define projections, extents and other rules within a geodatabase. Additional functionality includes topology, networks and normalization.
Interpretation and recognition of depositional systems using seismic data, by Diego Timoteo
This document discusses the interpretation and recognition of depositional systems using seismic data. It covers five key stages: (1) reviewing basic concepts of sequence stratigraphy, (2) understanding the physical foundations of rocks and seismic reflection methods, (3) seismic stratigraphic interpretation of depositional sequences and system tracts, (4) recognizing depositional systems through seismic facies analysis, and (5) advanced seismic interpretation applications. Accurate interpretation requires integrating data from outcrops, cores, well logs, and seismic sections to constrain models, especially in frontier regions with limited data. The techniques allow correlating and mapping stratigraphic units to aid paleogeographic reconstruction and facies/lithology prediction away from control points.
Anderson's theory of faulting predicts that the orientation of faults depends on the principal stresses. It assumes reverse faults dip at 30 degrees, normal faults dip at 60 degrees, and strike-slip faults are vertical. However, exceptions like low-angle normal faults exist. Pore fluid pressure or pre-existing weaknesses in the rock can allow faults to form at shallower angles. The rolling-hinge model also explains how low-angle normal faults can develop.
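The 30°/60° dips quoted above follow from the Coulomb failure angle: faults form at 45° - φ/2 to the maximum principal stress σ1. A tiny sketch, assuming a typical internal-friction angle of 30° (an assumption consistent with the dips above):

```python
# Andersonian fault dips from the Coulomb failure angle.
# phi = 30 degrees is an assumed, typical internal-friction angle.
phi = 30.0                      # internal friction angle, degrees
theta = 45.0 - phi / 2.0        # angle between fault plane and sigma1

reverse_dip = theta             # sigma1 horizontal -> thrusts dip ~30 deg
normal_dip = 90.0 - theta       # sigma1 vertical   -> normal faults dip ~60 deg
print(reverse_dip, normal_dip)  # 30.0 60.0
```

A smaller effective friction angle (e.g. from elevated pore pressure or a pre-existing weak surface) steepens theta's complement and is one way the low-angle exceptions arise.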
The document provides an overview of spontaneous potential (SP) logging. It discusses that SP logging measures natural electrical potentials between the borehole and surface. Positive deflections indicate fresher formation water than mud filtrate, while negative deflections mean saltier formation water. SP can be used to determine formation water resistivity and estimate shale volume. Key applications include detecting permeable zones, correlating formations, and determining facies.
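The shale-volume estimate mentioned above is commonly done with the linear relation Vsh = 1 - PSP/SSP, where SSP is the static SP of a clean sand and PSP the deflection in the zone of interest. A minimal sketch with assumed, illustrative log readings:

```python
# Linear shale-volume estimate from the SP log: Vsh = 1 - PSP/SSP.
# The millivolt readings below are illustrative assumptions.
ssp = -80.0   # mV, static SP of a nearby clean sand
psp = -60.0   # mV, SP deflection in the evaluated zone

v_shale = 1.0 - psp / ssp
print(round(v_shale, 2))  # 0.25
```

A zone reading the full SSP gives Vsh = 0 (clean), while no deflection gives Vsh = 1 (shale); nonlinear corrections are often applied on top of this linear first pass.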
12 Week Subsurface Mapping And Interpretation Technique Building, by joedumesnil
The document summarizes a 12-week subsurface mapping and interpretation course. It covers laying the theoretical foundations for exploring and developing oil and gas fields. The first 6 weeks were devoted to classes on structural geology, stratigraphy, seismic techniques and reservoir engineering. The second 6 weeks focused on applying these tools to a hypothetical lease, including initial exploration, assessing discoveries, field development and performance analysis. Various mapping and interpretation techniques are demonstrated, such as depth structure maps, fault surface maps, isochore maps and seismic sections. Risk factors are assessed for a proposed deviated exploration well.
This document provides an overview of carbonate sedimentology and techniques for interpreting carbonate reservoirs using well logs, image logs, and core data. It discusses how carbonates differ from clastic reservoirs in terms of biogenic processes and diagenesis. Key carbonate depositional environments, textures, structures, and facies are described. Examples of carbonate features observed on well logs and images are presented alongside core calibration data. Interpretation of fractures in carbonate reservoirs is also addressed.
Sedimentary facies are distinguished based on features like lithology, grain size, texture, sedimentary structures, fossils, and colour. These characteristics indicate the depositional environment. Facies represent sediments deposited in particular environments due to processes like wind, streams, waves, or biological activity. Depositional environments can be determined by examining sediment textures, structures, fossils, and bedding. Facies sequences reflect changes in depositional conditions over time.
This document introduces digital soil mapping, comparing it with classical mapping. It discusses concepts, examples, steps, and the knowledge required for digital mapping, emphasizing the importance of information systems, observational methods, and inference systems. It recommends working with other professionals, such as statisticians and geologists, to better handle the quantitative aspects.
Dips 7.0 introduces several new features for analyzing structural geology data including 3D stereographic plotting, input and analysis of curved borehole data, kinematic sensitivity analysis, improved joint set analysis tools, contouring of general data values, and displaying traverse orientations on the stereonet. Key capabilities include viewing poles, planes and contours in 3D; entering and processing data from curved boreholes; varying parameters to analyze their effect on failure probabilities; calculating joint spacing, RQD, and joint frequency; contouring any data column; and plotting traverse paths.
This document provides a tutorial on using the basic features of the Dips orientation data analysis program. It explains how to open example data files, view data in grid and stereonet plot views, generate different stereonet plot types like pole plots, scatter plots, and contour plots, and customize the stereonet display. It also describes how to interpret stereonet plots and control plotting options through the sidebar.
Supervised classification uses training data to assign unknown objects to known features. Pixels are classified based on their spectral properties into user-defined classes. The author classified a satellite image of Mbulu, Tanzania using supervised classification in QGIS. Training data consisting of polygons for water, forest, shrubs, grassland and bare soil were created. Maximum likelihood classification assigned each pixel to the class with the highest probability. A model was also created in QGIS to vectorize and extract the forest class from the classified raster output.
This document provides instructions for a GIS exercise involving spatial analysis of elevation and precipitation data. The goals are to calculate average watershed elevation and precipitation for subwatersheds of the San Marcos River basin. Slope, aspect, flow direction and hydrologic slope will first be calculated from a sample digital elevation model to demonstrate spatial analysis tools in ArcGIS. A ModelBuilder model is then created to automate these calculations. Finally, the model is applied to real elevation data for the San Marcos basin watersheds to calculate average elevation and interpolate precipitation from station data to estimate watershed precipitation volumes and runoff ratios.
This document provides instructions for installing and using SPHARM-PDM, a software for shape analysis, and ShapePopulationViewer, a tool for quality control of SPHARM-PDM outputs. It describes downloading required files, configuring inputs and outputs in the Shape Analysis module, running SPHARM-PDM to generate outputs including surface meshes and parameterizations, and viewing and comparing output files using 3D Slicer. It also outlines using ShapePopulationViewer to perform quality control on SPHARM-PDM correspondences between output shapes.
Surpac is the world’s most popular geology and mine planning software used for ore body evaluation, open pit and u/g mine design.It provides tools for geological modelling, surveying, and mine planning.
This document provides instructions for calculating vegetation indices from Landsat 5 TM and Landsat 7 ETM+ data using ArcGIS. It describes a multi-step process to: 1) reclassify Landsat digital number data to exclude null values, 2) convert Landsat 5 TM data to the Landsat 7 ETM+ format, 3) calculate radiance values, 4) calculate reflectance values using sun elevation angles and earth-sun distances, and 5) enforce positive reflectance values by setting negatives to zero. This allows vegetation indices to be accurately calculated from the reflectance data.
1. The document describes the process for inputting data and calculating drought indices using the DMAP V2.0 tool. It involves importing data via Excel files or NetCDF files, selecting stations and variables, then calculating drought indices like SPI, PDSI, and KBDI.
2. The tool allows importing time series data for rainfall, temperature, soil moisture, and other variables to compute multiple drought indices. Data can be imported from Excel or NetCDF files by selecting stations, variables, and specifying formatting.
3. After inputting data, drought indices are calculated and can be visualized in plots. Severity thresholds can be customized, and drought start dates, durations, and magnitudes are outputted in a
1. The document describes the steps to simulate a 2.4 GHz patch antenna using ADS Momentum software. It involves defining substrate properties, drawing the antenna geometry, setting up excitations and mesh, running simulations to obtain S-parameters, and visualizing current distribution and radiation patterns.
2. Key steps include defining a 60 mil thick RO4003 substrate, drawing the patch and feeding microstrip line, simulating S-parameters using an adaptive sweep from 1-3 GHz, and visualizing the current distribution and 3D radiation pattern at the resonant frequency of 2.395 GHz.
3. The tutorial demonstrates how to set up simulations, examine results like S11, and visualize physical quantities like surface currents and
References1. HCS 2010 online manuals.2. Data Data provi.docxdebishakespeare
This document provides instructions for a lab exercise using HCS 2010 software to analyze and optimize signal timing plans for a case study intersection. Students are asked to develop and compare pretimed and actuated signal timing plans, analyzing output metrics like control delay and level of service. The document outlines 30 steps for entering intersection data, developing phasing plans, optimizing signal timing, and comparing the performance of base, optimized pretimed, and optimized actuated control strategies.
This document provides instructions for getting started with Carlson SurvCE software. It covers topics such as creating a new job, defining job settings like units and coordinate systems, performing total station and GPS surveys, and basic functions like occupying a point, backsight, sideshots, and stakeouts. The document is a tutorial that walks through these common surveying tasks step-by-step to help new users get acquainted with the software.
This document provides a manual for basic GIS functions. It introduces ArcGIS components like ArcCatalog, ArcMap and ArcToolbox. It discusses importing and viewing data, creating shapefiles, and setting projections. It also covers georeferencing images, analyzing data through tools like clip and extract, and performing proximity analysis. The document aims to guide users through foundational GIS processes.
This document provides a 26-step tutorial for simulating a 2.4 GHz patch antenna using ADS Momentum software. The steps include defining substrate properties, drawing the patch and feedline geometry, setting simulation parameters such as mesh and ports, running an S-parameter simulation, and visualizing current distribution and far-field radiation patterns. Increasing the mesh resolution from 30 to 70 cells per wavelength improves the S11 response and allows simulation of surface currents at the resonant frequency of 2.395 GHz. The tutorial demonstrates the full simulation workflow from design to characterization of antenna performance.
This document provides an overview of events and presentations at the 2010 International Microwave Symposium (IMS) featuring AWR Corporation. It lists the schedule of 6 microapps presentations to take place on May 25th, 2010, covering topics like multi-chip module design challenges, nonlinear co-simulation, system-level component models, and power amplifier design techniques. It also advertises AWR's online design environment for generating customized transistor datasheets.
This document provides an overview of the finite element analysis (FEA) software ANSYS. It discusses the different modules and capabilities within ANSYS including structural, thermal, fluid, and electromagnetic analysis. It also describes steps for creating and analyzing models within ANSYS such as creating geometry, meshing, applying loads and boundary conditions, and solving models. Key ANSYS terminology is defined throughout related to these modeling and analysis procedures.
Terrain AnalysisBackgroundAircraft frequently rely on terrain el.pdffeetshoemart
Terrain Analysis
Background
Aircraft frequently rely on terrain elevation data to help navigate in low visibility conditions, to
reduce pilot workload, to help closely follow terrain at low altitude to avoid radar detection
(sometimes called ground-hugging or terrain-following flight), and to plot a course to avoid
extreme sudden altitude changes (e.g., flying around a mountain, rather than over it). Terrain
elevation data is gathered through terrain-following radar, lidar, satellites, and existing terrain
map
elevation information. Elevation data is especially useful radar or communication services are
not operating.
Elevation data is typically broken down into a grid, where each grid cell represents a square
physical area of uniform size. Because elevation varies
within a cell, the highest elevation in the cell is usually used as the elevation value for that cell
on the grid.
To navigate using elevation data, it’s useful to compute locations of peaks (high points), valleys
(low points), and flat areas (which might be at high or low elevations). To
a specific location is a peak, we\'re interested in discovering whether that
location is a local maximum. To do this, we must examine its eight adjacent locations: left, right,
above, below, and the four diagonal neighbors. If the location we’re analyzing has a higher
elevation than all eight surrounding locations, then the location is clearly a peak. We might
choose to loosen our definition of a peak by saying, for example, that a location is a peak if more
than five of the eight surrounding locations have a lower elevation.
Similarly, to decide if a location is a valley, we determine if it\'s a local minimum by checking to
see if all eight neighboring locations are higher than the location we’re analyzing. If all eight are
higher, we clearly have a valley. Again, we might choose to loosen our valley definition to
identify a valley if more than five surrounding locations are higher than the location we’re
analyzing.
Determining if a location is part of a relatively flat area (a plain for our purposes, but it might
also be a plateau in the real world), we would again examine all eight neighboring locations. If
they are all within a small tolerance in elevation, then we would declare the location to be part of
a plain. Once again, we might choose to loosen our definition of a plain, so that a location
surrounded by more than five locations that have a similar elevation is considered a plain.
Your Assignment
You will develop a C program that creates a two-dimensional array of doubles representing an
elevation grid, populates the array it with elevation data read from an instructor-supplied binary
data file, displays the elevation data numerically, analyzes the data in the array to find the peaks,
valleys, and plains, and displays strings indicating where these peaks, valleys, and plains are.
Terrain elevation values are all assumed to be expressed in meters.
When defining your C functions in this pr.
This document provides a user manual for kriSIS v1.04, open source Matlab algorithms for analyzing surface waves. It describes the 5 main steps of the MASW data processing workflow in kriSIS: 1) importing seismic data files, 2) transforming data from time-distance to frequency-phase velocity, 3) selecting fundamental and higher mode dispersion curves, 4) selecting initial earth models and inversion parameters, and 5) running the inversion to calculate shear wave velocity profiles. It also provides details on parameters, options, and guidelines at each step to help users process their MASW data.
This document provides information on analyzing synchrotron powder diffraction data using the PDF-4+ database and SIeve+ software. It describes how the software includes features to optimize settings for synchrotron data like customizable instrument profiles and a preference option that automatically adjusts parameters. The high resolution and flux of synchrotron beams allows identification of minor phases below 0.3% concentration. The software can simulate multiphase patterns and compare to experimental data to solve complex mixtures.
For a new better version of this tutorial see my Google Slides with embedded videos.
https://docs.google.com/presentation/d/1MftEOT3uvYpCVwUaLMhsesm5Que-Kr7GQRV4pKZ2SNQ/edit?usp=sharing
This is a 2019 tutorial on how to do watershed delineation using ArcMap 10. It is an open education resource. Please let me know if you find it useful or see something that could be improved. Feel free to use it for teaching Geographic Information Science.
This document provides instructions for importing survey point data into AutoCAD Civil 3D and generating a surface and profile from the point data. It guides the user through importing an ASCII text file of survey points with northing, easting, and elevation coordinates into AutoCAD to build a surface. It then demonstrates different ways of visualizing the generated surface, including contours, triangulated irregular networks, elevation banding, and slope banding. Finally, it shows how to create a profile of the surface by using the points to generate a polyline alignment and extracting a cross-sectional profile along the alignment path.
Trabalhos de prospecção regional, detalhada e identificação de alvos foram realizados para identificar áreas promissoras. Prospecções regionais e detalhadas foram conduzidas para mapear a geologia. Alvos específicos foram identificados para futuras pesquisas.
O documento descreve as principais províncias e distritos mineiros do Brasil, incluindo Carajás, Tapajós, Quadrilátero Ferrífero, Norte da Bahia, Cuiabá-Poconé, e Alto Paranaíba-Goiás. Ele fornece detalhes sobre os tipos de depósitos minerais encontrados em cada uma dessas regiões.
Este documento apresenta o plano de curso de Geologia Econômica ministrado pelo Dr. Neivaldo A. Castro na UFSC em Florianópolis. O documento detalha o objetivo, ementa, conteúdo programático e métodos de ensino e avaliação do curso, além de fornecer referências bibliográficas.
7 metamorfismo de rochas pelíticas 2012Alex Ramires
1) Sedimentos pelíticos são argilitos e folhelhos ricos em argilominerais derivados da crosta continental. 2) Com o metamorfismo progressivo, os metapelitos desenvolvem uma ampla gama de minerais como muscovita, biotita e granada. 3) Os diagramas de fase AKF e AFM são usados para representar as associações minerais em rochas pelíticas e suas mudanças com o aumento da temperatura e pressão.
The document describes different rock types including siltstone, sandstone, and breccia. It notes features like horizontal planar lamination, plant material, intense bioturbation. The boundaries between the base rock types are described as either sharp, erosion, or gradational.
Geocronologia aplicada ao mapeamento regional cprmAlex Ramires
O documento apresenta um guia de procedimentos para amostragem geocronológica aplicada ao mapeamento regional, com ênfase na técnica U-Pb SHRIMP. Inclui também um estudo de caso ilustrando a aplicação da geocronologia U-Pb em diversos terrenos pré-cambrianos brasileiros por meio de análises SHRIMP em zircões. O estudo de caso analisa 53 concórdias com imagens de zircões para discutir morfologias complexas e suas implicações na interpretação das idades.
Analysis insight about a Flyball dog competition team's performanceroli9797
Insight of my analysis about a Flyball dog competition team's last year performance. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
Predictably Improve Your B2B Tech Company's Performance by Leveraging DataKiwi Creative
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
Learn SQL from basic queries to Advance queriesmanishkhaire30
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
#DataAnalysis #SQL #LearningSQL #DataInsights #DataScience #Analytics
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...Social Samosa
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data LakeWalaa Eldin Moustafa
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
End-to-end pipeline agility - Berlin Buzzwords 2024Lars Albertsson
We describe how we achieve high change agility in data engineering by eliminating the fear of breaking downstream data pipelines through end-to-end pipeline testing, and by using schema metaprogramming to safely eliminate boilerplate involved in changes that affect whole pipelines.
A quick poll on agility in changing pipelines from end to end indicated a huge span in capabilities. For the question "How long time does it take for all downstream pipelines to be adapted to an upstream change," the median response was 6 months, but some respondents could do it in less than a day. When quantitative data engineering differences between the best and worst are measured, the span is often 100x-1000x, sometimes even more.
A long time ago, we suffered at Spotify from fear of changing pipelines due to not knowing what the impact might be downstream. We made plans for a technical solution to test pipelines end-to-end to mitigate that fear, but the effort failed for cultural reasons. We eventually solved this challenge, but in a different context. In this presentation we will describe how we test full pipelines effectively by manipulating workflow orchestration, which enables us to make changes in pipelines without fear of breaking downstream.
Making schema changes that affect many jobs also involves a lot of toil and boilerplate. Using schema-on-read mitigates some of it, but has drawbacks since it makes it more difficult to detect errors early. We will describe how we have rejected this tradeoff by applying schema metaprogramming, eliminating boilerplate but keeping the protection of static typing, thereby further improving agility to quickly modify data pipelines without fear.
1. S-GeMS Tutorial Notes
presented
26 June 2007
in
Hydrogeophysics: Theory, Methods, and Modeling
Boise State University, Boise, Idaho
Geoff Bohling
Assistant Scientist
Kansas Geological Survey
geoff@kgs.ku.edu
864-2093
These notes, tutorial, and data available at:
http://people.ku.edu/~gbohling/BoiseGeostat
2. Introduction
These notes provide a brief introduction to geostatistical data
analysis with the S-GeMS software, which is available from
http://sgems.sourceforge.net/. For Windows, you can download
an automatic installer. The download also includes an S-GeMS
manual, sgems_manual.pdf, that ends up in the folder
C:\Program Files\SGeMS\src\doc (if you install to the default
directory under Windows).
You can read background information on the software on that
page. As mentioned there, S-GeMS provides a (fairly)
comprehensive collection of geostatistical estimation and
simulation algorithms and also provides a nice 3D visualization
environment. It provides a more limited selection of options for
standard statistical data analysis and essentially no facilities for
data management (subsetting data sets, etc.). So, you would
probably want to run S-GeMS in tandem with some other data
analysis & management software, such as Excel or (my
preference) R. R is an open-source data analysis package built on
the S language and is available from http://www.r-project.org/.
There are a few spatial & geostatistical data analysis packages
available for R. My favorite is the gstat package by Edzer
Pebesma (www.gstat.org). Some other software options are listed
at http://people.ku.edu/~gbohling/geostats/.
Note that S-GeMS does not handle screen real estate very well.
You often have to scroll around in the various windows and panels
in order to access all the controls.
3. Loading Data
S-GeMS can read data from files in its own format and can also
read files in GSLIB format, which is pretty much a standard format
for geostatistical data. We will start by loading data from a file
named ZoneA.dat, a GSLIB-format data file containing porosity
and permeability data from 85 wells in Zone A of the Big Bean Oil
Field (fictitious, but based on characteristics of a real field, 20 x 16
km in extent). You can open ZoneA.dat in a text editor, such as
WordPad, to see what is in it. The header lines and first six data
lines look like:
Zone A Data, Big Bean Field
8
X m meters east of origin
Y m meters north of origin
Thk m zone thickness in meters
Por % porosity in percent
Perm md permeability in millidarcies
LogPerm - base 10 log of permeability
LogPermPrd - LogPerm predicted by regression against Por
LogPermRsd - Residual from LogPerm-Por regression
12100 8300 37.1531 14.6515 2.8547 0.4556 0.1357 0.3198
5300 8700 31.4993 14.5093 -999.9999 -999.9999 -999.9999 -999.9999
3500 13900 36.9185 14.0639 -999.9999 -999.9999 -999.9999 -999.9999
5100 1900 24.0156 15.1084 1.1407 0.0572 0.2268 -0.1696
9900 13700 35.0411 13.919 -999.9999 -999.9999 -999.9999 -999.9999
2900 900 28.4249 13.1304 0.3897 -0.4093 -0.1674 -0.2419
The -999.9999’s represent missing values (permeability
information is available only at 42 of the 85 wells).
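The GSLIB layout shown above (a title line, the number of columns, one name line per column, then whitespace-separated rows) is easy to parse outside S-GeMS. This Python sketch uses a shortened, made-up three-column version of the file and masks the no-data values as NaN; it is illustrative only, not S-GeMS code:

```python
import io
import numpy as np

# Shortened, hypothetical stand-in for ZoneA.dat (three columns only).
gslib_text = """Zone A Data, Big Bean Field
3
X
Y
Perm
12100 8300 2.8547
5300 8700 -999.9999
5100 1900 1.1407
"""

def read_gslib(stream, nodata=-999.9999):
    """Minimal GSLIB reader: title, column count, column names, data rows."""
    title = stream.readline().strip()
    ncol = int(stream.readline())
    names = [stream.readline().split()[0] for _ in range(ncol)]
    data = np.loadtxt(stream)
    data[data == nodata] = np.nan  # flag missing values as NaN
    return title, names, data

title, names, data = read_gslib(io.StringIO(gslib_text))
n_missing = int(np.isnan(data[:, 2]).sum())  # rows lacking permeability
```

Applied to the full ZoneA.dat, the same count would report the 43 wells lacking permeability.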
First, fire up S-GeMS. Once it is open, I suggest maximizing it on
your screen, which may require dragging the S-GeMS window to
the left so that you can reach the maximize button in the upper
right corner. (As I said, S-GeMS does not handle screen real estate
very well.)
4. To load the Zone A data, select Load Object… from the Objects
menu. Use the resulting Open dialog box to navigate to and open
the file ZoneA.dat.
You will then be presented with an Import from GSLIB dialog
box with a preview of the file. Set the object type to point set in
the Select Object type dropdown box:
5. Then click Next> to get:
As shown above, set the Pointset name to ZoneA (or whatever
you like), check Use No-Data-Value, and set the No-Data-Value
to -999.9999. The default entries regarding the coordinate
columns are correct – X is in column 1, Y is in column 2, and there
is no Z (data are 2D). Click Finish and the data will be loaded.
6. Note that if you have the Commands Panel displayed (at the
bottom of the S-GeMS window) with the S-GeMS Commands
History tab selected, you will get feedback on what S-GeMS is
doing. This window will display the S-GeMS scripting language
command associated with every action you perform and will tell
you how long it took to complete each action. After loading the
above file, it should show something like:
This feature is very handy, as it lets you learn the commands
associated with each action, allowing you to subsequently enter
individual commands into the Run Command text box or transfer
a sequence of commands to a script and run the script (possibly
after editing) using the Execute Commands File… button.
7. Viewing Data
In between the Algorithms panel on the left and the data display
window on the right, there is a panel containing Objects,
Preferences, and Info tabs – actually this is part of the
Visualization panel. To view the Zone A porosity data, expand
Objects and then ZoneA on the Objects tab and check the boxes
next to both ZoneA and Por %:
You may also need to click on the globe below the visualization
window to get the data to display. The initial view
will be a map view, looking straight down on the data, which is
appropriate for this 2D dataset. However, you can rotate the data
in 3D space, if you care to, by clicking in the visualization window
and moving the mouse around. I won’t elaborate on the
visualization controls here – they are easy enough to figure out.
Note however that holding down Shift+Ctrl allows you to use the
mouse to zoom the data and holding down just Shift or Ctrl allows
you to pan the data (without rotating). To get a color scale, go to
the Preferences tab and check Show colorbar of, then select the
object ZoneA and the property Por %. You may need to scroll to
reach those controls.
8. Basic Data Analysis/Display
The Data Analysis menu contains a few options for examining
data distributions. For example, to get a histogram of porosity,
select Histogram from this menu, then select ZoneA in the Object
dropdown box and Por % in the Property dropdown box. The
result should look something like:
Note that this and other displays (such as the variogram) come up
in a new window – which may get hidden at some point, and needs
to be re-displayed by selecting it on the task bar (or Alt-Tab). Also
note that the control panel on the left side may need to be widened
to allow access to all the controls.
As mentioned in the lecture notes, geostatistical methods for
continuous variables are optimal when the data are normally
distributed, so it is good to check for significant deviations from
normality. The Zone A porosity data are not terribly non-normal,
so we will continue analyzing them without transformation. (Be
more careful in real work!)
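If you want a number to go along with the visual check, sample skewness is a simple indicator: it is near zero for symmetric, roughly normal data. This sketch uses made-up porosity-like values, not the ZoneA data:

```python
import numpy as np

def skewness(x):
    """Sample skewness: third central moment over the cubed std deviation."""
    x = np.asarray(x, float)
    s = x.std()
    return float(((x - x.mean()) ** 3).mean() / s ** 3)

# Hypothetical porosity-like values (percent), NOT the ZoneA data.
por = [12.1, 13.4, 14.0, 14.6, 15.1, 15.9, 16.3, 17.2]
skew = skewness(por)  # a value near zero suggests no strong asymmetry
```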
9. Variogram Analysis – Continuous Variable
To compute a variogram for the Zone A porosity data, select
Variogram on the Data Analysis menu. On the Variogram
Modeler dialog, leave Select Task at its default setting (compute
variogram from scratch), select ZoneA under Grid Name (this
really should be labeled Object), and then select Por % as both the
head and tail properties:
Then click next . . .
10. On the next dialog box, set the parameters as follows to compute
one omnidirectional variogram (with an angular tolerance of 90
degrees and a very large bandwidth) and four directional
variograms (at 0, 45, 90 and 135 degrees, each with an angular
tolerance of 22.5 degrees and a bandwidth of 5000 m), computing
15 lags with a nominal lag spacing of 1000 m and a lag tolerance
of 500 m. Also check Standardize by Cov(head,tail) to produce
a scaled variogram with a sill of about 1. This gives us one less
parameter to estimate (the sill) and matches expectations of the
sequential Gaussian simulation we will do later.
These settings will compute lag results out to about 15 km, almost
the full extent of the field in the north-south direction. In
accordance with suggested rules of thumb, we will not pay too
much attention to results beyond 10 km, about half the maximum
separation distance between wells.
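To make the lag settings concrete, here is an illustrative Python sketch of an omnidirectional experimental semivariogram using the same binning idea (15 lags, 1000 m spacing, 500 m tolerance). It is not S-GeMS code, and the four-well example data are invented:

```python
import numpy as np

def omni_variogram(x, y, v, n_lags=15, lag=1000.0, tol=500.0):
    """Omnidirectional experimental semivariogram with tolerance binning."""
    gamma = np.zeros(n_lags)
    counts = np.zeros(n_lags, dtype=int)
    n = len(v)
    for i in range(n):
        for j in range(i + 1, n):
            h = np.hypot(x[i] - x[j], y[i] - y[j])
            k = int(round(h / lag))                  # nearest lag bin
            if 1 <= k <= n_lags and abs(h - k * lag) <= tol:
                gamma[k - 1] += 0.5 * (v[i] - v[j]) ** 2
                counts[k - 1] += 1
    ok = counts > 0
    gamma[ok] /= counts[ok]                          # average per bin
    return gamma, counts

# Tiny invented example: four wells on a line, 1000 m apart.
x = np.array([0.0, 1000.0, 2000.0, 3000.0])
y = np.zeros(4)
v = np.array([14.0, 15.0, 13.0, 16.0])
gamma, counts = omni_variogram(x, y, v)
```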
11. After setting up the variogram computation parameters above,
clicking Next>, and then tiling the individual variogram plots by
selecting Tile from the Window menu under Variogram
Modeling, you should see something like:
The omnidirectional variogram is in the lower right, and all the
plots together are in the upper left, with the four directional
variograms in between. Note: The direction convention in S-
GeMS is “Cartesian” (degrees counterclockwise from east), not
geographic (degrees clockwise from north) like most other
software. The directional variograms maybe show some hint of
trend. Specifically, the 90 degree variogram looks fairly “trendy”
while the 0 degree variogram looks fairly trend-free. Nevertheless,
we will use a single omnidirectional model, exponential with a
(practical) range of 5500 meters and a sill of 1. The sill has been
imposed by our scaling by Cov(head,tail). Without this scaling,
we would have seen the same variogram, but with a sill of about
0.8 – the variance of the porosity values. The File menu offers
various options for saving and loading variograms.
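The fitted model can be written as a function. For an exponential variogram parameterized by its practical range a, gamma(h) = c * (1 - exp(-3h/a)), which reaches 95% of the sill c at h = a. A small illustrative sketch:

```python
import numpy as np

def exp_variogram(h, sill=1.0, prac_range=5500.0):
    """Exponential variogram; hits ~95% of the sill at the practical range."""
    return sill * (1.0 - np.exp(-3.0 * np.asarray(h, float) / prac_range))

g = exp_variogram(5500.0)  # about 0.95 of the sill at 5500 m
```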
12. Kriging a Continuous Variable
Before we krige the porosity, we need to create a grid for holding
the kriged values. If you click on the Info tab and select ZoneA
under Information on, you will see that the ZoneA dataset has a
bounding box running from (100, 100) to (19500,15700) in meters.
We will create grid a 100 x 80 grid running from X = 100 to 19900
meters in increments of 200 m and from Y = 100 to 15900 meters
in increments of 200 m. Those specifications are for grid cell
centers, so the grid will span 0 to 20 km in X and 0 to 16 km in Y,
edge to edge. To do this, select New Cartesian Grid on the
Objects menu, fill out the resulting dialog box as follows . . .
and click Create Grid.
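The same grid specification can be expressed in code. This illustrative sketch builds the 100 x 80 array of cell-center coordinates from the dialog settings (200 m spacing, first center at (100, 100)):

```python
import numpy as np

# The New Cartesian Grid dialog settings as code (illustrative).
nx, ny, cell = 100, 80, 200.0
x0, y0 = 100.0, 100.0
xc = x0 + cell * np.arange(nx)  # cell-center eastings: 100, 300, ..., 19900
yc = y0 + cell * np.arange(ny)  # cell-center northings: 100, 300, ..., 15900
X, Y = np.meshgrid(xc, yc)      # (ny, nx) coordinate arrays, one per node
```

With 200 m cells centered on these points, the grid spans 0 to 20 km in X and 0 to 16 km in Y, edge to edge, as stated above.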
13. ZoneAGrid will then be added to the list of objects available for
visualization. If you check its entry on the Objects tab, the grid’s
bounding box will be added to the display:
(I added the color scale to the display through the Preferences tab
and closed the Algorithms and Commands panels to maximize
display space.)
To do the kriging thing, expand the Estimation entry on the
Algorithms panel and select the entry kriging. The middle part of
the Algorithms panel should then display two tabs, General and
Data and Variogram, for specifying the parameters controlling the
kriging process.
14. Fill out the General and Data tab as follows:
This will use ordinary kriging to add a new property, PorOK, to
ZoneAGrid, conditioned on the porosity values in the ZoneA point
set and using a circular “search ellipsoid” with a radius of 10 km,
requiring a minimum of 2 data points to be found within 10 km of
each grid point and using no more than 12 nearest neighboring
points in the estimation for each grid point.
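For intuition about what happens at each grid node, here is a bare-bones ordinary kriging sketch (illustrative only, not S-GeMS internals). It solves the OK system built from the exponential variogram with the raw sill of 0.8, uses all data points rather than the 10 km / 12-neighbor search, and the three wells are invented:

```python
import numpy as np

def exp_vgm(h, sill=0.8, prac_range=5500.0):
    """Exponential variogram reaching ~95% of the sill at the practical range."""
    return sill * (1.0 - np.exp(-3.0 * h / prac_range))

def ok_estimate(xd, yd, vd, x0, y0):
    """Ordinary kriging at one location using all data points."""
    n = len(vd)
    A = np.ones((n + 1, n + 1))   # extra row/col enforce sum of weights = 1
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = exp_vgm(np.hypot(xd[i] - xd[j], yd[i] - yd[j]))
    b = np.ones(n + 1)
    b[:n] = exp_vgm(np.hypot(xd - x0, yd - y0))
    w = np.linalg.solve(A, b)     # weights plus the Lagrange multiplier
    return float(w[:n] @ vd), w[:n]

# Three hypothetical wells and one grid node to estimate.
xd = np.array([0.0, 2000.0, 0.0])
yd = np.array([0.0, 0.0, 2000.0])
vd = np.array([14.0, 16.0, 12.0])
est, weights = ok_estimate(xd, yd, vd, 500.0, 500.0)
```

The extra row and column in the matrix enforce the unbiasedness constraint (weights summing to 1), which is what distinguishes ordinary from simple kriging.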
15. Fill out the Variogram tab with the information on the porosity
variogram we developed above. Here I have used the “raw” sill of
0.8, rather than the scaled sill of 1.0, since we are kriging the raw
data (but this really makes no difference, except in changing the
magnitude of the estimated kriging variance at each grid node):
If you saved the variogram model to a file earlier, you can reload
that information here using the Load existing model… button.
After you have filled in all the parameters, click the Run
Algorithm button down below to make it go . . .
16. Then, on the Objects tab for the visualization window, check the
property PorOK under ZoneAGrid (which must also be checked)
to display the kriged porosity grid:
You can toggle the display of PorOK off and on, leaving the data
values (Por % in the ZoneA object) on to see that the kriging has
reproduced the data values.
If you wish, you can take a look at the kriging variance map,
PorOK_krig_var. You will see that it looks very much like a map
of the distance to the nearest well (control point).
17. Sequential Gaussian Simulation of Porosity
Next we will generate five realizations of porosity using sequential
Gaussian simulation. To do this, expand Simulation on the
Algorithms tab and set up the General tab as follows:
What is hidden off to the right is the fact that the number of
realizations is set to 5. You can leave the random number seed
(“Seed”) at its default value. The sequential Gaussian simulation
will actually use a normal score transform to turn the porosity
values at the wells into a set of values that perfectly follow a
standard normal distribution (zero mean, unit standard deviation)
and will then generate grids of simulated values whose univariate
distribution is also standard normal. Therefore we are using
Simple Kriging – the spatially constant mean will be assumed to be
zero.
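The normal score transform described above can be sketched in a few lines: each value is replaced by the standard-normal quantile of its rank-based empirical CDF. This is a minimal illustration, not the S-GeMS implementation.

```python
from statistics import NormalDist

def normal_score(values):
    """Rank-transform values to standard-normal scores."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n           # plotting position in (0, 1)
        scores[i] = NormalDist().inv_cdf(p)
    return scores

por = [12.8, 14.1, 13.5, 15.2, 14.7]   # illustrative porosity values (%)
ns = normal_score(por)
# the scores sum to zero by symmetry of the plotting positions, and the
# ordering of the data is preserved exactly
```

Because the transform is rank-based, the simulated normal-score fields can later be mapped back to any target histogram.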
18. Set up the Data tab more or less as follows:
We have checked Use Target Histogram so that the standard
normal simulations will be backtransformed to match the
histogram of the actual porosity values after they are generated.
This backtransformation involves extrapolation in the tails of the
distribution, for which the user has to set minimum and maximum
allowed limits for the backtransformed values. I have used 12%
and 17%, just outside the limits of the actual data.
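The back-transformation with tail limits can be illustrated as quantile matching, with the tails anchored at the user-set minimum and maximum. This is a simple piecewise-linear sketch, not the exact S-GeMS tail model.

```python
from statistics import NormalDist

def back_transform(z, data, lo=12.0, hi=17.0):
    """Map a standard-normal value z back to the data histogram,
    pulling tail values linearly toward the lo/hi limits."""
    srt = sorted(data)
    n = len(srt)
    # piecewise-linear CDF anchors: (probability, porosity value)
    ps = [0.0] + [(i + 0.5) / n for i in range(n)] + [1.0]
    vs = [lo] + srt + [hi]
    p = NormalDist().cdf(z)
    for k in range(len(ps) - 1):
        if ps[k] <= p <= ps[k + 1]:
            t = (p - ps[k]) / (ps[k + 1] - ps[k])
            return vs[k] + t * (vs[k + 1] - vs[k])
    return hi

por = [12.8, 14.1, 13.5, 15.2, 14.7]   # illustrative porosity data (%)
mid = back_transform(0.0, por)         # median score maps to the data median
```

Without the lo/hi anchors, extreme simulated scores would have no defined porosity value, which is why the dialog requires the 12% and 17% limits.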
19. Set up the Variograms tab to represent the “unit-sill” version of the
variogram we developed earlier:
What the SGS algorithm really wants is the variogram of the
normal-score transformed data, which we have not computed. We
are taking a short-cut, assuming that the variogram of the normal-
score transformed data would look very much like the variogram
of the raw data scaled to a unit sill. This is the case for these data,
since the shape of the univariate porosity distribution is in fact
reasonably normal.
Click on Run Algorithm to make it go, and the software should
generate five equiprobable realizations of porosity, as properties in
ZoneAGrid:
21. Sequential Indicator Simulation of Facies
To illustrate sequential indicator simulation of facies, we will use
data from the Green Goblin Gas Field, a Permian gas field
somewhere in the US mid-continent. The facies values have been
generated from geophysical well logs using a neural network
trained on observed log-facies associations in cored wells. The
headers and first few lines of data look like:
Facies data, Green Goblin Gas Field
11
WellNo
Depth
X ft
Y ft
Z ft
F05
Ind01
Ind02
Ind03
Ind04
Ind05
12 3031.5 65826.02 17091.19 -176.5 2 0 1 0 0 0
12 3029.75 65826.02 17091.19 -174.75 4 0 0 0 1 0
12 3027.75 65826.02 17091.19 -172.75 3 0 0 1 0 0
12 3025.75 65826.02 17091.19 -170.75 4 0 0 0 1 0
As indicated, the coordinates X, Y, and Z are in feet. X and Y are
measured from an arbitrary origin. Z is elevation relative to sea
level. F05 is an integer running from 1 to 5 representing each of
the five different facies. Ind01 – Ind05 are indicator
representations of the facies occurrence. That is, Ind01 is 1 if F05
= 1 and 0 otherwise. The indicators can be thought of as a set of
“hard” probabilities, representing certainty of which facies occurs
at each data point. (In truth we are really not certain, and a “soft”
or probabilistic representation of the neural net predictions would
really be more appropriate.)
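The indicator coding described above is just a one-hot encoding of the facies column. A minimal sketch, with the F05 values taken from the data lines in the listing:

```python
def facies_indicators(f05, n_facies=5):
    """Ind0k is 1 where F05 equals k and 0 otherwise."""
    return [[1 if f == k else 0 for k in range(1, n_facies + 1)]
            for f in f05]

rows = facies_indicators([2, 4, 3, 4])   # F05 values from the listing
# rows[0] is [0, 1, 0, 0, 0], matching the Ind01..Ind05 columns for F05 = 2
```

Each row sums to 1, which is what makes the indicators read as "hard" probabilities: all of the probability mass sits on the observed facies.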
22. The facies sequence (and this is only a piece of it) was developed
as the seas rolled in and the seas rolled out across a very broad,
gently dipping shelf. The facies codes represent:
1 – continental sandstone
2 – continental siltstone
3 – mud-supported limestone
4 – grain-supported limestone & dolomite
5 – marine sandstone
Later we will need the proportions of each facies in the dataset.
These are (in decimal):
1: 0.008, 2: 0.437, 3: 0.368, 4: 0.183, 5: 0.004
To get started, use Load Object on the Objects menu to read
in the file GreenGoblin.dat, creating a point set named
GreenGoblin. The coordinates are in columns 3, 4, and 5 and you
do not need to specify a No-Data-Value because there are no
missing values. After the file is loaded you can use the Info tab to
see that there are 8553 data points and to get information on the
bounding box of the data.
23. On the Objects tab, check both GreenGoblin and F05 to display
them in the visualization window. On the Preferences tab, set the
Z-scaling to 20 (the maximum allowed) and add the colorbar of
F05 to the display to get a rendition of the facies values at the 74
wells:
Then use New Cartesian Grid on the Objects menu to create a
101 x 66 x 128 grid with spacings of 660 feet in the X and Y
directions and 2 feet in the Z direction, with origin coordinates of
0, 0 in X and Y and -253.75 in Z. You can name it something like
GreenGoblinGrid. Check the new grid's entry on the Objects tab to
display its bounding box. Make sure that the grid encompasses the
data.
24. Variogram analysis for indicator variables
For SIS, we need to specify a variogram for the indicator variable
representing each category. We will go very briefly through the
indicator variogram development for facies 3 (the most abundant
facies), represented by Ind03. From the Data Analysis menu,
choose Variogram, and then choose Ind03 from GreenGoblin as
the variable to analyze. First we will look at the vertical
variogram, setting up the lag and direction parameters as follows:
Also check Standardize by Cov(head,tail) to generate a unit-sill
variogram. (It will be scaled by p*(1-p), where p is the overall
proportion of the facies. This is the variance of a Bernoulli
distributed variable.)
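The standardization factor is easy to compute from the facies proportions listed earlier: an indicator for a facies with overall proportion p is a Bernoulli(p) variable, so its variance, and hence the indicator variogram sill, is p * (1 - p).

```python
proportions = {1: 0.008, 2: 0.437, 3: 0.368, 4: 0.183, 5: 0.004}
sills = {k: p * (1 - p) for k, p in proportions.items()}
# facies 3, the most abundant: 0.368 * 0.632 = 0.232576
```

Standardizing by this factor is what rescales each indicator variogram to a unit sill, so that models for different facies are directly comparable.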
25. The early lags of the vertical variogram can be reasonably fit with
an exponential structure with a range of 18 feet:
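The fitted vertical model can be written down directly. This sketch assumes the GSLIB/S-GeMS convention, in which the range parameter is the "practical" range, i.e. gamma(h) = sill * (1 - exp(-3h / range)).

```python
import math

def exp_variogram(h, sill=1.0, rng=18.0):
    """Unit-sill exponential variogram with an 18 ft practical range."""
    return sill * (1.0 - math.exp(-3.0 * h / rng))

# the model reaches about 95% of the sill at the practical range
gamma_18 = exp_variogram(18.0)
```

Under this convention the model never quite reaches the sill, which is why the range is defined as the lag where it attains roughly 95% of it.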
The horizontal variogram modeling gets tricky . . .
I have done the rest of the variogram modeling in R using the gstat
package, and the results are encapsulated in the S-GeMS SIS
parameter file GreenGoblinF05sis.par, which you can read into
S-GeMS.
26. Select sisim under Simulation on the Algorithms tab and then
click load to open up the parameter file GreenGoblinF05sis.par.
This should fill in the General, Data, and Variogram tabs with the
appropriate settings, which you can explore. Then click Run
Algorithm and wait a while to get something like . . .