During the past decade, the size of 3D seismic data volumes and the number of seismic attributes have increased
to the extent that it is difficult, if not impossible, for interpreters to examine every seismic line and time
slice. To address this problem, several seismic facies classification algorithms including k-means, self-organizing
maps, generative topographic mapping, support vector machines, Gaussian mixture models, and artificial neural
networks have been successfully used to extract features of geologic interest from multiple volumes. Although
well documented in the literature, the terminology and complexity of these algorithms may bewilder the average
seismic interpreter, and few papers have applied these competing methods to the same data volume. We have
reviewed six commonly used algorithms and applied them to a single 3D seismic data volume acquired over the
Canterbury Basin, offshore New Zealand, where one of the main objectives was to differentiate the architectural
elements of a turbidite system. Not surprisingly, the most important parameter in this analysis was the choice of
the correct input attributes, which in turn depended on careful pattern recognition by the interpreter. We found
that supervised learning methods provided accurate estimates of the desired seismic facies, whereas unsupervised
learning methods also highlighted features that might otherwise be overlooked.
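To make the unsupervised/supervised contrast concrete, here is a minimal scikit-learn sketch of both workflow families; the attribute matrix, labels, and cluster count are synthetic placeholders, not values from the Canterbury Basin study.

```python
# Minimal sketch of the two workflow families compared in the review.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))           # 5000 voxels x 4 seismic attributes (synthetic)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in facies labels from interpreter picks

X_std = StandardScaler().fit_transform(X)  # attributes must share a common scale

# Unsupervised: let the data define the facies clusters.
facies_unsup = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)

# Supervised: learn facies from interpreter-labelled training voxels.
clf = SVC(kernel="rbf").fit(X_std[:4000], y[:4000])
facies_sup = clf.predict(X_std[4000:])
```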
We illustrate unsupervised and supervised learning algorithms that accurately classify lithological variations in 3D seismic data. We demonstrate blind source separation techniques, such as principal component analysis (PCA) and noise-adjusted principal components, in conjunction with Kohonen self-organizing maps to produce superior unsupervised classification maps. Further, we use training in PCA space for maximum-likelihood (ML) supervised classification. Results demonstrate that ML supervised classification produces an improved classification of the facies in a 3D seismic dataset from the Anadarko Basin in central Oklahoma.
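A hedged sketch of the described chain follows: attributes projected into PCA space, clustered with a Kohonen self-organizing map (via the third-party minisom package), and classified with a Gaussian maximum-likelihood rule (scikit-learn's QDA). The noise-adjusted PCA variant is not reproduced, and all data and map sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from minisom import MiniSom

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))            # synthetic stand-in for seismic attributes
y = (X[:, 0] > 0).astype(int)             # stand-in facies labels for the ML step

scores = PCA(n_components=3).fit_transform(X)   # PCA-space representation

som = MiniSom(8, 8, 3, sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(scores, 5000)
unsup_class = np.array([som.winner(s) for s in scores])  # (row, col) SOM node per sample

# QDA fits one Gaussian per class, i.e. the classical ML decision rule.
ml = QuadraticDiscriminantAnalysis().fit(scores, y)
sup_class = ml.predict(scores)
```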
The document discusses the past, present, and future of seismic analysis techniques including seismic stratigraphy and geomorphology. It notes that integrating seismic stratigraphy and geomorphology provides the most robust geological interpretations. It describes how 3D seismic data and advances in computing/visualization have improved analysis. The document predicts that machine learning techniques will increasingly integrate seismic data types and attributes to better identify hydrocarbon traps. Handling "big data" from multiple sources is a challenge, but integrating rock physics can improve seismic reservoir description.
Convolutional neural networks can be used to automatically identify stratigraphic patterns in seismic data. The CNNs can be trained on various seismic facies and depositional sequences to recognize visual similarities. This will help perform seismic stratigraphic interpretation more efficiently and objectively compared to current methods that rely heavily on human expertise. The automated identification of stratigraphic features could further allow the detection of potential play types, leads, and prospects within the seismic volume.
This document discusses using convolutional neural networks (CNNs) to classify and segment satellite imagery. It presents a novel approach using a CNN to perform per-pixel classification of multispectral satellite imagery and a digital surface model into five categories (vegetation, ground, roads, buildings, water). The CNN is first pre-trained with unsupervised clustering then fine-tuned for classification and segmentation. Results show the CNN approach outperforms existing methods, achieving 94.49% classification accuracy and improving segmentation by reducing salt-and-pepper effects from per-pixel classification alone.
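A minimal PyTorch sketch of per-pixel patch classification into the five categories named above follows; the channel count (4 spectral bands + 1 DSM), patch size, and layer sizes are assumptions for illustration, and the paper's unsupervised pre-training step is not reproduced.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Hypothetical patch classifier: label the centre pixel of each patch."""
    def __init__(self, in_ch=5, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, n_classes)  # for 32x32 input patches

    def forward(self, x):                  # x: (batch, 5, 32, 32)
        f = self.features(x)
        return self.head(f.flatten(1))     # logits over the 5 classes

model = PatchCNN()
logits = model(torch.randn(4, 5, 32, 32))
pred = logits.argmax(dim=1)                # per-patch (i.e. per-pixel) labels
```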
Object Classification of Satellite Images Using Cluster Repulsion Based Kerne... (IOSR Journals)
Abstract: We investigated the classification of satellite images and multispectral remote sensing data, focusing on uncertainty analysis in the produced land-cover maps. We proposed an efficient technique for classifying multispectral satellite images into road, building, and green areas using a Support Vector Machine (SVM). Classification was carried out in three modules: (a) preprocessing using Gaussian filtering and conversion from RGB to Lab color space; (b) object segmentation using the proposed cluster-repulsion-based kernel Fuzzy C-Means (FCM); and (c) classification using a one-to-many SVM classifier. The goal of this research is to provide efficient classification of satellite images using object-based image analysis. The proposed work is evaluated on satellite images, and its accuracy is compared to FCM-based classification. The results showed that the proposed technique achieved better results, reaching accuracies of 79%, 84%, 81%, and 97.9% for road, tree, building, and vehicle classification, respectively.
Keywords: satellite image, FCM clustering, classification, SVM classifier.
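The three-module pipeline lends itself to a short Python sketch using scipy, scikit-image, scikit-fuzzy, and scikit-learn; the image, cluster count, and labels are illustrative assumptions, and the plain FCM call stands in for the paper's cluster-repulsion kernel variant.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.color import rgb2lab
import skfuzzy as fuzz
from sklearn.svm import SVC

rgb = np.random.rand(64, 64, 3)                    # placeholder satellite tile

# (a) Preprocessing: Gaussian smoothing, then RGB -> Lab colour space.
smoothed = gaussian_filter(rgb, sigma=(1, 1, 0))
lab = rgb2lab(smoothed)
pixels = lab.reshape(-1, 3)

# (b) Segmentation: standard fuzzy c-means (the cluster-repulsion kernel
# variant is the paper's own contribution and is not reproduced here).
cntr, u, *_ = fuzz.cluster.cmeans(pixels.T, c=3, m=2.0, error=0.005, maxiter=300)
segments = u.argmax(axis=0).reshape(64, 64)

# (c) Classification: SVM on per-segment mean colour; labels are hypothetical.
feats = np.array([pixels[segments.ravel() == s].mean(axis=0) for s in range(3)])
labels = np.array([0, 1, 2])                       # e.g. road / building / green
svm = SVC().fit(feats, labels)
```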
Automatic traffic light controller for emergency vehicle using peripheral int... (IJECEIAES)
Traffic lights play an important role in traffic management by controlling traffic on the road. Conditions at traffic light junctions worsen during emergencies: in heavy congestion, it is difficult for an emergency vehicle to cross a road that involves many junctions, an unsafe situation that may cause accidents. An Automatic Traffic Light Controller for Emergency Vehicles was designed and developed to help emergency vehicles cross traffic light junctions during emergencies. The project used a Peripheral Interface Controller (PIC) to program a priority-based traffic light controller. In an emergency, a vehicle such as an ambulance can trigger the traffic light to change from red to green, automatically clearing its path; using radio frequency (RF) signalling, the traffic light returns to normal operation once the ambulance has crossed. Results showed the design is capable of responding within a range of 55 meters. The project was successfully designed, implemented, and tested.
IRJET- Deep Convolution Neural Networks for Galaxy Morphology Classification (IRJET Journal)
This document presents research on using deep convolutional neural networks to classify galaxy morphology into three main categories: elliptical, spiral, and irregular. The proposed method uses a convolutional layer for feature extraction followed by fully connected layers for classification. Image augmentation techniques are also applied to address overfitting. The algorithm involves convolution, ReLU, pooling, flattening, and full-connection operations. The goal is to develop an automated and accurate system for galaxy morphology classification using deep learning approaches.
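The augmentation step can be sketched with torchvision transforms; the specific transform choices below are illustrative, not the paper's.

```python
from torchvision import transforms

# Random flips and rotations multiply the effective training set and damp
# overfitting; galaxy images have no preferred orientation, so a full 180
# degree rotation range is a reasonable (assumed) choice.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=180),
    transforms.ToTensor(),
])
# Applied to each PIL galaxy image at load time, e.g. inside a Dataset's __getitem__.
```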
This document discusses crowd density estimation using baseline filtering. It begins with an abstract describing the challenges of detecting and tracking objects in crowded scenes due to occlusions. It then reviews related works on component-based people detection, Bayesian tracking using shape models, and neural network-based people counting. The implementation section describes extracting foreground from background, computing crowd density as a function of foreground pixels, and estimating head counts to determine the total number of people. Screenshots show results of segmentation, preprocessing, tracking, and counting frames. It concludes that the proposed method estimates crowd density using movement, size, and height features with particle filtering and clustering.
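A hedged OpenCV sketch of the foreground-based density idea described above: segment the moving foreground and treat the foreground-pixel count as a crude density proxy. The calibration constant is a made-up placeholder.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def crowd_density(frame, pixels_per_person=900.0):
    """Return a rough head-count estimate from one video frame."""
    mask = subtractor.apply(frame)            # 255 = moving foreground
    foreground = cv2.countNonZero(mask)
    return foreground / pixels_per_person     # hypothetical calibration constant

# Usage sketch (hypothetical file name):
# cap = cv2.VideoCapture("crowd.mp4"); density = crowd_density(cap.read()[1])
```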
The document discusses integrating support vector machines (SVMs) and Markov random fields (MRFs) for remote sensing image classification. SVMs are good at identifying optimal discriminant hypersurfaces but do not consider context between samples. The paper aims to integrate SVMs and MRFs to allow for contextual classification. A novel classifier is proposed that reformulates the MRF minimum-energy decision rule as an SVM discriminant function with a "contextual kernel." Experimental results on real remote sensing datasets show the proposed method provides significantly more accurate classifications compared to a standard noncontextual SVM.
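One way to sketch the contextual-kernel idea is a product of a spectral kernel and a kernel on neighbourhood-averaged features, fed to an SVM as a precomputed Gram matrix; this is a simplified stand-in, not the paper's MRF-derived formulation.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_spec = rng.normal(size=(300, 6))        # per-pixel spectral features (synthetic)
X_ctx = X_spec + rng.normal(scale=0.1, size=X_spec.shape)  # stand-in for neighbourhood means
y = (X_spec[:, 0] > 0).astype(int)

# Pointwise product of two valid kernels is itself a valid kernel.
K = rbf_kernel(X_spec) * rbf_kernel(X_ctx)
clf = SVC(kernel="precomputed").fit(K, y)
pred = clf.predict(K)                      # predicting on the training kernel here
```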
The data obtained from remote sensing satellites furnish information about the land at varying resolutions and have been widely used for change detection studies. There exist a huge number of change detection methodologies and techniques, with new ones continually emerging. This paper provides a review of pixel-based and object-based change detection techniques in conjunction with a comparison of their merits and limitations. The advent of very-high-resolution remotely sensed images, exponentially increased image data volume, and multiple sensors demand the potential use of data mining techniques in tandem with object-based methods for change detection.
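A minimal pixel-based change-detection sketch of the kind reviewed: difference two co-registered images and threshold the deviation; the threshold rule is an illustrative choice.

```python
import numpy as np

def change_mask(img_t1, img_t2, k=2.0):
    """Boolean change map from two co-registered single-band images."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    # Flag pixels whose difference deviates more than k sigma from the mean.
    return np.abs(diff - diff.mean()) > k * diff.std()
```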
This document describes a new robust fixed rank kriging (R-FRK) method for improving the spatial completeness and accuracy of satellite sea surface temperature (SST) products. The R-FRK method addresses two key issues: 1) it allows for dimension reduction kriging to be applied to satellite SST data over irregular regions, and 2) it incorporates a data-driven bias correction model to address systematic biases in the satellite SST measurements. The method is applied to Moderate Resolution Imaging Spectroradiometer (MODIS) SST data from 2003 and 2010. Validation using drifting buoy observations shows the method produces spatially complete SST fields with high accuracy.
Multi sensor data fusion for change detection (sanu sharma)
This document summarizes a study that used multi-sensor data fusion to detect changes in a coastal zone in Trabzon, Turkey between 2000 and 2003. The study fused higher resolution aerial photographs and IKONOS panchromatic data with lower resolution ETM+ multispectral data to create 1m resolution multi-spectral images for both time periods. Post-classification comparison of the fused images from 2000 and 2003 was then used to detect changes in the coastal zone due to highway construction, identifying an area of 186023 m2 of new filled earth. Fusion was performed using the à trous wavelet transform algorithm to preserve spectral content while improving spatial resolution for accurate change detection.
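The à trous decomposition at the heart of this fusion can be sketched with scipy; the kernel is the standard B3 spline, and the fusion line at the end is a simplified additive scheme, not necessarily the study's exact variant.

```python
import numpy as np
from scipy.ndimage import convolve

B3 = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0

def atrous_planes(img, levels=2):
    """Wavelet detail planes of `img` via the a trous (holes) algorithm."""
    planes, c = [], img.astype(float)
    for j in range(levels):
        k = np.zeros((4 * 2**j + 1,) * 2)
        k[::2**j, ::2**j] = B3            # B3 spline kernel with 2**j - 1 holes
        c_next = convolve(c, k, mode="nearest")
        planes.append(c - c_next)         # detail lost between successive scales
        c = c_next
    return planes, c                       # detail planes + smooth residual

# Additive fusion sketch: inject the pan detail planes into each resampled
# multispectral band (pan and ms_band are hypothetical co-registered arrays):
# fused = ms_band + sum(atrous_planes(pan)[0])
```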
This document summarizes research conducted using supercomputers to enable artificial intelligence applications for analyzing large earth science data. It discusses two major functions: designing efficient simulations and developing intelligent data mining methods. Specific projects are described, including simulating climate models on the Sunway TaihuLight, simulating earthquakes, and using remote sensing data and deep learning to map land cover more accurately, detect oil palm trees, and create more accurate urban land use maps. The research enables digital earth modeling to simulate, analyze, understand, predict, and mitigate earth science issues.
Slides of my talk entitled "Independent Component Analysis based Incoherent Target Decompositions for Polarimetric SAR Data - Practical Aspects" at the 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain.
The document discusses the applicability of fuzzy theory in remote sensing image classification. It presents three experiments comparing different classification methods: 1) Unsupervised fuzzy c-means classification, 2) Supervised classification using fuzzy signatures, 3) Supervised classification using fuzzy signatures and membership functions. The supervised fuzzy methods achieved higher accuracy than the unsupervised method, with the third method performing best with an overall accuracy of 83.9%. Fuzzy convolution can further optimize results by combining classification bands.
02 Modelling strategies for Nuclear Probabilistic Safety Assessment in case o... (RCCSRENKEI)
This document discusses modelling strategies for probabilistic safety assessment of nuclear sites in the case of natural external events. It provides insights into probabilistic safety assessment methodology for multi-hazard events. It also discusses ongoing developments including using model reduction techniques to create virtual charts for seismic risk assessment, and leveraging high performance computing. The goal is to improve safety assessment methods through theoretical enhancements and applying outcomes to demonstration problems.
CLASSIFICATION AND COMPARISION OF REMOTE SENSING IMAGE USING SUPPORT VECTOR M... (ADEIJ Journal)
Remote sensing is the collection of information about an object without direct physical contact with it. It is widely used in fields such as oceanography, geology, and ecology. Remote sensing uses satellites to detect and classify particular objects or areas, and to classify objects on the earth's surface, including vegetation, buildings, soil, forest, and water. The approach uses the classifiers of previous images to decrease the number of training samples required for classifier training on an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate one with current training samples. This approach can be further applied to sequential image data, with only a small number of training samples required from each image. The method uses Landsat 8 images for the training and testing processes. First, using the classifier prediction technique, signatures are generated for the input images. The generated signatures are used for training, and SVM classification is used to classify the images. The final results describe how leveraging a priori information from previous images provides an advantageous improvement for future images in multi-temporal image classification.
A Review of Change Detection Techniques of LandCover Using Remote Sensing Data (iosrjce)
3D GPR in Archaeology: What can be gained from Dense Data (Alex Novo)
This document summarizes a study comparing dense 3D ground penetrating radar (GPR) data acquisition and processing to standard pseudo 3D GPR methods for archaeological applications. Two identical dense 3D GPR surveys of a test site were collected: one using a standard odometer-guided system, and one using a new laser-guided positioning system. The laser system acquired data twice as fast with more accurate positioning. Comparisons found that dense acquisition and processing resolved more subsurface targets than decimated and interpolated pseudo 3D data, demonstrating that standard methods may be missing important archaeological features.
Unsupervised Building Extraction from High Resolution Satellite Images Irresp... (CSCJournals)
The document discusses an unsupervised method for extracting buildings from high resolution satellite images regardless of rooftop structures. The method first calculates NDVI and chromaticity ratios to segment vegetation and shadows. Rooftops and roads are then detected and eliminated. Principal component analysis and area analysis are performed to accurately extract buildings. The algorithm aims to eliminate inhomogeneities caused by varying building hierarchies by focusing on eliminating non-building regions rather than detecting building regions of interest. The methodology is tested on Quickbird satellite imagery and results indicate it can extract buildings in complex environments irrespective of rooftop shape.
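The first step (NDVI-based vegetation masking) reduces to a one-line band ratio; the band arrays and the 0.3 cut-off below are illustrative assumptions.

```python
import numpy as np

def ndvi_mask(red, nir, threshold=0.3):
    """Boolean vegetation mask from red/NIR reflectance arrays."""
    ndvi = (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero
    return ndvi > threshold
```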
Individual movements and geographical data mining. Clustering algorithms for ... (Beniamino Murgante)
Individual movements and geographical data mining. Clustering algorithms for highlighting hotspots in personal navigation routes.
Giuseppe Borruso, Gabriella Schoier - University of Trieste
Laurent Etienne's presentation at Geomatics Atlantic 2012 (www.geomaticsatlantic.com) in Halifax, June 2012. More session details at http://lanyrd.com/2012/geomaticsatlantic2012/stbgx/ .
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo... (DR.P.S.JAGADEESH KUMAR)
This summarizes a document about classifying rural and agricultural areas using neural networks and remote sensing imagery. It discusses extracting texture features from gray scale and multispectral images to train a neural network for classification. Different features like histogram pixel intensity, texture constraints, and spatial pixel matrices were used. The neural network was able to effectively classify rural and agricultural regions by leveraging these extracted texture features from aerial images.
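A hedged sketch of such a texture pipeline: grey-level co-occurrence statistics per tile feeding a small neural network; tiles and labels are synthetic placeholders.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def texture_features(tile):
    """Contrast/homogeneity/energy from a uint8 grey-level tile."""
    glcm = graycomatrix(tile, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]

rng = np.random.default_rng(3)
tiles = rng.integers(0, 256, size=(200, 32, 32), dtype=np.uint8)
y = rng.integers(0, 2, size=200)           # stand-in rural/agricultural labels

X = np.array([texture_features(t) for t in tiles])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
```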
Field Strength Predicting Outdoor Models (IRJET Journal)
This document discusses several outdoor propagation models used to predict radio signal strength and path loss over distance. It begins by introducing concepts of transmission power, signal strength, and path loss. It then describes common factors that affect outdoor radio propagation like diffraction, reflection, refraction, and scattering. The rest of the document summarizes several empirical, theoretical, and physical propagation models including Okumura, Hata, ECC-33, COST-231 Hata, and Egli models. These models use different methods and equations to predict path loss and were developed based on extensive radio signal measurement data in various outdoor environments.
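One of the named models as a worked formula: the Okumura-Hata urban median path loss for a small/medium city (valid roughly 150-1500 MHz), in its widely published standard form; the sample inputs are illustrative.

```python
import math

def hata_urban_path_loss(f_mhz, h_base_m, h_mobile_m, d_km):
    """Median urban path loss in dB (Okumura-Hata, small/medium city)."""
    # Mobile antenna height correction factor a(hm):
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m \
           - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

print(hata_urban_path_loss(900, 30, 1.5, 5))   # roughly 151 dB at 5 km, 900 MHz
```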
Radar and optical remote sensing data evaluation and fusion; a case study for... (rsmahabir)
The recent increase in the availability of spaceborne radar in different wavelengths with multiple polarisations provides new opportunities for land surface analysis. This research effort explored how different radar data, and derived texture values, independently and in combination with optical imagery influence land cover/use classification accuracies for a study site in Washington, DC, USA. Two spaceborne radar images, Radarsat-2 L-band and Palsar C-band quad-polarised radar, were registered with Aster optical data for this study. Traditional methods of classification were applied to various components and combinations of this data set, and overall and class-specific thematic accuracies obtained for comparison. The results for the two despeckled radar data sets were quite different, with Radarsat-2 obtaining an overall accuracy of 59% and Palsar 77%, while that of the optical Aster was 90%. Combining the original radar and a variance texture measure increased the accuracy of Radarsat-2 to 71% but that of Palsar only to 78%. One of the sensor fusions of optical and radar obtained an accuracy of 93%. For this location, radar by itself does not obtain classification accuracies as high as optical data, but fusion with optical imagery provides better overall thematic accuracy than the optical independently, and results in some useful improvements on a class-by-class basis. For those regions with high cloud cover, quad polarisation radar can independently provide viable results but it may be wavelength-dependent.
A visualization-oriented 3D method for efficient computation of urban solar r... (Jianming Liang)
This document describes a new 3D method for efficiently computing and visualizing urban solar radiation based on 3D-2D surface mapping. The method first projects 3D triangular meshes onto 2D raster surfaces to determine solar radiation. It then uses barycentric interpolation to rasterize the positions and surface normals of each triangular mesh onto the 2D raster. An efficient GPU-accelerated algorithm calculates shadow masks and solar radiation for each raster cell. Resulting solar radiation rasters are represented as RGB texture maps for efficient rendering of large 3D urban models.
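The barycentric step can be sketched in numpy: fill raster cells inside a projected triangle with interpolated vertex normals; a simplified CPU stand-in for the paper's GPU-accelerated version.

```python
import numpy as np

def rasterize_triangle(v, n, grid_x, grid_y):
    """v: (3,2) projected vertices; n: (3,3) vertex normals.
    Returns interpolated normals at grid points and an inside-triangle mask."""
    px, py = np.meshgrid(grid_x, grid_y)
    (x0, y0), (x1, y1), (x2, y2) = v
    denom = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    # Barycentric weights of each raster cell with respect to the triangle.
    w0 = ((y1 - y2) * (px - x2) + (x2 - x1) * (py - y2)) / denom
    w1 = ((y2 - y0) * (px - x2) + (x0 - x2) * (py - y2)) / denom
    w2 = 1.0 - w0 - w1
    inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)
    normals = w0[..., None] * n[0] + w1[..., None] * n[1] + w2[..., None] * n[2]
    return normals, inside
```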
This document contains a stratigraphic chart for the upper Cretaceous sequence in mid-Norway. It includes information on chronostratigraphy, magnetostratigraphy, biostratigraphy, and sequence stratigraphy for mid-Norway and surrounding regions. The chart also notes various rock formations, members, and sandstones present in the upper Cretaceous sequence.
This document provides a correlation chart showing the chronostratigraphy, magnetostratigraphy, sequence stratigraphy, and biostratigraphy of the Lower Cretaceous succession in Mid-Norway. The chart correlates these stratigraphic data from various basins in Mid-Norway and adjacent areas in a timescale from the Berriasian to Albian stages. It establishes the relationship between polarity chrons, megasequences, transgressive-regressive cycles, ammonite biozones, and other microfossil biozones throughout this time period for the region.
This document summarizes oil and gas activity in Asia, with a focus on China and South Korea. It discusses China's large oil and gas production and consumption, but also significant environmental issues from pollution. In response, China has taken actions like restricting companies and shutting down factories. In contrast, South Korea relies heavily on imports due to limited domestic resources, but has advanced refining capacity and made environmental protection a priority through policies and investment in clean energy.
This document outlines the process for reservoir characterization, which involves multi-disciplinary analyses including: 1) geological analyses of core data, well logs, and cross sections; 2) analysis of geological databases; 3) evaluation of source rock and rock mechanics; 4) geophysical evaluation and interpretation of seismic data; and 5) reservoir engineering analyses including completion and drilling evaluations. The results of these analyses will be integrated into reservoir models to identify potential infill locations and "sweet spots" with greater producibility potential.
This document defines sequence stratigraphy and discusses its basic concepts. Sequence stratigraphy studies genetically related rock units bounded by unconformities. It is based on dividing strata into sequences bounded by sea level changes. Key concepts discussed include depositional sequences, parasequences, flooding surfaces, system tracts, accommodation space, and the importance of sequence stratigraphy for understanding basin evolution and resource exploration.
This document summarizes an analysis of reservoir facies and production classifications in a Frasnian-age reservoir in West Siberia. Core data from 7 wells and well log data from 27 wells in a 235 km2 3D seismic survey area were analyzed. Four litho-facies were identified from core analysis and predicted in uncored wells using fuzzy logic. A 3D static model integrating seismic, core, and log data was constructed to predict litho-facies, porosity, and permeability cubes. A conductivity map produced from the modeling can help identify the most productive areas for drilling.
Paleodepositional environment and sequence stratigraphy of outcropping sedime... (Alexander Decker)
- The document analyzes the paleodepositional environment and sequence stratigraphy of outcropping sediments in parts of the Southern Middle Niger Basin in Nigeria.
- Three main lithofacies were identified (sand, shale, silt) with seven subfacies. Depositional environments were determined to be continental fluvial for the Lokoja Formation and shallow marine to transitional for the Patti Formation.
- Three sequence stratigraphic systems tracts were established - a lowstand systems tract for the Lokoja Formation, a transgressive systems tract for the lower Patti Formation, and a highstand systems tract for the upper Patti Formation. An unconformity and candidate maximum flooding surface were identified.
This document provides an introduction to sequence stratigraphy, including key concepts such as accommodation space, cyclical deposition, parasequences, and Walther's Law. Students are instructed to analyze a sedimentary section by identifying parasequences and their components, measuring bed thicknesses, and interpreting the sedimentation history. The images are provided by Steve Holland of the University of Georgia and require separate permission for use outside this learning module.
This document provides a review of the history and concepts of sequence stratigraphy. It begins with a brief history starting from early ideas about sea level change in the 1600s and progresses to modern concepts developed in the late 20th century. It then discusses the key principles of sequence stratigraphy including accommodation space, sequence boundaries, systems tracts including lowstand, transgressive, and highstand tracts, and parasequences. The review provides definitions and diagrams to illustrate these fundamental concepts in sequence stratigraphy.
Interpretation and recognition of depositional systems using seismic dataDiego Timoteo
This document discusses the interpretation and recognition of depositional systems using seismic data. It covers five key stages: (1) reviewing basic concepts of sequence stratigraphy, (2) understanding the physical foundations of rocks and seismic reflection methods, (3) seismic stratigraphic interpretation of depositional sequences and system tracts, (4) recognizing depositional systems through seismic facies analysis, and (5) advanced seismic interpretation applications. Accurate interpretation requires integrating data from outcrops, cores, well logs, and seismic sections to constrain models, especially in frontier regions with limited data. The techniques allow correlating and mapping stratigraphic units to aid paleogeographic reconstruction and facies/lithology prediction away from control points.
This document provides an introduction to sequence stratigraphy, which attempts to subdivide and explain sedimentary deposits in terms of variations in sediment supply and accommodation space associated with sea level changes. It defines key terms like parasequence, progradation, retrogradation, transgression, and regression. It also describes the accommodation space equation and causes of changes in sea level and tectonic subsidence. Finally, it discusses sequence stratigraphic concepts like depositional sequences, system tracts, stacking patterns, and sequence boundaries.
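For reference, the accommodation space equation mentioned above is commonly written as a rate balance (a standard textbook form, stated here as context rather than quoted from this document):

dA/dt = dE/dt + dT/dt

where A is accommodation, E is eustatic sea level, and T is tectonic subsidence; sediment supply then fills, underfills, or overfills the space created, producing the progradational or retrogradational stacking patterns described above.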
LGC field course in the Book Cliffs, UT: Presentation 1 of 14 (Principles of ...) (William W. Little)
Introductory presentation for a professional field course titled: THE BOOK CLIFFS: A CASE STUDY IN COASTAL SEQUENCE STRATIGRAPHY, offered annually through W.W. LITTLE GEOLOGICAL CONSULTING (also offered by SCA). See details at: HTTP://LITTLEWW.WORDPRESS.COM.
Quantitative and Qualitative Seismic Interpretation of Seismic Data (Haseeb Ahmed)
This document discusses quantitative and qualitative seismic interpretation techniques used to analyze seismic data and map subsurface geology. It compares traditional qualitative techniques to more modern quantitative techniques. It then focuses on unconventional seismic interpretation techniques used for unconventional reservoirs with low permeability, including AVO analysis, seismic inversion, seismic attributes, and forward seismic modeling. These techniques can help identify tight gas, shale gas, and gas hydrate reservoirs that conventional methods cannot easily detect. The document provides details on how each technique works and its advantages.
Sedimentary facies refer to rock or sediment bodies that are distinguished by their composition, texture, structures and other features related to the depositional environment. Key aspects of facies include grain size, sorting, fossils and bedding. Individual facies represent specific depositional conditions. Multiple genetically-related facies comprise a facies association representing a depositional system. Facies successions occur at different scales from individual systems to basin-scale sequences reflecting changes in sea level over time.
Sequence stratigraphy involves subdividing stratigraphic records based on bounding discontinuities. A depositional sequence is defined as a succession of genetically related strata bounded by unconformities and correlative conformities. During a sequence, systems tracts are deposited in response to changes in relative sea level, including highstand, falling stage, lowstand, and transgressive tracts bounded by surfaces like sequence boundaries, transgressive surfaces, and flooding surfaces.
Fractal characterization of dynamic systems sectional images (Alexander Decker)
This document discusses using fractal analysis to characterize sectional images of dynamic systems. It simulates three systems - a hollow sphere, transmissibility ratio, and Lorenz weather model. For each system, it generates the governing equations, simulates the surface, takes 200 sectional images by passing a plane through the surface, and uses fractal disk dimension to characterize the roughness of each image. It finds the hollow sphere surface is smoothest, transmissibility ratio is rougher, and Lorenz weather model is roughest, indicating fractal analysis can distinguish linear from nonlinear system surfaces. The study demonstrates fractals' potential for image characterization in engineering applications.
Self-organizing maps (SOMs) can be used to classify seismic attributes in an unsupervised manner and reveal geological patterns that improve seismic interpretation. SOMs reduce a large set of seismic attributes into a smaller set of clusters that relate to geologic features of interest. The classified seismic data can then be analyzed using computer vision algorithms like convolutional neural networks to automatically identify depositional sequences, seismic facies, play types, leads, and prospects. This facilitates a more robust and timely seismic interpretation that can help identify hydrocarbon traps and reduce exploration risk and costs.
This document discusses geologic data models and their application to Cordilleran geology. It addresses two facets of data management: 1) data collection including field data, data models, databases and future refinements, and 2) data models in terms of accuracy, breadth, commonality and depth. Accurate data models allow geologists to share data and interpretations without compromising computer capabilities. Developing standardized taxonomies helps disseminate geologic knowledge among surveys. The document outlines how data models can improve data collection and interpretation for complex Cordilleran geology.
This document describes a fast and reliable method for surface wave tomography to estimate 2-D models of isotropic and azimuthally anisotropic velocity variations from regional or global surface wave data. The method inverts surface wave group or phase velocity measurements to produce tomographic maps in a spherical geometry. It allows for spatial smoothing and model amplitude constraints to be applied simultaneously. Examples applying this technique globally and regionally in Eurasia and Antarctica are presented.
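The inversion at the heart of such tomography can be sketched as damped least squares, with a single damping weight standing in for the paper's separate smoothing and amplitude constraints; all arrays below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.random((120, 50))                  # ray-path sensitivity matrix (synthetic)
m_true = rng.normal(size=50)               # true slowness perturbations
d = G @ m_true + rng.normal(scale=0.01, size=120)  # travel-time-like data + noise

lam = 0.1                                  # regularisation weight (assumed)
# Damped least squares: m = (G'G + lam^2 I)^-1 G'd
m_est = np.linalg.solve(G.T @ G + lam**2 * np.eye(50), G.T @ d)
```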
This paper introduces artificial neural networks (ANNs) to model probabilistic dependencies in supervised classification tasks for discriminating between earthquakes and explosions. ANNs are regarded as discriminating tools to separate natural seismic events (earthquakes) from artificial ones (man-made explosions) based on seismic signals recorded at regional distances. The core of our contribution is to improve the obtained numerical results using this advanced technique. By testing different types of seismic features, the ANNs showed the potential of this method to discriminate between the classes. We found that the neural networks were used in a fully innovative manner in this work: the ARMA-coefficient filters detect
the type of the source whenever a natural or artificial source changes the nature of the background noise of the seismograms. We also found that the algorithm is sometimes capable of flagging further natural seismological events just a little before their onset.
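A hedged sketch of the ARMA-feature idea: fit ARMA coefficients to each seismogram (via statsmodels) and feed them to a neural-network classifier; the ARMA order, data, and network size are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPClassifier

def arma_features(signal, order=(4, 0, 2)):
    """AR/MA coefficients of a fitted ARMA model as a feature vector."""
    return ARIMA(signal, order=order).fit().params

rng = np.random.default_rng(5)
signals = rng.normal(size=(20, 256))       # stand-in seismograms
labels = rng.integers(0, 2, size=20)       # 0 = earthquake, 1 = explosion (synthetic)

X = np.array([arma_features(s) for s in signals])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000).fit(X, labels)
```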
Structural Study of the West Red Lake Area (Vadim Galkine)
This document provides a summary of a complex structural study of the West Red Lake area in northwestern Ontario, Canada. It includes:
1) An analysis of lineaments (faults and fractures) visible in aerial photos, satellite images, and topographic data at multiple scales between 100x100m and 1000x1000m. Over 80,000 lineaments were identified and categorized.
2) Contour maps showing densities of lineaments and their intersections, with higher densities indicating more permeability and potential for mineralization.
3) Comparison of known gold occurrences in the area to the spatial distribution of different lineament types, showing a positive correlation between main lineaments and occurrences.
4) Discussion of methods
Michael Farina presented on establishing new upper bounds for the k-distance domination numbers of grid graphs by generalizing an existing construction of dominating sets to k-distance dominating sets. Armando Grez examined a method for constructing fullerene patches with 4 pentagonal faces and produced an exact process for drawing them. Darleen Perez-Lavin partitioned the set of permutations with a peak set into subsets ending with an ascent or descent and provided formulas to enumerate these subsets for Coxeter groups of types B and D.
A Land Data Assimilation System Utilizing Low Frequency Passive Microwave Rem... (drboon)
Downscaling approaches have been reported as an appropriate solution for bridging the gap between global and smaller modelling scales. Downscaling on its own, however, is not wholly adequate for reproducing local phenomena, so in this paper we combine a physical downscaling method with data assimilation strategies to obtain physically consistent predictions of land surface conditions. Data assimilation has demonstrated that, by minimizing a cost function, a feasible solution can be obtained from imperfect models and observation data that include observation errors. We demonstrate that by assimilating lower-frequency passive microwave brightness temperature data using a validated theoretical radiative transfer model, we can obtain very good predictions that agree well with observed conditions.
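The cost-function minimisation mentioned here is, in its textbook 3D-Var form, J(x) = (x - xb)' B^-1 (x - xb) + (y - Hx)' R^-1 (y - Hx); a small synthetic sketch follows, with an assumed linear observation operator.

```python
import numpy as np
from scipy.optimize import minimize

n, m = 4, 3
xb = np.zeros(n)                     # background (model) state
H = np.eye(m, n)                     # linear observation operator (assumed)
y = np.array([0.5, -0.2, 0.1])       # observations (synthetic)
B_inv = np.eye(n) / 0.5**2           # inverse background error covariance
R_inv = np.eye(m) / 0.1**2           # inverse observation error covariance

def J(x):
    db, do = x - xb, y - H @ x
    return db @ B_inv @ db + do @ R_inv @ do

x_analysis = minimize(J, xb).x       # analysis state balancing model and data
```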
The document discusses common pitfalls in 3D seismic interpretation and provides recommendations to improve interpretations. It notes that interpreters often rely too heavily on workstation tools rather than thoughtful geological analysis, and fail to properly understand data defects, phase and polarity, resolution limits, and amplitude information. The document emphasizes the importance of integrating seismic data with well data on character, using autotracking tools appropriately, questioning attribute selections, and differentiating between horizon and windowed amplitudes.
Seismic Modeling ASEG 082001 Andrew LongAndrew Long
This document discusses tools for modeling elastic wave propagation to aid in seismic survey planning. It summarizes three main modeling techniques: recursive reflectivity methods, ray tracing methods, and full wavefield methods using finite-differencing. Ray tracing is useful for optimizing survey geometry but not reflectivity studies, while reflectivity and finite-difference methods model full wavefields and are better for amplitude studies like AVO. Integrating these modeling tools with real data and rock physics analysis allows comprehensive understanding of wave propagation for effective survey planning addressing all acquisition parameters and seismic phenomena.
This document describes a MATLAB application called FractalAnalyzer that is used to analyze earthquake clustering through multifractal seismicity analysis. It allows users to calculate fractal dimensions like correlation dimension and generalized fractal dimensions from earthquake data. These fractal dimensions can provide information about the heterogeneous and complex spatial and temporal clustering of earthquakes. The application includes tools for plotting graphs of the analysis and was demonstrated on a dataset from before the 2005 Kashmir earthquake.
Distance Metric Based Multi-Attribute Seismic Facies Classification to Identi...Pioneer Natural Resources
Conventional reservoirs benefit from a long scientific history that correlates successful plays to seismic measurements through depositional, tectonic, and digenetic models. Unconventional reservoirs are less well understood, however benefit from significantly denser well control. Thus, allowing us to establish statistical rather than model-based correlations between seismic data, geology, and successful completion strategies. One of the more commonly encountered correlation techniques is based on computer assisted pattern recognition. The pattern recognition techniques have found their niche in a plethora of applications ranging from flagging suspicious credit card purchase patterns to rewarding repeating online buying patterns. Classification of a given seismic response as having a “good” or “bad” pattern requires a “distance metric”. Distance metric “learning” uses past experiences (well performance) as training data to develop a distance metric. Alternative distance metrics have demonstrated significant value in the identification and classification of repeated or anomalous behaviors in public health, security, and marketing. In this paper we examine the value of three of these alternative distance metrics of 3D seismic attributes to the identification of sweet spots in a Barnett Shale play.
This document discusses the Space-Time Cube, a tool for visualizing, analyzing, and managing spatiotemporal data describing human movement and events. It examines use cases like evacuation routes after an earthquake and identifying flocking patterns in pedestrian trajectory data. The Space-Time Cube allows interactive exploration of movement data at different scales, as well as identifying relationships and patterns within spatiotemporal event data, like archaeological site locations. Key challenges with spatiotemporal data include differences in resolution, accuracy, and data types, ranging from dense tracking datasets to sparse event records, requiring appropriate analytical methods.
This document discusses using a genetic algorithm approach to find the shortest driving time on mobile navigation devices. It proposes using constant length chromosomes to encode the route finding problem. The genetic algorithm is tested on networks of different sizes. Mutation is found to contribute greatly to achieving optimal solutions by maintaining genetic diversity. The genetic algorithm approach can find good solutions in reasonable time for large, complex networks that deterministic methods cannot solve efficiently.
This document discusses spatial analysis and analysis tools. It begins by defining spatial analysis as techniques for analyzing spatial data where the results depend on object locations. It then describes 7 types of spatial analysis: spatial data analysis, spatial autocorrelation, spatial interpolation, spatial regression, spatial interaction, simulation and modelling, and multiple-point geostatistics. The document also discusses various analysis toolsets including map algebra, math tools, multi-variate tools, neighborhood tools, raster tools, reclassification tools, and solar radiation tools. It emphasizes that spatial analysis is useless without spatial infographics and visualization.
Remote Exploration Technique. Dr. V. GalkineVadim Galkine
Evaluation of the accumulated permeability field of the uppermost crust using analogue modeling and lineament analysis combination.
Method results in building a series of Exploration Target Maps for further ground exploration. Unique on the market. Low cost, fast, effective. Base Metals, Gold, Silver, Uranium, Oil and gas, Kimberlites.
This document discusses data structures for geographic and cartographic data. It notes that current data structures are characterized by: 1) being designed for input rather than use in programs, 2) storing different feature types in separate uncoordinated files, and 3) lacking information on neighboring entities. The concept of a "neighborhood function" is introduced to indicate an entity's relative location, which is important for analysis. Ongoing research includes the GEOGRAF and GDS systems, which involve manipulating data between input and use to address issues of flexibility, comparability, and topology.
This document discusses using k-means clustering to detect minerals from remote sensing images. It begins with an abstract describing using k-means clustering on hyperspectral images to segment and extract features to detect minerals like giacomo. It then provides background on remote sensing, k-means clustering algorithms, and describes the giacomo mineral deposit in Peru that contains silicon dioxide and titanium dioxide. It concludes with discussing using sobel edge detection as part of the mineral detection process from remote sensing images.
seismic, well log, completion, and production measurements. For interpreters, “segmentation” will usually mean focusing on a given stratigraphic formation or suite of formations. Seismic data lose their temporal and lateral resolution with depth, such that a given seismic facies changes its appearance, or is nonstationary, as we go deeper in the section. The number of potential facies also increases as we analyze larger vertical windows incorporating different depositional environments, making classification more difficult. For computer-assisted facies classification, “feature extraction” means attributes, be they simple measurements of amplitude and frequency; geometric attributes that measure reflector configurations; or more quantitative measurements of lithology, fractures, or geomechanical properties provided by prestack inversion and azimuthal anisotropy analysis. “Classification” assigns each voxel to one of a finite number of classes (also called clusters), each of which represents a seismic facies that may or may not correspond to a geologic facies. Finally, using validation data, the interpreter makes a “decision” that determines whether a given cluster represents a unique seismic facies, whether it should be lumped in with other clusters having a somewhat similar attribute expression, or whether it should be further subdivided, perhaps through the introduction of additional attributes.
Pattern recognition and classification of seismic features is fundamental to human-based interpretation, where our job may be as “simple” as identifying and picking horizons and faults, or more advanced, such as the delineation of channels, mass-transport complexes, carbonate buildups, or potential gas accumulations. The use of computer-assisted classification began soon after the development of seismic attributes in the 1970s (Balch, 1971; Taner et al., 1979), with the work by Sonneland (1983) and Justice et al. (1985) being two of the first. The k-means algorithm (Forgy, 1965; Jancey, 1966) was one of the earliest clustering algorithms developed; it was quickly adopted by service companies and is today common to almost all interpretation software packages. The k-means is an unsupervised learning algorithm, in that the interpreter provides no prior information other than the selection of attributes and the number of desired clusters.
Barnes and Laughlin (2002) review several unsupervised clustering techniques, including k-means, fuzzy clustering, and self-organizing maps (SOMs). Their primary finding was that the choice of attributes mattered more than the choice of clustering algorithm. Among the clustering algorithms, they favor SOM because there is a topologically ordered mapping of the clusters, with similar clusters lying adjacent to each other on a manifold and in the associated latent space. In our examples, a “manifold” is a deformed 2D surface that best fits the distribution of n attributes lying in an N-dimensional data space. The clusters are then mapped to a simpler 2D rectangular “latent” (Latin for “hidden”) space, upon which the interpreter can either interactively define clusters or map the projections onto a 2D color bar. A properly chosen latent space can help to identify data properties that are otherwise difficult to observe in the original input space. Coleou et al.’s (2003) seismic “waveform classification” algorithm is implemented using SOM, where the “attributes” are seismic amplitudes that lie on a suite of 16 phantom horizon slices. Each (x, y) location in the analysis window provides a 16D vector of amplitudes. When plotted one element after the other, the mean of each cluster in 16D space looks like a waveform. These waveforms lie along a 1D deformed string (the manifold) that lies in 16D space. This 1D string is then mapped to a 1D line (the latent space), which in turn is mapped against a 1D continuous color bar. The proximity of such waveforms to each other on the manifold and latent spaces results in similar seismic facies appearing as similar colors. Coleou et al. (2003) generalize their algorithm to attributes other than seismic amplitude, constructing vectors of dip magnitude, coherence, and reflector parallelism. Strecker and Uden (2002) are perhaps the first to use 2D manifolds and 2D latent spaces with geophysical data, using multidimensional attribute volumes to form N-dimensional vectors at each seismic sample point. Typical attributes included envelope, bandwidth, impedance, amplitude variation with offset (AVO) slope and intercept, dip magnitude, and coherence. These attributes were projected onto a 2D latent space and their results plotted against a 2D color table.

Figure 1. Classification as applied to the interpretation of seismic facies (modified from Duda et al., 2001).
Gao (2007) applies a 1D SOM to gray-level co-occurrence matrix (GLCM) texture attributes to map seismic facies offshore Angola. Overdefining the clusters with 256 prototype vectors, he then uses 3D visualization and his knowledge of the depositional environment to map the “natural” clusters. These natural clusters were then calibrated using well control, giving rise to what is called a posteriori supervision. Roy et al. (2013) build on these concepts and develop an SOM classification workflow of multiple seismic attributes computed over a deepwater depositional system. They calibrate the clusters a posteriori using classical principles of seismic stratigraphy on a subset of vertical slices through the seismic amplitude. A simple but very important innovation was to project the clusters onto a 2D nonlinear Sammon space (Sammon, 1969). This projection was then colored using a gradational 2D color scale like that of Matos et al. (2009), thus facilitating the interpretation. Roy et al. (2013) introduce a Euclidean distance measure to correlate predefined unsupervised clusters to average data vectors about interpreter-defined well-log facies.
Generative topographic mapping (GTM) is a more recent unsupervised classification innovation, providing a probabilistic representation of the data vectors in the latent space (Bishop et al., 1998). There has been very little work on the application of the GTM technique to seismic data and exploration problems. Wallet et al. (2009) are probably the first to apply the GTM technique to seismic data, using a suite of phantom horizon slices through a seismic amplitude volume to generate a waveform classification. Although SOM generates excellent images, Roy (2013) and Roy et al. (2014) find the introduction of well control to SOM classification to be somewhat limited, and instead apply GTM to a Mississippian tripolitic chert reservoir in midcontinent USA and a carbonate wash play in the Sierra Madre Oriental of Mexico. They find that GTM provides not only the most likely cluster associated with a given voxel, but also the probability that the voxel belongs to each of the clusters, providing a measure of confidence or risk in the prediction.
The k-means, SOM, and GTM are all unsupervised learning techniques, where the clustering is driven only by the choice of input attributes and the number of desired clusters. If we wish to teach the computer to mimic the facies identification previously chosen by a skilled interpreter, or to link seismic facies to electrofacies interpreted using wireline logs, we need to introduce “supervision,” or external control, to the clustering algorithm. The most popular means of supervised learning classification are based on artificial neural networks (ANNs). Meldahl et al. (1999) use seismic energy and coherence attributes coupled with interpreter control (picked seed points) to train a neural network to identify hydrocarbon chimneys. West et al. (2002) use a similar workflow, in which the objective is seismic facies analysis of a channel system and the input attributes are textures. Corradi et al. (2009) use GLCM textures and ANN, with controls based on wells and skilled interpretation of some key 2D vertical slices, to map sand, evaporite, and sealing versus nonsealing shale facies offshore west Africa.
The support vector machine (SVM, where the word “machine” is due to Turing’s [1950] mechanical decryption machine) is a more recent introduction (e.g., Kuzma and Rector, 2004, 2005, 2007; Li and Castagna, 2004; Zhao et al., 2005; Al-Anazi and Gates, 2010). Originating from maximum-margin classifiers, SVMs have gained great popularity for solving pattern classification and regression problems since the concept of a “soft margin” was first introduced by Cortes and Vapnik (1995). SVMs map the N-dimensional input data into a higher-dimensional latent (often called feature) space, where clusters can be linearly separated by hyperplanes. Detailed descriptions of SVMs can be found in Cortes and Vapnik (1995), Cristianini and Shawe-Taylor (2000), and Schölkopf and Smola (2002). Li and Castagna (2004) use SVM to discriminate alternative AVO responses, whereas Zhao et al. (2014) and Zhang et al. (2015) use a variation of SVM with mineralogy logs and seismic attributes to predict lithology and brittleness in a shale resource play.
We begin our paper by providing a summary of the more common clustering techniques used in seismic facies classification, emphasizing their similarities and differences. We start from the unsupervised learning k-means algorithm, progress through projections onto principal component hyperplanes, and end with projections onto SOM and GTM manifolds, which are topological spaces that resemble Euclidean space near each point. Next, we provide a summary of supervised learning techniques including ANNs and SVMs. Given these definitions, we apply each of these methods to identify seismic facies in the same data volume acquired in the Canterbury Basin of New Zealand. We conclude with a discussion of the advantages and limitations of each method and areas for future algorithm development and workflow refinement. At the very end, we provide an appendix containing some of the mathematical details to better quantify how each algorithm works.
Review of unsupervised learning classification techniques
Crossplotting
Crossplotting one or more attributes against each other is an interactive, and perhaps the most common, clustering technique. In its simplest implementation, one computes and then displays a 2D histogram of two attributes. In most software packages, the interpreter then identifies a cluster of interest and draws a polygon around it. Although several software packages allow crossplotting of up to three attributes, crossplotting more than three attributes quickly becomes intractable. One workflow to address this visualization limitation is to first project a high number of attributes onto the first two or three eigenvectors, and then crossplot the principal components. Principal components will be discussed later in the section on projection methods.
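As a minimal illustration of this workflow (the data below are synthetic, and the attribute names are placeholders rather than measurements from this study), a 2D attribute histogram can be computed and displayed as follows:

```python
# Sketch of a two-attribute crossplot as a 2D histogram (synthetic data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# A "background" facies and a smaller anomalous facies in attribute space.
coherence = np.concatenate([rng.normal(0.95, 0.02, 5000),
                            rng.normal(0.70, 0.08, 1000)])
envelope = np.concatenate([rng.normal(1.0, 0.3, 5000),
                           rng.normal(2.5, 0.5, 1000)])

plt.hist2d(coherence, envelope, bins=100, cmap="viridis")
plt.xlabel("Coherence")
plt.ylabel("Envelope")
plt.colorbar(label="Voxel count")
plt.show()
```

In an interpretation package, the polygon the interpreter draws around a cluster on this histogram defines the class; here the histogram alone illustrates why more than two or three axes become hard to visualize.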
k-means classification
The k-means algorithm (MacQueen, 1967) is perhaps the simplest clustering algorithm and is widely available in commercial interpretation software packages. The method is summarized in the cartoons shown in Figure 2. One drawback of the method is that the interpreter needs to define how many clusters reside in the data. Once the number of clusters is defined, the cluster means, or centers, are initialized either on a grid or randomly to begin the iteration loop. Because attributes have different units of measurement (e.g., hertz for peak frequency, 1/km for curvature, and millivolts for the root-mean-square amplitude), the distance of each data point to the current means is computed by scaling the data by the inverse of the covariance matrix, giving us the “Mahalanobis distance” (see Appendix A). Each data point is then assigned to the cluster to whose mean it is closest. Once assigned, new cluster means are computed from the newly assigned data clusters and the process is repeated. If there are Q clusters, the process will converge in about Q iterations.
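A minimal sketch of this iteration loop, using synthetic attribute vectors and the Mahalanobis scaling described above (an illustration only, not one of the commercial implementations cited), is:

```python
# k-means with Mahalanobis distance: X is (n_samples, n_attributes), Q clusters.
import numpy as np

def kmeans_mahalanobis(X, Q, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))  # inverse covariance scaling
    means = X[rng.choice(len(X), Q, replace=False)]   # random initial cluster means
    for _ in range(n_iter):
        d = X[:, None, :] - means[None, :, :]                 # (n, Q, N) differences
        dist2 = np.einsum("nqi,ij,nqj->nq", d, cov_inv, d)    # squared Mahalanobis
        labels = dist2.argmin(axis=1)                         # assign to nearest mean
        means = np.array([X[labels == q].mean(axis=0) if np.any(labels == q)
                          else means[q] for q in range(Q)])   # update (keep empty means)
    return labels, means

X = np.random.default_rng(1).normal(size=(1000, 5))  # synthetic attribute vectors
labels, means = kmeans_mahalanobis(X, Q=4)
```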
The k-means is fast and easy to implement. Unfortunately, the clustering has no structure, such that there is no relationship between the cluster numbering (and therefore coloring) and the proximity of one cluster to another. This lack of organization can result in similar facies appearing in totally different colors, confusing the interpretation. Tuning the number of clusters to force similar facies into the same cluster is a somewhat tedious procedure that also decreases the resolution of the facies map.
Projection techniques
Although they are not grouped this way in the pattern recognition literature, because this is a tutorial we will lump the following methods, principal component analysis (PCA), SOM, and generative topographic mapping, together and call them projection techniques. Projection techniques project data residing in a higher-dimensional space (for example, a 5D space defined by five attributes) onto a lower-dimensional space (for example, a 2D plane or deformed 2D surface). Once projected, the data can be clustered in that space by the algorithm (such as SOM) or interactively clustered by the interpreter by drawing polygons (routine for PCA, and our preferred analysis technique for SOM and GTM).
Principal component analysis
PCA is widely used to reduce the redundancy and excess dimensionality of the input attribute data. Such reduction is based on the assumption that most of the signal is preserved in the first few principal components (eigenvectors), whereas the last principal components contain uncorrelated noise. In this tutorial, we will use PCA as the first iteration of the SOM and GTM algorithms. Many workers use PCA to reduce redundant attributes into “meta attributes” to simplify the computation. The first eigenvector is a vector in N-dimensional attribute space that best represents the attribute patterns in the data. Crosscorrelating (projecting) the N-dimensional data against the first eigenvector at each voxel gives us the first principal component volume. If we scale the first eigenvector by the first principal component and subtract it from the original data vector, we obtain a residual data vector. The second eigenvector is the vector that best represents the attribute patterns in this residual. Crosscorrelating (projecting) the second eigenvector against either the original data vector or the residual data vector at each voxel gives us the second principal component volume. This process continues for all N dimensions, resulting in N eigenvectors and N principal components. In this paper, we will limit ourselves to the first two eigenvectors, which thus define the plane that least-squares fits the N-dimensional attribute data. Figure 3c shows a numerical example of the first two principal components defining a plane in a 3D data space.

Figure 2. Cartoon illustration of a k-means classification of three clusters. (a) Select three random or equally spaced, but distinct, seed points, which serve as the initial estimate of the vector means of each cluster. Next, compute the Mahalanobis distance between each data vector and each cluster mean. Then, color code or otherwise label each data vector to belong to the cluster that has the smallest Mahalanobis distance. (b) Recompute the means of each cluster from the previously defined data vectors. (c) Recalculate the Mahalanobis distance from each vector to the new cluster means. Assign each vector to the cluster that has the smallest distance. (d) The process continues until the means converge to their final locations. If we now add a new (yellow) point, we will use a Bayesian classifier to determine into which cluster it falls (figure courtesy S. Pickford).
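The eigenvector and residual projections described above can be sketched in a few lines of Python (synthetic attribute matrix; an illustration of the procedure, not our production code):

```python
# PCA via eigendecomposition of the attribute covariance matrix.
import numpy as np

X = np.random.default_rng(0).normal(size=(10_000, 5))  # (n_voxels, N attributes)
Xc = X - X.mean(axis=0)                                # center each attribute
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)                 # eigh returns ascending order
order = np.argsort(eigvals)[::-1]                      # sort by decreasing variance
e1, e2 = eigvecs[:, order[0]], eigvecs[:, order[1]]

pc1 = Xc @ e1                          # first principal component at each voxel
residual = Xc - np.outer(pc1, e1)      # subtract the part explained by e1
pc2 = residual @ e2                    # second principal component
```

Because the eigenvectors are orthogonal, projecting the residual onto the second eigenvector gives the same result as projecting the original data, consistent with the description above.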
Self-organizing maps
Although many workers (e.g., Coleou et al., 2003) describe the SOM as a type of neural network, for the purposes of this tutorial, we prefer to describe the SOM as a manifold projection technique (Kohonen, 1982). SOM, originally developed for gene pattern recognition, is one of the most popular classification techniques, and it has been implemented in at least four commercial software packages for seismic facies classification. The major advantage of SOM over k-means is that the clusters residing on the deformed manifold in N-dimensional data space are directly mapped to a rectilinear or otherwise regularly gridded latent space. We provide a brief summary of the mathematical formulations of the SOM implementation used in this study in Appendix A.
Although SOM is one of the most popular classification techniques, there are several limitations to the SOM algorithm. First, the choice of neighborhood function at each iteration is subjective, with different choices resulting in different solutions. Second, the absence of a quantitative error measure means we do not know whether the solution has converged to an acceptable level, limiting our confidence in the resulting analysis. Third, although we find the most likely cluster for a given data vector, we have no quantitative measure of confidence in the facies classification, and no indication of whether the vector could be nearly as well represented by other facies.
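A toy SOM training loop under these ideas (synthetic data and an ad hoc neighborhood schedule chosen for illustration; the exact formulation used in this study is given in Appendix A) might look like this:

```python
# Minimal SOM: a 2D grid of prototype vectors is deformed to fit N-D data.
import numpy as np

def train_som(X, grid=(10, 10), n_iter=5000, sigma0=3.0, lr0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, N = X.shape
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]                 # latent-space grid
    coords = np.column_stack([gy.ravel(), gx.ravel()]).astype(float)
    protos = X[rng.choice(n, grid[0] * grid[1], replace=False)].copy()
    for t in range(n_iter):
        frac = t / n_iter
        sigma, lr = sigma0 * (1 - frac) + 0.5, lr0 * (1 - frac)
        x = X[rng.integers(n)]                              # random training vector
        bmu = np.argmin(((protos - x) ** 2).sum(axis=1))    # best-matching unit
        # Gaussian neighborhood measured in the LATENT space, not data space:
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
        protos += lr * h[:, None] * (x - protos)            # pull neighbors toward x
    return protos, coords

X = np.random.default_rng(1).normal(size=(5000, 5))
protos, coords = train_som(X)   # protos: the deformed manifold; coords: latent grid
```

The neighborhood weighting in latent-space coordinates is what produces the topologically ordered mapping that k-means lacks.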
Generative topographic mapping
GTM is a nonlinear dimensionality-reduction technique that provides a probabilistic representation of the data vectors on a lower L-dimensional deformed manifold that is in turn mapped to an L-dimensional latent space. Whereas SOM seeks the node or prototype vector that is closest to the randomly chosen vector from the training or input data set, in GTM, each of the nodes lying on the lower-dimensional manifold provides some mathematical support to the data and is considered to be to some degree “responsible” for the data vector (Figure 4). The level of support, or “responsibility,” is modeled with a constrained mixture of Gaussians. The model parameters are estimated by maximum likelihood using the expectation-maximization (EM) algorithm (Bishop et al., 1998).
Because GTM theory is deeply rooted in probability, it can also be used in modern risk analysis. We can extend the GTM application in seismic exploration by projecting the mean posterior probabilities of a particular window of multiattribute data (i.e., about a producing well) onto the 2D latent space. By projecting the data vector at any given voxel onto the latent space, we obtain a probability estimate of whether it falls into the same category (Roy et al., 2014). We thus have a probabilistic estimate of how similar any data vector is to the attribute behavior (and hence facies) about a producing or nonproducing well of interest.
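To make the notion of responsibility concrete, the following sketch (our illustration, not the Appendix A implementation) computes the posterior probability that each Gaussian centroid m_k on the manifold is responsible for a data vector, assuming equal priors and a shared inverse variance beta:

```python
# GTM-style "responsibilities": posterior probability of each manifold node.
import numpy as np

def responsibilities(A, M, beta):
    """A: (n, N) data vectors; M: (K, N) Gaussian centroids; beta: inverse variance."""
    d2 = ((A[:, None, :] - M[None, :, :]) ** 2).sum(axis=2)  # (n, K) squared distances
    log_p = -0.5 * beta * d2                                 # log unnormalized Gaussians
    log_p -= log_p.max(axis=1, keepdims=True)                # numerical stability
    R = np.exp(log_p)
    return R / R.sum(axis=1, keepdims=True)                  # Bayes' rule: rows sum to 1

A = np.random.default_rng(0).normal(size=(100, 5))   # synthetic data vectors
M = np.random.default_rng(1).normal(size=(64, 5))    # e.g., centroids of an 8x8 grid
R = responsibilities(A, M, beta=1.0)
```

In full GTM, the centroids M and beta are themselves updated by the EM algorithm; this sketch shows only the E-step that yields the probabilistic facies assignment discussed above.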
Other unsupervised learning methods
There are many other unsupervised learning techniques, several of which were evaluated by Barnes and Laughlin (2002). We do not currently have access to software to apply independent component analysis (ICA) and Gaussian mixture models (GMMs) to our seismic facies classification problem, but we mention them as possible candidates.

Figure 3. (a) A distribution of data points in 3D attribute space. The statistics of this distribution can be defined by the covariance matrix. (b) k-means will cluster data into a user-defined number of distributions (four in this example) based on the Mahalanobis distance measure. (c) The plane that best fits these data is defined by the first two eigenvectors of the covariance matrix. The projection of the 3D data onto this plane provides the first two principal components of the data, as well as the initial model for our SOM and GTM algorithms. (d) SOM and GTM deform the initial 2D plane into a 2D manifold that better fits the data. Each point on the deformed 2D manifold is in turn mapped to a 2D rectangular latent space. Clusters are color coded or interactively defined on this latent space.
Independent component analysis
Like PCA, ICA is a statistical technique used to project a set of N-dimensional vectors onto a smaller L-dimensional space. Unlike PCA, which is based on Gaussian statistics, whereby the first eigenvector best represents the variance in the multidimensional data, ICA attempts to project data onto subspaces that result in non-Gaussian distributions, which are then easier to separate and visualize. Honório et al. (2014) successfully apply ICA to multiple spectral components to delineate the architectural elements of an offshore Brazil carbonate terrain. PCA and ICA are commonly used to reduce a redundant set of attributes to form a smaller set of independent meta-attributes (e.g., Gao, 2007).
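Because we lack seismic ICA software, the following sketch simply illustrates the idea on synthetic mixtures, using scikit-learn's FastICA as a generic stand-in:

```python
# ICA on synthetic mixtures: recover independent, non-Gaussian components.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
S = rng.laplace(size=(10_000, 3))        # non-Gaussian "source" signals
mixing = rng.normal(size=(3, 6))
X = S @ mixing                           # six redundant, mixed "attributes"

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(X)        # (n_voxels, 3) independent meta-attributes
```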
Gaussian mixture models
GMMs are parametric models of probability distributions, which can provide greater flexibility and precision in modeling than traditional unsupervised clustering algorithms. Lubo et al. (2014) apply this technique to a suite of well logs acquired over Horseshoe Atoll, west Texas, to generate different lithologies. These GMM lithologies are then used to calibrate 3D seismic prestack inversion results to generate a 3D rock-property model. At present, we do not know of any GMM algorithms applied to seismic facies classification using seismic attributes as input data.
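As an illustration of GMM clustering (scikit-learn's GaussianMixture on synthetic vectors; not an implementation from the references above), note that a GMM returns both a hard label and a posterior probability per cluster:

```python
# Gaussian mixture clustering with per-cluster posterior probabilities.
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.default_rng(0).normal(size=(5000, 5))   # synthetic attribute vectors
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(X)
labels = gmm.predict(X)        # most likely cluster per sample
probs = gmm.predict_proba(X)   # posterior probability of each cluster per sample
```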
Review of supervised learning classification techniques
Artificial neural networks
ANNs can be used in unsupervised and supervised multiattribute analysis (van der Baan and Jutten, 2000). The multilayer perceptron and the radial basis function (RBF) network are two popular types of neural networks used in supervised learning. The probabilistic neural network (PNN), which also uses RBFs, forms the basis of additional neural network geophysical applications. In terms of network architecture, the supervised algorithms are feed-forward networks. In contrast, the unsupervised SOM algorithm described earlier is a recurrent (or feed-backward) network. An advantage of feed-forward networks over SOMs is the ability to predict both continuous values (such as porosity) and discrete values (such as the facies class number). Applications of neural networks can be found in seismic inversion (Röth and Tarantola, 1994), well-log prediction from other logs (Huang et al., 1996; Lim, 2005), waveform recognition (Murat and Rudman, 1992), seismic facies analysis (West et al., 2002), and reservoir property prediction using seismic attributes (Yu et al., 2008; Zhao and Ramachandran, 2013). For the last application listed above, however, achieving a convincing prediction result can be challenging due to the resolution difference between seismic data and well logs, the structural and lithologic variation at interwell points, and the highly nonlinear relation between the two domains. In this case, geostatistical methods, such as Bayesian analysis, can be used jointly to provide a probability index, giving interpreters an estimate of how much confidence they should have in the prediction.

Figure 4. (a) The K grid points u_k defined on an L-dimensional latent space grid are mapped to K grid points m_k lying on a non-Euclidean manifold in N-dimensional data space. In this paper, L = 2, and the latent space will be mapped against a 2D color bar. The Gaussian mapping functions are initialized to be equally spaced on the plane defined by the first two eigenvectors. (b) Schematic showing the training of the latent space grid points to a data vector a_j lying near the GTM manifold using an EM algorithm. The posterior probability of each data vector is calculated for all Gaussian centroid points m_k and assigned to the respective latent space grid points u_k. Grid points with high probabilities are displayed in bright colors. All variables are discussed in Appendix A.
ANNs are routinely used in the exploration and production industry. ANN provides a means to correlate well measurements, such as gamma-ray logs, to seismic attributes (e.g., Verma et al., 2012), where the underlying relationship is a function of rock properties, the depositional environment, and diagenetic alteration. Although ANNs have produced reliable classifications in many applications, defects such as convergence to local minima and difficulty in parameterization are not negligible. In industrial and scientific applications, we prefer a classifier that is consistent and robust once the training vectors and model parameters have been determined. This leads to a more recent supervised learning technique developed in the late 20th century: the SVM.
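As a hedged illustration of the supervised feed-forward workflow (scikit-learn's MLPClassifier stands in for the networks cited above; the data, labels, and architecture are synthetic placeholders):

```python
# A small feed-forward network trained on interpreter-labeled attribute vectors.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 5))                 # attributes at picked seed points
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # stand-in facies labels

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=0)
net.fit(X_train, y_train)
facies = net.predict(rng.normal(size=(10_000, 5)))  # classify the remaining voxels
```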
Support vector machines
The basic idea of SVMs is straightforward. First, we transform the training data vectors into a still higher-dimensional “feature” space using a nonlinear mapping. Then, we find a hyperplane in this feature space that separates the data into two classes with an optimal “margin.” The margin is defined to be the smallest distance between the separating hyperplane (commonly called a decision boundary) and the training vectors (Bishop, 2006) (Figure 5). An optimal margin balances two criteria: maximizing the margin, thereby giving the classifier the best generalization, and minimizing the number of misclassified training vectors if the training data are not linearly separable. The margin can also be described as the distance between the decision boundary and two hyperplanes defined by the data vectors that have the smallest distance to the decision boundary. These two hyperplanes are called the plus plane and the minus plane. The vectors that lie exactly on these two hyperplanes mathematically define, or “support,” them and are called support vectors. Tong and Koller (2001) show that the decision boundary depends solely on the support vectors, giving rise to the name SVM.
SVMs can be used in either a supervised or a semisupervised learning mode. In contrast to supervised learning, semisupervised learning defines a learning process that uses both labeled and unlabeled vectors. When there are a limited number of interpreter-classified data vectors, the classifier may not perform well due to insufficient training. In semisupervised training, some of the nearby unclassified data vectors are automatically selected and classified based on a distance measurement during the training step, as in an unsupervised learning process. These vectors are then used as additional training vectors (Figure 6), resulting in a classifier that performs better for the specific problem, although some generalization power is sacrificed by using unlabeled data. In this tutorial, we focus on SVM; however, the future of semisupervised SVM in geophysical applications is quite promising.
Figure 5. Cartoon of a linear SVM classifier separating black from white data vectors. The two dashed lines are the margins defined by the support vector data points. The red decision boundary falls midway between the margins, separating the two clusters. If the data clusters overlap, no margins can be drawn. In this situation, the data vectors will be mapped to a higher-dimensional space, where they can be separated.
Figure 6. Cartoon describing semisupervised learning. Blue squares and red triangles indicate two different interpreter-defined classes. Black dots indicate unclassified points. In semisupervised learning, unclassified data vectors 1 and 2 are classified as class “A,” whereas data vector 3 is classified as class “B” during the training process.
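The following minimal sketch, an illustration rather than the implementation used in this paper, trains a soft-margin SVM with a Gaussian (RBF) kernel on synthetic two-attribute data that are not linearly separable in the input space (scikit-learn's SVC):

```python
# Soft-margin SVM with an RBF kernel on a linearly inseparable toy problem.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)  # circular class boundary

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)  # C controls the soft margin
print(clf.n_support_)   # number of support vectors defining the margins per class
```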
Proximal support vector machines
The proximal support vector machine (PSVM) (Fung and Mangasarian, 2001, 2005) is a recent variant of SVM, which, instead of looking for a separating plane directly, builds two parallel planes that approximate the two data classes; the decision boundary then falls between these two planes (Figure 7). Other researchers have found that PSVM provides classification correctness comparable with standard SVM but at considerable computational savings (Fung and Mangasarian, 2001, 2005; Mangasarian and Wild, 2006). In this tutorial, we use PSVM as our implementation of SVM. Details on the PSVM algorithm are provided in Appendix A.
We may face problems in seismic interpretation that
are linearly inseparable in the original input multidi-
mensional attribute space. In SVM, we map the data
vectors into a higher-dimensional space where they
become linearly separable (Figure 8), although the in-
crease in dimensionality may result in significantly in-
creased computational cost. Instead of using an
explicit mapping function to map input data into a
higher-dimensional space, PSVM achieves the same
goal by manipulating a kernel function in the input
attribute space. In our implementation, we use a Gaus-
sian kernel function, but in principle, many other func-
tions can be used (Shawe-Taylor and Cristianini, 2004).
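The Gaussian kernel used here is the one written out in equation A-24 of the appendix. A minimal NumPy sketch of the kernel matrix computation (the value of σ and the data are arbitrary assumptions):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=0.1):
    """K_ij = exp(-sigma * ||A_i - B_j||^2) for row vectors of A and B,
    computed without an explicit mapping to the higher-dimensional space."""
    sq = (np.sum(A**2, axis=1)[:, None]
          - 2.0 * A @ B.T
          + np.sum(B**2, axis=1)[None, :])     # squared pairwise distances
    return np.exp(-sigma * sq)

# example: 5 four-attribute vectors against themselves
A = np.random.rand(5, 4)
K = gaussian_kernel(A, A)   # (5, 5) matrix with ones on the diagonal
```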
SVM can be used either as a classifier or as a regres-
sion operator. Used as a regression operator, SVM is
capable of predicting petrophysical properties, such
as porosity (Wong et al., 2005), VP, VS, and density
(Kuzma and Rector, 2004), as well as permeability
(Al-Anazi and Gates, 2010; Nazari et al., 2011). In all
such applications, SVM shows comparable or superior
performance to neural networks with respect to pre-
diction error and training cost. When used as a classi-
fier, SVM is suitable for predicting lithofacies (Al-Anazi
and Gates, 2010; Torres and Reveron, 2013; Wang et al.,
2014; Zhao et al., 2014) or pseudo rock properties
(Zhang et al., 2015), either from well-log data, core data,
or seismic attributes.
Geologic setting
In this tutorial, we used the Waka 3D seismic survey
acquired over the Canterbury Basin, offshore New
Zealand, generously made public by New Zealand
Petroleum and Minerals. Readers can request this data
set through their website for research purposes. Fig-
ure 9 shows the location of this survey, where the
red rectangle corresponds to time slices shown in sub-
sequent figures. The study area lies on the transition
zone of the continental slope and rise, with an abundance of
paleocanyons and turbidite deposits of Cretaceous
and Tertiary ages. These sediments are deposited in
a single, tectonically driven transgressive-regressive
cycle (Uruski, 2010). Because the basin is a recent and
underexplored prospect, publicly available comprehensive
studies of the Canterbury Basin are somewhat limited.
The modern seafloor canyons shown in Figure 9 are
good analogs of the deeper paleocanyons illuminated
by the 3D seismic amplitude and attribute data.
Attribute selection
In their comparison of alternative unsupervised
learning techniques, Barnes and Laughlin (2002) con-
Figure 7. (a) Cartoon showing a two-class PSVM in 2D space. Classes A and B are approximated by two parallel lines that have been pushed as far apart as possible, forming the cluster "margins." The red decision boundary lies midway between the two margins. Maximizing the margin is equivalent to minimizing $\frac{1}{2}\left(\omega^{T}\omega + \gamma^{2}\right)$. (b) A two-class PSVM in 3D space. In this case, the decision boundary and margins are 2D planes.
clude that the appropriate choice of attributes was the
most critical component of computer-assisted seismic
facies identification. Although interpreters are skilled
at identifying facies, such recognition is often subcon-
scious and hard to define (see Eagleman’s [2012] dis-
cussion on differentiating male from female chicks
and identifying military aircraft from silhouettes). In su-
pervised learning, the software does some of the work
during the training process, although we must always
be wary of false correlations, if we provide too many
attributes (Kalkomey, 1997). For the prediction of con-
tinuous data, such as porosity, Russell (1997) suggests
that one begin with exploratory data analysis, where
one crosscorrelates a candidate attribute with the de-
sired property at the well. Such crosscorrelation does
not work well when trying to identify seismic facies,
Figure 9. A map showing the location of the
3D seismic survey acquired over the Canter-
bury Basin, offshore New Zealand. The black
rectangle denotes the limits of the Waka 3D
survey, whereas the smaller red rectangle de-
notes the part of the survey shown in sub-
sequent figures. The colors represent the
relative depth of the current seafloor, warm
being shallower, and cold being deeper. Cur-
rent seafloor canyons are delineated in this map; they are good analogs for the paleocanyons of Cretaceous and Tertiary ages (modified from Mitchell and Neil, 2012).
Figure 8. Cartoon showing how an SVM can map a linearly inseparable problem into a higher-dimensional space, in which
the classes can be separated. (a) Circular classes A and B in a 2D space cannot be separated by a linear decision-boundary (line). (b) Map-
ping the same data into a higher 3D feature space using the given projection. This transformation allows the two classes to be
separated by the green plane.
Figure 10. Time slice at t = 1.88 s through
the seismic amplitude volume. White arrows
indicate potential channel/canyon features.
The yellow arrow indicates a high-amplitude
feature. Red arrows indicate a relatively
low-energy, gently dipping area. AA′ denotes
the cross section shown in Figure 14.
Table 1. Attribute expressions of seismic facies.

| Facies | Appearance to interpreter | Attribute expression |
|---|---|---|
| Levee | Structurally high | Stronger dome- or ridge-shaped structural components |
| | Locally continuous | Higher GLCM homogeneity and lower GLCM entropy |
| | Higher amplitude | Dome- or ridge-shaped component |
| | Possibly thicker | Lower peak spectral frequency |
| Channel thalwegs | Shale filled with negative compaction | Stronger bowl- or valley-shaped structural components and higher peak spectral frequency |
| | Sand filled with positive compaction | Stronger dome- or ridge-shaped structural components and lower peak spectral frequency |
| Channel flanks | Onlap onto incisement and canyon edges | Higher reflector convergence magnitude |
| Gas-charged sands | High amplitude and continuous reflections | Higher GLCM homogeneity, lower GLCM entropy, and high peak magnitude |
| Incised floodplain | Erosional truncation | Higher reflector convergence magnitude and higher curvedness |
| Floodplain | Lower amplitude | Lower spectral magnitude |
| | Higher frequency | Higher peak spectral frequency |
| | Continuous | Higher GLCM homogeneity and lower GLCM entropy |
| | Near-planar events | Lower amplitude structural shape components and lower reflector convergence magnitude |
| Slumps | Chaotic reflectivity | Higher reflector convergence magnitude, higher spectral frequency, lower GLCM homogeneity, and higher GLCM entropy |
which are labeled with an integer number or alphanu-
meric name.
Table 1 summarizes how we four interpreters per-
ceive each of the seismic facies of interest. Once we
enumerate the seismic expression, the quantification
using attribute expression is relatively straightforward.
In general, amplitude and frequency attributes are lith-
ology indicators and may provide direct hydrocarbon
detection in conventional reservoirs. Geometric attrib-
utes delineate reflector morphology, such as dip, curva-
ture, rotation, and convergence, whereas statistical and
texture attributes provide information about data distri-
bution that quantifies subtle patterns that are hard to
define (Chopra and Marfurt, 2007). Attributes such as
coherence provide images of the edges of seismic facies
rather than a measure of the facies themselves,
although slumps often appear as a suite of closely
spaced faults separating rotated fault blocks. Finally,
what we see as interpreters and what our clustering
algorithms see can be quite different. Although we
may see a slump feature as exhibiting a high number
of faults per kilometer, our clustering algorithms are
applied voxel by voxel and see only the local behav-
ior. Extending the clustering to see such large-scale
textures requires the development of new texture
attributes.
The number of attributes should be as small as pos-
sible to discriminate the facies of interest, and each
attribute should be mathematically independent from
the others. Although it may be fairly easy to represent
three attributes with a deformed 2D manifold, increas-
ing the dimensionality results in increased deformation,
such that our manifold may fold on itself or may not
accurately represent the increased data variability. Be-
cause the Waka 3D survey is new to all four authors, we
test numerous attributes that we think may highlight dif-
ferent facies in the turbidite system. Among these attrib-
utes, we find the shape index to be good for visual
classification, but it dominates the unsupervised classi-
fications with valley and ridge features across the sur-
vey. After such analysis, we chose four attributes that
are mathematically independent but should be coupled
through the underlying geology: peak spectral frequency,
peak spectral magnitude, GLCM homogeneity, and
curvedness. These form the input to our classifiers. The peak
spectral frequency and peak spectral magnitude form
an attribute pair that crudely represents the spectral re-
sponse. The peak frequency of spectrally whitened data
is sensitive to tuning thickness, whereas the peak mag-
nitude is a function of the tuning thickness and the
impedance contrast. GLCM homogeneity is a texture
attribute that has a high value for adjacent traces with
Figure 11. Time slice at t = 1.88 s through
peak spectral frequency corendered with
peak spectral magnitude that emphasizes
the relative thickness and reflectivity of the
turbidite system and surrounding slope fan
sediments into which it was incised. The
two attributes are computed using a continu-
ous wavelet transform algorithm. The edges of
the channels are delineated by Sobel filter
similarity.
similar (high or low) amplitudes and measures the
continuity of a seismic facies. Curvedness defines the
magnitude of reflector structural or stratigraphic defor-
mation, with dome-, ridge-, saddle-, valley-, and bowl-
shaped features exhibiting high curvedness and planar
features exhibiting zero curvedness.
Figure 10 shows a time slice at t = 1.88 s through the
seismic amplitude volume, on which we identify chan-
nels (white arrows), high-amplitude deposits (yellow ar-
rows), and slope fans (red arrows). Figure 11 shows an
equivalent time slice through the peak spectral fre-
quency corendered with the peak spectral magnitude
that emphasizes the relative thickness and reflectivity
of the turbidite system, as well as surrounding slope
fan sediments into which it was incised. The edges of
the channels are delineated by Sobel filter similarity.
We show equivalent time slices through (Figure 12)
GLCM homogeneity, and (Figure 13) corendered shape
index and curvedness. In Figure 14, we show a repre-
sentative vertical slice at line AA′ in Figure 10 cutting
through the channels through (Figure 14a) seismic am-
plitude, (Figure 14b) seismic amplitude corendered
with peak spectral magnitude/peak spectral frequency,
(Figure 14c) seismic amplitude corendered with GLCM
homogeneity, and (Figure 14d) seismic amplitude cor-
endered shape index and curvedness. White arrows in-
dicate incised valleys, yellow arrows indicate high-
amplitude deposits, and red arrows indicate a slope
fan. We note that several of the incised valleys are vis-
ible at time slice t = 1.88 s.
In a conventional interpretation workflow, the geo-
scientist would examine each of these attribute images
and integrate them within a depositional framework.
Such interpretation takes time and may be impractical
for extremely large data volumes. In contrast, in seismic
facies classification, the computer either attempts to
classify what it sees as distinct seismic facies (in unsu-
pervised learning) or attempts to emulate the inter-
preter’s classification made on a finite number of
vertical sections, time, and/or horizon slices and apply
the same classification to the full 3D volume (in super-
vised learning). In both cases, the interpreter needs to
validate the final classification to determine whether the
clusters represent seismic facies of interest. In our example, we
will use Sobel filter similarity to separate the facies,
and we then evaluate how they fit within our under-
standing of a turbidite system.
Application
Given these four attributes, we now construct 4D
attribute vectors as input to the previously described
Figure 12. Time slice at t = 1.88 s through
the GLCM homogeneity attribute corendered
with Sobel filter similarity. Bright colors high-
light areas with potential fan sand deposits.
classification algorithms. To better illustrate the perfor-
mance of each algorithm, we summarize the data size,
number of computational processors, and runtime in
Table 2. All the algorithms are developed by the authors
except ANN, which is implemented using the MATLAB
toolbox.
Figure 13. Time slice at t = 1.88 s through
the corendered shape index, curvedness,
and Sobel filter similarity. The shape index
highlights incisement, channel flanks, and
levees, providing an excellent image for
interactive interpreter-driven classification.
However, the shape index dominates the un-
supervised classifications, highlighting valley
and ridge features, and minimizing more pla-
nar features of interest in the survey.
Figure 14. Vertical slices along line AA′ (location shown in Figure 10) through (a) seismic amplitude, (b) seismic amplitude
corendered with peak spectral magnitude and peak spectral frequency, (c) seismic amplitude corendered with GLCM homo-
geneity, and (d) seismic amplitude corendered with shape index and curvedness. White arrows indicate incised channel and can-
yon features. The yellow arrow indicates a high-amplitude reflector. Red arrows indicate relatively low-amplitude, gently dipping
areas.
We begin with k-means. As previously discussed,
a limitation of k-means is the lack of any structure to the
cluster number selection process. We illustrate this
limitation by computing k-means with 16 (Figure 15)
and 256 (Figure 16) clusters. In Figure 15, we can iden-
tify high-amplitude overbank deposits (yellow arrows),
channels (white arrows), and slope fan deposits (red
arrows). A main limitation of k-means is that there is
no structure linking the clusters, which leads to a some-
what random choice of color assignment to clusters.
This problem becomes more serious when more clus-
ters are selected: The result with 256 clusters (Fig-
ure 16) is so chaotic that we can barely separate the
overbank high-amplitude deposits (yellow arrows)
from the slope fan deposits (red arrows) that were easily
separable in Figure 15. For this reason, modern k-means
Table 2. Classification settings and runtimes.

| Algorithm | Number of classes | MPI processors (4) | Training samples | Total samples | Training runtime (s) | Application runtime (s) | Total runtime (s) |
|---|---|---|---|---|---|---|---|
| k-means | 16 | 50 | 809,600 | 101,200,000 | 65 | 20 | 85 |
| k-means | 256 | 50 | 809,600 | 101,200,000 | 1060 | 70 | 1130 |
| SOM | 256 | 1 | 809,600 | 101,200,000 | 4125 | 6035 | 10,160 |
| GTM | — | 50 | 809,600 | 101,200,000 | 9582 | 1025 | 10,607 |
| ANN (5) | 4 | 1 | 437 | 101,200,000 | 2 | 304 | 306 |
| SVM | 4 | 50 | 437 | 101,200,000 | 24 | 12,092 | 12,116 |

(4) SOM is not run under MPI in our implementation. ANN is run using MATLAB and is not under MPI. The other three are run under MPI when applying the model to the entire data set.
(5) ANN is implemented using the MATLAB toolbox.
Figure 15. Time slice at t = 1.88 s through a
k-means classification volume with K = 16.
White arrows indicate channel-like features.
Yellow arrows indicate high-amplitude over-
bank deposits. Red arrows indicate possible
slope fans. The edges of the channels are
delineated by Sobel filter similarity.
Figure 16. Time slice at t = 1.88 s through a k-means
classification volume with K = 256. The classification
result follows the same pattern as K = 16, but it is
more chaotic because the classes are computed inde-
pendently and are not constrained to fall on a lower
dimensional manifold. Note the similarity between
clusters of high-amplitude overbank (yellow arrows)
and slope fan deposits (red arrows), which were sepa-
rable in Figure 15.
Figure 17. Time slice at t = 1.88 s of the first two
principal components plotted against a 2D color
bar. These two principal components serve as
the initial model for the SOM and GTM images that
follow. With each iteration, the SOM and GTM
manifolds will deform to better fit the natural clus-
ters in the input data.
Figure 19. Time slice at t = 1.88 s through cross-
plotting GTM projections 1 and 2 using a 2D color
bar. White arrows indicate channel-like features,
yellow arrows indicate overbank deposits, and
red arrows indicate slope fan deposits. The blue
arrow indicates a braided channel system that
can be seen on PCA but cannot be identified from
k-means or SOM classification maps. The color in-
dicates the location of the mean probability of
each data vector mapped into the 2D latent space.
Figure 18. Time slice at t = 1.88 s through
an SOM classification volume using 256
clusters. White arrows indicate channel-like fea-
tures. Combined with vertical sections through
seismic amplitude, we interpret overbank depos-
its (yellow arrows), crevasse splays (orange ar-
rows), and slope fan deposits (red arrows). The
data are mapped to a 2D manifold initialized by
the first two principal components and are some-
what more organized than the k-means image
shown in the previous figures.
applications focus on estimating the correct number of
clusters in the data.
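For reference, a minimal single-node Lloyd-style k-means sketch (NumPy; the value of K, the iteration count, and the random initialization are illustrative assumptions rather than our MPI implementation):

```python
import numpy as np

def kmeans(X, K=16, n_iter=50, seed=0):
    """Plain k-means over (J, N) attribute vectors X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)].copy()
    for _ in range(n_iter):
        # assign each vector to the nearest cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned vectors
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers
```

Note that the returned cluster indices carry no neighbor relationship, which is exactly why the color assignment in Figures 15 and 16 appears random.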
In contrast to k-means, SOM restricts the cluster cen-
ters to lie on a deformed 2D manifold. Although clusters
may move closer or further apart, they still form (in our
implementation) a deformed quadrilateral mesh, which
maps to a rectangular mesh on the 2D latent space.
Mapping the latent space to a continuous 1D (Coleou
et al., 2003) or 2D color bar (Strecker and Uden,
2002), reduces the sensitivity to the number of clusters
chosen. We follow Gao (2007) and avoid guessing at the
number of clusters necessary to represent the data by
overdefining the number of prototype vectors to be 256
(the limit of color levels in our commercial display soft-
ware). These 256 prototype vectors (potential clusters)
reduce to only three or four distinct natural clusters
through the SOM neighborhood training criteria. The
2D SOM manifold is initialized using the first two prin-
cipal components, defining a plane through the N-di-
mensional attribute space (Figure 17). The algorithm
then deforms the manifold to better fit the data. Over-
defining the number of prototype vectors results in
clumping into a smaller number of natural clusters. These
clumped prototype vectors project onto adjacent locations
in the latent space and therefore appear as subtle
shades of the same color within the limited palette
of 256 colors shown in Figure 18. On the classifi-
cation result shown in Figure 18, we can clearly identify
the green colored spillover deposits (yellow arrows).
The difference between channel fill (white arrows)
and slope fans (red arrows) is insignificant. However,
by corendering with similarity, the channels are delin-
eated nicely, allowing us to visually distinguish channel
fills and the surrounding slope fans. We can also identify
some purple color clusters (orange arrows), which we
interpret to be crevasse splays.
Next, we apply GTM to the same four attributes. We
compute two “orthogonal” projections of data onto the
manifold and then onto the two dimensions of the latent
space. Rather than define explicit clusters, we project
the mean a posteriori probability distribution onto the
2D latent space and then export the projection onto the
two latent space axes. We crossplot the projections
along axes 1 and 2 and map them against a 2D color
bar (Figure 19). In this slice, we see channels delineated
by purple colors (white arrows), point bar and crevasse
splays in pinkish colors (yellow arrows), and slope fans
in lime green colors (red arrows). We can also identify
some thin, braided channels at the south end of the sur-
vey (blue arrow). Similarly to the SOM result, similarity
separates the incised valleys from the slope fans. How-
ever, the geologic meaning of the orange-colored facies
Figure 20. The same time slice through the
GTM projections shown in the previous image
but now displayed as four seismic facies. To
do so, we first create two GTM “components”
aligned with the original first two principal
components. We then pick four colored poly-
gons representing four seismic facies on the
histogram generated using a commercial
crossplot tool. This histogram is a map of
the GTM posterior probability distribution in
the latent space. The yellow polygon repre-
sents overbank deposits, the blue polygon rep-
resents channels/canyons, the green polygon
represents slope fan deposits, and the red pol-
ygon represents everything else.
is somewhat vague. This is the nature of unsupervised
learning techniques, in that the clusters represent
topological differences in the input data vectors,
which are not necessarily the facies differences we
wish to delineate. We can ameliorate this shortcoming
by adding a posteriori supervision to the GTM mani-
fold. The simplest way to add supervision is to com-
pute the average attribute vectors about a given
seismic facies and map it to the GTM crossplot. Then,
the interpreter can manually define clusters on the 2D
histogram by constructing one or more polygons (Fig-
ure 20), where we cluster the data into four facies:
multistoried channels (blue), high-energy point bar
and crevasse splay deposits (yellow), slope fans
(green), and “everything else” (red). A more quantita-
tive methodology is to mathematically project these
average clusters onto the manifold and then cross-
multiply the probability distribution of the control vec-
tors against the probability distribution function of
each data vector, thereby forming the Bhattacharyya
distance (Roy et al., 2013, 2014). Such measures then
provide a probability ranging between 0% and 100% as
to whether the data vector at any seismic sample point
is like the data vectors about well control (Roy et al.,
2013, 2014) or like the average data vector within a fa-
cies picked by the interpreter.
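A minimal sketch of that comparison, assuming both distributions have already been discretized over the K nodes of the latent space (the two responsibility vectors below are hypothetical):

```python
import numpy as np

def bhattacharyya(p, q, eps=1e-12):
    """Bhattacharyya coefficient (1 = identical, 0 = disjoint) and
    distance between two discrete probability distributions."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))
    return bc, -np.log(bc + eps)

# posterior of one data vector vs. the average posterior of a facies
r_data   = np.array([0.70, 0.20, 0.10, 0.00])   # hypothetical
r_facies = np.array([0.60, 0.30, 0.10, 0.00])   # hypothetical
similarity_percent = 100.0 * bhattacharyya(r_data, r_facies)[0]
```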
The a posteriori supervision added to GTM is analogous
to the prior supervision necessary for supervised clas-
sification, such as the ANN and SVM. In this study, we
used the same four attributes as input for unsupervised
and supervised learning techniques. Our supervision
Figure 21. Time slice at t = 1.88 s through
the corendered peak spectral frequency, peak
spectral magnitude, and Sobel filter similarity
volumes. Seed points (training data) are
shown with colors for the picked four facies:
blue, multistoried channels; yellow, point bars
and crevasse splays; red, channel flanks; and
green, slope fans. Attribute vectors at these
seed points are used as training data in super-
vised classification.
Figure 22. PNN errors through the training epochs. The neu-
ral network reaches its best performance at epoch 42.
consists of picked seed points for the three main facies
previously delineated using the unsupervised classifica-
tion results, which are multistoried channel, point bar
and crevasse splay deposits, and slope fans, plus an ad-
ditional channel flank facies. The seed points are shown
in Figure 21. Seed points should be picked with great
caution to correctly represent the corresponding facies;
any false picking (a seed point that does not belong to
the intended facies) will greatly compromise the classi-
fication result. We then compute averages of the four
input attributes within a 7 trace × 7 trace × 24 ms win-
dow about each seed point to generate a training table,
which consists of 4D input attribute vectors and 1D tar-
gets (the labeled facies).
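A minimal sketch of the training-table construction, assuming the four attribute volumes are stored as one (n_attr, n_inline, n_crossline, n_sample) array and a 2 ms sample interval (so the 24 ms window spans roughly ±6 samples); the array names are hypothetical:

```python
import numpy as np

def training_table(attrs, seeds, labels, half=(3, 3, 6)):
    """attrs: (n_attr, n_il, n_xl, n_t) attribute volumes.
    seeds: (il, xl, t) indices of picked seed points; labels: facies codes.
    Averages each attribute in a window about each seed point."""
    rows = []
    for il, xl, t in seeds:
        win = attrs[:,
                    il - half[0]: il + half[0] + 1,
                    xl - half[1]: xl + half[1] + 1,
                    t  - half[2]: t  + half[2] + 1]
        rows.append(win.reshape(len(attrs), -1).mean(axis=1))
    return np.asarray(rows), np.asarray(labels)  # 4D vectors, 1D targets
```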
For our ANN application, we used the neural net-
works toolbox in MATLAB, and generated a PNN com-
posed of 20 neurons. Because of the relatively small size
of the training data, the training process only took a sec-
ond or so; however, because a PNN may converge to
local minima, we are not confident that our first trained
network has the best performance. Our workflow is
then to rerun the training process 50 times and choose
the network exhibiting the lowest training and cross-
validation errors. Figures 22 and 23 show the PNN per-
formance during training, whereas Figure 24 shows the
PNN classification result. We notice that all the training,
Figure 24. Time slice at t = 1.88 s through
the ANN classification result. White arrows in-
dicate channels/canyons. Yellow arrows indi-
cate point bars and crevasse splays.
Figure 23. Confusion tables for the same PNN shown in Figure 22. From these tables, we find the training correctness to be 90% and the testing and cross-validation correctness to be 86% and 91%, respectively, warranting a reliable prediction.
testing, and cross-validation performances are accept-
able, with the training and cross-validation correctness
being approximately 90%, and the testing correctness
being more than 86%. We identify blue channel stories
within the relatively larger scale incised valleys (white
arrows) and yellow point bars and crevasse splays (yel-
low arrows). However, many of the slope fan deposits
are now classified as channel flanks or multistoried
channels (blue arrows), which need to be further cali-
brated with well-log data. Nevertheless, as a supervised
learning technique, ANN provides classification with
explicit geologic meaning, which is its primary advan-
tage over unsupervised learning techniques.
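The restart-and-select workflow can be sketched as follows; because the MATLAB PNN is not available here, we use scikit-learn's MLPClassifier as a generic stand-in, and the data, split fraction, and iteration cap are all assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.random.rand(437, 4)            # hypothetical 437-sample training table
y = np.random.randint(0, 4, 437)      # hypothetical facies labels

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.15, random_state=0)

best_net, best_err = None, np.inf
for restart in range(50):             # rerun training 50 times
    net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                        random_state=restart).fit(X_tr, y_tr)
    err = 1.0 - net.score(X_va, y_va) # validation error of this restart
    if err < best_err:
        best_net, best_err = net, err # keep the best-performing network
```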
Finally, we cluster our 4D input data using SVM, us-
ing the same training data (interpreter picks) as for
ANN. The workflow is similar to ANN, in that we ran
20 passes of training, varying the Gaussian kernel standard
deviation σ and the misclassification tolerance ε
for each pass. These parameter choices
are easier than selecting the number of neurons for
ANN because the SVM algorithm solves a convex opti-
mization problem that converges to a global minimum.
The training and cross-validation performance is com-
parable with that of the ANN, with roughly 92% training
correctness and 85% cross-validation correctness. Fig-
ure 25 shows the SVM classification result at time
t = 1.88 s. The SVM map follows the same pattern as
Figure 25. Time slice at t = 1.88 s through the SVM classification result. White arrows indicate more correctly classified slope fans. The yellow arrow indicates crevasse splays. Red arrows show misclassifications due to the possible acquisition footprint.
Figure 26. Time slice at t = 1.88 s through
the inline dip component of the reflector
dip. Inline dip magnitude provides a photolike
image of the paleocanyons.
we have seen on the ANN map, but it is generally
cleaner, with some differences in details. Compared
with ANN, SVM successfully mapped more of the
slope fans (white arrows), but it missed some crevasse
splays that were correctly picked by ANN (yellow
arrow). We also see a great amount of facies variation
within the incised valleys, which is reasonable because
a paleochannel changes course multiple times during
its deposition, resulting in multiple channel stories.
Finally, we note some red lines following a
northwest–southeast direction (red arrows),
which correspond to the acquisition footprint.
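The parameter scan can be sketched as a simple grid search; here we use scikit-learn's SVC as a stand-in for our PSVM (its gamma plays the role of the Gaussian width and C loosely plays the role of the misclassification tolerance, both mappings being assumptions, as are the data and candidate values):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X = np.random.rand(437, 4)           # hypothetical training table
y = np.random.randint(0, 4, 437)     # hypothetical facies labels

best_params, best_cv = None, -np.inf
for gamma in (0.01, 0.1, 1.0, 10.0):       # kernel width candidates
    for C in (0.1, 1.0, 10.0, 100.0):      # misclassification penalties
        cv = cross_val_score(SVC(kernel="rbf", gamma=gamma, C=C),
                             X, y, cv=5).mean()
        if cv > best_cv:
            best_params, best_cv = (gamma, C), cv
```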
Conclusions
In this paper, we have compared and contrasted
some of the more important multiattribute facies clas-
sification tools, including four unsupervised (PCA, k-
means, SOM, and GTM) and two supervised (ANN
and SVM) learning techniques. In addition to highlight-
ing the differences in assumptions and implementation,
we have applied each method to the same Canterbury
Basin survey, with the goal of delineating seismic facies
in a turbidite system to demonstrate the effectiveness
and weaknesses of each method. The k-means and
SOM move the user-defined number of cluster centers
toward the input data vectors. PCA is the simplest mani-
fold method, where the data variability in our examples
is approximated by a 2D plane defined by the first two
eigenvectors. GTM is more accurately described as a
mapping technique, like PCA, where the clusters are
formed either in the human brain as part of visualization
or through crossplotting and the construction of poly-
gons. SOM and GTM manifolds deform to fit the N-di-
mensional data. In SOM, the cluster centers (prototype
vectors) move along the manifold toward the data vec-
tors, forming true clusters. In all four methods, any la-
beling of a given cluster to a given facies happens after
the process is completed. In contrast, ANN and SVM
build a specific relation between the input data vectors
and a subset of user-labeled input training data vectors,
thereby explicitly labeling the output clusters to the de-
sired facies. Supervised learning is constructed from a
limited group of training samples (usually at certain
well locations or manually picked seed points), which
generally are insufficient to represent all the lithologic
and stratigraphic variations within a relatively large
seismic data volume. A pitfall of supervised learning
is that unforeseen clusters will be misclassified as one
of the clusters that has been chosen.
For this reason, unsupervised classification products
can be used to construct not only an initial estimate of
the number of classes, but also a validation tool to de-
termine if separate clusters have been incorrectly
lumped together. We advise computing unsupervised
SOM or GTM prior to picking seed points for sub-
sequent supervised learning, to clarify the topological
differences mapped by our choice of attributes. Such
mapping will greatly improve the picking confidence
because the seed points are now confirmed by human
experience and mathematical statistics.
The choice of the correct suite of attributes is criti-
cal. Specifically, images that are ideal for multiattribute
visualization may be suboptimal for clustering. We
made several poor choices in previous iterations of
writing this paper. The image of the inline (south-
west–northeast) structural dip illustrates this problem
directly. Although a skilled interpreter sees a great deal
of detail in Figure 26, there is no clear facies difference
between positive and negative dips, such that this com-
ponent of vector dip cannot be used to differentiate
them. A better choice would be dip magnitude, except
that a long-wavelength overprint (such as descending
into the basin) would again bias our clustering in a man-
ner that is unrelated to facies. Therefore, we tried to
use relative changes in dip: curvedness and shape
indices, which measure lateral changes in dip, and reflector
convergence, which differentiates conformal from non-
conformal reflectors.
Certain attributes should never be used in clustering.
Phase, azimuth, and strike have circular distributions,
in which a phase value of −180 indicates the same value
as +180. No trend can be found. Although the shape in-
dex s is not circular, ranging between −1 and +1, the
histogram has a peak about the ridge (s = +0.5) and
about the valley (s = −0.5). We speculate that shape
components may be more amenable to classification.
Reflector convergence follows the same pattern as
curvedness. For this reason, we only used curvedness
as a representative of these three attributes. This
choice improved our clustering.
Edge attributes such as Sobel filter similarity and co-
herence are not useful for the example shown here; in-
stead, we have visually added them as an edge “cluster”
and corendered with the images shown in Figures 15–
21, 24, and 25. In contrast, when analyzing more chaotic
features, such as salt domes and karst collapse, coher-
ence is a good input to clustering algorithms. We do
wish to provide an estimate of continuity and random-
ness to our clustering. To do so, we use GLCM homo-
geneity as an input attribute.
Theoretically, no one technique is superior to all
the others in every aspect, and each technique has
its inherent advantages and defects. The k-means with
a relatively small number of clusters is the easiest algo-
rithm to implement, and it provides rapid interpretation,
but it lacks the relation among clusters. SOM provides a
generally more “interpreter-friendly” clustering result
with topological connections among clusters, but it is
computationally more demanding than k-means. GTM
relies on probability theory and enables the interpreter
to add a posteriori supervision by manipulating the da-
ta’s posterior probability distribution; however, it is not
widely accessible to the exploration geophysicist com-
munity. Rather than displaying the conventional cluster
numbers (or labels), we suggest displaying the cluster
coordinates projected onto the 2D SOM and GTM latent
space axes. Doing so not only provides greater flexibil-
ity in constructing a 2D color bar, but it also provides
data that can be further manipulated using 2D cross-
plot tools.
For the two supervised learning techniques, ANN
suffers from convergence problems and requires expertise
to achieve optimal performance, although
its computational cost is relatively low. SVM is math-
ematically more robust and easier to train, but it is more
computationally demanding.
Practically, setting software limitations aside, we can
make suggestions on how an interpreter can incorpo-
rate these techniques to facilitate seismic facies inter-
pretation at different exploration and development
stages. To identify the main features in a recently ac-
quired 3D seismic survey on which limited to no tradi-
tional structural interpretation is done, k-means is a
good candidate for exploratory classification, starting
with a small K (typically K = 4) and gradually increasing
the number of classes. As more data are acquired (e.g.,
well-log data and production data) and detailed struc-
tural interpretation has been performed, SOM or
GTM focusing on the target formations will provide a
more refined classification, which needs to be cali-
brated with wells. In the development stage, when most
of the data have been acquired, with proper training
processes, ANN and SVM provide targeted products,
characterizing the reservoir by mimicking interpreters’
behavior. Generally, SVM provides classification superior
to that of ANN but at a considerably higher computa-
tional cost, so choosing between these two requires
balancing performance and runtime cost. As a practical
matter, no single interpretation software platform provides
all five of these clustering techniques, such that
many of the choices are based on software availability.
Because we wish this paper to serve as an inspiration
to interpreters, we do want to reveal one drawback of
our work: All the classifications are performed volumetrically
rather than along a certain formation. Such classification
may be biased by the bounding formations above
and below the target formation (if we do have a target
formation), therefore contaminating the facies map.
However, we want to make the point that such classi-
fication can happen at a very early stage of interpreta-
tion, when structural interpretation and well logs are
very limited. And even in such a situation, we can still
use classification techniques to generate facies volumes
to assist subsequent interpretation.
In the 1970s and 1980s, much of the geophysical in-
novation in seismic processing and interpretation was
facilitated by the rapid evolution of computer technol-
ogy — from mainframes to minicomputers to worksta-
tions to distributed processing. We believe that similar
advances in facies analysis will be facilitated by the
rapid innovation in “big data” analysis, driven by the
needs of marketing and security. Although we may
not answer Turing’s question, “Can machines think?”
we will certainly be able to teach them how to emulate
a skilled human interpreter.
Acknowledgments
We thank New Zealand Petroleum and Minerals for
providing the Waka 3D seismic data to the public. Fi-
nancial support for this effort and for the development
of our own k-means, PCA, SOM, GTM, and SVM algo-
rithms was provided by the industry sponsors of the
Attribute-Assisted Seismic Processing and Interpreta-
tion Consortium at the University of Oklahoma. The
ANN examples were run using MATLAB. We thank
our colleagues F. Li, S. Verma, L. Infante, and B. Wallet
for their valuable input and suggestions. All the 3D seis-
mic displays were made using licenses to Petrel, pro-
vided to the University of Oklahoma for research and
education courtesy of Schlumberger. Finally, we want
to express our great respect to the people who have
contributed to the development of pattern-recognition
techniques in the exploration geophysics field.
Appendix A
Mathematical details
In this appendix, we summarize many of the
mathematical details defining the various algorithm im-
plementations. Although insufficient to allow a straight-
forward implementation of each algorithm, we hope to
more quantitatively illustrate the algorithmic assump-
tions, as well as algorithmic similarities and differences.
Because k-means and ANNs have been widely studied,
in this appendix, we only give some principal statistical
background, and brief reviews of SOM, GTM, and
PSVM algorithms involved in this tutorial. We begin this
appendix by giving statistical formulations of the
covariance matrix, principal components, and the Ma-
halanobis distance when applied to seismic attributes.
We further illustrate the formulations and some neces-
sary theory for SOM, GTM, ANN, and PSVM. Because of
the extensive use of mathematical symbols and nota-
tions, a table of shared mathematical notations is given
in Table A-1. All other symbols are defined in the
text.
Covariance matrix, principal components, and the
Mahalanobis distance
Given a suite of N attributes, the covariance matrix is
defined as
$$C_{mn} = \frac{1}{J}\sum_{j=1}^{J}\left(a_{jm}(t_j,x_j,y_j)-\mu_m\right)\left(a_{jn}(t_j,x_j,y_j)-\mu_n\right), \qquad (A-1)$$

where $a_{jm}$ and $a_{jn}$ are the mth and nth attributes, J is the total number of data vectors, and where

$$\mu_n = \frac{1}{J}\sum_{j=1}^{J} a_{jn}(t_j,x_j,y_j) \qquad (A-2)$$
is the mean of the nth attribute. If we compute the ei-
genvalues λi and eigenvectors vi of the real, symmetric
covariance matrix C, the ith principal component at data vector j is defined as

$$p_{ji} = \sum_{n=1}^{N} a_{jn}(t_j,x_j,y_j)\, v_{ni}, \qquad (A-3)$$
where vni indicates the nth attribute component of the
ith eigenvector. In this paper, the first two eigenvectors
and eigenvalues are also used to construct an initial
model in the SOM and GTM algorithms.
The Mahalanobis distance rjq of the jth sample from
the qth cluster center $\theta_q$ is defined as

$$r_{jq}^{2} = \sum_{n=1}^{N}\sum_{m=1}^{N}\left(a_{jn}-\theta_{nq}\right) C_{nm}^{-1}\left(a_{jm}-\theta_{mq}\right), \qquad (A-4)$$
where the inversion of the covariance matrix C takes
place prior to extracting the mnth element.
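Equations A-1 through A-4 translate directly into NumPy; the data array below is hypothetical:

```python
import numpy as np

A = np.random.rand(1000, 4)               # J = 1000 vectors of N = 4 attributes

mu = A.mean(axis=0)                       # attribute means, equation A-2
C = (A - mu).T @ (A - mu) / len(A)        # covariance matrix, equation A-1

lam, V = np.linalg.eigh(C)                # eigenpairs of the symmetric C
lam, V = lam[::-1], V[:, ::-1]            # reorder largest first
P = A @ V                                 # principal components, equation A-3

theta = mu                                # a (hypothetical) cluster center
Cinv = np.linalg.inv(C)
r2 = np.einsum("jn,nm,jm->j", A - theta, Cinv, A - theta)  # equation A-4
```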
Self-organizing maps
Rather than computing the Mahalanobis distance,
the SOM and GTM first normalize the data using a Z-
scale. If the data exhibit an approximately Gaussian dis-
tribution, the Z-scale of the nth attribute is obtained by
subtracting the mean and dividing by the standard
deviation (the square root of the diagonal of the covari-
ance matrix Cnn). To Z-scale non-Gaussian distributed
data, such as coherence, one needs to first break down
the data using histograms that approximate a Gaussian.
The objective of the SOM algorithm is to map the input
seismic attributes onto a geometric manifold called the
self-organized map. The SOM manifold is defined by a
suite of prototype vectors mk lying on a lower dimen-
sional (in our case, 2D) surface, which fit the N-dimen-
sional attribute data. The prototype vectors mk are
typically arranged in 2D hexagonal or rectangular struc-
ture maps that preserve their original neighborhood re-
lationship, such that neighboring prototype vectors
represent similar data vectors. The number of prototype
vectors in the 2D map determines the effectiveness and
generalization of the algorithm. One strategy is to esti-
mate the number of initial clusters, and then to either
divide or join clusters based on distance criteria. In our
case, we follow Gao (2007) and overdefine the number
of clusters to be the maximum number of colors sup-
ported by our visualization software. Interpreters then
either use their color perception or construct polygons
on 2D histograms to define a smaller number of
clusters.
Our implementation of the SOM algorithm is summa-
rized in Figure A-1. After computing Z-scores of the
input data, the initial manifold is defined to be a plane
defined by the two first principal components. Proto-
type vectors $m_k$ are defined on a rectangular grid scaled
by the first two eigenvalues, ranging between $\pm 2\lambda_1^{1/2}$
and $\pm 2\lambda_2^{1/2}$. The seismic attribute data are then com-
pared with each of the prototype vectors, finding the
nearest one. This prototype vector and its nearest
neighbors (those that fall within a range σ, defining a
Gaussian perturbation) are moved toward the data
point. After all the training vectors have been examined,
the neighborhood radius σ is reduced. Iterations con-
tinue until σ approaches the distance between the origi-
nal prototype vectors. Given this background, Kohonen
(2001) defines the SOM training algorithm using the fol-
lowing five steps:
1) Randomly choose a previously Z-scored input
attribute vector aj from the set of input vectors.
2) Compute the Euclidean distance between this vec-
tor $a_j$ and all prototype vectors $m_k$, $k = 1, 2, \ldots, K$.
The prototype vector that has the minimum distance
to the input vector aj is defined to be the “winner” or
the best matching unit, mb:
$$\| a_j - m_b \| = \min_{k}\left\{ \| a_j - m_k \| \right\}. \qquad (A-5)$$
Table A-1. List of shared mathematical symbols.

| Variable name | Definition |
|---|---|
| n and N | Attribute index and number of attributes |
| j and J | Voxel (attribute vector) index and number of voxels |
| k and K | Manifold index and number of grid points |
| a_j | The jth attribute data vector |
| p | Matrix of principal components |
| C | Attribute covariance matrix |
| μ_n | Mean of the nth attribute |
| λ_m and v_m | The mth eigenvalue and eigenvector pair |
| m_k | The kth grid point lying on the manifold (prototype vector for SOM, or Gaussian center for GTM) |
| u_k | The kth grid point lying on the latent space |
| r_jk | The Mahalanobis distance between the jth data vector and the kth cluster center or manifold grid point |
| I | Identity matrix of dimension defined in the text |
3) Update the winner prototype vector and its neigh-
bors. The updating rule for the weight of the kth
prototype vector inside and outside the neighbor-
hood radius σ(t) is given by

$$m_k(t+1) = \begin{cases} m_k(t) + \alpha(t)\, h_{bk}(t)\left[a_j - m_k(t)\right], & \text{if } \| r_k - r_b \| \le \sigma(t), \\ m_k(t), & \text{if } \| r_k - r_b \| > \sigma(t), \end{cases} \qquad (A-6)$$

where the neighborhood radius σ(t) is predefined for a problem and decreases with each iteration t. Here, $r_b$ and $r_k$ are the position vectors of the winner prototype vector $m_b$ and the kth prototype vector $m_k$, respectively. We also define the neighborhood function $h_{bk}(t)$, the exponential learning function $\alpha(t)$, and the length of training T. The $h_{bk}(t)$ and $\alpha(t)$ decrease with each iteration in the learning process and are defined as

$$h_{bk}(t) = e^{-\| r_b - r_k \|^{2} / 2\sigma^{2}(t)}, \qquad (A-7)$$

and

$$\alpha(t) = \alpha_0 \left( \frac{0.005}{\alpha_0} \right)^{t/T}. \qquad (A-8)$$
4) Iterate through each learning step (steps [1–3]) until
the convergence criterion (which depends on the
predefined lowest neighborhood radius and the min-
imum distance between the prototype vectors in the
latent space) is reached.
5) Project the prototype vectors onto the first two prin-
cipal components and color code using a 2D color
bar (Matos et al., 2009).
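The five steps above condense into a short NumPy sketch; the grid size, radii, learning rate, and random initialization (rather than the principal-component plane) are simplifying assumptions:

```python
import numpy as np

def som_train(A, grid=(16, 16), n_iter=10000, sigma0=8.0, alpha0=0.5, seed=0):
    """Minimal SOM following steps 1-4; A is (J, N) Z-scored attribute
    vectors; grid is the 2D latent-space layout (16 x 16 = 256 prototypes)."""
    rng = np.random.default_rng(seed)
    K = grid[0] * grid[1]
    m = A[rng.choice(len(A), K)].copy()              # prototype vectors
    r = np.stack(np.meshgrid(np.arange(grid[0]),     # latent positions r_k
                             np.arange(grid[1])), axis=-1).reshape(K, 2) * 1.0
    for t in range(n_iter):
        a = A[rng.integers(len(A))]                  # step 1: random vector
        b = np.argmin(np.linalg.norm(m - a, axis=1)) # step 2: winner (A-5)
        sigma = sigma0 * (1.0 / sigma0) ** (t / n_iter)    # shrinking radius
        alpha = alpha0 * (0.005 / alpha0) ** (t / n_iter)  # equation A-8
        h = np.exp(-np.sum((r - r[b]) ** 2, axis=1) / (2.0 * sigma**2))  # A-7
        near = np.linalg.norm(r - r[b], axis=1) <= sigma
        m[near] += alpha * h[near, None] * (a - m[near])   # update, A-6
    return m, r
```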
Generative topological mapping
In GTM, the grid points of our 2D deformed manifold
in N-dimensional attribute space define the centers, mk,
of Gaussian distributions of variance $\sigma^2 = \beta^{-1}$. These
centers mk are in turn projected onto a 2D latent space,
defined by a grid of nodes uk and nonlinear basis func-
tions Φ:
$$m_k = \sum_{m=1}^{M} W_{km}\,\Phi_m(u_k), \qquad (A-9)$$
where W is a K × M matrix of unknown weights, $\Phi_m(u_k)$ is a set of M nonlinear basis functions, $m_k$ are vectors defining the deformed manifold in the N-dimensional data space, and $k = 1, 2, \ldots, K$ is the number of grid points arranged on a lower L-dimensional latent space (in our case, L = 2). A noise model (the probabil-
weights W and inverse variance β) is introduced for
each measured data vector. The probability density
function p is represented by a suite of K radially sym-
metric N-dimensional Gaussian functions centered
about $m_k$ with variance $1/\beta$:

$$p(a_j \mid W, \beta) = \sum_{k=1}^{K} \frac{1}{K}\left(\frac{\beta}{2\pi}\right)^{N/2} e^{-\frac{\beta}{2}\| m_k - a_j \|^{2}}. \qquad (A-10)$$
The prior probabilities of each of these components are assumed to be equal with a value of 1/K for all data vectors $a_j$. Figure 4 illustrates the GTM mapping from an L = 2 latent space to the 3D data space.
The probability density model (GTM model) is fit to
the data aj to find the parameters W and β using a maxi-
mum likelihood estimation. One popular technique
used in parameter estimations is the EM algorithm. Us-
ing Bayes’ theorem and the current values of the GTM
Figure A-1. The SOM workflow.
model parameters W and β, we calculate the J × K posterior probability or responsibility, $R_{jk}$, for each of the K components in latent space for each data vector:

$$R_{jk} = \frac{e^{-\frac{\beta}{2}\| m_k - a_j \|^{2}}}{\sum_{i} e^{-\frac{\beta}{2}\| m_i - a_j \|^{2}}}. \qquad (A-11)$$
Equation A-11 forms the “E-step” or expectation step
in the EM algorithm. The E-step is followed by the maxi-
mization or “M-step,” which uses these responsibilities
to update the model for a new weight matrix W by solv-
ing a set of linear equations (Dempster et al., 1977):
$$\left( \Phi^{T} G \Phi + \frac{\alpha}{\beta} I \right) W_{\text{new}}^{T} = \Phi^{T} R X, \qquad (A-12)$$

where $G_{kk} = \sum_{j=1}^{J} R_{jk}$ are the nonzero elements of the K × K diagonal matrix G, Φ is a K × M matrix with elements $\Phi_{km} = \Phi_m(u_k)$, α is a regularization constant to avoid division by zero, and I is the M × M identity matrix.
The updated value of β is given by

$$\frac{1}{\beta_{\text{new}}} = \frac{1}{JN} \sum_{j=1}^{J} \sum_{k=1}^{K} R_{jk} \left\| W_{\text{new}}\, \Phi(u_k) - a_j \right\|^{2}. \qquad (A-13)$$
The initialization of W is done so that the initial GTM
model approximates the principal components (largest
eigenvectors) of the input data, $a_j$. The value of $\beta^{-1}$ is
initialized to be the (L+1)th eigenvalue
from PCA, where L is the dimension of the latent space.
In Figure 4, L = 2, such that we initialize $\beta^{-1}$
to be the third eigenvalue. Figure A-2 summarizes
this workflow.
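One EM iteration (equations A-10 through A-13) can be sketched as follows; we store W as an M × N matrix so that the Gaussian centers are the rows of ΦW, a layout convention assumed for brevity (Φ would be the basis functions evaluated at the latent grid points):

```python
import numpy as np

def gtm_em_step(A, Phi, W, beta, alpha=1e-3):
    """A: (J, N) data; Phi: (K, M) basis matrix; W: (M, N) weights."""
    m = Phi @ W                                       # centers, equation A-9
    # E-step: responsibilities, equation A-11
    d2 = np.sum((A[:, None, :] - m[None, :, :]) ** 2, axis=2)   # (J, K)
    R = np.exp(-0.5 * beta * d2)
    R /= R.sum(axis=1, keepdims=True)
    # M-step: regularized linear solve, equation A-12
    G = np.diag(R.sum(axis=0))
    lhs = Phi.T @ G @ Phi + (alpha / beta) * np.eye(Phi.shape[1])
    W_new = np.linalg.solve(lhs, Phi.T @ R.T @ A)
    # variance update, equation A-13
    d2_new = np.sum((A[:, None, :] - (Phi @ W_new)[None, :, :]) ** 2, axis=2)
    beta_new = A.size / np.sum(R * d2_new)
    return W_new, beta_new, R
```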
Artificial neural networks
ANNs are a class of pattern-recognition algorithms
that were derived separately in different fields, such
as statistics and artificial intelligence. ANNs are easily
accessible to most geophysical interpreters, so we
only provide a general workflow of applying an ANN
to seismic facies classification for completeness of this
tutorial. The workflow is shown in Figure A-3.
Proximal support vector machines
Because SVMs were originally developed to solve
binary classification problems, we begin with a
summary of the arithmetic describing a binary
PSVM classifier.
Similarly to SVM, a PSVM decision condition is de-
fined as (Figure 7)

$$x^{T}\omega - \gamma \;\begin{cases} > 0, & x \in X^{+}, \\ = 0, & x \in X^{+} \text{ or } X^{-}, \\ < 0, & x \in X^{-}, \end{cases} \qquad (A-14)$$
where x is an N-dimensional attribute vector to be classified, ω is an N × 1 vector that implicitly defines the normal of the decision boundary in the higher-dimensional space, γ defines the location of the decision boundary, and $X^{+}$ and $X^{-}$ indicate the two classes of the binary classification. PSVM solves an optimization problem of the form (Fung and Mangasarian, 2001):
$$\min_{\omega,\gamma,y}\; \varepsilon\,\frac{1}{2}\| y \|^{2} + \frac{1}{2}\left(\omega^{T}\omega + \gamma^{2}\right), \qquad (A-15)$$
subject to
Figure A-2. The GTM workflow.
$$D(a\omega - e\gamma) + y = e. \qquad (A-16)$$
In this optimization problem, y is a J × 1 error variable and a is a J × N sample matrix composed of J attribute vectors, which can be divided into two classes, $X^{+}$ and $X^{-}$. The D is a J × J diagonal matrix of labels whose diagonal is +1 for $X^{+}$ and −1 for $X^{-}$. The ε is a nonnegative parameter. Finally, e is a J × 1 column vector of ones. This optimization problem can be solved using a J × 1 Lagrangian multiplier t:
$$L(\omega, \gamma, y, t) = \varepsilon\,\frac{1}{2}\| y \|^{2} + \frac{1}{2}\left(\omega^{T}\omega + \gamma^{2}\right) - t^{T}\left( D(a\omega - e\gamma) + y - e \right). \qquad (A-17)$$
By setting the gradients of L to zero, we obtain ex-
pressions for ω, γ, and y explicitly in terms of the knowns and t,
where t can further be represented by a, D, and ε. Then,
by substituting ω in equations A-15 and A-16 with its
dual equivalent $\omega = a^{T} D t$, we arrive at (Fung
and Mangasarian, 2001)
$$\min_{t,\gamma,y}\; \varepsilon\,\frac{1}{2}\| y \|^{2} + \frac{1}{2}\left(t^{T} t + \gamma^{2}\right), \qquad (A-18)$$
subject to
$$D\left( a a^{T} D t - e\gamma \right) + y = e. \qquad (A-19)$$
Equations A-18 and A-19 provide a more desirable
version of the optimization problem because one can
now insert kernel methods to solve nonlinear classifica-
tion problems, made possible by the term $a a^{T}$ in equation A-19. Using the Lagrangian multiplier again (this
time we denote the multiplier as τ), we can minimize
the new optimization problem against t, γ, y, and τ.
By setting the gradients of these four variables to zero,
we can express t, γ, and y explicitly by τ and other
knowns, where τ depends solely on the data matrices.
Then, for an N-dimensional attribute vector x, we
write the decision conditions as
$$x^{T} a^{T} D t - \gamma \;\begin{cases} > 0, & x \in X^{+}, \\ = 0, & x \in X^{+} \text{ or } X^{-}, \\ < 0, & x \in X^{-}, \end{cases} \qquad (A-20)$$
with
$$t = D K^{T} D \left( \frac{I}{\varepsilon} + G G^{T} \right)^{-1} e, \qquad (A-21)$$

$$\gamma = e^{T} D \left( \frac{I}{\varepsilon} + G G^{T} \right)^{-1} e, \qquad (A-22)$$

and

$$G = D\left[\, K \quad -e \,\right]. \qquad (A-23)$$
Instead of a, we have K in equations A-21 and A-23, which is a Gaussian kernel function of a and $a^{T}$ that has the form

$$K(a, a^{T})_{ij} = \exp\left( -\sigma \| a_{i\bullet}^{T} - a_{j\bullet}^{T} \|^{2} \right), \quad i, j \in [1, J], \qquad (A-24)$$
where σ is a scalar parameter. Finally, by replacing $x^{T} a^{T}$ with its corresponding kernel expression, the decision condition can be written as

$$K(x^{T}, a^{T}) D t - \gamma \;\begin{cases} > 0, & x \in X^{+}, \\ = 0, & x \in X^{+} \text{ or } X^{-}, \\ < 0, & x \in X^{-}, \end{cases} \qquad (A-25)$$
Figure A-3. The ANN workflow.
and

$$K(x^{T}, a^{T})_{i} = \exp\left( -\sigma \| x - a_{i\bullet}^{T} \|^{2} \right), \quad i \in [1, J]. \qquad (A-26)$$
The formulations above represent a nonlinear PSVM
classifier.
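Equations A-21 through A-26 reduce to a handful of dense linear-algebra calls; a minimal NumPy sketch (labels in {+1, −1}; the values of ε and σ are arbitrary assumptions, and no effort is made to exploit matrix structure for speed):

```python
import numpy as np

def _gauss(A, B, sigma):
    """Gaussian kernel, equations A-24 and A-26."""
    sq = (np.sum(A**2, 1)[:, None] - 2.0 * A @ B.T + np.sum(B**2, 1)[None, :])
    return np.exp(-sigma * sq)

def psvm_train(A, d, eps=1.0, sigma=0.1):
    """A: (J, N) training vectors; d: (J,) labels in {+1, -1}."""
    J = len(A)
    K = _gauss(A, A, sigma)
    D = np.diag(d.astype(float))
    G = D @ np.hstack([K, -np.ones((J, 1))])          # equation A-23
    sol = np.linalg.solve(np.eye(J) / eps + G @ G.T, np.ones(J))
    t = D @ K.T @ D @ sol                             # equation A-21
    gamma = np.ones(J) @ D @ sol                      # equation A-22
    return t, gamma

def psvm_classify(x, A, d, t, gamma, sigma=0.1):
    """Decision condition, equation A-25; returns +1, 0, or -1."""
    Kx = _gauss(x[None, :], A, sigma)[0]
    return np.sign(Kx @ np.diag(d.astype(float)) @ t - gamma)
```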
To extend this binary classifier to handle multiclass
classification problems, some strategies have been de-
veloped by researchers, which generally fall into three
categories: “one-versus-all,” “one-versus-one,” and “all
together.” For Q classes, the former two strategies
build a suite of binary classifiers individually:
Q(Q − 1)/2 for the one-versus-one and Q for the
one-versus-all algorithm. We then use these classifiers
to construct the final classification decision.
The all-together strategy attempts to solve the multiclass
problem in one step. Hsu and Lin (2002) find the one-versus-one
method to be superior for large problems. There are
two particular algorithms for one-versus-one strate-
gies, namely, the “max wins” (Kreßel, 1999) and
directed acyclic graph (Platt et al., 2000) algorithms.
Both algorithms provide comparable results while sur-
passing the one-versus-all method in accuracy and
computational efficiency.
Our approach uses a classification factor table to as-
sign classes to unknown samples (Figure A-4). A clas-
sification factor of an unknown sample point for a
certain pilot class “X” is the normalized distance to
the binary decision boundary between X and the other
class used when generating this binary decision boun-
dary. An example of a classification factor table is
shown in Figure A-4, and based on this table, the un-
known sample point belongs to class “D.”
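The bookkeeping of the factor table is implementation specific; as a rough stand-in, the sketch below combines the Q(Q − 1)/2 binary decisions with a plain max-wins vote (Kreßel, 1999), which is one of the one-versus-one strategies named above but not necessarily identical to our table logic:

```python
import numpy as np
from itertools import combinations

def one_vs_one_predict(x, binary_decision, Q):
    """binary_decision(p, q, x) returns a signed score that is positive
    when x is judged to belong to class p rather than class q."""
    votes = np.zeros(Q, dtype=int)
    for p, q in combinations(range(Q), 2):
        if binary_decision(p, q, x) >= 0.0:
            votes[p] += 1
        else:
            votes[q] += 1
    return int(np.argmax(votes))   # ties broken toward the lower index
```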
References
Al-Anazi, A., and I. D. Gates, 2010, A support vector ma-
chine algorithm to classify lithofacies and model per-
meability in heterogeneous reservoirs: Engineering
Geology, 114, 267–277, doi: 10.1016/j.enggeo.2010.05
.005.
Balch, A. H., 1971, Color sonagrams: A new dimension in
seismic data interpretation: Geophysics, 36, 1074–1098,
doi: 10.1190/1.1440233.
Barnes, A. E., and K. J. Laughlin, 2002, Investigation of
methods for unsupervised classification of seismic data:
72nd Annual International Meeting, SEG, Expanded Ab-
stracts, 2221–2224.
Bennett, K. P., and A. Demiriz, 1999, Semi-supervised
support vector machines, in M. S. Kearns, S. A. Solla,
and D. A. Cohn, eds., Advances in Neural Information
Processing Systems 11: Proceedings of the 1998
Conference, MIT Press, 368–374.
Bishop, C. M., 2006, Pattern recognition and machine
learning: Springer.
Bishop, C. M., M. Svensen, and C. K. I. Williams, 1998, The
generative topographic mapping: Neural Computation,
10, 215–234, doi: 10.1162/089976698300017953.
Chopra, S., and K. J. Marfurt, 2007, Seismic attributes for
prospect identification and reservoir characterization:
SEG.
Coleou, T., M. Poupon, and K. Azbel, 2003, Unsupervised
seismic facies classification: A review and comparison
of techniques and implementation: The Leading Edge,
22, 942–953, doi: 10.1190/1.1623635.
Corradi, A., P. Ruffo, A. Corrao, and C. Visentin, 2009, 3D
hydrocarbon migration by percolation technique in an
alternative sand-shale environment described by a seis-
mic facies classification volume: Marine and Petroleum
Geology, 26, 495–503, doi: 10.1016/j.marpetgeo.2009.01
.002.
Cortes, C., and V. Vapnik, 1995, Support-vector net-
works: Machine Learning, 20, 273–297, doi: 10.1023/
A:1022627411411.
Figure A-4. The PSVM workflow.
Cristianini, N., and J. Shawe-Taylor, 2000, An introduction
to support vector machines and other kernel-based
learning methods: Cambridge University Press.
Dempster, A. P., N. M. Laird, and D. B. Rubin, 1977, Maxi-
mum likelihood from incomplete data via the EM algo-
rithm: Journal of the Royal Statistical Society, Series B,
39, 1–38.
Duda, R. O., P. E. Hart, and D. G. Stork, 2001, Pattern clas-
sification, 2nd ed.: John Wiley & Sons.
Eagleman, D., 2012, Incognito: The secret lives of the brain:
Pantheon Books.
Forgy, E. W., 1965, Cluster analysis of multivariate data:
Efficiency vs interpretability of classifications: Biomet-
rics, 21, 768–769.
Fung, G., and O. L. Mangasarian, 2001, Proximal support
vector machine classifiers: Proceedings of the Seventh
ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, ACM 2001, 77–86.
Fung, G. M., and O. L. Mangasarian, 2005, Multicategory
proximal support vector machine classifiers: Machine
Learning, 59, 77–97, doi: 10.1007/s10994-005-0463-6.
Gao, D., 2007, Application of three-dimensional seismic
texture analysis with special reference to deep-marine
facies discrimination and interpretation: An example
from offshore Angola, West Africa: AAPG Bulletin,
91, 1665–1683, doi: 10.1306/08020706101.
Honório, B. C. Z., A. C. Sanchetta, E. P. Leite, and A. C.
Vidal, 2014, Independent component spectral analysis:
Interpretation, 2, no. 1, SA21–SA29, doi: 10.1190/INT-
2013-0074.1.
Hsu, C., and C. Lin, 2002, A comparison of methods for
multiclass support vector machines: IEEE Transactions
on Neural Networks, 13, 732–744, doi: 10.1109/TNN
.2002.1000139.
Huang, Z., J. Shimeld, M. Williamson, and J. Katsube, 1996,
Permeability prediction with artificial neural network
modeling in the Venture gas field, offshore eastern Can-
ada: Geophysics, 61, 422–436, doi: 10.1190/1.1443970.
Jancey, R. C., 1966, Multidimensional group analysis: Aus-
tralian Journal of Botany, 14, 127–130, doi: 10.1071/
BT9660127.
Justice, J. H., D. J. Hawkins, and D. J. Wong, 1985, Multi-
dimensional attribute analysis and pattern recognition
for seismic interpretation: Pattern Recognition, 18,
391–399, doi: 10.1016/0031-3203(85)90010-X.
Kalkomey, C. T., 1997, Potential risks when using
seismic attributes as predictors of reservoir properties:
The Leading Edge, 16, 247–251, doi: 10.1190/1.1437610.
Kohonen, T., 1982, Self-organized formation of topologi-
cally correct feature maps: Biological Cybernetics,
43, 59–69, doi: 10.1007/BF00337288.
Kohonen, T., 2001, Self-organizing maps, 3rd ed.:
Springer-Verlag.
Kreßel, U., 1999, Pairwise classification and support vector
machines, in B. Schölkopf, C. J. C. Burges, and A. J.
Smola, eds., Advances in kernel methods: Support vec-
tor learning: MIT Press, 255–268.
Kuzma, H. A., and J. W. Rector, 2004, Nonlinear AVO
inversion using support vector machines: 74th Annual
International Meeting, SEG, Expanded Abstracts,
203–206.
Kuzma, H. A., and J. W. Rector, 2005, The Zoeppritz equa-
tions, information theory, and support vector machines:
75th Annual International Meeting, SEG, Expanded Ab-
stracts, 1701–1704.
Kuzma, H. A., and J. W. Rector, 2007, Support vector ma-
chines implemented on a graphics processing unit: 77th
Annual International Meeting, SEG, Expanded Ab-
stracts, 2089–2092.
Li, J., and J. Castagna, 2004, Support vector machine
(SVM) pattern recognition to AVO classification: Geo-
physical Research Letters, 31, L02609, doi: 10.1029/
2003GL019044.
Lim, J., 2005, Reservoir properties determination using
fuzzy logic and neural networks from well data in
offshore Korea: Journal of Petroleum Science and
Engineering, 49, 182–192, doi: 10.1016/j.petrol.2005.05
.005.
Lubo, D., K. Marfurt, and V. Jayaram, 2014, Statistical char-
acterization and geological correlation of wells using
automatic learning Gaussian mixture models: Uncon-
ventional Resources Technology Conference, Extended
Abstracts, 774–783.
MacQueen, J., 1967, Some methods for classification and
analysis of multivariate observations, in L. M. Le
Cam and J. Neyman, eds., Proceedings of the Fifth
Berkeley Symposium on Mathematical Statistics and
Probability: University of California Press, 281–297.
Mangasarian, O. L., and E. W. Wild, 2006, Multisurface
proximal support vector machine classification via gen-
eralized eigenvalues: IEEE Transactions on Pattern
Analysis and Machine Intelligence, 28, 69–74, doi: 10
.1109/TPAMI.2006.17.
Matos, M. C., K. J. Marfurt, and P. R. S. Johann, 2009, Seis-
mic color self-organizing maps: Presented at the 11th
International Congress of the Brazilian Geophysical So-
ciety, Extended Abstracts.
Meldahl, P., R. Heggland, B. Bril, and P. de Groot, 1999,
The chimney cube, an example of semi-automated de-
tection of seismic objects by directive attributes and
neural networks. Part I: Methodology: 69th Annual
International Meeting, SEG, Expanded Abstracts,
931–934.
Mitchell, J., and H. L. Neil, 2012, OS20/20 Canterbury —
Great South Basin TAN1209 voyage report: National In-
stitute of Water and Atmospheric Research Ltd (NIWA).
Murat, M. E., and A. J. Rudman, 1992, Automated first
arrival picking: A neural network approach: Geophysi-
cal Prospecting, 40, 587–604, doi: 10.1111/j.1365-2478
.1992.tb00543.x.