SlideShare a Scribd company logo
1 of 143
Seismic Attributes:
Taxonomic Classification for Pragmatic Machine Learning
Application
Dustin T. Dewett
Ph.D. Defense
The University of Oklahoma
Committee:
Dr. John Pigott (Chair)
Dr. Kurt Marfurt (Co-Chair)
Dr. Zulfiquar Reza
Dr. Younane Abousleiman
11/15/2021 1
Abstract
• Seismic interpretation involves a hierarchy of thought
• Recall — recognize features from prior material
• Generalize — classify new examples from prior context
• Develop — applying acquired facts or information in a different way
• Inference — differentiating similar observations
• Synthesis — combining prior learned knowledge to understand a system
• Evaluation — creating realistic models from unrelated information
• Seismic interpretation is progressing toward an increase in automation and
machine learning techniques.
• Failure to appreciate these nuances will result in misapplication.
after Anderson and Krathwohl, 2001
11/15/2021 2
Recall
after Dewett, 2017
11/15/2021 3
Recall
after Dewett, 2017
11/15/2021 4
Generalize
after Dewett, 2017
11/15/2021 5
Generalize
after Dewett, 2017
11/15/2021 6
Develop
• Relative motion
• Classified as a “fault”
after Dewett, 2017
11/15/2021 7
Inference
• Classified as “normal faults”
• Seen in some geologic
settings, but not others
after Dewett, 2017
11/15/2021 8
Synthesis
• Normal faults are seen in
extensional settings
• May occur near subduction
zones (local extension)
• Conceptual understanding
and/or quantification of:
• Displacement
• Stress
• Deformation
after Dewett, 2017
11/15/2021 9
Evaluation
What types of faults are likely observed near a salt dome?
11/15/2021 10
Why is this important to understand?
• “Recently it has been claimed that unsupervised
classification of subsampled instantaneous attributes
achieves subseismic resolution.”
• “Increasing seismic resolution through classification
of subsampled attributes is suspect.”
• “Seismic interpreters should remain wary of seismic
attribute processes that promise subseismic
resolution.”
Barnes, 2017
11/15/2021 11
Outline
• Review of Seismic Attribute
Taxonomies
• Spectral Similarity Fault
Enhancement
• Efficiency Gains in Inversion-
based Interpretation
• Conclusions
• Motivation
• Historical Context
• Creating Useful Taxonomies
• Discussion
11/15/2021 12
Outline
• Review of Seismic Attribute
Taxonomies
• Spectral Similarity Fault
Enhancement
• Efficiency Gains in Inversion-
based Interpretation
• Conclusions
• Motivation
• Spectral Similarity Defined
• Case Study #1: High S:N
• Case Study #2: Low S:N
• Discussion
11/15/2021 13
Outline
• Review of Seismic Attribute
Taxonomies
• Spectral Similarity Fault
Enhancement
• Efficiency Gains in Inversion-
based Interpretation
• Conclusions
• Motivation
• Geologic Background
• Results
11/15/2021 14
Outline
• Review of Seismic Attribute
Taxonomies
• Spectral Similarity Fault
Enhancement
• Efficiency Gains in Inversion-
based Interpretation
• Conclusions
11/15/2021 15
Review of Seismic Attribute
Taxonomies
Motivation
11/15/2021 16
Review of Seismic Attribute Taxonomies
Motivation:
• Communication — There are two data analysis domains used when
discussing seismic attribute: exploratory data analysis (EDA) and
confirmatory data analysis (CDA). Communication is less efficient when
communicating across these domains.
• Machine learning — Algorithms take two broad forms: supervised and
unsupervised. Unsupervised algorithms require a user to be well versed in
seismic attribute theory, esp. how attributes are related to one another.
Supervised algorithms suffer from a similar bias, but it is less transparent.
11/15/2021 17
Review of Seismic Attribute Taxonomies
Motivation:
• Communication — There are two data analysis domains used when
discussing seismic attribute: exploratory data analysis (EDA) and
confirmatory data analysis (CDA). Communication is less efficient when
communicating across these domains.
• Machine learning — Algorithms take two broad forms: supervised and
unsupervised. Unsupervised algorithms require a user to be well versed in
seismic attribute theory, esp. how attributes are related to one another.
Supervised algorithms suffer from a similar bias, but it is less transparent.
11/15/2021 18
Review of Seismic Attribute
Taxonomies
Their Historical Use and as Communication Frameworks
11/15/2021 19
Review of Seismic Attribute Taxonomies
Data analysis domain: refers to a
method of analyzing data, which is
either exploratory or confirmatory in
nature
Exploratory Data Analysis: favored by
interpreters who are dominantly
conceptual; often described as “general
seismic interpreters” or similar
Confirmatory Data Analysis: favored
by interpreters who are dominantly
analytical; often described as
“quantitative seismic interpreters” or
similar
Notes:
1. These two terms represent
extremes on a gradient, where
many individuals occupy a mixture.
2. EDA techniques rely on qualitative
visual methods centered on
developing a hypothesis based on
observations and statistical
inferences
3. CDA techniques rely on predictions
from calculations centered on those
which represent deviations from a
model
after Tukey., 1962
11/15/2021 20
Review of Seismic Attribute Taxonomies
Data Analysis: refers to a method of
analyzing data, which is either
exploratory or confirmatory in nature
Exploratory Data Analysis: favored by
interpreters who are dominantly
conceptual; often described as “general
seismic interpreters” or similar
Confirmatory Data Analysis: favored
by interpreters who are dominantly
analytical; often described as
“quantitative seismic interpreters” or
similar
Notes:
1. These two terms represent
extremes on a gradient, where
many individuals occupy a mixture.
2. EDA techniques rely on qualitative
visual methods centered on
developing a hypothesis based on
observations and statistical
inferences
3. CDA techniques rely on predictions
from calculations centered on those
which represent deviations from a
model
after Tukey., 1962
11/15/2021 21
Review of Seismic Attribute Taxonomies
Dominant Taxonomic Schemes
• Taner et al., 1994
• Carter, 1995
• Brown, 1996
• Barnes, 1997
• Chen and Sidney, 1997
• Taner, 1999
• Taner, 2001
• Liner et al., 2004
• Randen and Sønneland, 2005
• Brown, 2011
• Barnes, 2016
• Marfurt, 2018
Excluded Usage Guides and Tables
• OpendTect Attribute Matrix
• OpendTect Attribute Table
• Paradise Attribute Usage Table
• Petrel Attribute Table
• Landmark Attribute Usage Guide
11/15/2021 22
Review of Seismic Attribute Taxonomies
• Taner et al., 1994
• Carter, 1995
• Brown, 1996
• Barnes, 1997
• Chen and Sidney, 1997
• Taner, 1999
• Taner, 2001
• Liner et al., 2004
• Randen and Sønneland, 2005
• Brown, 2011
• Barnes, 2016
• Marfurt, 2018
11/15/2021 23
Review of Seismic Attribute Taxonomies
• Taner et al., 1994
• Carter, 1995
• Brown, 1996
• Barnes, 1997
• Chen and Sidney, 1997
• Taner, 1999
• Taner, 2001
• Liner et al., 2004
• Randen and Sønneland, 2005
• Brown, 2011
• Barnes, 2016
• Marfurt, 2018
Themes
• Taner — interpretation
• Brown — signal properties
• Barnes —mathematics (signal
properties)
11/15/2021 24
Review of Seismic Attribute Taxonomies
• Taner et al., 1994
• Carter, 1995
• Brown, 1996
• Barnes, 1997
• Chen and Sidney, 1997
• Taner, 1999
• Taner, 2001
• Liner et al., 2004
• Randen and Sønneland, 2005
• Brown, 2011
• Barnes, 2016
• Marfurt, 2018
Themes
• Taner — interpretation
• Brown — signal properties
• Barnes —mathematics (signal
properties)
Consider both how you
understand seismic attributes, and
how your colleagues do.
Are they the same?
11/15/2021 25
Review of Seismic Attribute Taxonomies
• Taner et al., 1994
• Carter, 1995
• Brown, 1996
• Barnes, 1997
• Chen and Sidney, 1997
• Taner, 1999
• Taner, 2001
• Liner et al., 2004
• Randen and Sønneland, 2005
• Brown, 2011
• Barnes, 2016
• Marfurt, 2018
Themes
• Taner — interpretation
• Brown — signal properties
• Barnes —mathematics (signal
properties)
• Taner may describe attributes
toward scientists who favor EDA.
• While Brown and Barnes may
describe attributes toward
scientists who favor CDA.
11/15/2021 26
Why is this important to understand?
• At its core, a seismic attribute
taxonomy aims to communicate
ideas to an audience to ease
discussion and further research.
• Incorrect understandings of how
attributes relate to one another
in the three described domains
(interpretive, signal property,
and mathematical) have a high
change of applying attributes
incorrectly, incorrect
interpretations, and dubious
research.
• This translates to a loss of
resources (time, money, etc.)
due to misapplied techniques.
• Machine learning approaches
make it easier to misapply
techniques and algorithms,
while making it easier to get a
result.
11/15/2021 27
Why is this important to understand?
• At its core, a seismic attribute
taxonomy aims to communicate
ideas to an audience to ease
discussion and further research.
• Incorrect understandings of how
attributes relate to one another
in the three described domains
(interpretive, signal property,
and mathematical) have a high
change of applying attributes
incorrectly, incorrect
interpretations, and dubious
research.
• This translates to a loss of
resources (time, money, etc.)
due to misapplied techniques.
• Machine learning approaches
make it easier to misapply
techniques and algorithms,
while making it easier to get a
result.
11/15/2021 28
11/15/2021 29
Review of Seismic Attribute Taxonomies
Citation Factor is a value in the
range of 1–5 assigned to a particular
citation in an academic work such
that a particular work was
referenced:
• 1: in a reference list, but not in the
body (e.g, bibliography or additional
resources sections)
• 2: with several others in a list
reinforcing an assertion or statement
• 3: referring to a unique thought or
short quote
• 4: by itself, referring to a long thought,
definition, or significant quote
• 5: by itself and expounds on the
original idea, often drawing new
meaning
How is this concept applied?
• Raw citing works: 2,100
• Relevant citing works: 287
• Each relevant citing work was
assigned a citation factor.
• The following graphs use these
data.
11/15/2021 30
Review of Seismic Attribute Taxonomies
Citation Factor is a value in the
range of 1–5 assigned to a particular
citation in an academic work such
that a particular work was
referenced:
• 1: in a reference list, but not in the
body (e.g, bibliography or additional
resources sections)
• 2: with several others in a list
reinforcing an assertion or statement
• 3: referring to a unique thought or
short quote
• 4: by itself, referring to a long thought,
definition, or significant quote
• 5: by itself and expounds on the
original idea, often drawing new
meaning
How is this concept applied?
• Raw citing works: 2,100
• Relevant citing works: 287
• Each relevant citing work was
assigned a citation factor.
• The following graphs use these
data.
11/15/2021 31
Dewett et al., 2021
11/15/2021 32
Dewett et al., 2021
11/15/2021 33
Dewett et al., 2021
Observations and Hypotheses:
1. The count of citations by CF should
be higher for low CFs, because
higher CF (inferring more detailed)
works are less numerous.
2. The dominant group (low or high
CF) as a function of time, implies
the relative value of the work
toward the respective group.
3. In context, lower CF works tend to
be tangential topics of a more
general nature.
11/15/2021 34
Dewett et al., 2021
Observations and Hypotheses:
4. Hypothesis #1: High CFs leading
implies an acceptance among more
specialized authors.
5. Hypothesis #2: Low CFs leading
implies an acceptance among more
general authors.
6. Hypothesis #3: Mid CFs leading
implies a broad acceptance.
*Each situation may have a spillover to the remaining groups.
11/15/2021 35
Dewett et al., 2021
11/15/2021 36
Dewett et al., 2021
• Supports Hypothesis #2 (Low CFs lead)
• Supports a “spillover effect” from Low CFs to Mid CFs
to High CFs
11/15/2021 37
Dewett et al., 2021
11/15/2021 38
Dewett et al., 2021
• Supports Hypothesis #1 (High CFs lead)
11/15/2021 39
Dewett et al., 2021
11/15/2021 40
Dewett et al., 2021
• Supports Hypothesis #3 (Mid CFs lead)
• Supports a “spillover effect” from Mid CFs to High/Low
CFs
11/15/2021 41
Dewett et al., 2021
Historical Analysis —Main Takeaway
• EDA focused scientists, by definition, prefer conceptual models to
quantitative ones, while CDA focused scientists prefer theoretical and
mathematical models.
• EDA scientists likely favor Interpretive taxonomic systems owing to
the qualitative visual models such systems describe.
• CDA scientists likely favor Mathematical and Signal Property
taxonomic systems owing to the quantitative and theoretical
properties such systems describe.
11/15/2021 42
Dewett et al., 2021
Historical Analysis —Main Takeaway
• EDA focused scientists, by definition, prefer conceptual models to
quantitative ones, while CDA focused scientists prefer theoretical and
mathematical models.
• EDA scientists likely favor Interpretive taxonomic systems owing to
the qualitative visual models such systems describe.
• CDA scientists likely favor Mathematical and Signal Property
taxonomic systems owing to the quantitative and theoretical
properties such systems describe.
11/15/2021 43
Dewett et al., 2021
Historical Analysis —Main Takeaway
• EDA focused scientists, by definition, prefer conceptual models to
quantitative ones, while CDA focused scientists prefer theoretical and
mathematical models.
• EDA scientists likely favor Interpretive taxonomic systems owing to
the qualitative visual models such systems describe.
• CDA scientists likely favor Mathematical and Signal Property
taxonomic systems owing to the quantitative and theoretical
properties such systems describe.
11/15/2021 44
Review of Seismic Attribute
Taxonomies
Creating Useful Taxonomies
11/15/2021 45
What is a “Useful Taxonomy”
A useful seismic attribute taxonomy:
• Communicates to a wide audience
• Both EDA and CDA audiences (i.e., Interpretive Value, Signal Property,
and Mathematical)
• Facilitates communication between these domains
• Defines example attributes so that a reader can easily add new
or excluded attributes to the classification
• Clearly states any shortcomings or assumptions
11/15/2021 46
Dewett et al., 2021
It’s also helpful if it
provides a chart…
• Use 35 example attributes including:
• Instantaneous
• Derived
• Geometric
• Classification
• Machine Learning
• Inversion
• Provide cross-referenced charts and
definitions (in text)
11/15/2021 47
Dewett et al., 2021
Signal Property Domain
• Centered on amplitude, phase,
and frequency
• Used a hierarchical structure to
imply relationships
• Used colors to describe
relationship to signal property
• Provide for input dependent
attributes, which concerns
machine learning algorithms
11/15/2021 48
Dewett et al., 2021
11/15/2021 49
Dewett et al., 2021
Mathematical
Formulation Domain
• Focused on mathematical
formulation of the algorithm (or
likely formulation)
• Provides for “component
separation” and “dimensionality
reduction” techniques (not a
specific mathematical operation)
11/15/2021 50
Dewett et al., 2021
11/15/2021 51
Dewett et al., 2021
Interpretive Value
Domain
• Provides an overview of seismic
attribute applications
• Does not validate any specific
technique or application
• Focused on structural,
stratigraphic, fluid, and signal
anomaly divisions
11/15/2021 52
Dewett et al., 2021
11/15/2021 53
Review of Seismic Attribute
Taxonomies
Discussion
11/15/2021 54
Discussion
• Shown that data analysis domains are
a convenient tool to understand how
scientists use and understand seismic
attribute classification and, by
extension, the attributes themselves.
• Shown that there are three domains in
this context:
• Interpretive
• Signal Property
• Mathematical
• Shown that there is value (to
scientists) for a clear communication
path between each domain.
• Concluded with the creation of three
seismic attribute taxonomies.
• Clearly identified assumptions and
shortcomings.
• Provided example seismic attributes
that specifically allow for individuals to
extend the taxonomies to include new
and omitted attributes.
• This work provides a solid foundation
on which scientists and professionals
can use seismic attributes more
effectively in their current work and in
future machine learning workflows.
11/15/2021 55
Discussion
• Shown that data analysis domains are
a convenient tool to understand how
scientists use and understand seismic
attribute classification and, by
extension, the attributes themselves.
• Shown that there are three domains in
this context:
• Interpretive
• Signal Property
• Mathematical
• Shown that there is value (to
scientists) for a clear communication
path between each domain.
• Concluded with the creation of three
seismic attribute taxonomies.
• Clearly identified assumptions and
shortcomings.
• Provided example seismic attributes
that specifically allow for individuals to
extend the taxonomies to include new
and omitted attributes.
• This work provides a solid foundation
on which scientists and professionals
can use seismic attributes more
effectively in their current work and in
future machine learning workflows.
11/15/2021 56
Outline
• Review of Seismic Attribute
Taxonomies
• Spectral Similarity Fault
Enhancement
• Efficiency Gains in Inversion-
based Interpretation
• Conclusions
• Motivation
• Spectral Similarity Defined
• Case Study #1: High S:N
• Case Study #2: Low S:N
• Discussion
11/15/2021 57
Spectral Similarity Fault
Enhancement
Motivation
11/15/2021 58
Spectral Similarity Fault Enhancement
Motivation:
• Edge Detection Enhancement — Professional and academic seismic
interpreters have long used edge detection attributes for fault identification
and interpretation (1995) and their enhancement (2001).
• Computer Based Interpretation — Ideally, fault interpretation could be
automated via a computer extraction of faults into fault planes. The quality
of the input attribute is the current limiting factor. While newer machine
learning approaches are beginning to become commonplace, the spectral
similarity extends conventional attributes by novel filtering and attribute
choice.
11/15/2021 59
Spectral Similarity Fault Enhancement
Motivation:
• Edge Detection Enhancement — Professional and academic seismic
interpreters have long used edge detection attributes for fault identification
and interpretation (1995) and their enhancement (2001).
• Computer Based Interpretation — Ideally, fault interpretation could be
automated via a computer extraction of faults into fault planes. The quality
of the input attribute is the current limiting factor. While newer machine
learning approaches are beginning to become commonplace, the spectral
similarity extends conventional attributes by novel filtering and attribute
choice.
11/15/2021 60
Spectral Similarity Fault
Enhancement
Spectral Similarity Defined
11/15/2021 61
Quick Definitions
• Swarm intelligence — a family of
algorithms that use decentralized self-
organization to perform a task
(examples include particle swarm
optimization, ant colony optimization,
or differential evolution).
• Machine learning — a subdiscipline
of computer science that consists of
algorithms that can learn from and
make predictions on data (examples
include artificial neural networks, self-
organized maps, and k-means
clustering).
• Spectral similarity — refers to the
workflow described in this paper for
similarity enhancement.
• Fault Enhanced similarity — the
patented filtering process for similarity
enhancement described by Dorn et al.
(2012)
• Similarity — a family of edge
detection attributes that include
coherence, variance, the Sobel filter,
or similar algorithms.
11/15/2021 62
Quick Definitions
• Swarm intelligence — a family of
algorithms that use decentralized self-
organization to perform a task
(examples include particle swarm
optimization, ant colony optimization,
or differential evolution).
• Machine learning — a subdiscipline
of computer science that consists of
algorithms that can learn from and
make predictions on data (examples
include artificial neural networks, self-
organized maps, and k-means
clustering).
• Spectral similarity — refers to the
workflow described in this paper for
similarity enhancement.
• Fault Enhanced similarity — the
patented filtering process for similarity
enhancement described by Dorn et al.
(2012)
• Similarity — a family of edge
detection attributes that include
coherence, variance, the Sobel filter,
or similar algorithms.
11/15/2021 63
Quick Definitions
• Swarm intelligence — a family of
algorithms that use decentralized self-
organization to perform a task
(examples include particle swarm
optimization, ant colony optimization,
or differential evolution).
• Machine learning — a subdiscipline
of computer science that consists of
algorithms that can learn from and
make predictions on data (examples
include artificial neural networks, self-
organized maps, and k-means
clustering).
• Spectral similarity — refers to the
workflow described in this paper for
similarity enhancement.
• Fault Enhanced similarity — the
patented filtering process for similarity
enhancement described by Dorn et al.
(2012).
• Similarity — a family of edge
detection attributes that include
coherence, variance, the Sobel filter,
or similar algorithms.
11/15/2021 64
Quick Definitions
• Swarm intelligence — a family of
algorithms that use decentralized self-
organization to perform a task
(examples include particle swarm
optimization, ant colony optimization,
or differential evolution).
• Machine learning — a subdiscipline
of computer science that consists of
algorithms that can learn from and
make predictions on data (examples
include artificial neural networks, self-
organized maps, and k-means
clustering).
• Spectral similarity — refers to the
workflow described in this paper for
similarity enhancement.
• Fault Enhanced similarity — the
patented filtering process for similarity
enhancement described by Dorn et al.
(2012).
• Similarity — a family of edge
detection attributes that include
coherence, variance, the Sobel filter,
or similar algorithms.
11/15/2021 65
Quick Definitions
• Swarm intelligence — a family of
algorithms that use decentralized self-
organization to perform a task
(examples include particle swarm
optimization, ant colony optimization,
or differential evolution).
• Machine learning — a subdiscipline
of computer science that consists of
algorithms that can learn from and
make predictions on data (examples
include artificial neural networks, self-
organized maps, and k-means
clustering).
• Spectral similarity — refers to the
workflow described in this paper for
similarity enhancement.
• Fault Enhanced similarity — the
patented filtering process for similarity
enhancement described by Dorn et al.
(2012).
• Similarity — a family of edge
detection attributes that include
coherence, variance, the Sobel filter,
or similar algorithms.
11/15/2021 66
Note on Comparisons
The case studies will
compare the spectral
similarity to:
• A well-regarded commercial
fault enhancement method
• A second prominent fault
enhancement method
based on swarm
intelligence algorithms
• A traditional eigen structure
similarity
Dewett and Henza, 2016
11/15/2021 67
The Spectral Similarity
has been favored by 20+
experienced
geoscientists (15–20-
year average) with
structural geology
training and/or over a
decade of seismic
interpretation experience
Dewett and Henza, 2016
Additional Testing
11/15/2021 68
filter seismic spectral
decomposition
pick spec-d
volumes
run seismic
attributes
optimize dip
response
filter and
smooth edges
combine
volumes
final result
General Workflow
Dewett and Henza, 2016
11/15/2021 69
Example Workflow
Step 1: Filter data as needed
Step 2: Decompose data into
spectra
Step 3: Compute seismic
attributes
Step 4: Connect & interpolate
Step 5: Filter and smooth
Step 6: Combine volumes
• Addition
• Machine Learning
Step 7: Dip filter faults
Dewett and Henza, 2016
11/15/2021 70
Example Workflow
Step 1: Filter data as needed
Step 2: Decompose data into
spectra
Step 3: Compute seismic
attributes
Step 4: Connect & interpolate
Step 5: Filter and smooth
Step 6: Combine volumes
• Addition
• Machine Learning
Step 7: Dip filter faults
Dewett and Henza, 2016
43 Hz
37 Hz
20 Hz
11/15/2021 71
Example Workflow
Step 1: Filter data as needed
Step 2: Decompose data into
spectra
Step 3: Compute seismic
attributes
Step 4: Connect & interpolate
Step 5: Filter and smooth
Step 6: Combine volumes
• Addition
• Machine Learning
Step 7: Dip filter faults
Dewett and Henza, 2016
11/15/2021 72
Example Workflow
Step 1: Filter data as needed
Step 2: Decompose data into
spectra
Step 3: Compute seismic
attributes
Step 4: Connect & interpolate
Step 5: Filter and smooth
Step 6: Combine volumes
• Addition
• Machine Learning
Step 7: Dip filter faults
Dewett and Henza, 2016
11/15/2021 73
Example Workflow
Step 1: Filter data as needed
Step 2: Decompose data into
spectra
Step 3: Compute seismic
attributes
Step 4: Connect & interpolate
Step 5: Filter and smooth
Step 6: Combine volumes
• Addition
• Machine Learning
Step 7: Dip filter faults
Dewett and Henza, 2016
11/15/2021 74
Example Workflow
Step 1: Filter data as needed
Step 2: Decompose data into
spectra
Step 3: Compute seismic
attributes
Step 4: Connect & interpolate
Step 5: Filter and smooth
Step 6: Combine volumes
• Addition
• Machine Learning
Step 7: Dip filter faults
Dewett and Henza, 2016
11/15/2021 75
Example Workflow
Step 1: Filter data as needed
Step 2: Decompose data into
spectra
Step 3: Compute seismic
attributes
Step 4: Connect & interpolate
Step 5: Filter and smooth
Step 6: Combine volumes
• Addition
• Machine Learning
Step 7: Dip filter faults
Dewett and Henza, 2016
11/15/2021 76
Example Workflow
Step 1: Filter data as needed
Step 2: Decompose data into
spectra
Step 3: Compute seismic
attributes
Step 4: Connect & interpolate
Step 5: Filter and smooth
Step 6: Combine volumes
• Addition
• Machine Learning
Step 7: Dip filter faults
Dewett and Henza, 2016
11/15/2021 77
Spectral Similarity
Workflow
• Multistep and multistage
• Incorporates spectral decomposition
• Bandlimited phase and amplitude
• Bandlimited geometric attributes
• Uses any number of frequency
volumes (variable depending on data)
• Uses novel approach to attribute
filtering
• Combines many input volumes into a
single output fault probability attribute
• Optionally, uses machine learning
classification approaches
• Provides a clean input for computer-
based fault extraction
Dewett and Henza, 2016
11/15/2021 78
Spectral Similarity Fault
Enhancement
Case Study #1: High S:N
11/15/2021 79
Fault Enhanced Similarity and
Spectral Similarity
Dewett and Henza, 2016
11/15/2021 80
Fault Enhanced Similarity and
Spectral Similarity
Improved fault connectivity (yellow arrows) and more reasonable fault dip (red rectangle) in the Spectral Similarity Volume
Dewett and Henza, 2016
11/15/2021 81
Fault Interpretation Using Spectral Similarity
• Useful in manual fault
interpretation
• Computer based fault
extraction
• Fault QC and refinement
Dewett and Henza, 2016
11/15/2021 82
Automatic Fault Interpretation Using
Spectral Similarity
• 332 faults total
• 33% (108 faults) require no edits (orange)
• 63% (209 faults) require basic fault splitting
(blue)
• 4% (15 faults) mesh edits only (red)
Dewett and Henza, 2016
11/15/2021 83
Spectral Similarity with Peak Frequency
Spectral Similarity communicates structural information, while peak frequency communicates lithological and
stratigraphic information. This yields a more complete geologic understanding.
Dewett and Henza, 2016
11/15/2021 84
Range comparison: Fault Enhanced Similarity
and Spectral Similarity
Dewett and Henza, 2016
Slide 85
Spectral Similarity Fault
Enhancement
Case Study #2: Low S:N
11/15/2021 86
Dewett and Henza, 2016
• Land survey
• Variable fold
• High random noise
contamination
• Filtered with
structure-oriented
noise filters in
multiple passes
Case Study #2
11/15/2021 87
Similarity compared to Spectral Similarity
Dewett and Henza, 2016
11/15/2021 88
Spectral Similarity Fault
Enhancement
Discussion
11/15/2021 89
Spectral Similarity…
• Integrates any seismic attribute and spectral decomposition,
• Will improve computer based and human driven interpretation workflows,
• Will enhance both small faults and large faults,
• Can be customized using machine learning classification algorithms to
produce a volume that retains frequency information.
Dewett and Henza, 2016
11/15/2021 90
Why is this important to understand?
• The first major job of convolutional
neural network (CNN) machine
learning is fault attribute generation.
• For a CNN to be useful, it must
outperform a human interpreter
(speed and quality).
• CNN algorithms require a model
based on human interpretation or
synthetic fault data.
• CNN performance in low signal areas
is poor.
• CNN performance in compressional
and transpressional environments is
poor.
• For a CNN to be successful, it must:
• Outperform a coherence in an arbitrary
data set.
• Outperform the best fault
enhancements in arbitrary data sets.
• Produce a suitable volume for
computer-based fault interpretation.
• This work…
• Highlights the need for fault
skeletonization (a process which has
become common as a post-processing
step on new CNN algorithms.
• Highlights the usefulness of retaining
frequency information using
unsupervised classification.
11/15/2021 91
Why is this important to understand?
• The first major job of convolutional
neural network (CNN) machine
learning is fault attribute generation.
• For a CNN to be useful, it must
outperform a human interpreter
(speed and quality).
• CNN algorithms require a model
based on human interpretation or
synthetic fault data.
• CNN performance in low signal areas
is poor.
• CNN performance in compressional
and transpressional environments is
poor.
• For a CNN to be successful, it must:
• Outperform a coherence in an arbitrary
data set.
• Outperform the best fault
enhancements in arbitrary data sets.
• Produce a suitable volume for
computer-based fault interpretation.
• This work…
• Highlights the need for fault
skeletonization (a process which has
become common as a post-processing
step on new CNN algorithms.
• Highlights the usefulness of retaining
frequency information using
unsupervised classification.
11/15/2021 92
Why is this important to understand?
• The first major job of convolutional
neural network (CNN) machine
learning is fault attribute generation.
• For a CNN to be useful, it must
outperform a human interpreter
(speed and quality).
• CNN algorithms require a model
based on human interpretation or
synthetic fault data.
• CNN performance in low signal areas
is poor.
• CNN performance in compressional
and transpressional environments is
poor.
• For a CNN to be successful, it must:
• Outperform a coherence in an arbitrary
data set.
• Outperform the best fault
enhancements in arbitrary data sets.
• Produce a suitable volume for
computer-based fault interpretation.
• This work…
• Highlights the need for fault
skeletonization (a process which has
become common as a post-processing
step on new CNN algorithms.
• Highlights the usefulness of retaining
frequency information using
unsupervised classification.
11/15/2021 93
Why is this important to understand?
• The first major job of convolutional
neural network (CNN) machine
learning is fault attribute generation.
• For a CNN to be useful, it must
outperform a human interpreter
(speed and quality).
• CNN algorithms require a model
based on human interpretation or
synthetic fault data.
• CNN performance in low signal areas
is poor.
• CNN performance in compressional
and transpressional environments is
poor.
• For a CNN to be successful, it must:
• Outperform a coherence in an arbitrary
data set.
• Outperform the best fault
enhancements in arbitrary data sets.
• Produce a suitable volume for
computer-based fault interpretation.
• This work…
• Highlights the need for fault
skeletonization (a process which has
become common as a post-processing
step on new CNN algorithms.
• Highlights the usefulness of retaining
frequency information using
unsupervised classification.
11/15/2021 94
Why is this important to understand?
• The first major job of convolutional
neural network (CNN) machine
learning is fault attribute generation.
• For a CNN to be useful, it must
outperform a human interpreter
(speed and quality).
• CNN algorithms require a model
based on human interpretation or
synthetic fault data.
• CNN performance in low signal areas
is poor.
• CNN performance in compressional
and transpressional environments is
poor.
• For a CNN to be successful, it must:
• Outperform a coherence in an arbitrary
data set.
• Outperform the best fault
enhancements in arbitrary data sets.
• Produce a suitable volume for
computer-based fault interpretation.
• This work…
• Highlights the need for fault
skeletonization (a process which has
become common as a post-processing
step on new CNN algorithms.
• Highlights the usefulness of retaining
frequency information using
unsupervised classification.
11/15/2021 95
Why is this important to understand?
• The first major job of convolutional
neural network (CNN) machine
learning is fault attribute generation.
• For a CNN to be useful, it must
outperform a human interpreter
(speed and quality).
• CNN algorithms require a model
based on human interpretation or
synthetic fault data.
• CNN performance in low signal areas
is poor.
• CNN performance in compressional
and transpressional environments is
poor.
• For a CNN to be successful, it must:
• Outperform a coherence in an arbitrary
data set.
• Outperform the best fault
enhancements in arbitrary data sets.
• Produce a suitable volume for
computer-based fault interpretation.
• This work…
• Highlights the need for fault
skeletonization (a process which has
become common as a post-processing
step on new CNN algorithms.
• Highlights the usefulness of retaining
frequency information using
unsupervised classification.
11/15/2021 96
Why is this important to understand?
• The first major job of convolutional
neural network (CNN) machine
learning is fault attribute generation.
• For a CNN to be useful, it must
outperform a human interpreter
(speed and quality).
• CNN algorithms require a model
based on human interpretation or
synthetic fault data.
• CNN performance in low signal areas
is poor.
• CNN performance in compressional
and transpressional environments is
poor.
• For a CNN to be successful, it must:
• Outperform a coherence in an arbitrary
data set.
• Outperform the best fault
enhancements in arbitrary data sets.
• Produce a suitable volume for
computer-based fault interpretation.
• This work…
• Highlights the need for fault
skeletonization (a process which has
become common as a post-processing
step on new CNN algorithms.
• Highlights the usefulness of retaining
frequency information using
unsupervised classification.
11/15/2021 97
Why is this important to understand?
• The first major job of convolutional
neural network (CNN) machine
learning is fault attribute generation.
• For a CNN to be useful, it must
outperform a human interpreter
(speed and quality).
• CNN algorithms require a model
based on human interpretation or
synthetic fault data.
• CNN performance in low signal areas
is poor.
• CNN performance in compressional
and transpressional environments is
poor.
• For a CNN to be successful, it must:
• Outperform a coherence in an arbitrary
data set.
• Outperform the best fault
enhancements in arbitrary data sets.
• Produce a suitable volume for
computer-based fault interpretation.
• This work…
• Highlights the need for fault
skeletonization (a process which has
become common as a post-processing
step on new CNN algorithms.
• Highlights the usefulness of retaining
frequency information using
unsupervised classification.
11/15/2021 98
Why is this important to understand?
• The first major job of convolutional
neural network (CNN) machine
learning is fault attribute generation.
• For a CNN to be useful, it must
outperform a human interpreter
(speed and quality).
• CNN algorithms require a model
based on human interpretation or
synthetic fault data.
• CNN performance in low signal areas
is poor.
• CNN performance in compressional
and transpressional environments is
poor.
• For a CNN to be successful, it must:
• Outperform a coherence in an arbitrary
data set.
• Outperform the best fault
enhancements in arbitrary data sets.
• Produce a suitable volume for
computer-based fault interpretation.
• This work…
• Highlights the need for fault
skeletonization (a process which has
become common as a post-processing
step on new CNN algorithms.
• Highlights the usefulness of retaining
frequency information using
unsupervised classification.
11/15/2021 99
Why is this important to understand?
• The first major job of convolutional
neural network (CNN) machine
learning is fault attribute generation.
• For a CNN to be useful, it must
outperform a human interpreter
(speed and quality).
• CNN algorithms require a model
based on human interpretation or
synthetic fault data.
• CNN performance in low signal areas
is poor.
• CNN performance in compressional
and transpressional environments is
poor.
• For a CNN to be successful, it must:
• Outperform a coherence in an arbitrary
data set.
• Outperform the best fault
enhancements in arbitrary data sets.
• Produce a suitable volume for
computer-based fault interpretation.
• This work…
• Highlights the need for fault
skeletonization (a process which has
become common as a post-processing
step on new CNN algorithms.
• Highlights the usefulness of retaining
frequency information using
unsupervised classification.
11/15/2021 100
Outline
• Review of Seismic Attribute
Taxonomies
• Spectral Similarity Fault
Enhancement
• Efficiency Gains in Inversion-
based Interpretation
• Conclusions
• Motivation
• Background
• SOM with Inversion Data
• Results
11/15/2021 101
Efficiency Gains in Inversion-
based Interpretation
Motivation
11/15/2021 102
Motivation
• Professional seismic interpreters often lack experience in quantitative
interpretation (QI) workflows.
• In areas where QI analyses drive prospect identification and prioritization,
QI specialists are often a scares resource.
• Can a simple classification by a self-organizing map (SOM) algorithm aid
inexperienced or less specialized professionals by identifying QI targets?
• Can inexperienced or less specialized professionals reliably and correctly
identify leads that can be prioritized to QI specialists?
11/15/2021 103
Efficiency Gains in Inversion-
based Interpretation
Background
11/15/2021 104
SOM: Briefly Explained
Suppose we have a table of data…
• Let the columns be different
attributes
• Let the rows be what we want to
classify
• We can classify each object by the
attributes that are ascribed to
them.
11/15/2021 105
SOM: Briefly Explained
• The system (SOM) will map the
input attributes onto an output
space,
• Generate an initial state (e.g.,
pseudo random or predefined)
• Then iterate the following:
• Select training data at random
• Compute the winning neuron
• Update all neurons
• Stop when input is classified, or
maximum epochs has been reached
• In general, the neurons are
interconnected in a defined
topology.
11/15/2021 106
Seismic Inversion: Briefly Explained
Seismic data
Well data
Seismic Inversion
Geophysical
Properties
(AI, SI, ρ)
Rock Physics
Model
Geological
Properties (fluid,
lithology, etc.)
Well data
11/15/2021 107
Seismic Inversion: Briefly Explained
Relative
• Poor fidelity over long time intervals (a
function of the frequency content of the
data)
• Scaling of the output values is only
proportional to the real values
• Suffers more from tuning effects
• Easy to compute
• No initial model dependence
Absolute
• Higher fidelity over long time intervals
• Correct (or more correct) output scaling
• Suffers far less from tuning effects
• Difficult to compute (both in
computational time and knowledge)
• Dependent on initial model
11/15/2021 108
Seismic Inversion: Briefly Explained
Coloured Inversion
• Tends to outperform other relative inversion methods (e.g.n RunSum or Recursive
Inversion)
• Attempts spectrum flattening through wavelet estimation
• Matches the output seismic impedance spectrum to the well impedance spectrum
• Quick and cheap
• Simple and Robust
• No background model
• No wavelet derivation
• Well data are only used for spectral matching
11/15/2021 109
Efficiency Gains in Inversion-
based Interpretation
SOM with Inversion Data
11/15/2021 110
Relative Seismic Inversion Results
• Given multi-volume results from seismic
inversion, typically specialists classify the
results based on their prior knowledge and
experience combined with an understanding
of the rock physics.
• However, specialists may be limited in time
and more so in supply.
• Therefore, if a more unbiased and
repeatable approach can be used to classify
the inversion response, efficiency can be
gained through specialist engagement at
key times with specific areas of interest.
11/15/2021 111
Relative Seismic Inversion Results
• Given multi-volume results from seismic
inversion, typically specialists classify the
results based on their prior knowledge and
experience combined with an understanding
of the rock physics.
• However, specialists may be limited in time
and more so in supply.
• Therefore, if a more unbiased and
repeatable approach can be used to classify
the inversion response, efficiency can be
gained through specialist engagement at
key times with specific areas of interest.
11/15/2021 112
• Begin with inversion
products,
• Classify the results with a
computer-based classification
algorithm,
• Scan the results for
anomalous classifications,
• Focus the specialist’s
engagement on anomalous
areas.
11/15/2021 113
After SOM Classification
11/15/2021 114
Understanding SOM Classes
11/15/2021 115
Understanding SOM Classes
AI
GI
11/15/2021 116
Understanding SOM Classes
11/15/2021 117
Efficiency Gains in Inversion-
based Interpretation
Results
11/15/2021 118
Effect of Adding Classes
Most anomalous class
(of 16). Determined after a scan of
the data one class at a time.
Using more classes can allow for
finer distinctions between different
data points.
11/15/2021 119
Effect of Adding Classes
• Five classes (of 64)
that show the most
anomalous events in
the data.
11/15/2021 120
Effect of Adding Classes
• Five classes (of 64)
that show the most
anomalous events in
the data.
11/15/2021 121
Most Significant Anomoly
11/15/2021 122
Results
By leveraging a self-organized map and inversion products
together, it is possible:
• To more rapidly localize interpretation in a dataset while
leveraging a higher degree of geologic understanding typically
found in prospect interpreters (who commonly lack in-depth QI
knowledge).
• To make potentially substantial gains in both efficiency and
interpretation quality.
• To increase the engagement of non-specialists.
11/15/2021 123
Seismic Attributes:
Taxonomic Classification for Pragmatic Machine Learning
Application
Conclusions
11/15/2021 124
Conclusions
• Explored the history of seismic attribute classification and taxonomy.
• Surveyed every major influential system
• Analyzed each taxonomic system
• Provided an update that centers on data analysis
• Explored a method of fault enhancement that centers on spectral
decomposition and novel filtering techniques.
• Works well in noisy and clean data
• Can integrate any edge detecting attribute
• Identifies fault skeletonization as an important post-processing step
• Suggest a method of volume combination using unsupervised machine
learning
11/15/2021 125
Conclusions
• Investigated the use of SOMs (unsupervised machine learning)
applied to a seismic inversion.
• Identifies an opportunity for efficiency gains by non-specialists.
• Suggests a method of quality control for input attributes prior to
unsupervised learning.
11/15/2021 126
Questions?
11/15/2021 127
References
(Review of Seismic Attribute Taxonomies)
Adams, D., and D. Markus, 2013, Systematic error in instantaneous attributes: SEG Technical Program Expanded Abstracts, 1363–1367.
Aki, K., and P. G. Richards, 2009, Quantitative Seismology, Second Edition.: University Science Books.
Al-Dossary, S., and K. J. Marfurt, 2006, 3D volumetric multispectral estimates of reflector curvature and rotation: Geophysics, 71, P41–P51.
Bahorich, M. S., and S. L. Farmer, 1995, 3‐D seismic discontinuity for faults and stratigraphic features: The coherence cube: SEG Technical Program
Expanded Abstracts, 93–96.
Balch, A. H., 1971, Color Sonagrams: A New Dimension in Seismic Data Interpretation: Geophysics, 36, 1074–1098.
Barnes, A., 2006, Too many seismic attributes: CSEG Recorder, 31, 40–45.
Barnes, A., 2016, Handbook of Poststack Seismic Attributes: Society of Exploration Geophysicists.
Barnes, A., 2017, Instantaneous attributes and subseismic resolution: SEG Technical Program Expanded Abstracts, 2018–2022.
Barnes, A. E., 1993, Instantaneous spectral bandwidth and dominant frequency with applications to seismic reflection data: Geophysics, 58, 419–428.
Barnes, A. E., 1997, Genetic classification of complex seismic trace attributes: .
Bengio, Y., A. Courville, and P. Vincent, 2013, Representation learning: a review and new perspectives: IEEE Transactions on Pattern Analysis and
Machine Intelligence, 35, 1798–1828.
11/15/2021 128
References
(Review of Seismic Attribute Taxonomies)
Bishop, C. M., M. Svensén, and C. K. I. Williams, 1998, Developments of the generative topographic mapping: Neurocomputing, 21, 203–224.
Bodine, J. H., 1984, Waveform analysis with seismic attributes: SEG Technical Program Expanded Abstracts, 69, 505–509.
Brown, A., 1996, Seismic attributes and their classification: The Leading Edge, 15, 1090–1090.
Brown, A. R., 2011, Interpretation of Three-Dimensional Seismic Data: American Association of Petroleum Geologists.
Carnot, L. N. M., 1803, Géométrie de Position: J. B. M. Duprat.
Carter, D. C., 1995, Use of Windowed Seismic Attributes in 3D Seismic Facies Analysis and Pattern Recognition: International Symposium on Sequence
Stratigraphy, 73–87.
Castagna, J. P., and M. M. Backus, 1993, Offset Dependent Reflectivity-Theory and Practice of AVO (Investigations in Geophysics No. 8) (Investigations
in Geophysics, Vol 8), Pristine Condition Like New Edition.: Society Of Exploration Geophysicists.
Chen, Q., and S. Sidney, 1997a, Advances in seismic attribute technology: SEG Technical Program Expanded Abstracts.
Chen, Q., and S. Sidney, 1997b, Seismic attribute technology for reservoir forecasting and monitoring: The Leading Edge, 16, 445–448.
Chopra, S., and K. J. Marfurt, 2007, Seismic Attributes for Prospect Identification and Reservoir Characterization: Society of Exploration Geophysicists.
11/15/2021 129
References
(Review of Seismic Attribute Taxonomies)
Chopra, S., and J. P. Castagna, 2014, AVO: SEG Investigations in Geophysics 16: Society of Exploration Geophysicists.
Chopra, S., and K. J. Marfurt, 2019, Multispectral, multiazimuth, and multioffset coherence attribute applications: Interpretation, 7, SC21–SC32.
Comon, P., 1994, Independent component analysis, A new concept? Signal Processing, 36, 287–314.
Cooke, D. A., and W. A. Schneider, 1983, Generalized linear inversion of reflection seismic data: Geophysics, 48, 665–676.
Costa, F. A., C. R. Suarez, D. J. Sarzenski, and C. C. Ferreira Guedes, 2007, Using Seismic Attributes in Petrophysical Reservoir Characterization: Latin
American & Caribbean Petroleum Engineering Conference.
Dao, T., and K. J. Marfurt, 2011, The value of visualization with more than 256 colors: SEG Technical Program Expanded Abstracts, 941–945.
Desodt, G., 1994, Complex independent components analysis applied to the separation of radar signals: Proceedings of EUSIPCO-90, 665–668.
Forkman, J., J. Josse, and H.-P. Piepho, 2019, Hypothesis Tests for Principal Component Analysis When Variables are Standardized: Journal of
Agricultural, Biological, and Environmental Statistics, 24, 289–308.
Gao, D., 2011, Latest developments in seismic texture analysis for subsurface structure, facies, and reservoir characterization: A review: Geophysics, 76,
W1–W13.
Gassaway, G. S., and H. J. Richgels, 1983, SAMPLE: seismic amplitude measurement for primary lithology estimation: SEG Technical Program
Expanded Abstracts 1983, 39, 610–613.
11/15/2021 130
References
(Review of Seismic Attribute Taxonomies)
Gersztenkorn, A., and K. J. Marfurt, 1996, Eigenstructure based coherence computations: SEG Technical Program Expanded Abstracts, 328–331.
Gersztenkorn, A., and K. J. Marfurt, 1999, Eigenstructure-based coherence computations as an aid to 3-D structural and stratigraphic mapping:
Geophysics, 64, 1468–1479.
Golub, G. H., 1996, Matrix Computations, 3rd ed..: Johns Hopkins University Press.
Guo, H., S. Lewis, and K. J. Marfurt, 2008, Mapping multiple attributes to three- and four-component color models — A tutorial: Geophysics, 73, W7–
W19.
Hampson, D., T. Todorov, and B. Russell, 2000, Using multi-attribute transforms to predict log properties from seismic data: Exploration Geophysics, 31,
481–487.
Hampson, D., J. Schuelke, and J. Quirein, 2001, Use of multiattribute transforms to predict log properties from seismic data: Geophysics, 66, 220–236.
Haralick, R. M., K. Shanmugam, and I. Dinstein, 1973, Textural Features for Image Classification: IEEE Transactions on Systems, Man, and Cybernetics,
SMC-3, 610–621.
Hart, B. S., 2008, Channel detection in 3-D seismic data sing sweetness: AAPG Bulletin, 92, 733–742.
Hilterman, F. J., 2001, Seismic Amplitude Interpretation, Distinguished Instructor Short Course: Society of Exploration Geophysicists.
Hinton, G. E., S. Osindero, and Y.-W. Teh, 2006, A fast learning algorithm for deep belief nets: Neural Computation, 18, 1527–1554.
Hsu, F.-H., 2004, Behind Deep Blue: Building the Computer That Defeated the World Chess Champion: Princeton University Press.
11/15/2021 131
References
(Review of Seismic Attribute Taxonomies)
Joblove, G. H., and D. Greenberg, 1978, Color spaces for computer graphics: Proceedings of the 5th Annual Conference on Computer Graphics and
Interactive Techniques, 20–25.
Jouppi, N. P., C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P.-L. Cantin, C. Chao, C.
Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R.
Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu,
K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N.
Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G.
Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, and D. H. Yoon, 2017, In-Datacenter Performance Analysis of a
Tensor Processing Unit: Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12.
Kalkomey, C., 1997, Potential risks when using seismic attributes as predictors of reservoir properties: Leading Edge, 16, 247–251.
Kim, Y., R. Hardisty, and K. J. Marfurt, 2019, Attribute selection in seismic facies classification: Application to a Gulf of Mexico 3D seismic survey and the
Barnett Shale: Interpretation, 7, SE281–SE297.
11/15/2021 132
References
(Review of Seismic Attribute Taxonomies)
Klokov, A., and S. Fomel, 2012, Separation and imaging of seismic diffractions using migrated dip-angle gathers: Geophysics, 77, S131–S143.
Kohonen, T., 1982, Self-organized formation of topologically correct feature maps: Biological Cybernetics, 43, 59–69.
Kohonen, T., 2001, Self-Organizing Maps: Springer, Berlin, Heidelberg.
Kohonen, T., and T. Honkela, 2007, Kohonen network: Scholarpedia Journal, 2, 1568.
Krizhevsky, A., I. Sutskever, and G. E. Hinton, 2017, ImageNet classification with deep convolutional neural networks: Communications of the ACM, 60,
84–90.
Latimer, R. B., R. Davidson, and P. van Riel, 2000, An interpreter’s guide to understanding and working with seismic-derived acoustic impedance data:
Leading Edge, 19, 242–256.
LeCun, Y., Y. Bengio, and G. Hinton, 2015, Deep learning: Nature, 521, 436–444.
Lecun, Y., L. Bottou, Y. Bengio, and P. Haffner, 1998, Gradient-based learning applied to document recognition: Proceedings of the IEEE, 86, 2278–2324.
Liner, C., 2016, Elements of 3D Seismology: Society of Exploration Geophysicists.
Liner, C., C. Li, A. Gersztenkorn, and J. Smythe, 2004, SPICE: A new general seismic attribute: SEG Technical Program Expanded Abstracts, 433–436.
Liu, J., and K. Marfurt, 2007, Multicolor display of spectral attributes: Leading Edge, 26, 268–271.
Lubo-Robles, D., and K. J. Marfurt, 2018, Unsupervised seismic facies classification using independent component analysis: SEG Technical Program
Expanded Abstracts, 2, 1603–1607.
11/15/2021 133
References
(Review of Seismic Attribute Taxonomies)
Luo, Y., W. G. Higgs, and W. S. Kowalik, 1996, Edge detection and stratigraphic analysis using 3D seismic data: SEG Technical Program Expanded
Abstracts, 324–327.
Luo, Y., S. Al-Dossary, M. Marhoon, and M. Alfaraj, 2003, Generalized Hilbert transform and its applications in geophysics: Leading Edge, 22, 198–202.
M. Turhan Taner, Naum M. Derzhi, Joel D. Walls, 2002, Method for generating an estimate of lithological characteristics of a region of the earth’s
subsurface: US Patent.
Marfurt, K., 2006, Robust estimates of 3D reflector dip and azimuth: Geophysics, 71, P29–P40.
Marfurt, K., 2015, Techniques and best practices in multiattribute display: Interpretation, 3, B1–B23.
Marfurt, K. J., 2018, Seismic Attributes as the Framework for Data Integration Throughout the Oilfield Life Cycle: Society of Exploration Geophysicists.
Marfurt, K. J., R. L. Kirlin, S. L. Farmer, and M. S. Bahorich, 1998, 3-D seismic attributes using a semblance‐based coherency algorithm: Geophysics, 63,
1150–1165.
Markidis, S., S. W. D. Chien, E. Laure, I. B. Peng, and J. S. Vetter, 2018, NVIDIA Tensor Core Programmability, Performance Precision: 2018 IEEE
International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 522–531.
11/15/2021 134
References
(Review of Seismic Attribute Taxonomies)
Meldahl, P., R. Heggland, B. Bril, and P. de Groot, 2001, Identifying faults and gas chimneys using multiattributes and neural networks: Leading Edge, 20,
474–482.
Menke, W., 2018, Geophysical Data Analysis: Discrete Inverse Theory: Academic Press.
Nanni, L., S. Brahnam, S. Ghidoni, E. Menegatti, and T. Barrier, 2013, Different approaches for extracting information from the co-occurrence matrix: PloS
One, 8, e83554.
Odegard, J. E., P. Steeghs, R. G. Baraniuk, and C. S. Burrus, 1998, Rice Consortium for Computational Seismic Interpretation: Rice University.
Oldenburg, D. W., T. Scheuer, and S. Levy, 1983, Recovery of the acoustic impedance from reflection seismograms: Geophysics, 48, 1318–1337.
Onstott, G. E., M. M. Backus, C. R. Wilson, and J. D. Phillips, 1984, Color display of offset dependent reflectivity in seismic data: SEG Technical Program
Expanded Abstracts, 674–675.
Partyka, G., J. Gridley, and J. Lopez, 1999, Interpretational applications of spectral decomposition in reservoir characterization: The Leading Edge, 18,
353–360.
Pearson, K., 1901, On lines and planes of closest fit to systems of points in space: The London, Edinburgh, and Dublin Philosophical Magazine and
Journal of Science, 2, 559–572.
Pigott, J. D., M.-H. Kang, and H.-C. Han, 2013, First order seismic attributes for clastic seismic facies interpretation: Examples from the East China Sea:
Journal of Asian Earth Sciences, 66, 34–54.
11/15/2021 135
References
(Review of Seismic Attribute Taxonomies)
Purves, S., and H. Basford, 2011, Visualizing geological structure with subtractive color blending: GCSSEPM Extended Abstracts.
Qi, X., and K. Marfurt, 2018, Volumetric aberrancy to map subtle faults and flexures: Interpretation, 6, T349–T365.
Radovich, B. J., and R. B. Oliveros, 1998, 3-D sequence interpretation of seismic instantaneous attributes from the Gorgon Field: Leading Edge, 17,
1286–1293.
Randen, T., and L. Sønneland, 2005, Atlas of 3D Seismic Attributes, in A. Iske and T. Randen, eds., Mathematical Methods and Modelling in Hydrocarbon
Exploration and Production, Springer, 23–46.
Rijks, E. J. H., and J. C. E. M. Jauffred, 1991, Attribute extraction: An important application in any detailed 3-D interpretation study: Leading Edge, 10, 11–
19.
Roberts, A., 2001, Curvature attributes and their application to 3D interpreted horizons: First Break, 19.2, 85–100.
Roden, R., and C. W. Chen, 2017, Interpretation of DHI characteristics with machine learning: First Break, 35, 55–63.
de Rooij, M., and K. Tingdahl, 2003, Fault detection with meta‐attributes: SEG Technical Program Expanded Abstracts 2003, 354–357.
Roy, A., A. S. Romero-Peláez, T. J. Kwiatkowski, and K. J. Marfurt, 2014, Generative topographic mapping for seismic facies estimation of a carbonate
wash, Veracruz Basin, southern Mexico: Interpretation, 2, SA31–SA47.
Rummerfield, B. F., 1954, Reflection Quality, A Fourth Dimension: Geophysics, 19, 684–694.
11/15/2021 136
References
(Review of Seismic Attribute Taxonomies)
Russell, B., and D. Hampson, 1991, Comparison of poststack seismic inversion methods, in SEG Technical Program Expanded Abstracts, Society of
Exploration Geophysicists, 876–878.
Saggaf, M. M., M. N. Toksoz, and H. M. Mustafa, 2000, Application of Smooth Neural Networks for Inter-Well Estimation of Porosity from Seismic Data:
Massachusetts Institute of Technology. Earth Resources Laboratory.
Samuel, A. L., 1959, Some studies in machine learning using the game of Checkers: IBM Journal of Research and Development.
Schmidhuber, J., 2015, Deep learning in neural networks: an overview: Neural Networks: The Official Journal of the International Neural Network Society,
61, 85–117.
Schot, S. H., 1978, Aberrancy: Geometry of the Third Derivative: Mathematics Magazine, 51, 259–275.
Shuey, R. T., 1985, A simplification of the Zoeppritz equations: Geophysics, 50, 609–614.
Simonyan, K., and A. Zisserman, 2014, Very Deep Convolutional Networks for Large-Scale Image Recognition: ArXiv [Cs.CV].
Smith, W. B., 1898, Infinitesimal Analysis, by William Benjamin Smith: The Macmillan Company.
Smythe, J., A. Gersztenkorn, B. Radovich, C.-F. Li, and C. Liner, 2004, Gulf of Mexico shelf framework interpretation using a bed-form attribute from
spectral imaging: Leading Edge, 23, 921–926.
11/15/2021 137
References
(Review of Seismic Attribute Taxonomies)
Taner, M. T., 1997, Kohonen’s Self Organizing Networks with “Conscience”: Rock Solid Images.
Taner, M. T., 1999, Seismic Attributes, Their Classification And Projected Utilization: 6th International Congress of the Brazilian Geophysical Society.
Taner, M. T., 2001, Seismic attributes: CSEG Recorder, 26, 48–56.
Taner, M. T., and R. E. Sheriff, 1977, Application of Amplitude, Frequency, and Other Attributes to Stratigraphic and Hydrocarbon Determination, in C. E.
Payton, ed., Seismic Stratigraphy: Applications to Hydrocarbon Exploration (AAPG Memoir 26), , . AAPG Special VolumesAmerican Association of
Petroleum Geologists, 301–327.
Taner, M. T., F. Koehler, and R. E. Sheriff, 1979, Complex seismic trace analysis: Geophysics, 44, 1041–1063.
Taner, M. T., J. Walls, and G. Taylor, 2001, Seismic Attributes, Their Use in Petrophysical Classification: 63rd EAGE Conference & Exhibition.
Taner, M. T., J. S. Schuelke, R. O’Doherty, and E. Baysal, 1994, Seismic attributes revisited: SEG Technical Program Expanded Abstracts, 36, 1104–
1106.
Tayyab, M. N., S. Asim, M. M. Siddiqui, M. Naeem, S. H. Solange, and F. K. Babar, 2017, Seismic attributes’ application to evaluate the Goru clastics of
Indus Basin, Pakistan: Arabian Journal of Geosciences, 10, 158.
Todorov, T., D. Hampson, and B. Russell, 1997, Sonic Log Predictions Using Seismic Attributes: University of Calgary.
Todorov, T. I., R. R. Stewart, and D. P. Hampson, 1998, Porosity Prediction Using Attributes from 3C–3D Seismic Data: researchgate.net.
11/15/2021 138
References
(Review of Seismic Attribute Taxonomies)
Tukey, J. W., 1962, The Future of Data Analysis: Annals of Mathematical Statistics, 33, 1–67.
Turhan Taner, M., 2002, System for estimating the locations of shaley subsurface formations: US Patent.
Walker, C., and T. J. Ulrych, 1983, Autoregressive recovery of the acoustic impedance: Geophysics, 48, 1338–1350.
Walls, J. D., M. T. Taner, T. Guidish, G. Taylor, D. Dumas, and N. Derzhi, 1999, North Sea reservoir characterization using rock physics, seismic
attributes, and neural networks; a case history: SEG Technical Program Expanded Abstracts, 57, 1572–1575.
Walls, J. D., M. T. Taner, G. Taylor, M. Smith, M. Carr, N. Derzhi, J. Drummond, D. McGuire, S. Morris, J. Bregar, and J. Lakings, 2000, Seismic reservoir
characterization of a mid‐continent fluvial system using rock physics, poststack seismic attributes and neural networks; a case history: SEG Technical
Program Expanded Abstracts, 57, 1437–1439.
Walls, J. D., M. T. Taner, G. Taylor, M. Smith, M. Carr, N. Derzhi, J. Drummond, D. McGuire, S. Morris, J. Bregar, and J. Lakings, 2002, Seismic reservoir
characterization of a U.S. Midcontinent fluvial system using rock physics, poststack seismic attributes, and neural networks: Leading Edge, 21, 428–436.
Xing, L., V. Aarre, A. Barnes, T. Theoharis, N. Salman, and E. Tjåland, 2017, Instantaneous Frequency Seismic Attribute Benchmarking: SEG Technical
Program Expanded Abstracts.
11/15/2021 139
References
(Review of Seismic Attribute Taxonomies)
Xing, L., V. Aarre, A. E. Barnes, T. Theoharis, N. Salman, and E. Tjåland, 2019, Seismic attribute benchmarking on instantaneous frequency: Geophysics,
84, O63–O72.
Yenugu, M. (moe), K. J. Marfurt, and S. Matson, 2010, Seismic texture analysis for reservoir prediction and characterization: Leading Edge, 29, 1116–
1121.
Zhang, R., and J. Castagna, 2011, Seismic sparse-layer reflectivity inversion using basis pursuit decomposition: Geophysics, 76, R147–R158.
1977, Seismic Stratigraphy: Applications to Hydrocarbon Exploration (C. E. Payton, ed.,): American Association of Petroleum Geologists.
2014, Understanding Seismic Anisotropy in Exploration and Exploitation, Second Edition, Second Edition. (L. Thomsen, ed.,): The Society of Exploration
Geophysicists.
2007, Artificial General Intelligence (B. Goertzel and C. Pennachin, eds.,): Springer, Berlin, Heidelberg.
2012, Characterizing the Robustness of Science: After the Practice Turn in Philosophy of Science (L. Soler, E. Trizio, T. Nickles, and W. Wimsatt, eds.,):
Springer, Dordrecht.
11/15/2021 140
References
(Spectral Similarity Fault Enhancement)
Al-Dossary, S., and K. Al-Garni, 2013, Fault detection and characterization using a 3D multidirectional Sobel filter: Presented at the Saudi Arabia Section
Technical Symposium and Exhibition, SPE, SPE-168061-MS.
Aqrawi, A., and T. Boe, 2011, Improved fault segmentation using a dip guided and modified 3D Sobel filter: 81st Annual International Meeting, SEG,
Expanded Abstracts, 999–1003.
Bahorich, M. S., and S. L. Farmer, 1995, 3‐D seismic discontinuity for faults and stratigraphic features: The coherence cube: 65th Annual International
Meeting, SEG, Expanded Abstracts, 93–96, Crossref.
Basir, H., A. Javaherian, and M. Yaraki, 2013, Multi-attribute ant-tracking and neural network for fault detection: A case study of an Iranian oilfield: Journal
of Geophysics and Engineering, 10, 015009, Crossref.
Dorn, G., B. Kadlec, and M. Patty, 2012, Imaging faults in 3D seismic volumes: 82nd Annual International Meeting, SEG, Expanded Abstracts, Crossref.
Gao, D., 2013, Wavelet spectral probe for seismic structure interpretation and fracture characterization: A workflow with case studies: Geophysics, 78, no.
5, O57–O67, Crossref.
Garsztenkorn, A., and K. J. Marfurt, 1999, Eigenstructure-based coherence computations as an aid to 3-D structural and stratigraphic mapping:
Geophysics, 64, 1468–1479, Crossref.
Marfurt, K. J., 2006, Robust estimates of reflector dip and azimuth: Geophysics, 71, no. 4, P29–P40, Crossref.
11/15/2021 141
References
(Spectral Similarity Fault Enhancement)
Marfurt, K. J., R. Kirlin, S. Farmer, and M. Bahorich, 1998, 3-D seismic attributes using a semblance-based coherency algorithm: Geophysics, 63, 1150–
1165, Crossref.
Pedersen, S. I., T. Randen, L. Sønneland, and O. Steen, 2002, Automatic 3D fault interpretation by artificial ants: 72nd Annual International Meeting,
SEG, Expanded Abstracts, 512–515.
Randen, T., S. Pedersen, and L. Sønneland, 2001, Automatic extraction of fault surfaces from three‐dimensional seismic data: 71st Annual International
Meeting, SEG, Expanded Abstracts, 551–554.
11/15/2021 142
References
(Efficiency Gains in Inversion-based Interpretation)
Connolly, P.A., 1999, Elastic impedance: The Leading Edge, 18, 438-452, doi: 10.1190/1.1438307.
Kohonen, T., 1989, Self-organization and associative memory, 3rd Edition, Springer, New York.
Tao Zhao, Sumit Verma, Jie Qi, and Kurt J. Marfurt (2015) Supervised and unsupervised learning: how machines can assist quantitative seismic
interpretation. SEG Technical Program Expanded Abstracts 2015: pp. 1734-1738. doi: 10.1190/segam2015-5924540.1
11/15/2021 143

More Related Content

What's hot

Open CAESAR Initiative
Open CAESAR InitiativeOpen CAESAR Initiative
Open CAESAR InitiativeMaged Elaasar
 
Reservoir Geophysics : Brian Russell Lecture 2
Reservoir Geophysics : Brian Russell Lecture 2Reservoir Geophysics : Brian Russell Lecture 2
Reservoir Geophysics : Brian Russell Lecture 2Ali Osman Öncel
 
Semantic Web, Ontology, and Ontology Learning: Introduction
Semantic Web, Ontology, and Ontology Learning: IntroductionSemantic Web, Ontology, and Ontology Learning: Introduction
Semantic Web, Ontology, and Ontology Learning: IntroductionKent State University
 
Stealth technology 2
Stealth technology 2Stealth technology 2
Stealth technology 2vivek bisht
 
Natural Language Processing
Natural Language ProcessingNatural Language Processing
Natural Language ProcessingCloudxLab
 
Ontology Building and its Application using Hozo
Ontology Building and its Application using HozoOntology Building and its Application using Hozo
Ontology Building and its Application using HozoKouji Kozaki
 
Reservoir modeling work flow chart
Reservoir modeling work flow chartReservoir modeling work flow chart
Reservoir modeling work flow chartDr. Arzu Javadova
 
Abstractive Text Summarization
Abstractive Text SummarizationAbstractive Text Summarization
Abstractive Text SummarizationTho Phan
 
Halliburton 2001 basic petroleum geology
Halliburton 2001   basic petroleum geologyHalliburton 2001   basic petroleum geology
Halliburton 2001 basic petroleum geologyCogito123
 
Borehole Seismic Solutions for Integrated Reservoir Characterization and Moni...
Borehole Seismic Solutions for Integrated Reservoir Characterization and Moni...Borehole Seismic Solutions for Integrated Reservoir Characterization and Moni...
Borehole Seismic Solutions for Integrated Reservoir Characterization and Moni...Society of Petroleum Engineers
 
Capella Days 2021 | Using MBSE to Integrate Engineering Undergraduate Courses...
Capella Days 2021 | Using MBSE to Integrate Engineering Undergraduate Courses...Capella Days 2021 | Using MBSE to Integrate Engineering Undergraduate Courses...
Capella Days 2021 | Using MBSE to Integrate Engineering Undergraduate Courses...Obeo
 
Taxonomies and Ontologies – The Yin and Yang of Knowledge Modelling
Taxonomies and Ontologies – The Yin and Yang of Knowledge ModellingTaxonomies and Ontologies – The Yin and Yang of Knowledge Modelling
Taxonomies and Ontologies – The Yin and Yang of Knowledge ModellingSemantic Web Company
 
Petrel F 5 horizon interpretation 2018 v1.0
Petrel F 5 horizon interpretation 2018 v1.0Petrel F 5 horizon interpretation 2018 v1.0
Petrel F 5 horizon interpretation 2018 v1.0Sigve Hamilton Aspelund
 
Caractérisation des réservoirs compacts naturellement fracturés
Caractérisation des réservoirs compacts naturellement fracturésCaractérisation des réservoirs compacts naturellement fracturés
Caractérisation des réservoirs compacts naturellement fracturésNassim ABBAS
 
How to Optimize Dynamic Adaptive Video Streaming? Challenges and Solutions
How to Optimize Dynamic Adaptive Video Streaming? Challenges and SolutionsHow to Optimize Dynamic Adaptive Video Streaming? Challenges and Solutions
How to Optimize Dynamic Adaptive Video Streaming? Challenges and SolutionsAlpen-Adria-Universität
 
"Conref, conkeyref, conrefpush" - reuse strategies in DITA when migrating leg...
"Conref, conkeyref, conrefpush" - reuse strategies in DITA when migrating leg..."Conref, conkeyref, conrefpush" - reuse strategies in DITA when migrating leg...
"Conref, conkeyref, conrefpush" - reuse strategies in DITA when migrating leg...Adam Sanyo
 

What's hot (20)

Introduction to velocity model building
Introduction to velocity model buildingIntroduction to velocity model building
Introduction to velocity model building
 
Open CAESAR Initiative
Open CAESAR InitiativeOpen CAESAR Initiative
Open CAESAR Initiative
 
Reservoir Geophysics : Brian Russell Lecture 2
Reservoir Geophysics : Brian Russell Lecture 2Reservoir Geophysics : Brian Russell Lecture 2
Reservoir Geophysics : Brian Russell Lecture 2
 
Semantic Web, Ontology, and Ontology Learning: Introduction
Semantic Web, Ontology, and Ontology Learning: IntroductionSemantic Web, Ontology, and Ontology Learning: Introduction
Semantic Web, Ontology, and Ontology Learning: Introduction
 
Stealth technology 2
Stealth technology 2Stealth technology 2
Stealth technology 2
 
Natural Language Processing
Natural Language ProcessingNatural Language Processing
Natural Language Processing
 
Ontology Building and its Application using Hozo
Ontology Building and its Application using HozoOntology Building and its Application using Hozo
Ontology Building and its Application using Hozo
 
Reservoir modeling work flow chart
Reservoir modeling work flow chartReservoir modeling work flow chart
Reservoir modeling work flow chart
 
Abstractive Text Summarization
Abstractive Text SummarizationAbstractive Text Summarization
Abstractive Text Summarization
 
Halliburton 2001 basic petroleum geology
Halliburton 2001   basic petroleum geologyHalliburton 2001   basic petroleum geology
Halliburton 2001 basic petroleum geology
 
Petrel course
Petrel coursePetrel course
Petrel course
 
Borehole Seismic Solutions for Integrated Reservoir Characterization and Moni...
Borehole Seismic Solutions for Integrated Reservoir Characterization and Moni...Borehole Seismic Solutions for Integrated Reservoir Characterization and Moni...
Borehole Seismic Solutions for Integrated Reservoir Characterization and Moni...
 
Capella Days 2021 | Using MBSE to Integrate Engineering Undergraduate Courses...
Capella Days 2021 | Using MBSE to Integrate Engineering Undergraduate Courses...Capella Days 2021 | Using MBSE to Integrate Engineering Undergraduate Courses...
Capella Days 2021 | Using MBSE to Integrate Engineering Undergraduate Courses...
 
Ontology learning
Ontology learningOntology learning
Ontology learning
 
3D Facies Modeling
3D Facies Modeling3D Facies Modeling
3D Facies Modeling
 
Taxonomies and Ontologies – The Yin and Yang of Knowledge Modelling
Taxonomies and Ontologies – The Yin and Yang of Knowledge ModellingTaxonomies and Ontologies – The Yin and Yang of Knowledge Modelling
Taxonomies and Ontologies – The Yin and Yang of Knowledge Modelling
 
Petrel F 5 horizon interpretation 2018 v1.0
Petrel F 5 horizon interpretation 2018 v1.0Petrel F 5 horizon interpretation 2018 v1.0
Petrel F 5 horizon interpretation 2018 v1.0
 
Caractérisation des réservoirs compacts naturellement fracturés
Caractérisation des réservoirs compacts naturellement fracturésCaractérisation des réservoirs compacts naturellement fracturés
Caractérisation des réservoirs compacts naturellement fracturés
 
How to Optimize Dynamic Adaptive Video Streaming? Challenges and Solutions
How to Optimize Dynamic Adaptive Video Streaming? Challenges and SolutionsHow to Optimize Dynamic Adaptive Video Streaming? Challenges and Solutions
How to Optimize Dynamic Adaptive Video Streaming? Challenges and Solutions
 
"Conref, conkeyref, conrefpush" - reuse strategies in DITA when migrating leg...
"Conref, conkeyref, conrefpush" - reuse strategies in DITA when migrating leg..."Conref, conkeyref, conrefpush" - reuse strategies in DITA when migrating leg...
"Conref, conkeyref, conrefpush" - reuse strategies in DITA when migrating leg...
 

Similar to Seismic Attributes Taxonomy

Structural trend analysis for online social networks
Structural trend analysis for online social networksStructural trend analysis for online social networks
Structural trend analysis for online social networksMarkus Fensterer
 
Spatial data analysis 1
Spatial data analysis 1Spatial data analysis 1
Spatial data analysis 1Johan Blomme
 
Final_Presentation_SP2-2022-35.pptx
Final_Presentation_SP2-2022-35.pptxFinal_Presentation_SP2-2022-35.pptx
Final_Presentation_SP2-2022-35.pptxHarshilBaksani
 
In search of lost knowledge: joining the dots with Linked Data
In search of lost knowledge: joining the dots with Linked DataIn search of lost knowledge: joining the dots with Linked Data
In search of lost knowledge: joining the dots with Linked Datajonblower
 
Updating Ecological Niche Modeling Methodologies
Updating Ecological Niche Modeling MethodologiesUpdating Ecological Niche Modeling Methodologies
Updating Ecological Niche Modeling MethodologiesTown Peterson
 
D1T3 enm workflows updated
D1T3 enm workflows updatedD1T3 enm workflows updated
D1T3 enm workflows updatedTown Peterson
 
A multi criteria evaluation of environmental databases using hasse
A multi criteria evaluation of environmental databases using hasseA multi criteria evaluation of environmental databases using hasse
A multi criteria evaluation of environmental databases using hassebalamurugan.k Kalibalamurugan
 
Data Analysis in Research for Social Study
Data Analysis in Research for Social StudyData Analysis in Research for Social Study
Data Analysis in Research for Social StudyLisaneworkSileshi
 
EarthCube Stakeholder Alignment Survey - End-Users & Professional Societies W...
EarthCube Stakeholder Alignment Survey - End-Users & Professional Societies W...EarthCube Stakeholder Alignment Survey - End-Users & Professional Societies W...
EarthCube Stakeholder Alignment Survey - End-Users & Professional Societies W...EarthCube
 
Introduction to Data Mining
Introduction to Data MiningIntroduction to Data Mining
Introduction to Data MiningAbcdDcba12
 
Exploring Data (1).pptx
Exploring Data (1).pptxExploring Data (1).pptx
Exploring Data (1).pptxgina458018
 
FEM and it's applications
FEM and it's applicationsFEM and it's applications
FEM and it's applicationsChetan Mahatme
 
Spatial data analysis
Spatial data analysisSpatial data analysis
Spatial data analysisJohan Blomme
 
INTRODUCTION TO DATA COLLECTION INSTRUMENTS (DCIs)
INTRODUCTION TO DATA COLLECTION INSTRUMENTS (DCIs)INTRODUCTION TO DATA COLLECTION INSTRUMENTS (DCIs)
INTRODUCTION TO DATA COLLECTION INSTRUMENTS (DCIs)Auver2012
 

Similar to Seismic Attributes Taxonomy (20)

Big Data Challenges and Trust Management at CTS -2016
Big Data Challenges and Trust Management at CTS -2016Big Data Challenges and Trust Management at CTS -2016
Big Data Challenges and Trust Management at CTS -2016
 
Sensors1(1)
Sensors1(1)Sensors1(1)
Sensors1(1)
 
Structural trend analysis for online social networks
Structural trend analysis for online social networksStructural trend analysis for online social networks
Structural trend analysis for online social networks
 
MUMS: Transition & SPUQ Workshop - Uncertainty Quantification in Materials Wo...
MUMS: Transition & SPUQ Workshop - Uncertainty Quantification in Materials Wo...MUMS: Transition & SPUQ Workshop - Uncertainty Quantification in Materials Wo...
MUMS: Transition & SPUQ Workshop - Uncertainty Quantification in Materials Wo...
 
Spatial data analysis 1
Spatial data analysis 1Spatial data analysis 1
Spatial data analysis 1
 
Final_Presentation_SP2-2022-35.pptx
Final_Presentation_SP2-2022-35.pptxFinal_Presentation_SP2-2022-35.pptx
Final_Presentation_SP2-2022-35.pptx
 
In search of lost knowledge: joining the dots with Linked Data
In search of lost knowledge: joining the dots with Linked DataIn search of lost knowledge: joining the dots with Linked Data
In search of lost knowledge: joining the dots with Linked Data
 
Updating Ecological Niche Modeling Methodologies
Updating Ecological Niche Modeling MethodologiesUpdating Ecological Niche Modeling Methodologies
Updating Ecological Niche Modeling Methodologies
 
D1T3 enm workflows updated
D1T3 enm workflows updatedD1T3 enm workflows updated
D1T3 enm workflows updated
 
A multi criteria evaluation of environmental databases using hasse
A multi criteria evaluation of environmental databases using hasseA multi criteria evaluation of environmental databases using hasse
A multi criteria evaluation of environmental databases using hasse
 
Climate Extremes Workshop - Networks and Extremes: Review and Further Studies...
Climate Extremes Workshop - Networks and Extremes: Review and Further Studies...Climate Extremes Workshop - Networks and Extremes: Review and Further Studies...
Climate Extremes Workshop - Networks and Extremes: Review and Further Studies...
 
MUDROD - Ranking
MUDROD - RankingMUDROD - Ranking
MUDROD - Ranking
 
Data Analysis in Research for Social Study
Data Analysis in Research for Social StudyData Analysis in Research for Social Study
Data Analysis in Research for Social Study
 
EarthCube Stakeholder Alignment Survey - End-Users & Professional Societies W...
EarthCube Stakeholder Alignment Survey - End-Users & Professional Societies W...EarthCube Stakeholder Alignment Survey - End-Users & Professional Societies W...
EarthCube Stakeholder Alignment Survey - End-Users & Professional Societies W...
 
Introduction to Data Mining
Introduction to Data MiningIntroduction to Data Mining
Introduction to Data Mining
 
Exploring Data (1).pptx
Exploring Data (1).pptxExploring Data (1).pptx
Exploring Data (1).pptx
 
FEM and it's applications
FEM and it's applicationsFEM and it's applications
FEM and it's applications
 
Data_exploration.ppt
Data_exploration.pptData_exploration.ppt
Data_exploration.ppt
 
Spatial data analysis
Spatial data analysisSpatial data analysis
Spatial data analysis
 
INTRODUCTION TO DATA COLLECTION INSTRUMENTS (DCIs)
INTRODUCTION TO DATA COLLECTION INSTRUMENTS (DCIs)INTRODUCTION TO DATA COLLECTION INSTRUMENTS (DCIs)
INTRODUCTION TO DATA COLLECTION INSTRUMENTS (DCIs)
 

Recently uploaded

SOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptxSOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptxkessiyaTpeter
 
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.aasikanpl
 
Call Girls in Hauz Khas Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Hauz Khas Delhi 💯Call Us 🔝9953322196🔝 💯Escort.Call Girls in Hauz Khas Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Hauz Khas Delhi 💯Call Us 🔝9953322196🔝 💯Escort.aasikanpl
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024AyushiRastogi48
 
TOPIC 8 Temperature and Heat.pdf physics
TOPIC 8 Temperature and Heat.pdf physicsTOPIC 8 Temperature and Heat.pdf physics
TOPIC 8 Temperature and Heat.pdf physicsssuserddc89b
 
‏‏VIRUS - 123455555555555555555555555555555555555555
‏‏VIRUS -  123455555555555555555555555555555555555555‏‏VIRUS -  123455555555555555555555555555555555555555
‏‏VIRUS - 123455555555555555555555555555555555555555kikilily0909
 
Speech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxSpeech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxpriyankatabhane
 
Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Patrick Diehl
 
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxMicrophone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxpriyankatabhane
 
Transposable elements in prokaryotes.ppt
Transposable elements in prokaryotes.pptTransposable elements in prokaryotes.ppt
Transposable elements in prokaryotes.pptArshadWarsi13
 
Forest laws, Indian forest laws, why they are important
Forest laws, Indian forest laws, why they are importantForest laws, Indian forest laws, why they are important
Forest laws, Indian forest laws, why they are importantadityabhardwaj282
 
Module 4: Mendelian Genetics and Punnett Square
Module 4:  Mendelian Genetics and Punnett SquareModule 4:  Mendelian Genetics and Punnett Square
Module 4: Mendelian Genetics and Punnett SquareIsiahStephanRadaza
 
Scheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docxScheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docxyaramohamed343013
 
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |aasikanpl
 
zoogeography of pakistan.pptx fauna of Pakistan
zoogeography of pakistan.pptx fauna of Pakistanzoogeography of pakistan.pptx fauna of Pakistan
zoogeography of pakistan.pptx fauna of Pakistanzohaibmir069
 
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptxRESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptxFarihaAbdulRasheed
 
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxmalonesandreagweneth
 
Bentham & Hooker's Classification. along with the merits and demerits of the ...
Bentham & Hooker's Classification. along with the merits and demerits of the ...Bentham & Hooker's Classification. along with the merits and demerits of the ...
Bentham & Hooker's Classification. along with the merits and demerits of the ...Nistarini College, Purulia (W.B) India
 

Recently uploaded (20)

SOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptxSOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
 
Volatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -IVolatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -I
 
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
 
Call Girls in Hauz Khas Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Hauz Khas Delhi 💯Call Us 🔝9953322196🔝 💯Escort.Call Girls in Hauz Khas Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Hauz Khas Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024
 
TOPIC 8 Temperature and Heat.pdf physics
TOPIC 8 Temperature and Heat.pdf physicsTOPIC 8 Temperature and Heat.pdf physics
TOPIC 8 Temperature and Heat.pdf physics
 
‏‏VIRUS - 123455555555555555555555555555555555555555
‏‏VIRUS -  123455555555555555555555555555555555555555‏‏VIRUS -  123455555555555555555555555555555555555555
‏‏VIRUS - 123455555555555555555555555555555555555555
 
Speech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxSpeech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptx
 
Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?
 
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxMicrophone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
 
Transposable elements in prokaryotes.ppt
Transposable elements in prokaryotes.pptTransposable elements in prokaryotes.ppt
Transposable elements in prokaryotes.ppt
 
Forest laws, Indian forest laws, why they are important
Forest laws, Indian forest laws, why they are importantForest laws, Indian forest laws, why they are important
Forest laws, Indian forest laws, why they are important
 
Module 4: Mendelian Genetics and Punnett Square
Module 4:  Mendelian Genetics and Punnett SquareModule 4:  Mendelian Genetics and Punnett Square
Module 4: Mendelian Genetics and Punnett Square
 
Scheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docxScheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docx
 
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
 
zoogeography of pakistan.pptx fauna of Pakistan
zoogeography of pakistan.pptx fauna of Pakistanzoogeography of pakistan.pptx fauna of Pakistan
zoogeography of pakistan.pptx fauna of Pakistan
 
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptxRESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
 
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
 
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
 
Bentham & Hooker's Classification. along with the merits and demerits of the ...
Bentham & Hooker's Classification. along with the merits and demerits of the ...Bentham & Hooker's Classification. along with the merits and demerits of the ...
Bentham & Hooker's Classification. along with the merits and demerits of the ...
 

Seismic Attributes Taxonomy

  • 1. Seismic Attributes: Taxonomic Classification for Pragmatic Machine Learning Application Dustin T. Dewett Ph.D. Defense The University of Oklahoma Committee: Dr. John Pigott (Chair) Dr. Kurt Marfurt (Co-Chair) Dr. Zulfiquar Reza Dr. Younane Abousleiman 11/15/2021 1
  • 2. Abstract • Seismic interpretation involves a hierarchy of thought • Recall — recognize features from prior material • Generalize — classify new examples from prior context • Develop — applying acquired facts or information in a different way • Inference — differentiating similar observations • Synthesis — combining prior learned knowledge to understand a system • Evaluation — creating realistic models from unrelated information • Seismic interpretation is progressing toward an increase in automation and machine learning techniques. • Failure to appreciate these nuances will result in misapplication. after Anderson and Krathwohl, 2001 11/15/2021 2
  • 7. Develop • Relative motion • Classified as a “fault” after Dewett, 2017 11/15/2021 7
  • 8. Inference • Classified as “normal faults” • Seen in some geologic settings, but not others after Dewett, 2017 11/15/2021 8
  • 9. Synthesis • Normal faults are seen in extensional settings • May occur near subduction zones (local extension) • Conceptual understanding and/or quantification of: • Displacement • Stress • Deformation after Dewett, 2017 11/15/2021 9
  • 10. Evaluation What types of faults are likely observed near a salt dome? 11/15/2021 10
  • 11. Why is this important to understand? • “Recently it has been claimed that unsupervised classification of subsampled instantaneous attributes achieves subseismic resolution.” • “Increasing seismic resolution through classification of subsampled attributes is suspect.” • “Seismic interpreters should remain wary of seismic attribute processes that promise subseismic resolution.” Barnes, 2017 11/15/2021 11
  • 12. Outline • Review of Seismic Attribute Taxonomies • Spectral Similarity Fault Enhancement • Efficiency Gains in Inversion- based Interpretation • Conclusions • Motivation • Historical Context • Creating Useful Taxonomies • Discussion 11/15/2021 12
  • 13. Outline • Review of Seismic Attribute Taxonomies • Spectral Similarity Fault Enhancement • Efficiency Gains in Inversion- based Interpretation • Conclusions • Motivation • Spectral Similarity Defined • Case Study #1: High S:N • Case Study #2: Low S:N • Discussion 11/15/2021 13
  • 14. Outline • Review of Seismic Attribute Taxonomies • Spectral Similarity Fault Enhancement • Efficiency Gains in Inversion- based Interpretation • Conclusions • Motivation • Geologic Background • Results 11/15/2021 14
  • 15. Outline • Review of Seismic Attribute Taxonomies • Spectral Similarity Fault Enhancement • Efficiency Gains in Inversion- based Interpretation • Conclusions 11/15/2021 15
  • 16. Review of Seismic Attribute Taxonomies Motivation 11/15/2021 16
  • 17. Review of Seismic Attribute Taxonomies Motivation: • Communication — There are two data analysis domains used when discussing seismic attribute: exploratory data analysis (EDA) and confirmatory data analysis (CDA). Communication is less efficient when communicating across these domains. • Machine learning — Algorithms take two broad forms: supervised and unsupervised. Unsupervised algorithms require a user to be well versed in seismic attribute theory, esp. how attributes are related to one another. Supervised algorithms suffer from a similar bias, but it is less transparent. 11/15/2021 17
  • 18. Review of Seismic Attribute Taxonomies Motivation: • Communication — There are two data analysis domains used when discussing seismic attribute: exploratory data analysis (EDA) and confirmatory data analysis (CDA). Communication is less efficient when communicating across these domains. • Machine learning — Algorithms take two broad forms: supervised and unsupervised. Unsupervised algorithms require a user to be well versed in seismic attribute theory, esp. how attributes are related to one another. Supervised algorithms suffer from a similar bias, but it is less transparent. 11/15/2021 18
  • 19. Review of Seismic Attribute Taxonomies Their Historical Use and as Communication Frameworks 11/15/2021 19
  • 20. Review of Seismic Attribute Taxonomies Data analysis domain: refers to a method of analyzing data, which is either exploratory or confirmatory in nature Exploratory Data Analysis: favored by interpreters who are dominantly conceptual; often described as “general seismic interpreters” or similar Confirmatory Data Analysis: favored by interpreters who are dominantly analytical; often described as “quantitative seismic interpreters” or similar Notes: 1. These two terms represent extremes on a gradient, where many individuals occupy a mixture. 2. EDA techniques rely on qualitative visual methods centered on developing a hypothesis based on observations and statistical inferences 3. CDA techniques rely on predictions from calculations centered on those which represent deviations from a model after Tukey., 1962 11/15/2021 20
  • 21. Review of Seismic Attribute Taxonomies Data Analysis: refers to a method of analyzing data, which is either exploratory or confirmatory in nature Exploratory Data Analysis: favored by interpreters who are dominantly conceptual; often described as “general seismic interpreters” or similar Confirmatory Data Analysis: favored by interpreters who are dominantly analytical; often described as “quantitative seismic interpreters” or similar Notes: 1. These two terms represent extremes on a gradient, where many individuals occupy a mixture. 2. EDA techniques rely on qualitative visual methods centered on developing a hypothesis based on observations and statistical inferences 3. CDA techniques rely on predictions from calculations centered on those which represent deviations from a model after Tukey., 1962 11/15/2021 21
  • 22. Review of Seismic Attribute Taxonomies Dominant Taxonomic Schemes • Taner et al., 1994 • Carter, 1995 • Brown, 1996 • Barnes, 1997 • Chen and Sidney, 1997 • Taner, 1999 • Taner, 2001 • Liner et al., 2004 • Randen and Sønneland, 2005 • Brown, 2011 • Barnes, 2016 • Marfurt, 2018 Excluded Usage Guides and Tables • OpendTect Attribute Matrix • OpendTect Attribute Table • Paradise Attribute Usage Table • Petrel Attribute Table • Landmark Attribute Usage Guide 11/15/2021 22
  • 23. Review of Seismic Attribute Taxonomies • Taner et al., 1994 • Carter, 1995 • Brown, 1996 • Barnes, 1997 • Chen and Sidney, 1997 • Taner, 1999 • Taner, 2001 • Liner et al., 2004 • Randen and Sønneland, 2005 • Brown, 2011 • Barnes, 2016 • Marfurt, 2018 11/15/2021 23
  • 24. Review of Seismic Attribute Taxonomies • Taner et al., 1994 • Carter, 1995 • Brown, 1996 • Barnes, 1997 • Chen and Sidney, 1997 • Taner, 1999 • Taner, 2001 • Liner et al., 2004 • Randen and Sønneland, 2005 • Brown, 2011 • Barnes, 2016 • Marfurt, 2018 Themes • Taner — interpretation • Brown — signal properties • Barnes —mathematics (signal properties) 11/15/2021 24
  • 25. Review of Seismic Attribute Taxonomies • Taner et al., 1994 • Carter, 1995 • Brown, 1996 • Barnes, 1997 • Chen and Sidney, 1997 • Taner, 1999 • Taner, 2001 • Liner et al., 2004 • Randen and Sønneland, 2005 • Brown, 2011 • Barnes, 2016 • Marfurt, 2018 Themes • Taner — interpretation • Brown — signal properties • Barnes —mathematics (signal properties) Consider both how you understand seismic attributes, and how your colleagues do. Are they the same? 11/15/2021 25
  • 26. Review of Seismic Attribute Taxonomies • Taner et al., 1994 • Carter, 1995 • Brown, 1996 • Barnes, 1997 • Chen and Sidney, 1997 • Taner, 1999 • Taner, 2001 • Liner et al., 2004 • Randen and Sønneland, 2005 • Brown, 2011 • Barnes, 2016 • Marfurt, 2018 Themes • Taner — interpretation • Brown — signal properties • Barnes —mathematics (signal properties) • Taner may describe attributes toward scientists who favor EDA. • While Brown and Barnes may describe attributes toward scientists who favor CDA. 11/15/2021 26
  • 27. Why is this important to understand? • At its core, a seismic attribute taxonomy aims to communicate ideas to an audience to ease discussion and further research. • Incorrect understandings of how attributes relate to one another in the three described domains (interpretive, signal property, and mathematical) have a high change of applying attributes incorrectly, incorrect interpretations, and dubious research. • This translates to a loss of resources (time, money, etc.) due to misapplied techniques. • Machine learning approaches make it easier to misapply techniques and algorithms, while making it easier to get a result. 11/15/2021 27
  • 28. Why is this important to understand? • At its core, a seismic attribute taxonomy aims to communicate ideas to an audience to ease discussion and further research. • Incorrect understandings of how attributes relate to one another in the three described domains (interpretive, signal property, and mathematical) have a high change of applying attributes incorrectly, incorrect interpretations, and dubious research. • This translates to a loss of resources (time, money, etc.) due to misapplied techniques. • Machine learning approaches make it easier to misapply techniques and algorithms, while making it easier to get a result. 11/15/2021 28
  • 30. Review of Seismic Attribute Taxonomies Citation Factor is a value in the range of 1–5 assigned to a particular citation in an academic work such that a particular work was referenced: • 1: in a reference list, but not in the body (e.g, bibliography or additional resources sections) • 2: with several others in a list reinforcing an assertion or statement • 3: referring to a unique thought or short quote • 4: by itself, referring to a long thought, definition, or significant quote • 5: by itself and expounds on the original idea, often drawing new meaning How is this concept applied? • Raw citing works: 2,100 • Relevant citing works: 287 • Each relevant citing work was assigned a citation factor. • The following graphs use these data. 11/15/2021 30
  • 31. Review of Seismic Attribute Taxonomies Citation Factor is a value in the range of 1–5 assigned to a particular citation in an academic work such that a particular work was referenced: • 1: in a reference list, but not in the body (e.g, bibliography or additional resources sections) • 2: with several others in a list reinforcing an assertion or statement • 3: referring to a unique thought or short quote • 4: by itself, referring to a long thought, definition, or significant quote • 5: by itself and expounds on the original idea, often drawing new meaning How is this concept applied? • Raw citing works: 2,100 • Relevant citing works: 287 • Each relevant citing work was assigned a citation factor. • The following graphs use these data. 11/15/2021 31
  • 32. Dewett et al., 2021 11/15/2021 32
  • 33. Dewett et al., 2021 11/15/2021 33
  • 34. Dewett et al., 2021 Observations and Hypotheses: 1. The count of citations by CF should be higher for low CFs, because higher CF (inferring more detailed) works are less numerous. 2. The dominant group (low or high CF) as a function of time, implies the relative value of the work toward the respective group. 3. In context, lower CF works tend to be tangential topics of a more general nature. 11/15/2021 34
  • 35. Dewett et al., 2021 Observations and Hypotheses: 4. Hypothesis #1: High CFs leading implies an acceptance among more specialized authors. 5. Hypothesis #2: Low CFs leading implies an acceptance among more general authors. 6. Hypothesis #3: Mid CFs leading implies a broad acceptance. *Each situation may have a spillover to the remaining groups. 11/15/2021 35
  • 36. Dewett et al., 2021 11/15/2021 36
  • 37. Dewett et al., 2021 • Supports Hypothesis #2 (Low CFs lead) • Supports a “spillover effect” from Low CFs to Mid CFs to High CFs 11/15/2021 37
  • 38. Dewett et al., 2021 11/15/2021 38
  • 39. Dewett et al., 2021 • Supports Hypothesis #1 (High CFs lead) 11/15/2021 39
  • 40. Dewett et al., 2021 11/15/2021 40
  • 41. Dewett et al., 2021 • Supports Hypothesis #3 (Mid CFs lead) • Supports a “spillover effect” from Mid CFs to High/Low CFs 11/15/2021 41
  • 42. Dewett et al., 2021 Historical Analysis —Main Takeaway • EDA focused scientists, by definition, prefer conceptual models to quantitative ones, while CDA focused scientists prefer theoretical and mathematical models. • EDA scientists likely favor Interpretive taxonomic systems owing to the qualitative visual models such systems describe. • CDA scientists likely favor Mathematical and Signal Property taxonomic systems owing to the quantitative and theoretical properties such systems describe. 11/15/2021 42
  • 43. Dewett et al., 2021 Historical Analysis —Main Takeaway • EDA focused scientists, by definition, prefer conceptual models to quantitative ones, while CDA focused scientists prefer theoretical and mathematical models. • EDA scientists likely favor Interpretive taxonomic systems owing to the qualitative visual models such systems describe. • CDA scientists likely favor Mathematical and Signal Property taxonomic systems owing to the quantitative and theoretical properties such systems describe. 11/15/2021 43
  • 44. Dewett et al., 2021 Historical Analysis —Main Takeaway • EDA focused scientists, by definition, prefer conceptual models to quantitative ones, while CDA focused scientists prefer theoretical and mathematical models. • EDA scientists likely favor Interpretive taxonomic systems owing to the qualitative visual models such systems describe. • CDA scientists likely favor Mathematical and Signal Property taxonomic systems owing to the quantitative and theoretical properties such systems describe. 11/15/2021 44
  • 45. Review of Seismic Attribute Taxonomies Creating Useful Taxonomies 11/15/2021 45
  • 46. What is a “Useful Taxonomy” A useful seismic attribute taxonomy: • Communicates to a wide audience • Both EDA and CDA audiences (i.e., Interpretive Value, Signal Property, and Mathematical) • Facilitates communication between these domains • Defines example attributes so that a reader can easily add new or excluded attributes to the classification • Clearly states any shortcomings or assumptions 11/15/2021 46
  • 47. Dewett et al., 2021 It’s also helpful if it provides a chart… • Use 35 example attributes including: • Instantaneous • Derived • Geometric • Classification • Machine Learning • Inversion • Provide cross-referenced charts and definitions (in text) 11/15/2021 47
  • 48. Dewett et al., 2021 Signal Property Domain • Centered on amplitude, phase, and frequency • Used a hierarchical structure to imply relationships • Used colors to describe relationship to signal property • Provide for input dependent attributes, which concerns machine learning algorithms 11/15/2021 48
  • 49. Dewett et al., 2021 11/15/2021 49
  • 50. Dewett et al., 2021 Mathematical Formulation Domain • Focused on mathematical formulation of the algorithm (or likely formulation) • Provides for “component separation” and “dimensionality reduction” techniques (not a specific mathematical operation) 11/15/2021 50
  • 51. Dewett et al., 2021 11/15/2021 51
  • 52. Dewett et al., 2021 Interpretive Value Domain • Provides an overview of seismic attribute applications • Does not validate any specific technique or application • Focused on structural, stratigraphic, fluid, and signal anomaly divisions 11/15/2021 52
  • 53. Dewett et al., 2021 11/15/2021 53
  • 54. Review of Seismic Attribute Taxonomies Discussion 11/15/2021 54
  • 55. Discussion • Shown that data analysis domains are a convenient tool to understand how scientists use and understand seismic attribute classification and, by extension, the attributes themselves. • Shown that there are three domains in this context: • Interpretive • Signal Property • Mathematical • Shown that there is value (to scientists) for a clear communication path between each domain. • Concluded with the creation of three seismic attribute taxonomies. • Clearly identified assumptions and shortcomings. • Provided example seismic attributes that specifically allow for individuals to extend the taxonomies to include new and omitted attributes. • This work provides a solid foundation on which scientists and professionals can use seismic attributes more effectively in their current work and in future machine learning workflows. 11/15/2021 55
  • 56. Discussion • Shown that data analysis domains are a convenient tool to understand how scientists use and understand seismic attribute classification and, by extension, the attributes themselves. • Shown that there are three domains in this context: • Interpretive • Signal Property • Mathematical • Shown that there is value (to scientists) for a clear communication path between each domain. • Concluded with the creation of three seismic attribute taxonomies. • Clearly identified assumptions and shortcomings. • Provided example seismic attributes that specifically allow for individuals to extend the taxonomies to include new and omitted attributes. • This work provides a solid foundation on which scientists and professionals can use seismic attributes more effectively in their current work and in future machine learning workflows. 11/15/2021 56
  • 57. Outline • Review of Seismic Attribute Taxonomies • Spectral Similarity Fault Enhancement • Efficiency Gains in Inversion- based Interpretation • Conclusions • Motivation • Spectral Similarity Defined • Case Study #1: High S:N • Case Study #2: Low S:N • Discussion 11/15/2021 57
  • 59. Spectral Similarity Fault Enhancement Motivation: • Edge Detection Enhancement — Professional and academic seismic interpreters have long used edge detection attributes for fault identification and interpretation (1995) and their enhancement (2001). • Computer Based Interpretation — Ideally, fault interpretation could be automated via a computer extraction of faults into fault planes. The quality of the input attribute is the current limiting factor. While newer machine learning approaches are beginning to become commonplace, the spectral similarity extends conventional attributes by novel filtering and attribute choice. 11/15/2021 59
  • 60. Spectral Similarity Fault Enhancement Motivation: • Edge Detection Enhancement — Professional and academic seismic interpreters have long used edge detection attributes for fault identification and interpretation (1995) and their enhancement (2001). • Computer Based Interpretation — Ideally, fault interpretation could be automated via a computer extraction of faults into fault planes. The quality of the input attribute is the current limiting factor. While newer machine learning approaches are beginning to become commonplace, the spectral similarity extends conventional attributes by novel filtering and attribute choice. 11/15/2021 60
  • 61. Spectral Similarity Fault Enhancement Spectral Similarity Defined 11/15/2021 61
  • 62. Quick Definitions • Swarm intelligence — a family of algorithms that use decentralized self- organization to perform a task (examples include particle swarm optimization, ant colony optimization, or differential evolution). • Machine learning — a subdiscipline of computer science that consists of algorithms that can learn from and make predictions on data (examples include artificial neural networks, self- organized maps, and k-means clustering). • Spectral similarity — refers to the workflow described in this paper for similarity enhancement. • Fault Enhanced similarity — the patented filtering process for similarity enhancement described by Dorn et al. (2012) • Similarity — a family of edge detection attributes that include coherence, variance, the Sobel filter, or similar algorithms. 11/15/2021 62
  • 63. Quick Definitions • Swarm intelligence — a family of algorithms that use decentralized self- organization to perform a task (examples include particle swarm optimization, ant colony optimization, or differential evolution). • Machine learning — a subdiscipline of computer science that consists of algorithms that can learn from and make predictions on data (examples include artificial neural networks, self- organized maps, and k-means clustering). • Spectral similarity — refers to the workflow described in this paper for similarity enhancement. • Fault Enhanced similarity — the patented filtering process for similarity enhancement described by Dorn et al. (2012) • Similarity — a family of edge detection attributes that include coherence, variance, the Sobel filter, or similar algorithms. 11/15/2021 63
  • 64. Quick Definitions • Swarm intelligence — a family of algorithms that use decentralized self- organization to perform a task (examples include particle swarm optimization, ant colony optimization, or differential evolution). • Machine learning — a subdiscipline of computer science that consists of algorithms that can learn from and make predictions on data (examples include artificial neural networks, self- organized maps, and k-means clustering). • Spectral similarity — refers to the workflow described in this paper for similarity enhancement. • Fault Enhanced similarity — the patented filtering process for similarity enhancement described by Dorn et al. (2012). • Similarity — a family of edge detection attributes that include coherence, variance, the Sobel filter, or similar algorithms. 11/15/2021 64
  • 65. Quick Definitions • Swarm intelligence — a family of algorithms that use decentralized self- organization to perform a task (examples include particle swarm optimization, ant colony optimization, or differential evolution). • Machine learning — a subdiscipline of computer science that consists of algorithms that can learn from and make predictions on data (examples include artificial neural networks, self- organized maps, and k-means clustering). • Spectral similarity — refers to the workflow described in this paper for similarity enhancement. • Fault Enhanced similarity — the patented filtering process for similarity enhancement described by Dorn et al. (2012). • Similarity — a family of edge detection attributes that include coherence, variance, the Sobel filter, or similar algorithms. 11/15/2021 65
  • 66. Quick Definitions • Swarm intelligence — a family of algorithms that use decentralized self- organization to perform a task (examples include particle swarm optimization, ant colony optimization, or differential evolution). • Machine learning — a subdiscipline of computer science that consists of algorithms that can learn from and make predictions on data (examples include artificial neural networks, self- organized maps, and k-means clustering). • Spectral similarity — refers to the workflow described in this paper for similarity enhancement. • Fault Enhanced similarity — the patented filtering process for similarity enhancement described by Dorn et al. (2012). • Similarity — a family of edge detection attributes that include coherence, variance, the Sobel filter, or similar algorithms. 11/15/2021 66
  • 67. Note on Comparisons The case studies will compare the spectral similarity to: • A well-regarded commercial fault enhancement method • A second prominent fault enhancement method based on swarm intelligence algorithms • A traditional eigen structure similarity Dewett and Henza, 2016 11/15/2021 67
  • 68. The Spectral Similarity has been favored by 20+ experienced geoscientists (15–20- year average) with structural geology training and/or over a decade of seismic interpretation experience Dewett and Henza, 2016 Additional Testing 11/15/2021 68
  • 69. filter seismic spectral decomposition pick spec-d volumes run seismic attributes optimize dip response filter and smooth edges combine volumes final result General Workflow Dewett and Henza, 2016 11/15/2021 69
  • 70. Example Workflow Step 1: Filter data as needed Step 2: Decompose data into spectra Step 3: Compute seismic attributes Step 4: Connect & interpolate Step 5: Filter and smooth Step 6: Combine volumes • Addition • Machine Learning Step 7: Dip filter faults Dewett and Henza, 2016 11/15/2021 70
  • 71. Example Workflow Step 1: Filter data as needed Step 2: Decompose data into spectra Step 3: Compute seismic attributes Step 4: Connect & interpolate Step 5: Filter and smooth Step 6: Combine volumes • Addition • Machine Learning Step 7: Dip filter faults Dewett and Henza, 2016 43 Hz 37 Hz 20 Hz 11/15/2021 71
  • 72. Example Workflow Step 1: Filter data as needed Step 2: Decompose data into spectra Step 3: Compute seismic attributes Step 4: Connect & interpolate Step 5: Filter and smooth Step 6: Combine volumes • Addition • Machine Learning Step 7: Dip filter faults Dewett and Henza, 2016 11/15/2021 72
  • 73. Example Workflow Step 1: Filter data as needed Step 2: Decompose data into spectra Step 3: Compute seismic attributes Step 4: Connect & interpolate Step 5: Filter and smooth Step 6: Combine volumes • Addition • Machine Learning Step 7: Dip filter faults Dewett and Henza, 2016 11/15/2021 73
  • 74. Example Workflow Step 1: Filter data as needed Step 2: Decompose data into spectra Step 3: Compute seismic attributes Step 4: Connect & interpolate Step 5: Filter and smooth Step 6: Combine volumes • Addition • Machine Learning Step 7: Dip filter faults Dewett and Henza, 2016 11/15/2021 74
  • 75. Example Workflow Step 1: Filter data as needed Step 2: Decompose data into spectra Step 3: Compute seismic attributes Step 4: Connect & interpolate Step 5: Filter and smooth Step 6: Combine volumes • Addition • Machine Learning Step 7: Dip filter faults Dewett and Henza, 2016 11/15/2021 75
  • 76. Example Workflow Step 1: Filter data as needed Step 2: Decompose data into spectra Step 3: Compute seismic attributes Step 4: Connect & interpolate Step 5: Filter and smooth Step 6: Combine volumes • Addition • Machine Learning Step 7: Dip filter faults Dewett and Henza, 2016 11/15/2021 76
  • 77. Example Workflow Step 1: Filter data as needed Step 2: Decompose data into spectra Step 3: Compute seismic attributes Step 4: Connect & interpolate Step 5: Filter and smooth Step 6: Combine volumes • Addition • Machine Learning Step 7: Dip filter faults Dewett and Henza, 2016 11/15/2021 77
  • 78. Spectral Similarity Workflow • Multistep and multistage • Incorporates spectral decomposition • Bandlimited phase and amplitude • Bandlimited geometric attributes • Uses any number of frequency volumes (variable depending on data) • Uses novel approach to attribute filtering • Combines many input volumes into a single output fault probability attribute • Optionally, uses machine learning classification approaches • Provides a clean input for computer- based fault extraction Dewett and Henza, 2016 11/15/2021 78
  • 79. Spectral Similarity Fault Enhancement Case Study #1: High S:N 11/15/2021 79
  • 80. Fault Enhanced Similarity and Spectral Similarity Dewett and Henza, 2016 11/15/2021 80
  • 81. Fault Enhanced Similarity and Spectral Similarity Improved fault connectivity (yellow arrows) and more reasonable fault dip (red rectangle) in the Spectral Similarity Volume Dewett and Henza, 2016 11/15/2021 81
  • 82. Fault Interpretation Using Spectral Similarity • Useful in manual fault interpretation • Computer based fault extraction • Fault QC and refinement Dewett and Henza, 2016 11/15/2021 82
  • 83. Automatic Fault Interpretation Using Spectral Similarity • 332 faults total • 33% (108 faults) require no edits (orange) • 63% (209 faults) require basic fault splitting (blue) • 4% (15 faults) mesh edits only (red) Dewett and Henza, 2016 11/15/2021 83
  • 84. Spectral Similarity with Peak Frequency Spectral Similarity communicates structural information, while peak frequency communicates lithological and stratigraphic information. This yields a more complete geologic understanding. Dewett and Henza, 2016 11/15/2021 84
  • 85. Range comparison: Fault Enhanced Similarity and Spectral Similarity Dewett and Henza, 2016 Slide 85
  • 86. Spectral Similarity Fault Enhancement Case Study #2: Low S:N 11/15/2021 86
  • 87. Dewett and Henza, 2016 • Land survey • Variable fold • High random noise contamination • Filtered with structure-oriented noise filters in multiple passes Case Study #2 11/15/2021 87
  • 88. Similarity compared to Spectral Similarity Dewett and Henza, 2016 11/15/2021 88
  • 90. Spectral Similarity… • Integrates any seismic attribute and spectral decomposition, • Will improve computer based and human driven interpretation workflows, • Will enhance both small faults and large faults, • Can be customized using machine learning classification algorithms to produce a volume that retains frequency information. Dewett and Henza, 2016 11/15/2021 90
  • 91. Why is this important to understand? • The first major job of convolutional neural network (CNN) machine learning is fault attribute generation. • For a CNN to be useful, it must outperform a human interpreter (speed and quality). • CNN algorithms require a model based on human interpretation or synthetic fault data. • CNN performance in low signal areas is poor. • CNN performance in compressional and transpressional environments is poor. • For a CNN to be successful, it must: • Outperform a coherence in an arbitrary data set. • Outperform the best fault enhancements in arbitrary data sets. • Produce a suitable volume for computer-based fault interpretation. • This work… • Highlights the need for fault skeletonization (a process which has become common as a post-processing step on new CNN algorithms. • Highlights the usefulness of retaining frequency information using unsupervised classification. 11/15/2021 91
  • 92. Why is this important to understand? • The first major job of convolutional neural network (CNN) machine learning is fault attribute generation. • For a CNN to be useful, it must outperform a human interpreter (speed and quality). • CNN algorithms require a model based on human interpretation or synthetic fault data. • CNN performance in low signal areas is poor. • CNN performance in compressional and transpressional environments is poor. • For a CNN to be successful, it must: • Outperform a coherence in an arbitrary data set. • Outperform the best fault enhancements in arbitrary data sets. • Produce a suitable volume for computer-based fault interpretation. • This work… • Highlights the need for fault skeletonization (a process which has become common as a post-processing step on new CNN algorithms. • Highlights the usefulness of retaining frequency information using unsupervised classification. 11/15/2021 92
  • 93. Why is this important to understand? • The first major job of convolutional neural network (CNN) machine learning is fault attribute generation. • For a CNN to be useful, it must outperform a human interpreter (speed and quality). • CNN algorithms require a model based on human interpretation or synthetic fault data. • CNN performance in low signal areas is poor. • CNN performance in compressional and transpressional environments is poor. • For a CNN to be successful, it must: • Outperform a coherence in an arbitrary data set. • Outperform the best fault enhancements in arbitrary data sets. • Produce a suitable volume for computer-based fault interpretation. • This work… • Highlights the need for fault skeletonization (a process which has become common as a post-processing step on new CNN algorithms. • Highlights the usefulness of retaining frequency information using unsupervised classification. 11/15/2021 93
  • 94. Why is this important to understand? • The first major job of convolutional neural network (CNN) machine learning is fault attribute generation. • For a CNN to be useful, it must outperform a human interpreter (speed and quality). • CNN algorithms require a model based on human interpretation or synthetic fault data. • CNN performance in low signal areas is poor. • CNN performance in compressional and transpressional environments is poor. • For a CNN to be successful, it must: • Outperform a coherence in an arbitrary data set. • Outperform the best fault enhancements in arbitrary data sets. • Produce a suitable volume for computer-based fault interpretation. • This work… • Highlights the need for fault skeletonization (a process which has become common as a post-processing step on new CNN algorithms. • Highlights the usefulness of retaining frequency information using unsupervised classification. 11/15/2021 94
  • 95. Why is this important to understand? • The first major job of convolutional neural network (CNN) machine learning is fault attribute generation. • For a CNN to be useful, it must outperform a human interpreter (speed and quality). • CNN algorithms require a model based on human interpretation or synthetic fault data. • CNN performance in low signal areas is poor. • CNN performance in compressional and transpressional environments is poor. • For a CNN to be successful, it must: • Outperform a coherence in an arbitrary data set. • Outperform the best fault enhancements in arbitrary data sets. • Produce a suitable volume for computer-based fault interpretation. • This work… • Highlights the need for fault skeletonization (a process which has become common as a post-processing step on new CNN algorithms. • Highlights the usefulness of retaining frequency information using unsupervised classification. 11/15/2021 95
  • 96. Why is this important to understand? • The first major job of convolutional neural network (CNN) machine learning is fault attribute generation. • For a CNN to be useful, it must outperform a human interpreter (speed and quality). • CNN algorithms require a model based on human interpretation or synthetic fault data. • CNN performance in low signal areas is poor. • CNN performance in compressional and transpressional environments is poor. • For a CNN to be successful, it must: • Outperform a coherence in an arbitrary data set. • Outperform the best fault enhancements in arbitrary data sets. • Produce a suitable volume for computer-based fault interpretation. • This work… • Highlights the need for fault skeletonization (a process which has become common as a post-processing step on new CNN algorithms. • Highlights the usefulness of retaining frequency information using unsupervised classification. 11/15/2021 96
  • 97. Why is this important to understand? • The first major job of convolutional neural network (CNN) machine learning is fault attribute generation. • For a CNN to be useful, it must outperform a human interpreter (speed and quality). • CNN algorithms require a model based on human interpretation or synthetic fault data. • CNN performance in low signal areas is poor. • CNN performance in compressional and transpressional environments is poor. • For a CNN to be successful, it must: • Outperform a coherence in an arbitrary data set. • Outperform the best fault enhancements in arbitrary data sets. • Produce a suitable volume for computer-based fault interpretation. • This work… • Highlights the need for fault skeletonization (a process which has become common as a post-processing step on new CNN algorithms. • Highlights the usefulness of retaining frequency information using unsupervised classification. 11/15/2021 97
  • 98. Why is this important to understand? • The first major job of convolutional neural network (CNN) machine learning is fault attribute generation. • For a CNN to be useful, it must outperform a human interpreter (speed and quality). • CNN algorithms require a model based on human interpretation or synthetic fault data. • CNN performance in low signal areas is poor. • CNN performance in compressional and transpressional environments is poor. • For a CNN to be successful, it must: • Outperform a coherence in an arbitrary data set. • Outperform the best fault enhancements in arbitrary data sets. • Produce a suitable volume for computer-based fault interpretation. • This work… • Highlights the need for fault skeletonization (a process which has become common as a post-processing step on new CNN algorithms. • Highlights the usefulness of retaining frequency information using unsupervised classification. 11/15/2021 98
  • 99. Why is this important to understand? • The first major job of convolutional neural network (CNN) machine learning is fault attribute generation. • For a CNN to be useful, it must outperform a human interpreter (speed and quality). • CNN algorithms require a model based on human interpretation or synthetic fault data. • CNN performance in low signal areas is poor. • CNN performance in compressional and transpressional environments is poor. • For a CNN to be successful, it must: • Outperform a coherence in an arbitrary data set. • Outperform the best fault enhancements in arbitrary data sets. • Produce a suitable volume for computer-based fault interpretation. • This work… • Highlights the need for fault skeletonization (a process which has become common as a post-processing step on new CNN algorithms. • Highlights the usefulness of retaining frequency information using unsupervised classification. 11/15/2021 99
  • 100. Why is this important to understand? • The first major job of convolutional neural network (CNN) machine learning is fault attribute generation. • For a CNN to be useful, it must outperform a human interpreter (speed and quality). • CNN algorithms require a model based on human interpretation or synthetic fault data. • CNN performance in low signal areas is poor. • CNN performance in compressional and transpressional environments is poor. • For a CNN to be successful, it must: • Outperform a coherence in an arbitrary data set. • Outperform the best fault enhancements in arbitrary data sets. • Produce a suitable volume for computer-based fault interpretation. • This work… • Highlights the need for fault skeletonization (a process which has become common as a post-processing step on new CNN algorithms. • Highlights the usefulness of retaining frequency information using unsupervised classification. 11/15/2021 100
  • 101. Outline • Review of Seismic Attribute Taxonomies • Spectral Similarity Fault Enhancement • Efficiency Gains in Inversion- based Interpretation • Conclusions • Motivation • Background • SOM with Inversion Data • Results 11/15/2021 101
  • 102. Efficiency Gains in Inversion- based Interpretation Motivation 11/15/2021 102
  • 103. Motivation • Professional seismic interpreters often lack experience in quantitative interpretation (QI) workflows. • In areas where QI analyses drive prospect identification and prioritization, QI specialists are often a scares resource. • Can a simple classification by a self-organizing map (SOM) algorithm aid inexperienced or less specialized professionals by identifying QI targets? • Can inexperienced or less specialized professionals reliably and correctly identify leads that can be prioritized to QI specialists? 11/15/2021 103
  • 104. Efficiency Gains in Inversion- based Interpretation Background 11/15/2021 104
  • 105. SOM: Briefly Explained Suppose we have a table of data… • Let the columns be different attributes • Let the rows be what we want to classify • We can classify each object by the attributes that are ascribed to them. 11/15/2021 105
  • 106. SOM: Briefly Explained • The system (SOM) will map the input attributes onto an output space, • Generate an initial state (e.g., pseudo random or predefined) • Then iterate the following: • Select training data at random • Compute the winning neuron • Update all neurons • Stop when input is classified, or maximum epochs has been reached • In general, the neurons are interconnected in a defined topology. 11/15/2021 106
  • 107. Seismic Inversion: Briefly Explained Seismic data Well data Seismic Inversion Geophysical Properties (AI, SI, ρ) Rock Physics Model Geological Properties (fluid, lithology, etc.) Well data 11/15/2021 107
  • 108. Seismic Inversion: Briefly Explained Relative • Poor fidelity over long time intervals (a function of the frequency content of the data) • Scaling of the output values is only proportional to the real values • Suffers more from tuning effects • Easy to compute • No initial model dependence Absolute • Higher fidelity over long time intervals • Correct (or more correct) output scaling • Suffers far less from tuning effects • Difficult to compute (both in computational time and knowledge) • Dependent on initial model 11/15/2021 108
  • 109. Seismic Inversion: Briefly Explained Coloured Inversion • Tends to outperform other relative inversion methods (e.g.n RunSum or Recursive Inversion) • Attempts spectrum flattening through wavelet estimation • Matches the output seismic impedance spectrum to the well impedance spectrum • Quick and cheap • Simple and Robust • No background model • No wavelet derivation • Well data are only used for spectral matching 11/15/2021 109
  • 110. Efficiency Gains in Inversion- based Interpretation SOM with Inversion Data 11/15/2021 110
  • 111. Relative Seismic Inversion Results • Given multi-volume results from seismic inversion, typically specialists classify the results based on their prior knowledge and experience combined with an understanding of the rock physics. • However, specialists may be limited in time and more so in supply. • Therefore, if a more unbiased and repeatable approach can be used to classify the inversion response, efficiency can be gained through specialist engagement at key times with specific areas of interest. 11/15/2021 111
  • 112. Relative Seismic Inversion Results • Given multi-volume results from seismic inversion, typically specialists classify the results based on their prior knowledge and experience combined with an understanding of the rock physics. • However, specialists may be limited in time and more so in supply. • Therefore, if a more unbiased and repeatable approach can be used to classify the inversion response, efficiency can be gained through specialist engagement at key times with specific areas of interest. 11/15/2021 112
  • 113. • Begin with inversion products, • Classify the results with a computer-based classification algorithm, • Scan the results for anomalous classifications, • Focus the specialist’s engagement on anomalous areas. 11/15/2021 113
  • 118. Efficiency Gains in Inversion- based Interpretation Results 11/15/2021 118
  • 119. Effect of Adding Classes Most anomalous class (of 16). Determined after a scan of the data one class at a time. Using more classes can allow for finer distinctions between different data points. 11/15/2021 119
  • 120. Effect of Adding Classes • Five classes (of 64) that show the most anomalous events in the data. 11/15/2021 120
  • 121. Effect of Adding Classes • Five classes (of 64) that show the most anomalous events in the data. 11/15/2021 121
  • 123. Results By leveraging a self-organized map and inversion products together, it is possible: • To more rapidly localize interpretation in a dataset while leveraging a higher degree of geologic understanding typically found in prospect interpreters (who commonly lack in-depth QI knowledge). • To make potentially substantial gains in both efficiency and interpretation quality. • To increase the engagement of non-specialists. 11/15/2021 123
  • 124. Seismic Attributes: Taxonomic Classification for Pragmatic Machine Learning Application Conclusions 11/15/2021 124
  • 125. Conclusions • Explored the history of seismic attribute classification and taxonomy. • Surveyed every major influential system • Analyzed each taxonomic system • Provided an update that centers on data analysis • Explored a method of fault enhancement that centers on spectral decomposition and novel filtering techniques. • Works well in noisy and clean data • Can integrate any edge detecting attribute • Identifies fault skeletonization as an important post-processing step • Suggest a method of volume combination using unsupervised machine learning 11/15/2021 125
  • 126. Conclusions • Investigated the use of SOMs (unsupervised machine learning) applied to a seismic inversion. • Identifies an opportunity for efficiency gains by non-specialists. • Suggests a method of quality control for input attributes prior to unsupervised learning. 11/15/2021 126
  • 128. References (Review of Seismic Attribute Taxonomies) Adams, D., and D. Markus, 2013, Systematic error in instantaneous attributes: SEG Technical Program Expanded Abstracts, 1363–1367. Aki, K., and P. G. Richards, 2009, Quantitative Seismology, Second Edition.: University Science Books. Al-Dossary, S., and K. J. Marfurt, 2006, 3D volumetric multispectral estimates of reflector curvature and rotation: Geophysics, 71, P41–P51. Bahorich, M. S., and S. L. Farmer, 1995, 3‐D seismic discontinuity for faults and stratigraphic features: The coherence cube: SEG Technical Program Expanded Abstracts, 93–96. Balch, A. H., 1971, Color Sonagrams: A New Dimension in Seismic Data Interpretation: Geophysics, 36, 1074–1098. Barnes, A., 2006, Too many seismic attributes: CSEG Recorder, 31, 40–45. Barnes, A., 2016, Handbook of Poststack Seismic Attributes: Society of Exploration Geophysicists. Barnes, A., 2017, Instantaneous attributes and subseismic resolution: SEG Technical Program Expanded Abstracts, 2018–2022. Barnes, A. E., 1993, Instantaneous spectral bandwidth and dominant frequency with applications to seismic reflection data: Geophysics, 58, 419–428. Barnes, A. E., 1997, Genetic classification of complex seismic trace attributes: . Bengio, Y., A. Courville, and P. Vincent, 2013, Representation learning: a review and new perspectives: IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 1798–1828. 11/15/2021 128
  • 129. References (Review of Seismic Attribute Taxonomies) Bishop, C. M., M. Svensén, and C. K. I. Williams, 1998, Developments of the generative topographic mapping: Neurocomputing, 21, 203–224. Bodine, J. H., 1984, Waveform analysis with seismic attributes: SEG Technical Program Expanded Abstracts, 69, 505–509. Brown, A., 1996, Seismic attributes and their classification: The Leading Edge, 15, 1090–1090. Brown, A. R., 2011, Interpretation of Three-Dimensional Seismic Data: American Association of Petroleum Geologists. Carnot, L. N. M., 1803, Géométrie de Position: J. B. M. Duprat. Carter, D. C., 1995, Use of Windowed Seismic Attributes in 3D Seismic Facies Analysis and Pattern Recognition: International Symposium on Sequence Stratigraphy, 73–87. Castagna, J. P., and M. M. Backus, 1993, Offset Dependent Reflectivity-Theory and Practice of AVO (Investigations in Geophysics No. 8) (Investigations in Geophysics, Vol 8), Pristine Condition Like New Edition.: Society Of Exploration Geophysicists. Chen, Q., and S. Sidney, 1997a, Advances in seismic attribute technology: SEG Technical Program Expanded Abstracts. Chen, Q., and S. Sidney, 1997b, Seismic attribute technology for reservoir forecasting and monitoring: The Leading Edge, 16, 445–448. Chopra, S., and K. J. Marfurt, 2007, Seismic Attributes for Prospect Identification and Reservoir Characterization: Society of Exploration Geophysicists. 11/15/2021 129
  • 130. References (Review of Seismic Attribute Taxonomies) Chopra, S., and J. P. Castagna, 2014, AVO: SEG Investigations in Geophysics 16: Society of Exploration Geophysicists. Chopra, S., and K. J. Marfurt, 2019, Multispectral, multiazimuth, and multioffset coherence attribute applications: Interpretation, 7, SC21–SC32. Comon, P., 1994, Independent component analysis, A new concept? Signal Processing, 36, 287–314. Cooke, D. A., and W. A. Schneider, 1983, Generalized linear inversion of reflection seismic data: Geophysics, 48, 665–676. Costa, F. A., C. R. Suarez, D. J. Sarzenski, and C. C. Ferreira Guedes, 2007, Using Seismic Attributes in Petrophysical Reservoir Characterization: Latin American & Caribbean Petroleum Engineering Conference. Dao, T., and K. J. Marfurt, 2011, The value of visualization with more than 256 colors: SEG Technical Program Expanded Abstracts, 941–945. Desodt, G., 1994, Complex independent components analysis applied to the separation of radar signals: Proceedings of EUSIPCO-90, 665–668. Forkman, J., J. Josse, and H.-P. Piepho, 2019, Hypothesis Tests for Principal Component Analysis When Variables are Standardized: Journal of Agricultural, Biological, and Environmental Statistics, 24, 289–308. Gao, D., 2011, Latest developments in seismic texture analysis for subsurface structure, facies, and reservoir characterization: A review: Geophysics, 76, W1–W13. Gassaway, G. S., and H. J. Richgels, 1983, SAMPLE: seismic amplitude measurement for primary lithology estimation: SEG Technical Program Expanded Abstracts 1983, 39, 610–613. 11/15/2021 130
  • 131. References (Review of Seismic Attribute Taxonomies) Gersztenkorn, A., and K. J. Marfurt, 1996, Eigenstructure based coherence computations: SEG Technical Program Expanded Abstracts, 328–331. Gersztenkorn, A., and K. J. Marfurt, 1999, Eigenstructure-based coherence computations as an aid to 3-D structural and stratigraphic mapping: Geophysics, 64, 1468–1479. Golub, G. H., 1996, Matrix Computations, 3rd ed..: Johns Hopkins University Press. Guo, H., S. Lewis, and K. J. Marfurt, 2008, Mapping multiple attributes to three- and four-component color models — A tutorial: Geophysics, 73, W7– W19. Hampson, D., T. Todorov, and B. Russell, 2000, Using multi-attribute transforms to predict log properties from seismic data: Exploration Geophysics, 31, 481–487. Hampson, D., J. Schuelke, and J. Quirein, 2001, Use of multiattribute transforms to predict log properties from seismic data: Geophysics, 66, 220–236. Haralick, R. M., K. Shanmugam, and I. Dinstein, 1973, Textural Features for Image Classification: IEEE Transactions on Systems, Man, and Cybernetics, SMC-3, 610–621. Hart, B. S., 2008, Channel detection in 3-D seismic data sing sweetness: AAPG Bulletin, 92, 733–742. Hilterman, F. J., 2001, Seismic Amplitude Interpretation, Distinguished Instructor Short Course: Society of Exploration Geophysicists. Hinton, G. E., S. Osindero, and Y.-W. Teh, 2006, A fast learning algorithm for deep belief nets: Neural Computation, 18, 1527–1554. Hsu, F.-H., 2004, Behind Deep Blue: Building the Computer That Defeated the World Chess Champion: Princeton University Press. 11/15/2021 131
  • 132. References (Review of Seismic Attribute Taxonomies) Joblove, G. H., and D. Greenberg, 1978, Color spaces for computer graphics: Proceedings of the 5th Annual Conference on Computer Graphics and Interactive Techniques, 20–25. Jouppi, N. P., C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P.-L. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, and D. H. Yoon, 2017, In-Datacenter Performance Analysis of a Tensor Processing Unit: Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. Kalkomey, C., 1997, Potential risks when using seismic attributes as predictors of reservoir properties: Leading Edge, 16, 247–251. Kim, Y., R. Hardisty, and K. J. Marfurt, 2019, Attribute selection in seismic facies classification: Application to a Gulf of Mexico 3D seismic survey and the Barnett Shale: Interpretation, 7, SE281–SE297. 11/15/2021 132
  • 133. References (Review of Seismic Attribute Taxonomies) Klokov, A., and S. Fomel, 2012, Separation and imaging of seismic diffractions using migrated dip-angle gathers: Geophysics, 77, S131–S143. Kohonen, T., 1982, Self-organized formation of topologically correct feature maps: Biological Cybernetics, 43, 59–69. Kohonen, T., 2001, Self-Organizing Maps: Springer, Berlin, Heidelberg. Kohonen, T., and T. Honkela, 2007, Kohonen network: Scholarpedia Journal, 2, 1568. Krizhevsky, A., I. Sutskever, and G. E. Hinton, 2017, ImageNet classification with deep convolutional neural networks: Communications of the ACM, 60, 84–90. Latimer, R. B., R. Davidson, and P. van Riel, 2000, An interpreter’s guide to understanding and working with seismic-derived acoustic impedance data: Leading Edge, 19, 242–256. LeCun, Y., Y. Bengio, and G. Hinton, 2015, Deep learning: Nature, 521, 436–444. Lecun, Y., L. Bottou, Y. Bengio, and P. Haffner, 1998, Gradient-based learning applied to document recognition: Proceedings of the IEEE, 86, 2278–2324. Liner, C., 2016, Elements of 3D Seismology: Society of Exploration Geophysicists. Liner, C., C. Li, A. Gersztenkorn, and J. Smythe, 2004, SPICE: A new general seismic attribute: SEG Technical Program Expanded Abstracts, 433–436. Liu, J., and K. Marfurt, 2007, Multicolor display of spectral attributes: Leading Edge, 26, 268–271. Lubo-Robles, D., and K. J. Marfurt, 2018, Unsupervised seismic facies classification using independent component analysis: SEG Technical Program Expanded Abstracts, 2, 1603–1607. 11/15/2021 133
  • 134. References (Review of Seismic Attribute Taxonomies) Luo, Y., W. G. Higgs, and W. S. Kowalik, 1996, Edge detection and stratigraphic analysis using 3D seismic data: SEG Technical Program Expanded Abstracts, 324–327. Luo, Y., S. Al-Dossary, M. Marhoon, and M. Alfaraj, 2003, Generalized Hilbert transform and its applications in geophysics: Leading Edge, 22, 198–202. M. Turhan Taner, Naum M. Derzhi, Joel D. Walls, 2002, Method for generating an estimate of lithological characteristics of a region of the earth’s subsurface: US Patent. Marfurt, K., 2006, Robust estimates of 3D reflector dip and azimuth: Geophysics, 71, P29–P40. Marfurt, K., 2015, Techniques and best practices in multiattribute display: Interpretation, 3, B1–B23. Marfurt, K. J., 2018, Seismic Attributes as the Framework for Data Integration Throughout the Oilfield Life Cycle: Society of Exploration Geophysicists. Marfurt, K. J., R. L. Kirlin, S. L. Farmer, and M. S. Bahorich, 1998, 3-D seismic attributes using a semblance‐based coherency algorithm: Geophysics, 63, 1150–1165. Markidis, S., S. W. D. Chien, E. Laure, I. B. Peng, and J. S. Vetter, 2018, NVIDIA Tensor Core Programmability, Performance Precision: 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 522–531. 11/15/2021 134
  • 135. References (Review of Seismic Attribute Taxonomies) Meldahl, P., R. Heggland, B. Bril, and P. de Groot, 2001, Identifying faults and gas chimneys using multiattributes and neural networks: Leading Edge, 20, 474–482. Menke, W., 2018, Geophysical Data Analysis: Discrete Inverse Theory: Academic Press. Nanni, L., S. Brahnam, S. Ghidoni, E. Menegatti, and T. Barrier, 2013, Different approaches for extracting information from the co-occurrence matrix: PloS One, 8, e83554. Odegard, J. E., P. Steeghs, R. G. Baraniuk, and C. S. Burrus, 1998, Rice Consortium for Computational Seismic Interpretation: Rice University. Oldenburg, D. W., T. Scheuer, and S. Levy, 1983, Recovery of the acoustic impedance from reflection seismograms: Geophysics, 48, 1318–1337. Onstott, G. E., M. M. Backus, C. R. Wilson, and J. D. Phillips, 1984, Color display of offset dependent reflectivity in seismic data: SEG Technical Program Expanded Abstracts, 674–675. Partyka, G., J. Gridley, and J. Lopez, 1999, Interpretational applications of spectral decomposition in reservoir characterization: The Leading Edge, 18, 353–360. Pearson, K., 1901, On lines and planes of closest fit to systems of points in space: The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2, 559–572. Pigott, J. D., M.-H. Kang, and H.-C. Han, 2013, First order seismic attributes for clastic seismic facies interpretation: Examples from the East China Sea: Journal of Asian Earth Sciences, 66, 34–54. 11/15/2021 135
  • 136. References (Review of Seismic Attribute Taxonomies) Purves, S., and H. Basford, 2011, Visualizing geological structure with subtractive color blending: GCSSEPM Extended Abstracts. Qi, X., and K. Marfurt, 2018, Volumetric aberrancy to map subtle faults and flexures: Interpretation, 6, T349–T365. Radovich, B. J., and R. B. Oliveros, 1998, 3-D sequence interpretation of seismic instantaneous attributes from the Gorgon Field: Leading Edge, 17, 1286–1293. Randen, T., and L. Sønneland, 2005, Atlas of 3D Seismic Attributes, in A. Iske and T. Randen, eds., Mathematical Methods and Modelling in Hydrocarbon Exploration and Production, Springer, 23–46. Rijks, E. J. H., and J. C. E. M. Jauffred, 1991, Attribute extraction: An important application in any detailed 3-D interpretation study: Leading Edge, 10, 11– 19. Roberts, A., 2001, Curvature attributes and their application to 3D interpreted horizons: First Break, 19.2, 85–100. Roden, R., and C. W. Chen, 2017, Interpretation of DHI characteristics with machine learning: First Break, 35, 55–63. de Rooij, M., and K. Tingdahl, 2003, Fault detection with meta‐attributes: SEG Technical Program Expanded Abstracts 2003, 354–357. Roy, A., A. S. Romero-Peláez, T. J. Kwiatkowski, and K. J. Marfurt, 2014, Generative topographic mapping for seismic facies estimation of a carbonate wash, Veracruz Basin, southern Mexico: Interpretation, 2, SA31–SA47. Rummerfield, B. F., 1954, Reflection Quality, A Fourth Dimension: Geophysics, 19, 684–694. 11/15/2021 136
  • 137. References (Review of Seismic Attribute Taxonomies) Russell, B., and D. Hampson, 1991, Comparison of poststack seismic inversion methods, in SEG Technical Program Expanded Abstracts, Society of Exploration Geophysicists, 876–878. Saggaf, M. M., M. N. Toksoz, and H. M. Mustafa, 2000, Application of Smooth Neural Networks for Inter-Well Estimation of Porosity from Seismic Data: Massachusetts Institute of Technology. Earth Resources Laboratory. Samuel, A. L., 1959, Some studies in machine learning using the game of Checkers: IBM Journal of Research and Development. Schmidhuber, J., 2015, Deep learning in neural networks: an overview: Neural Networks: The Official Journal of the International Neural Network Society, 61, 85–117. Schot, S. H., 1978, Aberrancy: Geometry of the Third Derivative: Mathematics Magazine, 51, 259–275. Shuey, R. T., 1985, A simplification of the Zoeppritz equations: Geophysics, 50, 609–614. Simonyan, K., and A. Zisserman, 2014, Very Deep Convolutional Networks for Large-Scale Image Recognition: ArXiv [Cs.CV]. Smith, W. B., 1898, Infinitesimal Analysis, by William Benjamin Smith: The Macmillan Company. Smythe, J., A. Gersztenkorn, B. Radovich, C.-F. Li, and C. Liner, 2004, Gulf of Mexico shelf framework interpretation using a bed-form attribute from spectral imaging: Leading Edge, 23, 921–926. 11/15/2021 137
  • 138. References (Review of Seismic Attribute Taxonomies) Taner, M. T., 1997, Kohonen’s Self Organizing Networks with “Conscience”: Rock Solid Images. Taner, M. T., 1999, Seismic Attributes, Their Classification And Projected Utilization: 6th International Congress of the Brazilian Geophysical Society. Taner, M. T., 2001, Seismic attributes: CSEG Recorder, 26, 48–56. Taner, M. T., and R. E. Sheriff, 1977, Application of Amplitude, Frequency, and Other Attributes to Stratigraphic and Hydrocarbon Determination, in C. E. Payton, ed., Seismic Stratigraphy: Applications to Hydrocarbon Exploration (AAPG Memoir 26), , . AAPG Special VolumesAmerican Association of Petroleum Geologists, 301–327. Taner, M. T., F. Koehler, and R. E. Sheriff, 1979, Complex seismic trace analysis: Geophysics, 44, 1041–1063. Taner, M. T., J. Walls, and G. Taylor, 2001, Seismic Attributes, Their Use in Petrophysical Classification: 63rd EAGE Conference & Exhibition. Taner, M. T., J. S. Schuelke, R. O’Doherty, and E. Baysal, 1994, Seismic attributes revisited: SEG Technical Program Expanded Abstracts, 36, 1104– 1106. Tayyab, M. N., S. Asim, M. M. Siddiqui, M. Naeem, S. H. Solange, and F. K. Babar, 2017, Seismic attributes’ application to evaluate the Goru clastics of Indus Basin, Pakistan: Arabian Journal of Geosciences, 10, 158. Todorov, T., D. Hampson, and B. Russell, 1997, Sonic Log Predictions Using Seismic Attributes: University of Calgary. Todorov, T. I., R. R. Stewart, and D. P. Hampson, 1998, Porosity Prediction Using Attributes from 3C–3D Seismic Data: researchgate.net. 11/15/2021 138
  • 139. References (Review of Seismic Attribute Taxonomies) Tukey, J. W., 1962, The Future of Data Analysis: Annals of Mathematical Statistics, 33, 1–67. Turhan Taner, M., 2002, System for estimating the locations of shaley subsurface formations: US Patent. Walker, C., and T. J. Ulrych, 1983, Autoregressive recovery of the acoustic impedance: Geophysics, 48, 1338–1350. Walls, J. D., M. T. Taner, T. Guidish, G. Taylor, D. Dumas, and N. Derzhi, 1999, North Sea reservoir characterization using rock physics, seismic attributes, and neural networks; a case history: SEG Technical Program Expanded Abstracts, 57, 1572–1575. Walls, J. D., M. T. Taner, G. Taylor, M. Smith, M. Carr, N. Derzhi, J. Drummond, D. McGuire, S. Morris, J. Bregar, and J. Lakings, 2000, Seismic reservoir characterization of a mid‐continent fluvial system using rock physics, poststack seismic attributes and neural networks; a case history: SEG Technical Program Expanded Abstracts, 57, 1437–1439. Walls, J. D., M. T. Taner, G. Taylor, M. Smith, M. Carr, N. Derzhi, J. Drummond, D. McGuire, S. Morris, J. Bregar, and J. Lakings, 2002, Seismic reservoir characterization of a U.S. Midcontinent fluvial system using rock physics, poststack seismic attributes, and neural networks: Leading Edge, 21, 428–436. Xing, L., V. Aarre, A. Barnes, T. Theoharis, N. Salman, and E. Tjåland, 2017, Instantaneous Frequency Seismic Attribute Benchmarking: SEG Technical Program Expanded Abstracts. 11/15/2021 139
  • 140. References (Review of Seismic Attribute Taxonomies) Xing, L., V. Aarre, A. E. Barnes, T. Theoharis, N. Salman, and E. Tjåland, 2019, Seismic attribute benchmarking on instantaneous frequency: Geophysics, 84, O63–O72. Yenugu, M. (moe), K. J. Marfurt, and S. Matson, 2010, Seismic texture analysis for reservoir prediction and characterization: Leading Edge, 29, 1116– 1121. Zhang, R., and J. Castagna, 2011, Seismic sparse-layer reflectivity inversion using basis pursuit decomposition: Geophysics, 76, R147–R158. 1977, Seismic Stratigraphy: Applications to Hydrocarbon Exploration (C. E. Payton, ed.,): American Association of Petroleum Geologists. 2014, Understanding Seismic Anisotropy in Exploration and Exploitation, Second Edition, Second Edition. (L. Thomsen, ed.,): The Society of Exploration Geophysicists. 2007, Artificial General Intelligence (B. Goertzel and C. Pennachin, eds.,): Springer, Berlin, Heidelberg. 2012, Characterizing the Robustness of Science: After the Practice Turn in Philosophy of Science (L. Soler, E. Trizio, T. Nickles, and W. Wimsatt, eds.,): Springer, Dordrecht. 11/15/2021 140
  • 141. References (Spectral Similarity Fault Enhancement) Al-Dossary, S., and K. Al-Garni, 2013, Fault detection and characterization using a 3D multidirectional Sobel filter: Presented at the Saudi Arabia Section Technical Symposium and Exhibition, SPE, SPE-168061-MS. Aqrawi, A., and T. Boe, 2011, Improved fault segmentation using a dip guided and modified 3D Sobel filter: 81st Annual International Meeting, SEG, Expanded Abstracts, 999–1003. Bahorich, M. S., and S. L. Farmer, 1995, 3‐D seismic discontinuity for faults and stratigraphic features: The coherence cube: 65th Annual International Meeting, SEG, Expanded Abstracts, 93–96, Crossref. Basir, H., A. Javaherian, and M. Yaraki, 2013, Multi-attribute ant-tracking and neural network for fault detection: A case study of an Iranian oilfield: Journal of Geophysics and Engineering, 10, 015009, Crossref. Dorn, G., B. Kadlec, and M. Patty, 2012, Imaging faults in 3D seismic volumes: 82nd Annual International Meeting, SEG, Expanded Abstracts, Crossref. Gao, D., 2013, Wavelet spectral probe for seismic structure interpretation and fracture characterization: A workflow with case studies: Geophysics, 78, no. 5, O57–O67, Crossref. Garsztenkorn, A., and K. J. Marfurt, 1999, Eigenstructure-based coherence computations as an aid to 3-D structural and stratigraphic mapping: Geophysics, 64, 1468–1479, Crossref. Marfurt, K. J., 2006, Robust estimates of reflector dip and azimuth: Geophysics, 71, no. 4, P29–P40, Crossref. 11/15/2021 141
  • 142. References (Spectral Similarity Fault Enhancement) Marfurt, K. J., R. Kirlin, S. Farmer, and M. Bahorich, 1998, 3-D seismic attributes using a semblance-based coherency algorithm: Geophysics, 63, 1150– 1165, Crossref. Pedersen, S. I., T. Randen, L. Sønneland, and O. Steen, 2002, Automatic 3D fault interpretation by artificial ants: 72nd Annual International Meeting, SEG, Expanded Abstracts, 512–515. Randen, T., S. Pedersen, and L. Sønneland, 2001, Automatic extraction of fault surfaces from three‐dimensional seismic data: 71st Annual International Meeting, SEG, Expanded Abstracts, 551–554. 11/15/2021 142
  • 143. References (Efficiency Gains in Inversion-based Interpretation) Connolly, P.A., 1999, Elastic impedance: The Leading Edge, 18, 438-452, doi: 10.1190/1.1438307. Kohonen, T., 1989, Self-organization and associative memory, 3rd Edition, Springer, New York. Tao Zhao, Sumit Verma, Jie Qi, and Kurt J. Marfurt (2015) Supervised and unsupervised learning: how machines can assist quantitative seismic interpretation. SEG Technical Program Expanded Abstracts 2015: pp. 1734-1738. doi: 10.1190/segam2015-5924540.1 11/15/2021 143