The document provides an overview of principles of seismic data interpretation. It discusses fundamentals of seismic acquisition and processing such as seismic response, phase, polarity, reflections, and resolution. It also covers topics like structural interpretation pitfalls, seismic interpretation workflows involving building databases and time-depth relationships, and structural styles. The document includes sections on depth conversion, subsurface mapping techniques, and different types of velocities.
2. Principles of Seismic Data Interpretation
M.M.Badawy
Page2
Principles of Seismic Data Interpretation
Mahmoud Mostafa Badawy
Assistant Lecturer of Geophysics, Geology Department, Faculty of Science,
Alexandria University, Egypt
Contents:
Fundamentals:
Brief summary on seismic acquisition and processing
Seismic Response
Phase & Wavelet
Polarity
Reflections
Reflection Coefficient
Convolution Theorem
Seismic Resolution
Basic concept of seismic exploration
Seismic events
2D vs. 3D data
Colour, display and 3-D visualization
Structural Interpretation Pitfalls:
Pitfall: Statics Busts
Pitfall: Fault Shadow
Pitfall: Multiples
Seismic Interpretation workflows:
Building Project Database
Time Depth Relationships
Synthetic Seismograms
Check Shot
VSP
Structural Styles & Structural Interpretation:
Normal faults:
PLANARS
LISTRIC
Reverse or Thrust faults:
Fault-bend fold
Fault-propagation fold
Inversion Structures
Strike slip faults
Depth Conversion:
Overview of seismic velocities.
Time-to-depth conversion
Subsurface Mapping Techniques:
Subsurface Structure Mapping
Fault Polygon Definition
The Mapping Process
Fundamentals:
What Makes A Wiggle?
Seismic reflection profiling is an echo-sounding technique. A controlled sound pulse is emitted into the Earth, and the recording system listens for a fixed time for energy reflected back from interfaces within the Earth. The interface is often a geological boundary, for example the change from sandstone to limestone.
Once the travel time to the reflectors and the velocity of propagation are known, the geometry of the reflecting interfaces can be reconstructed and interpreted in terms of geological structure in depth. The principal purpose of seismic surveying is to help understand geological structure and stratigraphy at depth; in the oil industry it is ultimately used to reduce the risk of drilling dry wells.
A wave is a disturbance that propagates through a medium, carrying energy without net transport of the material itself.
What Is A Reflection?
The following figure shows a simple earth model and resulting seismic section used to
illustrate the basic concepts of the method.
The terms source, receiver and reflecting interface are introduced. Sound energy travels through different media (rocks) at different velocities and is reflected at interfaces where the velocity and/or density of the media changes.
The amplitude and polarity of the reflection are proportional to the change in acoustic impedance (the product of velocity and density) across the interface. The arrival of energy at the receiver is termed a seismic event.
A seismic trace records the events and is conventionally plotted below the receiver, with the time (or depth) axis increasing downwards.
Snell's Law
The mathematical description of refraction: the change in the direction of a wavefront as it travels from one medium to another with a change in velocity, with partial conversion and reflection of a P-wave to an S-wave at the interface of the two media.
Snell's law, one of two laws describing refraction, was formulated in the context of light waves, but is applicable to seismic waves. It is named for Willebrord Snellius (1580-1626), a Dutch mathematician.
Snell's law can be written as:

sin θ1 / V1 = sin θ2 / V2 = p

where θ1 is the angle of incidence, θ2 the angle of refraction (transmission), V1 and V2 the velocities of the two media, and p the ray parameter.
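As a quick numerical illustration of the law (a sketch; the velocities and incidence angle below are hypothetical, not from the text):

```python
import math

def refraction_angle(theta1_deg, v1, v2):
    """Transmitted-ray angle from Snell's law: sin(t1)/v1 = sin(t2)/v2."""
    s = math.sin(math.radians(theta1_deg)) * v2 / v1
    if s > 1.0:
        return None  # beyond the critical angle: no transmitted ray
    return math.degrees(math.asin(s))

# A ray entering a faster medium bends away from the normal:
print(round(refraction_angle(20.0, 2000.0, 3000.0), 1))
# A steep ray into a much faster medium exceeds the critical angle:
print(refraction_angle(60.0, 2000.0, 3000.0))  # None
```

Beyond the critical angle the ray travels along the interface; this head wave is the basis of the refraction method.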
Wave Propagation
For small deformations rocks are elastic; that is, they return to their original shape once the small stress applied to deform them is removed. Seismic waves are elastic waves: the "disturbances" which propagate through the rocks.
The most commonly used form of seismic wave is the P (primary) wave, which travels as a series of compressions and rarefactions through the earth, the particle motion being in the direction of wave travel. The propagation of P-waves can be represented as a series of wavefronts (lines of equal phase) which describe circles for a point source in a homogeneous medium (similar to when a stone is dropped vertically onto a calm water surface). As the wavefront expands, the energy is spread over a wider area and the amplitude decays with distance from the source.
This decay is called spherical or geometric divergence and is usually compensated for in seismic
processing. Rays are normal to the wave fronts and diagrammatically indicate the direction of
wave propagation. Usually the shortest ray-path is the direction of interest and is chosen for
clarity. Secondary or S waves travel at up to 70% of the velocity of P-waves and do not travel
through fluids.
The particle motion for an S-wave is perpendicular to its direction of propagation (shear stresses
are introduced) and the motion is usually resolved into a horizontal component (SH waves) and
a vertical component (SV waves).
Reflection: The energy or wave from a seismic source which has been reflected from an
acoustic impedance contrast (reflector) or a series of contrasts within the earth.
Refraction: The change in direction of a seismic ray upon passing into a medium with a
different velocity. The mathematics of this is defined by Snell’s law.
Reflection Coefficient:
The ratio of the amplitude of the reflected wave to that of the incident wave, i.e. how much energy is reflected. For a wave at normal incidence, the reflection coefficient can be expressed as:

RC = (ρ2V2 − ρ1V1) / (ρ2V2 + ρ1V1) = (AI2 − AI1) / (AI2 + AI1)

where AI = ρV is the acoustic impedance of each layer (layer 1 above, layer 2 below).
If the A.I. of the lower formation is higher than that of the upper one, the reflection polarity will be positive, and vice versa.
If the difference in A.I. between the two formations is large, the reflection magnitude (amplitude) will be high.
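The normal-incidence arithmetic can be sketched in a few lines of Python; the sandstone/limestone values below are illustrative assumptions, not from the text:

```python
def reflection_coefficient(v1, rho1, v2, rho2):
    """Normal-incidence reflection coefficient from acoustic impedances.

    AI = velocity * density; RC = (AI2 - AI1) / (AI2 + AI1).
    """
    ai1, ai2 = v1 * rho1, v2 * rho2
    return (ai2 - ai1) / (ai2 + ai1)

# Hypothetical sandstone (3000 m/s, 2.3 g/cc) over limestone (5000 m/s, 2.6 g/cc):
rc = reflection_coefficient(3000, 2.3, 5000, 2.6)
print(round(rc, 3))  # positive: impedance increases downward, so polarity is +ve
```

A larger impedance contrast gives a larger |RC|, and equal impedances give RC = 0 (no reflection), matching the two statements above.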
Tape Formats:
Several tape formats defined by the SEG are currently in use. These standards are often treated
quite liberally, especially where 3D data is concerned. Most contractors also process data using
their own internal formats which are generally more efficient than the SEG standards.
The two commonest formats are SEG-D (for field data) and SEG-Y for final or intermediate
products.
The previous figure shows the typical way in which a seismic trace is stored on tape for SEG-Y
format.
The use of headers is particularly important since these headers are used in seismic processing to
manipulate the seismic data. Older multiplexed formats (data acquired in channel order) such as
SEG-B would typically be demultiplexed (in shot order) and transcribed to SEG-Y before
processing.
In SEG-Y format a 3200-byte EBCDIC (Extended Binary Coded Decimal Interchange Code) "text" header, arranged as forty 80-character card images, is followed by a 400-byte binary header which contains general information about the data, such as the number of samples per trace. This is followed by the 240-byte trace header (which contains important information related to the trace, such as shot point number and trace number) and the trace data itself, stored as IBM floating-point numbers in 4-byte (32-bit) format.
The trace, or a series of traces such as a shot gather, will be terminated by an EOF (End of File) marker. The tape is terminated by an EOM (End of Media) marker. Several lines may be concatenated on tape, separated by two EOF markers (double end of file). Separate lines should have their own EBCDIC headers, although these may be stripped out (particularly for 3D archives) for efficiency. Each trace must have its own 240-byte trace header. Note there are considerable variations in the details of the SEG-Y format.
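The binary-header layout described above can be read with a short script. This is a minimal sketch assuming the SEG-Y rev 1 byte positions (sample interval at file bytes 3217-3218, samples per trace at 3221-3222, both big-endian 16-bit integers); as noted above, real tapes vary:

```python
import struct

# Offsets relative to the start of the 400-byte binary header,
# which follows the 3200-byte EBCDIC header (SEG-Y rev 1 layout).
SAMPLE_INTERVAL_OFFSET = 16  # file bytes 3217-3218: sample interval, microseconds
NUM_SAMPLES_OFFSET = 20      # file bytes 3221-3222: samples per data trace

def parse_binary_header(buf):
    """Pull sample interval (us) and trace length from a SEG-Y binary header."""
    dt_us, = struct.unpack_from(">h", buf, SAMPLE_INTERVAL_OFFSET)
    nsamp, = struct.unpack_from(">h", buf, NUM_SAMPLES_OFFSET)
    return dt_us, nsamp

# Synthetic header for illustration: 4 ms sampling, 1500 samples (a 6 s trace).
hdr = bytearray(400)
struct.pack_into(">h", hdr, SAMPLE_INTERVAL_OFFSET, 4000)
struct.pack_into(">h", hdr, NUM_SAMPLES_OFFSET, 1500)
print(parse_binary_header(bytes(hdr)))  # (4000, 1500)
```

In practice one would seek past the 3200-byte EBCDIC header before reading these 400 bytes from the file.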
Processing Concept:
The purpose of seismic processing is to manipulate the acquired data into an image that can be
used to infer the sub-surface structure. Only minimal processing would be required if we had a
perfect acquisition system.
Processing consists of the application of a series of computer routines to the acquired data
guided by the hand of the processing geophysicist. There is no single "correct" processing
sequence for a given volume of data.
At several stages, judgments or interpretations have to be made which are often subjective and rely on the processor's experience or bias. The interpreter should be involved at all stages to check that processing decisions do not radically alter the interpretability of the results in a detrimental manner.
Processing routines generally fall into one of the following categories:
enhancing signal at the expense of noise
providing velocity information
collapsing diffractions and placing dipping events in their true subsurface locations
(migration)
increasing resolution (wavelet processing)
A Processing Flow:
A processing flow is a collection of processing routines applied to a data volume. The processor
will typically construct several jobs which string certain processing routines together in a
sequential manner.
Most processing routines accept input data, apply a process to it and produce output data which
is saved to disk or tape before passing through to the next processing stage. Several of the stages
will be strongly interdependent and each of the processing routines will require several
parameters some of which may be defaulted.
Some of the parameters will be defined, for example by the acquisition geometry and some must
be determined for the particular data being processed by the process of testing.
[Figure: factors which affect seismic amplitudes]
New Data:
Tape containing recorded seismic data (trace sequential or multiplexed)
Observer logs/reports
Field Geophysicist logs/reports and listings
Navigation/survey data
Field Q.C. displays
Contractual requirements
Simple Processing Sequence Flow:
Reformat
Geometry Definition
Field Static Corrections (Land - Shallow Water - Transition Zone)
Amplitude Recovery
Noise Attenuation (De-Noise)
Deconvolution
CMP Gather
NMO Correction
De-multiple (Marine)
CMP Stack
Migration
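The NMO correction step in the sequence above flattens reflections on the CMP gather using the hyperbolic moveout equation; a minimal sketch (the time, velocity and offsets are illustrative):

```python
import math

def nmo_time(t0, offset, velocity):
    """Hyperbolic travel time t(x) = sqrt(t0^2 + x^2 / v^2).

    NMO correction shifts each sample back by t(x) - t0 so that the
    reflection becomes flat across the CMP gather before stacking.
    """
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)

t0 = 1.0    # zero-offset two-way time, s
v = 2500.0  # NMO (stacking) velocity, m/s
for x in (0.0, 1000.0, 2000.0):
    tx = nmo_time(t0, x, v)
    print(f"offset {x:6.0f} m  t(x) = {tx:.3f} s  NMO shift = {tx - t0:.3f} s")
```

The velocity that best flattens the event is the stacking velocity picked during velocity analysis.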
The Seismic Method:
(Use acoustic waves (sound) to image the subsurface)
Measure
Time for sound to get from surface to subsurface reflectors and back - Two-
way traveltime (twt)
Amplitude of reflection
Wanted:
Depth - Need to know subsurface velocities
Rock properties (porosity, saturation, etc.)
Spherical Divergence:
As the wavefront expands away from the source, its energy is spread over an ever-larger spherical surface, so recorded amplitudes decay with time, and this decay must be compensated in processing.
The surface area of a sphere is proportional to the square of its radius, so the energy density lost to spherical divergence is proportional to 1/r² (and amplitude decays as 1/r).
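A minimal constant-velocity sketch of the corresponding gain correction (an assumption for illustration: amplitude decays as 1/r, so each sample is scaled by the distance r = v·t/2 travelled by two-way time t):

```python
def divergence_gain(t, v):
    """Amplitude gain compensating spherical divergence.

    Energy spreads over a sphere of area 4*pi*r^2, so energy density
    falls as 1/r^2 and amplitude as 1/r. With r = v * t / 2 for
    two-way time t, multiplying each sample by r restores amplitude.
    """
    return v * t / 2.0

# A sample at 2 s two-way time in a 2000 m/s medium is boosted
# twice as much as a sample at 1 s:
print(divergence_gain(2.0, 2000.0) / divergence_gain(1.0, 2000.0))  # 2.0
```

Real processing uses velocity-dependent gains (the medium is not constant-velocity), but the 1/r principle is the same.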
Direct Waves:
These are source-generated waves travelling directly from the source to the receiver, dominant at near offsets. They can be attenuated by normal moveout correction, muting and stacking.
Refractions:
These are generated by critically refracted waves from the near-surface layers and are dominant at far offsets. They can be attenuated by NMO, muting and stacking.
Ground Roll:
Source-generated noise from surface waves propagating through the near-surface layers without net particle movement. It is dominant in the upper part of the data, where it interferes with the direct and refracted waves.
Its characteristics: low velocity, low frequency and high amplitude.
It can be attenuated by an F-K filter or a Tau-p filter.
Zero phasing:
A process that can be applied among the first steps or the last, but is preferably applied first.
Zero phase: the maximum amplitude is at zero time. A true zero-phase wavelet is a mathematical idealization, but we can come close to it using Vibroseis.
Minimum phase: maximum amplitude at minimum time; obtained with dynamite.
Maximum phase: maximum amplitude at maximum time.
Mixed phase: a phase between minimum and maximum; obtained with an air gun.
Zero phasing is a process that moves peaks and troughs to the reflector position, instead of above or below it, to facilitate the interpretation process.
Zero phasing requires: 1. a model of the source; 2. cross-correlation.
For an air gun we obtain the source signature from the contractor. Using software, we then determine the interval between the maximum amplitude and zero time, and shift the wavelet toward zero time by that amount.
The bubble effect can be attenuated by designing the wavelet before shifting: in this step we design a filter which, multiplied with the source signature, ensures that the result is a zero-phase signature. We then apply this filter to the seismic data by cross-correlation.
For dynamite, the source can be modelled from the charge size, hole depth and recorder model. We also determine whether the polarity of the traces is normal or reversed. For Vibroseis this step is not required.
These wavelets all have similar frequency content, but different phase. The ideal wavelet from the interpreter's point of view is zero phase. In a zero-phase wavelet, each frequency component is lined up so that the wavelet is symmetrical. This creates the shortest possible wavelet, and the main peak is aligned at the time corresponding to the travel time to the reflector, facilitating correlation between seismic data and geology.
One aim of processing is to bring the data to zero phase. This is best done by careful control of all the processes through stack and migration, followed by calibration against one or, preferably, several wells. In the absence of well data, it is possible to use a strong isolated reflector, such as a hard water bottom, chalk, or top salt reflector, or to calibrate against another seismic dataset of known phase.
Impulsive source data, such as marine air gun or dynamite, is close to minimum phase when acquired. For a given frequency content, the minimum-phase wavelet is the wavelet that has its energy as close to zero time as possible with no energy before zero. It is easy to transform a minimum-phase wavelet to zero phase mathematically, and this is done during processing.
We need to distinguish between the phase of the wavelet and the phase of the individual frequency components. In the case of a zero-phase wavelet, all the contributing frequencies have zero phase also.
Exercise 1:
What is the dominant frequency of the seismic data in the interval between 1500 and 1600 ms? If the velocity is 5000 m/s, what is the tuning thickness? If it is possible to detect a bed down to 1/16 of the wavelength, what would that be?
Answer:
Dominant frequency: about 4½ cycles in 100 ms = 45 cycles/second = 45 Hz.
Tuning thickness: frequency = 45 Hz, velocity = 5000 m/s, so wavelength = 5000/45 ≈ 111 m and tuning thickness = ¼ × 111 ≈ 28 m.
Detection limit: 1/16 × 111 ≈ 7 m.
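The same arithmetic in Python, including the 1/16-wavelength detection limit the question asks for:

```python
def dominant_frequency(cycles, window_ms):
    """Cycles counted in a time window give frequency in Hz."""
    return cycles / (window_ms / 1000.0)

freq = dominant_frequency(4.5, 100.0)  # ~4.5 cycles in 100 ms -> 45 Hz
wavelength = 5000.0 / freq             # velocity / frequency, ~111 m
tuning = wavelength / 4.0              # quarter-wavelength tuning, ~28 m
detection = wavelength / 16.0          # 1/16-wavelength detection limit, ~7 m
print(freq, round(wavelength), round(tuning), round(detection))  # 45.0 111 28 7
```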
The practice of seismic rock physics:
The practice of seismic rock physics depends to a large extent on the application. In some
cases, simply fluid substituting the logs in a dry well and generating synthetic gathers for
various fluid fill scenarios may be all that is needed to identify seismic responses
diagnostic of hydrocarbon presence.
On the other hand, generating stochastic inversions for reservoir prediction and
uncertainty assessment will require a complete rock physics database in which the elastic
properties of various lithofacies and their distributions are defined in an effective pressure
context. Either way, the amount of knowledge required to master the art of seismic rock
physics is a daunting prospect for the seismic interpreter.
Kinds of Velocity:
• Average velocity: represents depth to a bed (from surface to layer). Average velocity is commonly calculated by assuming a vertical path, parallel layers and straight ray paths, conditions that are quite idealized compared to those actually found in the Earth.
• Pseudo-average velocity: when we have time from seismic and depth from a well.
• True average velocity: when we measure velocity by VSP, sonic, or coring.
• Interval velocity: the velocity, typically P-wave velocity, of a specific layer or layers of rock.
• Pseudo-interval velocity: when we have time from seismic and depth from a well.
• True interval velocity: when we measure velocity by VSP or check shot.
• Stacking velocity: the distance-time relationship determined from analysis of normal moveout (NMO) measurements on common-depth-point gathers of seismic data. The stacking velocity is used to correct the arrival times of events in the traces for their varying offsets prior to summing, or stacking, the traces to improve the signal-to-noise ratio of the data.
• RMS velocity: the root-mean-square velocity; closely comparable to the stacking velocity, which typically exceeds it by up to about 10%.
• Instantaneous velocity: the most detailed velocity (it comes from sonic tools) and can be measured at every foot.
• Migration velocity: used to migrate energy from one point to its true position (usually greater or less than the stacking velocity by 5-15%).
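Interval velocities are commonly derived from RMS (or stacking) velocities with the Dix equation, a standard relation not spelled out in the text; the velocity picks below are hypothetical:

```python
import math

def dix_interval_velocity(v_rms1, t1, v_rms2, t2):
    """Dix interval velocity between two reflectors from RMS velocities.

    V_int^2 = (V2^2 * t2 - V1^2 * t1) / (t2 - t1),
    where t1, t2 are zero-offset two-way times to the two reflectors.
    """
    num = v_rms2 ** 2 * t2 - v_rms1 ** 2 * t1
    return math.sqrt(num / (t2 - t1))

# RMS velocity rising from 2000 m/s at 1.0 s to 2200 m/s at 1.5 s:
v_int = dix_interval_velocity(2000.0, 1.0, 2200.0, 1.5)
print(round(v_int))  # interval velocity of the layer between the two picks
```

The result is the velocity of the single layer between the picks, which is why Dix-derived intervals are a common starting point for time-to-depth conversion.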
Convolution:
A mathematical way of combining two signals to produce a third, modified signal.
The signal we record responds well to being treated as a series of signals superimposed upon each other; that is, seismic signals behave convolutionally. The process of DECONVOLUTION is the reversal of the convolution process.
Convolution in the time domain is represented in the frequency domain by multiplying the amplitude spectra and adding the phase spectra.
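A minimal illustration of convolution applied to a spiky reflectivity series (the reflectivity values and 3-sample wavelet are hypothetical):

```python
def convolve(a, b):
    """Discrete convolution: out[n] = sum over k of a[k] * b[n - k]."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# Two reflectivity spikes convolved with a crude symmetric wavelet:
reflectivity = [0.0, 0.3, 0.0, 0.0, -0.2, 0.0]
wavelet = [0.5, 1.0, 0.5]
trace = convolve(reflectivity, wavelet)
print([round(s, 2) for s in trace])
# [0.0, 0.15, 0.3, 0.15, -0.1, -0.2, -0.1, 0.0]
```

Each reflectivity spike stamps a scaled copy of the wavelet onto the output trace, which is exactly the "superimposed signals" picture described above.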
The Power of Stack:
Relies on the signal being in phase and the noise being out of phase, i.e. the primary signal is 'flat' on the CMP gather after NMO corrections.
A spatial or K-filtering process.
Data reduction, usually to an [almost] 'zero-offset' trace.
Attenuates coherent noise in the input record (to varying degrees).
Attenuates random noise relative to signal by up to √N, where N = number of traces stacked (i.e. fold of stack).
K filter: filtering of spatial frequencies by summing/mixing.
K-filter: apply an 'all-ones' filter and output the central sample.
To apply a spatial K-filter to a record we must first collect the series of samples having the same time value from each data trace, i.e. form a common-time trace.
This is the input data which must be convolved with our chosen filter to produce the filtered output. The process is applied to each common-time trace in turn (0 ms, 4 ms, 8 ms, etc.).
The summing filter is a high-cut spatial filter. It passes energy close to K=0, i.e. effectively dips close to 0 ms per trace. Therefore, if the signal has been aligned to zero dip (as in NMO-corrected data), the signal will be passed.
Organized noise contained in steeper dips will be suppressed, except at low temporal frequencies or if the noise aliases and wraps around through K=0.
If we increase the number of filter points, i.e. increase the fold, then the filter becomes more effective at passing only energy close to K=0, or dips closer to zero.
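The √N random-noise reduction from stacking can be demonstrated with synthetic traces; this is a sketch, and the fold, signal level and noise level are arbitrary assumptions:

```python
import random
import statistics

random.seed(0)
n_traces, n_samples = 16, 2000

# Each trace carries the same signal (a constant 1.0 here) plus independent
# unit-variance Gaussian noise; the stack is the sample-by-sample mean.
traces = [[1.0 + random.gauss(0.0, 1.0) for _ in range(n_samples)]
          for _ in range(n_traces)]
stacked = [sum(tr[i] for tr in traces) / n_traces for i in range(n_samples)]

single_rms = statistics.pstdev(traces[0])  # noise level on one trace, ~1.0
stack_rms = statistics.pstdev(stacked)     # noise level after 16-fold stack
print(f"noise rms: single trace {single_rms:.2f}, 16-fold stack {stack_rms:.2f}")
# the improvement approaches sqrt(16) = 4x for purely random noise
```

Coherent noise is not reduced this way, which is why the text treats the stack separately as a K-filter for dipping (organized) noise.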
Relating Density to Compression-Wave Velocity:
A popular relation between density (ρ) and P-wave velocity (VP) is that of Gardner
et al. (1974). The relation takes the following forms, depending on the units of VP (in all cases
the units of density are g/cc):
Ft/sec: (1) ρ = 0.23 VP^0.25
Km/sec: (2) ρ = 1.74 VP^0.25
M/sec: (3) ρ = 0.31 VP^0.25
Their relation is simply an approximate average of the relations for a number of
sedimentary rock types, weighted toward shales. The relation comes from the figure in
Gardner et al. (1974):
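Relation (3), with VP in m/s and density in g/cc, as a one-line function (the 3000 m/s test value is illustrative):

```python
def gardner_density(vp_mps):
    """Gardner et al. (1974) density estimate: rho = 0.31 * Vp^0.25.

    Vp in m/s, density in g/cc; an average trend weighted toward shales,
    so expect it to misfit individual lithologies.
    """
    return 0.31 * vp_mps ** 0.25

print(round(gardner_density(3000.0), 2))  # ~2.29 g/cc for a 3000 m/s rock
```

This sort of estimate is typically used to fill in density where no density log exists, e.g. when building acoustic impedance for a synthetic seismogram.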
INTERPRETATION
It is the last step in the seismic method. It means the transformation of seismic data
presented on seismic sections into geological information.
Seismic interpretation is an art that needs to be based on a clear knowledge of highly
developed technology and a proper understanding of what actually can happen within the
earth.
In the past, interpretation was mainly directed at the detection of subsurface geologic structures. At present, interpretation has been extended to include the detection and mapping of sand bodies and stratigraphic traps.
Seismic Ties To Well Data:
When the interpreter comes to establish a tie between the seismic sections and a borehole section, he or she faces the problem of making a direct correlation between a pattern of reflectors, scaled vertically in terms of two-way reflection time, and the realities of subsurface geology.
Well Velocity Survey:
The well velocity survey is the most direct method of identifying the relationship between subsurface geology and the seismic reflection data. The technique involves detecting sound from a near-surface source with a downhole geophone at selected levels within the borehole. These levels are usually chosen with reference to major changes in formations in the geological section.
Vertical Seismic Profiling:
By using digital acquisition equipment it is possible to derive additional data beyond those required to produce a calibrated sonic log. If a sufficient time interval is sampled, the data from each test level provide a record equivalent to a reflection seismic trace with a deeply buried detector. Because the detector is at depth, both upward- and downward-travelling wavefields are recorded, from reflecting horizons above and below the detector's location, as well as the multiples generated in the time progression.
The product, after processing, is displayed in a form similar to that of a variable-area seismic section, as a Vertical Seismic Profile (VSP).
Synthetic Seismograms:
The synthetic seismogram is considered to be of great value to the interpreter and it is best
presented by splicing it to an interpreted seismic section through the well location. The
acoustic impedance is calculated by multiplying seismic velocity by the density, and
reflection coefficients are calculated from impedance changes.
For comparison with the seismic trace, the reflection coefficient series must be convolved
with a suitable wavelet. Choice of the wavelet is critical for the appearance of the final
synthetic seismogram
[Figure: synthetic seismogram panels: transit time, density, reflectivity, primaries-only trace]
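Putting the pieces together, a compact sketch of a synthetic seismogram: impedance from velocity and density logs, reflection coefficients from impedance changes, then convolution with a zero-phase Ricker wavelet. The wavelet choice and the three-layer log values are assumptions for illustration, not from the text:

```python
import math

def ricker(f_peak, dt, half_len):
    """Zero-phase Ricker wavelet sampled every dt seconds."""
    w = []
    for i in range(-half_len, half_len + 1):
        t = i * dt
        a = (math.pi * f_peak * t) ** 2
        w.append((1 - 2 * a) * math.exp(-a))
    return w

def synthetic(velocities, densities, f_peak=30.0, dt=0.004):
    """Reflectivity from impedance contrasts, convolved with the wavelet."""
    ai = [v * d for v, d in zip(velocities, densities)]
    rc = [0.0] * len(ai)
    for i in range(1, len(ai)):
        rc[i] = (ai[i] - ai[i - 1]) / (ai[i] + ai[i - 1])
    w = ricker(f_peak, dt, 12)
    trace = [0.0] * (len(rc) + len(w) - 1)
    for i, r in enumerate(rc):
        for j, s in enumerate(w):
            trace[i + j] += r * s
    return trace

# Hypothetical three-layer log: shale over a low-impedance gas sand over shale.
v = [2400.0] * 20 + [2100.0] * 10 + [2600.0] * 20
d = [2.4] * 20 + [2.1] * 10 + [2.5] * 20
trace = synthetic(v, d)
print(min(trace) < 0 < max(trace))  # True: a trough at the top, a peak at the base
```

As the text notes, swapping the wavelet (frequency, phase) changes the appearance of the result markedly, which is why wavelet choice is critical when tying the synthetic to real data.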
General Principles [Seismic Facies Parameters]:
Continuity:
The criterion observed on a seismic section when a waveform, the seismic arrival of a reflection, can be recognized on successive traces, perhaps with small changes in arrival time from trace to trace.
These repeated pulses create an alignment, and this alignment has a continuity which can be followed. The length of continuity represents an "island of confidence" from which one can work in both directions.
The visual impression is dominated by the alignment, not by individual pulses. Seismic continuity of a reflection is not an expression of the continuity of a single geologic unit. It is an expression of the continuity of two geological units, one following immediately on top of the other, whose contact is the interface at which the reflection is produced.
Correlation:
Correlation is pattern recognition. The pattern may be a single pulse distinguished by its length, amplitude or shape, the characteristics of individual reflections, or the spacing between them. It is used primarily to relate one area of confidence to another.
Correlation is based on:
the shape of individual pulses
the sequence of reflections and their spacing
The sequence of reflections is a very reliable basis for correlation. The spacing of reflections is less reliable: thickening and thinning, changes in seismic velocities, unconformities and other features tend to change the spacing of reflections.
Tracing A Seismic Horizon (Phantom):
The primary purpose of most seismic surveys is to determine structure, and this can be achieved by tracing identifiable seismic horizons on cross-sections.
Causes of mis-ties at line intersections:
Changes in interval time due to a change of the water table on land or large tidal movements at sea.
Changes in stacking velocities.
Errors in the survey.
Recording and/or processing changes (parameters).
Noise.
Splitting Of Reflections:
The sequence is thickening.
The sequence is changing.
Over-step relationship at an unconformity.
Overlap.
Naming Of Reflections:
The identification of a seismic reflection requires two geological names, the rock above and the rock below the contact which generates the reflection. Names can be established from:
Drilled wells
Outcrops
Tie to another survey
Interpretation Process:
Data:
Available surveys.
Versions of the seismic sections.
Base maps.
Velocity (NMO, Migration, depth conversion).
The Interpretation:
Data review Q.C, overall impression of the geology, side label.
Seismic data quality.
Seismic panel.
Data quality map, to select lines, areas of easy interpretation, work schedule.
Geological review and well to seismic.
Identification of seismic sequence.
Identification of seismic boundaries.
Well tie, synthetics.
Horizon selection.
Objective horizon plus one above and one below it.
Interpretation of the seismic sections.
Section folding at all intersections.
Picking.
Line tying and correlation.
Digitizing.
Contouring.
Loop Tying
Contouring Rules:
Recognize trends, establish regional dip, search for dip reversals and seek a geological rationale for trends in anomalies (folds, faults, reefs).
Contour from dense data and simple geology toward sparse data and more complex geology.
Locally reduce the contour interval in complicated areas if the structural form is unclear.
Be suspicious of a closed high within a low.
Be suspicious of a closed low on top of a high.
Look twice at a low trend.
Be wary of like contours which run parallel over a considerable distance.
Be wary of contours that bear a relationship to the seismic grid.
Check the interpretation against the seismic sections, especially in regions of complex structure.
Contouring for locating a well or delineating a field should be done with maximum objectivity, not optimism.
Geological Structural Styles
Fault:
A break or planar surface in brittle rock across which there is observable displacement.
Depending on the relative direction of displacement between the rocks, or fault blocks, on
either side of the fault, its movement is described as normal, reverse or strike-slip.
According to terminology derived from the mining industry, the fault block above the fault
surface is called the hanging wall, while the fault block below the fault is the footwall.
Given the geological complexity of some faulted rocks and rocks that have undergone more
than one episode of deformation, it can be difficult to distinguish between the various
types of faults. Also, areas deformed more than once or that have undergone continual
deformation might have fault surfaces that are rotated from their original orientations,
so interpretation is not straightforward. In a normal fault, the hanging wall moves down relative to the footwall along the dip of the fault surface, which is steep, from 45° to 90°.
A growth fault is a type of normal fault that forms during sedimentation and typically has thicker strata on the downthrown hanging wall than on the footwall. A reverse fault forms when the hanging wall moves up relative to the footwall parallel to the dip of the fault surface. A thrust fault, sometimes called an overthrust, is a reverse fault in which the fault plane has a shallow dip, typically much less than 45°.
Normal fault:
A type of fault in which the hanging wall moves down relative to the footwall and the fault
surface dips steeply, commonly from 50° to 90°. Groups of normal faults can
produce horst and graben topography, or a series of relatively high- and low-standing fault
blocks, as seen in areas where the crust is rifting or being pulled apart by plate tectonic
activity.
Reverse fault:
A type of fault formed when the hanging wall fault block moves up along a fault surface
relative to the footwall. Such movement can occur in areas where the Earth's crust is
compressed. A thrust fault, sometimes called an overthrust if the displacement is particularly great, is a reverse fault in which the fault plane has a shallow dip, typically much less than 45°.
Growth fault:
A type of normal fault that develops and continues to move during sedimentation and
typically has thicker strata on the downthrown, hanging wall side of the fault than in the
footwall. Growth faults are common in the Gulf of Mexico and in other areas where
the crust is subsiding rapidly or being pulled apart.
Antithetic fault:
A minor, secondary fault, usually one of a set, whose sense of displacement is opposite to its
associated major and synthetic faults. Antithetic-synthetic fault sets are typical in areas of
normal faulting.
Synthetic fault:
A type of minor fault whose sense of displacement is similar to its associated major fault.
Antithetic-synthetic fault sets are typical in areas of normal faulting.
Inversion Tectonics
The reversal of structural features, particularly faults, by reactivation. For example, a normal fault might move in a direction opposite to its initial movement.
Basic inversion terminology:
Criteria of inversion:
Reduced dip of a growth fault.
Normal - null.
Normal - reverse.
Reverse - null.
Kink fold.
Short steep limb, long gentle limb.
Horst - graben.
Half grabens.
The main benefit of recognizing inversion is identifying basin shift.
Interpreting Seismic Amplitude:
In areas with favorable rock properties it is possible to detect hydrocarbons directly using standard 3-D seismic data.
Amplitude interpretation is then very effective in reducing risk when selecting exploration and production drilling locations.
Not all areas have such favorable rock physics, but it is always useful to understand
what seismic amplitudes may be telling us about hydrocarbon presence or reservoir
quality.
As well as amplitudes on migrated stacked data, it is often useful to look at pre-stack
data and the way that amplitude varies with source-receiver offset (AVO).
The first step is to use well log data to predict how seismic response will change
with different reservoir fluid fill (gas or oil or brine), with changing reservoir
porosity, and with changing reservoir thickness.
AVO [Amplitude versus Offset]:
AVO stands for amplitude variation with offset, or amplitude versus offset.
The AVO techniques use the amplitude variations of pre-stack seismic reflections to
predict reservoir fluid effect.
The AVO response depends on the P-wave velocity, S-wave velocity and density of the porous reservoir rock.
The calibration of amplitude to reflectivity is possible from a well tie, but the
calibration is valid only over a limited interval vertically.
In any case, it is a good idea to inspect the entire section from the surface to the target event and below, to check whether amplitude anomalies at target level are correlated with overlying or underlying changes (high or low amplitudes due to lithology or gas effects, or overburden faulting, for example).
Following the amplitude anomaly through the seismic processing sequence from the
raw gathers may be helpful; this may reveal an artifact being introduced in a
particular processing step.
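A common first-pass AVO model is the two-term Shuey approximation, R(θ) ≈ R0 + G·sin²θ. It is not named in the text but is a standard way to summarize pre-stack amplitude behavior; the intercept and gradient values below are hypothetical:

```python
import math

def shuey_two_term(r0, gradient, angle_deg):
    """Two-term Shuey approximation: R(theta) ~ R0 + G * sin^2(theta).

    R0 is the normal-incidence reflectivity (intercept) and G the AVO
    gradient; the approximation holds roughly for angles below ~30 degrees.
    """
    s = math.sin(math.radians(angle_deg))
    return r0 + gradient * s * s

# A classic Class III gas-sand style response: negative intercept and negative
# gradient, so the reflection gets brighter (more negative) with offset.
for theta in (0, 15, 30):
    print(f"{theta:2d} deg  R = {shuey_two_term(-0.05, -0.15, theta):+.4f}")
```

Crossplotting intercept R0 against gradient G picked from real gathers is the usual way this feeds into fluid prediction.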
DHI [Direct Hydrocarbon Indicators]:
AGC (Automatic Gain Control) destroys relative amplitude relationships and so limits the opportunity for amplitude studies (e.g. bright spots).
Important considerations in seismic data processing for DHI are:
Polarity, Phase, Amplitude and Spatial extent.
Frequency, Velocity, Amplitude/Offset and Shear wave information help in the
positive identification of DHI.
Flat spot is a fluid contact reflection.
Bright spot reflects gas accumulations.
Termination of flat and bright spot at the same point increases the confidence of
hydrocarbon presence.