This document discusses tools for modeling elastic wave propagation to aid in seismic survey planning. It summarizes three main modeling techniques: recursive reflectivity methods, ray tracing methods, and full wavefield methods using finite-differencing. Ray tracing is useful for optimizing survey geometry but not reflectivity studies, while reflectivity and finite-difference methods model full wavefields and are better for amplitude studies like AVO. Integrating these modeling tools with real data and rock physics analysis allows comprehensive understanding of wave propagation for effective survey planning addressing all acquisition parameters and seismic phenomena.
Seismic Modeling ASEG 082001 Andrew Long
Understanding elastic wavefield recording by detailed 3D survey planning and simulation

Andrew S. Long, PGS Seres AS, Australia, andrew.long@prth.pgs.com
Hans-Jurgen Hoffmann, PGS Seres AS, Norway, hans-jurgen.hoffmann@oslo.pgs.com
Bingwen Du, PGS Seres AS, Australia, bingwen.du@prth.pgs.com
SUMMARY

Elastic modelling of seismic wave propagation must be pursued with different algorithms to fully understand all aspects of the recorded information. Modern seismic survey planning will utilize a suite of modelling tools, each incorporating the full acquisition system response for any given survey. Therefore, the user can discriminate between acquisition and Earth effects upon the data. Survey planning no longer simply estimates the basic configuration of the acquisition equipment. Every facet of the seismic method must be replicated and understood. Then the data implications of any given acquisition approach (streamer, land, ocean bottom sensor, vertical cable) can be understood and treated throughout the entire processing and interpretation workflow. In particular, the ability to accurately model the Earth reflectivity sequence is critical for AVO, reservoir characterization and time-lapse studies.

After summarizing the main modelling algorithms, we describe their integrated use within survey planning exercises, with emphasis upon addressing the elastic properties of the recorded seismic wavefield.

Key words: Modelling, elastic, acoustic, survey design.

INTRODUCTION

Wavefield propagation in an inhomogeneous medium is a complex phenomenon that must be understood if we are to extract information about both the sedimentary structures of prospective hydrocarbon provinces and the elastic properties of the rocks and fluids of interest. A cursory examination of historically published "seismic survey planning" literature yields an overwhelming emphasis upon fundamental geometric issues – offset requirements, simple criteria for estimating spatial sampling requirements, required record lengths, etc. Such criteria are no longer sufficient: they make simplistic and unrealistic assumptions about the Earth, and shed no light upon the dynamic aspects of wave propagation. Seismic survey planning is now a rigorous project that will incorporate all relevant knowledge of the geology, acquisition systems, processing requirements and interpretation objectives – all accounted for whilst honouring geophysical integrity and accuracy. We summarize modern state-of-the-art tools and techniques that offer powerful insights into the application and optimization of 3D seismic technology. For simplicity, we will focus most attention upon marine seismic techniques. However, all the principles are equally relevant for land seismic techniques.

SUMMARY OF MODELLING TOOLS

Everything begins with a detailed understanding of the seismic source, which for marine applications is typically an array of air guns fired in unison. The output pressure wavefield involves extremely complex mechanical and thermodynamic phenomena. It is critical during any modelling exercise that the so-called far-field signature (an idealistic description that can never quite be measured in the field) be describable at any azimuth and source emission angle away from the array. This is because of energy directivity effects, which correspond to the direction-dependent output of radiated energy. Directivity of both the source and receiver arrays must be understood before considering any elastic reflection process wherein amplitude varies with incidence angle (i.e. recording offset). A thorough "true amplitude" processing sequence would correct for directivity effects before pursuing any kind of AVO-based analysis or elastic inversion scheme. The far-field signature is computed by a linear superposition of notional sources (with appropriate time shifts and spreading corrections), which are computed because of the fundamental fact that the output from an individual air gun in an array can never be directly measured in the field. Interaction effects between the pressure wavefields of each air gun create a complex, time-varying phenomenon. Therefore, a numerical integration procedure is required to infer the "notional" source from near-field and mid-field hydrophone measurements. Alternatively, it is possible to accurately model the notional sources using a physical modelling algorithm that accounts for the full time-varying output of any specified source array. Sophisticated modelling algorithms have evolved since the pioneering work of Ziolkowski (1970). Current algorithms use a series of calibrations against controlled deep-fjord measurements of air guns fired in various configurations, incorporating all interaction, thermodynamic and acquisition system responses. Such a modelling algorithm must be available for survey planning, so that any possible source array can be accurately simulated and analyzed.

Likewise, it is essential to understand the full "system" response of the recording hardware (hydrophones, geophones, arrays, streamers, ocean bottom sensors, vertical cables, recording instruments, and any filters involved). Therefore, it should be possible to separate source and system effects upon the recorded seismic wavefield from the effects purely attributable to propagation through the Earth. We may then focus upon describing the physical phenomena that affect our data – using a variety of Earth models and modelling algorithms. It is this source-system-model description of the survey design process that must be the foundation for all studies.
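As a concrete illustration of the superposition step described above, the sketch below delays each notional signature by its travel time to a far-field point and applies a 1/r spherical-spreading correction. It is a minimal sketch only: the notional signatures, gun coordinates and water velocity are hypothetical placeholders, and real implementations also handle effects such as the sea-surface ghost and calibrated signature corrections.

```python
import numpy as np

C_WATER = 1500.0  # assumed nominal sound speed in seawater (m/s)

def far_field_signature(notionals, gun_positions, field_point, dt, ref_dist=1.0):
    """Superpose notional source signatures at a distant field point.

    notionals: (n_guns, n_samples) array of hypothetical notional signatures.
    gun_positions, field_point: Cartesian coordinates in metres.
    Each notional is delayed by its travel time r_i / c (relative to the
    nearest gun) and scaled by ref_dist / r_i for spherical spreading.
    """
    n_guns, n_samp = notionals.shape
    rs = np.linalg.norm(np.asarray(gun_positions, float)
                        - np.asarray(field_point, float), axis=1)
    t_min = rs.min() / C_WATER  # reference the earliest arrival to sample 0
    out = np.zeros(n_samp)
    for sig, r in zip(notionals, rs):
        delay = int(round((r / C_WATER - t_min) / dt))  # relative shift (samples)
        shifted = np.zeros(n_samp)
        shifted[delay:] = sig[: n_samp - delay]
        out += shifted * (ref_dist / r)  # 1/r spreading, normalised to ref_dist
    return out
```

Directivity falls out of the same superposition: moving `field_point` to a different azimuth or emission angle changes the relative delays, and hence the summed signature.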
ASEG 15th Geophysical Conference and Exhibition, August 2001, Brisbane. Extended Abstracts
It is essential to make a distinction at this point between acoustic and elastic reflectivity. Theory states that, after ignoring some trivial thermodynamic processes, there are 21 unique stress-strain relationships (or elastic rock constants) required to fully describe the deformation of an anisotropic material subject to a seismic impulse. For convenience, isotropic assumptions are typically made, reducing the requirement to only two constants (bulk and shear modulus). Each of the elastic moduli can be cast in terms of instantaneous P- and S-wave velocities (Vp, Vs) and density, as each of their vector values is a direct function of the three-dimensional particle motion. Correspondingly, any non-normal incident seismic wave at an interface will always be comprised of reflected and transmitted P-waves and S-waves. Therefore, mode conversions affect every reflection – which is why any P-P AVO study based purely upon Vp information alone is inherently subject to inaccuracy. In the case of a normal-incidence P-wave at an interface, there will be no traction force across the interface, and no conversion to reflected or transmitted S-wave energy. In this unique case the reflection amplitudes are purely a function of the impedance contrast (impedance = Vp times density), and would be addressed by acoustic modelling. Clearly, any finite-offset, full-wavefield modelling exercise must be elastic.

There are essentially three main types of modelling technique required to address all aspects of elastic seismic wave propagation: 1. recursive reflectivity methods based upon Kennett (1983); 2. ray tracing methods (e.g. Cerveny, 1985); and 3. full wavefield methods, which are typically based upon finite-differencing schemes (e.g. Virieux, 1989).

Recursive reflectivity methods are typically "1D" in nature, assuming a flat Earth model. However, they incorporate the offset dimension, and are amenable to very accurate modelling and simulation of elastic CMP gathers. Source-receiver directivity, mode conversions, surface multiples, and interbed multiples may each be selectively incorporated into the modelling. Unlike ray tracing methods, which are a high-frequency approximation to the wave equation and break down in the vicinity of the critical angle, recursive reflectivity methods incorporate refraction energy. There is essentially no limit upon the number or thicknesses of layers within the specified model – each interval layer containing uniquely defined elastic parameters. However, due to the tau-p domain implementation of the method, run time can become excessive with very complicated models. The method is ideal for addressing the full wavefield interplay between signal and noise upon recorded gathers, and for amplitude-based studies such as AVO modelling, fluid substitution, converted-wave feasibility studies, etc.

Most AVO feasibility studies will use the following diagnostics: comparison of real vs. modelled PP and PS AVO curves, frequency-dependent thin-layered target responses, and angle-range gathers and stacks. The ability to calculate the full frequency-dependent interaction of various seismic wavefields, even in the presence of thin layering, makes the reflectivity method a valuable complement to any exercise. Furthermore, the incorporation of a rock physics modelling capability into the algorithm allows any relationships between rock properties and AVO attributes to be established for P-P or P-S data. By updating the elastic properties of discrete (target) layers within a 1D model, the user can generate a suite of synthetic datasets. Then the use of AVO attribute cross-plots, AVO attribute histograms and AVO signature likelihood displays (e.g. Figure 2) will provide the user with an understanding of the sensitivity of event reflectivity and AVO attributes to rock property variations.

Ray tracing methods may be separated into two aspects: kinematic ray tracing (ray path geometry and arrival times) and dynamic ray tracing (geometrical spreading factors, wavefront curvature, and amplitude coefficients along the ray paths). A layered 2D or 3D model is built, each layer containing unique interval anisotropic elastic parameters. The ray path and traveltimes within a continuous block of a model are calculated by solving a series of differential equations (the kinematic ray tracing system). The velocity function in the block determines the ray behaviour, e.g. a constant velocity (homogeneous layer) yields straight ray paths, whereas a linear velocity yields circular rays. When the ray path and traveltimes have been calculated by a kinematic ray tracer, one may optionally use a dynamic ray tracer to calculate dynamic ray quantities (ray attributes). This calculation is performed along an established ray path, using a similar system of equations to that used for the kinematic ray tracer.

Ray tracing may be classified as an approximate solution to the general wave equation, valid for high frequencies (refer to Cerveny, 1985). This means that the seismic wavelength must be considerably shorter than the "length of the smallest details in the model". Therefore, it may be a requirement that interfaces are smoothed prior to ray tracing. Most ray-theoretical methods pertain to so-called geometrical rays, i.e. following the geometrical law of reflection/transmission (Snell's law) at all interfaces. Another ray family is diffracted rays, which follow Keller's law of edge diffraction at a pre-defined diffraction point, i.e. at a point where two of the model interfaces intersect (refer to Klem-Musatov, 1994).

The power of the ray tracing method lies in the ability to record all geometric aspects of each ray segment, including a multitude of associated dynamic parameters – collectively referred to as ray attributes. For any given model interface, the distribution of reflection (P-P) or conversion (P-S) points, incidence/takeoff angles, and ray density statistics can be analyzed in a variety of statistical and graphical fashions. In particular, illumination analyses (e.g. Figure 3) are a powerful tool that can be used for all types of seismic experiments, allowing the optimization of subsurface (P-P or P-S) fold coverage both prior to, and during, an actual seismic experiment (assuming that appropriate a priori 3D structural and elastic information is available for model building). Lima (2000) describes the "real time" updating of a complex 3D Earth model during a production marine 3D seismic survey, using immersive visualization technology to QC the vessel/streamer deployment for optimal fold coverage at the target depths. This approach is equally applicable to any seismic technology – vertical cable, multi-component OBC, land, etc.

Unfortunately, ray tracing methods are not ideal for detailed reflectivity or amplitude studies (e.g. AVO). Whilst the methods serve valuable functions for optimizing survey geometries, investigating illumination and imaging challenges, and contributing to a suite of seismic processing issues, either recursive reflectivity methods (for simplified 1D Earth models) or "full wavefield" (usually finite-difference) methods (for 2D and 3D Earth models) should be used.
ASEG 15th Geophysical Conference and Exhibition, August 2001, Brisbane. Extended Abstracts

Understanding elastic wavefield recording for survey planning (Long and Hoffmann)
Although expensive, a fully visco-elastic (2D or 3D) finite-difference modelling program will simulate seismic wave propagation in complex models with all wave types (P- and S-waves, refracted and converted waves, diffractions, multiples and prism waves) and all couplings (reflections and transmissions) included. Source array directivity can also be incorporated. A grid-based Earth model is used, with assigned attributes of P- and S-wave velocity, density, and P- and S-wave absorption. The main cost in finite-difference modelling is associated with the spatial step size used: as the step size increases, the maximum frequency achievable without numerical dispersion (fmax) decreases. Because fmax is inversely proportional to the grid step size, while computational cost increases cubically (for 2D) or quartically (for 3D) as the step size shrinks, such modelling should be pursued judiciously, using source and receiver configurations that have been optimized by (much cheaper) ray tracing exercises with the same Earth model.

No attributes are produced by finite-difference modelling; however, wavefield snapshots can be output at any stage of wave propagation, allowing an improved understanding of the complex phenomena involved. The synthetic data recorded will yield the most faithful possible simulation of the full wavefield, and the technique is ideal for studies of P-P/P-S attenuation, anisotropy and fracture analysis (when appropriately programmed).

Figure 1 compares a single shot gather recorded from the same model, as yielded by reflectivity, ray tracing and finite-difference modelling. In each case elastic mode conversions are allowed, but only primaries (no multiples) are included.

INTEGRATION OF TOOLS FOR SURVEY PLANNING

Using the source-system-model approach at all times, a typical survey planning exercise begins with the assimilation of all available geological and geophysical data. Ideally, a full suite of well logs will be provided, enabling an accurate reflectivity analysis of the seismic response and resolution of the target lithology and fluid characteristics. Such studies are of particular significance for converted-wave ocean bottom cable (4C OBC) surveys, where we seek to understand the difference between P-P and P-S reflection events. Any existing seismic data of relevance will be incorporated into the analyses at this stage, as there is no substitute for real data that incorporates the full wavefield response of the target Earth model. However, the use of real data in the overall survey planning scheme is typically limited, as we are strongly constrained by the acquisition parameters used, which will likely be quite different to those for any new survey. 4C OBC survey planning will almost always have no precedent, which is why the availability of full wavefield logs (including S-wave sonics) is critical. For all survey planning scenarios, the unavailability of full log data will demand some kind of prediction of the elastic model parameters, and consequently, the integrity of all (P-P and P-S) reflectivity and AVO results will be at best approximate.

A suite of 2D and 3D elastic models is then constructed, the complexity and accuracy of which are dictated by the amount of available data for model building. It is often the case that a new 3D survey will occur in a relatively virgin area, so the models built will by necessity be simplistic, and the experience and technical skill of the survey planning geophysicists will be of particular importance. Fundamental 3D issues like subsurface fold and illumination, analyses of acquisition footprints, shooting direction, etc., are all typically addressed by dynamic 3D ray tracing and processing. General offset and spatial sampling requirements can be addressed by all the modelling methods, depending upon the complexity of primary and noise interference at larger offsets, and upon the detail of the resolution and AVO criteria specified for the survey. Each acquisition parameter is addressed individually by a variety of real data and modelling tests, and an understanding of the overall wavefield phenomena for the target area will develop. In areas of existing seismic data, incorporation of the real navigation data will increase the relevance of any modelling results.

High-end studies involving reservoir characterization, fracture analysis and reservoir monitoring feasibility studies all require the full wavefield modelling power of 2D and 3D visco-elastic finite-difference algorithms, and will involve comprehensive log analysis, rock physics investigation, and seismic modelling, greatly expanding upon the scope of exploration-scale survey planning.

CONCLUSIONS

Seismic survey planning has evolved to become a complex process that must incorporate all available geological and geophysical data for a study (survey) area, and demands a comprehensive understanding of wave propagation principles. Depending upon the nature and complexity of the study, the effort required will vary correspondingly; at all times, however, a full consideration of the elastic characteristics of the recorded seismic wavefield must be honoured. Overall, the key is that a comprehensive suite of elastic modelling tools is available, all incorporating the source-system-model concept, and all equally able to simulate the full range of acquisition approaches possible (multi-source, multi-streamer systems, ocean bottom sensors, vertical cables and fixed geometries).

REFERENCES

Cerveny, V., 1985, The application of numerical modelling of seismic wavefields in complex structures, in Dohr, G. (Ed.), Seismic shear waves, Part A: Theory, Handbook of geophysical exploration (Eds. K. Helbig and S. Treitel), Geophysical Press, London, 1-124.

Kennett, B., 1983, Seismic wave propagation in stratified media, Cambridge University Press.

Klem-Musatov, K., 1994, Theory of seismic diffractions (Eds. F. Hron and L. Lines), SEG, Tulsa, OK, USA.

Lima, Y., 2000, Using VR tools to quality control seismic acquisition surveys, Offshore, August, 136.

Virieux, J., 1986, P-SV wave propagation in heterogeneous media: velocity-stress finite-difference method, Geophysics, 51, 4, 889-901.

Ziolkowski, A., 1970, A method for calculating the output pressure waveform from an air gun, Geophys. J. R. Astr. Soc., 21, 137-161.
Figure 1. Example of the synthetic shot gathers output from the three main modelling algorithms. Using the simple elastic 1D model on the left in all cases, the results are shown (from left to right) from 1D reflectivity, 2D dynamic ray tracing, and 2D visco-elastic finite-difference modelling. Note that ray tracing is by far the fastest method, but yields the fewest events. In contrast, finite-difference modelling is the most expensive, but yields the most complete representation of the full wavefield.
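The cost trade-off noted here and in the text (fmax set by the grid step; cost growing cubically in 2D and quartically in 3D) can be sketched numerically. This is an illustrative back-of-envelope calculation, not the paper's algorithm; the minimum velocity and the 10-points-per-wavelength dispersion rule are assumed round numbers.

```python
# Sketch of the finite-difference cost trade-off: fmax is inversely
# proportional to the grid step h, while run time grows as (1/h)**3 in 2D
# and (1/h)**4 in 3D (spatial dimensions plus the stable time step).
# All numbers below are illustrative assumptions, not from the paper.

def fmax(h, v_min=1500.0, points_per_wavelength=10):
    """Highest frequency propagated without numerical dispersion,
    assuming ~10 grid points per minimum wavelength."""
    return v_min / (points_per_wavelength * h)

def relative_cost(h, h_ref, ndim):
    """Cost relative to a reference step h_ref: (h_ref/h)**(ndim+1),
    since halving h also halves the stable time step."""
    return (h_ref / h) ** (ndim + 1)

h_ref = 10.0  # metres
for h in (10.0, 5.0, 2.5):
    print(f"h={h:5.1f} m  fmax={fmax(h):6.1f} Hz  "
          f"2D cost x{relative_cost(h, h_ref, 2):5.0f}  "
          f"3D cost x{relative_cost(h, h_ref, 3):5.0f}")
```

Halving the step doubles fmax but costs roughly 8x in 2D and 16x in 3D, which is why the text recommends scoping such runs with much cheaper ray tracing first.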
Figure 2. Example PP (left) and PS (right) AVO likelihood plots for a target reservoir horizon. Such analyses can be made for a 1D elastic model based upon well log data. Integration of rock physics programs can be used to pursue fluid substitution and lithology perturbation studies at discrete intervals, and the generation of a suite of synthetic data and diagnostic displays such as those shown here.
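The PP AVO behaviour behind diagnostics like these can be prototyped cheaply with the standard two-term Shuey approximation (intercept plus gradient times sin²θ). A minimal sketch follows; the shale/sand layer properties are hypothetical illustrative values, not taken from the paper.

```python
import math

def shuey_pp(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Two-term Shuey approximation R(theta) ~ R0 + G*sin^2(theta)
    for the PP reflection coefficient at a single interface."""
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    r0 = 0.5 * (dvp / vp + drho / rho)  # normal-incidence intercept
    g = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)  # gradient
    return r0 + g * math.sin(math.radians(theta_deg)) ** 2

# Hypothetical shale-over-gas-sand interface (illustrative values only):
# Vp (m/s), Vs (m/s), density (g/cc) for upper and lower layers
for theta in (0, 10, 20, 30):
    r = shuey_pp(2750, 1250, 2.35, 2900, 1700, 2.15, theta)
    print(f"theta={theta:2d} deg  Rpp={r:+.3f}")
```

Cross-plotting the intercept R0 against the gradient G for a suite of perturbed 1D models is one way to generate attribute cross-plots of the kind described in the text.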
Figure 3. Example PP and PS reflection strength at the top of an anticlinal feature. This result is derived from dynamic 3D
ray tracing, and simulates the ideal migrated amplitude response at the target horizon. Note the differences in illumination
due to the fundamental differences in PP and PS ray path geometries. The “ring” of sparse illumination in both cases
corresponds to the pinchout of onlapping strata in the full 3D model (only the target horizon is plotted here).
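The PP/PS illumination differences noted in this caption stem from ray-path geometry: a PP reflection point sits at the source-receiver midpoint, whereas the P-S conversion point lies closer to the receiver. A minimal flat-layer sketch using the well-known asymptotic conversion point (ACP) result, with hypothetical round-number velocities:

```python
# For a flat layer, the PP reflection point is at the midpoint, while the
# P-S asymptotic conversion point (ACP) sits at offset * Vp / (Vp + Vs)
# from the source, i.e. closer to the receiver, because the slower S-wave
# leg travels more steeply. Velocities below are hypothetical.

def pp_midpoint(offset):
    """PP reflection point distance from the source (the midpoint)."""
    return offset / 2.0

def ps_acp(offset, vp, vs):
    """Asymptotic P-S conversion point distance from the source."""
    return offset * vp / (vp + vs)

offset, vp, vs = 3000.0, 3000.0, 1500.0  # metres, m/s (Vp/Vs = 2)
print(f"PP midpoint : {pp_midpoint(offset):.0f} m from source")
print(f"P-S ACP     : {ps_acp(offset, vp, vs):.0f} m from source")
```

With Vp/Vs = 2 the conversion point falls at two-thirds of the offset, so PP and PS surveys with identical geometry illuminate the subsurface quite differently, as Figure 3 shows.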