This summary provides an overview of seismic data processing steps performed on a P-Cable 3D seismic dataset from the Gulf of Mexico:
1. Wavelet deconvolution was applied to improve temporal resolution using a prediction-error filter estimated from the wavelet.
2. Surface-consistent amplitude corrections were applied using a method based on Taner and Koehler (1981) to obtain source and receiver terms.
3. Velocity analysis on supergathers was used to pick velocities ranging from 1.5-2.4 km/s, which were applied to NMO correct and stack the data.
4. FFT filtering and median filtering were tested but found to negatively impact the high-frequency content of the data.
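The velocity analysis and NMO correction in step 3 rest on the standard hyperbolic moveout relation t(x) = sqrt(t0² + x²/v²). A minimal sketch of that relation (the offsets and the 1500 m/s pick are illustrative values within the 1.5-2.4 km/s range quoted above, not values taken from the survey):

```python
import numpy as np

def nmo_traveltime(t0, offset, v):
    """Hyperbolic moveout: two-way time at a given source-receiver offset.

    t0     -- zero-offset two-way time (s)
    offset -- source-receiver offset (m)
    v      -- NMO (stacking) velocity (m/s)
    """
    return np.sqrt(t0**2 + (offset / v) ** 2)

# Illustrative gather: 1.0 s zero-offset time, 1500 m/s picked velocity.
offsets = np.array([0.0, 100.0, 200.0, 300.0])  # metres
t = nmo_traveltime(1.0, offsets, 1500.0)

# The NMO correction shifts each sample up by (t - t0) to flatten the event
# before stacking.
moveout = t - 1.0
```

Flattened events stack coherently; a mispicked velocity leaves residual moveout, which is what the iterative velocity analysis described later corrects for.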
GEO 365N/384S Seismic Data Processing
Final Project
Team: Albatross
ABSTRACT
In this report, we present our methods and results from processing a P-Cable high-
resolution 3D (HR3D) dataset collected in 2013. Our report consists of the following
sections:
1. (Maksat) Background and Acquisition Geometry
2. (Ben) Wavelet Deconvolution
3. (Ben and Maksat) Surface-Consistent Amplitude Correction
4. (Ben and Maksat) Velocity Analysis, NMO, and Stack
5. (Maksat) FFT Filtering and Median Filtering
6. (Ben) Multiple Attenuation
7. (Ben) Trace Removal and Interpolation
8. (Ben and Maksat) Gazdag Migration
9. (Ben and Maksat) Conclusions
The name beside each processing step indicates who performed that task. Additionally,
Ben typed the written report and Maksat created the PowerPoint presentation.
BACKGROUND AND ACQUISITION GEOMETRY
The survey was conducted several miles offshore in the Gulf of Mexico (Figure 1), near
San Luis Pass, TX, with the intent of determining the area’s candidacy as a future carbon
sequestration location. Although the site was found to be unsuitable for carbon capture
and storage, many interesting geologic features, such as channel complexes and a salt dome,
lie within the survey boundaries (Meckel and Mulcahy, 2016).
The data were collected using a P-Cable HR3D acquisition system, a schematic of which
is displayed in Figure 2. Twelve streamers, each containing 8 channels, are towed behind an
acquisition vessel to create a catenary-shaped geometry. Channels have a 3 meter spacing
within each streamer, and streamer separation is 12.5 meters. Shot spacing is also 12.5
meters. This acquisition geometry leads to small bin sizes of 6.25 x 6.25 meters, which,
combined with the high-frequency source (dominant frequency of 150 Hz), yields high
resolution in the shallow subsurface.
The large size of this dataset made binning the original SEG-Y data into CMP gathers
difficult. We overcame this by windowing out inlines and crosslines that corresponded to
geologically-complex areas. Specifically, we used inlines 5035 through 5335 (out of 5396
inlines) and crosslines 1800 through 2500 (out of 2943 crosslines). The source and receiver
locations for this windowed data are shown in Figures 3 and 4. A fold map of our data
is given in Figure 5. Note the low fold of the CMPs: this is due to the small bin size used in
P-Cable acquisition.
Figure 1: Location of the 2013 P-Cable HR3D survey. Image taken from Meckel and
Mulcahy (2016).
Figure 2: Schematic of P-Cable acquisition layout. Image obtained from Tom Hess.
Figure 3: Shot coordinates of the data.
Figure 4: Receiver coordinates of the data.
Figure 5: Fold map of the data.
WAVELET DECONVOLUTION
Our first step in processing is wavelet deconvolution. Wavelet deconvolution attempts
to remove the effects of the wavelet from the data in order to more accurately represent
the reflectivity of the subsurface and improve temporal resolution (Yilmaz, 2001). We
can estimate the source wavelet using Kolmogorov spectral decomposition, which finds a
minimum-phase wavelet that matches the spectrum of a corresponding signal. After taking
the spectrum of all traces in our dataset, the estimated Kolmogorov wavelet is given in
Figure 6. We then use conjugate gradient least-squares inversion to obtain a prediction-
error filter for deconvolution. With our given wavelet (Figure 6) and a spike as our model
(Figure 7) to be estimated, we get the prediction-error filter in Figure 8 after 100 iterations.
The output of convolving our wavelet with the prediction-error filter is given in Figure 9.
Seeing that the prediction-error filter improved the "spikiness" of our wavelet, we then
applied the filter to the dataset. The first 30 traces of the data before and after wavelet
deconvolution are shown in Figures 10 and 11. It is difficult to tell whether deconvolution
improved the gathers in these figures, so we plotted the same data as wiggle traces in Figures
12 and 13. Some reflections in the wiggle plots appear more "spiky" after deconvolution,
suggesting that our prediction-error filter helped portray the reflectivity series more
accurately.
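To illustrate the wavelet-estimation step, Kolmogorov spectral factorization can be sketched in a few lines of numpy. This is a simplified stand-in for the kolmog program our SConstruct actually calls; the FFT padding and the regularization constant are our own choices:

```python
import numpy as np

def kolmogorov_wavelet(trace, n=50):
    """Minimum-phase wavelet whose amplitude spectrum matches `trace`,
    via Kolmogorov spectral factorization (log-spectrum -> cepstrum)."""
    nfft = 2 * len(trace)
    spec = np.abs(np.fft.fft(trace, nfft))
    # Regularize zeros of the spectrum before taking the log
    log_spec = np.log(spec + 1e-10 * spec.max())
    cep = np.fft.ifft(log_spec).real          # real cepstrum
    # Causalize: double positive quefrencies, zero negative ones,
    # leave quefrency 0 and Nyquist untouched
    cep[1:nfft // 2] *= 2.0
    cep[nfft // 2 + 1:] = 0.0
    # Back to frequency, exponentiate, invert: minimum-phase wavelet
    wavelet = np.fft.ifft(np.exp(np.fft.fft(cep))).real
    return wavelet[:n]
```

The returned wavelet has the same amplitude spectrum as the padded input trace (up to the regularization) but with its energy pushed toward time zero, which is what makes it a suitable model for prediction-error filtering.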
Figure 6: Estimated wavelet from all trace spectra using Kolmogorov spectral decomposition.
SURFACE-CONSISTENT AMPLITUDE CORRECTIONS
Our next processing step was to apply surface-consistent amplitude corrections. Our method
is based on that of Taner and Koehler (1981), where trace amplitude is represented as a
function of source location, receiver location, midpoint, and offset. Given source location s
and receiver location r, trace amplitude is given by

A(s, r) = A_s(s) A_r(r) A_x(r − s) A_m((r + s)/2). (1)
Taking the logarithm transforms this product into a sum, yielding

log[A(s, r)] = L_s(s) + L_r(r) + L_x(r − s) + L_m((r + s)/2). (2)
We use equation 2 for our surface-consistent corrections. To accommodate the logarithm in
equation 2, we first remove all traces that sum to zero after being squared and summed
over time. We then used the program surface-consistent.c along with conjugate gradient
least-squares inversion to get source and receiver terms for the surface-consistent correction.
While we tried to include the offset and midpoint terms, the conjugate gradient algorithm
would not converge when these variables were included. Our shot and receiver terms are
included in Figures 14 and 15. The surface-consistent amplitudes are plotted in shot and
receiver coordinates in Figures 16 and 17.
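Dropping the offset and midpoint terms, equation 2 becomes a sparse linear least-squares problem: each trace contributes one equation in the unknown shot and receiver terms. A schematic numpy/scipy sketch, not our actual surface-consistent.c plus conjgrad implementation (note the decomposition is only defined up to a constant traded between shot and receiver terms, so only the predicted sums are unique):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def surface_consistent_terms(log_amp, shot_idx, rcv_idx, ns, nr):
    """Least-squares fit of log A(s, r) ~ Ls(s) + Lr(r).
    Each trace contributes one row with a 1 in its shot column and
    a 1 in its receiver column (offset/midpoint terms omitted)."""
    ntr = len(log_amp)
    rows = np.concatenate([np.arange(ntr), np.arange(ntr)])
    cols = np.concatenate([shot_idx, ns + rcv_idx])
    G = coo_matrix((np.ones(2 * ntr), (rows, cols)),
                   shape=(ntr, ns + nr)).tocsr()
    m = lsqr(G, log_amp, iter_lim=150)[0]   # iterative LS solve
    return m[:ns], m[ns:]
```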
Figure 7: Spike used as model for prediction-error filter estimation.
Figure 8: Prediction-error filter used for deconvolution.
Figure 9: Output from convolving our wavelet with the prediction-error filter.
Figure 10: The first 30 traces from our data before deconvolution.
Figure 11: The first 30 traces from our data after deconvolution.
Figure 12: The first 30 traces from our data before deconvolution displayed as a wiggle plot.
Figure 13: The first 30 traces from our data after deconvolution displayed as a wiggle plot.
Figure 14: Shot term for surface-consistent amplitude corrections.
Figure 15: Receiver term for the surface-consistent amplitude corrections.
Figure 16: Surface-consistent amplitudes in shot coordinates.
Figure 17: Surface-consistent amplitudes in receiver coordinates.
VELOCITY ANALYSIS, NMO, AND STACK
After surface-consistent corrections, the next step was to do velocity analysis, normal-moveout
(NMO) correction, and stacking. These corrections increase the signal-to-noise ratio of the
data and decrease the overall size of the dataset. This being a 3D dataset, we could not
perform traditional velocity analysis on CMPs. Instead, we sorted the traces by offset and
windowed every 250th trace into a supergather. Our supergather and its corresponding
semblance scan are plotted in Figures 18 and 19. Note the low semblance throughout the
velocity scan: this stems from the short offsets of P-Cable acquisition. We used a mute to
force the automatic velocity picker to choose more geologically realistic velocities, ranging
from 1.5 to 2.4 km/s (Figure 20). The NMO-corrected supergather is given in Figure 21.
Several shallow reflectors appear to be flattened when compared to the original supergather.
Figure 18: Supergather used for velocity analysis.
To do NMO, we first sorted the data from shot gathers into CMP gathers. We then
sprayed our picked velocity function from 1D (time) to 3D (time, inline, crossline) and
applied NMO to the CMP gathers. A selected gather before and after NMO is shown in
Figure 22. Although the moveout is small, most reflectors are flattened after the NMO
correction, and will thus stack better. The stacked section is displayed in Figure 23.
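Conceptually, the NMO step reads the output sample at zero-offset time t0 and offset x from the input at t(x) = sqrt(t0^2 + x^2/v^2). A minimal numpy sketch with linear interpolation (our actual processing used Madagascar's nmo program; here vnmo may be a scalar or a length-nt velocity function):

```python
import numpy as np

def nmo_correct(gather, offsets, vnmo, dt):
    """NMO-correct a CMP gather of shape (nt, ntraces): the output
    sample at (t0, x) is read from the input trace at the hyperbolic
    traveltime t(x) = sqrt(t0^2 + (x / v)^2)."""
    nt, ntr = gather.shape
    t0 = np.arange(nt) * dt
    out = np.zeros_like(gather)
    for itr in range(ntr):
        tx = np.sqrt(t0**2 + (offsets[itr] / vnmo)**2)
        # Pull each output sample from the moveout time by interpolation
        out[:, itr] = np.interp(tx, t0, gather[:, itr], left=0.0, right=0.0)
    return out
```

With the correct velocity, a hyperbolic event flattens to its zero-offset time and will stack constructively.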
FFT FILTERING AND MEDIAN FILTERING
Two methods were tested to further enhance the data: a Fast Fourier Transform (FFT)
filter and a median filter. Both were applied to the NMO-corrected stack. We
wanted to separate the data into signal and noise, but we expected this approach to
Figure 19: Semblance scan from supergather.
Figure 20: Semblance scan with mute and picked velocity.
Figure 21: Supergather after NMO correction.
Figure 22: (left) CMP gather at inline 180 and crossline 130 (right) NMO-corrected gather
at inline 180 and crossline 130.
Figure 23: Stacked Section.
be inefficient since we have no ground-roll noise to attenuate. After consulting with
Tom Hess, we learned that a Tau-P (Radon) filter would reject noise with fewer artifacts
than a polygonal F-K filter, but that either would be largely redundant for removing noise
while preserving events. Our goal here was to demonstrate that applying noise-reduction
techniques in any domain would be detrimental to this dataset. Nevertheless, we applied an
FFT to obtain a frequency-domain representation of the signal, set the power of unwanted
high frequencies to zero, and applied an inverse FFT (IFFT) to recover the filtered data in
the time domain, as shown in Figure 25. We also applied a running median filter to the
stack to attenuate coherent wavefields and test whether it enhances the signal, aiming to
distinguish isolated, out-of-range noise from stack features such as edges and lines. The
method first aligns the data along first-break times and applies the median filter across the
traces, passing a window over each time series and replacing each point with the median
value within the window. The traces are then realigned, and the result is obtained by
subtracting the filtered version of the data from the original data, as shown in Figure 26.
We found that noise-reduction techniques such as FFT filtering and median filtering
negatively affected the stack by blurring events containing high frequencies.
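As a simplified stand-in for running.c (which additionally aligns the data along first-break times before filtering, a step omitted here), the signal/noise split by a running median can be sketched with scipy:

```python
import numpy as np
from scipy.ndimage import median_filter

def median_separate(stack, w=2):
    """Split a section into a running-median 'signal' estimate
    (window of 2w+1 samples along each axis) and the residual 'noise'
    obtained by subtracting the filtered data from the original."""
    signal = median_filter(stack, size=2 * w + 1, mode='nearest')
    return signal, stack - signal
```

Isolated high-amplitude spikes end up in the noise component while smooth backgrounds pass through unchanged; as noted above, on our stack this kind of smoothing also blurred legitimate high-frequency events.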
Figure 24: NMO-corrected stack.
Figure 25: NMO stack separated into a) signal and b) noise by FFT filtering using the
original filter (from filter.c).
Figure 26: NMO stack separated into a) signal and b) noise by median filtering using
running.c.
MULTIPLE ATTENUATION
Once the data were stacked, we worked on removing multiples. Although we learned to
remove multiples from the data using the Radon transform in class, the short offsets (and
therefore minimal moveout) of the data made this technique difficult. Instead, we opted to
use predictive deconvolution for multiple attenuation. This method uses prediction error
filters to remove periodic signals, i.e. multiples, and leaves signals that occur randomly, such
as the Earth’s reflectivity series. To accomplish this, we first compute the autocorrelation of
the data in order to see the time delay between multiples and choose a prediction lag. The
average autocorrelation of all traces is plotted in Figure 27. The third zero crossing of the
autocorrelation in Figure 27 is around 0.05 s, so this is the operator length we chose for our
prediction-error filter. Based on the wavelet previously estimated from our data, we used a
prediction lag of 0.01 s. A comparison of the data before and after multiple attenuation for
a selected inline is shown in Figure 28. One can see that temporal resolution is improved in
the demultipled inline, particularly in the deeper parts of the section and along the flanks
of the salt dome.
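The mechanics of predictive deconvolution can be sketched on a single trace: design a Wiener filter that predicts the trace from its own past at a gap of `lag` samples (playing roughly the role of our 0.01 s prediction lag, with `nf` as the operator length), then subtract the prediction, which removes periodic energy. This is a toy numpy/scipy version, not Madagascar's pef program:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, lag, nf, eps=0.001):
    """Gapped (predictive) deconvolution: design a length-nf Wiener
    filter predicting the trace from its past at `lag` samples, then
    subtract the prediction, attenuating periodic events (multiples)."""
    n = len(trace)
    r = np.correlate(trace, trace, 'full')[n - 1:]   # autocorrelation
    r0 = r[:nf].copy()
    r0[0] *= 1.0 + eps                               # prewhitening
    a = solve_toeplitz((r0, r0), r[lag:lag + nf])    # normal equations
    h = np.zeros(lag + nf)
    h[lag:] = a                                      # gapped prediction operator
    prediction = np.convolve(trace, h)[:n]
    return trace - prediction
```

Events inside the prediction gap (the primaries) pass through, while energy that repeats with a period at or beyond the gap is predicted and removed.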
TRACE REMOVAL AND INTERPOLATION
To prepare the data for migration, we next masked traces with anomalously high amplitudes
and interpolated over areas missing traces. We have Sarah Greer to thank for suggesting
the masking of high-amplitude traces. To do so, we simply squared and summed traces
over time and removed traces with a total squared amplitude greater than 0.15. Figure 29
shows what traces were masked by this operation. Removal of these traces should lead to
Figure 27: Average autocorrelation from all traces.
Figure 28: Inline 290 before (top) and after (bottom) multiple attenuation.
fewer artifacts in our final migrated image.
Figure 29: Stacked trace amplitudes (top) and high-amplitude traces to be masked (bottom).
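The masking step itself amounts to thresholding the per-trace energy. A numpy sketch for a stacked volume of shape (time, iline, xline), using the 0.15 threshold from our processing:

```python
import numpy as np

def mask_high_amplitude(stack, threshold=0.15):
    """Zero out traces whose squared amplitudes, summed over time
    (axis 0), exceed `threshold`; returns the masked volume and the
    boolean keep-mask over (iline, xline)."""
    energy = np.sum(stack**2, axis=0)
    keep = energy <= threshold
    return stack * keep[np.newaxis, :, :], keep
```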
We next interpolated over areas with missing traces by way of plane wave destruction
(PWD) (Fomel, 2002). The estimated dip components from PWD are plotted in Figure
30. We could now use the estimated dips to interpolate over the areas of missing data,
the results of which are displayed in Figure 31b. While not perfect, the interpolation does
reasonably well in areas with horizontal layers and high SNR. The interpolation does poorly
inside the salt dome and along the edges of the section.
GAZDAG MIGRATION
Our final step was to migrate the stacked section. Migration attempts to place reflectors
in their proper subsurface locations using solutions to the wave equation. Additionally,
diffractions should collapse in a migrated image. Gazdag migration operates on data in the
frequency-wavenumber domain and can handle vertical, but not lateral, velocity variations.
Given post-stack data P(kx, ky, 0, ω) at the Earth's surface in the f-k domain, the wavefield at
depth ∆z can be calculated by

P(kx, ky, ∆z, ω) = P(kx, ky, 0, ω) exp[ i ∫₀^∆z kz(z′) dz′ ], (3)

where

kz = sqrt( 4ω²/v² − kx² − ky² ). (4)
Figure 30: Inline (top) and crossline (bottom) dip components obtained from PWD.
Figure 31: (a) Stacked Data Before Interpolation (b) Stacked Data After Interpolation.
After applying the phase-shift in equation 3, the wavefield is summed over all frequencies
to apply the exploding reflector imaging condition at depth ∆z. Equation 3 is then applied
again from ∆z to 2∆z, and so on for all depths. An inverse Fourier transform over x will
give the migrated image in the t-x domain.
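A toy 2D version of this recursion (ky = 0, one common Fourier sign convention, evanescent energy discarded) makes the structure of equations 3 and 4 concrete; our actual migration used Madagascar, not this sketch:

```python
import numpy as np

def gazdag_migrate(data, dt, dx, vel, dz, nz):
    """Phase-shift (Gazdag) migration of a 2D zero-offset section
    data(t, x) with depth-variable velocity vel[iz]. At each depth the
    image is the sum over all frequencies (exploding-reflector t = 0
    imaging condition); the wavefield is then continued down by dz."""
    nt, nx = data.shape
    P = np.fft.fft2(data)                                  # P(omega, kx)
    w = 2 * np.pi * np.fft.fftfreq(nt, dt)[:, None]
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)[None, :]
    image = np.zeros((nz, nx))
    for iz in range(nz):
        # Imaging condition at the current depth iz*dz
        image[iz] = np.real(np.sum(np.fft.ifft(P, axis=1), axis=0))
        # kz from equation 4 with ky = 0; the 2/v factor carries the
        # exploding-reflector half-velocity
        kz2 = (2 * w / vel[iz])**2 - kx**2
        kz = np.sign(w) * np.sqrt(np.maximum(kz2, 0.0))
        P = P * np.exp(1j * kz * dz) * (kz2 > 0)           # drop evanescent
    return image
```

For each depth step the current wavefield is imaged by summing over all frequencies at t = 0, then continued downward by dz with the phase shift exp(i kz dz); a flat reflector at time t0 correctly maps to depth v t0 / 2.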
We obtained the interval velocity (in time) using Dix inversion, shown in Figure 32.
Using this velocity profile, we get the migrated section in Figure 33. Although the salt
dome isn’t imaged terribly well (due to the limitations of Gazdag migration), the migrated
data shows significant improvement in areas with relatively flat-lying geology. In Figure 34
one can see how triplications caused by channels transform into synclines after migration.
Another major benefit of migration is that much of the "burst-like" noise present in the
stacked section, presumably caused by the acquisition footprint, is removed. Line 258, given
in Figure 35, shows how the migration handled imaging the salt dome. Given our migration
assumed velocity variations with depth only, the salt is not imaged as well as it would
be with more complex migration algorithms. A time slice at 0.104 s (Figure 36) contains
many interesting features, such as channels, a salt dome, and a normal fault (top-right).
Migration collapses triplications caused by the channel and makes the outline of the fault
and the edge of the salt more discernible.
Figure 32: RMS velocity (red, dashed) and interval velocity (blue, solid) obtained from Dix
inversion.
We additionally tried to apply post-stack RTM to a selected inline; however, there was
little to no improvement over the stacked section, so we did not include it in this report.
Figure 33: Final image after Gazdag migration.
CONCLUSIONS
In this project, we completed an entire processing workflow for a high-resolution 3D dataset,
from sorting to migration. We used many of the methods presented in class, as well as others
used by previous groups with similar datasets. Our final result was a decent migrated
image; however, there are many ways it could be improved: more terms in the
surface-consistent amplitude correction, more accurate (3D instead of 1D) velocity analysis,
and more advanced migration algorithms could all enhance our results. With more time,
we would have enjoyed trying these other techniques.
ACKNOWLEDGEMENTS
We have many people to thank for their help on this project. We would first like to
thank Sergey Fomel for his advice on choosing the dataset and help in several processing
steps. Tom Hess was a huge asset to have due to his extensive experience with seismic
data processing, his advice on using the convolutional model, his aid in finding an area to
concentrate on, and general wisdom on things we wouldn’t think to ask. We owe a great
debt to two previous groups from this class, namely Team Emc-Hammer (Yanadet Sripanich
and Reinaldo Sabbagh) and Team Stegosaurus (Sarah Greer and Eric Goldfarb). Although
we didn’t share the same dataset, much of our processing workflow was based on what they
had done. Without their help, our final result wouldn’t be nearly what it is.
Figure 34: Inline 24 before (top) and after (bottom) migration. Note how the migration
removes triplications from synclines in the shallow part of the section.
Figure 35: Inline 258 before (top) and after (bottom) migration. Migration significantly
improves the image of the dipping reflectors along the edge of the salt.
Figure 36: Time slice at 0.104 s. Migration improves images of the channels, the normal
fault in the top-right corner, and the edge of the salt.
APPENDIX: SConstruct (exp/SConstruct)
from rsf.proj import *

# Import SEGY data
Flow('gom tgom gom.asc gom.bin', 'GOM2013redo.sgy',
     '''
     segyread tfile=${TARGETS[1]}
     hfile=${TARGETS[2]}
     bfile=${TARGETS[3]}
     ''')

# Seismic data corresponds to trid=1
Flow('trid', 'tgom', 'headermath output=trid | mask min=1 max=1')


################### Acquisition Geometry ###########################

# Subsample CMPs to grab area of interest (salt dome and channels)
Flow('xlines', 'tgom',
     'headermath output=xline | mask min=1800 max=2500')
Flow('ilines', 'tgom',
     'headermath output=iline | mask min=5035 max=5335')
Flow('maskcmp', 'trid xlines ilines', 'mul ${SOURCES[0:2]}')

Flow('theader', 'tgom maskcmp', 'headerwindow mask=${SOURCES[1]}')

# Recording started 100 ms before shot, so shift all data by 100 ms
# Remove data after 0.7 s
Flow('data', 'gom maskcmp',
     '''
     headerwindow mask=${SOURCES[1]} |
     window f1=50 max1=0.7 | put o1=0
     ''')

# Plot source and receiver locations
for key in Split('sx sy gx gy cdpx cdpy'):
    Flow(key, 'theader',
         '''headermath output=%s | dd type=float |
         math output="input/1000"
         ''' % key)


Flow('scoord', 'sx sy', 'cmplx ${SOURCES[1]}')
Result('scoord', 'graph symbol="." symbolsz=0.5 title="Shot Locations" plotcol=5')

Flow('gcoord', 'gx gy', 'cmplx ${SOURCES[1]}')
Result('gcoord', 'graph symbol="." symbolsz=0.5 title="Receiver Locations"')
# Fold map (from binned data later in SConstruct)
Flow('bin3 mask', 'headersort',
     '''
     intbin3 head=$SOURCE xkey=-1 yk=iline zk=xline
     mask=${TARGETS[1]}
     ''')

Flow('fold', 'mask', 'dd type=float | stack axis=1 norm=n')
Result('fold',
       '''
       grey color=T pclip=100
       scalebar=y allpos=y transp=y yreverse=n
       title="Fold Map" barlabel=Fold label1=iline label2=xline
       ''')


### Deconvolution ###

# Look at trace spectra from all traces
Flow('spectra', 'data',
     '''
     spectra all=y | put label2=Amplitude
     ''')
Result('spectra', 'graph title="Spectra from all traces" plotfat=8')

# Estimate a wavelet from all traces
Flow('wavelet', 'spectra',
     '''
     math output="input*input" | kolmog spec=y |
     put label1=Time unit1=s d1=0.002
     ''')
Result('wavelet',
       '''
       window n1=50 |
       wiggle poly=y title="Estimated Wavelet from all-trace spectra" plotfat=7
       ''')

prog = Program('convolve.c')
convolve = str(prog[0])

# Estimate PEF by iterative least-squares inversion
Flow('spike', None, 'spike n1=50 d1=0.002 k1=1')  # n1=30 d1=0.004
Result('spike', 'wiggle poly=y title="Spike" pclip=100')
Flow('filter', ['wavelet', convolve, 'spike'],
     '''
     conjgrad ./${SOURCES[1]} nf=50 data=${SOURCES[0]}
     niter=100 mod=${SOURCES[2]}
     ''')

Result('filter', 'filter spike',
       '''
       scale axis=-1 | add ${SOURCES[1]} |
       wiggle poly=y title="Prediction-Error Filter" pclip=100
       ''')

# Wavelet deconvolution
Flow('wdecon', ['filter', 'wavelet', convolve],
     '''
     ./${SOURCES[2]} data=${SOURCES[1]} adj=n |
     add ${SOURCES[1]} scale=-1,1 | window n1=50 |
     put label1=Time
     ''')
Result('wdecon', 'wiggle poly=y title="Wavelet Deconvolution" pclip=100')

# Process all traces
Flow('decon', ['filter', 'data', convolve],
     '''
     ./${SOURCES[2]} data=${SOURCES[1]} adj=n |
     add ${SOURCES[1]} scale=-1,1
     ''')

Result('data', 'window n2=30 | grey title="First 30 Traces Before Deconvolution"')
Result('decon', 'window n2=30 | grey title="First 30 Traces After Deconvolution"')

Flow('dataw', 'data', 'window n2=30 max1=0.2')
Result('dataw', 'wiggle title="First 30 Traces Before Deconvolution"')
Flow('deconw', 'decon', 'window n2=30 max1=0.2')
Result('deconw', 'wiggle title="First 30 Traces After Deconvolution"')


### Surface-consistent amplitude correction ###

# Remove traces that add to zero
Flow('decon2', 'decon',
     '''
     mul $SOURCE | stack axis=1
     ''')

Flow('nzmask', 'decon2', 'mask min=1e-20')
Flow('nzdata0', 'decon2 nzmask', 'transp | headerwindow mask=${SOURCES[1]} | transp')
Flow('nzdata', 'decon nzmask', 'headerwindow mask=${SOURCES[1]}')
Flow('nzheader', 'theader nzmask', 'headerwindow mask=${SOURCES[1]}')

# Average trace amplitude
Flow('arms', 'nzdata0', 'math output="log(input)"')

# shot/receiver indices: fldr and tracf
Flow('index', 'nzheader', 'window n1=2 f1=2 | transp')

prog = Program('surface-consistent.c')
sc = str(prog[0])

Flow('model', ['arms', 'index', sc],
     './${SOURCES[2]} index=${SOURCES[1]} verb=y')

Flow('sc', ['arms', 'index', sc, 'model'],
     '''
     conjgrad ./${SOURCES[2]} index=${SOURCES[1]}
     mod=${SOURCES[3]} niter=150
     ''')

Result('tshot', 'sc',
       '''
       window n1=31511 | put o1=1 d1=1 |
       graph title="Shot Term" label1="Shot Number"
       unit1= label2=Amplitude unit2=
       ''')

Result('treceiver', 'sc',
       '''
       window f1=31511 n1=96 | put o1=1 d1=1 |
       graph title="Receiver Term" label1="Receiver Number"
       unit1= label2=Amplitude unit2=
       ''')

# Surface-consistent log amplitude for each trace
Flow('scarms', ['sc', 'index', sc],
     './${SOURCES[2]} index=${SOURCES[1]} adj=n')
Flow('amp', 'scarms',
     '''
     math output="exp(-input/2)" |
     spray axis=2 n=301 d=0.002 o=0 |
     transp
     ''')

# Apply surface-consistent correction to each trace
Flow('scdata', 'nzdata amp', 'mul ${SOURCES[1]}')

# Plot in shot and receiver coordinates
Flow('scoords', 'nzheader', 'window n1=2 f1=21 | dd type=float | scale dscale=1e-3')
Flow('sint', 'scarms scoords',
     '''
     nnshape coord=${SOURCES[1]} rect1=10 rect2=10
     o1=295.945 d1=0.01 n1=461 o2=3208.580 d2=0.01 n2=459 niter=10
     ''')
Result('sint',
       '''
       grey color=j title="Log-amplitude in Shot Coordinates"
       transp=n label1=X label2=Y unit1=km unit2=km scalebar=y
       barlabel="Log-Amplitude"
       ''')

Flow('gcoords', 'nzheader', 'window n1=2 f1=23 | dd type=float | scale dscale=1e-3')
Flow('gint', 'scarms gcoords',
     '''
     nnshape coord=${SOURCES[1]} rect1=10 rect2=10
     o1=295.873 d1=0.01 n1=476 o2=3208.570 d2=0.01 n2=460 niter=10
     ''')
Result('gint',
       '''
       grey color=j title="Log-amplitude in Receiver Coordinates"
       transp=n label1=X label2=Y unit1=km unit2=km scalebar=y
       barlabel="Log-Amplitude"
       ''')


####### Velocity Analysis from supergather ##########
# Sort by absolute offset
Flow('offsetorder', 'nzheader', 'headermath output="offset" | dd type=float')
Flow('offsetheadersort', 'nzheader offsetorder', 'headersort head=${SOURCES[1]}')
Flow('scdatabyoffset', 'scdata offsetorder', 'headersort head=${SOURCES[1]}')

# Extract offset, convert from m to km
Flow('offset', 'offsetheadersort',
     '''
     headermath output=offset |
     dd type=float | scale dscale=0.001
     ''')

# Take every 250th trace
Flow('suboffset', 'offset', 'window j2=250')
Flow('subcmps', 'scdatabyoffset', 'window j2=250')

Plot('suboffset', 'graph title="Offsets of supergather"')
Result('subcmps', 'window j2=8 | agc | grey title="Super Gather"')

# Velocity scan for a supergather
prog = Program('mute.c')
mute = str(prog[0])
Flow('vscan', 'subcmps suboffset',
     '''
     vscan offset=${SOURCES[1]} semblance=y
     v0=1 nv=151 dv=0.02 half=n | scale dscale=50
     ''')
Flow('vscanmute', ['vscan', mute],
     './${SOURCES[1]} t1=-0.4 v1=2.4 | cut max1=0.004')
for case in ['vscan', 'vscanmute']:
    Plot(case,
         '''
         cut max1=0.004 |
         grey color=j allpos=y title="Semblance Scan"
         unit2=km/s min2=1 max2=4 pclip=100
         ''')

Result('vscan', 'cut max1=0.004 | grey color=j allpos=y title="Semblance Scan" unit2=km/s')

Flow('vpick', 'vscanmute',
     '''
     pick rect1=5 rect2=5 vel0=1.2 an=2 gate=3
     ''')
Plot('vpick',
     '''
     graph yreverse=y transp=y plotcol=0 plotfat=7
     pad=n wantaxis=n wanttitle=n min2=1 max2=4
     ''')
Result('vscanmute', 'vscanmute vpick', 'Overlay')

# NMO-corrected supergather
Flow('suboffsett', 'suboffset', 'transp')
Flow('subnmo', 'subcmps suboffsett vpick',
     '''
     nmo offset=${SOURCES[1]} half=n
     velocity=${SOURCES[2]}
     ''')
Result('subnmo', 'window j2=8 | agc | grey title="Super Gather After NMO"')


###################### NMO ######################################

# Sort by iline and xline
Flow('binorder', 'nzheader', 'headermath output="(iline-5036)+360*(xline-1800)"')
Flow('headersort', 'nzheader binorder', 'headersort head=${SOURCES[1]}')
Flow('offsettheader', 'nzheader',
     '''
     headermath output=offset |
     dd type=float | scale dscale=0.001
     ''')

Flow('databin', 'scdata binorder headersort offsettheader',
     '''
     mutter offset=${SOURCES[3]} v0=1.545 half=n |
     headersort head=${SOURCES[1]} |
     intbin3 head=${SOURCES[2]} xkey=-1 yk=iline zk=xline
     ''')

Flow('offset3 maskoffset', 'offsettheader binorder headersort',
     '''
     headersort head=${SOURCES[1]} |
     intbin3 head=${SOURCES[2]} xkey=-1 yk=iline zk=xline
     mask=${TARGETS[1]}
     ''')

Flow('vpick3', 'vpick',
     '''
     spray axis=2 n=1 | spray axis=3 n=360 | spray axis=4 n=701
     ''')
Flow('maskoffset3', 'maskoffset', 'spray axis=1 n=1')

Flow('nmo', 'databin offset3 maskoffset3 vpick3',
     '''
     nmo offset=${SOURCES[1]} half=n
     mask=${SOURCES[2]} velocity=${SOURCES[3]}
     ''', split=[4, 'omp'])
Flow('stack', 'nmo', 'stack')
Plot('databin', 'window n4=1 f4=130 n3=1 f3=180 | sfgrey title="CMP Gather Before NMO"')
Plot('nmo', 'window n4=1 f4=130 n3=1 f3=180 | sfgrey title="CMP Gather After NMO"')
Result('nmo', 'databin nmo', 'SideBySideAniso')

Result('stack', 'byte | grey3 title="Stacked Section" flat=n frame1=107 frame2=290')
Result('stack1', 'stack',
       'window n3=1 f3=679 | byte | grey flat=n frame1=107 title="NMO Corrected Stack"')


############################################################ FFT ########
Flow('fft', 'stack', 'fft1 | fft3')
Plot('fft',
     '''
     window max1=100 | math output="abs(input)" | real |
     grey allpos=y title="Fourier Transform"
     ''')

prog = Program('filter.c')

v1 = 1.7
v2 = 2.5

f1 = 160
f2 = 200

Flow('filterfft', 'fft %s' % prog[0],
     '''
     ./${SOURCES[1]} v1=%g v2=%g f1=%g f2=%g
     ''' % (v1, v2, f1, f2))
Result('filterfft',
       '''
       window max1=100 | math output="abs(input)" |
       real | grey allpos=y title="Filtered"
       ''')

Flow('mute', 'fft %s' % prog[0],
     '''
     math output=1 | ./${SOURCES[1]} v1=%g v2=%g | real
     ''' % (v1, v2))
Result('mute', 'window max1=100 | grey allpos=y title="Mute"')

Flow('signal', 'filterfft', 'fft3 inv=y | fft1 inv=y')
35. 35
382 Plot ( ’ signal1 ’ , ’ s i g n a l ’ , ’window n3=1 f3 =679 | byte gainpanel=a l l | grey t i t l e =”S
383
384 Flow ( ’ noise ’ , ’ stack s i g n a l ’ , ’ add s c a l e =1,−1 ${SOURCES[ 1 ] } ’ )
385 Plot ( ’ noise1 ’ , ’ noise ’ , ’window n3=1 f3 =679 | byte gainpanel=a l l | grey t i t l e =”Noi
386 Result ( ’ noise ’ , ’ byte gainpanel=a l l | grey f l a t=n frame1=107 frame2=290 frame3=65
387 Result ( ’ s i g n a l ’ , ’ s i g n a l ’ , ’ byte | grey f l a t=n frame1=107 frame2=290 frame3=658
t i t l e =”Signal from FFT f i l t e r ” ’ )
388 Result ( ’FFT ’ , ’ signal1 noise1 ’ , ’ SideBySideAniso ’ )
############################################################ Median Filter ##
run = Program('running.c')
w=2

Flow('ave', 'stack %s' % run[0],
     './${SOURCES[1]} w1=%d w2=%d what=fast' % (w, w))
#Result('ave', 'grey title="Signal"')

# Difference
Flow('res', 'stack ave', 'add scale=1,-1 ${SOURCES[1]}')
Result('res',
       'byte | grey flat=n frame1=107 frame2=290 frame3=658 title="Noise from median filter"')
Result('ave', 'ave',
       'byte | grey flat=n frame1=107 frame2=290 frame3=658 title="Signal from median filter"')
Plot('ave1', 'ave',
     'window n3=1 f3=679 | byte gainpanel=all | grey title="Signal from median filter"')
Plot('res1', 'res',
     'window n3=1 f3=679 | byte gainpanel=all | grey title="Noise from median filter"')
Result('median', 'ave1 res1', 'SideBySideAniso')
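running.c is the authors' C implementation of a running (sliding-window) filter; its source is not reproduced here. A hypothetical pure-Python analog of a 2D running median, with w1/w2 mirroring the half-width parameters passed above:

```python
def running_median(data, w1, w2):
    """Replace each sample by the median of a (2*w1+1) x (2*w2+1)
    neighborhood, clipping the window at the array edges."""
    n1, n2 = len(data), len(data[0])
    out = [[0.0] * n2 for _ in range(n1)]
    for i in range(n1):
        for j in range(n2):
            vals = sorted(data[a][b]
                          for a in range(max(0, i - w1), min(n1, i + w1 + 1))
                          for b in range(max(0, j - w2), min(n2, j + w2 + 1)))
            m = len(vals)
            out[i][j] = vals[m // 2] if m % 2 else 0.5 * (vals[m // 2 - 1] + vals[m // 2])
    return out

# An isolated spike is removed while the flat background is preserved --
# the property that makes the median attractive for spiky noise, at the
# cost of smearing genuine high-frequency signal.
grid = [[1.0] * 5 for _ in range(5)]
grid[2][2] = 100.0
smooth = running_median(grid, 1, 1)
```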
######################## Multiple Attenuation ##################################
# Autocorrelation
Flow('acstack', 'stack',
     '''
     fft1 | math output="input*conj(input)" | fft1 inv=y |
     window n1=150 | scale axis=1
     ''')
Result('acstack',
       'stack axis=3 | stack axis=2 | graph title="Average Autocorrelation"')
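The acstack flow forms each trace's autocorrelation as the inverse FFT of its power spectrum (the Wiener-Khinchin relation); the averaged result is plotted to read off the multiple period. The equivalent time-domain computation, as a sketch with an illustrative trace:

```python
def autocorrelation(x, maxlag):
    """r[l] = sum_t x[t] * x[t+l], normalized so that r[0] = 1."""
    r = [sum(x[t] * x[t + l] for t in range(len(x) - l)) for l in range(maxlag)]
    r0 = r[0] if r[0] != 0 else 1.0
    return [v / r0 for v in r]

# A primary with a reverberating multiple every 4 samples produces an
# autocorrelation side lobe at lag 4 -- the feature the "Average
# Autocorrelation" plot is used to identify.
trace = [1.0, 0.0, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.25, 0.0, 0.0, 0.0]
r = autocorrelation(trace, 8)
```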

# Deconvolution
Flow('destack', 'stack',
     'pef minlag=0.01 maxlag=0.05 pnoise=0.001 mincorr=0 maxcorr=0.30')

Plot('stack',
     'window f2=290 n2=1 | grey title="Inline 290 Before Multiple Attenuation"')
Plot('destack',
     'window f2=290 n2=1 | grey title="Inline 290 After Multiple Attenuation"')
Result('destack', 'stack destack', 'OverUnderAniso')
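The pef call designs a gapped prediction-error filter whose prediction lags run from minlag to maxlag; energy that is predictable at those lags (periodic multiples) is removed while the unpredictable primaries pass. A deliberately simplified one-coefficient sketch of the same idea, assuming the multiple period (lag) is already known from the autocorrelation:

```python
def predictive_decon(x, lag):
    """Estimate one prediction coefficient a = r(lag)/r(0) and subtract
    the prediction a*x[t-lag] from each sample (a toy gapped PEF)."""
    r0 = sum(v * v for v in x)
    rlag = sum(x[t] * x[t + lag] for t in range(len(x) - lag))
    a = rlag / r0 if r0 else 0.0
    return [x[t] - a * (x[t - lag] if t >= lag else 0.0) for t in range(len(x))]

# Primary at sample 0 with multiples of ratio -0.5 every 4 samples:
x = [1.0, 0.0, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.25, 0.0, 0.0, 0.0]
y = predictive_decon(x, 4)
```

The primary at sample 0 is untouched while the multiples at samples 4 and 8 are strongly attenuated, which is the behavior sought from destack above.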
######## Remove high-amplitude traces #############
Flow('highamp', 'destack', 'math output="input*input" | stack axis=1')
Plot('highamp',
     'grey title="Stacked Traces" label1=Iline unit1= label2=Xline')
Flow('highampmask', 'highamp', 'mask max=0.15 | dd type=float')
Plot('highampmask',
     'grey title="High-Amplitude Stacked Traces to be removed" label1=Iline label2=Xline')
Result('highampmask', 'highamp highampmask', 'OverUnderAniso')
Flow('destackcorr', 'highampmask destack',
     'spray axis=1 o=0 d=0.002 n=301 | math d=${SOURCES[1]} output="input*d"')
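The three flows above build a per-trace energy panel (highamp), threshold it into a 0/1 mask (highampmask), and multiply the mask back into the data to zero the noisy traces. (The tail of the destackcorr command was truncated in the listing; the multiply shown is the standard idiom, reconstructed.) The same logic as a self-contained sketch, with the 0.15 threshold mirroring mask max=0.15:

```python
def energy_mask(traces, threshold=0.15):
    """Zero out whole traces whose summed squared amplitude exceeds
    the threshold; keep the rest unchanged."""
    out = []
    for tr in traces:
        energy = sum(s * s for s in tr)
        keep = 1.0 if energy <= threshold else 0.0
        out.append([keep * s for s in tr])
    return out

traces = [[0.1, 0.2, 0.1],   # energy 0.06 -> kept
          [1.0, 2.0, 1.0]]   # energy 6.0  -> zeroed
clean = energy_mask(traces)
```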
################ Interpolation of missing traces before migration ##############
################ using plane-wave destruction ##################################
Flow('stack2', 'destackcorr', 'put d2=0.00625 o2=0 d3=0.00625 o3=0')
Result('stack2',
       'byte | grey3 flat=n frame1=107 frame2=290 frame3=658 title="Stack"')

Flow('maskstack', 'maskoffset',
     '''
     dd type=float | stack axis=1 | dd type=int |
     spray axis=1 n=150 | put o1=0 d1=0.002 d2=0.00625 o2=0 d3=0.00625 o3=0
     ''')

Flow('stack2-transp', 'stack2',
     'transp plane=12 memsize=500 | transp plane=23 memsize=500')
Flow('maskstack-transp', 'maskstack',
     'cut max1=0.13 | transp plane=12 memsize=500 | transp plane=23 memsize=500')
Flow('stacklapfill-transp', 'stack2-transp maskstack-transp',
     '''
     lapfill mask=${SOURCES[1]}
     ''', split=[3, 'omp', [0]])
Flow('stacklapfill', 'stacklapfill-transp',
     'transp plane=23 memsize=500 | transp plane=12 memsize=500')

Flow('dip', 'stack2', 'dip rect1=10 rect2=10 rect3=20')
Flow('dipin', 'dip', 'window n4=1 f4=0')
Flow('dipcross', 'dip', 'window n4=1 f4=1')
Plot('dipin',
     'byte | grey3 flat=n frame1=107 frame2=290 frame3=658 title="Inline Dip"')
Plot('dipcross',
     'byte | grey3 flat=n frame1=107 frame2=290 frame3=658 title="Crossline Dip"')
Result('dip', 'dipin dipcross', 'OverUnderAniso')

Flow('stackfilled', 'stack2 dip',
     'planemis3 dip=${SOURCES[1]}')  #, split=[4, 'omp'])
Result('stackfilled',
       'byte | grey3 flat=n frame1=107 frame2=290 frame3=658 title="Stack after interpolation"')
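lapfill interpolates the zeroed traces by solving Laplace's equation with the live traces as boundary data, while planemis3 fills gaps along the local plane-wave dips estimated by dip. A tiny 1D Gauss-Seidel sketch of the Laplace-fill idea (not the Madagascar implementation):

```python
def laplace_fill_1d(x, known, sweeps=200):
    """Fill x[i] where known[i] is False by repeatedly replacing each
    unknown sample with the average of its neighbors (discrete Laplace
    equation, Gauss-Seidel iteration)."""
    y = list(x)
    for _ in range(sweeps):
        for i in range(1, len(y) - 1):
            if not known[i]:
                y[i] = 0.5 * (y[i - 1] + y[i + 1])
    return y

# A gap inside a linear ramp is recovered exactly, since the 1D Laplace
# equation yields linear interpolation between the known samples:
vals  = [0.0, 1.0, 0.0, 0.0, 4.0, 5.0]
known = [True, True, False, False, True, True]
filled = laplace_fill_1d(vals, known)
```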

####################### Gazdag Migration ####################################

# Convert NMO velocity to interval velocity (in time)
Flow('semb', 'vscan vpick', 'slice pick=${SOURCES[1]}')
Flow('vdix', 'vpick semb', 'dix weight=${SOURCES[1]} rect1=50')
Result('velocity', 'vdix vpick',
       '''
       cat axis=2 ${SOURCES[1]} |
       graph dash=0,1 title="Interval Velocity" unit2=km/s min2=1.45
       plotfat=7
       ''')
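The dix flow converts the picked RMS (stacking) velocities to interval velocities. The underlying Dix relation, sketched without the semblance weighting and rect1 smoothing the program adds:

```python
import math

def dix(t, vrms):
    """Interval velocity per layer from RMS velocities vrms picked at
    times t: v_int^2 = (v2^2*t2 - v1^2*t1) / (t2 - t1)."""
    vint = [vrms[0]]
    for i in range(1, len(t)):
        num = vrms[i] ** 2 * t[i] - vrms[i - 1] ** 2 * t[i - 1]
        vint.append(math.sqrt(num / (t[i] - t[i - 1])))
    return vint

# A constant RMS velocity must map to the same constant interval velocity:
v = dix([0.5, 1.0, 1.5], [1.5, 1.5, 1.5])
```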

# Migration
Flow('cosft', 'stackfilled vdix', 'cosft sign2=1 sign3=1 | window n1=276')
Flow('vdixw', 'vdix', 'window n1=276')
Flow('gazdag', 'cosft vdixw',
     '''
     gazdag velocity=${SOURCES[1]} verb=y
     ''', split=[3, 'omp', [0]])
Flow('image', 'gazdag', 'cosft sign2=-1 sign3=-1')
Result('image',
       'byte | grey3 flat=n frame1=107 frame2=290 frame3=658 title="Migrated Stack"')
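Gazdag migration works in the frequency-wavenumber domain: each depth step multiplies the wavefield by a phase shift exp(i*kz*dz) with kz = sqrt((omega/v)^2 - k^2), and evanescent components (k > omega/v) are discarded. A sketch of that one-step kernel (values illustrative):

```python
import cmath, math

def phase_shift(omega, k, v, dz):
    """One-step downward-continuation factor of phase-shift migration,
    or 0 for evanescent waves that should not be propagated."""
    kz2 = (omega / v) ** 2 - k ** 2
    if kz2 <= 0:
        return 0.0            # evanescent: zero out
    return cmath.exp(1j * math.sqrt(kz2) * dz)

# Vertically travelling energy (k = 0) receives the full vertical shift
# omega/v * dz, with unit magnitude (a pure phase rotation):
p = phase_shift(omega=2.0 * math.pi * 30.0, k=0.0, v=1.5, dz=0.005)
```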

Flow('line24', 'stackfilled', 'window n2=1 f2=24 n1=276')
Flow('line24mig', 'image', 'window n2=1 f2=24')
Plot('line24',
     'grey title="Iline 24 Before Migration" label1=Time label2="Xline Position"')
Plot('line24mig',
     'grey title="Iline 24 After Migration" label1=Time label2="Xline Position"')
Result('line24', 'line24 line24mig', 'OverUnderAniso')

Flow('line258', 'stackfilled', 'window n2=1 f2=258 n1=276')
Flow('line258mig', 'image', 'window n2=1 f2=258')
Plot('line258',
     'grey title="Iline 258 Before Migration" label1=Time label2="Xline Position"')
Plot('line258mig',
     'grey title="Iline 258 After Migration" label1=Time label2="Xline Position"')
Result('line258', 'line258 line258mig', 'OverUnderAniso')

Flow('slice52', 'stackfilled', 'window n1=1 f1=52')
Flow('slice52mig', 'image', 'window n1=1 f1=52')
Plot('slice52',
     'grey title="Slice 52 Before Migration" label1="Iline Position"')
Plot('slice52mig',
     'grey title="Slice 52 After Migration" label1="Iline Position"')
Result('slice52', 'slice52 slice52mig', 'SideBySideIso')
Plot('slice52aniso', 'slice52 slice52mig', 'OverUnderAniso')

####################### Post-Stack RTM on inline 340 ###########################

### No results from this section were included in the report. We include this code
### to demonstrate our workflow for post-stack RTM on a 2D slice.

# Window out inline 340
Flow('stacked340', 'stackfilled', 'window n2=1 f2=340')
Result('stacked340', 'grey title="Inline 340 Before Migration"')

Flow('line340', 'databin', 'window n3=1 f3=340')
Flow('line340off', 'offset3', 'window n3=1 f3=340 | spray axis=1 n=1 o=0 d=1')
Flow('maskbad', 'line340',
     '''
     mul $SOURCE | stack axis=1 |
     mask min=1e-20 | spray axis=1 n=1
     ''')
Flow('mask2', 'maskbad mask', 'spray axis=1 n=1 | mul ${SOURCES[1]}')
Flow('line340mask', 'maskoffset maskbad',
     'window n2=1 f2=340 | spray axis=1 n=1 | mul ${SOURCES[1]}')
# Velocity Analysis
Flow('vscan340', ['line340', 'line340off', 'line340mask', mute],
     '''
     vscan semblance=y half=n v0=1.4 nv=501 dv=0.02
     offset=${SOURCES[1]} mask=${SOURCES[2]} |
     ./${SOURCES[3]} t1=-0.4 v1=2.4
     ''')

Plot('vscan340', 'grey gainpanel=all color=j allpos=y scalebar=y barreverse=y')
Flow('vpick340', 'vscan340', 'pick rect1=25 rect2=50 vel0=1.5')
Flow('vstack340', 'vpick340', 'spray axis=2 n=21 d=1 o=0')
Result('vpick340',
       'grey title="Picked RMS Velocity for Inline 340" label2=Xline')

Flow('vdix340', 'vpick340', 'dix rect1=20 rect2=50')
Flow('vdepth340', 'vdix340',
     '''
     time2depth velocity=$SOURCE intime=y nz=1001 z0=0 dz=0.005
     ''')
Result('vdepth340',
       'window max1=1 | grey title="Velocity with Depth" label1="Depth"')
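time2depth stretches the interval-velocity model from two-way time to depth. The conversion integrates z(t) = ∫ v(t)/2 dt, the factor of two turning two-way traveltime into one-way distance; a sketch of that mapping:

```python
def time_to_depth(dt, vint):
    """Cumulative depth at the bottom of each two-way-time sample, for
    interval velocities vint (km/s) sampled every dt seconds."""
    z, depths = 0.0, []
    for v in vint:
        z += 0.5 * v * dt   # half the velocity: two-way time
        depths.append(z)
    return depths

# 1 s of two-way time at a constant 2 km/s corresponds to 1 km of depth:
d = time_to_depth(0.002, [2.0] * 500)
```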

Flow('stacks', 'line340 line340off line340mask',
     '''
     stacks half=n v0=1.4 nv=121 dv=0.02
     offset=${SOURCES[1]} mask=${SOURCES[2]}
     ''')

# NMO and Stack
Flow('nmo340', 'line340 line340off line340mask vstack340',
     '''
     nmo half=n offset=${SOURCES[1]}
     mask=${SOURCES[2]} velocity=${SOURCES[3]}
     ''', split=[3, 'omp'])
Flow('stack340', 'nmo340', 'stack axis=2')
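The nmo flow flattens each reflection along its moveout hyperbola before stacking: a flat reflector at zero-offset time t0 arrives at t(x) = sqrt(t0^2 + x^2/v^2), and NMO maps that arrival back to t0. A sketch of the traveltime relation it applies (half=n, so x is the full source-receiver offset):

```python
import math

def nmo_time(t0, offset, v):
    """Two-way traveltime of a flat reflector with zero-offset time t0,
    recorded at the given offset with stacking velocity v."""
    return math.sqrt(t0 ** 2 + (offset / v) ** 2)

# With v = 2.0 km/s, a 1.0 s event recorded at 1.5 km offset arrives at
# sqrt(1 + 0.5625) = 1.25 s; NMO shifts that sample back to 1.0 s.
t = nmo_time(1.0, 1.5, 2.0)
```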

# Multiple Removal
Flow('acstack340', 'stack340',
     '''
     fft1 | math output="input*conj(input)" | fft1 inv=y |
     window n1=150 | scale axis=1
     ''')
Flow('destack340', 'stack340',
     'pef minlag=0.01 maxlag=0.07 pnoise=0.001 mincorr=0 maxcorr=0.30 | put d2=0.00625')

# Use inline data from 3D processing
Flow('stackfilled340alt', 'stackfilled', 'window n2=1 f2=340')

Flow('fftv', 'vdepth340', 'transp | fft1 | fft3 axis=2 pad=1')
Flow('right left', 'vdepth340 fftv',
     '''
     transp | isolr2 seed=2016 dt=0.002
     fft=${SOURCES[1]} left=${TARGETS[1]}
     ''')
Flow('rtmimage', 'stackfilled340alt left right',
     '''
     spline n1=301 o1=0 d1=0.002 |
     reverse which=1 | transp |
     fftexp0 mig=y left=${SOURCES[1]}
     right=${SOURCES[2]} nz=1001 dz=0.005
     ''')

Result('rtmimage',
       'window max1=1.5 | grey title="Inline 340 After Post-Stack RTM"')
Flow('comparemigs', 'image', 'window n2=1 f2=340')
Plot('comparemigs',
     'grey title="Inline 340 After Gazdag Migration" label1=Time label2=Xline')
Plot('rtmimage',
     'window max1=1.5 | grey title="Inline 340 After Post-Stack RTM"')
Result('comparemigs', 'comparemigs rtmimage', 'OverUnderAniso')

End()
REFERENCES

Fomel, S., 2002, Applications of plane-wave destruction filters: Geophysics, 67, 1946–1960.
Meckel, T. A., and F. J. Mulcahy, 2016, Use of novel high-resolution 3D marine seismic technology to evaluate Quaternary fluvial valley development and geologic controls on shallow gas distribution, inner shelf, Gulf of Mexico: Interpretation, 4, SC35–SC49.
Taner, M. T., and F. Koehler, 1981, Surface consistent corrections: Geophysics, 46, 17–22.
Yilmaz, Ö., 2001, Seismic data analysis: Society of Exploration Geophysicists, Tulsa, 1.