Internal multiple attenuation using inverse scattering: Results from prestack 1 & 2D acoustic and
elastic synthetics
R. T. Coates*, Schlumberger Cambridge Research, A. B. Weglein, Arco Exploration and Production Technology
Summary
The attenuation of internal multiples in a multidimensional
earth is an important and longstanding problem in exploration
seismics. In this paper we report the results of applying
an attenuation algorithm based on the inverse scattering
series to synthetic prestack data sets generated in one-
and two-dimensional earth models. The attenuation algorithm
requires no information about the subsurface structure
or the velocity field. However, detailed information about
the source wavelet is a prerequisite. An attractive feature of
the attenuation algorithm is the preservation of the amplitude
(and phase) of primary events in the data, thus allowing for
subsequent AVO and other true-amplitude processing.
Summary
Methods for removal of free-surface and internal multiples have been developed from both a feedback model approach and inverse scattering theory. While these two formulations derive from different mathematical viewpoints,
the resulting algorithms for free-surface multiples are very similar. By contrast, the feedback and inverse scattering
methods for internal multiples are totally different and have different requirements for subsurface information or
interpretive intervention. The former removes all multiples related to a certain boundary with the aid of a surface
integral along this boundary; the latter will predict and attenuate all internal multiples at the same time. In this paper, we continue our comparison study of these internal multiple attenuation methods; specifically, we examine two
different realizations of the feedback method and the inverse scattering technique.
Performance of cognitive radio networks with maximal ratio combining over cor... (Polytechnique Montreal)
In this paper, we apply the maximal ratio combining (MRC) technique to achieve a higher detection probability in cognitive radio networks over correlated Rayleigh fading channels. We present a simple approach to deriving the probability of detection in closed form. The numerical results reveal that the detection performance is a monotonically increasing function of the number of antennas. Moreover, we provide sets of complementary receiver operating characteristic (ROC) curves to illustrate the effect of antenna correlation on the sensing performance of cognitive radio networks employing MRC schemes in representative scenarios.
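The reported trend (detection probability grows with the number of antennas) can be checked with a small Monte Carlo sketch. This is an illustrative energy detector with MRC over independent Rayleigh branches, not the paper's closed-form derivation for the correlated case; the sample counts, SNR, and false-alarm target are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mrc_detection_prob(n_antennas, snr_db=0.0, n_trials=20000, pfa=0.1):
    """Monte Carlo estimate of detection probability for an energy detector
    with maximal ratio combining over i.i.d. Rayleigh fading branches."""
    snr = 10 ** (snr_db / 10)
    n_samples = 16  # samples per sensing slot (assumed)
    # Rayleigh channel gains, one per antenna and trial
    h = (rng.normal(size=(n_trials, n_antennas)) +
         1j * rng.normal(size=(n_trials, n_antennas))) / np.sqrt(2)
    noise = (rng.normal(size=(n_trials, n_antennas, n_samples)) +
             1j * rng.normal(size=(n_trials, n_antennas, n_samples))) / np.sqrt(2)
    signal = np.sqrt(snr) * rng.choice([-1.0, 1.0], size=(n_trials, 1, n_samples))

    def statistic(x):
        # MRC: weight each branch by conj(h), sum branches, then energy-detect
        combined = np.sum(np.conj(h)[:, :, None] * x, axis=1)
        return np.sum(np.abs(combined) ** 2, axis=1)

    t0 = statistic(noise)                           # H0: noise only
    t1 = statistic(noise + h[:, :, None] * signal)  # H1: signal present
    thresh = np.quantile(t0, 1 - pfa)  # threshold set by the target false-alarm rate
    return np.mean(t1 > thresh)

# Detection probability should grow with the number of antennas
pd = [mrc_detection_prob(m) for m in (1, 2, 4)]
print(pd)
```

The monotone improvement with antenna count mirrors the paper's numerical finding, here without any antenna correlation.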
EE402B Radio Systems and Personal Communication Networks - Formula sheet (Haris Hassan)
Programmes in which available: Master of Engineering - Electrical and Electronic Engineering; Master of Engineering - Electronic Engineering and Computer Science; Master of Science - Communication Systems and Wireless Networking; Master of Science - Smart Telecom and Sensing Networks; Master of Science - Photonic Integrated Circuits, Sensors and Networks.
To enable an extension of knowledge in fundamental data communications to radio communications and networks widely adopted in modern telecommunications systems. To provide understanding of radio wave utilisation, channel loss properties, mobile communication technologies and network protocol architecture applied to practical wireless systems.
Analysis, Design and Optimization of Multilayer Antenna Using Wave Concept It... (journal BEEI)
The wave concept iterative process is a procedure used for the analysis of planar circuits. This method consists in generating a recursive relationship between a source wave and the waves reflected from the discontinuity plane, which is divided into cells. A high computational speed is achieved by using the Fast Modal Transform (FMT). In this paper we study a patch antenna and multilayer circuits to determine the electromagnetic characteristics of these structures.
P-Wave Onset Point Detection for Seismic Signal Using Bhattacharyya Distance (CSCJournals)
In seismology, primary P-wave arrival identification is a fundamental problem for geologists worldwide. A number of algorithms that deal with P-wave onset detection and identification have already been proposed. Accurate P-wave picking is required for earthquake early-warning systems, determination of epicenter location, etc. In this paper we propose a novel algorithm for P-wave detection using the Bhattacharyya distance for seismic signals. In our study we use 50 real seismic signals (generated by earthquakes) recorded by K-NET (Kyoshin network), Japan. Our results show a maximum standard deviation of 1.76 samples from the true picks, which gives better accuracy than the ratio-test method.
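The idea of picking an onset where the statistics of two adjacent windows diverge can be sketched as follows. This models each window as a univariate Gaussian and uses the closed-form Bhattacharyya distance between Gaussians; the window length and synthetic trace are illustrative assumptions, not the paper's exact formulation or data.

```python
import numpy as np

def bhattacharyya_gauss(a, b):
    """Bhattacharyya distance between two samples, each modeled as a
    univariate Gaussian (a common simplification)."""
    m1, m2 = a.mean(), b.mean()
    v1, v2 = a.var() + 1e-12, b.var() + 1e-12
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2.0))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

def pick_onset(x, win=100):
    """Slide two adjacent windows over the trace; the onset estimate is
    where the pre-window/post-window Bhattacharyya distance peaks."""
    d = np.array([bhattacharyya_gauss(x[i - win:i], x[i:i + win])
                  for i in range(win, len(x) - win)])
    return win + int(np.argmax(d))

# Synthetic trace: weak noise, then a stronger "P-wave" arrival at sample 1000
rng = np.random.default_rng(1)
trace = 0.1 * rng.normal(size=2000)
trace[1000:] += np.sin(0.3 * np.arange(1000)) * np.exp(-np.arange(1000) / 400)
onset = pick_onset(trace)
print(onset)  # close to the true onset sample 1000
```

The distance peaks where one window is pure pre-event noise and the other contains the arrival, which is the property the paper exploits.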
Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co... (Polytechnique Montreal)
This paper applies a compressed sensing algorithm to improve the spectrum sensing performance of cognitive radio technology.
At the fusion center, the recovery error in the analog to information converter (AIC) when reconstructing the
transmit signal from the received time-discrete signal causes degradation of the detection performance. Therefore, we
propose a subspace pursuit (SP) algorithm to reduce the recovery error and thereby enhance the detection performance.
In this study, we employ a wide-band, low SNR, distributed compressed sensing regime to analyze and evaluate the
proposed approach. Simulations are provided to demonstrate the performance of the proposed algorithm.
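For reference, the subspace pursuit (SP) recovery step at the heart of the proposal can be sketched on a generic compressed sensing problem. This follows the standard Dai-Milenkovic SP iteration for a K-sparse vector; the random matrix and sparsity level are assumptions for the example, not the paper's AIC/fusion-center pipeline.

```python
import numpy as np

def subspace_pursuit(A, y, K, max_iter=20):
    """Subspace pursuit sketch: recover a K-sparse x from y = A @ x by
    alternately expanding and pruning a size-K support estimate."""
    n = A.shape[1]
    # Initial support: the K columns most correlated with y
    support = np.argsort(np.abs(A.T @ y))[-K:]
    for _ in range(max_iter):
        r = y - A[:, support] @ np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        # Expand with the K columns most correlated with the residual
        cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-K:])
        xc = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]
        # Prune back to the K largest coefficients
        new_support = cand[np.argsort(np.abs(xc))[-K:]]
        if set(new_support) == set(support):
            break
        support = new_support
    x = np.zeros(n)
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x

# Toy recovery: 3-sparse length-100 vector from 40 random projections
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 81]] = [1.0, -2.0, 1.5]
x_hat = subspace_pursuit(A, A @ x_true, K=3)
print(np.linalg.norm(x_hat - x_true))
```

With noiseless measurements and a well-conditioned random matrix, SP recovers the sparse vector essentially exactly, which is the reconstruction-error reduction the paper leverages.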
Ill-posedness formulation of the emission source localization in the radio- d... (Ahmed Ammar Rebai, PhD)
To contact the authors : tarek.salhi@gmail.com and ahmed.rebai2@gmail.com
In the field of radio detection in astroparticle physics, many studies have shown the strong dependence of the solution of the radio-transient source localization problem (the radio-shower time of arrival on antennas); such solutions are purely numerical artifacts. Based on a detailed analysis of some already-published results of radio-detection experiments such as CODALEMA 3 in France, AERA in Argentina, and TREND in China, we demonstrate the ill-posed character of this problem in the sense of Hadamard. Two approaches are used: the degeneracy of the set of solutions and the poor conditioning of the mathematical formulation of the problem. A comparison between experimental results and simulations has been made to support the mathematical studies. Many properties of the non-linear least-squares function are discussed, such as the configuration of the set of solutions and the bias.
OPTIMAL BEAM STEERING ANGLES OF A SENSOR ARRAY FOR A MULTIPLE SOURCE SCENARIO (csandit)
We present the gradient and Hessian of the trace of the multivariate Cramér-Rao bound (CRB)
formula for unknown impinging angles of plane waves with non-unitary beamspace measurements. This gradient and Hessian can be used to find the optimal beamspace
transformation matrix, i.e., the optimum beamsteering angles, using Newton-Raphson iteration. These trace formulas are particularly useful for the multiple-source scenario.
We also show the mean squared error (MSE) performance gain of the optimally steered beamspace measurements compared with the usual DFT-steered measurements, when the angles
of arrival (AOAs) are estimated with the stochastic maximum likelihood estimation (SMLE) algorithm.
Two novel transforms, related to each other and called the Sine and Cosine Fresnel Transforms, as well as their optical implementation, are presented. Each transform combines both backward and forward light propagation in the framework of the scalar diffraction approximation. It has been proven that the Fresnel transform is the optical version of the fractional Fourier transform; therefore the former has the same properties as the latter. While showing properties similar to those of the Fresnel transform, and therefore of the fractional Fourier transform, each of the Sine and Cosine Fresnel transforms provides a real result for a real input distribution. This enables saving half of the quantity of information in the complex plane. Because of parallelism, optics offers high-speed processing of digital signals. Speech signals should first be represented by images, through special light modulators for example. The Sine and Cosine Fresnel transforms may be regarded, respectively, as the fractional Sine and Cosine transforms, which are more general than the Cosine transform used in information processing and compression.
VISUAL MODEL BASED SINGLE IMAGE DEHAZING USING ARTIFICIAL BEE COLONY OPTIMIZA... (ijistjournal)
Images are often degraded by atmospheric haze, a phenomenon due to particles in the air that scatter light. Haze induces a loss of contrast; its visual effect is a blurring of distant objects. This paper presents a novel algorithm for improving the visibility of an image degraded by haze. The proposed method uses a cost function based on a human visual model to estimate the airlight map, employing Artificial Bee Colony (ABC) optimization as the estimation technique. The image is dehazed by removing the estimated airlight from the degraded image. The performance of the algorithm is tested and compared with various other dehazing methods; the proposed algorithm dehazes the image effectively, outperforming the other methods.
I will describe a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious mean measurement-model parameterization, we first rewrite the measurement equation by changing the integral variable from photon energy to mass attenuation, which allows us to combine the variations brought by the unknown incident spectrum and mass attenuation into a single unknown mass-attenuation spectrum function; the resulting measurement equation has the Laplace integral form. The mass-attenuation spectrum is then expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply a step-size selection scheme that accounts for varying local Lipschitz constants of the objective function. I will discuss the biconvexity of the penalized NLL function and outline preliminary results on convergence of PG-BFGS schemes. Finally, I will present real X-ray CT reconstruction examples that demonstrate the performance of the proposed scheme.
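The proximal-gradient step for a nonnegative, sparsity-penalized image can be illustrated on a much simpler surrogate problem. This sketch uses a Gaussian (least-squares) data fit with an l1 penalty and a nonnegativity constraint, i.e. plain ISTA; the abstract's method uses a Poisson NLL with Nesterov acceleration, which is not reproduced here, and the matrix sizes and penalty weight are assumptions.

```python
import numpy as np

def prox_grad_nonneg_l1(A, y, lam=0.05, n_iter=400):
    """Proximal-gradient (ISTA-style) sketch: minimize
    0.5*||A x - y||^2 + lam*sum(x) subject to x >= 0."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        # prox of lam*||x||_1 plus the x >= 0 constraint: shift, then clip at zero
        x = np.maximum(x - step * (grad + lam), 0.0)
    return x

# Toy "density map": 3 nonnegative spikes observed through a random system matrix
rng = np.random.default_rng(3)
A = rng.normal(size=(60, 120))
x_true = np.zeros(120)
x_true[[10, 50, 90]] = [1.0, 0.5, 2.0]
y = A @ x_true
x_hat = prox_grad_nonneg_l1(A, y)
print(x_hat.min() >= 0.0, np.linalg.norm(A @ x_hat - y))
```

The combined shift-and-clip update is the closed-form proximal operator for this penalty pair; the same structure underlies the NPG density-map step, just with a different data-fit gradient.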
Wavelet estimation for a multidimensional acoustic or elastic earth (Arthur Weglein)
A new and general wave theoretical wavelet estimation
method is derived. Knowing the seismic wavelet
is important both for processing seismic data and for
modeling the seismic response. To obtain the wavelet,
both statistical (e.g., Wiener-Levinson) and deterministic
(matching surface seismic to well-log data) methods
are generally used. In the marine case, a far-field
signature is often obtained with a deep-towed hydrophone.
The statistical methods do not allow obtaining
the phase of the wavelet, whereas the deterministic
method obviously requires data from a well. The
deep-towed hydrophone requires that the water be
deep enough for the hydrophone to be in the far field
and in addition that the reflections from the water
bottom and structure do not corrupt the measured
wavelet. None of the methods address the source
array pattern, which is important for amplitude-versus-
offset (AVO) studies.
The inverse scattering series for tasks associated with primaries: direct non... (Arthur Weglein)
The inverse scattering series for tasks associated with primaries: direct non-linear inversion of 1D elastic media. In this paper, research on direct inversion for two-parameter acoustic media (Zhang and Weglein, 2005) is
extended to the three-parameter elastic case. We present
the first set of direct non-linear inversion equations for
1D elastic media (i.e., depth varying P-velocity, shear
velocity and density). The terms for moving mislocated
reflectors are shown to be separable from amplitude
correction terms. Although in principle this direct
inversion approach requires all four components of elastic
data, synthetic tests indicate that consistent value-added
results may be achieved given only ˆDPP measurements.
We can reasonably infer that further value would derive
from actually measuring ˆDPP , ˆD PS, ˆDSP and ˆDSS as
the method requires. The method is direct with neither
a model matching nor cost function minimization.
Towards the identification of the primary particle nature by the radiodetecti... (Ahmed Ammar Rebai, PhD)
Radio signals from extensive air showers (EAS) studied by the CODALEMA experiment have been detected by means of the classic short fat antenna array, working in a slave trigger mode driven by a particle scintillator array. It is shown that the radio shower wavefront is curved with respect to the plane-wavefront hypothesis. A new fitting model (a parabolic model) is therefore proposed to fit the radio-signal time-delay distributions on an event-by-event basis. This model takes
into account this wavefront property and several shower-geometry parameters, such as the existence of an apparent localised radio-emission source located at a distance Rc from the antenna array, and the radio shower core on the
ground. Comparison of the outputs from this model and other reconstruction models used in the same experiment shows:
1) that the radio shower core is shifted from the particle shower core, in a statistical analysis approach;
2) the capability of the radio-detection method to reconstruct the curvature radius with a statistical error of less than 50 g.cm−2.
Finally, a preliminary study of the primary particle nature has been performed, based on a comparison between data and the Xmax distribution from AIRES Monte Carlo simulations for the same set of events.
On The Fundamental Aspects of Demodulation (CSCJournals)
When the instantaneous amplitude, phase, or frequency of a carrier wave is modulated with the information signal for transmission, the receiver is known to work on the basis of the received signal and a knowledge of the carrier frequency. The question is: if the receiver does not have a priori information about the carrier frequency, is it possible to carry out the demodulation process? This tutorial lecture answers this question by looking into the fundamental process by which the modulated wave is generated. It critically examines the energy separation algorithm for signal analysis and suggests a modification for distortionless demodulation of an FM signal and recovery of sub-carrier signals.
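The energy separation algorithm mentioned above is built on the Teager-Kaiser energy operator, and the point of the lecture (no a priori carrier knowledge) can be demonstrated concretely: the DESA-1 estimator below recovers a tone's frequency from the samples alone. The test tone is an illustrative choice.

```python
import numpy as np

def teager(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa1_freq(x):
    """DESA-1 instantaneous-frequency estimate (radians/sample) from the
    energy separation algorithm, using the backward difference of x."""
    y = np.diff(x)  # y[n] = x[n] - x[n-1]
    psi_x = teager(x)
    psi_y = teager(y)
    n = min(len(psi_y) - 1, len(psi_x))
    ratio = (psi_y[:n] + psi_y[1:n + 1]) / (4.0 * psi_x[:n] + 1e-12)
    return np.arccos(np.clip(1.0 - ratio, -1.0, 1.0))

# A 0.2 rad/sample tone: the estimator needs no prior knowledge of the carrier
n = np.arange(4000)
x = np.cos(0.2 * n + 0.7)
omega = desa1_freq(x)
print(np.median(omega))  # ≈ 0.2
```

For a pure sinusoid the operator output is constant (psi = A² sin²Ω), which is why the per-sample estimate is essentially exact here; for modulated signals it tracks the instantaneous frequency.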
Arthur B. Weglein, Hong Liang & Chao Ma - Research Paper (Arthur Weglein)
Seismic research paper published by the authors Arthur B. Weglein, Hong Liang & Chao Ma. Arthur B. Weglein is the director of the Mission-Oriented Seismic Research Program, University of Houston.
The Inverse Scattering Series (ISS) is a direct inversion method
for a multidimensional acoustic, elastic and anelastic earth. It
communicates that all inversion processing goals can be
achieved directly and without any subsurface information.
This task is reached through a task-specific subseries of the
ISS. Using primaries in the data as subevents of the first-order
internal multiples, the leading-order attenuator can predict the
time of all the first-order internal multiples and is able to attenuate
them.
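The timing part of that prediction can be shown with a toy example. For subevents with pseudo-depths z1 > z2 < z3 (the "lower-higher-lower" rule), the attenuator places a first-order internal multiple at time t1 - t2 + t3. The primary times below are hypothetical, and this sketch illustrates only the kinematics, not the amplitude-correct ISS attenuator.

```python
# Predict first-order internal multiple arrival times from primary times,
# following the ISS lower-higher-lower subevent rule: combine three
# primaries whose middle subevent is shallower, with total time t1 - t2 + t3.
primaries = [0.8, 1.4, 2.1]  # two-way times of three primaries (s), hypothetical

def internal_multiple_times(times):
    preds = set()
    for t1 in times:
        for t2 in times:
            for t3 in times:
                if t2 < t1 and t2 < t3:  # middle subevent must be shallower
                    preds.add(round(t1 - t2 + t3, 6))
    return sorted(preds)

print(internal_multiple_times(primaries))  # [2.0, 2.7, 2.8, 3.4]
```

Each predicted time corresponds to a downward bounce between two reflectors; no velocity model or reflector depths enter the calculation, only the recorded primary times.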
However, the ISS internal multiple attenuation algorithm can
be a computationally demanding method, especially in a complex
earth. By using an approach that is based on two angular
quantities and that was proposed in Terenghi et al. (2012), the
cost of the algorithm can be controlled. The idea is to use the
two angles as key-control parameters, by limiting their variation,
to disregard some calculated contributions of the algorithm
that are negligible. Moreover, the range of integration
can be chosen as a compromise between the required degree of accuracy
and the computational time saving.
This time-saving approach is presented here.
Arthur B. Weglein, Hong Liang, and Chao Ma, M-OSRP/Physics Dept./University o... (Arthur Weglein)
Arthur B. Weglein is a professor in the Department of Physics and the Department of Earth and Atmospheric Sciences in Houston, TX.
Accuracy of the internal multiple prediction when a time-saving method based ... (Arthur Weglein)
The inverse scattering series (ISS) is a direct inversion method for a multidimensional acoustic,
elastic and anelastic earth. It communicates that all inversion processing goals can be
achieved directly and without any subsurface information. This task is reached through a task-specific
subseries of the ISS. Using primaries in the data as subevents of the first-order internal
multiples, the leading-order attenuator can predict the time of all the first-order internal multiples
and is able to attenuate them.
Finite-difference modeling, accuracy, and boundary conditions (Arthur Weglein)
This short report gives a brief review of the finite-difference modeling method used in M-OSRP
and its boundary conditions, as a preparation for the Green's theorem RTM. The first
part gives the finite-difference formulae we used, and the second part describes the implemented
boundary conditions. The last part, using two examples, points out some impacts of the accuracy
of the source fields on the modeling results.
Design of Low-Pass Digital Differentiators Based on B-splines (CSCJournals)
This paper describes a new method for designing low-pass differentiators that is widely suitable for low-frequency signals with different sampling rates. The method is based on the differential property of convolution and the derivatives of B-spline basis functions. The first-order differentiator is constructed directly from the first derivative of the B-spline of degree 5 or 4. A higher (>2) order low-pass differentiator is constructed by cascading two lower-order differentiators, whose coefficients are obtained from the nth derivative of a B-spline of degree n+2 expanded by a factor a. In this paper, the properties of the proposed differentiators are presented. In addition, we give examples of designing the first- to sixth-order differentiators, and several simulations, including the effects of the factor a on the results and the noise immunity of the proposed differentiators. These property analyses and simulations indicate that the proposed differentiator can be applied to a wide range of low-frequency signals, and that the trade-off between noise reduction and signal preservation can be made by selecting the maximum allowable value of a.
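The core idea (differentiation fused with B-spline smoothing) can be sketched in a few lines. This builds a discrete B-spline-shaped smoothing kernel by cascading two-tap boxes, combines it with a central difference, and applies the result as an FIR low-pass differentiator; the kernel degree and test signal are illustrative choices, not the paper's exact coefficient design.

```python
import numpy as np

def bspline_kernel(degree):
    """Discrete smoothing kernel built by repeatedly convolving a two-tap
    box; the taps approach the sampled B-spline (binomial) profile."""
    k = np.array([1.0])
    for _ in range(degree + 1):
        k = np.convolve(k, [0.5, 0.5])
    return k

def lowpass_differentiator(x, dt=1.0, degree=5):
    """First-order low-pass differentiator in the spirit of the paper:
    a central difference smoothed by a degree-5 B-spline-shaped kernel
    (odd total length, so the 'same' convolution stays phase-centered)."""
    kernel = np.convolve(bspline_kernel(degree), [1.0, 0.0, -1.0]) / (2.0 * dt)
    return np.convolve(x, kernel, mode="same")

# Differentiating a slow sinusoid should approximately recover its cosine derivative
t = np.arange(0, 20, 0.01)
x = np.sin(t)
dx = lowpass_differentiator(x, dt=0.01)
err = np.max(np.abs(dx[200:-200] - np.cos(t[200:-200])))
print(err)
```

Because the smoothing kernel sums to one and the difference kernel is antisymmetric, low frequencies pass through with the correct derivative gain while high-frequency noise is attenuated, which is the trade-off the paper tunes via its expansion factor.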
ECE 6340 Fall 2013 Homework 8 Assignment (joyjonna282)
Assignment: Please do Probs. 1-9 and 13 from the set below.
1) In dynamics, we have the equation
E j Aω= − −∇Φ .
(a) Show that in statics, the scalar potential function Φ can be interpreted as a voltage
function. That is, show that in statics
( ) ( )
B
AB
A
V E dr A B≡ ⋅ = Φ −Φ∫ .
(b) Next, explain why this equation is not true (in general) in dynamics.
(c) Explain why the voltage drop (defined as the line integral of the electric field, as
defined above) depends on the path from A to B in dynamics, using Faraday’s law.
(d) Does the right-hand side of the above equation (the difference in the potential
function) depend on the path, in dynamics?
Hint: Note that, according to calculus, for any function ψ we have
∇ψ ⋅ dr = (∂ψ/∂x)dx + (∂ψ/∂y)dy + (∂ψ/∂z)dz = dψ.
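The claim in part (a), and the hint, can be checked numerically for a static field E = −∇Φ: the line integral of E is the same along different paths from A to B and equals Φ(A) − Φ(B). The potential below is an arbitrary example chosen for illustration.

```python
import numpy as np

# Static case: E = -grad(Phi).  Check that the line integral of E from A
# to B equals Phi(A) - Phi(B), independent of the path taken.
def phi(p):
    x, y = p
    return x**2 + y                      # arbitrary illustrative potential

def E(p):
    x, y = p
    return np.array([-2.0 * x, -1.0])    # E = -grad(phi)

def line_integral(points):
    # Midpoint-rule approximation of the integral of E . dr along a
    # polyline (exact here because E is linear in position).
    total = 0.0
    for p0, p1 in zip(points[:-1], points[1:]):
        mid = 0.5 * (np.asarray(p0) + np.asarray(p1))
        total += E(mid) @ (np.asarray(p1) - np.asarray(p0))
    return total

A, B = np.array([0.0, 0.0]), np.array([1.0, 2.0])
straight = [A + s * (B - A) for s in np.linspace(0, 1, 1000)]
bent = [A + s * np.array([1.0, 0.0]) for s in np.linspace(0, 1, 1000)] \
     + [np.array([1.0, 0.0]) + s * np.array([0.0, 2.0]) for s in np.linspace(0, 1, 1000)]
v1, v2 = line_integral(straight), line_integral(bent)
# Both equal phi(A) - phi(B) = 0 - 3 = -3 in statics.
```

In dynamics the extra −jωA term breaks this path independence, which is part (c) of the problem.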
2) Starting with Maxwell’s equations, show that the electric field radiated by an impressed
current density source J^i in an infinite homogeneous region satisfies the equation
∇²E + k²E = ∇(∇⋅E) + jωμ J^i.
Then use Ampere’s law (or, if you prefer, the continuity equation and the electric Gauss
law) to show that this equation may be written as
∇²E + k²E = −(1/(σ + jωε)) ∇(∇⋅J^i) + jωμ J^i.
Note that the total current density is the sum of the impressed current density and the
conduction current density, the latter obeying Ohm’s law (J^c = σE).
Explain why this equation for the electric field would be harder to solve than the equation
that was derived in class for the magnetic vector potential.
3) Show that the magnetic field radiated by an impressed current density source satisfies the
equation
∇²H + k²H = −∇×J^i.
Explain why this equation for the magnetic field would be harder to solve than the
equation that was derived in class for the magnetic vector potential.
4) Show that in a homogeneous region of space the scalar electric potential satisfies the
equation
∇²Φ + k²Φ = −ρ_v^i / ε_c,
where ρ_v^i is the impressed (source) charge density, which is the charge density that goes
along with the impressed current density, the two being related by
∇⋅J^i = −jω ρ_v^i.
Hint: Start with E = −jωA − ∇Φ and take the divergence of both sides. Also, take the
divergence of both sides of Ampere’s law and use the continuity equation for the
impressed current (given above) to show that
∇⋅E = −(1/(jωε_c)) ∇⋅J^i = ρ_v^i / ε_c.
Note: It is also true from the electric Gauss law that
∇⋅E = ρ_v / ε,
but we prefer to have only an impressed (source) charge density on the right-hand side of
the equation for the potential Φ. In the time-harmonic steady state, assuming a
homogeneous and isotropic region, it follows that ρ_v = ρ_v^i. That is, there is no charge
density arising from the conduction current. (If there were no impressed current sources,
the total charge density would therefore be ze ...
DEEP LEARNING BASED MULTIPLE REGRESSION TO PREDICT TOTAL COLUMN WATER VAPOR (... - IJDKP
Total column water vapor (TCWV) is an important factor for weather and climate. This study applies
deep-learning-based multiple regression to map TCWV with predictors that can improve
spatiotemporal prediction. We predict TCWV using ERA5, the fifth-generation ECMWF
atmospheric reanalysis of the global climate. We use a deep-learning-based multiple regression
algorithm, implemented with the Keras library, to improve the nonlinear prediction of TCWV
from predictors such as mean sea-level pressure, surface pressure, sea-surface temperature,
the 100-metre U and V wind components, the 10-metre U and V wind components, the 2-metre
dew-point temperature, and the 2-metre temperature.
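As a minimal, library-free sketch of the multiple-regression idea (the study itself uses a deep Keras model trained on ERA5 predictors; the synthetic data and single linear layer below are toy stand-ins):

```python
import numpy as np

# Toy stand-in for the ERA5 setup: 5 synthetic predictors, one target.
# The "true" relation here is linear plus noise; the study's network
# stacks nonlinear layers, which this sketch omits.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
true_w = np.array([0.8, -1.2, 0.5, 2.0, -0.3])
y = X @ true_w + 0.01 * rng.normal(size=2000)

# A single linear layer trained by batch gradient descent on MSE:
# multiple regression as the degenerate (no hidden layer) network.
w, b, lr = np.zeros(5), 0.0, 0.1
for _ in range(500):
    err = X @ w + b - y
    w -= lr * (X.T @ err) / len(y)
    b -= lr * err.mean()
```

After training, `w` recovers the underlying coefficients; a Keras `Sequential` model with hidden `Dense` layers generalizes this same loop to the nonlinear case.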
A short paper on techniques for rendering and examining acoustic waveforms as three-dimensional surfaces, and how they may relate to an understanding of computer speech recognition.
Inverse scattering series for multiple attenuation: An example with surface a... - Arthur Weglein
A multiple attenuation method derived from an inverse scattering
series is described. The inversion series approach allows a
separation of multiple attenuation subseries from the full series.
The surface multiple attenuation subseries was described and illustrated
in Carvalho et al. (1991, 1992). The internal multiple
attenuation method consists of selecting the parts of the odd
terms that are associated with removing only multiply reflected
energy. The method, for both types of multiples, is multidimensional
and does not rely on periodicity or differential moveout,
nor does it require a model of the reflectors generating the multiples.
An example with internal and surface multiples will be
presented.
In this paper we present a multidimensional method for attenuating internal multiples that derives from
an inverse scattering series. The method does not depend on periodicity or differential moveout, nor does it
require a model of the multiple-generating reflectors.
Internal multiple attenuation using inverse scattering: Results from prestack... - Arthur Weglein
The attenuation of internal multiples in a multidimensional
earth is an important and longstanding problem in exploration
seismics. In this paper we report the results of applying
an attenuation algorithm based on the inverse scattering
series to synthetic prestack data sets generated in one- and
two-dimensional earth models. The attenuation algorithm
requires no information about the subsurface structure
or the velocity field. However, detailed information about
the source wavelet is a prerequisite. An attractive feature of
the attenuation algorithm is the preservation of the amplitude
(and phase) of primary events in the data, thus allowing for
subsequent AVO and other true-amplitude processing.
Wavelet estimation for a multidimensional acoustic or elastic earth - Arthur Weglein
A new and general wave theoretical wavelet estimation
method is derived. Knowing the seismic wavelet
is important both for processing seismic data and for
modeling the seismic response. To obtain the wavelet,
both statistical (e.g., Wiener-Levinson) and deterministic
(matching surface seismic to well-log data) methods
are generally used. In the marine case, a far-field
signature is often obtained with a deep-towed hydrophone.
The statistical methods do not allow obtaining
the phase of the wavelet, whereas the deterministic
method obviously requires data from a well. The
deep-towed hydrophone requires that the water be
deep enough for the hydrophone to be in the far field
and in addition that the reflections from the water
bottom and structure do not corrupt the measured
wavelet. None of the methods address the source
array pattern, which is important for amplitude-versus-
offset (AVO) studies.
Deghosting is a longstanding seismic objective and problem that has received considerable renewed attention due to: (1) an interest in so-called "broadband seismology" and the low frequency/low vertical wavenumber.
All of the perturbative approaches to multidimensional wave
equation processing, for example, wave equation migration (see,
e.g., Claerbout, 1971; French, 1975; Schneider, 1978; Stolt, 1978;
Sattlegger et al., 1980), or Born approximation inversion (see,
e.g., Cohen and Bleistein, 1979; Raz, 1981; Clayton and Stolt,
1981), require some input velocity information. In the Born approximation
to inversion, a reference or background velocity is
chosen and a perturbation about this velocity is determined. Similarly,
a velocity model is a required input to all wave equation
migration techniques.
The Inverse Source Problem in the Presence of External Sources - Dr. Arthur B... - Arthur Weglein
This paper presents a brief review of the various integral equation formulations that have been employed
for the inverse source problem for the inhomogeneous scalar Helmholtz equation. It is shown that these
formulations apply only in cases where either the data are prescribed on a closed surface surrounding the
unknown source or where the unknown source lies entirely on one side of an open measurement surface.
A generalized integral equation is derived that applies to the more general case where unknown sources
can exist on both sides of an open measurement surface. This latter problem arises in geophysical remote
sensing and the derived integral equation offers an approach to this class of problems not offered by
currently employed techniques.
Direct non-linear inversion of multi-parameter 1D elastic media using the inv... - Arthur Weglein
In this paper, we present the first non-linear direct target identification method and algorithm
for 1D elastic media (P velocity, shear velocity and density vary in depth) from the inverse
scattering series. Direct non-linear means that we provide explicit formulas that: (1) input data
and directly output changes in material properties, without the use or need for any indirect procedures
such as model matching, searching, optimization or other assumed aligned objectives or
proxies, and (2) recognize and directly invert the intrinsic non-linear relationship
between changes in material properties and changes in the concomitant wave-field. The results
clearly demonstrate that, in order to achieve full elastic inversion, all four components of data
(D̂_PP, D̂_PS, D̂_SP and D̂_SS) are needed. The method assumes that only data and reference
medium properties are input, and terms in the inverse series for moving mislocated reflectors
resulting from the linear inverse term are separated from amplitude correction terms. Although
in principle this direct inversion approach requires all four components of elastic data, synthetic
tests indicate that a consistent value-added result may be achieved given only D̂_PP measurements,
as long as the D̂_PP data are used to approximately synthesize the D̂_PS, D̂_SP and D̂_SS
components. We can reasonably infer that further value would derive from actually measuring
D̂_PP, D̂_PS, D̂_SP and D̂_SS as the method requires. For the case that all four components of
data are available, we give one consistent method to solve for all of the second terms (the first
terms beyond linear). The method's non-linearity and directness deliver an unambiguous data-requirement
message; that clarity, together with the explicit non-linear formulas, raises reasonable
concerns about indirect methods in general and their assumed aligned goals, e.g.,
model-matching objectives, which would never recognize the fundamental inadequacy, from
a basic physics point of view, of using only PP data to perform elastic inversion. There are important
conceptual and practical implications for the link between data acquisition and target
identification goals and objectives.
Linear inversion of absorptive/dispersive wave field measurements: theory and... - Arthur Weglein
The use of inverse scattering theory for the inversion of viscoacoustic wave field
measurements, namely for a set of parameters that includes Q, is by its nature very
different from most current approaches for Q estimation. In particular, it involves an
analysis of the angle- and frequency-dependence of amplitudes of viscoacoustic data
events, rather than the measurement of temporal changes in the spectral nature of
events. We consider the linear inversion for these parameters theoretically and with
synthetic tests. The output is expected to be useful in two ways: (1) on its own it
provides an approximate distribution of Q with depth, and (2) higher order terms in
the inverse scattering series as it would be developed for the viscoacoustic case would
take the linear inverse as input.
We will begin, following Innanen (2003) by casting and manipulating the linear
inversion problem to deal with absorption for a problem with arbitrary variation of
wavespeed and Q in depth, given a single shot record as input. Having done this, we
will numerically and analytically develop a simplified instance of the 1D problem. This
simplified case will be instructive in a number of ways, first of all in demonstrating
that this type of direct inversion technique relies on reflectivity, and has no interest in
or ability to analyse propagation effects as a means to estimate Q. Secondly, through
a set of examples of slightly increasing complexity, we will demonstrate how and where
the linear approximation causes more than the usual levels of error. We show how
these errors may be mitigated through use of specific frequencies in the input data,
or, alternatively, through a layer-stripping based, or bootstrap, correction. In either
case the linear results are encouraging, and suggest the viscoacoustic inverse Born
approximation may have value as a standalone inversion procedure.
Initial study and implementation of the convolutional Perfectly Matched Layer... - Arthur Weglein
In this report, first steps and results of the implementation of the Convolutional Perfectly
Matched Layer (CPML), for the modeling of the 2D acoustic heterogeneous wave equation
are presented. We also compare the conditions to set to zero, for all angles of incidence, the
reflection coefficient at the interface between two PML media, with the analogous conditions
for the reflection coefficient at an interface between two acoustic media. A side product of the
present work for the M-OSRP is a code to create synthetic data, using Finite-Difference (FD)
methods with PML BCs.
We also provide a short description of the main stages involved in the original Reverse Time
Migration (RTM) algorithm, with focus on the 2D acoustic heterogeneous wave equation. We
include a derivation of the equations of the CPML for the backward propagation of the data,
which is part of the RTM. To the best of the authors' knowledge, these equations and derivations
have not been reported in the literature. We include the RTM because the present
report can be considered part of a broader research project whose objective is to compare the
RTM with PML BCs against the Green's theorem based RTM developed within the M-OSRP.
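A minimal 1D constant-velocity sketch of the finite-difference time stepping involved (the report's code is 2D, heterogeneous, and adds PML boundary layers; none of that machinery is reproduced here):

```python
import numpy as np

# Minimal 1D constant-velocity acoustic finite-difference scheme,
# second order in time and space.  This is a toy stand-in for the
# report's 2D heterogeneous code: there is no PML, so waves reflect
# at the grid edges once they reach them.
nx, nt = 401, 300
dx, dt, c = 1.0, 0.5, 1.0              # CFL number c*dt/dx = 0.5 (stable)
p_prev = np.zeros(nx)
p = np.zeros(nx)
p[nx // 2] = 1.0                        # impulsive initial condition
r2 = (c * dt / dx) ** 2
for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = p[:-2] - 2 * p[1:-1] + p[2:]     # discrete Laplacian
    p, p_prev = 2 * p - p_prev + r2 * lap, p     # leapfrog update
```

A PML (or CPML) would replace the hard truncation at the grid edges with an absorbing strip in which the update equations are modified to damp outgoing waves.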
The internal-multiple elimination algorithm for all first-order internal mult... - Arthur Weglein
The ISS (Inverse-Scattering-Series) internal-multiple attenuation algorithm (Araújo et al. (1994)
and Weglein et al. (1997)) can predict the correct time and approximate amplitude for all first-order
internal multiples without any information about the earth. This algorithm is effective and
can attenuate internal multiples in many cases. However, in certain places, both on-shore
and off-shore, the multiple is often proximal to or interfering with the primaries. Therefore,
the task of completely removing internal multiples without damaging primaries becomes more
challenging and subtle, and is currently beyond the collective capability of the petroleum industry.
Weglein (2014) proposed a three-pronged strategy for providing an effective response to this
pressing and prioritized challenge. One part of the strategy is to develop an internal-multiple
elimination algorithm that can predict both the correct amplitude and the correct time for all
internal multiples. The ISS internal-multiple elimination algorithm for all first-order internal
multiples generated from all reflectors in a 1D earth is proposed in this report. The primaries in
the reflection data that enter the algorithm provide that elimination capability automatically,
without requiring the primaries to be identified or in any way separated. The other events in
the reflection data, that is, the internal multiples, will not be helpful in this elimination scheme.
That is a limitation of this algorithm. We will propose a modified strategy for providing the
elimination ability without the current shortcoming. We note that this elimination algorithm
based on the ISS internal-multiple attenuation algorithm is derived by using reverse engineering
to provide the difference between elimination and attenuation for a 1D earth. This particular
elimination algorithm is model type dependent since the reverse engineering method is model
type dependent. The ISS internal-multiple attenuation algorithm is completely model type
independent and in future work we will pursue the development of an eliminator for a multidimensional
earth by identifying terms in the inverse scattering series that have that purpose.
Multi-source connectivity as the driver of solar wind variability in the heli... - Sérgio Sacani
The ambient solar wind that fills the heliosphere originates from multiple
sources in the solar corona and is highly structured. It is often described
as high-speed, relatively homogeneous, plasma streams from coronal
holes and slow-speed, highly variable, streams whose source regions are
under debate. A key goal of ESA/NASA's Solar Orbiter mission is to identify
solar wind sources and understand what drives the complexity seen in the
heliosphere. By combining magnetic field modelling and spectroscopic
techniques with high-resolution observations and measurements, we show
that the solar wind variability detected in situ by Solar Orbiter in March
2022 is driven by spatio-temporal changes in the magnetic connectivity to
multiple sources in the solar atmosphere. The magnetic field footpoints
connected to the spacecraft moved from the boundaries of a coronal hole
to one active region (12961) and then across to another region (12957). This
is reflected in the in situ measurements, which show the transition from fast
to highly Alfvénic then to slow solar wind that is disrupted by the arrival of
a coronal mass ejection. Our results describe solar wind variability at 0.5 au
but are applicable to near-Earth observatories.
Nutraceutical market, scope and growth: Herbal drug technology - Lokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market, which includes goods like functional foods, drinks, and dietary supplements that provide health advantages beyond basic nutrition, is growing significantly. As healthcare expenses rise, the population ages, and people increasingly seek natural and preventative health solutions, this industry is expanding quickly. Innovations in product formulation and the use of cutting-edge technology for customized nutrition further drive market expansion. With its worldwide reach, the nutraceutical industry is expected to keep growing and to offer significant opportunities for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... - Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest
imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters
spanning 0.4−0.9µm) and novel JWST images with 14 filters spanning 0.8−5µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data
at > 2.3µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and
30.3-31.0 AB mag (5σ, r = 0.1” circular aperture) in individual filters. We measure photometric
redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts
z = 11.5 − 15. These objects show compact half-light radii of R1/2 ∼ 50−200 pc, stellar masses of
M⋆ ∼ 10⁷−10⁸ M⊙, and star-formation rates of SFR ∼ 0.1−1 M⊙ yr⁻¹. Our search finds no candidates
at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to
infer the properties of the evolving luminosity function without binning in redshift or luminosity that
marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the
impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results,
and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5
from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical
models for evolution of the dark matter halo mass function.
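The quoted imaging depths can be converted to flux densities with the standard AB-magnitude zero point (this conversion is standard astronomy bookkeeping, not specific to the paper):

```python
# AB magnitudes relate to flux density via m_AB = -2.5*log10(f_nu) - 48.6
# (f_nu in erg s^-1 cm^-2 Hz^-1), or equivalently, in microjanskys,
# f[uJy] = 10**((23.9 - m_AB) / 2.5).
def ab_mag_to_ujy(m_ab):
    return 10 ** ((23.9 - m_ab) / 2.5)

depth_stack_ujy = ab_mag_to_ujy(31.4)   # stacked-image depth: 0.001 uJy = 1 nJy
```

So the quoted stack depth of 31.4 AB mag corresponds to a point-source flux density of about one nanojansky.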
Brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
Cancer cell metabolism: special reference to the lactate pathway - AADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to obtain the energy they need to function.
Energy is stored in the bonds of glucose, and when glucose is broken down, much of that energy is released.
Cells utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules of a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to “burn” the pyruvate made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis, the Krebs cycle, then oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
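The ATP bookkeeping in the passage reduces to simple arithmetic:

```python
# ATP yields quoted in the passage (approximate textbook values).
atp_glycolysis_only = 2      # ATP per glucose from glycolysis alone
atp_full_respiration = 36    # ATP per glucose from full respiration
# Relative glucose a glycolysis-only (cancer-like) cell must consume
# to match the energy output of a fully respiring cell:
glucose_ratio = atp_full_respiration / atp_glycolysis_only
```

That 18-fold shortfall is why cancer cells must take up far more glucose to survive.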
Introduction to the Warburg phenomenon:
Warburg effect: cancer cells are usually highly glycolytic (glucose addiction) and take up more glucose from outside than normal cells do.
Otto Heinrich Warburg (8 October 1883 - 1 August 1970) was awarded the 1931 Nobel Prize in Physiology or Medicine for his "discovery of the nature and mode of action of the respiratory enzyme."
The tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg made the observation that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Seminar on U.V. Spectroscopy by SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light absorbed by the analyte.
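In practice, UV-Vis absorbance measurements are usually interpreted through the Beer-Lambert law; the numbers below are purely illustrative, not taken from the seminar:

```python
import math

# Beer-Lambert law: A = log10(I0/I) = epsilon * l * c.
I0, I = 100.0, 25.0              # incident / transmitted intensity (illustrative)
A = math.log10(I0 / I)           # absorbance (dimensionless)
epsilon = 1.5e4                  # molar absorptivity, L mol^-1 cm^-1 (assumed)
path_cm = 1.0                    # cuvette path length, cm
c = A / (epsilon * path_cm)      # analyte concentration, mol/L
```

Given the absorptivity of the analyte and the cuvette length, a single absorbance reading thus yields the concentration directly.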
PR 4.2
Internal multiple attenuation using inverse scattering: Results from prestack 1 & 2D acoustic and
elastic synthetics
R. T. Coates*, Schlumberger Cambridge Research, A. B. Weglein, Arco Exploration and Production Technology
Summary
The attenuation of internal multiples in a multidimensional
earth is an important and longstanding problem in exploration
seismics. In this paper we report the results of applying
an attenuation algorithm based on the inverse scattering
series to synthetic prestack data sets generated in one- and
two-dimensional earth models. The attenuation algorithm
requires no information about the subsurface structure
or the velocity field. However, detailed information about
the source wavelet is a prerequisite. An attractive feature of
the attenuation algorithm is the preservation of the amplitude
(and phase) of primary events in the data, thus allowing for
subsequent AVO and other true-amplitude processing.
Introduction
Seismic processing typically assumes that reflection data
consist of primaries, i.e., that a single upward reflection has
occurred between source and receiver. Signals which do not
conform to this model are usually regarded as noise to be
attenuated. Multiples have two or more upward, and one
or more downward, reflections between source and receiver
(figure 1), and thus are regarded as noise in seismic data. Multiples
may be divided into two groups: surface multiples,
where one or more of the downward reflections occur at the
free surface, and internal multiples, where all downward reflections
occur below the free surface. Here we concentrate
solely on internal multiples, assuming that all free-surface
multiples have already been removed from the data.
Multiple attenuation is a classic and only partially solved
problem in exploration seismics. Existing attenuation meth-
ods generally make assumptions about the earth, e.g. that
it is flat layered with a white reflection series, or about the
character of the primary and multiple signals, e.g. that they
have significantly different moveouts. In many cases these
assumptions are violated and the effectiveness of the attenu-
ation or the preservation of the primary signal are degraded.
In this paper we consider an attenuation method for internal
multiples based on inverse scattering theory. The derivation
does not assume that the earth is 1D, indeed the theory re-
quires no information about the subsurface structure or ve-
locity field. It works by predicting and subtracting internal
multiples directly from the data. However, the method does
require an accurate knowledge of the source wavelet.
Theory
The internal multiple attenuation algorithm tested here is
presented in detail in Weglein and Araujo (1994), Araujo
(1994) and Weglein et al. (1996); here we provide only the
briefest summary.

Figure 1: A schematic illustration of primaries (solid) and multiples (dash).

In the forward problem the scattered field is given by the Lippmann-Schwinger equation, viz

G = G_0 + G_0 V G,   (1)

where G_0 is the Green’s function in a homogeneous reference
medium, G is the Green’s function in the actual medium
and the perturbation V is the difference between the wave
operators in the actual and homogeneous medium. This
equation may be expanded in powers of the perturbation,

G − G_0 = G_0 V G_0 + G_0 V G_0 V G_0 + ⋯   (2)

Similarly, if we define the data, D, as the scattered field
recorded at the surface, we can write the perturbation V as a
series in the data, i.e. we write V as

V = V_1 + V_2 + V_3 + ⋯   (3)

where V_n is the portion of the perturbation that is nth order
in the data. Substituting (3) in (2) and equating orders of the
data we obtain

D = (G_0 V_1 G_0)_m,   (4)
0 = (G_0 V_2 G_0 + G_0 V_1 G_0 V_1 G_0)_m,   (5)
0 = (G_0 V_3 G_0 + G_0 V_1 G_0 V_2 G_0 + G_0 V_2 G_0 V_1 G_0 + G_0 V_1 G_0 V_1 G_0 V_1 G_0)_m,   (6)

where the subscript m denotes evaluation on the measurement surface.
Data, D, is input and the model perturbation, V, is output.
One of the tasks of inversion is the elimination of multiples.
Since V_1 is linear in D, and the latter consists of primaries
and multiples, the multiple removal must be carried out by
1522
Downloaded 30 Apr 2011 to 99.10.237.97. Redistribution subject to SEG license or copyright; see Terms of Use at http://segdl.org/
the higher terms. In fact, the first contribution to the multiple
removal series, for multiples of a given order, is determined
by the number of changes in direction of vertical propagation
of the multiples, e.g. first-order multiples have three
reversals of vertical propagation direction and the first contribution
comes from part of the third term in equation (6).
The multiple attenuation series we consider consists of the
leading order attenuation term for each order of multiple.
For first-order internal multiples, the portion of the third
term chosen is determined by restricting the limits of integration.
These restricted limits allow us to focus only on signals
which interact with V_1 on the first and third occasions at
points lower in the earth (or later in time) than on the second
occasion, thus satisfying our definition of a first-order internal
multiple event.
For a 2D earth the first attenuation term, M_1, may be written explicitly
in the source and receiver slowness (p_s and p_g) and temporal-frequency
domains, equation (7), as a triple integral over the data in which the
limits of integration are restricted so that the middle interaction lies
shallower in the earth (earlier in time) than the two outer interactions,
each separated from it by at least ε; the full expression is given in the
references above. Here ε is a small time interval which ensures that multiple
scattering of an event with itself is excluded from the multiple
attenuation series. For infinite-bandwidth data ε may be a
single time sample; for bandlimited data ε should be greater
than the wavelet duration. M_1, when added to the data, attenuates
all first-order internal multiples in a single step. Although
equation (7) may be extended in a straightforward
way to higher-order multiples (see references above), it has
been our experience on synthetic data that these higher-order
terms are rarely required, due to the rapid decay with order of
the amplitude of internal multiples.

In a 1D earth each slowness propagates independently, thus
the data in the slowness domain has a delta-function dependence
on p_1 − p_2, i.e. D(p_1, p_2, t) = δ(p_1 − p_2) D(p_1, t),
and equation (7) simplifies accordingly.
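The lower-higher-lower selection can be sketched in a simple spike domain: treating the 1D, single-slowness data as a list of (time, amplitude) events, every triplet whose middle event is earlier than the two outer events by at least ε predicts a multiple at t1 − t2 + t3 with amplitude a1·a2·a3. This is an illustrative toy rather than the paper's implementation, and the event times and amplitudes below are hypothetical.

```python
from itertools import product

# Spike-domain sketch of the leading-order internal-multiple predictor:
# events are (time, amplitude) pairs; every triplet with t1 > t2 < t3
# (each separated from t2 by at least eps) contributes a predicted
# multiple at t1 - t2 + t3 with amplitude a1 * a2 * a3.
def predict_first_order(events, eps):
    out = {}
    for (t1, a1), (t2, a2), (t3, a3) in product(events, repeat=3):
        if t1 - t2 >= eps and t3 - t2 >= eps:
            t = t1 - t2 + t3
            out[t] = out.get(t, 0.0) + a1 * a2 * a3
    return out

# Two hypothetical primaries: layer top (0.20 s, a=0.5), base (0.45 s, a=0.3).
primaries = [(0.20, 0.5), (0.45, 0.3)]
pred = predict_first_order(primaries, eps=0.05)
# Single valid triplet (base, top, base): time 0.45 - 0.20 + 0.45 = 0.70 s,
# amplitude 0.3 * 0.5 * 0.3 = 0.045.
```

Because the true first-order multiple arrives with the opposite polarity (and slightly larger amplitude, owing to transmission losses the predictor ignores), adding the prediction to the data attenuates rather than eliminates the multiple.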
Example: 1D Prestack Acoustic Synthetics
To demonstrate the method in a simple 1D acoustic model,
finite-difference synthetics were calculated for a model consisting
of a 250 m thick layer (V_p = 2000 m/s, ρ = 2.25
g/cc) separating two half-spaces (V_p = 1500 m/s, ρ = 1
g/cc). The source and receivers were located 125 m above
the top of the layer with offsets from 0 m. There
was no free surface. The synthetics are shown in figure 2.
The two primaries and the first-order multiple are clearly visible
in the data, with the second-order multiple less so.
Internal multiple attenuation
Figure 2: The 1D prestack acoustic synthetics.
Figure 3 shows a detail of the synthetics after the calculation
and addition of the first-order multiple attenuation term.
The first-order multiple at 0.7 s has been significantly attenuated.
Although not shown in this time window, the primaries
remain untouched. Note that the second-order multiple at
0.95 s also experiences a reduction in amplitude; an additional
degree of attenuation will be achieved by calculating
the second-order attenuation term.
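For reference, the zero-offset reflection coefficients and arrival times implied by this model follow from simple impedance and traveltime bookkeeping (approximate, since the synthetics are prestack and band-limited):

```python
# Normal-incidence bookkeeping for the 1D model (impedance = rho * v).
v1, rho1 = 1500.0, 1.00          # half-spaces
v2, rho2 = 2000.0, 2.25          # 250 m thick layer
depth_src_to_top, thickness = 125.0, 250.0

def refl(z_above, z_below):
    # Normal-incidence reflection coefficient between two impedances.
    return (z_below - z_above) / (z_below + z_above)

r_top = refl(rho1 * v1, rho2 * v2)       # 0.5
r_base = refl(rho2 * v2, rho1 * v1)      # -0.5
t_top = 2 * depth_src_to_top / v1        # two-way time to layer top
t_base = t_top + 2 * thickness / v2      # two-way time to layer base
t_mult1 = t_base + 2 * thickness / v2    # first-order internal multiple
```

The first-order multiple falls one additional two-way layer time (0.25 s) after the base primary, consistent with the ~0.25 s spacing between the multiples seen in figures 2 and 3.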
Example: 2D Acoustic Synthetics
To demonstrate the action of the 2D algorithm, equation (7),
we generate data directly in the plane-wave domain. This is
done for a simple wedge model (V = 4000 m/s, ρ = 1 g/cc)
separating two half-spaces (V = 1500 m/s, ρ = 1 g/cc) by
illuminating it with a single plane wavefront with a variety of
different slownesses, see figure 4. Again the synthetics were
generated using finite differences. A single incident plane
wave generates two primaries with distinct slownesses and
a series of multiples (only one is shown), also with distinct
slownesses, figure 5.
If the 1D single-slowness algorithm were applied to each
slowness component independently, the result would be
a zero multiple-attenuation term, since each slowness trace
exhibits only a single event. Figure 6 shows the result of ap-
plying the 1D single-slowness algorithm before transforma-
tion of the reflected wavefield into the slowness domain at
the receiver, and the result of applying the full 2D algorithm,
equation (7), to the data shown in figure 5.
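The transformation into the slowness domain referred to here is a plane-wave (slant-stack, or tau-p) decomposition. A minimal sketch of a discrete linear slant stack, our own illustration under simplifying assumptions (uniform receiver spacing, nearest-sample interpolation):

```python
import numpy as np

def slant_stack(d, x, dt, p_values):
    """Linear slant stack (tau-p transform) of a gather.
    d[ix, it] is the trace at offset x[ix], time it*dt. Returns
    u[ip, itau] = sum over x of d(x, tau + p*x), using nearest-sample
    shifts; samples shifted past the end of a trace are dropped."""
    nx, nt = d.shape
    u = np.zeros((len(p_values), nt))
    for ip, p in enumerate(p_values):
        for ix in range(nx):
            shift = int(round(p * x[ix] / dt))   # moveout in samples
            if 0 <= shift < nt:
                u[ip, :nt - shift] += d[ix, shift:]
    return u

# A single plane-wave event with slowness p0 stacks coherently
# only at p = p0, collapsing to one slowness trace.
dt, nx, nt = 0.004, 8, 200
x = np.arange(nx) * 25.0                 # offsets in metres
p0 = 2.0e-4                              # slowness in s/m
d = np.zeros((nx, nt))
for ix in range(nx):
    d[ix, 50 + int(round(p0 * x[ix] / dt))] = 1.0   # t = tau0 + p0*x
u = slant_stack(d, x, dt, [0.0, p0])
print(np.argmax(u[1]))   # coherent energy at the intercept time sample
```

In the slowness domain each primary and multiple of figure 5 appears as an isolated event on its own slowness trace, which is exactly why the 1D single-slowness algorithm, seeing one event per trace, can predict nothing.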
The result of applying the single-slowness algorithm shows
Downloaded 30 Apr 2011 to 99.10.237.97. Redistribution subject to SEG license or copyright; see Terms of Use at http://segdl.org/
Figure 3: A detail of the 1D prestack acoustic synthetics
before (solid line) and after (filled trace) first-order
multiple attenuation.
Figure 4: The 2D acoustic model, showing the incident
and reflected plane waves schematically.
the multiple has been amplified for both incident slownesses.
The single-slowness multiple attenuation term has incor-
rectly predicted the phase of the multiple, and hence, when
added to the data, it has amplified rather than attenuated it. In
contrast, the 2D algorithm, equation (7), has correctly predicted
the phase of the multiple and, when added to the data,
has significantly attenuated the multiple signal.
Example: 1D Elastic Synthetics
Finally we show the results of applying the inverse scattering
multiple attenuation algorithm to synthetics from an elastic
model. The model is shown in figure 7 and consists of an
acoustic half-space overlying an elastic layer above an elas-
tic half-space. Again the model is illuminated by a plane
wave. The data, figure 8 now consists of four primaries, an
event from the top interface and three events from the bottom
interface with different modes of propagation (PP, PS and SP
together and SS), as well as a variety of multiple events with
a mixture of P- and S-wave legs. The central panel show the
first order multiple attenuation term and the right hand panel
a comparison of the data before and after multiple attenua-
tion.
The multiple events consisting of only P-wave legs are sig-
nificantly attenuated; this is not surprising, since the form
of the inverse scattering algorithm we are using assumes
acoustic reference-medium wave propagation and a P-wave
definition of the multiples. More surprising is the fact that
the events with one or more S-wave legs are also attenuated, if only
Figure 5: The reflected plane wave primaries and first
order multiples for two distinct incident plane waves.
slightly. Extending the inverse scattering multiple attenua-
tion to an elastic reference medium, which we might expect
to attenuate S-wave events more effectively, is the subject of
further research.
Discussion
We have presented the results of testing an inverse scattering
series internal multiple attenuation algorithm on 1D and 2D
acoustic and 1D elastic media. The method attenuates all
multiples of a given order in a single step and does so without
affecting primary signals.
The method requires no information about the subsurface
structure or velocity field. It predicts multiples directly from
the data. A prerequisite of the method is a detailed knowledge
of the wavelet.

Figure 6: (Left and center panels) Applying the 1D
algorithm to each slowness amplifies the multiple.
(Right panel) The 2D algorithm attenuates the multiple
(solid line) compared to the input data (dashed line).

The results from the acoustic models are
very encouraging, showing significant attenuation in both 1D
and 2D. The algorithm is equally effective for elastic models
in attenuating multiple events with an entirely P-wave history.
Events with S-wave legs are not as well attenuated; this mo-
tivates our elastic reference-medium internal multiple atten-
uation research.
References

Araújo, F. V., 1994. Linear and non-linear methods derived
from scattering theory: backscattering tomography and inter-
nal multiple attenuation: Ph.D. thesis, Universidade Federal
da Bahia, Brazil (in Portuguese).

Weglein, A. B., and Araújo, F. V., 1994. Processing reflection
data, Patent Application No. GB94/O2246.

Weglein, A. B., Gasparotto, F. V., Carvalho, P. M., and Stolt,
R. H., 1996. An inverse scattering series method for attenu-
ating multiples in seismic reflection data, submitted to Geo-
physics.
Figure 7: The 1D elastic model illuminated by a single
plane wave.
Figure 8: Internal multiples in an elastic model illu-
minated by a single plane wave. Data and first-order
multiples (left panel), first-order multiple attenuation
term (center) and a comparison of the data before and
after attenuation (right), where the latter has been shifted
slightly for visibility. Multiple events consisting of
only P-wave legs have been significantly attenuated.
Events with one or more S-wave legs are less well at-
tenuated.