This document provides an overview of multiple attenuation methods in seismic processing. It discusses two broad categories of methods: 1) filtering methods that exploit differences between primaries and multiples, and 2) prediction and subtraction methods that model wave propagation to predict and remove multiples. Specific filtering and prediction/subtraction techniques are examined, along with their assumptions, effectiveness, and limitations. The document also explores challenges in attenuating internal multiples and developments that could improve current methods.
Understanding climate model evaluation and validation - Puneet Sharma
The document discusses model evaluation and validation. It introduces key concepts like evaluation, validation, and the apple-orange problem when directly comparing models and observations. It describes using a satellite simulator like COSP to facilitate apple-to-apple comparisons by simulating what satellites would observe from the model. The document also notes issues with observations like errors and uncertainties that must be considered during evaluation.
The European Union has agreed on an oil embargo against Russia in response to the invasion of Ukraine. The embargo will ban seaborne imports of Russian oil into the EU and will end pipeline deliveries within six months. The measure is part of a sixth EU sanctions package intended to increase economic pressure on Moscow and deprive the Kremlin of funds to finance its war.
This document provides definitions of several operating systems, including Windows XP, Windows Vista, Windows 7, Windows 8, and Ubuntu. It explains the key features and improvements of each Windows version over time. It also describes the main characteristics of Ubuntu as a free, community-based operating system.
1. The president of the Municipal Health Council of Carangola convenes the council members for an ordinary meeting on May 21, 2013, at 6:30 p.m.
2. The meeting agenda includes items such as the reading of the previous minutes, announcements from the Municipal Health Department, and discussion of the Council's demands for the Municipal Health Administration.
3. Council members are asked to submit their agenda items, in topic form, 15 minutes before the meeting.
This document discusses the codes and conventions of teen drama genres. It outlines typical narratives around issues teenagers may face like addiction, mental health, sexuality, rape, and teen pregnancy. It also describes common stock characters, settings in schools, and costumes that help establish social groups. Controversial topics are often addressed to seem relatable to audiences.
The document describes the characteristics of Cub Scouts in Scouting, including being active, curious, observers of nature, willing to learn from good examples, defenders of what they consider fair, and accepting of commitments.
The traditional business model focuses on producing and selling the most popular products, which requires large warehousing and distribution. In contrast, digital business models such as B2B, B2C, and B2G use the Internet to sell a variety of products and services at lower cost and more profitably, saving time for companies and customers. The digital economy has rapidly changed how people and businesses make purchases, driven by technological advances.
The Escola Básica de Paramos has begun harvesting produce from its organic garden, including peaches, oranges, chives, sage, onions, lemons, carrots, Galician kale, beans, and strawberries, demonstrating its commitment to healthy eating for its students.
Do you want to grow your career horizontally?
Do you want to grow your business?
Do you want to know how to effectively use internet marketing and do this at an affordable price in Nairobi, Kenya?
Get in touch with us
If you want to build winning ideas, answer these simple questions and develop your idea with an ethical and social approach; your business model will then be unique, workable, and successful in a market where good ideas are increasingly rare.
The document presents a national reading program with 11+5 actions to improve reading and writing skills. It includes a form for recording books borrowed from the school library, with information on the title, author, student, and check-out and return dates. The goal is to promote the habit of reading among students.
This document summarizes three propaganda videos. The first uses emotion (pathos) to persuade parents to help their son, who is being bullied at school. The second analyzes how advertising creates unrealistic gender stereotypes about women, using solid evidence (logos). The third seeks to build empathy toward people with disabilities by appealing to emotion (pathos) and earning the audience's trust (ethos) through the actions of a sculptor.
Recent Advances In Multiple Attenuation - Arthur Weglein
Multiple removal is a longstanding problem in exploration seismology. Arthur Weglein, Director of the Mission-Oriented Seismic Research Program, presents the recent advances and the road ahead in dealing with multiple removal in seismology.
This thesis investigates methods for reducing a wave climate dataset into a smaller set of representative wave conditions, while minimizing the prediction error when using the reduced dataset as input for process-based morphodynamic simulations of a coastal beach. Eight different wave climate reduction algorithms are evaluated and compared using bulk sediment transport formulas and the process-based model Delft3D. The research aims to reduce the full wave climate dataset for the energetic coast of Durban, South Africa to 10 representative wave conditions. The key conclusions are that reducing the wave climate introduces significant prediction error, that reproducing alongshore sediment transport is most successful while cross-shore transport and coastal morphology are more challenging to capture accurately, and that doubling the number of representative conditions to 20 does not substantially reduce the prediction error.
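As a rough illustration of what such a reduction does, the sketch below bins synthetic (Hs, Tp) sea states into 10 representative conditions with plain k-means, each weighted by its frequency of occurrence. The synthetic wave record, the normalisation, and the weighting are assumptions of this sketch, not any of the thesis's eight algorithms.

```python
import numpy as np

def reduce_wave_climate(hs, tp, n_rep=10, n_iter=50, seed=0):
    """Reduce (Hs, Tp) records to n_rep representative conditions with
    plain k-means; each representative gets a weight equal to the
    fraction of records falling in its cluster."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([hs, tp])
    # normalise so Hs and Tp contribute comparably to the distance
    Xn = (X - X.mean(axis=0)) / X.std(axis=0)
    centers = Xn[rng.choice(len(Xn), n_rep, replace=False)]
    for _ in range(n_iter):
        d = ((Xn[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for k in range(n_rep):
            if np.any(labels == k):
                centers[k] = Xn[labels == k].mean(axis=0)
    # de-normalise cluster centres back to physical units
    rep = centers * X.std(axis=0) + X.mean(axis=0)
    weights = np.bincount(labels, minlength=n_rep) / len(Xn)
    return rep, weights

# synthetic wave record: 1000 sea states (heights in m, periods in s)
rng = np.random.default_rng(1)
hs = rng.gamma(2.0, 1.0, 1000)
tp = 5.0 + 2.0 * np.sqrt(hs) + rng.normal(0, 0.5, 1000)
rep, w = reduce_wave_climate(hs, tp)
print(rep.shape, w.sum())
```

The weighted representatives would then drive the morphodynamic runs in place of the full record.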
The definition and extraction of actionable anomalous discords, i.e., pattern outliers, is a challenging problem in data analysis. It raises the crucial issue of identifying criteria that would render one discord more insightful than another. In this paper, we propose an approach to address this by introducing the concept of the prominent discord. The core idea behind this new concept is to identify dependencies among discords of varying lengths. How can we identify a discord that is prominent? We propose an ordering relation that ranks discords, and we seek a set of prominent discords with respect to this ordering. Our contributions are threefold: 1) a formal definition, an ordering relation, and methods to derive prominent discords based on Matrix Profile techniques; 2) their evaluation over large contextual climate data covering 110 years of monthly records; and 3) a comparison of an exact method based on STOMP and an approximate approach based on SCRIMP++ for computing the prominent discords, studying the optimality/CPU-time tradeoff. The approach is generic, and its pertinence is shown over historical climate data.
APPLICATION OF MATRIX PROFILE TECHNIQUES TO DETECT INSIGHTFUL DISCORDS IN CLI... - ijscai
This document summarizes a research paper that proposes a new approach to detect prominent discords, or anomalous patterns, in large climate data time series. The approach introduces the concept of a prominent discord, which is the most significant discord found across different window sizes that all start at the same position. It presents methods to compute prominent discords exactly using STOMP or approximately using SCRIMP++. The approach is applied to over 100 years of monthly climate impact runoff data to detect insightful discords. It compares the exact and approximate methods and explores the tradeoff between accuracy and computational efficiency.
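The discord machinery underneath can be sketched with a brute-force matrix profile, i.e., the quadratic-time computation that STOMP and SCRIMP++ accelerate. The synthetic series and window length below are illustrative assumptions, not the paper's climate data.

```python
import numpy as np

def matrix_profile(ts, m):
    """Brute-force matrix profile: for every length-m subsequence, the
    z-normalised Euclidean distance to its nearest non-trivial match.
    O(n^2 m); STOMP/SCRIMP++ compute the same quantity much faster."""
    n = len(ts) - m + 1
    subs = np.lib.stride_tricks.sliding_window_view(ts, m)
    z = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)
    prof = np.full(n, np.inf)
    for i in range(n):
        d = np.sqrt(((z[i] - z) ** 2).sum(axis=1))
        # exclusion zone: ignore trivial matches that overlap subsequence i
        d[max(0, i - m // 2): i + m // 2 + 1] = np.inf
        prof[i] = d.min()
    return prof

# periodic series with one injected anomaly around index 300
rng = np.random.default_rng(0)
ts = np.sin(np.linspace(0, 60, 600)) + 0.05 * rng.normal(size=600)
ts[300:320] += 2.0
prof = matrix_profile(ts, m=20)
discord = int(prof.argmax())   # start index of the top discord
print(discord)
```

The top discord is the subsequence farthest from its nearest neighbour; ranking discords across several window lengths `m` is where the paper's prominence ordering comes in.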
M-OSR Director Arthur Weglein - Annual report & introduction, 2014
M-OSRP's research objective is to identify and address outstanding seismic processing challenges. Their goal is to develop new processing methods that provide additional options when current methods are insufficient or fail. Their focus is on making technically challenging exploration and production areas more feasible. Seismic processing challenges often arise when methods' assumptions are violated, such as inadequate data or subsurface information. M-OSRP decides whether to develop new methods/approaches to satisfy assumptions, or new methods without those assumptions. Their current focus is on developing fundamentally new concepts to address the major challenge of removing multiples that interfere with primaries without damaging the primaries.
Dear Mercatorian,

Be warned: half of the world's population lives less than 100 km from the sea! From this reality, important work has developed at Mercator in service to the "Coastal people". What is the goal? Answering their questions about tourism, shoreline development, access to port infrastructure, the industrial sector, pollution prevention, aquaculture, and more.

Modelling and forecasting of the coastal ocean must always take into account the specific local environment. However, before we can model these specific cases, we need the most recent information on the offshore ocean and the way it affects coastal areas. Any operational modelling of local conditions requires up-to-date knowledge of the impact of offshore conditions. This is provided by Mercator.

For Mercator, meeting the needs of the "Coastal people" means providing an accurate description of the state of the offshore ocean a few kilometres from the coastline. This description defines the boundary conditions for their local coastal models. Embedding, grid refinement, downscaling, boundary conditions, initial conditions, one-way, two-way, ... A newsletter entirely dedicated to the challenging transition between global and regional models.

Enjoy your reading, and see you in the next issue with the new multivariate high-resolution Mercator prototype!
This document describes a methodology for identifying critical time periods from hydrological observation data that contain important information for calibrating hydrological models. The methodology uses a statistical concept called data depth to identify unusual events in discharge or precipitation time series that lie near the boundary of the multivariate data set. These unusual events, which include extremes, long dry or wet periods, and periods of strong dynamics, are considered critical periods for model calibration. The methodology is tested on discharge and precipitation data from a catchment in Germany using two hydrological models. The results show that calibration using only the critical periods identified is only slightly worse than calibration using all the data, and the model parameters have similar transferability to different time periods.
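As a toy illustration of the depth idea, the sketch below scores sliding windows of a synthetic discharge series with Mahalanobis depth (a simple stand-in for the depth function the methodology actually uses) and flags the least-deep windows, those nearest the boundary of the multivariate cloud, as candidate critical periods. The window summary features and synthetic series are assumptions of this sketch.

```python
import numpy as np

def mahalanobis_depth(X):
    """Depth of each row of X: 1 / (1 + squared Mahalanobis distance).
    Points near the boundary of the data cloud get depths close to 0."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return 1.0 / (1.0 + d2)

# toy discharge series: 30-day windows summarised by (mean, range)
rng = np.random.default_rng(0)
q = rng.lognormal(1.0, 0.4, 2000)
q[1000:1030] *= 8.0                          # an extreme flood period
win = np.lib.stride_tricks.sliding_window_view(q, 30)
feats = np.column_stack([win.mean(axis=1),
                         win.max(axis=1) - win.min(axis=1)])
depth = mahalanobis_depth(feats)
critical = depth.argsort()[:20]              # 20 least-deep windows
print(int(critical.min()), int(critical.max()))
```

Calibrating only on the flagged windows is the cheap-data strategy the methodology then evaluates against full-record calibration.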
Distributed hydrological models aim to quantitatively predict hydrological responses, but their application is more an exercise in prophecy than prediction due to difficulties defining model structures and calibrating models. While quantitative prophecy allows for gradual improvement, uncertainties are inherent that make validation impossible. The document argues that current physically-based distributed models can be invalidated when moving from small to large scales due to issues of heterogeneity, measurement scale, and averaging parameters and variables. This suggests model predictions should be viewed as prophecies relying on accepted principles rather than validated predictions, as calibration cannot fully remedy differences in measurement and model scales.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Inverse scattering series for multiple attenuation: An example with surface a... - Arthur Weglein
A multiple attenuation method derived from an inverse scattering series is described. The inversion series approach allows a separation of multiple attenuation subseries from the full series. The surface multiple attenuation subseries was described and illustrated in Carvalho et al. (1991, 1992). The internal multiple attenuation method consists of selecting the parts of the odd terms that are associated with removing only multiply reflected energy. The method, for both types of multiples, is multidimensional and does not rely on periodicity or differential moveout, nor does it require a model of the reflectors generating the multiples. An example with internal and surface multiples will be presented.
1. Scientific models are representations of phenomena that make them easier to understand through diagrams, physical models, or complex mathematics. The main types are visual, mathematical, and computer models.
2. Ocean circulation models represent ocean circulation, climate change, and pollutant distribution through factors like temperature, salinity, winds, and ocean features. There are mechanistic models for simplified processes and simulation models for realistic regional circulation.
3. Global climate models (GCMs) simulate climate system components but have coarse resolution. Regional climate models (RCMs) increase GCM resolution for a small area, providing more local information down to 50km. Parameterization replaces sub-grid scale processes in models.
Linear inversion of absorptive/dispersive wave field measurements: theory and... - Arthur Weglein
The use of inverse scattering theory for the inversion of viscoacoustic wave field measurements, namely for a set of parameters that includes Q, is by its nature very different from most current approaches to Q estimation. In particular, it involves an analysis of the angle- and frequency-dependence of the amplitudes of viscoacoustic data events, rather than the measurement of temporal changes in the spectral nature of events. We consider the linear inversion for these parameters theoretically and with synthetic tests. The output is expected to be useful in two ways: (1) on its own, it provides an approximate distribution of Q with depth, and (2) higher-order terms in the inverse scattering series, as it would be developed for the viscoacoustic case, would take the linear inverse as input.

We begin, following Innanen (2003), by casting and manipulating the linear inversion problem to deal with absorption for a problem with arbitrary variation of wavespeed and Q in depth, given a single shot record as input. Having done this, we numerically and analytically develop a simplified instance of the 1D problem. This simplified case is instructive in a number of ways, first of all in demonstrating that this type of direct inversion technique relies on reflectivity and has no interest in, or ability to analyse, propagation effects as a means to estimate Q. Secondly, through a set of examples of slightly increasing complexity, we demonstrate how and where the linear approximation causes more than the usual levels of error. We show how these errors may be mitigated through the use of specific frequencies in the input data or, alternatively, through a layer-stripping-based, or bootstrap, correction. In either case the linear results are encouraging, and suggest the viscoacoustic inverse Born approximation may have value as a standalone inversion procedure.
This document summarizes a study on improving operational ocean wave ensemble forecasts. It reviews current state-of-the-art wave ensemble prediction centers and their validation against buoy data. Statistical methods like clustering and Empirical Orthogonal Functions are investigated to allow running higher resolution ensemble forecasts at lower computational cost. Very high resolution ensemble forecasts are validated but found to have limited reliability due to uncertainties in wind fields and model parameters. Further research is needed on improving bathymetry and wind forcing data to enhance forecast accuracy.
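The EOF reduction mentioned above amounts to a principal-component decomposition of space-time fields; a generic SVD-based sketch on a synthetic field (not the study's ensemble data) looks like this:

```python
import numpy as np

def eofs(field, n_modes=3):
    """Leading EOFs of an (n_times, n_points) anomaly field via SVD,
    with the fraction of variance each mode explains."""
    anom = field - field.mean(axis=0)
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s ** 2 / (s ** 2).sum()
    # spatial patterns (rows of Vt) and their principal-component series
    return Vt[:n_modes], U[:, :n_modes] * s[:n_modes], var_frac[:n_modes]

# synthetic field: one dominant oscillating spatial pattern plus noise
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 200)[:, None]
grid = np.linspace(0, 1, 50)[None, :]
field = np.sin(t) * grid + 0.1 * rng.normal(size=(200, 50))
patterns, pcs, var = eofs(field)
print(patterns.shape, pcs.shape)
```

Truncating to a few leading modes is what allows an ensemble to be run at higher resolution for a fraction of the cost.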
This document describes two "poor man's" methods for imputing missing values in large datasets with tight deadlines: 1) Univariate cumulative empirical distribution imputes values based on the distribution of each variable individually. 2) Multivariate cumulative empirical distribution selects the top 5 correlated variables and imputes based on patterns across those variables to capture some multivariate structure, though it has limitations. Both aim to be fast methods that can be easily implemented at scale for database marketing tasks with short deadlines.
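The univariate variant can be sketched in a few lines: draw uniform numbers and push them through the empirical quantile function of the observed values. The data and helper below are illustrative, not the document's implementation.

```python
import numpy as np

def impute_empirical(col, rng):
    """Fill NaNs in a 1-D array by inverse-CDF sampling from the
    empirical distribution of the observed values."""
    out = col.copy()
    obs = np.sort(col[~np.isnan(col)])
    miss = np.isnan(col)
    # map uniform draws through the empirical quantile function
    out[miss] = np.quantile(obs, rng.random(miss.sum()))
    return out

rng = np.random.default_rng(0)
x = rng.normal(50, 10, 10_000)
x[rng.choice(10_000, 1_000, replace=False)] = np.nan   # knock out 10%
filled = impute_empirical(x, rng)
print(np.isnan(filled).sum())
```

Imputing this way preserves each variable's marginal distribution, which is exactly the property the "poor man's" method trades multivariate structure for.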
This document discusses methods for clustering time series data in a way that allows the cluster structure to change over time. It begins by introducing the problem and defining relevant terms. It then provides spectral clustering as a preliminary benchmark approach before exploring an alternative method using triangular potentials within a graphical model framework. The document presents the proposed method and provides illustrative examples and discussion of extensions.
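The spectral clustering benchmark can be sketched for the two-cluster case, where the sign of the Fiedler vector of the normalised Laplacian gives the partition. The Gaussian affinity, bandwidth, and toy series below are assumptions of this sketch.

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Two-way spectral clustering: Gaussian affinity, symmetric
    normalised Laplacian, and the sign of the Fiedler vector as the cut."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    # L = I - D^{-1/2} W D^{-1/2}
    Dm = 1.0 / np.sqrt(d)
    L = np.eye(len(X)) - Dm[:, None] * W * Dm[None, :]
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]   # eigenvector of the 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)

# two well-separated groups of short time series
rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.3, (20, 8))
B = rng.normal(1.5, 0.3, (20, 8))
labels = spectral_bipartition(np.vstack([A, B]))
print(labels)   # each group should receive a single label
```

The graphical-model approach with triangular potentials then relaxes exactly what this benchmark fixes: a cluster structure frozen over the whole series.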
This document describes a dissertation on model-data integration for predictive assessment of groundwater reactive transport systems. Specifically, it develops reactive transport models to predict groundwater contamination at a field site where a permeable reactive barrier (PRB) using zero-valent iron is installed. Key challenges addressed include quantifying and reducing conceptual model uncertainty, integrating different types of field data using multivariate methods, and characterizing spatial heterogeneity to estimate model parameters from field measurements.
This document summarizes a proposed new conceptual model called the Fracturing Impacted Volume (FIV) model as an alternative to the commonly used Discrete Fracture Network (DFN) model for modeling unconventional reservoirs. The FIV model accounts for the pressure-dependent permeability of the reservoir system and fluid loss during fracturing. It views fracturing as pressurizing the reservoir system within a spatial volume around the fracture, including microfractures and pores, rather than solely focusing on discrete fracture propagation. Laboratory tests on tight sandstone and shale cores show that reservoir permeability increases with pressure, supporting this conceptualization. The FIV model aims to provide a more effective way to understand fracturing mechanisms and improve the simulation of fracturing.
This document examines observations of bi-modal cloud droplet distributions from the Cloud Physics Experiment (COPE). Bi-modal distributions, with distinct small and large droplet modes, were observed in around 15% of cumulus clouds using a cloud droplet probe on the University of Wyoming King Air. The goal is to determine the microphysical processes and conditions that lead to the development of bi-modal distributions. Past studies suggest entrainment of dry air followed by secondary activation of cloud condensation nuclei introduced from outside the cloud is important. The degree of bi-modality is quantified by the ratio of droplets in the large versus small modes.
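The ratio used to quantify bi-modality can be illustrated by splitting an idealised droplet spectrum at the valley between its two modes; the Gaussian modes and bin spacing below are invented for illustration and are not COPE data.

```python
import numpy as np

def bimodality_ratio(counts):
    """Split a droplet size distribution at the interior minimum between
    its two largest modes and return N_large / N_small."""
    counts = np.asarray(counts, float)
    peaks = [i for i in range(1, len(counts) - 1)
             if counts[i] >= counts[i - 1] and counts[i] >= counts[i + 1]]
    # valley = lowest bin between the two largest local maxima
    p1, p2 = sorted(sorted(peaks, key=lambda i: counts[i])[-2:])
    valley = p1 + int(counts[p1:p2 + 1].argmin())
    return counts[valley:].sum() / counts[:valley].sum()

# idealised bi-modal spectrum: small mode near 8 um, large mode near 22 um
diam = np.arange(2, 40, 2.0)
counts = (120 * np.exp(-0.5 * ((diam - 8) / 3) ** 2)
          + 40 * np.exp(-0.5 * ((diam - 22) / 4) ** 2))
r = bimodality_ratio(counts)
print(round(r, 2))
```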
This document discusses applying a novel approach using multi-criterion decision analysis (MCDA) with the generalized likelihood uncertainty estimation (GLUE) method to quantify uncertainty in hydrological modeling. Specifically, it examines uncertainty in the SLURP hydrological model. Rather than considering overall Nash-Sutcliffe efficiency, the approach considers NSE values for different flow magnitudes simultaneously. The TOPSIS MCDA method is used to compute predictive intervals by considering NSE values for different flow periods simultaneously. The Kootenay Catchment case study is used to demonstrate the MCDA-GLUE approach.
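A generic TOPSIS closeness computation, with equal weights and made-up NSE values rather than the paper's configuration, can be sketched as:

```python
import numpy as np

def topsis(scores, weights=None):
    """Rank alternatives (rows) on benefit criteria (columns) by
    closeness to the ideal solution, as in classical TOPSIS."""
    S = np.asarray(scores, float)
    if weights is None:
        weights = np.ones(S.shape[1]) / S.shape[1]
    # vector-normalise each criterion, then apply the weights
    V = weights * S / np.sqrt((S ** 2).sum(axis=0))
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)     # closeness coefficient in [0, 1]

# hypothetical NSE of three parameter sets on high/medium/low flow periods
nse = [[0.80, 0.60, 0.30],
       [0.70, 0.70, 0.55],
       [0.40, 0.50, 0.65]]
cc = topsis(nse)
print(cc.argmax())   # -> 1: the set balanced across all flow periods
```

Ranking parameter sets by this closeness, rather than by a single overall NSE, is what lets the MCDA-GLUE approach weigh high-, medium-, and low-flow performance simultaneously.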
The document discusses (1) a new business model and IBM proposal for cost-effective seismic processing through cloud computing, (2) an invitation to participate in a pilot project using IBM cloud computing for multiple attenuation, (3) speedups of internal multiple attenuation achieved by ConocoPhillips, (4) upcoming deliverables from M-OSRP that address industry challenges and will benefit from IBM's cloud computing, and (5) an invitation to an upcoming technical review meeting.
Wavelet estimation for a multidimensional acoustic or elastic earth - Arthur Weglein
A new and general wave-theoretical wavelet estimation method is derived. Knowing the seismic wavelet is important both for processing seismic data and for modeling the seismic response. To obtain the wavelet, both statistical (e.g., Wiener-Levinson) and deterministic (matching surface seismic to well-log data) methods are generally used. In the marine case, a far-field signature is often obtained with a deep-towed hydrophone. The statistical methods do not allow obtaining the phase of the wavelet, whereas the deterministic method obviously requires data from a well. The deep-towed hydrophone requires that the water be deep enough for the hydrophone to be in the far field and, in addition, that the reflections from the water bottom and structure do not corrupt the measured wavelet. None of the methods address the source array pattern, which is important for amplitude-versus-offset (AVO) studies.
The inverse scattering series for tasks associated with primaries: direct non-linear inversion of 1D elastic media - Arthur Weglein
In this paper, research on direct inversion for two-parameter acoustic media (Zhang and Weglein, 2005) is extended to the three-parameter elastic case. We present the first set of direct non-linear inversion equations for 1D elastic media (i.e., depth-varying P-velocity, shear velocity, and density). The terms for moving mislocated reflectors are shown to be separable from amplitude correction terms. Although in principle this direct inversion approach requires all four components of elastic data, synthetic tests indicate that consistent value-added results may be achieved given only D_PP measurements. We can reasonably infer that further value would derive from actually measuring D_PP, D_PS, D_SP, and D_SS as the method requires. The method is direct, with neither model matching nor cost-function minimization.
Internal multiple attenuation using inverse scattering: Results from prestack 1 & 2D acoustic and elastic synthetics
R. T. Coates*, Schlumberger Cambridge Research, A. B. Weglein, Arco Exploration and Production Technology
Summary
The attenuation of internal multiples in a multidimensional earth is an important and longstanding problem in exploration seismics. In this paper we report the results of applying an attenuation algorithm based on the inverse scattering series to synthetic prestack data sets generated in one- and two-dimensional earth models. The attenuation algorithm requires no information about the subsurface structure or the velocity field. However, detailed information about the source wavelet is a prerequisite. An attractive feature of the attenuation algorithm is the preservation of the amplitude (and phase) of primary events in the data, thus allowing for subsequent AVO and other true-amplitude processing.
The document discusses the inverse scattering series (ISS) approach for eliminating internal multiples in land seismic data. ISS methods do not require information about the subsurface and can predict the traveltimes and amplitudes of all internal multiples. The authors apply 1D and 1.5D ISS algorithms to synthetic and field data from Saudi Arabia and achieve encouraging results, successfully attenuating most internal multiples while preserving primaries. However, further work is needed to more accurately predict multiples to handle interference between primaries and multiples.
This article provides an alternative perspective on full-waveform inversion (FWI) methods, arguing that current FWI approaches are indirect model-matching methods rather than direct inversion. The article aims to 1) caution against claims that FWI is a final solution, 2) propose a direct inverse approach, and 3) provide an overview comparing indirect FWI and direct inversion methods. Indirect FWI methods incorrectly use modeling equations run backwards rather than true direct inversion, limiting the understanding and effectiveness of the solutions.
The Inverse Scattering Series (ISS) is a direct inversion method for a multidimensional acoustic, elastic, and anelastic earth. It demonstrates that all inversion processing goals can be achieved directly and without any subsurface information. Each task is accomplished through a task-specific subseries of the ISS. Using primaries in the data as subevents of the first-order internal multiples, the leading-order attenuator can predict the arrival times of all first-order internal multiples and attenuate them.
However, the ISS internal multiple attenuation algorithm can be computationally demanding, especially in a complex earth. The cost of the algorithm can be controlled using an approach, proposed in Terenghi et al. (2012), that is based on two angular quantities. The idea is to use the two angles as key control parameters: by limiting their variation, some calculated contributions of the algorithm that are negligible can be disregarded. Moreover, the range of integration can be chosen as a compromise between the required degree of accuracy and the computational time saving. This time-saving approach is presented here.
In this paper we present a multidimensional method for attenuating internal multiples that derives from an inverse scattering series. The method doesn't depend on periodicity or differential moveout, nor does it require a model for the multiple-generating reflectors.
Summary
Methods for removal of free-surface and internal multiples have been developed from both a feedback model approach and inverse scattering theory. While these two formulations derive from different mathematical viewpoints, the resulting algorithms for free-surface multiples are very similar. By contrast, the feedback and inverse scattering methods for internal multiples are totally different and have different requirements for subsurface information or interpretive intervention. The former removes all multiples related to a certain boundary with the aid of a surface integral along this boundary; the latter will predict and attenuate all internal multiples at the same time. In this paper, we continue our comparison study of these internal multiple attenuation methods; specifically, we examine two different realizations of the feedback method and the inverse scattering technique.
The document analyzes the reference velocity sensitivity of an elastic internal multiple attenuation algorithm. It first provides background on internal multiples and inverse scattering series methods for their attenuation. It then presents a 1.5D layered earth model and uses it to show that the elastic internal multiple algorithm can correctly predict arrival times without requiring an accurate reference velocity model, similarly to the acoustic case. The analysis involves deriving expressions for predicted internal multiples in the frequency-wavenumber domain and showing they depend only on the traveltimes of the primary reflections involved, not the reference velocities.
This document summarizes finite difference modeling methods used at M-OSRP. It discusses:
1) The second order time and fourth order space finite difference schemes used to model acoustic wave propagation.
2) How boundary conditions like Dirichlet/Neumann generate strong spurious reflections that can mask true events.
3) The importance of accurate source fields for modeling - better source fields lead to more accurate linear inversions and the ability to observe phenomena like polarity reversals in modeled data.
Green's Theorem Deghosting Algorithm - Dr. Arthur B. Weglein
This paper is an overview of the current state of multiple attenuation and developments that we might anticipate in the near future.
The basic model in seismic processing assumes that reflection data consist of primaries only. If multiples are not removed, they can be misinterpreted as, or interfere with, primaries. This is a longstanding and only partially solved problem in exploration seismology. Many methods exist to remove multiples, and they are useful when their assumptions and prerequisites are satisfied. However, there are also many instances when these assumptions are violated or where the prerequisites are difficult or impossible to attain; hence, multiples remain a problem. This motivates the search for new demultiple concepts, algorithms, and acquisition techniques to add to, and enhance, our toolbox of methods.
Furthermore, interest in multiple attenuation has been rejuvenated due to the industry trend toward more complex, costly, and challenging exploration plays. These include deepwater with a dipping ocean bottom and targets that are subsalt and sub-basalt. These circumstances can cause traditional methods to bump up hard against their assumptions. The heightened economic risk and complexity of these E&P objectives raise both the technology bar and the associated stakes for methods that can accommodate less a priori information and fewer restrictions and unrealistic assumptions.
Methods that can reach that level of effectiveness often place extra demands on processing costs, and on a more complete sampling and rigorous definition of the seismic experiment (e.g., the need for the source signature in water). However, that trade-off and added expense can be a real bargain if they enable the identification and removal of heretofore inaccessible multiples while preserving primaries. Indeed, being able to distinguish, for example, a gas sand from a multiple under a broader set of complex circumstances makes the extra cost of processing pale compared with reducing the risk and improving the reliability of "drill" or "no-drill" decisions.
Two basic approaches to multiple attenuation. Methods that attenuate multiples can be classified as belonging to two broad categories: (1) those that seek to exploit a feature or property that differentiates primaries from multiples and (2) those that predict and then subtract multiples from seismic data. The former are typically filtering methods, and the latter are generally based on the prediction from either modeling or inversion of the recorded seismic wavefield. This classification is not rigid; methods will often have aspects associated with each category.
There are some who have proposed an alternate point of view: primaries and multiples are considered as signal to be imaged or otherwise utilized. We anticipate further developments of this more inclusive approach. The current dominant viewpoint is the exclusive one, where primaries are signal and multiples are noise.
There is tremendous value in the latter approach, since depth and time imaging, and the separation of reflection from propagation, are relatively simple for the model of signal as primaries. The focus of this review will be on methods of multiple attenuation.
Filtering methods. These methods exploit some difference between multiples and primaries. This difference may only become apparent in a particular domain; hence the reason these techniques employ so many transformations. Table 1 shows, for filter methods, the various domains used today, the type of algorithm, and the feature being exploited.
Note that the feature being exploited can be roughly categorized into "periodicity" and "separability." The first group assumes that the key difference between multiples and primaries is that the former are periodic while the primaries are not. The second group assumes that by applying some transform to the data, the separation between primaries and multiples can be realized by muting a portion of the new domain. The transform is based on a feature that differentiates signal from noise, usually the difference in moveout between primary and multiple events. But the spatial behavior of a targeted multiple event can also define such a transform.
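The periodicity route (the "t / predictive decon" row of Table 1) can be sketched for a single trace. This is an illustrative toy, not the article's algorithm: the function name, filter length, prediction gap, and 1% prewhitening are arbitrary choices, with SciPy's Toeplitz solver standing in for the Wiener-Levinson recursion.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, filt_len, gap):
    """Wiener prediction-error filtering of a single trace.

    A prediction filter is designed from the trace autocorrelation to
    predict the periodic part of the trace `gap` samples ahead; the
    predicted (multiple) energy is then subtracted.
    """
    n = len(trace)
    # One-sided autocorrelation, lags 0 .. n-1
    ac = np.correlate(trace, trace, mode="full")[n - 1:]
    col = ac[:filt_len].copy()
    col[0] *= 1.01                     # 1% prewhitening for stability
    # Normal equations R a = r with the lagged right-hand side
    a = solve_toeplitz(col, ac[gap:gap + filt_len])
    pred = np.zeros(n)
    for k, w in enumerate(a):          # apply the prediction filter
        pred[gap + k:] += w * trace[:n - gap - k]
    return trace - pred                # prediction error ~ primaries
```

With the gap set to the water-bottom period in samples, such a filter largely removes a decaying periodic multiple train while leaving an aperiodic primary untouched, which is exactly the assumption the text goes on to question for nonzero offset.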
The filter corresponds to a mute of the principal components of a particular estimate of the covariance matrix. In this case, we attenuate the targeted multiple rather than all multiples in the data.
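The principal-component mute can be illustrated with an SVD playing the role of the eigenimage decomposition. This is a minimal sketch under the assumption that the targeted multiple has already been moveout-flattened, so it dominates the leading eigenimage; the function name and rank parameter are illustrative.

```python
import numpy as np

def mute_principal_components(gather, n_reject):
    """Eigenimage (SVD) filtering of a gather (traces x samples).

    After moveout-flattening, a targeted multiple is laterally coherent
    and concentrates in the leading eigenimages; muting those principal
    components attenuates it while largely passing dipping primaries.
    """
    u, s, vt = np.linalg.svd(gather, full_matrices=False)
    s[:n_reject] = 0.0                 # mute the strongest eigenimages
    return (u * s) @ vt
```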
The stacking technique is slightly different in that we do not mute, but we still rely on the moveout difference between the NMO-corrected primaries and the uncorrected multiples.
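The moveout-difference idea can be sketched as NMO correction plus stacking. This is a toy illustration only (nearest-sample spike synthetics, no stretch mute), with hypothetical function and parameter names; primaries flatten at the stacking velocity and sum coherently, while slower multiples retain residual moveout and are attenuated by the summation.

```python
import numpy as np

def nmo_stack(gather, offsets, t0_axis, v_stack):
    """NMO-correct a CMP gather at velocity v_stack and stack.

    For each zero-offset time t0 the sample is read at the hyperbolic
    travel time t(x) = sqrt(t0^2 + x^2 / v_stack^2) and summed.
    """
    stack = np.zeros(len(t0_axis))
    for trace, x in zip(gather, offsets):
        t_src = np.sqrt(t0_axis ** 2 + (x / v_stack) ** 2)
        stack += np.interp(t_src, t0_axis, trace, left=0.0, right=0.0)
    return stack / len(offsets)
```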
Tau-p deconvolution has aspects that belong to both the filtering and the prediction-and-subtraction categories; it has recently been extended to accommodate dipping layers.
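The domain changes in Table 1 rest on a transform; the tau-p case can be sketched as a slant stack, in which a linear event of slope p concentrates at that p, where it can then be muted or deconvolved. A toy forward transform only (no inverse, no anti-aliasing), with illustrative names:

```python
import numpy as np

def slant_stack(gather, offsets, slopes, t_axis):
    """Forward tau-p (linear Radon) transform by slant stacking:
    tp(p, tau) = sum over offsets x of d(x, tau + p * x)."""
    out = np.zeros((len(slopes), len(t_axis)))
    for ip, p in enumerate(slopes):
        for trace, x in zip(gather, offsets):
            # shift each trace by p*x and sum along tau
            out[ip] += np.interp(t_axis + p * x, t_axis, trace,
                                 left=0.0, right=0.0)
    return out
```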
Of course, when these assumptions do not hold, the methods can fail. For example, multiples become less periodic with offset; primaries may be periodic; multiples and primaries may overlap; multiples of diffractions are not accommodated; and primaries and multiples from either curved and dipping reflectors or beneath a laterally varying overburden violate assumptions of periodicity and hyperbolic moveout. Nevertheless, mild violations of these assumptions can lead to less effective but still useful results.
Two general caveats:
1) when the assumptions behind a multiple-attenuation method are violated, the results can be not only the incomplete removal of multiples but also the concomitant (and perhaps even more serious) damage to primaries;
2) beware of the fallacy of expressing a 1-D method in terms of 2-D or 3-D data and believing that a complete multidimensional method has been derived. There are many 2-D and 3-D phenomena (e.g., diffractions) that have no 1-D analog. Multidimensional methods derive from multidimensional theory.

Multiple attenuation: an overview of recent advances and the road ahead (1999)
ARTHUR B. WEGLEIN, ARCO Exploration and Production Technology, Plano, Texas, U.S.
THE LEADING EDGE, JANUARY 1999, p. 40
Downloaded 10 Sep 2011 to 99.10.237.97. Redistribution subject to SEG license or copyright; see Terms of Use at http://segdl.org/

Table 1.
Domain           Algorithm                             Feature
t                predictive decon                      periodicity
tau-p            Radon transform + predictive decon    periodicity
t-x              stacking                              separability
principal comp.  eigenimages + reject filter           separability
f-k              2-D FT + reject filter                separability
tau-p            Radon transform + reject filter       separability
f-k              3-D FT + reject filter                separability
Filtering is typically less costly than prediction and subtraction; hence, when effective, it is often the method of choice. Moreover, among new developments in filter methods, we anticipate advances in interpretation-driven schemes, such as 3-D prestack versions of targeted multiple techniques.
Wavefield prediction and subtraction. In these procedures, a wave-theoretic concept of how a given multiple type is generated is used to predict and subtract the multiple. At present there are three different wavefield prediction and subtraction methods: wavefield extrapolation; feedback loop; and inverse-scattering series. Each has a unique and distinct concept concerning the generation and removal of multiples, and each has a different required level of a priori and/or a posteriori information. The latter can be in the form of an algorithmic requirement for subsurface information or the need for the intervention of an interpreter with an interjection of judgment, analysis, or discrimination.
Wavefield extrapolation is a modeling and subtraction method, whereas the feedback and inverse-scattering methods are based on the prediction mechanisms within two different inversion procedures.
Wavefield extrapolation. The wavefield extrapolation method models wave propagation in the water column. It takes the data one round trip through the water column, and then adaptively subtracts this up- and downward-continued data from the original.
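At vertical incidence on a single trace, the round trip through the water column reduces to a time delay of 2*z_w/v_w, and the adaptive subtraction to a least-squares matching filter. The sketch below makes those simplifying assumptions (the actual method extrapolates the full wavefield and estimates matching parameters within a statistical model); the function name and filter length are illustrative.

```python
import numpy as np

def water_layer_demultiple(trace, dt, water_depth, v_water=1500.0,
                           filt_len=11):
    """Water-layer round trip (vertical incidence) plus adaptive subtraction.

    The trace is delayed by the two-way water time 2*water_depth/v_water
    via a phase shift; a least-squares matching filter then absorbs the
    unknown water-bottom reflection coefficient before subtraction.
    """
    n = len(trace)
    freqs = np.fft.rfftfreq(n, dt)
    delay = 2.0 * water_depth / v_water
    shifted = np.fft.irfft(np.fft.rfft(trace) *
                           np.exp(-2j * np.pi * freqs * delay), n)
    # Convolution matrix of the delayed trace (circular edges are
    # harmless here when the trace ends in zeros)
    F = np.stack([np.roll(shifted, k) for k in range(filt_len)], axis=1)
    f, *_ = np.linalg.lstsq(F, trace, rcond=None)
    return trace - F @ f
```

Note the design point the text emphasizes: the matching filter absorbs the water-bottom reflection coefficient and wavelet implicitly, so neither needs to be known explicitly.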
This method requires: (1) an a priori estimate of the water depth and (2) an a posteriori estimate of a set of parameters for an adaptive matching and subtraction process. These matching coefficients are derived within the context of a phenomenological/statistical model and therefore have an implicit dependence on the water-bottom reflection coefficient and the wavelet. This implicit dependence makes the parameters difficult to interpret, exploit, or estimate in terms of physical processes. However, the very important upside of this implicit or indirect dependence is that the method doesn't require explicit knowledge of the ocean-bottom reflection coefficient or the wavelet. The method has a demonstrated effectiveness and an important niche in seismic processing.
Free-surface multiple elimination: the feedback and inverse-scattering methods. The surface multiple elimination methods derive from the physics of waves reflecting at a free surface. A relationship is established between the recorded data, containing free-surface multiples, and the desired data with those multiples absent. These derivations make no assumption about the medium below the receivers. There are series and full operator solutions that can be realized in different transformed data domains.
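The first term of such a series relation can be illustrated schematically: a prediction built by convolving the data with itself in time and summing over surface positions. The sketch below is a toy only; it assumes shots and receivers share one surface grid and omits the source signature, deghosting, sign, and obliquity factors that the surrounding text discusses.

```python
import numpy as np

def predict_free_surface_multiples(data, dt):
    """First-order free-surface multiple prediction from the data alone:
    M(s, r, t) = sum over surface points x of D(s, x, t) * D(x, r, t),
    the temporal convolution done as a product per frequency.

    Assumes shots and receivers lie on the same surface grid.
    """
    ns, nr, nt = data.shape                       # shots x receivers x time
    D = np.fft.rfft(data, 2 * nt, axis=-1)        # pad against wrap-around
    M = np.einsum('sxf,xrf->srf', D, D)           # sum over the shared
                                                  # surface coordinate x
    return np.fft.irfft(M, 2 * nt, axis=-1)[..., :nt] * dt
```

For a single water-bottom primary at time t0, the prediction lands at 2*t0 with squared amplitude, the kinematics of the first-order surface multiple; in practice this prediction is then adaptively subtracted.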
The feedback and inverse-scattering techniques provide different methods for removing all multiples. The former is based on a free-surface and interface model and the latter on a free-surface and point-scatterer model.
The free-surface and interface removal and free-surface
and point-scatterer formulations both model the free-surface
reflector as the generator of free-surface multiples. They dif-
fer in their modeling of the source; the former method mod-
els in its simplest form the source as a vertical dipole in the
water, whereas the latter models the source as a monopole.
To compensate for the actual monopole nature of the source,
dipoledataareapproximatedbyremovingthereceiverghost
and leaving the source ghost intact. In the inverse-scatter-
ing formulation, the presence of the obliquity factor reflects
its modeling of the monopole source. While the two for-
mulations for free-surface multiples are conceptually and
algorithmically distinct, in practice the differences between
the two methods are often overshadowed by other factors
(e.g., cable feathering, source-and-receiver array effects, and
errors in deghosting). However, there are circumstances
where the differences matter; e.g., when seeking to interpret
the results of source-signature estimates (especially with
shallow targets and long offsets); and when interest is in
increasing the number of deterministic phenomena ac-
commodated, thereby reducing the burden on
statistical/adaptive techniques.
Practical prerequisites include (1) estimating the source signature and (2) compensating for missing near traces. This sig-
nature is actually a combination of source signature, free-
surface reflection coefficient, instrument response, and
algorithmic and numerical factors. A measured source sig-
nature could complement and enhance the efficiency of cur-
rent processing approaches. Missing near traces can often
be reasonably estimated, using trace-extrapolation methods,
when the nearest phone is within the precritical region of
the ocean-bottom reflection. There are important cases, for
example in shallower water, when current extrapolation
methods fail; new acquisition or processing methods will be
needed. Measurement of the near traces seems to be a direct
THE LEADING EDGE, JANUARY 1999, p. 41
and anticipated solution. In the past, these prerequisites
seemed insurmountable practical hurdles. But today, certain
consistently reliable methods for satisfying them have been
demonstrated; others will follow, and this new technology
will reach its full promise.
Specifically, the physics behind these free-surface multi-
ple elimination methods requires absolutely no a priori or a
posteriori information; however, the methods that are in
current use for finding the prerequisite wavelet (typically
based on minimization of the energy) sometimes require an
interpretive intervention.
In addition, one of the underlying strengths of the free-
surface multiple elimination techniques (the ability to separate primary from multiple with arbitrarily close moveout)
can be compromised by the energy-minimization condition
for the wavelet. To advance this technology, it is important
that we clearly distinguish between assumptions behind the
physics of the method itself and assumptions of the proce-
dures designed to satisfy the prerequisites of the method.
The energy-minimization criterion has an impressive
track record, in its various adaptive and global-search for-
mulations, for providing a useful wavelet for these multi-
ple-attenuation methods.
However, for the feedback and inverse-scattering meth-
ods to reach their potential, wavelet estimation methods will
need to be developed that can avoid the current pitfalls.
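In its simplest form, an illustrative reduction of the full adaptive and global-search formulations, the energy-minimization criterion amounts to choosing a single wavelet scale factor that minimizes the energy of the data after the shaped prediction is subtracted, and it has a closed-form solution. The same sketch exposes the pitfall discussed above: a primary arriving on top of the multiple leaks into the estimate.

```python
import numpy as np

def energy_min_scale(data, predicted):
    """Simplest energy-minimization wavelet estimate: the scale factor a
    minimizing ||data - a * predicted||^2.
    Closed form: a = <data, predicted> / <predicted, predicted>."""
    return float(np.dot(data, predicted) / np.dot(predicted, predicted))
```

When primary and multiple are well separated the estimate is exact; when they interfere, minimizing total energy also removes primary energy and biases the wavelet, which is precisely why the criterion can compromise close-moveout separation.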
In addition to understanding how well different mul-
tiple-attenuation techniques perform under different sub-
surface conditions (e.g., 1-D, 2-D, 2.5-D, and different
degrees of cross-line 3-D complexity), there are several key
and related issues that are high in technical and economic
priority and play an important role in both appropriate cur-
rent applications and in charting a path for the future.
Among them are: (1) how current 3-D acquisition impacts
(and influences the development of) different multiple-
attenuation techniques under different subsurface conditions; and (2) charting a course that identifies the potential added value of enhancements, in acquisition and processing, needed to satisfy the more data-demanding techniques, and thereby pinpoints the geologic circumstances where those enhancements could have a differential and significant cost/benefit.
Internal multiples: the feedback and inverse-scattering
methods. Free-surface multiples are multiples that have
experienced at least one downward reflection at the air-
water "free surface"; internal multiples are multiples that
have all of their downward reflections below the free surface. The reflectors that generate internal multiples are in general more remote and harder to define precisely (in comparison with the free surface that generates free-surface multiples); hence, internal multiples are more difficult to predict and attenuate. Furthermore, even in 1-D circumstances, internal multiples are in general more difficult than free-surface multiples to remove using methods that depend on moveout differences, since internal multiples have often experienced velocities similar to (or even higher than) those of primaries in their vicinity.
The feedback method models primaries and internal
multiples in terms of the actual medium and interfaces
(reflectors) that are the sources of those events. The inverse-
scattering method models primaries and internal multi-
ples in terms of reference medium propagation
(propagation in water) and scattering at every point where
the properties of the earth differ from water.
The two fundamentally different models for the gen-
eration and associated-inverse-removal of internal multi-
ples (in the interface and point-scatter model) lead to
completely different (1) cataloging of multiples; (2) algo-
rithms for their attenuation; and (3) requirements for a pri-
ori or a posteriori information.
The feedback method, with its associated interface
model for internal multiples, proceeds from one reflector
down to the next and removes all internal multiples that
have their shallowest downward reflection at that reflector. Realizing that program within the feedback method requires at least an implicit estimate of the velocity model for the downward-continuation operators and updating of those operators. The latter updating typically depends on
the flatness criteria for image gathers. Although we recognize that this is a commonly used criterion, it is nevertheless a significant assumption to consider it a necessary and sufficient condition for a correct downward continuation, especially under complex circumstances.
An alternate realization of the feedback program for
internal multiples appears to avoid certain aspects of the a
priori information by substituting the infusion of a poste-
riori information, through the judgment of an interpreter
who decides at each interface what is primary and what is
multiple. The feedback method of internal multiple removal
would seem to be particularly effective and most appropriate
when the internal-multiple-generating reflectors are shal-
low and smooth and when the macromodel needed to reach
them from the measurement surface is not very complex.
The inverse-scattering method for attenuating internal
multiples derives from the multiple prediction and sub-
traction subseries that reside within the only multidimen-
sional direct inversion methodology: the inverse scattering
series.
The removal of multiples is viewed as one of the steps,
stages, or tasks that a direct inversion method would have
to perform prior to imaging and inverting primaries for rel-
ative changes in earth mechanical properties. The inverse-
scattering series performs direct inversion; hence, the
inverse-scattering series must contain a part of itself, i.e., a
subseries, that is devoted to the task of removing multiples.
If the overall series starts with no a priori information, then
each task is carried out without a priori information. The
distinct subseries that attenuate free-surface and internal
multiples have been identified. Each term in the internal
multiple-attenuating series provides a mechanism for pre-
dicting and attenuating all multiples that have experienced
a certain number of reflections, independent of the location
of those reflectors. Absolutely no a priori or a posteriori information is required about the subsurface velocity or structure below the hydrophones. No iteration or interpretive intervention is ever needed. The method doesn't depend on periodicity, moveout differences, or stripping. The inverse-scattering method is the only multidimensional method for attenuating all multiples that doesn't require any
form of a priori or a posteriori information. We would expect
that the inverse-scattering series for attenuating internal
multiples would be particularly well suited and appropri-
ate when the reflectors that generate the internal multiples
are either not shallow, or not simple, or not smooth, or when
a complex macromodel would be needed to carry out the
downward continuation to a reflector.
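The "lower-higher-lower" construction behind each term of the internal-multiple subseries can be illustrated with a toy 1-D predictor over discrete events: every triplet of events whose middle member is shallower (earlier) than its two flanks predicts a multiple at time t1 - t2 + t3. This is a hypothetical sketch that predicts arrival times and rough amplitudes only; the actual attenuator's signs, obliquity factors, and integrals over pseudo-depth are omitted.

```python
from itertools import product

def predict_internal_multiples_1d(events, eps=1e-6):
    """Toy 1-D illustration of the first-order inverse-scattering
    internal-multiple predictor. `events` is a list of (time, amplitude)
    pairs. Every triplet (t1, t2, t3) with t1 > t2 < t3 -- the
    lower-higher-lower relation in pseudo-depth -- predicts a multiple
    at time t1 - t2 + t3 with amplitude a1 * a2 * a3."""
    pred = {}
    for (t1, a1), (t2, a2), (t3, a3) in product(events, repeat=3):
        if t1 > t2 + eps and t3 > t2 + eps:
            t = round(t1 - t2 + t3, 9)   # combined arrival time
            pred[t] = pred.get(t, 0.0) + a1 * a2 * a3
    return pred
```

Note that the triplets are formed from the data events themselves, independent of where the generating reflectors are located, which is why no subsurface information is required.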
A strength of the feedback method is that, when it can
carry out its program to a given reflector, the cost per reflec-
tor is roughly twice the cost of the free-surface algorithm.
The cost of the inverse-scattering series approach to inter-
nal multiple attenuation is considerably greater.
Early tests indicate that the incremental cost of performing inverse-scattering internal multiple attenuation is about an order of magnitude (10 times) greater than that of attenuating free-surface multiples alone. This figure factors in the overhead costs of data preprocessing and quality control that are required for free-surface multiple attenuation. The ratio
is significantly larger if you just compare the compute
cycle time of the inverse scattering internal multiple algo-
rithm to the free-surface case. However, it is important to
note that the inverse scattering procedure accommodates
all reflectors at once.
From both a cost/benefit and a domain-of-applicability standpoint, it appears that the feedback (interface) model
and inverse-scattering (point-scatterer) model could evolve
into complementary approaches to the important problem
of internal multiple attenuation. In any given case, you
might choose one or the other or a combination of the two.
Early field tests of the inverse-scattering and interface
methods for internal multiples are encouraging. Table 2
summarizes the prediction and subtraction methods.
Conclusions. Multiple attenuation methods continue to
develop, evolve and mature, driven by the confluence of
heightened technical challenge and increased economic risk.
Whereas filter methods are continuously moving toward
greater effectiveness, the wavefield prediction and subtrac-
tion techniques are the current point men in the assault on
the most resistant and troublesome multiples. The latter
allow for the most complex subsurfaces but require clarity
and completeness in the seismic experiment. Although they
are in general more demanding, their demands are in a realm
where we are able, in principle, to satisfy them. They are a
reasonable trade for earlier demands or assumptions about
the subsurface that are intrinsically beyond our reach or
knowledge. All new methods (including multidimensional
wave-theoretic procedures) attempt to capture ever more
complete and realistic descriptions of the phenomena that
seismic signals experience. Although progress is measured
by ever more complete descriptions and models, they are
never complete; thus the constant and ubiquitous need for
statistical, interpretive, and adaptive procedures to accom-
modate the components of reality beyond the current best
deterministic model. As the latter tools (that address out-of-
deterministic-model phenomena) become more general and
flexible, they increase their practical contribution. However,
the clearest measure of overall advancement is determined
by the movement of deterministic methods into the domain
of statistics and adaptive-interpretive techniques.
The dominant practical issue today is the 3-D applica-
tion of these techniques with current 3-D acquisition. For
example, the lack of well sampled cross-line data provides
the impetus for developing new extrapolation, interpola-
tion, and acquisition techniques.
It is difficult to overstate the significance
of the landmark event that developed as a byproduct of
the absolute need for the seismic wavelet, within the band-
width, for the inverse-scattering and feedback methods.
Current standard practice uses the output of the multiple
attenuation itself to find its wavelet. The result is that
under a large set of circumstances we have made progress
toward processing the absolute pressure field, within the
bandwidth, due to an impulsive source.
That it is possible to estimate the source signature from
typical towed streamer data, within the bandwidth, is a sin-
gular event in seismic processing history. The eventual
Table 2. Prediction and subtraction methods.

Wavefield extrapolation (modeling and subtraction)
- Types of multiples: water-bottom, peg-leg, and first-layer reverberations
- Fundamental physical unit: water layer + ocean bottom
- Additional information needed: water depth (a priori); adaptive subtraction (a posteriori)

Feedback (inversion)
- Types of multiples: free-surface multiples; internal multiples (all orders, one interface at a time)
- Fundamental physical unit: free surface + interface (reflector)
- Additional information needed: none for free-surface multiples; for internal multiples, an a priori velocity model (implicit in the CFP operators) and operator updating, or an a posteriori interpretive decision at each reflector

Inverse-scattering series (inversion)
- Types of multiples: free-surface multiples; internal multiples (all interfaces, one order at a time)
- Fundamental physical unit: free surface + point scatterer
- Additional information needed: none for free-surface or internal multiples
impact of this true pressure-field band-limited impulse
response determination on other yet-to-be-developed seis-
mic methods and applications could even overshadow the
multiple-attenuation methods from which it emanated.
Further work is needed to determine the source wavelet
under a broader set of circumstances.
Basically, there are two current approaches to multiple
attenuation: (1) Distinguish and separate multiple from pri-
mary and (2) predict and subtract multiples from the data.
These broad categories contain several subgroups of meth-
ods, each, in turn, with specific strengths and limitations.
Filter methods are typically less expensive than prediction
and subtraction; and, when effective, are the methods of
choice.
The attitude we advocate is a tool-box approach, where
these strengths and limitations are understood and where
the appropriate method is chosen based on effectiveness,
cost, and processing objectives.
Suggestions for further reading. "Inverse scattering series for multiple attenuation: An example with surface and internal multiples" by Araujo et al. (SEG 1994 Expanded Abstracts). "Water reverberations - their nature and elimination" by Backus (GEOPHYSICS, 1959). "Wavefield extrapolation techniques for prestack attenuation of water reverberations" by Benth and Sonneland (presented at SEG's 1983 Annual Meeting). "Deepwater peg-legs and multiples: emulation and suppression" by Berryhill and Kim (GEOPHYSICS, 1986). "Nonlinear inverse scattering for multiple attenuation: Application to real data, Part 1" by Carvalho et al. (SEG 1992 Expanded Abstracts). "Surface multiple attenuation - theory, practical issues, and examples" by Dragoset (1992 EAGE Abstracts). "Multichannel attenuation of high-amplitude peg-leg multiples: Examples from the North Sea" by Doicin and Spitz (EAEG 1991 Annual Meeting). "Multichannel attenuation of water-bottom peg-legs pertaining to a high-amplitude reflection" by Doicin and Spitz (SEG 1991 Expanded Abstracts). Seismic Applications of Acoustic Reciprocity by Fokkema and van den Berg (Elsevier, 1993). "Suppression of multiple reflections using the Radon transform" by Foster and Mosher (GEOPHYSICS, 1992). "Removal of surface-related diffracted multiples" by Hadidi et al. (1995 EAGE Abstracts). "Inverse velocity stacking for multiple elimination" by Hampson (CSEG Journal, 1986). "A strategy for multiple suppression" by Hardy and Hobbs (First Break, 1991). "Source signature estimation based on the removal of first-order multiples" by Ikelle et al. (SEG 1995 Expanded Abstracts). "Radon multiple elimination, a practical method for land data" by Kelamis et al. (SEG 1990 Expanded Abstracts). "The suppression of surface multiples on seismic records" by Kennett (Geophysical Prospecting, 1979). "Targeted multiple attenuation" by Kneib and Bardan (EAGE 1994 Annual Meeting). "Predictive deconvolution in shot-receiver space" by Morley and Claerbout (GEOPHYSICS, 1983). "2-D multiple reflections" by Riley and Claerbout (GEOPHYSICS, 1976). "Principles of digital Wiener filtering" by Robinson and Treitel (Geophysical Prospecting, 1967). "Decomposition (DECOM) approach to wavefield analysis with seismic reflection records" by Ryu (GEOPHYSICS, 1982). "Long-period multiple suppression by predictive deconvolution in the x-t domain" by Taner et al. (Geophysical Prospecting, 1995). "Application of homomorphic deconvolution to seismology" by Ulrych (GEOPHYSICS, 1971). "Surface-related multiple elimination: an inversion approach" by Verschuur (Ph.D. dissertation, ISBN 90-9004520-1). "Adaptive surface-related multiple elimination" by Verschuur et al. (GEOPHYSICS, 1992). "Attenuation of complex water-bottom multiples by wave-equation based prediction and subtraction" by Wiggins (GEOPHYSICS, 1988). "Velocity-stack processing" by Yilmaz (Geophysical Prospecting, 1989). "Why don't we measure seismic signatures?" by Ziolkowski (GEOPHYSICS, 1991). "Multiple suppression by single channel and multichannel deconvolution in the tau-p domain" by Lokshtanov (SEG 1995 Expanded Abstracts). "Comparing the interface and point-scatterer methods for attenuating internal multiples: a study with synthetic data - Part 1" by Verschuur et al. (SEG 1998 Expanded Abstracts). "Comparing the interface and point-scatterer methods for attenuating internal multiples: a study with synthetic data - Part 2" by Matson et al. (SEG 1998 Expanded Abstracts). "Wave equation prediction and removal of interbed multiples" by Jakubowicz (SEG 1998 Expanded Abstracts). "Wavelet estimation for surface-related multiple attenuation using a simulated annealing algorithm" by Carvalho and Weglein (SEG 1994 Expanded Abstracts). "3-D surface-related multiple prediction" by Nekut (SEG 1998 Expanded Abstracts). "Deghosting and free-surface multiple attenuation of multicomponent OBC data" by Ikelle (SEG 1998 Expanded Abstracts). "Removal of water-layer multiples from multicomponent sea-bottom data" by Osen et al. (SEG 1996 Expanded Abstracts). "Multiple wavefields: separating incident from scattered, up from down, and primaries from multiples" by Ziolkowski et al. (SEG 1998 Expanded Abstracts). "2-D multiple attenuation operators in t-p domain" by Liu et al. (SEG 1998 Expanded Abstracts). "Hough transform based multiple removal in the XT domain" by Pipan et al. (SEG 1998 Expanded Abstracts). "Dereverberation after water migration" by Parrish (SEG 1998 Expanded Abstracts). "Multiple suppression: beyond 2-D. Part 1: theory" by Ross (SEG 1997 Expanded Abstracts). "Multiple suppression: beyond 2-D. Part 2: application to subsalt multiples" by Ross et al. (SEG 1998 Expanded Abstracts). "Estimation of multiple scattering by iterative inversion. Part 1: Theoretical considerations" by Berkhout and Verschuur (GEOPHYSICS, 1997). "Internal multiple attenuation using inverse scattering: Results from prestack 1-D and 2-D acoustic and elastic synthetics" by Coates and Weglein (SEG 1996 Expanded Abstracts). "Removal of elastic interface multiples from land and ocean bottom seismic data using inverse scattering" by Matson and Weglein (SEG 1996 Expanded Abstracts). "An inverse scattering series method for attenuating multiples in seismic reflection data" by Weglein et al. (GEOPHYSICS, 1997). "Removal of internal multiples: Field data example" by Hadidi and Verschuur (SEG 1997 Expanded Abstracts). LE
Acknowledgments: I once again express my sincere appreciation to all
of the contributors to my 1995 talk which formed the nucleus of this
paper. In addition, I thank A. J. Berkhout, Eric Verschuur, Ken Matson,
Chi Young, and Helmut Jakubowicz for the truly outstanding collaborative effort and joint research papers presented at SEG's 1998 Annual
Meeting. Many views expressed in this paper were gleaned from the
conclusions and comparisons of that study. That collaboration and
analysis continue. I offer my congratulations to the Delft group for its
outstanding model of integrity and openness and to Verschuur and
Matson for leading the comparison studies and analysis. I thank the
ARCO management for supporting this effort, in particular J. K. O'Connell for his considered and astute technical analysis and perspective
and Steve Moore for his strong interest and constant support. Dennis
Corrigan, L. Peardon, Paulo Carvalho, F. Gasparotto, T. Ulrych, David
Campbell, Andre Romanelli, W. Beydoun, B. Davis, H. Akpati, Vandemir
Oliveira, P. Stoffa, Luc Ikelle, Steve Hill, and Bill Dragoset are thanked
for helpful and constructive suggestions.
Corresponding author: A. Weglein, aweglein@arco.com