We consider elements of the nonlinear inversion of primaries, with particular emphasis on the viscoacoustic, or absorptive/dispersive, case. Since the main ingredient of higher-order terms in the inverse scattering series (as it is currently cast) is the linear, or Born, inverse, we begin by considering its "natural" form, i.e., without the corrections and/or assumptions discussed in Innanen and Weglein (2004). The absorptive/dispersive linear inverse for a single interface is found to be a filtered step function, and for a single layer a set of increasingly smoothed step functions. The "filters" characterizing the model shape are analyzed.
We next consider the nature of the inversion subseries, which would accept the linear inverse output and, via a series of nonlinear operations, transform it into a signal with the correct amplitudes (i.e., the true Q profile). The nature of the data-event communication carried out by these nonlinear operations is discussed using a simple 1D normal-incidence physical milieu, after which we focus on the viscous (attenuating) case. We show that the inversion subseries correctly produces the Q profile if the input signal has had the attenuation (or viscous propagation effects) compensated. This supports the existing ansatz (Innanen and Weglein, 2003) that a generalized imaging subseries (i.e., not the inversion subseries) for the viscous case carries out the task of Q compensation.
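As a rough illustration (not taken from the paper) of why the absorptive/dispersive linear inverse for a single interface looks like a filtered step function, the sketch below applies a nearly-constant-Q attenuation/dispersion filter to an ideal unit step. The parameter values (Q, the propagation time tau, the reference frequency f_ref, and the step location t0) are purely illustrative assumptions, and the dispersion phase is only a schematic stand-in for the paper's actual operator; the qualitative point is that the sharp boundary becomes a smoothed, slightly asymmetric transition.

```python
import numpy as np

def constant_q_smoothed_step(n=2048, dt=0.001, t0=0.5,
                             Q=30.0, tau=0.1, f_ref=100.0):
    """Apply a nearly-constant-Q absorption/dispersion filter to a unit step.

    All parameter values are illustrative, not from the paper:
      Q     -- quality factor of the overburden
      tau   -- propagation time spent in the absorptive medium (s)
      f_ref -- reference frequency for the dispersive phase (Hz)
    """
    t = np.arange(n) * dt
    step = (t >= t0).astype(float)            # idealized sharp boundary

    S = np.fft.rfft(step)
    f = np.fft.rfftfreq(n, d=dt)              # frequency axis (Hz)
    w = 2.0 * np.pi * f

    # amplitude loss: exp(-w * tau / (2Q)), the constant-Q attenuation factor
    amp = np.exp(-w * tau / (2.0 * Q))

    # schematic nearly-constant-Q dispersion phase, referenced to f_ref;
    # clamp f away from zero to avoid log(0) at DC
    f_safe = np.maximum(f, f[1] if n > 1 else 1.0)
    phase = (w * tau / (np.pi * Q)) * np.log(f_safe / f_ref)

    smoothed = np.fft.irfft(S * amp * np.exp(1j * phase), n)
    return t, step, smoothed
```

Plotting `smoothed` against `step` shows the smeared transition: the DC level (and hence the asymptotic amplitude of the step) is preserved, while the maximum slope at the boundary is strongly reduced, which is the sense in which the linear inverse recovers a "filtered" version of the true model contrast.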
The document provides summaries of famous landmarks around the world in 3-4 sentences each. It describes the Colosseum in Rome as an oval shaped stadium constructed in 80 AD that hosted gladiator fights and animal combats, holding over 50,000 spectators. Stonehenge in the UK was constructed over 5,000 years ago, with giant stones weighing up to 50 tons forming the outer circle. The Parthenon in Athens was built starting in 447 BC to replace earlier temples of Athena.
1) The document discusses the experiences of former East Germans 25 years after the fall of the Berlin Wall, including Klaus-Jurgen Warnick who lived in Kleinmachnow outside Berlin.
2) Warnick received land from the East German government in 1969 and built a house, planting trees for his grandchildren, but faced trouble when the original owner returned from West Germany after reunification and claimed the property.
3) While life is better now for former East Germans with freedoms like free speech, they still face economic disadvantages compared to West Germans in terms of wages, savings, and retirement benefits, showing that reunification did not fully resolve inequalities between East and West.
The document contains a series of multiple choice questions about an article on Rwanda's progress since the 1994 genocide. Question 1 asks which statement does not support the idea that the UN still fails to protect people during wars. Question 2 asks what the author aims to show in introducing President Kagame. Question 3 asks which choice best illustrates improved relations between Hutus and Tutsis. Question 4 asks which paragraph best supports the main idea of the article. The questions assess understanding of key details and themes in the article about Rwanda rebuilding after the genocide.
Andrew Wiswell, NAL Energy's President and CEO, presents at the CIBC 2012 Whistler Institutional Investor Conference at Whistler, B.C., at 8 a.m. PST (9 a.m. MST, 11 a.m. EST).
The document summarizes several major intellectual and cultural movements between the Renaissance and Enlightenment periods. It describes the Renaissance as a rebirth of Greco-Roman learning and humanism that focused on man as a subject of culture, art, and literature. The Reformation emphasized individualism in Christianity and a personal relationship with God. The Scientific Revolution established the scientific method of empirical observation, experimentation, and mathematical proofs to understand the mechanistic universe. Finally, the Enlightenment applied reason, science, and individualism to develop theories of natural law, inalienable rights, social contract, liberty, and a free market.
This document contains images and questions about analyzing images from the 19th and 21st centuries. The images depict various impacts of the industrial revolution like overcrowded cities, growth of the middle class through new jobs, and changes to labor patterns. One image shows a political cartoon about the Luddite movement resisting new technologies. Another image is a chart showing a demographic change caused by the industrial revolution.
The Industrial Revolution began in Great Britain in the late 18th century and spread to other parts of Western Europe, the United States, Japan, and Russia in the 19th century. It introduced new patterns of machine-powered mass production that transformed how goods were manufactured and consumed. This led to rapid urbanization as people moved from rural homes to factory cities. While industrialization provided benefits like increased production, it also had negative consequences for workers, who faced difficult working conditions, low wages, lack of protections, and other problems. The changes driven by the Industrial Revolution disrupted traditional societies and economies.
The document discusses Open Educational Resources (OERs), which are educational materials like course modules, videos, and textbooks that are freely available online for anyone to use under an open license. It provides reasons to use OERs such as offering free resources to students, supplementing existing course material, and saving time by reusing quality materials created by others. It also notes some words of caution about OERs, such as ensuring the quality of materials used and educating others on the benefits of OERs. The document encourages opening the door to lifelong learning by sharing OERs with students.
CCCOER: Research Review of Faculty and Student OER Usage (Una Daly)
Increasing textbook costs, coupled with the generally rising cost of education, have begun motivating faculty and their colleges to explore the use of open educational resources. At the same time, recent studies have shown that a majority of faculty and administrators are largely unaware of the quantity and quality of free and open educational resources. This webinar will feature two experienced researchers sharing recent findings from a wide variety of higher education and secondary education OER pilot studies. In addition, they will address best practices for conducting OER research on your campuses to expand usage and understand the benefits and challenges from faculty and student perspectives.
Please join the Community College Consortium for Open Educational Resources (CCCOER) for this free, open webinar on:
Date: Wednesday, February 11
Time: 10 am PST; 11:00 am MT; 1:00 pm EST
Featured speakers:
Boyoung Chae, Policy Associate, eLearning and Open Education, Washington State Board for Community and Technical Colleges
The Washington State Board for Community and Technical Colleges released a report last month on the use of open educational resources, based on interviews with 60 faculty in Washington's community and technical college system and built upon a previous statewide survey of 770 faculty. Faculty were queried about (1) how and why they chose to use OER, (2) six benefits including student savings, (3) six challenges of using OER, and (4) nine supports from college and statewide stakeholders that could help them expand their OER use.
John Hilton III, Assistant Professor of Ancient Scripture, OER Researcher, Brigham Young University.
This presentation synthesizes the results of eight peer-reviewed studies that examine (1) the perceptions students and instructors have of OER that replaced traditional textbooks, (2) the potential influence of OER on student learning outcomes, and (3) the cost savings resulting from OER. Suggested paths forward to expand the pool of peer-reviewed academic research on these three questions will also be shared.
The Growing Community of College OER Projects, May 2015 (Una Daly)
Please join the Community College Consortium for Open Educational Resources (CCCOER) for a free open webinar on the growing community of College OER projects. We will be featuring college OER projects from the Minnesota State Colleges and Universities (MnSCU), College of the Canyons in California, as well as updates from the Maricopa College District in Arizona and the growing OER movement at Oregon community colleges.
Our speakers will share strategies to support faculty awareness and adoption of open textbooks and open educational resources. We will also have faculty sharing how open textbook adoption affects course design and departmental policies as well as feedback from their students on the use of free and open textbooks.
Date: Wednesday, May 13
Time: 10 am PST; 1:00 pm EST
Featured speakers:
• Katie Coleman and Thea Alvarado, Sociology faculty and open textbook editors, College of the Canyons, California
• Todd Digby, System Director of Academic Technology, MnSCU, Minnesota
• Paul Golisch, CIO & Dean of Information Technology, Paradise Valley College, Arizona, and Maricopa College District OER Committee co-chair
This document summarizes research on direct non-linear inversion of 1D acoustic media using the inverse scattering series. Key points:
- A method is derived for directly inverting 1D acoustic media with varying velocity and density, without requiring an estimate of properties above reflectors or assuming linear relationships between property changes and reflection data.
- Testing on a single reflector case showed improved estimates of property changes beyond the reflector compared to linear methods, for a wider range of angles.
- A special parameter related to velocity change was identified that has the correct sign in the linear inversion and is not affected by issues like "leakage" that complicate inversions.
Linear inversion of absorptive/dispersive wave field measurements: theory and... (Arthur Weglein)
The use of inverse scattering theory for the inversion of viscoacoustic wave field measurements, namely for a set of parameters that includes Q, is by its nature very different from most current approaches to Q estimation. In particular, it involves an analysis of the angle- and frequency-dependence of the amplitudes of viscoacoustic data events, rather than the measurement of temporal changes in the spectral nature of events. We consider the linear inversion for these parameters theoretically and with synthetic tests. The output is expected to be useful in two ways: (1) on its own it provides an approximate distribution of Q with depth, and (2) higher-order terms in the inverse scattering series, as it would be developed for the viscoacoustic case, would take the linear inverse as input.
We will begin, following Innanen (2003), by casting and manipulating the linear inversion problem to deal with absorption for a problem with arbitrary variation of wavespeed and Q in depth, given a single shot record as input. Having done this, we will numerically and analytically develop a simplified instance of the 1D problem. This simplified case will be instructive in a number of ways, first of all in demonstrating that this type of direct inversion technique relies on reflectivity, and has no interest in, or ability to analyse, propagation effects as a means to estimate Q. Secondly, through a set of examples of slightly increasing complexity, we will demonstrate how and where the linear approximation causes more than the usual levels of error. We show how these errors may be mitigated through the use of specific frequencies in the input data, or, alternatively, through a layer-stripping based, or bootstrap, correction. In either case the linear results are encouraging, and suggest the viscoacoustic inverse Born approximation may have value as a standalone inversion procedure.
Implementation Of Geometrical Nonlinearity in FEASTSMT (iosrjce)
Analysis of the structures used in aerospace applications is done using the finite element method. These structures may face unexpected loads because of variable environmental conditions, and these loads can lead to large deformations and inelastic behavior. The aim of this research is to formulate finite elements that account for the effects of large deformation and strain. Here the total Lagrangian method is used to treat large deformation. After deriving the required relations, the formulated equations are implemented in FEASTSMT (Finite Element Analysis of Structures - Substructured and Multi-Threading). The Newton-Raphson method is utilized to solve the nonlinear finite element equations. The validation is carried out against results obtained from the Marc software.
The document describes the implementation of geometrical nonlinearity in finite element analysis software FEASTSMT. It discusses total Lagrangian formulation and the Newton-Raphson method to solve nonlinear finite element equations arising from large deformations. Element formulations accounting for incremental strains, strain-displacement relationships, and stresses are developed. The implementation is validated on a solid element and beam bending problem by comparing results with MARC software and analytical solutions, showing good agreement.
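The total Lagrangian element formulations are specific to the paper, but the Newton-Raphson loop at the core of the solver is generic. A minimal sketch on a hypothetical one-degree-of-freedom hardening spring (the stiffness and load values below are made up for illustration, not taken from FEASTSMT):

```python
import numpy as np

def newton_raphson(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for R(u) = 0: u <- u - J(u)^{-1} R(u)."""
    u = np.array(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            return u
        u = u - np.linalg.solve(jacobian(u), r)   # solve the linearized system
    raise RuntimeError("Newton-Raphson did not converge")

# Hypothetical one-DOF hardening spring: internal force k*u + a*u^3 vs. load f
k, a, f = 2.0, 0.5, 3.0
res = lambda u: np.array([k * u[0] + a * u[0] ** 3 - f])   # out-of-balance force
jac = lambda u: np.array([[k + 3.0 * a * u[0] ** 2]])      # tangent stiffness
u = newton_raphson(res, jac, [0.0])
```

In a nonlinear FE code the residual is the out-of-balance force vector and the Jacobian is the tangent stiffness matrix; the structure of the loop is the same.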
This document discusses several methods for approximating solutions to the Navier-Stokes equations (NSE), including nondimensionalization, creeping flow, inviscid flow, irrotational flow, and potential flow. It explains how these approximations simplify the NSE by removing terms to create linear, analytically solvable forms. Elementary flows like source/sink, vortex, and doublet are introduced that can be combined using superposition to model more complex flows.
Turbulence Modeling – The k-ε Turbulence Model.pdf (SkTarifAli1)
The document discusses turbulence modeling using the k-ε turbulence model. It describes the standard k-ε model, including the Boussinesq hypothesis, transport equations for kinetic energy k and dissipation rate ε, model constants, and shortcomings. The realizable k-ε model and improvements to address issues like non-realizability are covered. Near-wall treatments like enhanced wall functions, scalable wall functions, and Menter-Lechner ε-equation are summarized. The appropriate y+ values according to different turbulence models and flow conditions are also briefly mentioned.
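The Boussinesq closure at the heart of the standard k-ε model is a one-line formula; a sketch using the standard model constant C_μ = 0.09:

```python
def eddy_viscosity(k, eps, c_mu=0.09):
    """Boussinesq eddy viscosity of the standard k-epsilon model:
    nu_t = C_mu * k^2 / eps
    (k: turbulent kinetic energy, eps: dissipation rate)."""
    return c_mu * k * k / eps
```

The transported quantities k and ε come from the two model transport equations the summary mentions; the closure above converts them into the eddy viscosity used in the mean-flow momentum equations.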
This document summarizes a method for calculating the sensitivity matrix that defines the linear relationship between circuit parameters and poles/response of an RLC network. The sensitivity matrix enables efficient statistical analysis and yield predictions. It is obtained by taking derivatives of the poles and transfer function, which are calculated from the eigenvalues and eigenvectors of the network's state equation. An example RLC circuit demonstrates calculating the sensitivity matrix and using it to predict yield based on Monte Carlo simulations.
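The paper obtains the sensitivity matrix analytically from eigenvalue/eigenvector derivatives; as a hedged stand-in, a finite-difference version illustrates what the matrix contains, using a hypothetical overdamped series RLC whose poles solve s² + (R/L)s + 1/(LC) = 0:

```python
import numpy as np

def pole_sensitivities(poles_of, params, eps=1e-6):
    """Finite-difference sensitivity matrix S[i, j] = d(pole_i)/d(param_j).
    The paper computes this analytically; finite differences are a stand-in."""
    p0 = np.asarray(params, dtype=float)
    poles0 = poles_of(p0)
    S = np.zeros((len(poles0), len(p0)))
    for j in range(len(p0)):
        p = p0.copy()
        p[j] += eps
        S[:, j] = (poles_of(p) - poles0) / eps
    return S

# Hypothetical overdamped series RLC: poles of s^2 + (R/L)s + 1/(LC) = 0
def rlc_poles(p):
    R, L, C = p
    return np.sort(np.roots([1.0, R / L, 1.0 / (L * C)]).real)
```

Once the matrix is available, statistical analysis proceeds by propagating small parameter perturbations linearly through it, as the summary describes.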
The document discusses the application of wavelets in solving Hankel transforms numerically. It begins with introducing key concepts like waves, wavelets, Hankel transforms and their properties. It then discusses how wavelet methods like CAS wavelets and sine-cosine wavelets can be used to solve Hankel transforms of various functions by separating the integrand into slowly varying and rapidly oscillating components, and representing them using wavelet series. Numerical results on example functions show the proposed wavelet methods provide more accurate approximations compared to other classical numerical techniques for solving Hankel transforms. The paper indicates wavelet methods have promising applications in solving integral equations involving Bessel functions that appear in fields like acoustics, electromagnetics and image processing.
The document discusses the classification of governing equations for fluid flow based on their mathematical character. There are three main types of classifications: hyperbolic equations where information propagates at a finite speed along characteristics; parabolic equations where information travels in one direction along a single characteristic; and elliptic equations where information propagates equally in all directions with no preferred direction. The Navier-Stokes equations are parabolic for unsteady flows and elliptic for steady flows. The Euler equations depend on compressibility - steady subsonic flows are elliptic while steady supersonic and all unsteady compressible flows are hyperbolic. This classification determines the number of initial/boundary conditions needed to solve the equations.
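The three classes above follow from the discriminant of the second-order terms of a·u_xx + b·u_xy + c·u_yy + (lower order) = 0; a toy classifier makes the rule concrete:

```python
def classify_second_order_pde(a, b, c):
    """Classify a*u_xx + b*u_xy + c*u_yy + (lower order) = 0 by discriminant."""
    disc = b * b - 4.0 * a * c
    if disc > 0:
        return "hyperbolic"   # e.g. wave equation: two real characteristics
    if disc == 0:
        return "parabolic"    # e.g. heat equation: one characteristic
    return "elliptic"         # e.g. Laplace equation: no real characteristics
```

For example, the wave equation u_tt - u_xx = 0 has a = 1, b = 0, c = -1 and is hyperbolic, while Laplace's equation u_xx + u_yy = 0 is elliptic.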
Linear regression [Theory and Application (In physics point of view) using py... (ANIRBANMAJUMDAR18)
Machine-learning models are behind many recent technological advances, including high-accuracy text translation and self-driving cars. They are also increasingly used by researchers to help solve physics problems, such as finding new phases of matter, detecting interesting outliers in data from high-energy physics experiments, and finding astronomical objects known as gravitational lenses in maps of the night sky. The rudimentary algorithm that every machine-learning enthusiast starts with is linear regression. In statistics, linear regression is a linear approach to modelling the relationship between a scalar response (or dependent variable) and one or more explanatory variables (or independent variables). Linear regression analysis (least squares) is used in a physics lab to prepare computer-aided reports and to fit data. In this article, the method is applied to the experiment 'DETERMINATION OF DIELECTRIC CONSTANT OF NON-CONDUCTING LIQUIDS'. The entire computation is carried out in the Python 3.6 programming language.
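A minimal least-squares fit of the kind used for such lab reports can be sketched as follows; the data values below are made up for illustration, not the dielectric-constant measurements from the article:

```python
import numpy as np

def least_squares_fit(x, y):
    """Ordinary least squares for y = m*x + c; returns (slope, intercept)."""
    A = np.column_stack([x, np.ones_like(x)])      # design matrix [x | 1]
    (m, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    return m, c

# Made-up lab readings (NOT the article's dielectric-constant data)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 4.9, 7.2, 8.8, 11.1])
m, c = least_squares_fit(x, y)
```

In the lab setting, x and y would be the measured quantities whose linear relationship encodes the physical constant being determined.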
Calculating Voltage Instability Using Index Analysis in Radial Distribution ... (IJMER)
This paper presents an analysis of the voltage stability index using a simple and efficient load-flow method to find the voltage magnitude at each node in a radial distribution network. It shows the value of the voltage stability index at each node and predicts which node is most sensitive to voltage collapse. The paper also presents the effect on the voltage stability index of variation in active power, reactive power, and both together. The voltage, the VSI, and the effect of load variation on the VSI are calculated for 33-node and 28-node systems, with results shown.
Alexander Litvinenko's research interests include developing efficient numerical methods for solving stochastic PDEs using low-rank tensor approximations. He has made contributions in areas such as fast techniques for solving stochastic PDEs using tensor approximations, inexpensive functional approximations of Bayesian updating formulas, and modeling uncertainties in parameters, coefficients, and computational geometry using probabilistic methods. His current research focuses on uncertainty quantification, Bayesian updating techniques, and developing scalable and parallel methods using hierarchical matrices.
DIGITAL WAVE FORMULATION OF THE PEEC METHOD (Piero Belforte)
This document presents a digital wave formulation (DWF) method for solving the quasi-static partial element equivalent circuit (PEEC) method in the time domain. The standard PEEC model is transformed into a wave digital network by using wave variables and defining wave digital circuit elements corresponding to inductors, capacitors, resistors, sources, and connections. This allows the PEEC model to be solved as a wave digital network using a digital wave simulator (DWS). A numerical example compares the proposed DWF-PEEC solver to a standard SPICE solver, showing the DWF-PEEC method provides accurate results with significant speed-up in computation time.
Relevance Vector Machines for Earthquake Response Spectra (drboon)
This study uses Relevance Vector Machine (RVM) regression to develop a probabilistic model for the average horizontal component of 5%-damped earthquake response spectra. Unlike conventional models, the proposed approach does not require a functional form, and constructs the model based on a set of predictive variables and a set of representative ground motion records. The RVM uses Bayesian inference to determine the confidence intervals, instead of estimating them from the mean squared errors on the training set. An example application using three predictive variables (magnitude, distance and fault mechanism) is presented for sites with shear wave velocities ranging from 450 m/s to 900 m/s. The predictions from the proposed model are compared to an existing parametric model. The results demonstrate the validity of the proposed model, and suggest that it can be used as an alternative to the conventional ground motion models. Future studies will investigate the effect of additional predictive variables on the predictive performance of the model.
Hall 2006 problems-encounteredfromuserayleighdamping (Juan Camacho)
Rayleigh damping is commonly used in finite element models to provide energy dissipation from dynamic loads like earthquakes. However, under certain conditions like nonlinear analysis with softening behavior, the damping forces generated by a Rayleigh damping matrix can become unrealistically large compared to restoring forces, making the analysis unconservative. This paper demonstrates potential problems through examples and proposes bounding the damping forces as a remedy.
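Rayleigh damping assembles the damping matrix as C = αM + βK, with α and β chosen so that two target frequencies receive a desired modal damping ratio. A sketch of that calibration step (the 1 Hz / 10 Hz anchor frequencies and the 5% ratio below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def rayleigh_coefficients(omega1, omega2, zeta):
    """Solve for alpha, beta in C = alpha*M + beta*K such that both target
    circular frequencies omega1, omega2 receive modal damping ratio zeta,
    using zeta_i = alpha/(2*omega_i) + beta*omega_i/2."""
    A = np.array([[1.0 / (2.0 * omega1), omega1 / 2.0],
                  [1.0 / (2.0 * omega2), omega2 / 2.0]])
    alpha, beta = np.linalg.solve(A, np.array([zeta, zeta]))
    return alpha, beta
```

Between the two anchor frequencies the effective damping dips below the target and grows rapidly above them; that frequency dependence is one source of the unrealistically large damping forces the paper warns about.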
This document discusses linear and bilinear mappings. It begins by defining mapping as the process of associating elements between sets. Linear mapping is described as a mathematical function that transforms data using a matrix, having properties like linearity and additivity. It provides examples of linear mapping applications in areas like engineering, data analysis, and physics. Bilinear mapping is then introduced as a mapping that combines elements from two vector spaces to return a scalar value. The document notes differences between linear and bilinear mappings and provides concluding remarks about their respective applications and importance.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Similar to Nonlinear inversion of absorptive/dispersive wave field measurements: preliminary development and tests (Arthur B. Weglein, UH)
The document discusses (1) a new business model and IBM proposal for cost-effective seismic processing through cloud computing, (2) an invitation to participate in a pilot project using IBM cloud computing for multiple attenuation, (3) speedups of internal multiple attenuation achieved by ConocoPhillips, (4) upcoming deliverables from M-OSRP that address industry challenges and will benefit from IBM's cloud computing, and (5) an invitation to an upcoming technical review meeting.
Wavelet estimation for a multidimensional acoustic or elastic earth (Arthur Weglein)
A new and general wave-theoretical wavelet estimation method is derived. Knowing the seismic wavelet is important both for processing seismic data and for modeling the seismic response. To obtain the wavelet, both statistical (e.g., Wiener-Levinson) and deterministic (matching surface seismic to well-log data) methods are generally used. In the marine case, a far-field signature is often obtained with a deep-towed hydrophone. The statistical methods do not allow obtaining the phase of the wavelet, whereas the deterministic method obviously requires data from a well. The deep-towed hydrophone requires that the water be deep enough for the hydrophone to be in the far field and, in addition, that the reflections from the water bottom and structure do not corrupt the measured wavelet. None of the methods address the source array pattern, which is important for amplitude-versus-offset (AVO) studies.
The inverse scattering series for tasks associated with primaries: direct non-linear inversion of 1D elastic media (Arthur Weglein)
In this paper, research on direct inversion for two-parameter acoustic media (Zhang and Weglein, 2005) is extended to the three-parameter elastic case. We present the first set of direct non-linear inversion equations for 1D elastic media (i.e., depth-varying P-velocity, shear velocity, and density). The terms for moving mislocated reflectors are shown to be separable from the amplitude-correction terms. Although in principle this direct inversion approach requires all four components of elastic data, synthetic tests indicate that consistent value-added results may be achieved given only D̂PP measurements. We can reasonably infer that further value would derive from actually measuring D̂PP, D̂PS, D̂SP and D̂SS as the method requires. The method is direct, with neither model matching nor cost-function minimization.
Internal multiple attenuation using inverse scattering: Results from prestack 1 & 2D acoustic and elastic synthetics
R. T. Coates*, Schlumberger Cambridge Research; A. B. Weglein, Arco Exploration and Production Technology
Summary
The attenuation of internal multiples in a multidimensional earth is an important and longstanding problem in exploration seismics. In this paper we report the results of applying an attenuation algorithm based on the inverse scattering series to synthetic prestack data sets generated in one- and two-dimensional earth models. The attenuation algorithm requires no information about the subsurface structure or the velocity field. However, detailed information about the source wavelet is a prerequisite. An attractive feature of the attenuation algorithm is the preservation of the amplitude (and phase) of primary events in the data, thus allowing for subsequent AVO and other true-amplitude processing.
The document discusses the inverse scattering series (ISS) approach for eliminating internal multiples in land seismic data. ISS methods do not require information about the subsurface and can predict the traveltimes and amplitudes of all internal multiples. The authors apply 1D and 1.5D ISS algorithms to synthetic and field data from Saudi Arabia and achieve encouraging results, successfully attenuating most internal multiples while preserving primaries. However, further work is needed to more accurately predict multiples to handle interference between primaries and multiples.
This article provides an alternative perspective on full-waveform inversion (FWI) methods, arguing that current FWI approaches are indirect model-matching methods rather than direct inversion. The article aims to 1) caution against claims that FWI is a final solution, 2) propose a direct inverse approach, and 3) provide an overview comparing indirect FWI and direct inversion methods. Indirect FWI methods incorrectly use modeling equations run backwards rather than true direct inversion, limiting the understanding and effectiveness of the solutions.
The Inverse Scattering Series (ISS) is a direct inversion method for a multidimensional acoustic, elastic and anelastic earth. It communicates that all inversion processing goals can be achieved directly and without any subsurface information. Each task is reached through a task-specific subseries of the ISS. Using primaries in the data as subevents of the first-order internal multiples, the leading-order attenuator can predict the time of all the first-order internal multiples and is able to attenuate them.

However, the ISS internal multiple attenuation algorithm can be a computationally demanding method, especially in a complex earth. By using an approach that is based on two angular quantities and that was proposed in Terenghi et al. (2012), the cost of the algorithm can be controlled. The idea is to use the two angles as key control parameters, limiting their variation so as to disregard those calculated contributions of the algorithm that are negligible. Moreover, the range of integration can be chosen as a compromise between the required degree of accuracy and the computational time saving. This time-saving approach is presented.
Inverse scattering series for multiple attenuation: An example with surface a... (Arthur Weglein)
A multiple attenuation method derived from an inverse scattering series is described. The inversion series approach allows a separation of multiple attenuation subseries from the full series. The surface multiple attenuation subseries was described and illustrated in Carvalho et al. (1991, 1992). The internal multiple attenuation method consists of selecting the parts of the odd terms that are associated with removing only multiply reflected energy. The method, for both types of multiples, is multidimensional and does not rely on periodicity or differential moveout, nor does it require a model of the reflectors generating the multiples. An example with internal and surface multiples will be presented.
In this paper we present a multidimensional method for attenuating internal multiples that derives from an inverse scattering series. The method doesn't depend on periodicity or differential moveout, nor does it require a model for the multiple-generating reflectors.
Summary
Methods for removal of free-surface and internal multiples have been developed from both a feedback model approach and inverse scattering theory. While these two formulations derive from different mathematical viewpoints, the resulting algorithms for free-surface multiples are very similar. By contrast, the feedback and inverse scattering methods for internal multiples are totally different and have different requirements for subsurface information or interpretive intervention. The former removes all multiples related to a certain boundary with the aid of a surface integral along this boundary; the latter will predict and attenuate all internal multiples at the same time. In this paper, we continue our comparison study of these internal multiple attenuation methods; specifically, we examine two different realizations of the feedback method and the inverse scattering technique.
Wavelet estimation for a multidimensional acoustic or elastic earth- Arthur W...Arthur Weglein
A new and general wave theoretical wavelet estimation
method is derived. Knowing the seismic wavelet
is important both for processing seismic data and for
modeling the seismic response. To obtain the wavelet,
both statistical (e.g., Wiener-Levinson) and deterministic
(matching surface seismic to well-log data) methods
are generally used. In the marine case, a far-field
signature is often obtained with a deep-towed hydrophone.
The statistical methods do not allow obtaining
the phase of the wavelet, whereas the deterministic
method obviously requires data from a well. The
deep-towed hydrophone requires that the water be
deep enough for the hydrophone to be in the far field
and in addition that the reflections from the water
bottom and structure do not corrupt the measured
wavelet. None of the methods address the source
array pattern, which is important for amplitude-versus-
offset (AVO) studies
The document analyzes the reference velocity sensitivity of an elastic internal multiple attenuation algorithm. It first provides background on internal multiples and inverse scattering series methods for their attenuation. It then presents a 1.5D layered earth model and uses it to show that the elastic internal multiple algorithm can correctly predict arrival times without requiring an accurate reference velocity model, similarly to the acoustic case. The analysis involves deriving expressions for predicted internal multiples in the frequency-wavenumber domain and showing they depend only on the traveltimes of the primary reflections involved, not the reference velocities.
This document summarizes finite difference modeling methods used at M-OSRP. It discusses:
1) The second order time and fourth order space finite difference schemes used to model acoustic wave propagation.
2) How boundary conditions like Dirichlet/Neumann generate strong spurious reflections that can mask true events.
3) The importance of accurate source fields for modeling - better source fields lead to more accurate linear inversions and the ability to observe phenomena like polarity reversals in modeled data.
Inverse scattering series for multiple attenuation: An example with surface a...Arthur Weglein
A multiple attenuation method derived from an inverse scattering
series is described. The inversion series approach allows a
separation of multiple attenuation subseries from the full series.
The surface multiple attenuation subseries was described and illustrated
in Carvalho et al. (1991, 1992). The internal multiple
attenuation method consists of selecting the parts of the odd
terms that are associated with removing only multiply reflected
energy. The method, for both types of multiples, is multidimensional
and does not rely on periodicity or differential moveout,
nor does it require a model of the reflectors generating the multiples.
An example with internal and surface multiples will be
presented.
Green's Theorem Deghostin Algorthm- Dr. Arthur B. WegleinArthur Weglein
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita...Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
PPT on Direct Seeded Rice presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
hematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfSelcen Ozturkcan
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
Nonlinear inversion of absorptive/dispersive wave field measurements: preliminary development and tests
Kristopher A. Innanen and Arthur B. Weglein
Abstract
We consider elements of the nonlinear inversion of primaries with a particular em-
phasis on the viscoacoustic, or absorptive/dispersive case. Since the main ingredient
to higher-order terms in the inverse scattering series (as it is currently cast) is the
linear, or Born, inverse, we begin by considering its “natural” form, i.e. without the
corrections and/or assumptions discussed in Innanen and Weglein (2004). The absorp-
tive/dispersive linear inverse for a single interface is found to be a filtered step-function,
and for a single layer a set of increasingly smoothed step-functions. The “filters” char-
acterizing the model shape are analyzed.
We next consider the nature of the inversion subseries, which would accept the
linear inverse output and via a series of nonlinear operations transform it into a signal
with the correct amplitudes (i.e. the true Q profile). The nature of the data event
communication espoused by these nonlinear operations is discussed using a simple 1D
normal incidence physical milieu, and then we focus on the viscous (attenuating) case.
We show that the inversion subseries correctly produces the Q profile if the input signal
has had the attenuation (or viscous propagation effects) compensated. This supports
the existing ansatz (Innanen and Weglein, 2003) regarding the nature of a generalized
imaging subseries (i.e. not the inversion subseries) for the viscous case, as carrying out
the task of Q compensation.
1 Introduction
The linear viscoacoustic inversion procedure of Innanen and Weglein (2004), including the
“pure” linear inversion and the subsequent “patches” used to correct for transmission error,
may well be of practical use. The authors show that, numerically, the linear Q estimate
is close to the true value in cases where strong nonlinear effects (such as propagation in
the non-reference medium) are not present. However, in any real situation the nonlinear
nature of the relationship between the data and the parameters of interest will be present.
Patching the procedure, through the use of intermediate medium parameter estimates, to
correct for nonlinearity has been shown to be effective in numerical tests. However, this relies
on linear parameter and operator estimates, and hence error will accumulate; furthermore,
the application is ad hoc, and in any real medium the analogous procedure would require
medium assumptions.
Nonlinear inversion for wavespeed/Q MOSRP03
The linear inverse output is important on its own (especially when constructed using judiciously chosen low-frequency inputs), but also as the potential input to higher order
terms in the series. In either case, the quality and fidelity of the linear inversion results
are of great importance; what defines those qualities, however, may differ. For instance,
consider the imaging subseries of the inverse scattering series (Weglein et al., 2002; Shaw et
al., 2003). Its task is to take the linearized inversion, which consists of contrasts that (i)
are incorrectly located, and (ii) are of the incorrect amplitude (both in comparison to the
true model), and correct the locations of the contrasts without altering the amplitudes. This
task is accomplished, in 1D normal incidence examples, by weighting the derivatives of the
linearized inversion result by factors which rely on the incorrect (Born) amplitudes. In other
words, the imaging subseries relies on the correct provision of the incorrect model values. It
doesn’t matter that the linear inverse is not close to the true inverse – the inverse scattering
series expects this. What matters is that the incorrect linear results are of high fidelity, so
that a sophisticated nonlinear inversion procedure may function properly to correct them.
The full inverse scattering series provides a means to carry out viscoacoustic/viscoelastic in-
version, in the absence of assumptions about the structure of the medium, and without ad hoc
and theoretically inconsistent corrections and patches. This paper addresses some prelimi-
nary issues regarding the posing of the nonlinear inverse problem for absorptive/dispersive
media, including a discussion and demonstration of the “natural” form for the linear inverse
result – important because the linear inverse result is the input for all higher order terms
in the various subseries as they are currently cast. We also consider some general aspects
of the inversion subseries from the standpoint of the nonlinear, corrective communication
between events that is implied by the computation of the terms in this subseries. We finally
show results of the inversion subseries performing a nonlinear Q-estimation, but using a Q-
compensated data set, and comment on the implications for nonlinear absorptive/dispersive
inversion.
2 Input Issues: What Should a Viscous V1 Look Like?
Second- and higher-order terms in the inverse scattering series will work to get around the
problems of the linear inverse in a theoretically consistent way, one that does not require the
linearly solved-for model to be close to the true model. We need to understand what these
nonlinear methods expect as input. The subseries use structure and amplitude information
in the Born approximation to construct the desired subsurface model; what then of the
step-like structural assumptions we made previously? They are unlikely to be appropriate.
What we need is to postulate the “natural” form of the linear viscoacoustic inverse, the form
expected by the higher order terms in both subseries.
The “patch-up” structural form for the linear inversion of the 1D normal incidence problem
was adopted to permit two parameters to be solved-for. We can avoid this assumption
and investigate the natural linear inverse form by considering a two parameter-with-offset
inversion, as discussed in Innanen and Weglein (2004), or a linear single parameter inversion
from data associated with contrasts in Q only. For the sake of parsimony, we consider the
latter here.
2.1 Viscous V1 for a Single Interface
Consider first the single-interface configuration illustrated in Figure 1.

Figure 1: Single interface experiment involving a contrast in Q only.

The data reflected from such a scatterer is

D(k) = R_Q(k)\,e^{i2kz_1},  (1)

where, as ever, k = \omega/c_0, z_1 is the depth of the Q contrast, and the source and receiver are coincident at z_s = z_g = 0. Using terminology developed in Innanen and Weglein (2004), the reflection coefficient is
R_Q(k) = -\frac{F(k)/Q_1}{2 + F(k)/Q_1}.  (2)
An example data set is illustrated in Figure 2. General 1D normal incidence data equations
for wavespeed perturbations (α1) and Q perturbations (β1) were produced by Innanen and
Weglein (2004):
\alpha_1(-2k) - 2\beta_1(-2k)F(k) = 4\,\frac{D(k)}{i2k},  (3)
which can be altered to serve our current purposes by setting α1(−2k) ≡ 0. With this
alteration we have a single equation and a single unknown for each wavenumber k, and thus
a well-posed if not overdetermined problem. The revised equation is
\beta_1(-2k) = -2\,\frac{D(k)}{i2kF(k)}.  (4)
We can gain some insight by inserting the analytic form for data from a single interface case
(equation 1):
\beta_1(-2k) = -2\,\frac{R_Q(k)e^{i2kz_1}}{i2kF(k)} = -\frac{2R_Q(k)}{F(k)}\,\frac{e^{i2kz_1}}{i2k}.  (5)
Recognizing the frequency-domain form for a Heaviside function, and making use of the
convolution theorem, the linear form for β1(z) in the pseudo-depth z domain is
\beta_1(z) = L_1(z) \ast H(z - z_1),  (6)
where ∗ denotes convolution, and L1(z) is a spatial filter of the form
L_1(z) = \int_{-\infty}^{\infty} e^{-i2kz}\,L_1(-2k)\,d(-2k),  (7)

in which

L_1(-2k) = -\frac{2R_Q(k)}{F(k)} = \frac{2}{2Q_1 + \frac{i}{2} - \frac{1}{\pi}\ln\frac{k}{k_r}},  (8)
the latter arising from the substitution of the reflection coefficient. From the point of view
of equation (6), the linear inverse output will be a Heaviside function altered (filtered) by a
function that depends on the reflection coefficient and F(k); obviously, the more this filter
L1(z) differs from a delta function, the more the linear absorptive/dispersive reconstruc-
tion will be perturbed away from the (correct) step-like form. Figures 3 – 5 illustrate this
combination and the resulting form for β1(z) for several values of Q1.
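The algebra above can be checked numerically. The following sketch (plain Python; the names F, RQ and L1_spectrum, the reference wavenumber kr and the Q1 values are ours, introduced for illustration) verifies the identity L_1(-2k) = -2R_Q(k)/F(k) = 2/(2Q_1 + F(k)) and shows that at weak contrast the filter spectrum is nearly flat, i.e. L_1(z) is close to a scaled delta function:

```python
import math

def F(k, kr):
    # Absorptive/dispersive function F(k) = i/2 - (1/pi) ln(k/kr)
    return 0.5j - math.log(k / kr) / math.pi

def RQ(k, Q1, kr):
    # Viscoacoustic reflection coefficient, equation (2)
    f = F(k, kr)
    return -(f / Q1) / (2 + f / Q1)

def L1_spectrum(k, Q1, kr):
    # Filter spectrum L1(-2k) = -2 RQ(k)/F(k), equation (8)
    return -2 * RQ(k, Q1, kr) / F(k, kr)

# Assumed illustrative values: Q1 = 100, reference wavenumber kr = 1.0
Q1, kr = 100.0, 1.0
for k in (0.5, 1.0, 2.0):
    lhs = L1_spectrum(k, Q1, kr)
    rhs = 2 / (2 * Q1 + F(k, kr))   # closed form of equation (8)
    assert abs(lhs - rhs) < 1e-12
    # At this weak contrast the spectrum is nearly constant (delta-like filter)
    print(k, abs(lhs))
```

Lowering Q1 makes the ln(k/kr) term of F(k) comparable to 2Q_1, the spectrum acquires visible wavenumber dependence, and the filtered Heaviside departs from a clean step, as in Figure 5.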
Figure 2: Example numerical data for a single interface experiment with acoustic reference/viscoacoustic
non-reference contrast.
At large Q contrast (i.e. Figure 5) there is a distinct effect on the spatial distribution of the
recovered linear perturbation. Nevertheless, compared to the result that would be produced
by simply integrating the data D(k) (in which there is a strong filtering RQ(k) upon what
would usually be a delta-like event) the effect is small. In other words, there has been an
attempt by the linear inverse formalism to correct for the filtering effects of RQ(k). By
inspection of equation (4), the correction takes the form of spectral division of the data
D(k) by F(k).
Figure 3: Recovered single parameter viscoacoustic model, Q1 = 100: (a) Heaviside component with step at
z1, (b) Filter L1(z) to be convolved with Heaviside component, (c) β1(z) = L1(z)∗H(z −z1). At this contrast
level, L1(z) is close to a delta-function and there is little effect other than a scaling of the Heaviside function.
Figure 4: Recovered single parameter viscoacoustic model, Q1 = 50: (a) Heaviside component with step at
z1, (b) Filter L1(z) to be convolved with Heaviside component, (c) β1(z) = L1(z)∗H(z −z1). At this contrast
level, there is a slight visible effect on the Heaviside, including a slight “droop” with depth.
Figure 5: Recovered single parameter viscoacoustic model, Q1 = 10: (a) Heaviside component with step at
z1, (b) Filter L1(z) to be convolved with Heaviside component, (c) β1(z) = L1(z)∗H(z −z1). At this contrast
level, L1(z) produces a noticeable effect on the spatial structure of the recovered parameter distribution.
2.2 Viscous V1 for Multiple Interfaces
The linear inversion for a single-interface β1(z) produces a filtered Heaviside function whose
spectrum is an imperfect correction of the first integral of the viscoacoustic reflection coeffi-
cient. Numerical examples indicate that the correction is good up to high Q contrast, after
which a distinct impact on the recovered spatial distribution of β1(z) is noticed. Any deeper
reflectors will produce data events which are likewise determined by absorptive/dispersive
reflectivity, but more importantly will have been affected by viscous propagation.
Figure 6: Single layer experiment involving contrasts in Q only.
Consider the single viscoacoustic layer example of Figure 6, which produces data qualitatively
like the example in Figure 7. The data from such an experiment has the form
D(k) = R_{Q1}(k)\,e^{i2kz_1} + R'_{Q2}(k)\,e^{i2kz_1}\,e^{i2k_1(z_2 - z_1)},  (9)
, (9)
where k1 is the propagation wavenumber within the layer, generally for this example
k_j = \frac{\omega}{c_0}\left(1 + \frac{F(k)}{Q_j}\right),  (10)
(notice that the wavespeed remains c0), and the reflection coefficient is
R'_{Q2}(k) = [1 - R_{Q1}(k)]^2\,\frac{k_1 - k_2}{k_1 + k_2} = [1 - R_{Q1}(k)]^2\,\frac{F(k)\left(\frac{1}{Q_1} - \frac{1}{Q_2}\right)}{2 + F(k)\left(\frac{1}{Q_1} + \frac{1}{Q_2}\right)}.  (11)
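The second equality in equation (11) follows from substituting the wavenumbers of equation (10) into (k_1 - k_2)/(k_1 + k_2); the transmission factor [1 - R_{Q1}(k)]^2 multiplies both forms and cancels from the comparison. A minimal numerical check, with assumed values for k, Q_1 and Q_2 (variable names are ours):

```python
import math

# F(k) = i/2 - (1/pi) ln(k/kr), with an assumed reference wavenumber kr = 1
def F(k, kr=1.0):
    return 0.5j - math.log(k / kr) / math.pi

k, Q1, Q2 = 2.0, 50.0, 10.0   # assumed illustrative values
f = F(k)

# Propagation wavenumbers of equation (10), using omega/c0 = k
k1 = k * (1 + f / Q1)
k2 = k * (1 + f / Q2)

# Interface factor of equation (11), written both ways
lhs = (k1 - k2) / (k1 + k2)
rhs = f * (1 / Q1 - 1 / Q2) / (2 + f * (1 / Q1 + 1 / Q2))
assert abs(lhs - rhs) < 1e-12
print(lhs)
```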
By separating the “ballistic” component of k1 from the absorptive/dispersive component,
the data may be re-written as
D(k) = R_{Q1}(k)\,e^{i2kz_1} + R'_{Q2}(k)\,e^{i2k\frac{F(k)}{Q_1}(z_2 - z_1)}\,e^{i2kz_2}.  (12)
We can again gain some insight into the form of β1(z) by substituting in the analytic data
in equation (12):
\beta_1(-2k) = -2\,\frac{D(k)}{i2kF(k)} = -\frac{2R_{Q1}(k)}{F(k)}\,\frac{e^{i2kz_1}}{i2k} - \frac{2R'_{Q2}(k)\,e^{i2k\frac{F(k)}{Q_1}(z_2 - z_1)}}{F(k)}\,\frac{e^{i2kz_2}}{i2k}
= L_1(-2k)\,\frac{e^{i2kz_1}}{i2k} + L_2(-2k)\,\frac{e^{i2kz_2}}{i2k},  (13)
where
L_1(-2k) = -\frac{2R_{Q1}(k)}{F(k)},
L_2(-2k) = -\frac{2R'_{Q2}(k)\,e^{i2k\frac{F(k)}{Q_1}(z_2 - z_1)}}{F(k)},  (14)
are once again filters whose inverse Fourier transforms L1(z) and L2(z) act upon Heaviside
functions:
\beta_1(z) = L_1(z) \ast H(z - z_1) + L_2(z) \ast H(z - z_2).  (15)
The second filter L2 contains components that will impart effects of attenuative propagation
onto the linear inverse result – see equation (14). Meanwhile, the only “processing” that
occurs is the deconvolution of F(k), which we have seen goes some way towards correcting
for the phase/amplitude effects of the viscoacoustic reflectivity. The linear inversion will
therefore include the unmitigated effects of propagation in its output.
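The propagation factor e^{i2kF(k)/Q_1(z_2 - z_1)} in L_2 is what carries these effects. Since Im F(k) = 1/2, its magnitude is e^{-k(z_2 - z_1)/Q_1}: an exponential low-pass in wavenumber, which is the origin of the smoothing of the deeper step. A sketch with assumed Q_1 and layer thickness (the values and names are illustrative):

```python
import cmath, math

# F(k) = i/2 - (1/pi) ln(k/kr), assumed reference wavenumber kr = 1
def F(k, kr=1.0):
    return 0.5j - math.log(k / kr) / math.pi

Q1, dz = 20.0, 500.0   # assumed quality factor and layer thickness z2 - z1
for k in (0.001, 0.005, 0.01):
    # Propagation term appearing in L2, equation (14)
    factor = cmath.exp(2j * k * F(k) / Q1 * dz)
    # Only Im F(k) = 1/2 affects the magnitude: |factor| = exp(-k dz / Q1)
    assert abs(abs(factor) - math.exp(-k * dz / Q1)) < 1e-12
    print(k, abs(factor))
```

The real part of F(k), the dispersion term, alters only the phase of the factor, shifting the deeper event rather than smoothing it.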
Figures 8 – 9 illustrate the linear reconstruction of the two-event, single layer problem. Not
surprisingly there is a characteristic smoothness – and error – in the linear inverse results
below the first interface.
In summary, we may utilize a one-parameter, 1D normal incidence absorptive/dispersive
linear inverse milieu to investigate the “natural” spatial form of the viscoacoustic Born
approximation. This is useful in that (1) such a form is what the higher order terms of
Figure 7: Example numerical data for a single layer experiment with acoustic reference/viscoacoustic non-
reference contrast.
the series will be expecting – also, it is the normal-incidence version of the form we may
expect from the multi-parameter, 1D-with offset inversion output developed theoretically by
Innanen and Weglein (2004) – and (2) we see what remains to be done by the higher-order
terms in the series.
The results are consistent with the statements of Innanen (2003) and Innanen and Weglein
(2003); namely, that amplitude adjustment or [nonlinear] Q-estimation is required, and that
depropagation, or nonlinear Q-compensation (in which the smooth edges of Figures 8 – 9
are sharpened) is also required. The balance of this paper is concerned with the former task;
the latter is considered future work.
3 Nonlinear Event Communication in Inversion
At the same time that we consider specifically viscoacoustic inputs to the nonlinear inversion
procedures afforded us by the inverse scattering series, we further consider some general
aspects of the inversion subseries – i.e., those component terms of the series which devote
themselves to the task of parameter identification. In particular, we comment in this section
on the nature of the communication between events that occurs during the computation of
the inversion subseries, and what that implies regarding the relative importance of nonlinear
combinations of events.
Consider momentarily the relationship between a simple 1D normal incidence acoustic data
set and its corresponding Born inverse. Imagine the data set to be made up of the reflected
primaries associated with a single layer of wavespeed c1, bounded from above by a half space
of wavespeed c0, and below by a half space of wavespeed c2. If the interfaces are at z1 > 0
and z2 > z1, and the source and receiver are at zs = zg = 0, the data is
D(k) = R_1\,e^{i2\frac{\omega}{c_0}z_1} + R'_2\,e^{i2\frac{\omega}{c_0}z_1}\,e^{i2\frac{\omega}{c_1}(z_2 - z_1)},  (16)

where R'_2 = (1 - R_1)^2 R_2 contains the effects of shallower transmission, and R_1 and R_2 are the reflection coefficients of the upper and lower interfaces respectively. If c_0 is taken
Figure 8: Illustration of a recovered single parameter viscoacoustic model, Q1 = 100, Q2 = 20: (a) Filter
L1(z) to be convolved with Heaviside component H(z − z1), (b) Filter L2(z) to be convolved with Heaviside
component H(z − z2), (c) β1(z) = L1(z) ∗ H(z − z1) + L2(z) ∗ H(z − z2). The propagation effects on the
estimate of the lower interface are apparent.
to be the wavespeed everywhere, and pseudo-depths z'_1 and z'_2 are assigned to the events using half the measured traveltime and this reference wavespeed, equation (16) may be written

D(k) = R_1\,e^{i2kz'_1} + R'_2\,e^{i2kz'_2},  (17)
where k = ω/c0. We now drop the primes in z for convenience, but remain in the pseudo-
depth domain for the duration of this section. In the space domain this reads
D(z) = R_1\,\delta(z - z_1) + R'_2\,\delta(z - z_2).  (18)
Using standard terminology, the Born inverse associated with the above data set, \alpha_1(z) = V_1(k, z)/k^2, is given by

\alpha_1(z) = 4\int_0^z D(z'')\,dz'' = 4R_1 H(z - z_1) + 4R'_2 H(z - z_2).  (19)
Figure 9: Illustration of a recovered single parameter viscoacoustic model, Q1 = 20, Q2 = 10: (a) Filter
L1(z) to be convolved with Heaviside component H(z − z1), (b) Filter L2(z) to be convolved with Heaviside
component H(z − z2), (c) β1(z) = L1(z) ∗ H(z − z1) + L2(z) ∗ H(z − z2). The propagation effects on the
estimate of the lower interface are increasingly destructive, and produce a characteristic smoothing.
The inversion subseries then acts upon this quantity nonlinearly:

\alpha_{INV}(z) = \sum_{j=1}^{\infty} A_j\,\alpha_1^j(z),  (20)

where A_j = j(-1)^{j-1}/4^{j-1}, i.e. by summing weighted powers of \alpha_1(z).
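For a single acoustic interface the linear result is \alpha_1(z) = 4R_1 H(z - z_1), and the sum (20) can be checked directly: it should converge to the true perturbation \alpha = 1 - c_0^2/c_1^2. A sketch with assumed wavespeeds (the numbers are illustrative):

```python
# Partial sums of the inversion subseries, equation (20), for one interface.
# alpha_1 = 4 R1 below the interface; A_j = j (-1)^(j-1) / 4^(j-1).

c0, c1 = 1500.0, 1800.0           # assumed reference and actual wavespeeds
R1 = (c1 - c0) / (c1 + c0)        # 1D normal-incidence reflection coefficient
alpha1 = 4.0 * R1                 # Born (linear) amplitude below z1

alpha_inv = sum(j * (-1) ** (j - 1) / 4 ** (j - 1) * alpha1 ** j
                for j in range(1, 60))

alpha_true = 1.0 - (c0 / c1) ** 2  # true perturbation, alpha = 1 - c0^2/c1^2
assert abs(alpha_inv - alpha_true) < 1e-12
print(alpha_inv)
```

Convergence is geometric in R_1, since A_j \alpha_1^j = 4j(-1)^{j-1}R_1^j; the multi-layer bookkeeping of the following paragraphs organizes the same weighted powers in terms of data events.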
Let us begin by considering the form of the Born inverse result of equation (19). The
Heaviside sum is in some sense not a good way of expressing the local amplitude of α1(z).
Another way of writing it is
\alpha_1(z) = 4R_1\,[H(z - z_1) - H(z - z_2)] + (4R_1 + 4R'_2)\,H(z - z_2),  (21)
which now expressly gives the local amplitudes. This is somewhat more edifying in the sense
that it explicitly makes reference to the linear (additive) communication between events
implied by the linear inversion for the lower layer (i.e. R_1 and R'_2). Let us make this even more explicit with a definition. Let:

\alpha_1(z) = D_1\,[H(z - z_1) - H(z - z_2)] + (D_1 + D_2)\,H(z - z_2),  (22)

where D_1 = 4R_1, D_2 = 4R'_2, or more generally D_n = 4R'_n, the subscript referring explicitly to the event in the data that contributed to that portion of the inverse. Equation (22)
straightforwardly generalizes to N layers, if we define H(z - z_{N+1}) \equiv 0:

\alpha_1(z) = \sum_{n=1}^{N}\left(\sum_{i=1}^{n} D_i\right)[H(z - z_n) - H(z - z_{n+1})].  (23)
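Equation (23) says the linear result is piecewise constant: within layer n, \alpha_1 takes the running-sum value D_1 + ... + D_n. A small sketch (the function name alpha1_profile and the event amplitudes are hypothetical, introduced here for illustration):

```python
# Piecewise-constant evaluation of equation (23).

def alpha1_profile(z, depths, D):
    """Evaluate alpha_1 at pseudo-depth z for interface depths z_1 < z_2 < ...
    and data-event amplitudes D_n = 4 R'_n. Each Heaviside that has switched
    on contributes its D_n; H(z - z_{N+1}) is identically zero."""
    return sum(Dn for zn, Dn in zip(depths, D) if z >= zn)

# Hypothetical two-interface example
depths = [300.0, 700.0]
D = [0.5, 0.25]

assert alpha1_profile(100.0, depths, D) == 0.0    # above both interfaces
assert alpha1_profile(500.0, depths, D) == 0.5    # D_1 only
assert alpha1_profile(900.0, depths, D) == 0.75   # D_1 + D_2
```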
Figure 10 illustrates these definitions for the two-layer case. The Heavisides disallow interaction between the terms in brackets (for different n values) under exponentiation, so

\alpha_1^j(z) = \sum_{n=1}^{N}\left(\sum_{i=1}^{n} D_i\right)^j [H(z - z_n) - H(z - z_{n+1})].  (24)
This allows us the flexibility to consider the local behaviour of the inversion result αINV (z)
directly in terms of the data events Dk that contribute to it. The amplitude of the j’th term
in the inversion subseries for the correction of the m’th layer is
A_j \left( \sum_{i=1}^{m} D_i \right)^j.  (25)
For the second layer, this amounts to

A_j (D_1 + D_2)^j,  (26)

and for the third layer,

A_j (D_1 + D_2 + D_3)^j.  (27)
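The layer-by-layer behaviour of equations (22)–(27) is easy to verify numerically: because the Heaviside differences in equation (24) select disjoint depth intervals, raising α1(z) to the j'th power simply raises each layer's summed amplitude to that power. A minimal sketch, with illustrative amplitudes D1 and D2 (not taken from any example in this paper):

```python
D1, D2 = 0.3, 0.2          # illustrative event amplitudes (D_n = 4R'_n)
z1, z2 = 300.0, 600.0      # interface depths

def H(z, z0):
    # Heaviside step function
    return 1.0 if z >= z0 else 0.0

def alpha1(z):
    # equation (22): the linear inverse as a piecewise-constant profile
    return D1 * (H(z, z1) - H(z, z2)) + (D1 + D2) * H(z, z2)

def alpha1_pow(z, j):
    # equation (24): the j'th power acts on each layer's summed amplitude,
    # since the Heaviside differences never overlap
    return D1**j * (H(z, z1) - H(z, z2)) + (D1 + D2)**j * H(z, z2)
```

For any depth z and power j, alpha1(z)**j agrees with alpha1_pow(z, j), which is the content of equation (24).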
Notice that in terms of Dn, the computation of the inversion subseries will produce terms
similar to the exponentiation of a multinomial. Therefore each term of the inversion subseries
(in this simple 1D normal incidence case) will involve a set of subterms whose coefficients
are given by the multinomial formula. For instance, the first three terms for the second layer
are
A_1 (D_1 + D_2),
A_2 (D_1^2 + 2 D_1 D_2 + D_2^2),
A_3 (D_1^3 + 3 D_1^2 D_2 + 3 D_1 D_2^2 + D_2^3),  (28)
etc., i.e. they depend strongly on the familiar coefficients of the binomial formula:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
...
(29)
Meanwhile the third layer follows trinomial-like terms, and so forth. This has an interesting
consequence. Consider the third-order term of the third layer:
A_3 (D_1^3 + D_2^3 + D_3^3 + 3 D_1^2 D_2 + 3 D_1^2 D_3 + 3 D_2^2 D_1 + 3 D_2^2 D_3 + 3 D_3^2 D_1 + 3 D_3^2 D_2 + 6 D_1 D_2 D_3).  (30)
In equation (30): all else being equal (i.e. if we assume that events D1, D2, and D3 are of
approximately equal size), the term which combines all events evenly, D1D2D3, can be seen
to be six times as important as a term which involves a single event only, e.g. D_1^3. The
multinomial formula therefore not only informs us about the distribution of communicating
data events produced by the inversion subseries, but it also predicts the relative importance
of one kind of communication over another. If we believe what we see in equation (30),
nonlinear interaction terms which use information as evenly-distributed as possible from all
data events above the layer of interest will dominate.
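This dominance can be read directly off the multinomial coefficients. A short sketch that recovers the coefficients of equation (30) by brute-force expansion of (D1 + D2 + D3)^3:

```python
from itertools import product
from collections import Counter

# Expand (D1 + D2 + D3)^3 by brute force: each of the 3^3 ordered
# factor choices contributes one monomial; sorting the indices makes
# e.g. D1*D2*D1 and D1*D1*D2 count toward the same coefficient.
coeffs = Counter(tuple(sorted(choice)) for choice in product((1, 2, 3), repeat=3))

even_term = coeffs[(1, 2, 3)]     # coefficient of D1*D2*D3
single_event = coeffs[(1, 1, 1)]  # coefficient of D1^3
```

The evenly mixed term D1 D2 D3 carries coefficient 6, versus 1 for any single-event cube, matching equation (30).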
Since the binomial/multinomial formulas are used to model the relative probabilities of the
outcomes of random experiments, the inter-event communication distribution problem is
analogous to any number of well-known betting games, such as throwing dice or tossing
coins. For instance, the relative probabilities of observing heads (h) vs. tails (t) in n coin
tosses are given by the binomial formula:

(t + h)^n = \sum_{j=0}^{n} \frac{n!}{j!(n-j)!} \, t^{n-j} h^j,  (31)

and the rolling of dice is similarly given by the multinomial formula for a 6-term generating
function.
The inversion subseries is of course anything but random or stochastic. The point of the
analogy is to show that, especially at high order, the contribution of evenly distributed
nonlinear data communication is greater than that of any other type. For the two event case,
the relative importance of even-distribution over biased distribution is precisely the same as
the relative probability of tossing equal numbers of heads and tails over large numbers of
either. In other words, the nonlinear inversion terms which simultaneously make use of as
many data events as are available dwarf any other contribution. This behaviour may be
speaking of some intuitively reasonable feature of the inversion subseries, that in inverting
for the amplitude of a deep layer, the global effect of what overlies it is far more important
than any one (or few) local effect(s).
Figure 10: Synthetic data (a) corresponding to a two-layer model; (b) the Born approximation α1(z). Local
increases or decreases in the linear inversion due to specific data events are labelled Dn.
4 Nonlinear Q-estimation
In this section some properties of the inversion subseries as it addresses the reconstruction
of attenuation contrasts from 1D seismic data are considered. A peculiar data set is created
for this purpose, one that is designed not to reflect reality, but rather to examine certain
aspects of the inverse scattering series and only those aspects. Also, it involves the choice of
a simplified model of attenuation. We begin with a description of the physical framework of
this analysis.
We use the following two dispersion relations:
k = \frac{\omega}{c_0}, \qquad k(z) = \frac{\omega}{c_0} \left[ 1 + i\beta(z) \right],  (32)
the first being that of the reference medium, and the second that of a non-reference medium
in which the wavespeed may vary, as may the parameter β, which, like the wavespeed,
is considered an inherent property of the medium in which the wave propagates. It is
responsible not only for altered reflection and transmission properties of a wave, but also
for the amplitude decay associated with intrinsic friction. One could reasonably argue
that β is related to the better-known parameter Q by β = 1/(2Q), because of their similar
roles in changing the amplitude spectrum of an impulse. However, most Q models are
more sophisticated, in that they utilize dispersion to ensure the causality of the medium
response. We make use of this “friction” model, which captures much of the key behaviour
of attenuating media, to take advantage of its simplicity.
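The sense in which the friction model attenuates at a cycle-independent rate, and the identification β = 1/(2Q), can be seen from the plane wave exp(ikz): the imaginary part of k(z) in equation (32) gives the decay factor exp(-ωβz/c0). A sketch, with illustrative values of β, c0 and the two frequencies (none taken from this paper's examples):

```python
import math

C0 = 1500.0  # illustrative reference wavespeed (m/s)

def decay(omega, z, beta):
    # amplitude of exp(i k z) with k = (omega/c0)(1 + i*beta):
    # |exp(i k z)| = exp(-omega * beta * z / c0)
    return math.exp(-omega * beta * z / C0)

beta = 0.025  # beta = 1/(2Q), i.e. Q = 20

# decay accumulated over one wavelength (one cycle) is exp(-2*pi*beta),
# whatever the frequency
w_low, w_high = 2 * math.pi * 10.0, 2 * math.pi * 50.0
lam_low = 2 * math.pi * C0 / w_low
lam_high = 2 * math.pi * C0 / w_high
```

decay(w_low, lam_low, beta) and decay(w_high, lam_high, beta) are identical, both equal to exp(-2πβ); at a fixed depth, by contrast, the higher frequency is attenuated more, which is why transmitted pulses broaden.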
For this investigation, we restrict attention to models which contain variations in β only.
Waves propagate with wavespeed c0 everywhere.
4.1 A Further Complex Reflection Coefficient
Contrasts in β only will produce reflections with strength

R_n = \frac{k_{n-1} - k_n}{k_{n-1} + k_n} = \frac{i(\beta_{n-1} - \beta_n)}{2 + i(\beta_{n-1} + \beta_n)},  (33)

and, since we assume an acoustic reference medium (β_0 = 0),

R_1 = \frac{-i\beta_1}{2 + i\beta_1}.  (34)
As in the dispersive models used elsewhere in this paper, the reflection coefficient is complex.
Without dispersion, the complex reflection coefficient implies the combination of a frequency-
independent phase rotation and a real reflection coefficient: R_1 = |R_1| e^{i\theta}. This is akin to
the reflection coefficients discussed in Born and Wolf (1999).
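The frequency-independent phase rotation can be made concrete by evaluating equation (34). A sketch for the illustrative contrast β1 = 0.15 (one of the model values used in the examples below):

```python
import cmath

def R1(beta1):
    # equation (34): reflection from an acoustic reference medium
    # (beta_0 = 0) onto a viscous half-space with parameter beta1
    return -1j * beta1 / (2 + 1j * beta1)

r = R1(0.15)
magnitude = abs(r)      # |R1| = beta1 / sqrt(4 + beta1**2)
theta = cmath.phase(r)  # frequency-independent phase rotation (radians)
```

Neither |R1| nor θ depends on ω, so the reflected event is phase-rotated but not dispersed.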
4.2 A Non-attenuating Viscoacoustic Data Set
A further simplification has to do with the measured data, rather than the medium itself.
As in the acoustic case, true viscoacoustic data will be a series of transient pulses that
return, delayed by an appropriate traveltime and weighted by a combination of reflection
and (shallower) transmission coefficients. But unlike an acoustic medium, the propagation
between contrasts also determines the character of the events. Each pulse that has travelled
through a medium with β ≠ 0 will have had its amplitude suppressed at a cycle-independent
rate, and will therefore be broadened. Although this is a fundamental aspect of viscoacoustic
wave propagation (and indeed one of the more interesting aspects for data processing), for
the purposes of this investigation we suppress it in creating the data. The results to follow
will help explain why.
The perturbation γ is based on an attenuative propagation law, and is itself complex:
\gamma(z) = 1 - \frac{k^2(z)}{k_0^2} = -2i\beta(z) + \beta^2(z).  (35)

If the propagation effects are suppressed (or have been corrected for already), then the Born
approximation of γ(z), namely γ1(z), is
\gamma_1(z) = A_1 H(z - z_1) + A_2 H(z - z_2) + \ldots,  (36)

where the A_n are given by

A_n = 4 R_n \prod_{j=1}^{n-1} \left( 1 - R_j^2 \right),  (37)
and the reflection coefficients Rj(k) are given by equations (33) and (34).
In the previous section it was noted that the contributing terms to any layer in the inversion
subseries can be found by invoking the multinomial formula, and the contributing terms to
a two-layer model are produced by the binomial formula (which has a much simpler
computable form). We assume a two-layer model, in which the reference medium (c = c0,
β = 0) is perturbed below z1 such that c = c0 and β = β1, and again below z2 such that
c = c0 and β = β2. The depths z1 and z2 are chosen to be 300 m and 600 m respectively.
This results in
\gamma_1(z) = \frac{-4i\beta_1}{2 + i\beta_1} H(z - z_1) + \frac{4i(\beta_1 - \beta_2)}{2 + i(\beta_1 + \beta_2)} \left[ 1 + \frac{\beta_1^2}{(2 + i\beta_1)^2} \right] H(z - z_2).  (38)
The n’th power of the Born approximation for either layer of interest is computable by
appealing to the binomial formula. The result of the inversion subseries on this two-interface
model is therefore

\gamma_{INV}(z) = A_1^{INV} H(z - z_1) + A_2^{INV} H(z - z_2),  (39)

where

A_1^{INV} = \sum_{j=1}^{\infty} \frac{j(-1)^{j-1}}{4^{j-1}} A_1^j,
\qquad
A_2^{INV} = \sum_{j=1}^{\infty} \frac{j(-1)^{j-1}}{4^{j-1}} \sum_{k=1}^{j} \frac{j!}{k!(j-k)!} A_1^{j-k} A_2^k,  (40)
and where
A_1 = \frac{-4i\beta_1}{2 + i\beta_1},
\qquad
A_2 = \frac{4i(\beta_1 - \beta_2)}{2 + i(\beta_1 + \beta_2)} \left[ 1 + \frac{\beta_1^2}{(2 + i\beta_1)^2} \right]  (41)
are the amplitudes of the Born approximation in equation (36). Figures 11 – 13 demonstrate
numerically the convergence of these expressions (the real part, for the sake of illustration)
towards the true value. There is clear oscillation over the first three or so terms for the
second layer, but the oscillations settle down to close to the correct value with reasonable
speed. This suggests an important aspect of going beyond the Born approximation in Q
estimation – the deeper layers, whose amplitudes are related not only to their own Q value
but to those above them, require the invocation of several nonlinear orders to achieve their
correct value.
Finally, Figures 14 – 16 illustrate the same inversion results as those seen in Figures 11 –
13, but in the context of the spatial distribution of the reconstructed perturbation.
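The convergence behaviour seen in Figures 11 – 16 can be sketched numerically. The following builds the Born amplitudes from equations (33) and (37) and sums the subseries of equation (40) directly, truncated at a finite order; the model values β1 = 0.15, β2 = 0.25 are those of Figure 11:

```python
def reflection(b_above, b_below):
    # equation (33): reflection coefficient for a contrast in beta only
    return 1j * (b_above - b_below) / (2 + 1j * (b_above + b_below))

def born_amplitudes(betas):
    # equation (37): A_n = 4 R_n * prod_{j<n} (1 - R_j^2), with beta_0 = 0
    layered = [0.0] + list(betas)
    amps, transmission = [], 1.0
    for n in range(1, len(layered)):
        R = reflection(layered[n - 1], layered[n])
        amps.append(4 * R * transmission)
        transmission *= 1 - R**2
    return amps

def subseries(x, order=100):
    # sum_j j(-1)^(j-1) x^j / 4^(j-1): the inversion subseries of
    # equations (20) and (40), applied to a summed amplitude x
    return sum(j * (-1) ** (j - 1) * x**j / 4 ** (j - 1)
               for j in range(1, order + 1))

b1, b2 = 0.15, 0.25
A1, A2 = born_amplitudes([b1, b2])
gamma_layer1 = subseries(A1)       # A1_INV
gamma_layer2 = subseries(A1 + A2)  # A1_INV + A2_INV, by the binomial theorem
```

gamma_layer1 recovers the true perturbation -2iβ1 + β1² essentially exactly, while gamma_layer2 lands close to, but not exactly on, -2iβ2 + β2² — the slight lower-layer mismatch visible in Figures 12 and 13.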
[Plot: (a) layer 1 γINV and (b) layer 2 γINV versus iteration number/inversion order.]
Figure 11: Convergence of the viscoacoustic inversion subseries for a two-interface model, consisting of
contrasts in (friction-model) attenuation parameter. The real component of the perturbation is plotted against
inversion order or iteration number (connected dots) and with respect to the true perturbation value (solid).
(a) layer 1 inversion. (b) layer 2 inversion. For model contrasts β1 = 0.15, β2 = 0.25, γINV (z) is close to
γ(z) for both layers.
[Plot: (a) layer 1 γINV and (b) layer 2 γINV versus iteration number/inversion order.]
Figure 12: Convergence of the viscoacoustic inversion subseries for a two-interface model, consisting of
contrasts in (friction-model) attenuation parameter. The real component of the perturbation is plotted against
inversion order or iteration number (connected dots) and with respect to the true perturbation value (solid).
(a) layer 1 inversion. (b) layer 2 inversion. For model contrasts β1 = 0.35, β2 = 0.55, γINV (z) differs
slightly from γ(z) in the lower layer.
[Plot: (a) layer 1 γINV and (b) layer 2 γINV versus iteration number/inversion order.]
Figure 13: Convergence of the viscoacoustic inversion subseries for a two-interface model, consisting of
contrasts in (friction-model) attenuation parameter. The real component of the perturbation is plotted against
inversion order or iteration number (connected dots) and with respect to the true perturbation value (solid).
(a) layer 1 inversion. (b) layer 2 inversion. For model contrasts β1 = 0.55, β2 = 0.35, γINV (z) again differs
slightly from γ(z) in the lower layer.
[Plot: perturbations γ1 through γ5 versus pseudo-depth (m), panels (a)–(e).]

Figure 14: Convergence of the viscoacoustic inversion subseries for a two-interface model, consisting of
contrasts in (friction-model) attenuation parameter. Depth profile of the real component of the constructed
perturbation (solid) is plotted against the true perturbation (dotted) for 5 iterations. (a) iteration 1 (Born
approximation) – (e) iteration 5. Model inputs: β1 = 0.15, β2 = 0.25.
[Plot: perturbations γ1 through γ5 versus pseudo-depth (m), panels (a)–(e).]

Figure 15: Convergence of the viscoacoustic inversion subseries for a two-interface model, consisting of
contrasts in (friction-model) attenuation parameter. Depth profile of the real component of the constructed
perturbation (solid) is plotted against the true perturbation (dotted) for 5 iterations. (a) iteration 1 (Born
approximation) – (e) iteration 5. Model inputs: β1 = 0.35, β2 = 0.55.
[Plot: perturbations γ1 through γ5 versus pseudo-depth (m), panels (a)–(e).]

Figure 16: Convergence of the viscoacoustic inversion subseries for a two-interface model, consisting of
contrasts in (friction-model) attenuation parameter. Depth profile of the real component of the constructed
perturbation (solid) is plotted against the true perturbation (dotted) for 5 iterations. (a) iteration 1 (Born
approximation) – (e) iteration 5. Model inputs: β1 = 0.55, β2 = 0.35.
The success of the inversion subseries in reconstructing the amplitudes of the attenuation
contrasts in the examples of the previous section is worth pursuing, since the success was
achieved using a strange input data set, one in which the propagation effects of the
medium had been stripped off a priori. How should this success be interpreted?
It follows that if the inversion subseries correctly uses the above data set to estimate Q – as
opposed to one in which the propagation effects were present – then the subseries must have
been expecting to encounter something of that nature.
The apparent conclusion is that the other, non-inversion subseries terms should be expected
to provide just such an attenuation-compensated Born approximation. This is in agreement
with the conclusions reached by Innanen and Weglein (2003), in which it appeared that the
terms representing the viscous analogy to the imaging subseries would be involved in the
more general task of removing the effects of propagation (reflector location and attenua-
tion/dispersion) on the wave field.
More generally, these results suggest that for these simple cases at least (1D normal inci-
dence acoustic/viscoacoustic) we see distinct evidence that the inversion subseries produces
corrected parameter amplitudes when the output of the imaging/depropagation subseries is
used as input. This is a strong departure from the natural behavior of the inverse scattering
series, which, of course, does not use output of one set of terms as input for another.
5 Conclusions
The aims of this paper have been to explore (1) the nature of the input to the nonlinear
absorptive/dispersive methods implied by the mathematics of the inverse scattering series,
and (2) the nature of the inversion subseries, or target identification subseries itself, both
generally and with respect to the viscoacoustic problem.
Regarding the input, the viscoacoustic Born approximation is shown to have two characteris-
tic elements – an amplitude/phase effect due to the reflectivity, and an absorptive/dispersive
effect due to the propagation effects; the former is ameliorated somewhat in the linear
inverse procedure, while the latter is not. Hence the linear Q-profile is reconstructed with
some account taken of the phase and amplitude spectra of the local events, but without any
attempt to correct for the effective transmission associated with viscous propagation.
The nonlinear processing and inversion ideas fleshed out in this paper rely, in their detail,
on specific models of absorption and dispersion. For instance, much of the formalism in
the Born inversion for c and Q in the single interface case arises because the frequency
dependence of the dispersion law (F(k)) is separable, multiplying a frequency-independent
Q. Since the detail of the inversion relies on the good fortune of having a viscoacoustic
model that behaves so accommodatingly, it is natural to ask: if we have tied ourselves to
one particular model of dispersion, do we not then rise and fall with the (case by case) utility
and accuracy of this one model? The practical answer is yes, of course; the specific formulae
and methods in this paper require that this constant Q model explain the propagation and
reflection/transmission in the medium. But one of the core ideas of this paper has been
to root out the basic reasons why such an approach might work. The dispersion-based
frequency dependence of the reflection coefficient as a means to separate Q from c is not a
model dependent idea, regardless of model-to-model differences in detail and difficulty.
The more sophisticated Q model employed in this paper provided a complex, frequency-
dependent scattering potential, which has since been used to approximate the Q profile in
linear inversion, and to comment on the expected spatial form of the input to higher order
terms. However, at an early stage in the derivation the form was simplified with a rejection
of terms quadratic in Q. One might notice that in the latter portion of this paper the same
step was not taken in the friction model perturbation that was used as input in the higher
order terms. It is anticipated that the more sophisticated quadratic potential will be needed
in the dispersive Q model when it is used as series input.
The results of this paper appear to confirm the predictions of the forward series analysis in
Innanen and Weglein (2003); i.e. that prior to the inversion (target identification) step, the
subseries terms which are nominally concerned with imaging, must also remove the effects
of attenuative propagation. The confirmation in this paper is circumstantial but compelling:
the ubiquitous increase in quality of the inverse results (both low order constant Q and
high order friction-based attenuation) in the absence of propagation effects strongly suggests
that depropagated (Q compensated) inputs are expected in these inversion procedures. A
clear future direction for research is in generalizing the imaging subseries to perform Q
compensation simultaneously with reflector location.
Acknowledgments
The authors thank the sponsors and members of M-OSRP and CDSST, especially Simon
Shaw, Bogdan Nita and Tad Ulrych, for discussion and support.
References
Innanen, K. A. “Methods for the Treatment of Acoustic and Absorptive/Dispersive Wave
Field Measurements.” Ph.D. Thesis, University of British Columbia, (2003).
Innanen, K. A. and Weglein, A. B., “Construction of Absorptive/Dispersive Wave Fields
with the Forward Scattering Series.” Journal of Seismic Exploration, (2003): In Press.
Innanen, K. A. and Weglein, A. B., “Linear Viscoacoustic Inversion: Theory and 1D Syn-
thetic Tests.” M-OSRP Annual Report, (2004): This Report.
Shaw, S. A., Weglein, A. B., Foster, D. J., Matson, K. H. and Keys, R. G. “Isolation of
a Leading Order Depth Imaging Series and Analysis of its Convergence Properties.” In
Preparation, (2004).