This document outlines an algorithm for robust 3D gravity gradient inversion by planting anomalous densities. It describes the forward problem of modeling gravity gradients from anomalous densities, as well as the inverse problem of estimating densities from observed gradients. The algorithm formulates the inverse problem as a regularized optimization that minimizes data misfit while imposing constraints like compactness and concentration around seed densities. Neighboring prisms are iteratively accreted to seeds in a manner that reduces misfit and regularization cost. The algorithm is inspired by previous work and aims to robustly estimate densities from real geophysical data.
This document outlines key concepts in linear models and estimation that will be covered in the STA721 Linear Models course, including:
1) Linear regression models decompose observed data into fixed and random components.
2) Maximum likelihood estimation finds parameter values that maximize the likelihood function.
3) Linear restrictions on the mean vector μ define a subspace and equivalent parameterizations represent the same subspace.
4) Inference should be independent of the parameterization or coordinate system used to represent μ.
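Points 1) and 2) can be illustrated with a small sketch (not taken from the course notes): for a linear model with Gaussian errors, maximizing the likelihood is equivalent to minimizing the residual sum of squares, so the MLE of the coefficients is the least-squares solution.

```python
import numpy as np

# For y = X @ beta + eps with Gaussian errors, the log-likelihood is
# maximized exactly where ||y - X @ beta||^2 is minimized.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])
beta_true = np.array([2.0, 3.0])
y = X @ beta_true                      # noise-free here, so the fit is exact

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)                        # recovers [2.0, 3.0]
```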
A Review of Proximal Methods, with a New One - Gabriel Peyré
The document discusses proximal splitting methods for solving optimization problems with composite objectives. It begins by introducing inverse problems regularization and how proximal operators are used to solve problems by splitting them into smooth and non-smooth components. It then presents the forward-backward splitting method, Douglas-Rachford splitting, and the generalized forward-backward splitting method. Examples are provided to illustrate how these methods can be applied to problems like L1 regularization, constrained L1 minimization, and block sparsity regularization.
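As a hedged sketch of the forward-backward idea applied to L1 regularization (a standard ISTA implementation, not code from the slides): the gradient step handles the smooth quadratic term and the proximal step, soft-thresholding, handles the non-smooth L1 term.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Forward-backward splitting for min 0.5 ||A x - b||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)               # forward (gradient) step
        x = soft_threshold(x - grad / L, lam / L)  # backward (proximal) step
    return x

# With A = I the solution is just soft-thresholding of b.
A = np.eye(3)
b = np.array([3.0, 0.5, -2.0])
x = ista(A, b, lam=1.0)
print(x)  # [2.0, 0.0, -1.0]
```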
The document discusses generating functions. It defines a generating function G(z) as a power series representation of a sequence <an> = a0, a1, a2, ... . Properties of generating functions include that differentiating or multiplying generating functions results in new generating functions, and that generating functions can reveal relationships between sequences.
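A small worked example of these properties (standard material, not quoted from the document): the constant sequence has a closed-form generating function, and differentiating it produces the generating function of another sequence.

```latex
G(z) = \sum_{n \ge 0} a_n z^n, \qquad
\langle 1, 1, 1, \dots \rangle \;\longleftrightarrow\; \frac{1}{1-z}, \qquad
z\,G'(z) = \sum_{n \ge 0} n\,a_n z^n = \frac{z}{(1-z)^2}
\;\longleftrightarrow\; \langle 0, 1, 2, 3, \dots \rangle .
```

So differentiation (followed by multiplying by z) turns the constant sequence into the sequence of integers, a typical example of how operations on generating functions mirror relationships between sequences.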
This document provides instructions for a MATLAB assignment with two parts. Part I involves constructing Lagrange interpolants for a given function. Students are asked to create MATLAB function files for Lagrange interpolation and for defining a test function, as well as a script file to test the interpolation. Part II involves solving a system of linear ordinary differential equations and constructing the solution at discrete time points. Students are asked to create a function file to solve the ODE using eigenvalues and eigenvectors, and a script file to test it on a sample problem. Detailed hints are provided for both parts.
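The assignment asks for MATLAB files; as an illustrative translation only, a minimal Lagrange interpolation routine in Python (function and point names are hypothetical, not the assignment's):

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # Lagrange basis polynomial L_i(x)
        total += term
    return total

# Through three points on f(x) = x^2 the interpolant reproduces f exactly.
print(lagrange_interp([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```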
This document provides an overview of Bayesian methods for machine learning. It introduces some foundational Bayesian concepts including representing beliefs with probabilities, the Dutch book theorem, asymptotic certainty, and model comparison using Occam's razor. It discusses challenges like intractable integrals and presents approximation tools like Laplace's approximation, variational inference, and MCMC. It also covers choosing priors, including objective priors like noninformative, Jeffreys, and reference priors as well as subjective and hierarchical priors.
Reasoning over the evolution of source code using QRPE - stevensreinout
1. The document discusses Quantified Regular Path Expressions (QRPEs), a temporal query language that allows reasoning over the evolution of source code.
2. QRPEs use regular expressions to match patterns in version graphs and source code changes. This enables formulating queries over how code elements like methods are added, removed, or changed over time.
3. The document provides examples of QRPE queries to find "zombie methods" that are no longer called, analyze test-driven development patterns, and more. It also describes how QRPEs could be extended to incorporate additional information sources.
Mixture Models for Image Analysis - Aristidis Likas, Associate Professor, and Christoforos Nikou, Assistant Professor, Department of Computer Science, University of Ioannina
The document discusses the definite integral, including computing it using Riemann sums, estimating it using approximations like the midpoint rule, and reasoning about its properties. It outlines the topics to be covered, such as recalling previous concepts and comparing properties of integrals. Formulas are provided for calculating Riemann sums using different representative points within the intervals.
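The midpoint rule mentioned above amounts to a Riemann sum whose representative points are interval midpoints; a minimal sketch (not from the document):

```python
def midpoint_rule(f, a, b, n):
    """Riemann sum for the integral of f over [a, b] using n midpoint rectangles."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

approx = midpoint_rule(lambda x: x ** 2, 0.0, 1.0, 100)
print(approx)  # close to the exact value 1/3
```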
1. Geodesic sampling and meshing techniques can be used to generate adaptive triangulations and meshes on Riemannian manifolds based on a metric tensor.
2. Anisotropic metrics can be defined to generate meshes adapted to features like edges in images or curvature on surfaces. Triangles will be elongated along strong features to better approximate functions.
3. Farthest point sampling can be used to generate well-spaced point distributions over manifolds according to a metric, which can then be triangulated using geodesic Delaunay refinement.
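The farthest-point idea in point 3 can be sketched in a few lines. The slides work with geodesic distances on a manifold; using Euclidean distance on a point cloud here is a simplifying assumption for illustration.

```python
import numpy as np

def farthest_point_sampling(points, k, start=0):
    """Greedily pick k well-spaced points: each new sample is the point
    farthest from all samples chosen so far."""
    chosen = [start]
    dist = np.linalg.norm(points - points[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))             # farthest from the current samples
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return chosen

pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(farthest_point_sampling(pts, 3))  # [0, 2, 3]: skips the near-duplicate point
```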
Lesson 13: Derivatives of Logarithmic and Exponential Functions - Matthew Leingang
The document is notes for a calculus class covering derivatives of exponential and logarithmic functions. It includes:
- Announcements about upcoming review sessions and an exam on sections 1.1-2.5.
- An outline of topics to be covered, including derivatives of the natural exponential function, natural logarithm function, other exponentials/logarithms, and logarithmic differentiation.
- Definitions and properties of exponential functions, the natural number e, and logarithmic functions.
- Examples of graphs of various exponential and logarithmic functions.
- Derivatives of exponential functions and proofs involving limits.
This document contains tables summarizing formulas for derivatives, trigonometric functions, and logarithms. It lists the derivatives of common functions like x, x^2, sin x, and cos x. It also provides trigonometric formulas for the sine, cosine, and tangent of sums and differences of angles. Formulas are given for logarithms, including the change-of-base formula and the properties of logarithms.
The document provides information about Expert Systems and Solutions, including their contact details and areas of expertise. They are calling for research projects from final year students in fields like electrical engineering, electronics and communications, power systems, and applied electronics. Students can assemble hardware projects in the company's research labs with guidance from experts.
A current perspectives of corrected operator splitting (os) for systems - Alexander Decker
This document discusses operator splitting methods for solving systems of convection-diffusion equations. It begins by introducing operator splitting, where the time evolution is split into separate steps for convection and diffusion. While efficient, operator splitting can produce significant errors near shocks.
The document then examines the nonlinear error mechanism that causes issues for operator splitting near shocks. When a shock develops in the convection step, it introduces a local linearization that neglects self-sharpening effects. This leads to splitting errors.
To address this, the document discusses corrected operator splitting, which uses the wave structure from the convection step to identify where nonlinear splitting errors occur. Terms are added to the diffusion step to compensate for these errors.
Exponential functions change addition into multiplication. Different bases for exponentials produce different functions, but they share similar characteristics. One base, a number we call e, is an especially good one.
1. The document describes Anchor Graph Hashing (AGH), a method for learning binary codes for approximate nearest neighbor search using graphs.
2. AGH constructs an anchor graph from a set of anchor points and learns binary codes by solving a graph partitioning problem on the anchor graph.
3. AGH has time and space complexities that are sublinear in the number of data points for training and efficient computation for out-of-sample extensions.
05 History of CV: a machine learning (theory) perspective on computer vision - zukun
This document provides an overview of machine learning algorithms used in computer vision from the perspective of a machine learning theorist. It discusses how the theorist got involved in a computer vision project in 2002 and summarizes key algorithms at that time like boosting, support vector machines, and their developments. It also provides historical context and comparisons of algorithms like perceptron and Winnow. The document uses examples to explain concepts like kernels and the kernel trick in support vector machines.
The document discusses derivative-free optimization and evolutionary algorithms. It begins with an introduction to derivative-free optimization, explaining why it is useful when derivatives are unavailable or functions are noisy. Evolutionary algorithms are then discussed, including their fundamental elements like populations, selection, and variation operators. Specific evolutionary algorithms are presented, such as the estimation of distribution algorithm (EDA) and the (1+1)-ES algorithm with 1/5th success rule adaptation. The slides note that evolutionary algorithms are robust to noise and difficult optimization problems but are generally slower than derivative-based methods.
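A hedged sketch of the (1+1)-ES with 1/5th success rule mentioned above (the 20-iteration adaptation window and the factor 1.5 are illustrative choices, not taken from the slides):

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=200):
    """(1+1)-ES minimizing f: one parent, one Gaussian-mutated offspring,
    with step size adapted by the 1/5th success rule."""
    random.seed(0)
    x, fx = x0, f(x0)
    successes = 0
    for t in range(1, iters + 1):
        y = x + random.gauss(0, sigma)
        if f(y) < fx:                  # accept only improving offspring
            x, fx = y, f(y)
            successes += 1
        if t % 20 == 0:                # adapt sigma every 20 iterations
            if successes / 20 > 0.2:
                sigma *= 1.5           # many successes: take larger steps
            else:
                sigma /= 1.5           # few successes: take smaller steps
            successes = 0
    return x

x_best = one_plus_one_es(lambda x: x * x, 10.0)
print(x_best)  # close to the minimizer 0
```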
This document discusses using linear approximations to estimate functions. It provides an example estimating sin(61°) using linear approximations about a=0 and a=60°. When approximating about a=0, the estimate is 1.06465. When approximating about a=60°, the estimate is 0.87475, which is closer to the actual value of sin(61°) according to a calculator check. The document teaches that the tangent line provides the best linear approximation near a point, and its equation can be used to estimate function values.
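The two estimates quoted above can be reproduced directly from the tangent-line formula f(x) ≈ f(a) + f'(a)(x − a):

```python
import math

def linapprox(f, fprime, a, x):
    """Tangent-line estimate of f(x) about the point a."""
    return f(a) + fprime(a) * (x - a)

x = math.radians(61)
about_0  = linapprox(math.sin, math.cos, 0.0, x)             # sin x ≈ x near 0
about_60 = linapprox(math.sin, math.cos, math.radians(60), x)
print(round(about_0, 5), round(about_60, 5))  # 1.06465  0.87475
print(round(math.sin(x), 5))                  # 0.87462, the calculator check
```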
Parameter Estimation in Stochastic Differential Equations by Continuous Optim... - SSA KPI
AACIMP 2010 Summer School lecture by Gerhard Wilhelm Weber. "Applied Mathematics" stream. "Modern Operational Research and Its Mathematical Methods with a Focus on Financial Mathematics" course. Part 8.
More info at http://summerschool.ssa.org.ua
The inverse of a function "undoes" the effect of the function. We look at the implications of that property in the derivative, as well as logarithmic functions, which are inverses of exponential functions.
This document contains the questions from an Engineering Mathematics examination from December 2012. It covers topics like:
- Using Taylor's series method and Runge-Kutta method to solve initial value problems
- Using Milne's method, Adams-Bashforth method, and Picard's method to solve differential equations
- Properties of analytic functions and bilinear transformations
- Evaluating integrals using Cauchy's integral formula and finding Laurent series
- Expressing polynomials in terms of Legendre polynomials
- Concepts related to probability distributions like binomial, exponential and normal distributions
- Hypothesis testing and confidence intervals
The questions test the students' understanding of numerical methods for solving differential equations, complex analysis topics, orthogonal polynomials, and probability and statistics.
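One of the exam's recurring methods, the classical fourth-order Runge-Kutta scheme, can be sketched as follows (an illustrative implementation, not an exam solution):

```python
import math

def rk4(f, t0, y0, h, n):
    """Classical 4th-order Runge-Kutta for the initial value problem y' = f(t, y)."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6   # weighted average of slopes
        t += h
    return y

# y' = y, y(0) = 1 has exact solution e^t, so y(1) should be close to e.
approx = rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10)
print(approx)  # ≈ 2.71828
```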
3D gravity inversion by planting anomalous densities - Leonardo Uieda
Paper presented at the 2011 SBGf International Congress in Rio de Janeiro, Brazil.
Abstract:
This paper presents a novel gravity inversion method for estimating a 3D density-contrast distribution defined on a grid of prisms. Our method consists of an iterative algorithm that does not require the solution of a large equation system. Instead, the solution grows systematically around user-specified prismatic elements called “seeds”. Each seed can have a different density contrast, allowing the interpretation of multiple bodies with different density contrasts and interfering gravitational effects. The compactness of the solution around the seeds is imposed by means of a regularizing function. The solution grows by the accretion of prisms neighboring the current solution. The prisms for accretion are chosen by systematically searching the set of current neighboring prisms. This approach therefore allows the columns of the Jacobian matrix to be calculated on demand, a technique known in computer science as “lazy evaluation”, which greatly reduces the demand on computer memory and processing time. Tests on synthetic data and on real data collected over the ultramafic Cana Brava complex, central Brazil, confirmed the ability of our method to detect sharp and compact bodies.
The document provides 14 formulae across various topics:
- Algebra formulas for operations, exponents, logarithms
- Calculus formulas for derivatives, integrals, areas under curves
- Statistics formulas for means, standard deviations, probabilities
- Geometry formulas for distances, midpoints, areas of shapes
- Trigonometry formulas for trig functions, angles, triangles
- The symbols used in the formulas are explained.
The document derives the normal probability density function from basic assumptions. It assumes that errors in perpendicular directions are independent, large errors are less likely than small errors, and the distribution is not dependent on orientation. This leads to a differential equation that can only be satisfied by an exponential function, giving the normal distribution. The values of the coefficients are determined by requiring the total area under the curve to be 1 and that the variance equals 1/k. This fully specifies the normal probability density function.
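The derivation's conclusions can be checked numerically. Assuming the resulting density is f(x) = sqrt(k/(2π)) exp(−k x²/2) (mean 0, variance 1/k, consistent with the summary above), its area should be 1 and its variance 1/k:

```python
import math

def normal_pdf(x, k):
    # density from the derivation: mean 0, variance 1/k
    return math.sqrt(k / (2 * math.pi)) * math.exp(-k * x * x / 2)

def integrate(g, a, b, n=20000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

k = 4.0  # variance should come out as 1/k = 0.25
area = integrate(lambda x: normal_pdf(x, k), -10, 10)
var  = integrate(lambda x: x * x * normal_pdf(x, k), -10, 10)
print(area, var)  # ≈ 1.0 and ≈ 0.25
```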
The document provides 14 formulae across 4 topics:
1) Algebra - includes formulae for roots of quadratic equations, logarithms, sequences, etc.
2) Calculus - includes formulae for derivatives, integrals, areas under curves, volumes of revolution.
3) Statistics - includes formulae for means, standard deviation, probability, binomial distribution.
4) Geometry - includes formulae for distances, midpoints, areas of triangles, circles, trigonometry ratios.
This document summarizes Hill's method for numerically approximating the eigenvalues and eigenfunctions of differential operators. Hill's method has two main steps:
1. Perform a Floquet-Bloch decomposition to reduce the problem from the real line to the interval [0,L] with periodic boundary conditions, parameterized by the Floquet exponent μ. This gives an operator with a compact resolvent.
2. Approximate the solutions by Fourier series, reducing the problem to a matrix eigenvalue problem that can be solved numerically.
The method is straightforward to implement and effective for various problems involving differential operators on the real line or with periodic boundary conditions. Convergence rates and error bounds for Hill's method are also presented.
Basic differential equations in fluid mechanics - Tarun Gehlot
This document provides an overview of fluid dynamics concepts including the continuity equation, Navier-Stokes equations, and examples of their application to laminar flow situations. It derives the 1-dimensional continuity equation and uses it to describe flow between parallel plates. It then derives the equation for laminar flow velocity profile between infinite horizontal parallel plates based on the Navier-Stokes equations and applies it to calculate discharge rate. Finally, it provides an example problem calculating discharge rate and power for an oil skimming device.
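The laminar profile between parallel plates can be sketched numerically. The formulas below are the textbook plane Poiseuille result (parabolic profile, discharge per unit width (−dp/dx) h³/(12μ)), stated as an assumption rather than a quotation of this document's derivation; the variable names are illustrative.

```python
def velocity(y, h, mu, dpdx):
    """Plane Poiseuille profile between fixed plates at y = 0 and y = h."""
    return (-dpdx) / (2.0 * mu) * y * (h - y)

def discharge(h, mu, dpdx, n=1000):
    # integrate the velocity profile across the gap to get flow per unit width
    dy = h / n
    return sum(velocity((i + 0.5) * dy, h, mu, dpdx) for i in range(n)) * dy

h, mu, dpdx = 0.01, 0.1, -1000.0   # gap [m], viscosity [Pa s], pressure gradient [Pa/m]
q = discharge(h, mu, dpdx)
print(q)  # matches (-dpdx) * h**3 / (12 * mu)
```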
This document contains a summary of key concepts in algebra, geometry, and trigonometry:
1) Algebra topics include arithmetic operations, factoring, exponents, binomials, and the quadratic formula.
2) Geometry topics cover lines, triangles, circles, spheres, cones, cylinders, sectors, and trapezoids including formulas for area, perimeter, volume, and surface area.
3) Trigonometry definitions and formulas are provided for sine, cosine, tangent, cotangent, addition, subtraction, and half-angle identities.
This document contains examples of integrals and their solutions. It begins by showing (a) integrals of polynomials, (b) integrals involving logarithmic and trigonometric functions, and (c) integrals of exponential functions. It then provides more complex integrals involving combinations of functions.
Scientific Computing with Python Webinar 9/18/2009: Curve Fitting, by Enthought, Inc.
This webinar will provide an overview of the tools that SciPy and NumPy provide for regression analysis, including linear and non-linear least squares, and a brief look at handling other error metrics. We will also demonstrate simple GUI tools that can make some problems easier, and provide a quick overview of the new scikits package statsmodels, whose API is maturing in a separate package but should be incorporated into SciPy in the future.
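As a quick illustration (mine, not from the webinar) of the kind of least-squares fitting SciPy provides, here is `scipy.optimize.curve_fit` recovering the parameters of a linear model from noiseless data:

```python
# Minimal sketch of non-linear least squares with scipy.optimize.curve_fit.
# The model, data, and parameter values here are my own toy example.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # any callable f(x, *params) works; here a straight line
    return a * x + b

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0  # noiseless data, so an exact fit is expected

params, covariance = curve_fit(model, x, y)
print(params)  # ~ [2.0, 1.0]
```

With noisy data the same call returns the least-squares estimates, and `covariance` gives their estimated uncertainties.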
The document provides 4 examples of calculating triple integrals over different regions. Example 1 calculates a triple integral over a region bounded by a paraboloid and plane in rectangular coordinates, then evaluates it in polar coordinates. Example 2 calculates a triple integral over a hemisphere in spherical coordinates. Example 3 finds the volume under a paraboloid and above a rectangle using a double integral. Example 4 calculates a triple integral over a tetrahedron bounded by 4 planes.
ICML 2004 tutorial on Bayesian methods for machine learning, by zukun
This document provides an overview of Bayesian methods for machine learning. It introduces Bayesian foundations including representing beliefs with probabilities, Cox's axioms, the Dutch book theorem, asymptotic certainty, and Occam's razor. It then outlines the intractability problem in Bayesian inference and various approximation tools like Laplace's approximation, variational approximations, and MCMC. The document concludes by discussing advanced topics and limitations of Bayesian methods.
1) Euler-Bernoulli bending theory and Timoshenko beam theory describe the stresses and deflections of beams under bending loads.
2) Euler-Bernoulli theory assumes a beam's cross-section remains plane and perpendicular to the neutral axis during bending. Timoshenko theory accounts for shear deformation.
3) Both theories relate the bending moment M and shear force V to the beam's deflection w and its derivatives, allowing calculation of stresses, forces, and deflections for given beam geometries and loads.
The document discusses key concepts in seismology including:
1. Snell's law is generalized for spherical earth models using ray parameters.
2. The ray equation relates the change in ray geometry to variations in seismic velocity.
3. Radius of curvature is determined by velocity gradients and ray parameters.
4. Amplitude is affected by geometrical spreading and focusing/defocusing of rays due to velocity variations.
5. Tau-p analysis represents seismic travel times through intercept time curves as a function of ray parameter.
Higher-order factorization machines (HOFMs) provide a framework for modeling feature interactions of arbitrary order in recommendation systems and link prediction tasks. The key ideas are:
(1) HOFMs express the prediction function as a weighted sum of ANOVA kernels of varying orders, capturing interactions between features.
(2) Computing the ANOVA kernel and its gradient can be done in linear time using dynamic programming, enabling efficient learning and prediction.
(3) Experiments on link prediction tasks show HOFMs can effectively model higher-order interactions to improve predictions compared to lower-order models like FM.
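The linear-time claim in point (2) can be made concrete. The sketch below (my illustration, not the HOFM authors' code) computes the degree-m ANOVA kernel A^m(w, x) = Σ_{j1<…<jm} Π_t w_{jt} x_{jt} by dynamic programming in O(m·d) time, and checks it against brute-force enumeration of all m-subsets:

```python
# Dynamic program for the degree-m ANOVA kernel (toy illustration).
import math
from itertools import combinations

def anova_dp(w, x, m):
    d = len(w)
    # a[t][j] = degree-t ANOVA kernel restricted to the first j features
    a = [[1.0] * (d + 1)] + [[0.0] * (d + 1) for _ in range(m)]
    for t in range(1, m + 1):
        for j in range(1, d + 1):
            # either feature j is excluded from the subset, or included:
            a[t][j] = a[t][j - 1] + w[j - 1] * x[j - 1] * a[t - 1][j - 1]
    return a[m][d]

def anova_brute(w, x, m):
    # exponential-time reference: enumerate every m-subset of features
    return sum(math.prod(w[j] * x[j] for j in s)
               for s in combinations(range(len(w)), m))

w, x = [1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0, 1.0]
print(anova_dp(w, x, 2))     # 35.0
print(anova_brute(w, x, 2))  # 35.0
```

The same table also yields the kernel's gradient, which is what makes learning HOFMs tractable.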
This document provides summaries of common derivatives and integrals, including:
- Basic properties and formulas for derivatives and integrals of functions like polynomials, trig functions, inverse trig functions, exponentials/logarithms, and more.
- Standard integration techniques like u-substitution, integration by parts, and trig substitutions.
- How to evaluate integrals of products and quotients of trig functions using properties like angle addition formulas and half-angle identities.
- How to use partial fractions to decompose rational functions for the purpose of integration.
So in summary, this document outlines essential derivatives and integrals for many common functions, along with standard integration strategies and techniques.
Nonconvex Compressed Sensing with the Sum-of-Squares Method, by Tasuku Soma
This document presents a method for nonconvex compressed sensing using the sum-of-squares (SoS) method. It formulates ℓq-minimization, which requires fewer samples than ℓ1-minimization but is nonconvex, as a polynomial optimization problem. The SoS method is then applied to obtain a pseudoexpectation operator satisfying a pseudo robust null space property, guaranteeing stable signal recovery. Specifically, it shows that for a Rademacher measurement matrix, with the number of measurements scaling quadratically in the sparsity s, the SoS method finds a solution x̂ satisfying ‖x̂ − x‖_q ≤ O(σ_s(x)_q) + ε, providing nearly ℓq-stable recovery.
Similar to Robust 3D gravity gradient inversion by planting anomalous densities (20)
Modelagem e inversão em coordenadas esféricas na gravimetria, by Leonardo Uieda
1) The document discusses forward gravity modeling and inversion using spherical coordinates and tesseroids.
2) An algorithm is presented for computing models from gravity data using the planting of seeds.
3) Free software is developed for forward gravity modeling and inversion with different optimization algorithms.
Gravity inversion in spherical coordinates using tesseroids, by Leonardo Uieda
Leonardo Uieda and Valéria C. F. Barbosa
Satellite observations of the gravity field have provided geophysicists with exceptionally dense and uniform coverage of data over vast areas. This enables regional or global scale high resolution geophysical investigations. Techniques like forward modeling and inversion of gravity anomalies are routinely used to investigate large geologic structures, such as large igneous provinces, suture zones, intracratonic basins, and the Moho. Accurately modeling such large structures requires taking the sphericity of the Earth into account. A reasonable approximation is to assume a spherical Earth and use spherical coordinates.
In recent years, efforts have been made to advance forward modeling in spherical coordinates using tesseroids, particularly with respect to speed and accuracy. Conversely, traditional space domain inverse modeling methods have not yet been adapted to use spherical coordinates and tesseroids. In the literature there is a range of inversion methods that have been developed for Cartesian coordinates and right rectangular prisms. These include methods for estimating the relief of an interface, like the Moho or the basement of a sedimentary basin. Another category includes methods to estimate the density distribution in a medium. The latter apply many algorithms to solve the inverse problem, ranging from analytic solutions to random search methods as well as systematic search methods.
We present an adaptation for tesseroids of the systematic search method of "planting anomalous densities". This method can be used to estimate the geometry of geologic structures. As prior information, it requires knowledge of the approximate densities and positions of the structures. The main advantage of this method is its computational efficiency, requiring little computer memory and processing time. We demonstrate the shortcomings and capabilities of this approach using applications to synthetic and field data. Performing the inversion of gravity and gravity gradient data, simultaneously or separately, is straightforward and requires no changes to the existing algorithm. This feature makes it ideal for inverting the multicomponent gravity gradient data from the GOCE satellite.
An implementation of our adaptation is freely available in the open-source modeling and inversion package Fatiando a Terra (http://www.fatiando.org).
3D magnetic inversion by planting anomalous densities, by Leonardo Uieda
Slides for the presentation "3D magnetic inversion by planting anomalous densities" given at the 2013 AGU Meeting of the Americas in Cancun, Mexico.
Note: There was an error in the title of the talk. The correct title should be "3D magnetic inversion by planting anomalous magnetization"
Iron ore interpretation using gravity-gradient inversions in the Carajás, Br..., by Leonardo Uieda
The document summarizes a gravity gradient survey over the Carajás region in Brazil to interpret iron ore deposits. The survey used a flying gravimeter system over 550 km of lines spaced 150 m apart at a height of 100 m. Two 3D inversion methods were applied to the data: planting anomalous densities and smooth inversion. Both recovered models compatible with borehole data, identifying concentrated hematite above 300 m and possible jaspilite below 200 m due to similar density contrasts. The methods produced comparable preliminary results for joint interpretation of the targeted iron ore.
Use of the 'shape-of-anomaly' data misfit in 3D inversion by planting anomalo..., by Leonardo Uieda
E-poster presentation given at the 2012 SEG Annual Meeting in Las Vegas.
Abstract:
We present an improvement to the method of 3D gravity gradient inversion by planting anomalous densities. This method estimates a density-contrast distribution defined on a grid of right-rectangular prisms. Instead of solving large equation systems, the method uses a systematic search algorithm to grow the solution, one prism at a time, around user-specified prisms called “seeds”. These seeds have known density contrasts and the solution is constrained to be concentrated around the seeds as well as have their density contrasts. Thus, prior geologic and geophysical information is incorporated into the inverse problem through the seeds. However, this leads to a strong dependence of the solution on the correct location, density contrast, and number of seeds used. Our improvement to this method consists of using the “shape-of-anomaly” data-misfit function in conjunction with the l2-norm data-misfit function. The shape-of-anomaly function measures the difference in shape between the observed and predicted data and is insensitive to differences in amplitude. Tests on synthetic and real data show that the improved method not only has an increased robustness with respect to the number of seeds and their locations, but also provides a better fit of the observed data.
Computation of the gravity gradient tensor due to topographic masses using te..., by Leonardo Uieda
The GOCE satellite mission has the objective of measuring the Earth's gravitational field with an unprecedented accuracy through the measurement of the gravity gradient tensor (GGT). One of the several applications of this new gravity data set is to study the geodynamics of the lithospheric plates, where the flat Earth approximation may not be ideal and the Earth's curvature should be taken into account. In such a case, the Earth could be modeled using tesseroids, also called spherical prisms, instead of the conventional rectangular prisms. The GGT due to a tesseroid is calculated using numerical integration methods, such as the Gauss-Legendre Quadrature (GLQ), as already proposed by Asgharzadeh et al. (2007) and Wild-Pfeiffer (2008). We present a computer program for the direct computation of the GGT caused by a tesseroid using the GLQ. The accuracy of this implementation was evaluated by comparing its results with the result of analytical formulas for the special case of a spherical cap with computation point located at one of the poles. The GGT due to the topographic masses of the Paraná Basin (SE Brazil) was estimated at 260 km altitude in an attempt to quantify this effect on the GOCE gravity data. The digital elevation model ETOPO1 (Amante and Eakins, 2009) between 40° W and 65° W and 10° S and 35° S, which includes the Paraná Basin, was used to generate a tesseroid model of the topography with grid spacing of 10' x 10' and a constant density of 2670 kg/m3. The largest amplitude observed was on the second vertical derivative component (-0.05 to 1.20 Eötvös) in regions of rough topography, such as that along the eastern Brazilian continental margins. These results indicate that the GGT due to topographic masses may have amplitudes of the same order of magnitude as the GGT due to density anomalies within the crust and mantle.
How to Get CNIC Information System with Paksim Ga.pptx, by danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Essentials of Automations: The Art of Triggers and Actions in FME, by Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
DevOps and Testing slides at DASA Connect, by Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
GridMate - End to end testing is a critical piece to ensure quality and avoid..., by ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Threats to mobile devices are increasingly prevalent and growing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best-practices guide outlines steps users can take to better protect personal devices and information.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI, by Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf, by Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Communications Mining Series - Zero to Hero - Session 1, by DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Generative AI Deep Dive: Advancing from Proof of Concept to Production, by Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
A tale of scale & speed: How the US Navy is enabling software delivery from l..., by sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
30–34. Minimize the difference between g and d
Residual vector: r = g − d
Data-misfit function:

ℓ2-norm of r (least-squares fit):
\phi(p) = \| r \|_2 = \left[ \sum_{i=1}^{N} (g_i - d_i)^2 \right]^{1/2}

ℓ1-norm of r (robust fit):
\phi(p) = \| r \|_1 = \sum_{i=1}^{N} | g_i - d_i |
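The two misfit norms above can be sketched directly (my illustration; the data values are a toy example). The ℓ2 norm squares residuals, so a single outlier dominates; the ℓ1 norm grows only linearly with it, which is what makes the fit robust:

```python
# Data-misfit functions for a residual vector r = g - d (toy sketch).
import math

def misfit_l2(g, d):
    """phi(p) = ||r||_2 = sqrt(sum_i (g_i - d_i)^2) -- least-squares fit."""
    return math.sqrt(sum((gi - di) ** 2 for gi, di in zip(g, d)))

def misfit_l1(g, d):
    """phi(p) = ||r||_1 = sum_i |g_i - d_i| -- robust fit."""
    return sum(abs(gi - di) for gi, di in zip(g, d))

g = [1.0, 2.0, 3.0, 100.0]   # one large outlier at the end
d = [1.1, 1.9, 3.2, 3.0]

print(misfit_l2(g, d))  # dominated by the squared outlier
print(misfit_l1(g, d))  # the outlier enters only linearly
```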
40–43. Constraints:
1. Compact (no holes inside)
2. Concentrated around “seeds”:
● User-specified prisms
● Given density contrasts ρ_s
● Any number of different density contrasts
3. Only p_j = 0 or p_j = ρ_s
4. p_j = ρ_s of the closest seed
49–50. Well-posed problem: minimize the goal function
\Gamma(p) = \phi(p) + \mu \, \theta(p)
Regularizing function (similar to Silva Dias et al., 2009):
\theta(p) = \sum_{j=1}^{M} \frac{p_j}{p_j + \epsilon} \, l_j^{\beta}
where l_j is the distance between the jth prism and its seed.
Imposes:
● Compactness
● Concentration around the seeds
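The regularizer above can be sketched in a few lines (my illustration, assuming non-negative density contrasts as in the p_j/(p_j + ε) form on the slides). A prism with nonzero density far from its seed costs more than one close to it, which is what drives compact, seed-concentrated solutions:

```python
# theta(p) = sum_j [p_j / (p_j + eps)] * l_j**beta   (toy sketch)
# p: density contrasts per prism; l: distance of each prism to its seed.

def theta(p, l, beta=1.0, eps=1e-10):
    # assumes p_j >= 0, matching the slides' p_j / (p_j + eps) form
    return sum((pj / (pj + eps)) * lj ** beta for pj, lj in zip(p, l))

def goal(phi, p, l, mu=1.0, beta=1.0):
    """Goal function Gamma(p) = phi(p) + mu * theta(p)."""
    return phi + mu * theta(p, l, beta)

# same density contrast, different distance to the seed:
print(theta([500.0], [1.0]))   # cheap: prism sits next to its seed
print(theta([500.0], [10.0]))  # expensive: prism is far from the seed
```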
51–52. How the constraints are imposed:
Regularization:
1. Compact
2. Concentrated around “seeds”
Algorithm (based on René, 1986):
3. Only p_j = 0 or p_j = ρ_s
4. p_j = ρ_s of the closest seed
55–62. Setup: g = observed data
● Define the interpretative model (all parameters zero)
● Include the N_S seeds (d = data predicted by the seeds)
● Compute the initial residuals:
r^{(0)} = g - \sum_{s=1}^{N_S} \rho_s a_{j_s}
● Find the neighbors of the seeds
(Prisms with p_j = 0 not shown)
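The setup step above amounts to subtracting each seed's effect from the data, one Jacobian column at a time. A minimal sketch (my paraphrase, not the reference implementation; `a_col(j)` is a hypothetical helper returning column a_j, the effect at every observation point of prism j with unit density):

```python
# r(0) = g - sum_s rho_s * a_{j_s}: the data left to explain
# after planting the seeds with their given density contrasts.
def initial_residual(g, seeds, a_col):
    r = list(g)
    for j_s, rho_s in seeds.items():
        a = a_col(j_s)  # column computed on demand, discarded after use
        r = [ri - rho_s * ai for ri, ai in zip(r, a)]
    return r

# toy example: 2 data points, one seed (prism 0, density contrast 2.0)
columns = {0: [1.0, 3.0]}
r0 = initial_residual([10.0, 10.0], {0: 2.0}, columns.__getitem__)
print(r0)  # [8.0, 4.0]
```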
63–73. Growth: for each of the N_S seeds, try accretion to the sth seed.
Choose the neighbor that:
1. Reduces the data misfit
2. Yields the smallest goal function
j = chosen neighbor: set p_j = ρ_s (new elements of the body)
Update the residuals (a_j is the contribution of prism j):
r^{(new)} = r^{(old)} - p_j a_j
If no such neighbor is found, there is no accretion for that seed.
Neighbors may have variable sizes.
Did at least one seed grow? Yes → repeat the growth step; No → Done!
(Prisms with p_j = 0 not shown)
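One accretion pass for a single seed can be sketched as below (my paraphrase of the slides, not the reference implementation; `a_col`, `misfit`, and `goal` are hypothetical helpers for the Jacobian column, the data-misfit value, and the goal-function value of a candidate residual):

```python
# Try accretion to one seed: among its neighbors, keep only those that
# reduce the data misfit, then accept the one with the smallest goal
# function. Accepting costs just one vector update: r_new = r - rho_s*a_j.
def grow_one_seed(r, neighbors, rho_s, a_col, misfit, goal):
    """Return (chosen prism j, updated residual), or (None, r)."""
    best = None
    for j in neighbors:
        a = a_col(j)
        r_try = [ri - rho_s * ai for ri, ai in zip(r, a)]
        if misfit(r_try) >= misfit(r):
            continue  # accretion must reduce the data misfit
        score = goal(r_try, j)
        if best is None or score < best[0]:
            best = (score, j, r_try)  # smallest goal function wins
    return (best[1], best[2]) if best else (None, r)

# toy run: residual [5.0]; neighbor 1 explains it fully, neighbor 2 partly
cols = {1: [5.0], 2: [2.0]}
l2 = lambda r: sum(ri * ri for ri in r) ** 0.5
j, r_new = grow_one_seed([5.0], [1, 2], 1.0, cols.__getitem__,
                         l2, lambda r, j: l2(r))
print(j, r_new)  # 1 [0.0]
```

The outer loop of the algorithm repeats this pass over all seeds until no seed can grow.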
74. Advantages:
● Compact & non-smooth solutions
● Any number of sources
● Any number of different density contrasts
● No large equation system
● Search limited to the neighbors
75–80. Remember the equations:
Initial residual:
r^{(0)} = g - \sum_{s=1}^{N_S} \rho_s a_{j_s}
Update of the residual vector:
r^{(new)} = r^{(old)} - p_j a_j
● No matrix multiplication (only vector sums)
● Only some columns of A are needed
● Calculate them only when needed & delete after the update
● Lazy evaluation
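A back-of-envelope calculation (my arithmetic, using the field-data example that appears later in these slides: 13,746 data and 164,892 prisms) shows why lazily evaluating one Jacobian column at a time matters for memory:

```python
# Memory for the full Jacobian A (N x M, float64) vs. a single column a_j.
N, M = 13_746, 164_892
bytes_per_float = 8

full_jacobian = N * M * bytes_per_float  # store all of A at once
one_column = N * bytes_per_float         # store only the column in use

print(full_jacobian / 1e9)  # ~18 GB: infeasible on a laptop
print(one_column / 1e3)     # ~110 kB: trivial
```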
81–83. Advantages:
● Compact & non-smooth solutions
● Any number of sources
● Any number of different density contrasts
● No large equation system
● Search limited to the neighbors
● No matrix multiplication (only vector sums)
● Lazy evaluation of the Jacobian
● Fast inversion + low memory usage
94–96. Common scenario:
● May not have prior information on:
● Density contrast
● Approximate depth
● No way to provide seeds
● Difficult to isolate the effect of the targets
114. Inversion: ● 13 seeds ● 7,803 data ● 37,500 prisms
● Recovers the shape of the targets
● Total time = 2.2 minutes (on a laptop)
(Prisms with zero density contrast not shown)
135. Inversion: ● 46 seeds ● 13,746 data ● 164,892 prisms
● Agrees with previous interpretations (Martinez et al., 2010)
● Total time = 14 minutes (on a laptop)
137. Conclusions
● New 3D gravity gradient inversion
● Multiple sources
● Interfering gravitational effects
● Non-targeted sources
● No matrix multiplications
● No linear systems
● Lazy evaluation of Jacobian matrix
138. Conclusions
● Estimates geometry
● Given density contrasts
● Ideal for:
● Sharp contacts
● Well-constrained physical properties
– Ore bodies
– Intrusive rocks
– Salt domes