1D, 2D Laplace Inversion of LR-NMR
Adam Lee Perelman
25/5/2015
o Understanding the problem
o Mathematical modeling
o The need for Regularization
o Regularization Methods
o Solution Development
o Results and Conclusion
Results and Conclusion
Solution Development
Regularization Methods
The need for
Regularization
Mathematical modeling
Understanding the problem
o The study of the structure and dynamic
behavior of molecules is extremely important
 Medical imaging
 Industrial quality control
 Chemical and Pharmaceutical analysis
 Safety inspections
o However, molecules are too small to be
observed and studied directly
 Nuclear magnetic resonance (NMR) is a versatile and
powerful technique for exploring their structure and
dynamic behavior.
o Protons carry an electric charge and possess
spin. Due to this, they have a magnetic
moment.
o In an external magnetic field B₀ (B_z)
they align parallel or anti-parallel.
o The spinning protons wobble about
the axis of the external magnetic field.
 This motion is called precession.
 Its frequency is defined by the
Larmor equation: ω₀ = γB₀
o An electromagnetic RF pulse at the
resonance frequency causes the
protons to precess in phase.
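As a quick numerical check of the Larmor relation above, the sketch below evaluates ω₀ = γB₀ for protons; the field strength of 1.5 T is an arbitrary illustrative choice, not a value from the deck.

```python
import numpy as np

gamma = 2.675e8          # proton gyromagnetic ratio, rad s^-1 T^-1
B0 = 1.5                 # illustrative external field strength [T]

omega0 = gamma * B0      # Larmor (angular) frequency, rad/s
f0 = omega0 / (2 * np.pi)

print(f"omega0 = {omega0:.3e} rad/s, f0 = {f0 / 1e6:.2f} MHz")   # ~63.9 MHz at 1.5 T
```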
o T1, the longitudinal relaxation or
spin-lattice relaxation.
 T1 is the exponential recovery of Mz.
 Mz = Mz,eq (1 − e^(−t/T1))
o T2, the transverse relaxation or
spin-spin relaxation.
 T2 is the exponential decay of the signal
Mxy.
 Mxy = Mxy,eq e^(−t/T2)
o Simultaneously, the longitudinal
magnetization begins to increase
again as the excited spins return to
their original Mz orientation.
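A minimal sketch of the two relaxation laws above; the values of T1, T2 and the equilibrium magnetizations are arbitrary illustrative choices, not values from the experiment.

```python
import numpy as np

# Illustrative (hypothetical) relaxation constants, in seconds
T1, T2 = 0.8, 0.15
Mz_eq, Mxy_0 = 1.0, 1.0

t = np.linspace(0.0, 5.0, 501)            # time axis [s]
Mz  = Mz_eq * (1.0 - np.exp(-t / T1))     # longitudinal recovery Mz(t)
Mxy = Mxy_0 * np.exp(-t / T2)             # transverse decay Mxy(t)

# After about five time constants both processes are essentially complete
print("Mz(5*T1)/Mz_eq  =", round(1.0 - np.exp(-5.0), 4))   # ~0.9933
print("Mxy(5*T2)/Mxy_0 =", round(np.exp(-5.0), 4))          # ~0.0067
```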
Results and Conclusion
Solution Development
Regularization Methods
The need for
Regularization
Mathematical modeling
Understanding the problem
o Fredholm integral equation of the first kind
 s(t) = ∫ (1 − e^(−t/T1)) f(T1) dT1, integrating over T1
 s(t) = ∫ e^(−t/T2) f(T2) dT2, integrating over T2
 s(t1, t2) = ∫∫ (1 − e^(−t1/T1)) (e^(−t2/T2)) f(T1, T2) dT1 dT2
o Discretizing the integrals
 s = K f
 S = K1 F K2
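A minimal sketch of the discretization step, assuming hypothetical acquisition-time and relaxation-time grids (none of these sizes come from the deck). Note that the kernel K2 is built here in a (times × T2) orientation, so the 2D model reads S = K1 F K2ᵀ; the slide writes S = K1 F K2 with K2 in the transposed orientation.

```python
import numpy as np

# Hypothetical acquisition grids and T1/T2 grids, purely illustrative choices
t1 = np.linspace(0.01, 3.0, 60)        # inversion/saturation-recovery times [s]
t2 = np.linspace(0.001, 0.5, 80)       # echo times [s]
T1 = np.logspace(-2, 1, 50)            # candidate T1 values [s]
T2 = np.logspace(-3, 0, 50)            # candidate T2 values [s]

# Discretized kernels of the two 1D Fredholm equations
K1 = 1.0 - np.exp(-t1[:, None] / T1[None, :])   # shape (60, 50)
K2 = np.exp(-t2[:, None] / T2[None, :])         # shape (80, 50)

# A synthetic distribution with a single (T1, T2) component, and its noiseless signal
F = np.zeros((len(T1), len(T2)))
F[20, 25] = 1.0
S = K1 @ F @ K2.T                      # shape (60, 80)
print(S.shape)
```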
o Solving the equation
 s = K f
 f = (Kᵀ K)⁻¹ Kᵀ s
o We are done!
o Oh no!
 f ≠ (Kᵀ K)⁻¹ Kᵀ s
 F ≠ (K1ᵀ K1)⁻¹ K1ᵀ S K2ᵀ (K2 K2ᵀ)⁻¹
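A small numerical illustration of why the normal-equations solution fails, using a hypothetical saturation-recovery kernel and a tiny amount of added noise; the grids and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.01, 3.0, 100)
T1 = np.logspace(-2, 1, 60)
K = 1.0 - np.exp(-t[:, None] / T1[None, :])            # recovery kernel

f_true = np.exp(-0.5 * (np.log10(T1) / 0.15) ** 2)     # one smooth T1 peak
s = K @ f_true + 1e-4 * rng.standard_normal(len(t))    # tiny additive noise

print("cond(K) =", np.linalg.cond(K))                   # enormous condition number

# The unregularized least-squares solution, i.e. the f = (K^T K)^-1 K^T s idea
f_naive = np.linalg.lstsq(K, s, rcond=None)[0]
print("||f_true||  =", np.linalg.norm(f_true))
print("||f_naive|| =", np.linalg.norm(f_naive))         # blown up by the noise
```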
o Inverse problems
 Conversion of the relaxation signal into a
continuous distribution of relaxation components is
an inverse Laplace transform problem.
o Ill-posed problems
 Inverse problems of this kind belong to the class of
ill-posed problems and frequently exhibit extreme
sensitivity to changes in the input.
o Perturbation theory
 That is, even minute perturbations in the data can
vastly affect the computed solution.
Results and Conclusion
Solution Development
Regularization Methods
The need for
Regularization
Mathematical modeling
Understanding the problem
o Consider for example the following system.
𝐴𝑥 = 𝑏
o A is the matrix which describes the model.
o b is the vector which describes the output of the
system.
o x, the solution of the inverse problem,
 is the vector which describes the input of the system.
o The underdetermined problem has infinitely
many solutions.
 (1 1)𝑥 = 1
o The problem is replaced by a nearby problem
 where the solution is less sensitive to errors in the
data.
o This replacement is commonly referred to as
regularization
o If we require the 2-norm of x to be a minimum,
that is,
 min ‖x‖₂  s.t.  x₁ + x₂ = 1
o then there is a unique solution at
 x₁ = x₂ = 1/2
o This amounts to computing an approximate solution to the linear
least-squares minimization problem associated
with the linear system of equations.
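A one-line numerical check of this example: the pseudoinverse returns exactly the minimum 2-norm solution.

```python
import numpy as np

A = np.array([[1.0, 1.0]])       # the underdetermined system (1 1) x = 1
b = np.array([1.0])

# The minimum 2-norm solution is given by the pseudoinverse, x = A^+ b
x = np.linalg.pinv(A) @ b
print(x)                          # [0.5 0.5], i.e. x1 = x2 = 1/2
```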
o Assume that the solution x can be separated as
 x = x̂ + x₀
o Inserting into the above and rearranging:
 A x̂ + A x₀ = b
o If A x₀ = 0, then the vector x₀ is a
null vector (it lies in the kernel of A).
o The system behaves like an underdetermined
system.
o To stabilize the solution, we enforce an upper
bound on the norm of the solution:
 min ‖Ax − b‖₂²  s.t.  ‖x‖₂² ≤ δ²
o From optimization theory, we can incorporate
the constraint via a Lagrange multiplier γ:
 min_x ‖Ax − b‖₂² + γ (‖x‖₂² − δ²)
Results and Conclusion
Solution Development
Regularization Methods
The need for
Regularization
Mathematical modeling
Understanding the problem
o Tikhonov regularization
 Perhaps the most successful and widely used
regularization method is Tikhonov
regularization.
o Singular Value Decomposition (SVD)
 The singular value decomposition, in the discrete
setting, is a powerful tool for many useful applications
in signal processing and statistics.
o Specifically, the Tikhonov solution x_λ is defined, for the
strictly positive regularization parameter λ, as
the solution to the problem
 min_x ‖Ax − b‖₂² + λ² ‖x‖₂²
o The first term, ‖Ax − b‖₂², is a measure of the goodness of
fit.
o If this term is too large, then x cannot be considered a
good solution because we are under-fitting the model.
o If the term is too small, then we are over-fitting our
model to the noisy measurements.
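A minimal Tikhonov sketch on a hypothetical exponential kernel (illustrative grids and noise level), showing how the residual and the solution norm trade off as λ varies.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.01, 3.0, 100)
T1 = np.logspace(-2, 1, 60)
A = 1.0 - np.exp(-t[:, None] / T1[None, :])
x_true = np.exp(-0.5 * (np.log10(T1) / 0.15) ** 2)
b = A @ x_true + 1e-3 * rng.standard_normal(len(t))

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||_2^2 + lam^2 ||x||_2^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

# Small lambda: small residual but a wildly oscillating x (under-regularized);
# large lambda: small ||x|| but a poor fit (over-regularized).
for lam in (1e-6, 1e-3, 1e-1, 1e1):
    x_lam = tikhonov(A, b, lam)
    print(f"lambda={lam:7.0e}  residual={np.linalg.norm(A @ x_lam - b):.3e}"
          f"  ||x||={np.linalg.norm(x_lam):.3e}")
```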
o If we can control the norm of x, then we can
suppress most of the large noise components.
o The objective is to find a suitable balance between
these two terms via the regularization parameter λ,
such that the regularized solution x_λ fits the
data well and is sufficiently regularized.
o The balance between the two terms is
controlled by the factor λ.
o For λ = 0 we obtain the ordinary least-squares
problem:
 more weight is given to fitting the noisy data,
resulting in a solution that is less regular.
o The larger the λ, the more effort is
devoted to the regularity of the solution:
 more weight is given to minimizing the
L2-norm of the solution, and so as λ → ∞ we have
x → 0.
o Discrepancy Principle:
 This method is very likely to overestimate the
regularization parameter.
⇒ L-Curve:
 Some underestimation is expected; very robust.
o Generalized Cross Validation (GCV):
 Risk of severe over- or under-estimation.
o Normalized Cumulative Periodogram (NCP)
Criterion:
 Considerable overestimation for low or high noise levels.
o It is a convenient graphical tool for displaying
the trade-off between the size of a regularized
solution and its fit to the given data, as the
regularization parameter varies.
o Advantages of the L-curve criterion are
 robustness
 ability to treat perturbations consisting of
correlated noise.
o A disadvantage of the L-curve criterion is that
 for a low noise level, the regularization parameter
it gives is much smaller than the optimal parameter.
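A sketch of how an L-curve can be traced numerically for the same kind of hypothetical problem; the corner is picked here with a crude closest-to-the-corner heuristic in log-log coordinates, not the maximum-curvature criterion used in practice.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.01, 3.0, 100)
T1 = np.logspace(-2, 1, 60)
A = 1.0 - np.exp(-t[:, None] / T1[None, :])
x_true = np.exp(-0.5 * (np.log10(T1) / 0.15) ** 2)
b = A @ x_true + 1e-3 * rng.standard_normal(len(t))

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

lambdas = np.logspace(-8, 2, 200)
res, sol = [], []
for lam in lambdas:
    x_lam = tikhonov(A, b, lam)
    res.append(np.linalg.norm(A @ x_lam - b))
    sol.append(np.linalg.norm(x_lam))

# The L-curve is (log residual, log solution norm); as a crude stand-in for
# the corner criterion, take the point closest to the lower-left corner.
rho, eta = np.log(res), np.log(sol)
corner = np.argmin((rho - rho.min()) ** 2 + (eta - eta.min()) ** 2)
print("corner at lambda ~", lambdas[corner])
```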
[Figure: L-curve plot, annotated "Vortex", "Minimum", "Optimal"]
o Formally, the singular value decomposition of
an m × n real or complex matrix A is a
factorization of the form
 A = U Σ V* = Σᵢ₌₁ⁿ σᵢ uᵢ vᵢᵀ
o U ∈ ℝ^(m×m) and V ∈ ℝ^(n×n) are orthogonal matrices.
o Σ ∈ ℝ^(m×n) is a rectangular diagonal matrix with
non-negative real numbers σᵢ:
 σ₁ ≥ σ₂ ≥ … ≥ σ_r > σ_(r+1) = … = σ_n = 0
o As the singular values decrease, the corresponding
singular vectors become more oscillatory and carry less
information.
o This regularization method resolves the issue of
the problematic tiny positive singular values by
setting them to zero.
o The TSVD approximation of A will effectively
ignore the smallest singular values.
 A_k = U Σ_k V* = Σᵢ₌₁ᵏ σᵢ uᵢ vᵢᵀ
 Σ_k = diag(σ₁, σ₂, …, σ_k, 0, …, 0)
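A minimal TSVD sketch on the same kind of hypothetical kernel: keeping only the k largest singular values stabilizes the solution, at the price of a larger residual for small k.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.01, 3.0, 100)
T1 = np.logspace(-2, 1, 60)
A = 1.0 - np.exp(-t[:, None] / T1[None, :])
x_true = np.exp(-0.5 * (np.log10(T1) / 0.15) ** 2)
b = A @ x_true + 1e-3 * rng.standard_normal(len(t))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def tsvd_solve(k):
    """Truncated-SVD solution: keep only the k largest singular values."""
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

for k in (3, 8, 15, 40):
    xk = tsvd_solve(k)
    print(f"k={k:2d}  residual={np.linalg.norm(A @ xk - b):.3e}"
          f"  ||x_k||={np.linalg.norm(xk):.3e}")
```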
[Figure: TSVD reconstruction, annotated "Too thin and high!", "Artifact?", "Convoluted peak or one peak?"]
o A necessary condition for obtaining good
regularized solutions is that the Fourier
coefficients of the right-hand side, when
expressed in terms of the generalized SVD
associated with the regularization problem, on
the average decay to zero faster than the
generalized singular values
o In other words, chop off SVD components that
are dominated by the noise
o (Note: the need for Fourier coefficients to
converge has been understood for many years)
o The condition must be satisfied in order to
obtain "good regularized solutions".
o The Discrete Picard Condition: Let Ƭ denote the
level at which the computed singular values σᵢ
level off due to rounding errors. The discrete
Picard condition is satisfied if, for all singular
values larger than Ƭ, the corresponding
coefficients |uᵢᵀ s| on average decay faster than
the σᵢ.
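A sketch of a discrete Picard check: compare the decay of the coefficients |uᵢᵀ s| with the decay of the singular values σᵢ (hypothetical kernel, illustrative noise level).

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.01, 3.0, 100)
T1 = np.logspace(-2, 1, 60)
K = 1.0 - np.exp(-t[:, None] / T1[None, :])
f_true = np.exp(-0.5 * (np.log10(T1) / 0.15) ** 2)
s_noisy = K @ f_true + 1e-3 * rng.standard_normal(len(t))

U, sigma, Vt = np.linalg.svd(K, full_matrices=False)
coeffs = np.abs(U.T @ s_noisy)          # the Picard coefficients |u_i^T s|

# Components where |u_i^T s| no longer decays faster than sigma_i are
# dominated by noise and should be chopped off.
for i in range(0, 30, 5):
    print(f"i={i:2d}  sigma_i={sigma[i]:.3e}  |u_i^T s|={coeffs[i]:.3e}"
          f"  ratio={coeffs[i] / sigma[i]:.3e}")
```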
[Figure: reconstruction, annotated "Artifact!", "Better resolution!", "No thin peaks"]
o Recall our 2D problem, S = K1 F K2.
o Transforming the equation back to 1D:
 vec(S) = (K2ᵀ ⊗ K1) vec(F)
 K2ᵀ ⊗ K1 = (V2 ⊗ U1)(Σ2 ⊗ Σ1)(U2ᵀ ⊗ V1ᵀ)
 s = μ ξ νᵀ f
o We now define the Picard curve:
 ρ = log|μᵢᵀ s| − log(diag(ξ)ᵢ)
o However, the matrix μ is extremely large,
o so we perform an inverse Kronecker-product operation:
 (V2 ⊗ U1) s ↔ U1 S V2ᵀ
 diag(Σ2 ⊗ Σ1) = (Σ2 ⊗ Σ1) 𝟙_(m1·m2×1) ↔ Σ1 𝟙_(m1×m2) Σ2ᵀ = diag(Σ1) × diagᵀ(Σ2)
o We now define the Picard surface as
 ρ = log|U1 S V2ᵀ| − log(σ1 × σ2ᵀ)
 where σ1 and σ2 are the vectors of the diagonal
singular values, respectively.
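A sketch of the Picard surface computed without ever forming the Kronecker product, using only the SVDs of the two small kernels. The kernel orientation follows the earlier discretization sketch (S = K1 F K2ᵀ), so the coefficient matrix comes out as U1ᵀ S U2; with the deck's orientation it takes the U1 S V2ᵀ form shown above. All sizes and the noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
t1 = np.linspace(0.01, 3.0, 70)
t2 = np.linspace(0.001, 0.5, 128)
T1 = np.logspace(-2, 1, 40)
T2 = np.logspace(-3, 0, 40)
K1 = 1.0 - np.exp(-t1[:, None] / T1[None, :])     # (70, 40)
K2 = np.exp(-t2[:, None] / T2[None, :])           # (128, 40)

F = np.zeros((len(T1), len(T2)))
F[15, 20] = 1.0
S = K1 @ F @ K2.T + 1e-4 * rng.standard_normal((len(t1), len(t2)))

U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)
U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)

# Picard surface: log of the coefficient magnitudes minus log of the outer
# product of singular values; two small SVDs and one matrix product suffice,
# the Kronecker product is never formed.
coeff_surface = np.abs(U1.T @ S @ U2)             # (40, 40)
sigma_surface = np.outer(s1, s2)                  # (40, 40)
rho = np.log(coeff_surface) - np.log(sigma_surface)
print(rho.shape)
```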
o Assume a simple problem with the same
data structure as our LR-NMR
experiment.
o Signal measurements matrix:
 S, 16384 by 70
 values stored in double precision (8 bytes)
o Our kernel matrices:
 K1, 300 by 70
 K2, 300 by 16384
o We would need about 45 megabytes to store our raw
data measurements.
o In our example, the Picard plot suggested using
only the first 9 singular values of the K1 kernel
and the first 12 singular values of the second kernel,
K2.
o New signal measurements vector:
 s, 108 by 1
o New kernel matrices:
 K1, 300 by 9
 K2, 300 by 12
o We would need about 0.05 megabytes to store our
raw data measurements.
o A compression ratio of roughly 1,000:1.
o Consider instead our method of mapping the data into a 1D
problem.
o New signal measurements vector:
 s, 1,146,880 by 1
o New kernel matrix:
 K’, 1,146,880 by 90,000
o We would need at least 768.9 gigabytes of storage
space.
o Using the 2D Picard condition:
o New signal measurements vector:
 s, 108 by 1
o New kernel matrix:
 K’, 108 by 90,000
o We would need at least 0.073 gigabytes of storage
space.
o A compression ratio of roughly 10,000:1 (a back-of-the-envelope check follows below).
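The sketch below reproduces the storage figures quoted above (8 bytes per double-precision value); the counts match the orders of magnitude on the slides.

```python
MB, GB = 2 ** 20, 2 ** 30      # mebibytes / gibibytes

raw_2d    = (16384 * 70 + 300 * 70 + 300 * 16384) * 8 / MB   # S, K1, K2
picard_2d = (108 + 300 * 9 + 300 * 12) * 8 / MB              # compressed 2D problem
full_1d   = (1146880 + 1146880 * 90000) * 8 / GB             # fully vectorized 1D problem
picard_1d = (108 + 108 * 90000) * 8 / GB                     # 1D problem with the 2D Picard condition

print(f"raw 2D data:        {raw_2d:8.2f} MB")    # ~46 MB
print(f"after 2D Picard:    {picard_2d:8.2f} MB") # ~0.05 MB
print(f"full 1D mapping:    {full_1d:8.1f} GB")   # ~769 GB
print(f"1D with 2D Picard:  {picard_1d:8.3f} GB") # ~0.07 GB
```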
o It is impossible to extract more knowledge than the
information contained in the data.
o If the resolution is fictitiously increased, the
information is falsified.
o The regularization parameters are a function of
the distribution f and the signal noise Δs.
o No mathematical basis.
Results and Conclusion
Solution Development
Regularization Methods
The need for
Regularization
Mathematical modeling
Understanding the problem
o Define the functional Φ(f):
 min Φ(f) = ½ ‖Kf − s‖₂² + ½ λ₂ ‖f‖₂² + λ₁ ‖f‖₁
 s.t. f ∈ C
 λ₁ ≥ 0, λ₂ ≥ 0
 L2: C = {f : f(T) ≥ 0, ‖f‖₂ < ∞}
o If K ∈ L2, then Φ(f) has a directional derivative,
denoted by ∇Φ.
o ∇Φ(f) = ½ ∂/∂f ⟨e, e⟩ + ½ λ₂ ∂/∂f ⟨f, f⟩ + λ₁ ∂/∂f tr(f · I)
o ∇Φ(f) = K′(Kf − s) + λ₂ f + λ₁
o The Kuhn-Tucker conditions:
 ∇Φ(f) = 0 → f > 0: when the derivative is 0, we are
at a calculus minimum.
 ∇Φ(f) ≥ 0 → f = 0: when it is not, a small decrease of f
would reduce the function value; however, that is where
the constraint is reached.
o Rearranging:
 f_(λ1,λ2) = max(0, (K′K + λ₂ I)⁻¹ (K′s − λ₁))
o We can use the SVD to obtain more insight into
the Tikhonov solution:
 f_(λ1,λ2) = max(0, V (Σ² + λ₂² I)⁻¹ (Σ U′s − V′λ₁))
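A minimal sketch of the projected closed-form solution max(0, (K′K + λ₂I)⁻¹(K′s − λ₁)) on a hypothetical kernel; the values of λ₁ and λ₂ are illustrative, not tuned.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.01, 3.0, 100)
T1 = np.logspace(-2, 1, 60)
K = 1.0 - np.exp(-t[:, None] / T1[None, :])
f_true = np.exp(-0.5 * (np.log10(T1) / 0.15) ** 2)
s = K @ f_true + 1e-3 * rng.standard_normal(len(t))

def f_lambda(K, s, lam1, lam2):
    """f = max(0, (K'K + lam2*I)^(-1) (K's - lam1)): Tikhonov/L1 solution projected onto f >= 0."""
    n = K.shape[1]
    return np.maximum(0.0, np.linalg.solve(K.T @ K + lam2 * np.eye(n), K.T @ s - lam1))

f_reg = f_lambda(K, s, lam1=1e-3, lam2=1e-2)
print("negative entries:", int((f_reg < 0).sum()))      # 0 after the projection
print("residual:", np.linalg.norm(K @ f_reg - s))
```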
[Figure: reconstruction with the full functional, annotated "Artifact!", "Bad shape and position", "Good peak", "Bad shape"]
 Φ(f) = ½ ‖Kf − s‖₂² + ½ λ₂ ‖f‖₂² + λ₁ ‖f‖₁
 Φ(f) = ½ ‖Kf − s‖₂² + ½ λ₂ ‖f‖₂²
 Φ(f) = ½ ‖Kf − s‖₂² + λ₁ ‖f‖₁
 Φ(f) = ½ ‖Kf − s‖₂²
o Primal-Dual Convex Optimization (PDCO) is
a state-of-the-art optimization solver
implemented in Matlab.
o It applies a primal-dual interior method to
linearly constrained optimization problems
with a convex objective function.
o The problems are assumed to be of the following
form:
 min_(f,r) ‖r‖₂² + ‖D₁ f‖₂² + φ(f)
 s.t. A f + D₂ r = b
 l ≤ f ≤ u
o where f and r are variables, and D₁ and D₂ are
positive-definite diagonal matrices.
o Each PDCO iteration generates search directions
Δf and Δy for the primal variables f and the dual
variables y associated with A f + D₂ r = b.
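PDCO itself is a MATLAB solver, so the sketch below is not its interface; purely to illustrate the problem class, it eliminates r through the equality constraint Af + D₂r = b and minimizes the resulting bound-constrained objective with SciPy. The matrices, the entropy-like φ, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
m, n = 40, 25
A = rng.standard_normal((m, n))
f_true = np.abs(rng.standard_normal(n))
b = A @ f_true
d1, d2 = 1e-2, 1.0                       # D1 = d1*I, D2 = d2*I (illustrative)

def objective(f):
    r = (b - A @ f) / d2                 # eliminate r via A f + D2 r = b
    phi = np.sum((f + 1e-12) * np.log(f + 1e-12))   # entropy-like convex term
    return r @ r + d1 ** 2 * (f @ f) + phi

res = minimize(objective, x0=np.full(n, 0.5),
               bounds=[(0.0, 10.0)] * n, method="L-BFGS-B")
print("converged:", res.success, " objective:", res.fun)
```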
o Until recently, many models used only the
l2 penalty function because the solving methods
are simple and fast.
o The introduction of least absolute values, l1,
to model fitting has greatly improved many
applications.
o We adopt a hybrid between l1 and l2.
o Hybrid function Hyb_c(f′) = Σᵢ g(f′ᵢ) with a
regularization parameter c, where
 g(f′ᵢ) = f′ᵢ² / (2c)  for f′ᵢ ≤ c
 g(f′ᵢ) = f′ᵢ − c/2   for f′ᵢ > c
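A sketch of this Huber-like hybrid penalty. The constant in the linear branch (f′ᵢ − c/2) is a reconstruction, chosen so that the two branches join continuously at f′ᵢ = c; the slide's exact constant is not fully legible.

```python
import numpy as np

def hybrid_penalty(f, c):
    """Huber-like hybrid: quadratic (l2-like) below the threshold c, linear (l1-like) above it."""
    f = np.asarray(f, dtype=float)
    quad = f ** 2 / (2.0 * c)        # behaves like l2 for small coefficients
    lin = np.abs(f) - c / 2.0        # behaves like l1 for large coefficients
    return np.where(np.abs(f) <= c, quad, lin).sum()

vals = np.array([0.01, 0.05, 0.2, 1.0])
print(hybrid_penalty(vals, c=0.1))
```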
o The most popular entropy functional is the Shannon
entropy formula.
o The Entropy function Ent_a(f′) = Σᵢ (f′ᵢ/a) log(f′ᵢ/a), with
a regularization parameter a.
o Its origins are in information theory.
o The motivation:
 it does not introduce correlations into the data beyond
those which are required by the data.
o E(x) = Σᵢ₌₁ⁿ xᵢ log(xᵢ)
o (Grad E(x))ᵢ = log(xᵢ) + 1
o Hess(E(x)) = diag(1/xᵢ)
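A small sketch of the entropy functional together with the gradient and Hessian formulas above (x must be strictly positive).

```python
import numpy as np

def entropy(x):
    """Shannon-type entropy functional E(x) = sum_i x_i log(x_i), for x_i > 0."""
    return np.sum(x * np.log(x))

def entropy_grad(x):
    """Elementwise gradient: (grad E)_i = log(x_i) + 1."""
    return np.log(x) + 1.0

def entropy_hess(x):
    """Hessian is diagonal: Hess E = diag(1 / x_i)."""
    return np.diag(1.0 / x)

x = np.array([0.2, 0.5, 1.0, 2.0])
print(entropy(x), entropy_grad(x), np.diag(entropy_hess(x)))
```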
Results and Conclusion
Solution Development
Regularization Methods
The need for
Regularization
Mathematical modeling
Understanding the problem
o The solution method proposed in [] is the mathematical
formulation of the linearly constrained convex problem:
 min_(f′) λ₁ ‖k′ f′ − b′‖₂² + λ₂ ‖f′‖₂² + φ(f′)
 s.t. k′ f′ + r = b′
 f′ ≥ 0
o where k′ is the Kronecker tensor product of K1 and K2,
o f′ is the unknown spectrum vector,
o b′ is the transformed measurements vector,
o r is the residual vector,
o and the convex function φ(f′) is either the Entropy function
Ent_a(f′) with a regularization parameter a or the Hybrid
function Hyb_c(f′) with a regularization parameter c.
o One-dimensional image restoration model
[Figures: reconstructions at noise levels of 0.1%, 1%, 5%, and 10%]
o Inverse heat equation
[Figures: reconstructions at noise levels of 0.1%, 1%, 5%, and 10%]
[Result figures, annotated: "Perfect!", "Good peak", "Good shape and position", "No artifacts!", "OK height"]
o Our algorithm produces reconstructions of far
greater quality than the other methods, but at a cost
in convergence time that comes from the need to
tune several parameters.
o In contrast to those methods, our approach maintains the
reconstruction quality regardless of the data structure and data
size.
o We have taken advantage of the inherent stability of
the 2D Picard condition to regularize the solution
and make it less sensitive to perturbations in the
measurement array.
o As a result, all required quantities, such as the gradient and
Hessian-vector products, are computed with reduced
memory storage and computation time.