1D, 2D Laplace Inversion of LR-NMR
Adam Lee Perelman
25/5/2015
o Understanding the problem
o Mathematical modeling
o The need for Regularization
o Regularization Methods
o Solution Development
o Results and Conclusion
Understanding the problem
o The study of the structure and dynamic
behavior of molecules is extremely important
 Medical imaging
 Industrial quality control
 Chemical and pharmaceutical analysis
 Safety inspections
o However, molecules are too small to be
observed and studied directly
 Nuclear magnetic resonance (NMR) is a
versatile and powerful technique for exploring their
structure and dynamic behavior.
o Protons possess spin and, being charged,
therefore have a magnetic moment.
o In an external magnetic field 𝐵0 (𝐵𝑧)
they align parallel or anti-parallel.
o The spinning protons wobble about
the axis of the external magnetic field.
 This motion is called precession.
 Its frequency is given by the
Larmor equation: 𝜔0 = 𝛾𝐵0
o An electromagnetic RF pulse at the
resonance frequency causes the
protons to precess in phase.
o T1 — the longitudinal relaxation, or
spin-lattice relaxation.
 T1 governs the exponential recovery of 𝑀𝑧.
 𝑀𝑧 = 𝑀𝑧,𝑒𝑞 (1 − 𝑒^(−𝑡/𝑇1))
o T2 — the transverse relaxation, or
spin-spin relaxation.
 T2 governs the exponential decay of the signal 𝑀𝑥𝑦.
 𝑀𝑥𝑦 = 𝑀𝑥𝑦,𝑒𝑞 𝑒^(−𝑡/𝑇2)
o Simultaneously, the longitudinal
magnetization begins to increase
again as the excited spins return
to the original 𝑀𝑧 orientation.
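The two relaxation laws above can be evaluated directly. A minimal sketch, assuming illustrative T1/T2 values and time grid (none of these numbers come from the slides):

```python
import numpy as np

# Illustrative sketch: the T1 recovery and T2 decay curves above.
# The relaxation times and time grid are assumptions.
def mz(t, T1, mz_eq=1.0):
    """Longitudinal recovery: Mz(t) = Mz_eq * (1 - exp(-t / T1))."""
    return mz_eq * (1.0 - np.exp(-t / T1))

def mxy(t, T2, mxy_0=1.0):
    """Transverse decay: Mxy(t) = Mxy_0 * exp(-t / T2)."""
    return mxy_0 * np.exp(-t / T2)

t = np.linspace(0.0, 5.0, 6)
print(mz(t, T1=1.0))   # recovers from 0 toward Mz_eq
print(mxy(t, T2=1.0))  # decays from Mxy_0 toward 0
```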
Mathematical modeling
o Fredholm integral equation of the first kind
 𝑠(𝑡) = ∫ (1 − 𝑒^(−𝑡/𝑇1)) 𝑓(𝑇1) 𝑑𝑇1
 𝑠(𝑡) = ∫ 𝑒^(−𝑡/𝑇2) 𝑓(𝑇2) 𝑑𝑇2
 𝑠(𝑡1, 𝑡2) = ∫∫ (1 − 𝑒^(−𝑡1/𝑇1)) 𝑒^(−𝑡2/𝑇2) 𝑓(𝑇1, 𝑇2) 𝑑𝑇1 𝑑𝑇2
o Discretizing the integral
 𝑠 = 𝐾𝑓
 𝑆 = 𝐾1 𝐹𝐾2
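The discretization 𝑠 = 𝐾𝑓 can be sketched for the T2 kernel: sample the acquisition times and a relaxation-time grid, then tabulate the kernel. The grid sizes and ranges below are illustrative assumptions, not values from the slides:

```python
import numpy as np

# Minimal sketch of the discretization s = K f for the T2 kernel.
t = np.linspace(0.01, 3.0, 50)          # acquisition times t_i (assumed)
T2 = np.logspace(-2, 1, 30)             # relaxation-time grid T2_j (assumed)
K = np.exp(-t[:, None] / T2[None, :])   # K[i, j] = exp(-t_i / T2_j)

f = np.zeros(T2.size)
f[12] = 1.0                             # a single relaxation component
s = K @ f                               # noiseless discretized signal
print(K.shape, s.shape)                 # (50, 30) (50,)
```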
o Solving the equation
 𝑠 = 𝐾𝑓
 𝑓 = (𝐾^𝑇 𝐾)^(−1) 𝐾^𝑇 𝑠
o We are done!
o Oh no!
 𝑓 ≠ (𝐾^𝑇 𝐾)^(−1) 𝐾^𝑇 𝑠
 𝐹 ≠ (𝐾1^𝑇 𝐾1)^(−1) 𝐾1^𝑇 𝑆 𝐾2^𝑇 (𝐾2 𝐾2^𝑇)^(−1)
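Why the naive inverse fails can be demonstrated numerically: the exponential kernel is severely ill-conditioned, so a minute perturbation of the data is hugely amplified. A sketch, with assumed grids and noise level:

```python
import numpy as np

# Sketch of the instability: the least-squares inverse amplifies
# tiny noise because K is severely ill-conditioned. Grids assumed.
rng = np.random.default_rng(0)
t = np.linspace(0.01, 3.0, 50)
T2 = np.logspace(-2, 1, 30)
K = np.exp(-t[:, None] / T2[None, :])

f_true = np.exp(-0.5 * (np.arange(30) - 15.0) ** 2 / 4.0)
s = K @ f_true
s_noisy = s + 1e-6 * rng.standard_normal(s.size)   # minute perturbation

f_naive = np.linalg.lstsq(K, s_noisy, rcond=None)[0]
print(np.linalg.cond(K))                 # enormous condition number
print(np.linalg.norm(f_naive - f_true))  # error far exceeds the noise size
```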
o Inverse problems
 Converting the relaxation signal into a
continuous distribution of relaxation components is
an inverse Laplace transform problem.
o Ill-posed problems
 Inverse problems of this kind belong to the class of
ill-posed problems and exhibit extreme
sensitivity to changes in the input.
o Perturbation theory
 That is, even minute perturbations in the data can
vastly affect the computed solution.
The need for Regularization
o Consider, for example, the following system:
𝐴𝑥 = 𝑏
o A is the matrix that describes the model.
o b is the vector that describes the output of the
system.
o x, the solution of the inverse problem,
 is the vector that describes the input of the system.
o An underdetermined problem has infinitely
many solutions.
 (1 1)𝑥 = 1
o The problem is replaced by a nearby problem
 whose solution is less sensitive to errors in the
data.
o This replacement is commonly referred to as
regularization.
o If we require the 2-norm of x to be a minimum,
that is:
 min ‖𝑥‖2 s.t. 𝑥1 + 𝑥2 = 1
o then there is a unique solution at 𝑥1 = 𝑥2 = 1/2.
o This computes an approximate solution to the linear
least-squares minimization problem associated
with the linear system of equations.
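The minimum-norm solution of the slide's example (1 1)𝑥 = 1 can be checked with the pseudoinverse, which returns exactly the minimum-2-norm solution:

```python
import numpy as np

# The underdetermined example (1 1) x = 1: the pseudoinverse picks the
# unique minimum-2-norm solution x1 = x2 = 1/2.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = np.linalg.pinv(A) @ b
print(x)  # [0.5 0.5]
```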
o Assume that the solution x can be separated as
 𝑥 = 𝑥̂ + 𝑥0
o Inserting into the above and rearranging:
 𝐴(𝑥̂ + 𝑥0) = 𝑏
o If 𝐴𝑥0 = 0, then this means that the vector 𝑥0 is a
null vector (it lies in the kernel of A).
o The system behaves like an underdetermined
system.
o To stabilize the solution, enforce an upper
bound on the norm of the solution:
 min ‖𝐴𝑥 − 𝑏‖2^2 s.t. ‖𝑥‖2^2 ≤ 𝛿^2
o From optimization theory, we can incorporate
the constraint via a Lagrange multiplier 𝛾:
 min ‖𝐴𝑥 − 𝑏‖2^2 + 𝛾^2 ‖𝑥‖2^2
Regularization Methods
o Tikhonov regularization
 Perhaps the most successful and widely used
regularization method is Tikhonov
regularization.
o Singular Value Decomposition (SVD)
 The singular value decomposition, in the discrete
setting, is a powerful tool with many useful applications
in signal processing and statistics.
o Specifically, the Tikhonov solution 𝑥𝜆 is defined, for a
strictly positive regularization parameter 𝜆, as
the solution to the problem
 min𝑥 ‖𝐴𝑥 − 𝑏‖2^2 + 𝜆^2 ‖𝑥‖2^2
o The first term, ‖𝐴𝑥 − 𝑏‖2^2, is a measure of the goodness of
fit.
o If this term is too large, then x cannot be considered a
good solution, because we are underfitting the model.
o If this term is too small, then we are overfitting our
model to the noisy measurements.
o If we can control the norm of x, then we can
suppress most of the large noise components.
o The objective is to find a suitable balance between
these two terms via the regularization parameter 𝜆,
such that the regularized solution 𝑥𝜆 fits the
data well and is sufficiently regularized.
o The balance between the two terms is
governed by the factor 𝜆.
o For 𝜆 = 0 we recover the least-squares
problem:
 more weight is given to fitting the noisy data,
resulting in a solution that is less regular.
o The larger the 𝜆, the more effort is
devoted to the regularity of the solution:
 more weight is given to minimizing the
L2-norm of the solution, and so as 𝜆 → ∞ we have
𝑥 → 0.
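The Tikhonov solution and the two limiting regimes above can be sketched via the regularized normal equations (𝐴^𝑇𝐴 + 𝜆²𝐼)𝑥 = 𝐴^𝑇𝑏. The test problem below is an assumption for illustration:

```python
import numpy as np

# Sketch of the Tikhonov solution via regularized normal equations.
def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))   # assumed test problem
b = rng.standard_normal(20)

x_small = tikhonov(A, b, 1e-8)   # lam -> 0: the least-squares solution
x_large = tikhonov(A, b, 1e4)    # lam -> inf: x driven toward 0
print(np.linalg.norm(x_small), np.linalg.norm(x_large))
```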
o Discrepancy principle:
 very likely to overestimate the
regularization parameter.
o L-curve:
 some underestimation expected; very robust.
o Generalized cross-validation (GCV):
 risk of severe over- or underestimation.
o Normalized cumulative periodogram (NCP)
criterion:
 considerable overestimation at low or high noise levels.
o The L-curve is a convenient graphical tool for displaying
the trade-off between the size of a regularized
solution and its fit to the given data, as the
regularization parameter varies.
o Advantages of the L-curve criterion:
 robustness
 ability to treat perturbations consisting of
correlated noise.
o Disadvantage of the L-curve criterion:
 for a low noise level, the regularization parameter
it gives is much smaller than the optimal parameter.
[Figure: L-curve plot, with the vertex, minimum, and optimal points annotated.]
o Formally, the singular value decomposition of
an m × n real or complex matrix A is a
factorization of the form
 𝐴 = 𝑈Σ𝑉∗ = Σ𝑖=1..𝑛 𝜎𝑖 𝑢𝑖 𝑣𝑖^𝑇
o 𝑈 ∈ ℝ^(𝑚×𝑚) and 𝑉 ∈ ℝ^(𝑛×𝑛) are orthogonal matrices.
o Σ ∈ ℝ^(𝑚×𝑛) is a rectangular diagonal matrix with
non-negative real entries 𝜎𝑖:
 𝜎1 ≥ 𝜎2 ≥ ⋯ ≥ 𝜎𝑟 > 𝜎𝑟+1 = ⋯ = 𝜎𝑛 = 0
o As the singular values decrease, the corresponding
singular vectors become more oscillatory and carry less
information.
o The truncated SVD (TSVD) regularization method resolves the issue of
the problematic tiny positive singular values by
setting them to zero.
o The TSVD approximation of A effectively
ignores the smallest singular values.
 𝐴𝑘 = 𝑈Σ𝑘𝑉∗ = Σ𝑖=1..𝑘 𝜎𝑖 𝑢𝑖 𝑣𝑖^𝑇
 Σ𝑘 = diag(𝜎1, 𝜎2, … , 𝜎𝑘, 0, … , 0)
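The TSVD approximation 𝐴𝑘 above can be sketched with numpy's SVD; by the Eckart-Young theorem, ‖𝐴 − 𝐴𝑘‖2 equals the first discarded singular value. The test matrix is an illustrative assumption:

```python
import numpy as np

# Sketch of the TSVD approximation A_k = sum_{i=1..k} sigma_i u_i v_i^T.
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 6))     # assumed test matrix
U, sig, Vt = np.linalg.svd(A, full_matrices=False)

def tsvd_approx(U, sig, Vt, k):
    """Keep the k largest singular triplets; zero out the rest."""
    return (U[:, :k] * sig[:k]) @ Vt[:k]

A2 = tsvd_approx(U, sig, Vt, 2)
print(np.linalg.matrix_rank(A2))          # 2
print(np.linalg.norm(A - A2, 2), sig[2])  # equal, by Eckart-Young
```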
[Figure: TSVD reconstructions; annotations note a peak that is too thin and high, a possible artifact, and the question of a convoluted peak vs. one peak.]
o A necessary condition for obtaining good
regularized solutions is that the Fourier
coefficients of the right-hand side, when
expressed in terms of the generalized SVD
associated with the regularization problem, on
average decay to zero faster than the
generalized singular values.
o In other words, chop off the SVD components that
are dominated by the noise.
o (Note: the need for the Fourier coefficients to
converge has been understood for many years.)
o This condition must be satisfied in order to
obtain "good regularized solutions".
o The Discrete Picard Condition. Let 𝜏 denote the
level at which the computed singular values 𝜎𝑖
level off due to rounding errors. The discrete
Picard condition is satisfied if, for all singular
values larger than 𝜏, the corresponding
coefficients |𝑢𝑖^𝑇 𝑠|, on average, decay faster than
the 𝜎𝑖.
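A Picard-plot check can be sketched by comparing the coefficients |𝑢𝑖^𝑇 𝑠| with the singular values 𝜎𝑖: where the ratio blows up, the components are noise-dominated. Grids and noise level below are assumptions:

```python
import numpy as np

# Sketch of a Picard plot check for the T2 kernel (assumed grids/noise).
rng = np.random.default_rng(3)
t = np.linspace(0.01, 3.0, 50)
T2 = np.logspace(-2, 1, 30)
K = np.exp(-t[:, None] / T2[None, :])
f_true = np.exp(-0.5 * (np.arange(30) - 15.0) ** 2 / 4.0)
s = K @ f_true + 1e-4 * rng.standard_normal(t.size)

U, sig, Vt = np.linalg.svd(K, full_matrices=False)
coeffs = np.abs(U.T @ s)   # |u_i^T s|: levels off at the noise floor
ratios = coeffs / sig      # explodes once sigma_i sinks below the noise
print(ratios[0], ratios[-1])
```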
[Figure: reconstructions after applying the Picard condition; annotations note an artifact, better resolution, and no thin peaks.]
o Recall our 2D problem, 𝑆 = 𝐾1 𝐹𝐾2.
o Transforming the equation back to 1D:
 vec(𝑆) = (𝐾2^𝑇 ⊗ 𝐾1) vec(𝐹)
 𝐾2^𝑇 ⊗ 𝐾1 = (𝑉2 ⊗ 𝑈1)(Σ2 ⊗ Σ1)(𝑈2^𝑇 ⊗ 𝑉1^𝑇)
 𝑠 = 𝜇𝜉𝜈^𝑇 𝑓
o We now define the Picard curve:
 𝜌𝑖 = log|𝜇𝑖^𝑇 𝑠| − log(diag(𝜉)𝑖)
o However, the matrix 𝜇 is extremely large, so we
perform an inverse Kronecker-product operation:
 (𝑉2 ⊗ 𝑈1)^𝑇 𝑠 ↔ 𝑈1^𝑇 𝑆𝑉2
 diag(Σ2 ⊗ Σ1) = (Σ2 ⊗ Σ1) 𝟙𝑚1𝑚2×1 ↔ Σ1 𝟙𝑚1×𝑚2 Σ2^𝑇 = diag(Σ1) × diag^𝑇(Σ2)
o We now define the Picard surface as
 𝜌 = log|𝑈1^𝑇 𝑆𝑉2| − log(𝜎1 × 𝜎2^𝑇)
 where 𝜎1 and 𝜎2 are the vectors of
singular values of the two kernels, respectively.
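The vectorization identity underlying the 1D transformation, vec(𝐾1 𝐹𝐾2) = (𝐾2^𝑇 ⊗ 𝐾1) vec(𝐹), holds for the column-stacking vec and can be verified on small random matrices (the shapes below are illustrative assumptions):

```python
import numpy as np

# Sketch verifying vec(K1 F K2) = (K2^T kron K1) vec(F),
# with column-stacking vec. Shapes are assumptions.
rng = np.random.default_rng(4)
K1 = rng.standard_normal((4, 3))
F = rng.standard_normal((3, 5))
K2 = rng.standard_normal((5, 2))

vec = lambda M: M.reshape(-1, order="F")   # column-stacking vec
lhs = vec(K1 @ F @ K2)
rhs = np.kron(K2.T, K1) @ vec(F)
print(np.allclose(lhs, rhs))  # True
```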
o Assume a simple problem with the
same data structure as our LR-NMR
experiment.
o Signal measurements matrix:
 S: 16384 by 70
 values stored as double precision (8 bytes)
o Our kernel matrices:
 K1: 300 by 70
 K2: 300 by 16384
o We would need about 45 megabytes to store our raw
data measurements.
o In our example, the Picard plot suggested using
only the first 9 singular values of the K1 kernel
and the first 12 singular values of the K2
kernel.
o New signal measurements vector:
 s: 108 by 1
o New kernel matrices:
 K1: 300 by 9
 K2: 300 by 12
o We would need about 0.05 megabytes to store our
raw data measurements.
o Compression ratio of roughly 1,000:1.
o Consider instead our method of mapping the data into a 1D
problem.
o New signal measurements vector:
 s: 1146880
o New kernel matrix:
 K': 1146880 by 90000
o We would need at least 768.9 gigabytes of storage
space.
o Using the 2D Picard condition:
o New signal measurements vector:
 s: 108
o New kernel matrix:
 K': 108 by 90000
o We would need at least 0.073 gigabytes of storage
space.
o Compression ratio of roughly 10,000:1.
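The storage arithmetic above is easy to reproduce, assuming 8-byte doubles throughout:

```python
# Sketch of the storage arithmetic, assuming 8-byte doubles.
BYTES = 8
GIB = 2**30

full_kernel = 1_146_880 * 90_000 * BYTES   # K' without compression
picard_kernel = 108 * 90_000 * BYTES       # K' after the 2D Picard cutoff

print(full_kernel / GIB)     # roughly 769 GiB
print(picard_kernel / GIB)   # roughly 0.07 GiB
```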
o It is impossible to extract more knowledge than the
information contained in the data.
o If the resolution is fictitiously increased, the
information is falsified.
o The regularization parameters are a function of
the distribution f and the signal noise Δs.
o No mathematical basis.
Solution Development
o Define the functional 𝛷(𝑓):
 min 𝛷(𝑓) = ½‖𝐾𝑓 − 𝑠‖2^2 + ½𝜆2^2‖𝑓‖2^2 + 𝜆1‖𝑓‖1
 s.t. 𝑓 ∈ 𝐶
 𝜆1 ≥ 0, 𝜆2 ≥ 0
 𝐶 = {𝑓: 𝑓 ≥ 0, ‖𝑓‖2 < ∞}
o If K ∈ 𝐿2, then 𝛷(𝑓) has a directional derivative,
denoted by ∇𝛷.
o ∇𝛷(𝑓) = ½ ∂/∂𝑓 ⟨𝑒, 𝑒⟩ + ½𝜆2^2 ∂/∂𝑓 ⟨𝑓, 𝑓⟩ + 𝜆1 ∂/∂𝑓 tr(𝑓 ∙ 𝐼)
o ∇𝛷(𝑓) = 𝐾^𝑇(𝐾𝑓 − 𝑠) + 𝜆2^2 𝑓 + 𝜆1
o The Kuhn-Tucker conditions:
 ∇𝛷(𝑓) = 0 → 𝑓 > 0: when the derivative is 0, we are
at a calculus minimum.
 ∇𝛷(𝑓) ≥ 0 → 𝑓 = 0: when it is not, a small decrease of f
would reduce the function value, but that is when
the constraint is reached.
o Rearranging:
 𝑓𝜆1,𝜆2 = max(0, (𝐾^𝑇𝐾 + 𝜆2^2 𝐼)^(−1)(𝐾^𝑇𝑠 − 𝜆1))
o We can use the SVD to obtain more insight into
the Tikhonov solution:
 𝑓𝜆1,𝜆2 = max(0, 𝑉(Σ^2 + 𝜆2^2 𝐼)^(−1)(Σ𝑈^𝑇𝑠 − 𝑉^𝑇𝜆1))
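The equivalence of the two forms above can be checked numerically. A sketch, with assumed problem data and regularization parameters:

```python
import numpy as np

# Sketch checking that the SVD form of the solution matches the
# direct normal-equations form. K, s, and the lambdas are assumptions.
rng = np.random.default_rng(5)
K = rng.standard_normal((15, 8))
s = rng.standard_normal(15)
l1, l2 = 0.1, 0.5

# direct form: max(0, (K^T K + l2^2 I)^(-1) (K^T s - l1))
f_direct = np.maximum(
    0.0, np.linalg.solve(K.T @ K + l2**2 * np.eye(8), K.T @ s - l1)
)

# SVD form: max(0, V (S^2 + l2^2 I)^(-1) (S U^T s - V^T l1))
U, sig, Vt = np.linalg.svd(K, full_matrices=False)
inner = (sig * (U.T @ s) - Vt @ (l1 * np.ones(8))) / (sig**2 + l2**2)
f_svd = np.maximum(0.0, Vt.T @ inner)
print(np.allclose(f_direct, f_svd))  # True
```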
[Figure: reconstruction comparison; annotations note an artifact, a peak with bad shape and position, and a good peak with bad shape.]
 𝛷(𝑓) = ½‖𝐾𝑓 − 𝑠‖2^2 + ½𝜆2^2‖𝑓‖2^2 + 𝜆1‖𝑓‖1
 𝛷(𝑓) = ½‖𝐾𝑓 − 𝑠‖2^2 + ½𝜆2^2‖𝑓‖2^2
 𝛷(𝑓) = ½‖𝐾𝑓 − 𝑠‖2^2 + 𝜆1‖𝑓‖1
 𝛷(𝑓) = ½‖𝐾𝑓 − 𝑠‖2^2
o The Primal-Dual Convex Optimization solver (PDCO) is
a state-of-the-art optimization solver
implemented in Matlab.
o It applies a primal-dual interior method to
linearly constrained optimization problems
with a convex objective function.
o The problems are assumed to be of the following
form:
 min𝑓,𝑟 ‖𝑟‖2^2 + ‖𝐷1𝑓‖2^2 + 𝜑(𝑓)
 s.t. 𝐴𝑓 + 𝐷2𝑟 = 𝑏
 𝑙 ≤ 𝑓 ≤ 𝑢
o Here f and r are variables, and 𝐷1 and 𝐷2 are
positive-definite diagonal matrices.
o Each PDCO iteration generates search directions
∆f and ∆y for the primal variables f and the dual
variables y associated with 𝐴𝑓 + 𝐷2𝑟 = 𝑏.
o Until recently, many models used only the
l2 penalty function, because the solving methods
are simple and fast.
o The introduction of the least-absolute-values penalty, l1,
to model fitting has greatly improved many
applications.
o We adopt a hybrid between l1 and l2.
o Hybrid function 𝐻𝑦𝑏𝑐(𝑓′) = Σ𝑖 𝑔(𝑓′𝑖) with a
regularization parameter c, where
 𝑔(𝑓′𝑖) = 𝑓′𝑖^2 / (2𝑐) if |𝑓′𝑖| ≤ 𝑐
 𝑔(𝑓′𝑖) = |𝑓′𝑖| − 𝑐/2 if |𝑓′𝑖| > 𝑐
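The hybrid penalty above, in its standard Huber form (quadratic inside |𝑓′| ≤ 𝑐, linear beyond, continuous at the boundary — a form reconstructed from the slide's fragments, so treat it as an assumption), can be sketched as:

```python
import numpy as np

# Sketch of the l1/l2 hybrid (Huber-type) penalty; the exact form is
# reconstructed from the slide's fragments and is an assumption.
def hyb(f, c):
    f = np.asarray(f, dtype=float)
    return np.where(np.abs(f) <= c, f**2 / (2.0 * c), np.abs(f) - c / 2.0)

print(hyb([0.0, 0.5, 1.0, 3.0], c=1.0))  # values: 0, 0.125, 0.5, 2.5
```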
o The most popular entropy functional is the Shannon
entropy formula.
o The entropy function is 𝐸𝑛𝑡𝑎(𝑓′) = Σ𝑖 (𝑓′𝑖/𝑎) log(𝑓′𝑖/𝑎), with
a regularization parameter a.
o Its origins are in information theory.
o The motivation:
 it does not introduce correlations into the data beyond
those which are required by the data.
 𝐸(𝑥) = Σ𝑖=1..𝑛 𝑥𝑖 log(𝑥𝑖)
 Grad(𝐸(𝑥)) = log(𝑥) + 1
 Hess(𝐸(𝑥)) = diag(1/𝑥𝑖)
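The entropy functional and its derivatives above can be sketched and sanity-checked with a finite difference (the evaluation point is an assumption):

```python
import numpy as np

# Sketch of E(x) = sum x_i log x_i, Grad E = log(x) + 1, Hess E = diag(1/x_i).
def ent(x):
    return float(np.sum(x * np.log(x)))

def ent_grad(x):
    return np.log(x) + 1.0

def ent_hess(x):
    return np.diag(1.0 / x)

# finite-difference check of the gradient at an assumed positive point
x = np.array([0.5, 1.0, 2.0])
eps = 1e-6
fd = (ent(x + eps * np.array([1.0, 0.0, 0.0])) - ent(x)) / eps
print(fd, ent_grad(x)[0])  # both close to log(0.5) + 1
```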
Results and Conclusion
o The solution method proposed in [] is the mathematical
formulation of the linearly constrained convex problem:
 min𝑓′ 𝜆1‖𝑘′𝑓′ − 𝑏′‖2^2 + 𝜆2‖𝑓′‖2^2 + 𝜑(𝑓′)
 s.t. 𝑘′𝑓′ + 𝑟 = 𝑏′, 𝑓′ ≥ 0
o where k' is the Kronecker tensor product of K1 and K2,
o f' is the unknown spectrum vector,
o b' is the transformed measurements vector,
o r is the residual vector,
o and the convex function 𝜑(𝑓′) is either the entropy function
𝐸𝑛𝑡𝑎(𝑓′) with a regularization parameter a or the hybrid
function 𝐻𝑦𝑏𝑐(𝑓′) with a regularization parameter c.
o One-dimensional image restoration model
 [Figure: reconstructions at noise levels 0.1%, 1%, 5%, 10%]
o Inverse heat equation
 [Figure: reconstructions at noise levels 0.1%, 1%, 5%, 10%]
[Figure annotations across the test cases: "Perfect!", "Good peak", "Good shape and position", "No artifacts!", "OK height".]
o Our algorithm produces reconstructions of far
greater quality than the other methods, but at the cost
of convergence time arising from the need to
tune several parameters.
o In contrast to those methods, our approach maintains
reconstruction quality regardless of the data structure and data
size.
o We have taken advantage of the inherent stability of
the 2D Picard condition to regularize the solution
and make it less sensitive to perturbations in the
measurement array.
o As a result, all required quantities, such as the gradient and
Hessian-vector products, are computed with reduced
memory storage and computation time.

More Related Content

What's hot

Reaction Kinetics
Reaction KineticsReaction Kinetics
Reaction Kineticsmiss j
 
Radiometric titrations
Radiometric titrations  Radiometric titrations
Radiometric titrations
Madan Lal
 
Synthetic Dyes
Synthetic DyesSynthetic Dyes
Chemical Kinetics
Chemical KineticsChemical Kinetics
Chemical Kinetics
LALIT SHARMA
 
Microwave Spectroscopy
Microwave SpectroscopyMicrowave Spectroscopy
Microwave Spectroscopy
krishslide
 
Application of emf measurements for pH Determination by S E Bhandarkar
Application of emf measurements for pH Determination by S E BhandarkarApplication of emf measurements for pH Determination by S E Bhandarkar
Application of emf measurements for pH Determination by S E Bhandarkar
subodhbhandarkar1
 
ELECTROCHEMISTRY - ELECTRICAL DOUBLE LAYER
ELECTROCHEMISTRY - ELECTRICAL DOUBLE LAYERELECTROCHEMISTRY - ELECTRICAL DOUBLE LAYER
ELECTROCHEMISTRY - ELECTRICAL DOUBLE LAYER
Saiva Bhanu Kshatriya College, Aruppukottai.
 
Green Chemistry In Real World Practices Pharmaceutical Industry Experience
Green Chemistry In Real World Practices   Pharmaceutical Industry ExperienceGreen Chemistry In Real World Practices   Pharmaceutical Industry Experience
Green Chemistry In Real World Practices Pharmaceutical Industry Experience
Newreka Green Synth Technologies
 
Born–Oppenheimer Approximation.pdf
Born–Oppenheimer Approximation.pdfBorn–Oppenheimer Approximation.pdf
Born–Oppenheimer Approximation.pdf
Anjali Devi J S
 
radiometric titration.pptx
radiometric titration.pptxradiometric titration.pptx
radiometric titration.pptx
MuhammadShafiq649632
 
CARDIOVASCULAR AGENTS / CLASSFICATION / MOA/ APPICATIONS
CARDIOVASCULAR AGENTS / CLASSFICATION / MOA/ APPICATIONSCARDIOVASCULAR AGENTS / CLASSFICATION / MOA/ APPICATIONS
CARDIOVASCULAR AGENTS / CLASSFICATION / MOA/ APPICATIONS
Shikha Popali
 
Chain Reactions
Chain ReactionsChain Reactions
Chain Reactions
Arvind Singh Heer
 
Dielectric constant and polarizibality
Dielectric constant and polarizibalityDielectric constant and polarizibality
Dielectric constant and polarizibality
junaidhassan71
 
Potentiometry titration
Potentiometry titrationPotentiometry titration
Potentiometry titration
Anoop Singh
 
Viscosity measurement
Viscosity measurementViscosity measurement
Viscosity measurement
Nur Fatihah
 
Paterno buchi reaction
Paterno buchi reactionPaterno buchi reaction
Paterno buchi reaction
Harish Chopra
 
Lecture
LectureLecture
Lecture
Talha Javed
 
Polarography- Pharmaceutical Analysis
Polarography- Pharmaceutical AnalysisPolarography- Pharmaceutical Analysis
Polarography- Pharmaceutical Analysis
Sanchit Dhankhar
 

What's hot (20)

Reaction Kinetics
Reaction KineticsReaction Kinetics
Reaction Kinetics
 
Radiometric titrations
Radiometric titrations  Radiometric titrations
Radiometric titrations
 
Synthetic Dyes
Synthetic DyesSynthetic Dyes
Synthetic Dyes
 
Chemical Kinetics
Chemical KineticsChemical Kinetics
Chemical Kinetics
 
Microwave Spectroscopy
Microwave SpectroscopyMicrowave Spectroscopy
Microwave Spectroscopy
 
Application of emf measurements for pH Determination by S E Bhandarkar
Application of emf measurements for pH Determination by S E BhandarkarApplication of emf measurements for pH Determination by S E Bhandarkar
Application of emf measurements for pH Determination by S E Bhandarkar
 
ELECTROCHEMISTRY - ELECTRICAL DOUBLE LAYER
ELECTROCHEMISTRY - ELECTRICAL DOUBLE LAYERELECTROCHEMISTRY - ELECTRICAL DOUBLE LAYER
ELECTROCHEMISTRY - ELECTRICAL DOUBLE LAYER
 
Green Chemistry In Real World Practices Pharmaceutical Industry Experience
Green Chemistry In Real World Practices   Pharmaceutical Industry ExperienceGreen Chemistry In Real World Practices   Pharmaceutical Industry Experience
Green Chemistry In Real World Practices Pharmaceutical Industry Experience
 
Born–Oppenheimer Approximation.pdf
Born–Oppenheimer Approximation.pdfBorn–Oppenheimer Approximation.pdf
Born–Oppenheimer Approximation.pdf
 
radiometric titration.pptx
radiometric titration.pptxradiometric titration.pptx
radiometric titration.pptx
 
CARDIOVASCULAR AGENTS / CLASSFICATION / MOA/ APPICATIONS
CARDIOVASCULAR AGENTS / CLASSFICATION / MOA/ APPICATIONSCARDIOVASCULAR AGENTS / CLASSFICATION / MOA/ APPICATIONS
CARDIOVASCULAR AGENTS / CLASSFICATION / MOA/ APPICATIONS
 
Chapter 7 activity
Chapter 7 activityChapter 7 activity
Chapter 7 activity
 
Chain Reactions
Chain ReactionsChain Reactions
Chain Reactions
 
Dielectric constant and polarizibality
Dielectric constant and polarizibalityDielectric constant and polarizibality
Dielectric constant and polarizibality
 
Potentiometry titration
Potentiometry titrationPotentiometry titration
Potentiometry titration
 
Viscosity measurement
Viscosity measurementViscosity measurement
Viscosity measurement
 
Paterno buchi reaction
Paterno buchi reactionPaterno buchi reaction
Paterno buchi reaction
 
Lecture
LectureLecture
Lecture
 
potentiometry
potentiometrypotentiometry
potentiometry
 
Polarography- Pharmaceutical Analysis
Polarography- Pharmaceutical AnalysisPolarography- Pharmaceutical Analysis
Polarography- Pharmaceutical Analysis
 

Similar to 1 d,2d laplace inversion of lr nmr

Intro to Quant Trading Strategies (Lecture 7 of 10)
Intro to Quant Trading Strategies (Lecture 7 of 10)Intro to Quant Trading Strategies (Lecture 7 of 10)
Intro to Quant Trading Strategies (Lecture 7 of 10)
Adrian Aley
 
sbs.pdf
sbs.pdfsbs.pdf
Ep 5512 lecture-02
Ep 5512 lecture-02Ep 5512 lecture-02
Ep 5512 lecture-02
Kindshih Berihun
 
G04123844
G04123844G04123844
Signals And Systems Assignment Help
Signals And Systems Assignment HelpSignals And Systems Assignment Help
Signals And Systems Assignment Help
Matlab Assignment Experts
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3
Fabian Pedregosa
 
Numerical Solution of Diffusion Equation by Finite Difference Method
Numerical Solution of Diffusion Equation by Finite Difference MethodNumerical Solution of Diffusion Equation by Finite Difference Method
Numerical Solution of Diffusion Equation by Finite Difference Method
iosrjce
 
Chapter24rev1.pptPart 6Chapter 24Boundary-Valu.docx
Chapter24rev1.pptPart 6Chapter 24Boundary-Valu.docxChapter24rev1.pptPart 6Chapter 24Boundary-Valu.docx
Chapter24rev1.pptPart 6Chapter 24Boundary-Valu.docx
tiffanyd4
 
Introduction to Quantum Monte Carlo
Introduction to Quantum Monte CarloIntroduction to Quantum Monte Carlo
Introduction to Quantum Monte Carlo
Claudio Attaccalite
 
Optimization
OptimizationOptimization
Optimization
yesheeka
 
Linear regression [Theory and Application (In physics point of view) using py...
Linear regression [Theory and Application (In physics point of view) using py...Linear regression [Theory and Application (In physics point of view) using py...
Linear regression [Theory and Application (In physics point of view) using py...
ANIRBANMAJUMDAR18
 
Finite Element Methods
Finite Element  MethodsFinite Element  Methods
Finite Element Methods
Dr.Vikas Deulgaonkar
 
Schrodinger Equation of Hydrogen Atom
Schrodinger Equation of Hydrogen AtomSchrodinger Equation of Hydrogen Atom
Schrodinger Equation of Hydrogen Atom
Saad Shaukat
 
lecture_09.pptx
lecture_09.pptxlecture_09.pptx
lecture_09.pptx
PeruruFamidaNajumun
 
DOMV No 8 MDOF LINEAR SYSTEMS - RAYLEIGH'S METHOD - FREE VIBRATION.pdf
DOMV No 8  MDOF LINEAR SYSTEMS - RAYLEIGH'S METHOD  - FREE VIBRATION.pdfDOMV No 8  MDOF LINEAR SYSTEMS - RAYLEIGH'S METHOD  - FREE VIBRATION.pdf
DOMV No 8 MDOF LINEAR SYSTEMS - RAYLEIGH'S METHOD - FREE VIBRATION.pdf
ahmedelsharkawy98
 
머피의 머신러닝: 17장 Markov Chain and HMM
머피의 머신러닝: 17장  Markov Chain and HMM머피의 머신러닝: 17장  Markov Chain and HMM
머피의 머신러닝: 17장 Markov Chain and HMMJungkyu Lee
 
Paper study: Learning to solve circuit sat
Paper study: Learning to solve circuit satPaper study: Learning to solve circuit sat
Paper study: Learning to solve circuit sat
ChenYiHuang5
 
Klt
KltKlt
B02402012022
B02402012022B02402012022
B02402012022
inventionjournals
 

Similar to 1 d,2d laplace inversion of lr nmr (20)

Intro to Quant Trading Strategies (Lecture 7 of 10)
Intro to Quant Trading Strategies (Lecture 7 of 10)Intro to Quant Trading Strategies (Lecture 7 of 10)
Intro to Quant Trading Strategies (Lecture 7 of 10)
 
sbs.pdf
sbs.pdfsbs.pdf
sbs.pdf
 
Ep 5512 lecture-02
Ep 5512 lecture-02Ep 5512 lecture-02
Ep 5512 lecture-02
 
G04123844
G04123844G04123844
G04123844
 
Signals And Systems Assignment Help
Signals And Systems Assignment HelpSignals And Systems Assignment Help
Signals And Systems Assignment Help
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3
 
Numerical Solution of Diffusion Equation by Finite Difference Method
Numerical Solution of Diffusion Equation by Finite Difference MethodNumerical Solution of Diffusion Equation by Finite Difference Method
Numerical Solution of Diffusion Equation by Finite Difference Method
 
Chapter24rev1.pptPart 6Chapter 24Boundary-Valu.docx
Chapter24rev1.pptPart 6Chapter 24Boundary-Valu.docxChapter24rev1.pptPart 6Chapter 24Boundary-Valu.docx
Chapter24rev1.pptPart 6Chapter 24Boundary-Valu.docx
 
Introduction to Quantum Monte Carlo
Introduction to Quantum Monte CarloIntroduction to Quantum Monte Carlo
Introduction to Quantum Monte Carlo
 
Optimization
OptimizationOptimization
Optimization
 
Linear regression [Theory and Application (In physics point of view) using py...
Linear regression [Theory and Application (In physics point of view) using py...Linear regression [Theory and Application (In physics point of view) using py...
Linear regression [Theory and Application (In physics point of view) using py...
 
Finite Element Methods
Finite Element  MethodsFinite Element  Methods
Finite Element Methods
 
final_report
final_reportfinal_report
final_report
 
Schrodinger Equation of Hydrogen Atom
Schrodinger Equation of Hydrogen AtomSchrodinger Equation of Hydrogen Atom
Schrodinger Equation of Hydrogen Atom
 
lecture_09.pptx
lecture_09.pptxlecture_09.pptx
lecture_09.pptx
 
DOMV No 8 MDOF LINEAR SYSTEMS - RAYLEIGH'S METHOD - FREE VIBRATION.pdf
DOMV No 8  MDOF LINEAR SYSTEMS - RAYLEIGH'S METHOD  - FREE VIBRATION.pdfDOMV No 8  MDOF LINEAR SYSTEMS - RAYLEIGH'S METHOD  - FREE VIBRATION.pdf
DOMV No 8 MDOF LINEAR SYSTEMS - RAYLEIGH'S METHOD - FREE VIBRATION.pdf
 
머피의 머신러닝: 17장 Markov Chain and HMM
머피의 머신러닝: 17장  Markov Chain and HMM머피의 머신러닝: 17장  Markov Chain and HMM
머피의 머신러닝: 17장 Markov Chain and HMM
 
Paper study: Learning to solve circuit sat
Paper study: Learning to solve circuit satPaper study: Learning to solve circuit sat
Paper study: Learning to solve circuit sat
 
Klt
KltKlt
Klt
 
B02402012022
B02402012022B02402012022
B02402012022
 

Recently uploaded

Machine learning and optimization techniques for electrical drives.pptx
Machine learning and optimization techniques for electrical drives.pptxMachine learning and optimization techniques for electrical drives.pptx
Machine learning and optimization techniques for electrical drives.pptx
balafet
 
The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...
jerlynmaetalle
 
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
nscud
 
Criminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdfCriminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdf
Criminal IP
 
Opendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptxOpendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptx
Opendatabay
 
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
ukgaet
 
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
ahzuo
 
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
axoqas
 
一比一原版(YU毕业证)约克大学毕业证成绩单
一比一原版(YU毕业证)约克大学毕业证成绩单一比一原版(YU毕业证)约克大学毕业证成绩单
一比一原版(YU毕业证)约克大学毕业证成绩单
enxupq
 
Empowering Data Analytics Ecosystem.pptx
Empowering Data Analytics Ecosystem.pptxEmpowering Data Analytics Ecosystem.pptx
Empowering Data Analytics Ecosystem.pptx
benishzehra469
 
standardisation of garbhpala offhgfffghh
standardisation of garbhpala offhgfffghhstandardisation of garbhpala offhgfffghh
standardisation of garbhpala offhgfffghh
ArpitMalhotra16
 
Criminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdfCriminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdf
Criminal IP
 
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
74nqk8xf
 
一比一原版(QU毕业证)皇后大学毕业证成绩单
一比一原版(QU毕业证)皇后大学毕业证成绩单一比一原版(QU毕业证)皇后大学毕业证成绩单
一比一原版(QU毕业证)皇后大学毕业证成绩单
enxupq
 
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
nscud
 
Influence of Marketing Strategy and Market Competition on Business Plan
Influence of Marketing Strategy and Market Competition on Business PlanInfluence of Marketing Strategy and Market Competition on Business Plan
Influence of Marketing Strategy and Market Competition on Business Plan
jerlynmaetalle
 
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
slg6lamcq
 
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project PresentationPredicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Predicting Product Ad Campaign Performance: A Data Analysis Project Presentation
Boston Institute of Analytics
 
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
Timothy Spann
 
一比一原版(UofM毕业证)明尼苏达大学毕业证成绩单
一比一原版(UofM毕业证)明尼苏达大学毕业证成绩单一比一原版(UofM毕业证)明尼苏达大学毕业证成绩单
一比一原版(UofM毕业证)明尼苏达大学毕业证成绩单
ewymefz
 

Recently uploaded (20)

Machine learning and optimization techniques for electrical drives.pptx
Machine learning and optimization techniques for electrical drives.pptxMachine learning and optimization techniques for electrical drives.pptx
Machine learning and optimization techniques for electrical drives.pptx
 
The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...
 
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
一比一原版(CBU毕业证)不列颠海角大学毕业证成绩单
 
Criminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdfCriminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdf
 
Opendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptxOpendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptx
 
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
 
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
 
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样

1D, 2D Laplace Inversion of LR-NMR

  • 2. o Understanding the problem o Mathematical modeling o The need for Regularization o Regularization Methods o Solution Development o Results and Conclusion
  • 3. Results and Conclusion Solution Development Regularization Methods The need for Regularization Mathematical modeling Understanding the problem
  • 4. o The study of the structure and dynamic behavior of molecules is extremely important  Medical imaging  Industrial quality control  Chemical and pharmaceutical analysis  Safety inspections o However, molecules are too small to be observed and studied directly  Nuclear magnetic resonance (NMR) is a versatile and powerful technique for exploring their structure and dynamic behavior.
  • 5. o Protons carry electric charge and possess spin; as a result, they have a magnetic moment. o In an external magnetic field B₀ (along z) they align parallel or anti-parallel. o The spinning protons wobble about the axis of the external magnetic field.  This motion is called precession.  Its rate is given by the Larmor equation: ω₀ = γB₀ o An electromagnetic RF pulse at the resonance frequency causes the protons to precess in phase
  • 6. o T1 - the longitudinal relaxation, or spin-lattice relaxation.  T1 governs the exponential recovery of Mz.  Mz = Mz,eq (1 − e^(−t/T1)) o T2 - the transversal relaxation, or spin-spin relaxation.  T2 governs the exponential decay of the signal, Mxy.  Mxy = Mxy,eq e^(−t/T2) o Simultaneously, the longitudinal magnetization begins to increase again as the excited spins return to the original Mz orientation.
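The two relaxation formulas above can be evaluated directly. A minimal sketch in Python; the time constants, units, and equilibrium magnetization are illustrative assumptions, not values from the deck:

```python
import numpy as np

# Illustrative relaxation constants (ms) and equilibrium magnetization;
# the specific values are assumptions for this sketch.
T1, T2, M_eq = 500.0, 100.0, 1.0
t = np.linspace(0.0, 2000.0, 5)        # sampling times in ms

Mz = M_eq * (1 - np.exp(-t / T1))      # longitudinal (T1) recovery
Mxy = M_eq * np.exp(-t / T2)           # transverse (T2) decay

# Mz climbs toward M_eq while Mxy decays toward zero.
```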
  • 7. Results and Conclusion Solution Development Regularization Methods The need for Regularization Mathematical modeling Understanding the problem
  • 8. o Fredholm integral equation of the first kind  s(t) = ∫ (1 − e^(−t/T1)) f(T1) dT1  s(t) = ∫ e^(−t/T2) f(T2) dT2  s(t1, t2) = ∫∫ (1 − e^(−t1/T1)) (e^(−t2/T2)) f(T1, T2) dT1 dT2 o Discretizing the integrals  s = K f  S = K1 F K2
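Discretizing the first (T1) integral on finite grids yields the matrix form s = K f. A sketch with hypothetical grids and a hypothetical distribution f (all values are assumptions for illustration):

```python
import numpy as np

t = np.linspace(0.01, 5.0, 50)            # acquisition times (assumed units)
T1 = np.logspace(-1, 1, 30)               # relaxation-time grid (assumed)
K = 1 - np.exp(-np.outer(t, 1.0 / T1))    # K[i, j] = 1 - exp(-t_i / T1_j)

f = np.exp(-0.5 * np.log(T1) ** 2)        # hypothetical smooth T1 distribution
s = K @ f                                 # forward model: s = K f
```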
  • 9. o Solving the equation  s = K f  f = (KᵀK)⁻¹ Kᵀ s o We are done! o Oh no!  f ≠ (KᵀK)⁻¹ Kᵀ s  F ≠ (K1ᵀK1)⁻¹ K1ᵀ S K2ᵀ (K2 K2ᵀ)⁻¹
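Why the normal-equations inverse fails here can be seen numerically: the exponential kernel has singular values that decay to numerical zero, so (KᵀK)⁻¹Kᵀs divides by those tiny values and amplifies any noise. A sketch on the same hypothetical grids as above:

```python
import numpy as np

t = np.linspace(0.01, 5.0, 50)
T1 = np.logspace(-1, 1, 30)
K = 1 - np.exp(-np.outer(t, 1.0 / T1))

# The normal-equations "solution" (K^T K)^{-1} K^T s divides by singular
# values that are numerically zero, so any noise in s is amplified enormously.
print(np.linalg.cond(K))   # astronomically large condition number
```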
  • 10. o Inverse problems  Conversion of the relaxation signal into a continuous distribution of relaxation components is an inverse Laplace transform problem. o Ill-posed problems  Inverse problems belong to a class of ill-posed problems and frequently exhibit extreme sensitivity to changes in the input. o Perturbation theory  That is, even minute perturbations in the data can vastly affect the computed solution.
  • 11. Results and Conclusion Solution Development Regularization Methods The need for Regularization Mathematical modeling Understanding the problem
  • 12. o Consider, for example, the following system: 𝐴𝑥 = 𝑏 o A is the matrix that describes the model. o b is the vector that describes the output of the system. o x is the solution of the inverse problem  the vector that describes the input of the system.
  • 13. o An underdetermined problem has infinitely many solutions.  (1 1)𝑥 = 1 o The problem is replaced by a nearby problem  where the solution is less sensitive to errors in the data. o This replacement is commonly referred to as regularization
  • 14. o If we require the 2-norm of x to be a minimum, that is:  min ‖x‖₂ s.t. x₁ + x₂ = 1 o then there is a unique solution at x₁ = x₂ = 1/2 o This computes an approximate solution to the linear least-squares minimization problem associated with the linear system of equations.
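The minimum-norm solution of the underdetermined example above can be computed with the pseudoinverse; a one-line check:

```python
import numpy as np

A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Minimum-2-norm solution of the underdetermined system (1 1) x = 1.
x = np.linalg.pinv(A) @ b
print(x)   # [0.5 0.5]
```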
  • 15. o Assume that the solution x can be separated as x = x̂ + x₀. o Inserting into the above and rearranging gives A(x̂ + x₀) = b. o If Ax₀ = 0, this means that the vector x₀ is a null vector (it lies in the kernel of A). o The system behaves like an underdetermined system
  • 16. o To stabilize the solution, we enforce an upper bound on the norm of the solution:  min ‖Ax − b‖₂² s.t. ‖x‖₂² ≤ δ² o From optimization theory, we can incorporate the constraint via a Lagrange multiplier γ:  min ‖Ax − b‖₂² + γ(‖x‖₂² − δ²)
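The penalized form reduces to solving a shifted normal-equations system. A sketch with a random model matrix and an assumed multiplier value, showing that the penalty indeed shrinks the solution norm relative to plain least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # hypothetical model matrix
b = rng.standard_normal(20)         # hypothetical output vector
gamma = 0.5                         # assumed value of the multiplier

# Penalized form: min ||Ax - b||_2^2 + gamma * ||x||_2^2
x_reg = np.linalg.solve(A.T @ A + gamma * np.eye(10), A.T @ b)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# The penalty shrinks the solution norm relative to plain least squares.
print(np.linalg.norm(x_reg), np.linalg.norm(x_ls))
```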
  • 17. Results and Conclusion Solution Development Regularization Methods The need for Regularization Mathematical modeling Understanding the problem
  • 18. o Tikhonov regularization  Perhaps the most successful and widely used regularization method is Tikhonov regularization. o Singular Value Decomposition (SVD)  The singular value decomposition, in the discrete setting, is a powerful tool for many useful applications in signal processing and statistics.
  • 19. o Specifically, the Tikhonov solution xλ is defined, for the strictly positive regularization parameter λ, as the solution to the problem min 𝑥 ‖𝐴𝑥 − 𝑏‖₂² + 𝜆²‖𝑥‖₂² o The first term ‖𝐴𝑥 − 𝑏‖₂² is a measure of the goodness of fit o If this term is too large, then x cannot be considered a good solution, because we are underfitting the model. o If this term is too small, then we are overfitting our model to the noisy measurements.
  • 20. o If we can control the norm of x, then we can suppress most of the large noise components. o The objective is to find a suitable balance between these two terms via the regularization parameter λ, such that the regularized solution xλ fits the data well and is sufficiently regularized. o The balance between the two terms is controlled by the factor λ.
  • 21. o It is obvious that for λ = 0 we obtain the least-squares problem  more weight is given to fitting the noisy data, resulting in a solution that is less regular. o However, the larger the λ, the more effort is devoted to the regularity of the solution.  more weight is given to minimizing the L2-norm of the solution, and so as 𝜆 → ∞ we have 𝑥 → 0.
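Both limits can be checked numerically with a toy Tikhonov solver (random problem data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 5))
b = rng.standard_normal(15)

def tikhonov(lam):
    # Solution of min ||Ax - b||^2 + lam^2 ||x||^2 via the normal equations.
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(5), A.T @ b)

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(tikhonov(0.0), x_ls)          # lam = 0: least squares
assert np.linalg.norm(tikhonov(1e6)) < 1e-6      # lam -> inf: x -> 0
```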
  • 22.
  • 23. o Discrepancy Principle:  very likely to overestimate the regularization parameter. o L-Curve:  some underestimation expected; very robust. o Generalized Cross-Validation (GCV):  risk of severe over- or underestimation. o Normalized Cumulative Periodogram (NCP) Criterion:  considerable overestimation for low or high noise levels.
  • 24. o It is a convenient graphical tool for displaying the trade-off between the size of a regularized solution and its fit to the given data, as the regularization parameter varies. o Advantages of the L-curve criterion are  robustness  the ability to treat perturbations consisting of correlated noise. o A disadvantage of the L-curve criterion is that  for a low noise level, the regularization parameter it gives is much smaller than the optimal parameter.
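The L-curve points are simply (residual norm, solution norm) pairs, one per λ; a sketch on a random problem (all data here are illustrative, and monotonicity of the two norms in λ is what gives the curve its L shape):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 20))
b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(40)

# One (residual norm, solution norm) point per lambda; on log-log axes these
# points trace the L-curve, whose corner balances fit against regularity.
lams = np.logspace(-4, 2, 7)
res, sol = [], []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam**2 * np.eye(20), A.T @ b)
    res.append(np.linalg.norm(A @ x - b))
    sol.append(np.linalg.norm(x))

# Residual norm grows and solution norm shrinks as lambda increases.
```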
  • 27. o Formally, the singular value decomposition of an m × n real or complex matrix A is a factorization of the form A = UΣV* = Σᵢ₌₁ⁿ σᵢ uᵢ vᵢᵀ o U ∈ ℝ^(m×m) and V ∈ ℝ^(n×n) are orthogonal matrices o Σ ∈ ℝ^(m×n) is a rectangular diagonal matrix with non-negative real numbers σᵢ  σ₁ ≥ σ₂ ≥ ⋯ ≥ σᵣ > σᵣ₊₁ = ⋯ = σₙ = 0
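The factorization and the ordering of the singular values can be verified directly with NumPy (random matrix, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))

U, sigma, Vt = np.linalg.svd(A)         # A = U Sigma V^T
Sigma = np.zeros((6, 4))
np.fill_diagonal(Sigma, sigma)

assert np.allclose(U @ Sigma @ Vt, A)   # the factorization holds
assert np.all(np.diff(sigma) <= 0)      # singular values are non-increasing
```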
  • 28. o As the singular values decrease, the corresponding singular vectors become more chaotic and carry less information.
  • 29.
  • 30. o This regularization method resolves the issue of the problematic tiny positive singular values by setting them to zero. o The TSVD approximation of A effectively ignores the smallest singular values:  Aₖ = U Σₖ V* = Σᵢ₌₁ᵏ σᵢ uᵢ vᵢᵀ  Σₖ = diag(σ₁, σ₂, …, σₖ, 0, …, 0)
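A minimal TSVD solver keeps the k largest singular values and drops the rest; the toy system below is an illustrative nearly rank-deficient example, not data from the deck:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """TSVD solution: keep the k largest singular values, treat the rest as zero."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Toy nearly rank-deficient system (illustrative values).
A = np.array([[1.0, 1.0], [1.0, 1.0000001]])
b = np.array([2.0, 2.0])
print(tsvd_solve(A, b, 1))   # stable rank-1 solution, close to [1, 1]
```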
  • 32. o A necessary condition for obtaining good regularized solutions is that the Fourier coefficients of the right-hand side, when expressed in terms of the generalized SVD associated with the regularization problem, on average decay to zero faster than the generalized singular values. o In other words, chop off the SVD components that are dominated by the noise. o (Note: the need for the Fourier coefficients to converge has been understood for many years.)
  • 33. o This condition must be satisfied in order to obtain “good regularized solutions”. o The Discrete Picard Condition: let τ denote the level at which the computed singular values σᵢ level off due to rounding errors. The discrete Picard condition is satisfied if, for all singular values larger than τ, the corresponding coefficients |uᵢᵀs|, on average, decay faster than the σᵢ.
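The quantities in the condition are easy to compute; a sketch on a hypothetical T2 decay kernel with exact (noise-free) data, where the coefficients |uᵢᵀs| decay rapidly until rounding takes over:

```python
import numpy as np

t = np.linspace(0.05, 3.0, 40)
T2 = np.logspace(-1, 1, 40)
K = np.exp(-np.outer(t, 1.0 / T2))          # hypothetical T2 decay kernel
s = K @ np.exp(-np.log10(T2) ** 2)          # exact (noise-free) data

U, sigma, Vt = np.linalg.svd(K)
coeffs = np.abs(U.T @ s)                    # Fourier coefficients |u_i^T s|

# For exact data the |u_i^T s| decay rapidly; with noisy data they level off
# at the noise floor, which is exactly what a Picard plot makes visible.
```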
  • 34.
  • 36. o Recall our 2D problem, S = K1 F K2. o Transforming the equation back to 1D:  vec(S) = (K2ᵀ ⊗ K1) vec(F)  K2ᵀ ⊗ K1 = (V2 ⊗ U1)(Σ2 ⊗ Σ1)(U2ᵀ ⊗ V1ᵀ)  s = μ ξ νᵀ f o We now define the Picard curve:  ρ = log|μᵢᵀ s| − log(diag(ξ)ᵢ)
  • 37. o However, the matrix μ is extremely large, so we perform an inverse Kronecker-product operation:  (V2 ⊗ U1)ᵀ s ↔ U1ᵀ S V2  diag(Σ2 ⊗ Σ1) = (Σ2 ⊗ Σ1) 𝕀_(m1m2×1) ↔ Σ1 𝕀_(m1×m2) Σ2ᵀ = diag(Σ1) × diagᵀ(Σ2) o We now define the Picard surface as  ρ = log|U1ᵀ S V2| − log(σ1 × σ2ᵀ)  where σ1 and σ2 are the vectors of singular values of K1 and K2 respectively.
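The inverse Kronecker step rests on the identity vec(AXB) = (Bᵀ ⊗ A) vec(X). A small numerical check, where random orthogonal matrices stand in for U1 and V2 and a random S stands in for the measurement matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
U1 = np.linalg.qr(rng.standard_normal((4, 4)))[0]   # stand-in for U1
V2 = np.linalg.qr(rng.standard_normal((3, 3)))[0]   # stand-in for V2
S = rng.standard_normal((4, 3))                     # stand-in for S

vec = lambda X: X.flatten(order="F")    # column-stacking vectorization

# (V2 kron U1)^T vec(S) equals vec(U1^T S V2), so the giant Kronecker
# matrix never needs to be formed explicitly.
lhs = np.kron(V2, U1).T @ vec(S)
rhs = vec(U1.T @ S @ V2)
assert np.allclose(lhs, rhs)
```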
  • 38.
  • 39. o Assume a simple problem with the same data structure as our LR-NMR experiment o signal measurements matrix  S: 16384 by 70  values stored in double precision (8 bytes) o our kernel matrices  K1: 300 by 70  K2: 300 by 16384 o we would need about 45 megabytes to store our raw data measurements.
  • 40. o In our example, the Picard plot suggested using only the first 9 singular values of the K1 kernel and the first 12 singular values of the K2 kernel o new signal measurements vector  s: 108 by 1 o new kernel matrices  K1: 300 by 9  K2: 300 by 12 o we would need about 0.05 megabytes to store our raw data measurements. o Compression ratio of roughly 1,000:1
  • 41. o Consider our method of mapping the data into a 1D problem o new signal measurements vector  s: 1,146,880 by 1 o new kernel matrix  K′: 1,146,880 by 90,000 o we would need at least 768.9 gigabytes of storage space. o Using the 2D Picard condition: o new signal measurements vector  s: 108 by 1 o new kernel matrix  K′: 108 by 90,000 o we would need at least 0.073 gigabytes of storage space. o Compression ratio of roughly 10,000:1
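The storage figures above follow from simple byte counting (double precision, 1 GiB = 1024³ bytes); a quick check of the arithmetic:

```python
BYTES = 8                      # double precision
m = 16384 * 70                 # stacked measurement vector length: 1,146,880
n = 300 * 300                  # unknown spectrum length: 90,000

full_gb = m * n * BYTES / 1024**3        # kernel for the naive 1D mapping
reduced_gb = 108 * n * BYTES / 1024**3   # kernel after the 2D Picard truncation
print(full_gb, reduced_gb)               # roughly 769 GiB vs 0.07 GiB
```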
  • 42. o It is impossible to extract more knowledge than the information contained in the data. o If the resolution is fictitiously increased, the information is falsified. o The regularization parameters are a function of the distribution f and the signal noise Δs. o No mathematical basis.
  • 43. Results and Conclusion Solution Development Regularization Methods The need for Regularization Mathematical modeling Understanding the problem
  • 44. o Define the functional Φ(f)  min Φ(f) = ½‖Kf − s‖₂² + ½λ₂‖f‖₂² + λ₁‖f‖₁  s.t. f ∈ C  λ₁ ≥ 0, λ₂ ≥ 0  L₂: C = {f : f(T) ≥ 0, ‖f‖₂ < ∞} o If K ∈ L₂, then Φ(f) has a directional derivative, denoted by ∇Φ. o ∇Φ(f) = ½ ∂/∂f ⟨e, e⟩ + ½λ₂ ∂/∂f ⟨f, f⟩ + λ₁ ∂/∂f tr(f · I) o ∇Φ(f) = K′(Kf − s) + λ₂f + λ₁
  • 45. o The Kuhn-Tucker conditions:  ∇Φ(f) = 0 → f > 0: when the derivative is 0, we are at a calculus minimum.  ∇Φ(f) ≥ 0 → f = 0: when it is not, a small decrease of f would reduce the function value; however, that is when the constraint is reached. o Rearranging:  f_(λ₁,λ₂) = max(0, (K′K + λ₂I)⁻¹(K′s − λ₁)) o We can use the SVD to obtain more insight into the Tikhonov solution:  f_(λ₁,λ₂) = max(0, V(Σ² + λ₂I)⁻¹(ΣU′s − V′λ₁))
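The clipped closed form above is straightforward to implement; a sketch with random stand-in data (this illustrates only the max(0, ·) formula, not the full solver developed later):

```python
import numpy as np

def nn_tikhonov(K, s, lam1, lam2):
    """Clipped Tikhonov solution max(0, (K'K + lam2*I)^(-1) (K's - lam1)),
    as suggested by the Kuhn-Tucker conditions; a sketch, not the full solver."""
    n = K.shape[1]
    return np.maximum(0.0, np.linalg.solve(K.T @ K + lam2 * np.eye(n),
                                           K.T @ s - lam1))

rng = np.random.default_rng(4)
K = rng.standard_normal((30, 8))    # stand-in kernel
s = rng.standard_normal(30)         # stand-in measurements
f = nn_tikhonov(K, s, lam1=0.1, lam2=1.0)
print(f.min())   # never negative: the constraint f >= 0 is enforced
```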
  • 46. Artifact! Bad shape and position. Good peak. Bad shape.  Φ(f) = ½‖Kf − s‖₂² + ½λ₂‖f‖₂² + λ₁‖f‖₁
  • 47.  Φ(f) = ½‖Kf − s‖₂² + ½λ₂‖f‖₂²
  • 48.  Φ(f) = ½‖Kf − s‖₂² + λ₁‖f‖₁
  • 49.  Φ(f) = ½‖Kf − s‖₂²
  • 50. o The Primal-Dual Convex Optimization (PDCO) solver is a state-of-the-art optimization solver implemented in Matlab. o It applies a primal-dual interior method to linearly constrained optimization problems with a convex objective function.
  • 51. o The problems are assumed to be of the following form:  min_(f,r) ‖r‖₂² + ‖D₁f‖₂² + φ(f)  s.t. Af + D₂r = b, l ≤ f ≤ u o where f and r are variables, and D₁ and D₂ are positive-definite diagonal matrices. o Each PDCO iteration generates search directions Δf and Δy for the primal variables f and the dual variables y associated with Af + D₂r = b.
  • 52. o Until recently, many models used only the l2 penalty function because the solving methods are simple and fast. o The introduction of least absolute values, l1, to model fitting has greatly improved many applications. o We adopt a hybrid between l1 and l2.
  • 53. o Hybrid function Hyb_c(f′) = Σ g(f′ᵢ) with a regularization parameter c, where o g(f′ᵢ) = f′ᵢ²/(2c) if |f′ᵢ| ≤ c, and |f′ᵢ| − c/2 if |f′ᵢ| > c
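This Huber-style hybrid is quadratic (l2-like) for small entries and linear (l1-like) for large ones; the −c/2 offset in the linear branch (reconstructed here from the continuity requirement, since the slide's formula is garbled) makes the two branches meet at |f′ᵢ| = c:

```python
import numpy as np

def hybrid(f, c):
    """Huber-style hybrid penalty: l2-like below the threshold c, l1-like above."""
    a = np.abs(f)
    return np.where(a <= c, a**2 / (2 * c), a - c / 2)

c = 0.5
# The two branches agree at |f| = c (both give c/2), so the penalty is continuous.
print(hybrid(np.array([0.1, c, 1.0]), c))
```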
  • 54.
  • 55. o The most popular entropy functional is the Shannon entropy formula. o The entropy function Ent_a(f′) = Σ (f′ᵢ/a) log(f′ᵢ/a), with a regularization parameter a. o It has its origins in information theory. o The motivation:  it does not introduce correlations into the data beyond those which are required by the data.  E(x) = Σᵢ₌₁ⁿ xᵢ log(xᵢ)  Grad(E(x)) = Log(x) + 1  Hess(E(x)) = diag(1/xᵢ)
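The gradient and diagonal Hessian stated above can be verified against finite differences; a sketch at an arbitrary positive point:

```python
import numpy as np

def entropy(x):
    return np.sum(x * np.log(x))

def entropy_grad(x):
    return np.log(x) + 1.0       # Grad(E(x)) = log(x) + 1

def entropy_hess_diag(x):
    return 1.0 / x               # Hess(E(x)) = diag(1/x_i)

# Finite-difference check of the gradient at an arbitrary positive point.
x = np.array([0.5, 1.0, 2.0])
h = 1e-6
fd = np.array([(entropy(x + h * e) - entropy(x - h * e)) / (2 * h)
               for e in np.eye(3)])
assert np.allclose(fd, entropy_grad(x), atol=1e-5)
```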
  • 56.
  • 57. Results and Conclusion Solution Development Regularization Methods The need for Regularization Mathematical modeling Understanding the problem
  • 58. o The solution method proposed in [] is the mathematical formulation of the linearly constrained convex problem:  min_(f′) λ₁‖k′f′ − b′‖₂² + λ₂‖f′‖₂² + φ(f′)  s.t. k′f′ + r = b′, f′ ≥ 0 o where k′ is the Kronecker tensor product of K1 and K2, o f′ is the unknown spectrum vector, o b′ is the transformed measurements vector, o r is the residual vector, o and the convex function φ(f′) is either the Entropy function Ent_a(f′) with a regularization parameter a or the Hybrid function Hyb_c(f′) with a regularization parameter c
  • 59. o One-dimensional image restoration model, at 0.1%, 1%, 5% and 10% noise levels
  • 60. o Inverse heat equation, at 0.1%, 1%, 5% and 10% noise levels
  • 61. Perfect! Good peak. Good shape and position. No artifacts! OK height.
  • 66. o Our algorithm produces reconstructions of far greater quality than the other methods, at the cost of a longer convergence time arising from the need to tune several parameters. o In contrast to the other methods, our approach maintains reconstruction quality regardless of the data structure and data size. o We have taken advantage of the inherent stability of the 2D Picard condition to regularize the solution and make it less sensitive to perturbations in the measurement array. o As a result, all required quantities, such as the gradient and Hessian-vector products, are computed with reduced memory storage and computation time.