Iterative Approach to the Tikhonov Regularization Problem in Matrix Inversion Using NumPy
Katya Vasilaky
New Yorker @ Columbia University
November 21, 2015
Matrix inversion problems have been around for a while.
They're not easy.
But they're relevant.
Inverse problems: compute information about some "interior" properties using "exterior" measurements.
Tomography: X-ray source → Object → Dampening
Inference: Attributes → Effect Size → Outcome
Machine Learning: Features → Predictors → Classifier
x is my back.
Problem: we don't know x.
Figure: X-ray of Spine
Solution:
Throw A at x (form Ax).
Collect b.
Infer x: x = A^{-1} b,
or, for linear regression, β = (X^T X)^{-1} (X^T Y).
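As a quick NumPy sketch of this naive approach (my illustration, not from the slides; the matrices are placeholders):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))            # a square system, almost surely invertible
b = A @ np.ones(5)
x = np.linalg.solve(A, b)                  # x = A^{-1} b (solve beats forming the inverse)

X = rng.standard_normal((100, 3))          # a regression design matrix
Y = X @ np.array([1.0, -2.0, 0.5])
beta = np.linalg.solve(X.T @ X, X.T @ Y)   # β = (X^T X)^{-1} X^T Y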
Methodology
The pseudoinverse solution, which always exists, is often computed as the unique, minimum-norm solution to the least-squares formulation
x̂ = argmin_x ||Ax − b||_2^2,
where, among all minimizers, we take the one with the smallest ||x||_2^2.
Think of an underdetermined system of equations:
x + y + z = 1
x + y + 2z = 3
There are many solutions.
By itself, though, the pseudoinverse is not very useful for applications.
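A minimal NumPy check of the minimum-norm solution for the system above (my addition, using np.linalg.pinv):

import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
b = np.array([1.0, 3.0])

x = np.linalg.pinv(A) @ b   # minimum-norm least-squares solution
print(x)                    # [-0.5 -0.5  2. ]: the solution with the smallest ||x||_2
print(A @ x)                # [1. 3.], so it does solve both equations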
Lots of useful inverse problems are ill-conditioned linear systems.
Standard numerical methods, x = A^{-1} b or β = (X^T X)^{-1} X^T Y, produce useless results:
such an A (or X) is singular or not full rank, so it's not invertible, or it is ill-conditioned.
If you perturb b a little bit, x = A^{-1}(b + e) will map to something crazy!
Why, you ask?
If A is not full rank, or is ill-conditioned, then the solution x is unstable:
A^{-1} b will be vastly different from A^{-1}(b + e).
The map b → A^{-1} b is not continuous, so b gets mapped to something very different from b + e.
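A small toy demonstration of this instability in NumPy (my example, not from the slides):

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])        # nearly singular, hence ill-conditioned
b = np.array([2.0, 2.0001])          # consistent with x = [1, 1]
e = np.array([0.0, 0.0001])          # a perturbation the size of measurement noise

print(np.linalg.cond(A))             # ~4e4: a large condition number
print(np.linalg.solve(A, b))         # [1. 1.]
print(np.linalg.solve(A, b + e))     # [0. 2.]: a tiny change in b moved x completely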
What do people do then?
Regularization formulates a nearby problem, which has a more stable solution:
min_x ( ||Ax − b||_2^2 + λ||x||_2^2 ),  λ > 0,
where we introduce the term λ||x||_2^2, perturbing the least-squares formulation.
The regularization can be L1 (Lasso) or L2. Typically L2 (Tikhonov and truncated SVD) works better. Most people try to choose a good λ, solve the problem above, and hope that they're close to the noiseless solution. (Which penalizes outliers more?)
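In NumPy, the classical one-shot Tikhonov solve is just a few lines. This is a sketch under the notation above; tikhonov_solve is my name for it, not anything from the package:

import numpy as np

def tikhonov_solve(A, b, lam):
    # argmin_x ||Ax - b||_2^2 + lam * ||x||_2^2, via its normal equations
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

The solve is stable even when A is ill-conditioned, but the answer is biased by whatever lam you picked.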
The problem with regularization is how to choose λ, or how much to truncate.
Tikhonov and truncated SVD are equivalent in the sense that chopping off singular values corresponds to choosing a λ:
min_x ( ||Ax − b||_2^2 + λ||x||_2^2 ),  λ > 0.
This solution: instead of computing a single solution, we compute a sequence of solutions which approximates the noiseless solution A^{-1} b on its way to the noisy solution A^{-1}(b + e).
Replace λ||x||_2^2 in
min_x ( ||Ax − b||_2^2 + λ||x||_2^2 ),  λ > 0
with λ||x − x_0||_2^2:
min_x ( ||Ax − b||_2^2 + λ||x − x_0||_2^2 ),  λ > 0,
and then plug and re-plug: each solve's output becomes the next solve's x_0.
Normal Equations
Remember your normal equations?
(A^T A) x = A^T b
Instead, solve the new normal equations with the added constraint:
(A^T A + λI) x = A^T b + λ x_0
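A minimal sketch of the plug-and-re-plug iteration, written directly from the normal-equations form above (iterated_tikhonov is my naming, not the package's internals):

import numpy as np

def iterated_tikhonov(A, b, lam, k, x0=None):
    # Each pass solves (A^T A + lam*I) x = A^T b + lam * x_prev,
    # feeding the previous iterate back in as the new x_0.
    n = A.shape[1]
    x = np.zeros(n) if x0 is None else x0
    M = A.T @ A + lam * np.eye(n)
    rhs = A.T @ b
    for _ in range(k):
        x = np.linalg.solve(M, rhs + lam * x)
    return x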
The solution for x_1 is:
x_1 = (A^T A + λI)^{-1} A^T b + λ (A^T A + λI)^{-1} x_0,
and the k-th iterate is:
x_k = Σ_{i=1}^{k} λ^{i−1} (A^T A + λI)^{-i} A^T b + λ^k (A^T A + λI)^{-k} x_0.
Now we use the singular value decomposition (SVD) of A: A = U Σ V^T, with Σ = diag(σ_1, …, σ_n, 0, …, 0) ∈ R^{m×n} and σ_1 ≥ σ_2 ≥ … ≥ σ_n > 0, where n is the rank of A. After a lot of derivation, we then obtain the following k-th iterate:
x_k = V diag( (1/σ_1)(1 − (λ/(λ+σ_1^2))^k), …, (1/σ_n)(1 − (λ/(λ+σ_n^2))^k), 0, …, 0 ) U^T b
For practical ease, I'm setting x_0 = 0; it does not affect the solution.
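For reference, a hedged NumPy sketch of this closed form with x_0 = 0 (my code; it should agree with the normal-equations iteration for full-rank A, and it truncates numerically zero singular values):

import numpy as np

def iterated_tikhonov_svd(A, b, lam, k):
    # x_k = V diag( (1/σ_i) * (1 - (lam/(lam + σ_i^2))^k) ) U^T b
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    mask = s > 1e-12                          # keep only the positive singular values
    U, s, Vt = U[:, mask], s[mask], Vt[mask]
    filt = (1.0 / s) * (1.0 - (lam / (lam + s**2))**k)
    return Vt.T @ (filt * (U.T @ b))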
Step 1
InverseProblem codes the above formula to invert A, so "inverse of A" times b from InverseProblem will give us x back.
Let's try it out using the InverseProblem package!
https://github.com/kathrynthegreat/InverseProblem
Step 2: Brace yourself. The next few steps:
Get a theoretical expression for x_k − x, the error between the k-th iterate and the true x.
Show that as k goes to infinity, x_k approaches the noisy solution A^{-1}(b + e).
We need to stop k before that happens!
First, look at the residual error against the number of iterations.
One thing is sure: we want λ > 1.
Next, it turns out (from experiments) that x_k passes the true noiseless x when the residual error ||A x_k − b|| "turns," a heuristic that still needs a proof. Somehow this minimizes the error between the k-th iterate and the true x, i.e., the x_k − x expression below:
x_k − x = (S1) + (S2), where

(S1) = −V diag( (1/σ_1)(λ/(λ+σ_1^2))^k, …, (1/σ_n)(λ/(λ+σ_n^2))^k, 0, …, 0 ) U^T b

(S2) = V diag( (1/σ_1)(1 − (λ/(λ+σ_1^2))^k), …, (1/σ_n)(1 − (λ/(λ+σ_n^2))^k), 0, …, 0 ) U^T e
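To inspect that "turn" in practice, here is a hedged sketch that just records the residual curve, reusing the iteration from before (how best to detect the turn automatically is exactly the open question; ip.optimalk(A) is the package's way of printing a choice of k):

import numpy as np

def residual_curve(A, b, lam, kmax):
    # Record ||A x_k - b||_2 for k = 1..kmax; plot it and look for the turn.
    n = A.shape[1]
    M = A.T @ A + lam * np.eye(n)
    rhs = A.T @ b
    x = np.zeros(n)
    res = []
    for _ in range(kmax):
        x = np.linalg.solve(M, rhs + lam * x)
        res.append(np.linalg.norm(A @ x - b))
    return np.array(res)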
Now that we've seen the tractable cases, here are some other uses.
We will reconstruct an image from projections. Some reconstructions from projections are:
1 CT scans
2 MRI
3 Oil exploration
4 Seismic imaging
Notes
The iterative approach is good for extremely ill-conditioned matrices that are large; for small matrices there are other approaches.
This approach is good for images because you can see when you're doing better: the image is clearer.
We need to see the graph (or the picture) to get an idea about λ and k; the good news is that we don't have to know them optimally, as in classical Tikhonov.
Alternatively, if we know something about e (like its SD or its norm ||e||), we can plot that error graph (still need to package that).
Remember, of course, that if the data are extremely noisy, then all of this is worthless: no recovery is possible.
Thanks.
https://github.com/kathrynthegreat/InverseProblem

import InverseProblem.functions as ip

ip.optimalk(A)          # prints out the optimal k
ip.invert(A, be, k, l)  # inverts A, where be is the noisy b, k is the number of iterations, and l (lambda) is your dampening effect
