Julián Tachella
CNRS
Physics Laboratory
École Normale Supérieure de Lyon
Equivariant Imaging: learning to
solve inverse problems without
ground truth
ENSL Colloquium
Joint work with Dongdong Chen and Mike Davies
Inverse problems
Definition 𝑦 = 𝐴𝑥 + 𝑛
where 𝑥 ∈ ℝ𝑛 signal
𝑦 ∈ ℝ𝑚 observed measurements
𝐴 ∈ ℝ𝑚×𝑛 ill-posed forward sensing operator (𝑚 ≤ 𝑛)
𝑛 ∈ ℝ𝑚 is noise
• Goal: recover signal 𝑥 from 𝑦
• Inverse problems are ubiquitous in science and engineering
Examples
Magnetic resonance imaging
• 𝐴 = subset of Fourier modes
(𝑘 − space) of 2D/3D images
Image inpainting
• 𝐴 = diagonal matrix
with 1’s and 0s.
Computed tomography
• 𝐴 = 2D projections
(sinograms) of 2D/3D
images
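As a concrete illustration, the inpainting and MRI operators can be written as explicit matrices acting on a vectorised signal. A small numpy sketch (the sizes and sampling pattern are illustrative assumptions, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                    # vectorised signal length (illustrative)

# Inpainting: diagonal matrix with 1's (kept pixels) and 0's (dropped pixels)
mask = rng.random(n) > 0.3                # keep ~70% of the entries
A_inpaint = np.diag(mask.astype(float))

# MRI-like operator: a subset of rows of the unitary DFT matrix (k-space samples)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
rows = rng.choice(n, size=n // 2, replace=False)
A_mri = F[rows]                           # m x n with m = n/2 (undersampled)

x = rng.standard_normal(n)
y_inpaint = A_inpaint @ x                 # zeros where pixels were dropped
y_mri = A_mri @ x                         # m complex Fourier measurements
print(y_inpaint, np.abs(y_mri))
```

In both cases 𝑚 ≤ 𝑛, so information about 𝑥 is irreversibly discarded by 𝐴.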
Why is it hard to invert?
Even in the absence of noise, infinitely many 𝑥 consistent with 𝑦:
𝑥 = 𝐴†𝑦 + 𝑣, where 𝐴† is the pseudo-inverse of 𝐴 and 𝑣 is any vector in the nullspace of 𝐴
• Unique solution only possible if set of plausible 𝑥 is low-dimensional
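The ambiguity is easy to check numerically: adding any nullspace vector to the pseudo-inverse solution leaves the measurements unchanged. A minimal numpy sketch with a random ill-posed 𝐴:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
A = rng.standard_normal((m, n))            # ill-posed: m < n
x = rng.standard_normal(n)
y = A @ x                                  # noiseless measurements

A_pinv = np.linalg.pinv(A)
null_basis = np.linalg.svd(A)[2][m:].T     # n x (n-m) basis of the nullspace of A
v = null_basis @ rng.standard_normal(n - m)

x1 = A_pinv @ y                            # minimum-norm solution
x2 = x1 + v                                # a different, equally consistent solution
print(np.allclose(A @ x1, y), np.allclose(A @ x2, y))   # True True
```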
Regularised reconstruction
Idea: define a penalty 𝜌(𝑥) that favours valid signals from a low-dimensional set 𝒳:
𝑥 = argmin𝑥 ‖𝑦 − 𝐴𝑥‖² + 𝜌(𝑥)
Examples: total-variation, sparsity, etc.
Disadvantages: hard to define a good 𝜌(𝑥) in real-world problems, and the resulting prior is loose with respect to the true signal distribution.
Learning approach
Idea: use training pairs of signals and measurements (𝑥𝑖, 𝑦𝑖) to directly
learn the inversion function
argmin𝑓 Σ𝑖 ‖𝑥𝑖 − 𝑓(𝑦𝑖)‖²
where 𝑓: ℝ𝑚 ↦ ℝ𝑛 is parameterized as a deep neural network.
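A toy stand-in for the deep network makes the idea concrete: a linear map 𝑓(𝑦) = 𝑊𝑦 trained by gradient descent on the supervised loss, with a toy operator that keeps two of three coordinates (all values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 500
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                  # toy operator: keep 2 of 3 coords

# Training signals on the low-dimensional set span{(1,1,1)^T}, with measurements
s = rng.standard_normal(N)
x_train = np.outer(s, np.ones(n))                # N x n
y_train = x_train @ A.T                          # N x m

# Linear "network" f(y) = W y trained on sum_i ||x_i - f(y_i)||^2
W = np.zeros((n, A.shape[0]))
lr = 1e-2
for _ in range(2000):
    residual = x_train - y_train @ W.T           # N x n
    W -= lr * (-2.0 / N) * residual.T @ y_train  # gradient step

# Having seen full signals x_i, the map also recovers the unobserved coordinate
x_test = 2.5 * np.ones(n)
print(W @ (A @ x_test))
```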
Advantages:
• State-of-the-art reconstructions
• Once trained, 𝒇 is easy to evaluate
Learning approach
Example: ×8 accelerated MRI [Zbontar et al., 2019] — deep network (34.5 dB) vs. total variation (28.2 dB), compared with the ground truth.
Learning approach
Main disadvantage: Obtaining training signals 𝑥𝑖 can be expensive or
impossible.
• Medical and scientific imaging
• Only applicable to inverse problems where we already know what to expect
• Risk of training with signals from a different distribution
Learning from only measurements 𝒚?
argmin𝑓 Σ𝑖 ‖𝑦𝑖 − 𝐴𝑓(𝑦𝑖)‖²
Proposition: Any reconstruction function 𝑓(𝑦) = 𝐴†𝑦 + 𝑔(𝑦), where 𝑔: ℝ𝑚 ↦ 𝒩(𝐴) is any function whose image belongs to the nullspace of 𝐴, is measurement-consistent: this loss alone cannot determine the nullspace component.
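The proposition can be verified directly: two reconstruction functions differing by a nullspace term achieve the same (zero) measurement-consistency loss. A numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
A = rng.standard_normal((m, n))
A_pinv = np.linalg.pinv(A)
v_null = np.linalg.svd(A)[2][m:].T @ np.ones(n - m)   # a unit vector in N(A)

f1 = lambda y: A_pinv @ y                  # minimum-norm reconstruction
f2 = lambda y: A_pinv @ y + v_null         # same plus an arbitrary nullspace term

y = A @ rng.standard_normal(n)             # noiseless measurements
mc = lambda f: np.sum((y - A @ f(y)) ** 2)
print(mc(f1), mc(f2))                      # both ~0: the loss cannot tell them apart
```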
Geometric intuition
Toy example (𝑛 = 3, 𝑚 = 2): the signal set is 𝒳 = span{(1,1,1)ᵀ} and the forward operator 𝐴 keeps the first 2 coordinates.
Purpose of this talk
How can we learn the reconstruction 𝑓: 𝑦 ↦ 𝑥 from noisy measurement data 𝑦𝑖 only?
1. Learning with no noise 𝑦 = 𝐴𝑥
2. Learning with noise 𝑦 = 𝐴𝑥 + 𝑛
Symmetry prior
How to learn from only 𝒚? We need some prior information
Idea: Most natural signal distributions are invariant to certain groups of transformations:
∀𝑥 ∈ 𝒳, ∀𝑔 ∈ 𝐺: 𝑥′ = 𝑇𝑔𝑥 ∈ 𝒳
Example: natural images are shift invariant
𝐺 = group of 2D shifts
𝑇𝑔𝑥 for different 𝑔 ∈ 𝐺
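For the shift group, the action 𝑇𝑔 is a cyclic translation of the samples, e.g. via np.roll. A 1-D numpy sketch of the prior (the signal is illustrative):

```python
import numpy as np

x = np.array([0.0, 1.0, 4.0, 1.0, 0.0, 0.0])   # a 1-D "image"

# Group of cyclic shifts: T_g x translates the signal by g samples (wrap-around)
shifted = [np.roll(x, g) for g in range(len(x))]

# Invariance prior: each T_g x is assumed to be an equally plausible signal
for g, xs in enumerate(shifted):
    print(g, xs)
```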
Exploiting invariance
How to learn from only 𝒚? We need some prior information
For all 𝑔 ∈ 𝐺 we have
𝑦 = 𝐴𝑥 = 𝐴𝑇𝑔𝑥′ = 𝐴𝑔𝑥′, where 𝑥′ = 𝑇𝑔⁻¹𝑥 ∈ 𝒳 and 𝐴𝑔 ≔ 𝐴𝑇𝑔
• Implicit access to multiple operators 𝐴𝑔
• Each operator with different nullspace
𝐴𝑇𝑔𝑥 for different 𝑔
Geometric intuition
Toy example (𝑛 = 3, 𝑚 = 2): the signal set is 𝒳 = span{(1,1,1)ᵀ} and the forward operator 𝐴 keeps the first 2 coordinates. [Illustration: nullspaces of 𝐴, 𝐴𝑇1 and 𝐴𝑇2]
Basic definitions
• Let 𝐺 = {𝑔1, …, 𝑔|𝐺|} be a finite group with |𝐺| elements. A linear representation of 𝐺 acting on ℝ𝑛 is a mapping 𝑇: 𝐺 ↦ 𝐺𝐿(ℝ𝑛) which verifies
𝑇𝑔1𝑇𝑔2 = 𝑇𝑔1𝑔2 and (𝑇𝑔)⁻¹ = 𝑇𝑔⁻¹
• A mapping 𝐻: ℝ𝑛 ↦ ℝ𝑚 is equivariant if
𝐻𝑇𝑔 = 𝑇′𝑔𝐻 for all 𝑔 ∈ 𝐺
where 𝑇′: 𝐺 ↦ 𝐺𝐿(ℝ𝑚) is a linear representation of 𝐺 acting on ℝ𝑚.
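For cyclic shifts these properties can be checked numerically: each 𝑇𝑔 is a permutation matrix, and composing shifts adds the shift amounts (numpy sketch):

```python
import numpy as np

n = 5
I = np.eye(n)
T = lambda g: np.roll(I, g, axis=0)   # T_g: permutation matrix shifting by g (mod n)

# Representation property: T_{g1} T_{g2} = T_{g1 g2}, with g1 g2 = g1 + g2 mod n
g1, g2 = 2, 4
assert np.allclose(T(g1) @ T(g2), T((g1 + g2) % n))

# Inverses are mapped to inverses: (T_g)^{-1} = T_{g^{-1}}, with g^{-1} = -g mod n
assert np.allclose(np.linalg.inv(T(g1)), T((-g1) % n))
print("cyclic shifts form a linear representation of Z/5Z")
```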
Theory
Necessary conditions: the operators {𝐴𝑇𝑔}𝑔∈𝐺 should be different enough.
Proposition: Learning is only possible if rank([𝐴𝑇𝑔1; ⋮; 𝐴𝑇𝑔|𝐺|]) = 𝑛
• The number of measurements should verify 𝑚 ≥ max𝑗 𝑐𝑗/𝑠𝑗 ≥ 𝑛/|𝐺|, where 𝑠𝑗 and 𝑐𝑗 are the degrees and multiplicities of the linear representation 𝑇𝑔.
Examples: group of shifts 𝑚 ≥ 1; group of a single reflection 𝑚 ≥ 𝑛/2.
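Both conditions can be probed on the earlier toy problem: a single measurement combined with cyclic shifts already yields a full-rank stacked operator, while a shift-equivariant operator does not (numpy sketch; the specific operators are illustrative):

```python
import numpy as np

n = 3
I = np.eye(n)
T = lambda g: np.roll(I, g, axis=0)          # cyclic-shift representation T_g

# m = 1: observe only the first coordinate -> A is NOT shift-equivariant
A = np.array([[1.0, 0.0, 0.0]])
stacked = np.vstack([A @ T(g) for g in range(n)])
print(np.linalg.matrix_rank(stacked))        # 3 = n: the necessary condition holds

# A shift-equivariant operator (averaging) gives a rank-deficient stack
A_eq = np.ones((1, n)) / n
stacked_eq = np.vstack([A_eq @ T(g) for g in range(n)])
print(np.linalg.matrix_rank(stacked_eq))     # 1 < n: learning is impossible
```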
Theory
Necessary conditions: the operators {𝐴𝑇𝑔}𝑔∈𝐺 should be different enough.
Theorem: Learning only possible if 𝐴 is not equivariant to the group action
• If 𝐴 is equivariant, all 𝐴𝑇𝑔 have the same nullspace
Example: Any operator 𝐴 consisting of a subset of Fourier measurements is
equivariant to the group of shifts
• MRI, CT, super-resolution, etc.
Equivariant imaging loss
How can we enforce invariance in practice?
Idea: we have 𝑓 𝐴𝑇𝑔𝑥 = 𝑇𝑔𝑓(𝐴𝑥), i.e. 𝑓 ∘ 𝐴 should be 𝐺-equivariant
Equivariant imaging
Unsupervised training loss
argmin𝑓 ℒ𝑀𝐶(𝑓) + ℒ𝐸𝐼(𝑓)
• ℒ𝑀𝐶(𝑓) = Σ𝑖 ‖𝑦𝑖 − 𝐴𝑓(𝑦𝑖)‖²  (measurement consistency)
• ℒ𝐸𝐼(𝑓) = Σ𝑖,𝑔 ‖𝑓(𝐴𝑇𝑔𝑓(𝑦𝑖)) − 𝑇𝑔𝑓(𝑦𝑖)‖²  (enforces equivariance of 𝑓 ∘ 𝐴)
Network-agnostic: applicable to any existing deep model!
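Both loss terms can be written in a few lines for a generic callable 𝑓. A plain numpy sketch (not the authors' implementation; the operator and group are toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = np.eye(n)[:3]                          # toy operator: keep first 3 entries
I = np.eye(n)
T = lambda g: np.roll(I, g, axis=0)        # group of cyclic shifts

def mc_loss(f, ys):
    """Measurement consistency: sum_i ||y_i - A f(y_i)||^2."""
    return sum(np.sum((y - A @ f(y)) ** 2) for y in ys)

def ei_loss(f, ys, group=range(n)):
    """Equivariance of f∘A: sum_{i,g} ||f(A T_g f(y_i)) - T_g f(y_i)||^2."""
    total = 0.0
    for y in ys:
        x1 = f(y)                          # current reconstruction
        for g in group:
            x2 = T(g) @ x1                 # transformed reconstruction
            total += np.sum((f(A @ x2) - x2) ** 2)
    return total

f = lambda y: np.linalg.pinv(A) @ y        # pseudo-inverse baseline
ys = [A @ rng.standard_normal(n) for _ in range(4)]
print(mc_loss(f, ys), ei_loss(f, ys))      # MC is ~0, but EI penalises f
```

The pseudo-inverse is perfectly measurement-consistent yet far from equivariant, which is exactly the extra signal the EI term provides for training.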
Experiments
Tasks:
• Magnetic resonance imaging
Network
• 𝑓 = 𝑔𝜃 ∘ 𝐴† where 𝑔𝜃 is a U-net CNN
Comparison
• Pseudo-inverse 𝐴†𝑦𝑖 (no training)
• Meas. consistency 𝐴𝑓 𝑦𝑖 = 𝑦𝑖
• Fully supervised loss: 𝑓 𝑦𝑖 = 𝑥𝑖
• Equivariant imaging (unsupervised)
𝐴𝑓(𝑦𝑖) = 𝑦𝑖 and equivariant 𝑓 ∘ 𝐴
[Reconstruction comparison: 𝐴†𝑦, measurement consistency, equivariant imaging, fully supervised]
Magnetic resonance imaging
• Operator 𝐴 is a subset of Fourier measurements (x2 downsampling)
• Dataset is approximately rotation invariant
[Images: signal 𝑥 and measurements 𝑦]
2. Learning from noisy
measurements
What about noise?
Noisy measurements: 𝑦|𝑢 ∼ 𝑞𝑢(𝑦), where 𝑢 = 𝐴𝑥
Examples: Gaussian noise, Poisson noise, Poisson-Gaussian noise
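These observation models are straightforward to simulate. A numpy sketch (variances, gains and rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.abs(rng.standard_normal(1000)) + 1.0            # clean measurements u = A x

y_gauss = u + 0.2 * rng.standard_normal(u.shape)       # Gaussian: y ~ N(u, sigma^2 I)
gain = 10.0
y_poisson = rng.poisson(gain * u) / gain               # Poisson (shot noise) with a gain
y_pg = rng.poisson(gain * u) / gain + 0.05 * rng.standard_normal(u.shape)  # mixed

# All three models are unbiased: E[y] = u
print(y_gauss.mean(), y_poisson.mean(), y_pg.mean(), u.mean())
```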
MRI with different noise levels: EI degrades with noise! [Plot: reconstruction quality vs. Gaussian noise standard deviation]
Handling noise via SURE
Oracle consistency loss with clean/noisy measurement pairs (𝑢𝑖, 𝑦𝑖): ℒ𝑀𝐶(𝑓) = Σ𝑖 ‖𝑢𝑖 − 𝐴𝑓(𝑦𝑖)‖²
However, we don’t have the clean 𝑢𝑖!
Idea: Proxy unsupervised loss ℒ𝑆𝑈𝑅𝐸(𝑓) which is an unbiased estimator, i.e. 𝔼𝑦,𝑢 ℒ𝑀𝐶(𝑓) = 𝔼𝑦 ℒ𝑆𝑈𝑅𝐸(𝑓)
Gaussian noise 𝑦 ∼ 𝒩(𝑢, 𝜎²𝐼):
ℒ𝑆𝑈𝑅𝐸(𝑓) = Σ𝑖 ‖𝑦𝑖 − 𝐴𝑓(𝑦𝑖)‖² − 𝜎²𝑚 + 2𝜎² div(𝐴 ∘ 𝑓)(𝑦𝑖)
where div ℎ(𝑥) = Σ𝑗 ∂ℎ𝑗/∂𝑥𝑗 is approximated with a Monte Carlo estimate which only requires evaluations of ℎ [Ramani, 2008]
Theorem [Stein, 1981]: Under mild differentiability conditions on the function 𝐴 ∘ 𝑓, the following holds:
𝔼𝑦,𝑢 ℒ𝑀𝐶(𝑓) = 𝔼𝑦 ℒ𝑆𝑈𝑅𝐸(𝑓)
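The divergence term is the only non-standard ingredient; following [Ramani, 2008] it can be estimated with a single extra evaluation of ℎ. A numpy sketch, checked on a linear ℎ whose exact divergence is trace(𝑀):

```python
import numpy as np

def mc_divergence(h, y, eps=1e-4, rng=None):
    """Monte Carlo divergence estimate of [Ramani, 2008]:
    div h(y) ~ b^T (h(y + eps*b) - h(y)) / eps, with a random sign probe b."""
    rng = rng or np.random.default_rng()
    b = rng.choice([-1.0, 1.0], size=y.shape)
    return b @ (h(y + eps * b) - h(y)) / eps

rng = np.random.default_rng(0)
m = 10
M = rng.standard_normal((m, m))
h = lambda y: M @ y                     # linear test case: div h = trace(M) exactly

y = rng.standard_normal(m)
est = np.mean([mc_divergence(h, y, rng=rng) for _ in range(1000)])
print(est, np.trace(M))                 # the average estimate should be close to the trace
```

In practice ℎ = 𝐴 ∘ 𝑓 is a trained network, so each probe costs one forward pass.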
Robust EI: SURE+EI
Robust Equivariant Imaging
argmin𝑓 ℒ𝑆𝑈𝑅𝐸(𝑓) + ℒ𝐸𝐼(𝑓)
• ℒ𝑆𝑈𝑅𝐸(𝑓): unbiased estimator of the oracle measurement consistency
- noise dependent
- Gaussian, Poisson, Poisson-Gaussian
• ℒ𝐸𝐼(𝑓): enforces equivariance of 𝑓 ∘ 𝐴
Experiments
Tasks:
• Magnetic resonance imaging (Gaussian noise)
• Image inpainting (Poisson noise)
• Computed tomography (Poisson-Gaussian noise)
Network
• 𝑓 = 𝑔𝜃 ∘ 𝐴† where 𝑔𝜃 is a U-net CNN
Comparison
• Meas. consistency 𝐴𝑓 𝑦𝑖 = 𝑦𝑖
• Fully supervised loss: 𝑓 𝑦𝑖 = 𝑥𝑖
• Equivariant imaging (unsupervised)
𝐴𝑓(𝑦𝑖) = 𝑦𝑖 and equivariant 𝑓 ∘ 𝐴
Magnetic resonance imaging
[Plot: reconstruction quality vs. Gaussian noise standard deviation]
Magnetic resonance imaging
• Operator 𝐴 is a subset of Fourier measurements (x4 downsampling)
• Gaussian noise (𝜎 = 0.2)
• Dataset is approximately rotation invariant
[Reconstruction comparison: measurements 𝑦, signal 𝑥, supervised, measurement consistency, Robust EI]
Inpainting
Signal 𝑥
Measurements 𝑦
• Operator 𝐴 is an inpainting mask (30% pixels dropped)
• Poisson noise (rate=10)
• Dataset is approximately shift invariant
[Reconstruction comparison: supervised, measurement consistency, Robust EI]
Computed tomography
[Reconstruction comparison: clean signal 𝑥, noisy measurements 𝑦, measurement consistency, supervised, Robust EI]
• Operator 𝐴 is a (non-linear variant of the) sparse Radon transform (50 views)
• Mixed Poisson-Gaussian noise
• Dataset is approximately rotation invariant
Conclusions
Novel unsupervised learning framework
• Theory: Necessary conditions for learning
• Number of measurements
• Interplay between forward operator/ data invariance
• Practice: deep learning approach
• Unsupervised loss which can be applied to any model
Conclusions
Novel unsupervised learning framework
• Ongoing/future work
• Sufficient conditions for learning
• More inverse problems
• Other signal domains
• Learning continuous representations
Papers
[1] “Equivariant Imaging: Learning Beyond the Range Space” (2021), Chen, Tachella and Davies, ICCV 2021
[2] “Robust Equivariant Imaging: a fully unsupervised framework for learning to image from noisy and partial measurements” (2021), Chen, Tachella and Davies, arXiv
[3] “Sampling Theorems for Unsupervised Learning in Inverse Problems” (2022), Tachella, Chen and Davies, to appear.
Thanks for your attention!
tachella.github.io — Codes, Presentations, … and more