Julián Tachella
CNRS
Physics Laboratory
École Normale Supérieure de Lyon
Equivariant Imaging: learning to solve inverse problems without ground truth
ENSL Colloquium
Joint work with Dongdong Chen and Mike Davies
Inverse problems
Definition: $y = Ax + n$
where $x \in \mathbb{R}^n$ is the signal,
$y \in \mathbb{R}^m$ are the observed measurements,
$A \in \mathbb{R}^{m \times n}$ is the ill-posed forward sensing operator ($m \leq n$),
and $n \in \mathbb{R}^m$ is noise.
• Goal: recover signal 𝑥 from 𝑦
• Inverse problems are ubiquitous in science and engineering
Examples
Magnetic resonance imaging
• $A$ = subset of Fourier modes ($k$-space) of 2D/3D images
Image inpainting
• $A$ = diagonal matrix with 1's and 0's
Computed tomography
• $A$ = 2D projections (sinograms) of 2D/3D images
[Figure: example signal $x$ and measurements $y$ for each modality]
Why is it hard to invert?
Even in the absence of noise, infinitely many $x$ are consistent with $y$:
$x = A^\dagger y + v$
where $A^\dagger$ is the pseudo-inverse of $A$ and $v$ is any vector in the nullspace of $A$.
• A unique solution is only possible if the set of plausible $x$ is low-dimensional
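To make this concrete, here is a minimal numpy sketch (using the toy operator from the geometric-intuition slide below, which keeps the first two of three coordinates): adding any nullspace vector $v$ to the pseudo-inverse solution leaves the measurements unchanged.

```python
# Minimal sketch: the pseudo-inverse solution plus any nullspace vector
# is equally consistent with the measurements y = Ax.
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])       # keeps the first two coordinates (m=2, n=3)
x = np.array([1.0, 1.0, 1.0])         # true signal
y = A @ x                             # noiseless measurements

v = np.array([0.0, 0.0, 5.0])         # any vector in the nullspace of A
x_hat = np.linalg.pinv(A) @ y + v     # alternative "reconstruction"

assert np.allclose(A @ x_hat, y)      # both explain y perfectly
```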
Regularised reconstruction
Idea: define a penalty $\rho(x)$ that is low for valid signals from the low-dimensional set $\mathcal{X}$:
$\hat{x} = \arg\min_x \|y - Ax\|^2 + \rho(x)$
Examples: total-variation, sparsity, etc.
Disadvantages: it is hard to define a good $\rho(x)$ in real-world problems, and hand-crafted penalties are loose with respect to the true signal distribution.
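As an illustration of the variational approach, here is a hedged numpy sketch that minimises the objective by gradient descent, using a simple Tikhonov penalty $\rho(x) = \lambda\|x\|^2$ as a differentiable stand-in for total variation or sparsity (which would require proximal steps instead):

```python
# Sketch of regularised reconstruction: gradient descent on
# ||y - Ax||^2 + lam * ||x||^2 (Tikhonov stand-in for TV/sparsity).
import numpy as np

def reconstruct(A, y, lam=0.1, step=0.01, iters=1000):
    x = A.T @ y                                       # crude initialisation
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - y) + 2 * lam * x    # gradient of the objective
        x = x - step * grad
    return x
```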
Learning approach
Idea: use training pairs of signals and measurements (𝑥𝑖, 𝑦𝑖) to directly
learn the inversion function
$\arg\min_f \sum_i \|x_i - f(y_i)\|^2$
where 𝑓: ℝ𝑚 ↦ ℝ𝑛 is parameterized as a deep neural network.
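A hedged PyTorch sketch of this supervised objective (the `model` and `supervised_step` names are illustrative; any network mapping $\mathbb{R}^m \to \mathbb{R}^n$ works):

```python
# One supervised training step: minimise ||x_i - f(y_i)||^2 over a batch.
import torch

def supervised_step(model, optimizer, x, y):
    optimizer.zero_grad()
    loss = ((x - model(y)) ** 2).sum()   # sum_i ||x_i - f(y_i)||^2
    loss.backward()
    optimizer.step()
    return loss.item()
```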
Learning approach
Advantages:
• State-of-the-art reconstructions
• Once trained, $f$ is easy to evaluate
[Figure: ×8 accelerated MRI (Zbontar et al., 2019): deep network (34.5 dB) vs. total variation (28.2 dB) vs. ground truth]
Learning approach
Main disadvantage: Obtaining training signals 𝑥𝑖 can be expensive or
impossible.
• Medical and scientific imaging
• Only solves inverse problems where we already know what to expect
• Risk of training with signals from a different distribution
[Figure: training distribution vs. test distribution]
Learning from only measurements 𝒚?
$\arg\min_f \sum_i \|y_i - A f(y_i)\|^2$
Proposition: Any reconstruction function $f(y) = A^\dagger y + g(y)$, where $g: \mathbb{R}^m \mapsto \mathcal{N}(A)$ is any function whose image belongs to the nullspace of $A$, attains zero loss.
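A small numpy sketch of this proposition (the specific choices of $g$ are hypothetical examples): every nullspace-valued $g$ fits the measurements exactly, so the loss alone cannot single out the right inverse.

```python
# Many reconstruction functions f(y) = A^+ y + g(y) achieve zero
# measurement-consistency loss when g maps into the nullspace of A.
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
A_pinv = np.linalg.pinv(A)

g_good = lambda y: np.array([0.0, 0.0, 0.5 * (y[0] + y[1])])  # right for span{(1,1,1)}
g_bad  = lambda y: np.array([0.0, 0.0, -3.0])                 # arbitrary, equally consistent

y = np.array([1.0, 1.0])
for g in (g_good, g_bad):
    x_hat = A_pinv @ y + g(y)          # g(y) lies in the nullspace of A
    assert np.allclose(A @ x_hat, y)   # zero loss either way
```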
Geometric intuition
Toy example ($n = 3$, $m = 2$): the signal set is $\mathcal{X} = \mathrm{span}\{(1,1,1)^T\}$ and the forward operator $A$ keeps the first 2 coordinates.
[Figure: the actions of $A$ and $A^\dagger$ on the signal set]
Purpose of this talk
How can we learn the reconstruction $f: y \mapsto x$ from measurement data $y_i$ alone?
1. Learning with no noise 𝑦 = 𝐴𝑥
2. Learning with noise 𝑦 = 𝐴𝑥 + 𝑛
Symmetry prior
How can we learn from only $y$? We need some prior information.
Idea: Most natural signal distributions are invariant to certain groups of transformations:
$\forall x \in \mathcal{X}, \forall g \in G: \; x' = T_g x \in \mathcal{X}$
Example: natural images are shift invariant
𝐺 = group of 2D shifts
[Figure: $T_g x$ for different $g \in G$]
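For the shift group this prior is easy to realise in code; a minimal PyTorch sketch (the tensor shapes and the helper name `T` are assumptions):

```python
# T_g for the group of 2D shifts is a circular roll of the image:
# a shifted natural image is still a plausible natural image.
import torch

def T(x, g):
    # g = (dy, dx) indexes an element of the 2D shift group
    return torch.roll(x, shifts=g, dims=(-2, -1))

x = torch.rand(1, 1, 64, 64)   # (batch, channel, H, W) image
x_shifted = T(x, (5, -3))      # another element of the signal set X
```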
Exploiting invariance
How can we learn from only $y$? We need some prior information.
For all $g \in G$ we have
$y = Ax = A T_g x' = A_g x'$, where $x' = T_g^{-1} x \in \mathcal{X}$
• Implicit access to multiple operators $A_g = A T_g$
• Each operator has a different nullspace
[Figure: $A T_g x$ for different $g$]
Geometric intuition
Toy example ($n = 3$, $m = 2$): the signal set is $\mathcal{X} = \mathrm{span}\{(1,1,1)^T\}$ and the forward operator $A$ keeps the first 2 coordinates.
[Figure: the actions of $A$, $A T_1$ and $A T_2$]
Basic definitions
• Let $G = \{g_1, \ldots, g_{|G|}\}$ be a finite group with $|G|$ elements. A linear representation of $G$ acting on $\mathbb{R}^n$ is a mapping $T: G \mapsto GL(\mathbb{R}^n)$ which verifies
$T_{g_1} T_{g_2} = T_{g_1 g_2}, \quad T_{g_1}^{-1} = T_{g_1^{-1}}$
• A mapping $H: \mathbb{R}^n \mapsto \mathbb{R}^m$ is equivariant if
$H T_g = T'_g H$ for all $g \in G$
where $T': G \mapsto GL(\mathbb{R}^m)$ is a linear representation of $G$ acting on $\mathbb{R}^m$.
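These axioms are easy to check numerically; a hedged numpy sketch for the cyclic shift group on $\mathbb{R}^n$, where $T_g$ is the permutation matrix that rolls a vector by $g$ positions:

```python
# Verify the representation axioms for the cyclic shift group on R^n.
import numpy as np

n = 5
def T(g):
    # permutation matrix rolling a vector by g positions (group law: + mod n)
    return np.roll(np.eye(n), shift=g, axis=0)

g1, g2 = 2, 4
assert np.allclose(T(g1) @ T(g2), T(g1 + g2))     # T_{g1} T_{g2} = T_{g1 g2}
assert np.allclose(np.linalg.inv(T(g1)), T(-g1))  # T_{g1}^{-1} = T_{g1^{-1}}
```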
Theory
Necessary conditions: the operators $\{A T_g\}_{g \in G}$ should be different enough.
Proposition: Learning is only possible if
$\mathrm{rank}\begin{bmatrix} A T_1 \\ \vdots \\ A T_{|G|} \end{bmatrix} = n$
• The number of measurements should verify
$m \geq \max_j \frac{c_j}{s_j} \geq \frac{n}{|G|}$
where $s_j$ and $c_j$ are the degrees and multiplicities of the linear representation $T_g$.
Examples: group of shifts $m \geq 1$; group of a single reflection $m \geq \frac{n}{2}$.
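The rank condition can be checked directly; a small numpy sketch for the toy setting ($A$ keeps the first $m$ coordinates and $G$ is the cyclic shift group on $\mathbb{R}^n$):

```python
# Stack A T_g over the group and test for full column rank n.
import numpy as np

n, m = 4, 2
A = np.eye(n)[:m]                                      # keeps the first m coordinates
T = [np.roll(np.eye(n), g, axis=0) for g in range(n)]  # cyclic shift group

stacked = np.vstack([A @ Tg for Tg in T])              # [A T_1; ...; A T_|G|]
print(np.linalg.matrix_rank(stacked) == n)             # True: learning is possible
```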
Theory
Necessary conditions: the operators $\{A T_g\}_{g \in G}$ should be different enough.
Theorem: Learning only possible if 𝐴 is not equivariant to the group action
• If 𝐴 is equivariant, all 𝐴𝑇𝑔 have the same nullspace
Example: Any operator 𝐴 consisting of a subset of Fourier measurements is
equivariant to the group of shifts
• MRI, CT, super-resolution, etc.
Equivariant imaging loss
How can we enforce invariance in practice?
Idea: we have $f(A T_g x) = T_g f(A x)$, i.e. $f \circ A$ should be $G$-equivariant
Equivariant imaging
Unsupervised training loss:
$\arg\min_f \; \mathcal{L}_{MC}(f) + \mathcal{L}_{EI}(f)$
• $\mathcal{L}_{MC}(f) = \sum_i \|y_i - A f(y_i)\|^2$: measurement consistency
• $\mathcal{L}_{EI}(f) = \sum_{i,g} \|f(A T_g f(y_i)) - T_g f(y_i)\|^2$: enforces equivariance of $f \circ A$
Network-agnostic: applicable to any existing deep model!
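A hedged PyTorch sketch of this objective (here `model`, `A` and `T` are assumed callables for $f$, the forward operator, and a sampled group action, not the talk's exact implementation):

```python
# Equivariant-imaging loss: measurement consistency + equivariance term.
import torch

def ei_loss(model, A, T, y, alpha=1.0):
    x1 = model(y)                           # f(y)
    mc = ((y - A(x1)) ** 2).sum()           # L_MC = ||y - A f(y)||^2
    x2 = T(x1)                              # T_g f(y): another plausible signal
    ei = ((model(A(x2)) - x2) ** 2).sum()   # L_EI = ||f(A T_g f(y)) - T_g f(y)||^2
    return mc + alpha * ei
```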
Experiments
Tasks:
• Magnetic resonance imaging
Network
• $f = g_\theta \circ A^\dagger$ where $g_\theta$ is a U-net CNN
Comparison
• Pseudo-inverse $A^\dagger y_i$ (no training)
• Meas. consistency: $A f(y_i) = y_i$
• Fully supervised loss: $f(y_i) = x_i$
• Equivariant imaging (unsupervised): $A f(y_i) = y_i$ and equivariant $f \circ A$
[Figure: reconstructions: $A^\dagger y$, measurement consistency, equivariant imaging, fully supervised]
Magnetic resonance imaging
• Operator 𝐴 is a subset of Fourier measurements (x2 downsampling)
• Dataset is approximately rotation invariant
[Figure: signal $x$ and measurements $y$]
2. Learning from noisy measurements
What about noise?
Noisy measurements: $y|u \sim q_u(y)$, with $u = Ax$
Examples: Gaussian noise, Poisson noise, Poisson-Gaussian noise
MRI with different noise levels: EI degrades with noise!
[Plot: reconstruction quality vs. Gaussian noise standard deviation]
Handling noise via SURE
Oracle consistency loss with clean/noisy measurement pairs $(u_i, y_i)$:
$\mathcal{L}_{MC}(f) = \sum_i \|u_i - A f(y_i)\|^2$
However, we don't have the clean $u_i$!
Idea: a proxy unsupervised loss $\mathcal{L}_{SURE}(f)$ which is an unbiased estimator, i.e.
$\mathbb{E}_{y,u}\{\mathcal{L}_{MC}(f)\} = \mathbb{E}_y\{\mathcal{L}_{SURE}(f)\}$
Handling noise via SURE
Gaussian noise $y \sim \mathcal{N}(u, \sigma^2 I)$:
$\mathcal{L}_{SURE}(f) = \sum_i \|y_i - A f(y_i)\|^2 - \sigma^2 m + 2\sigma^2 \,\mathrm{div}(A \circ f)(y_i)$
where $\mathrm{div}\, h(x) = \sum_j \frac{\partial h_j}{\partial x_j}$ is approximated with a Monte Carlo estimate which only requires evaluations of $h$ [Ramani, 2008].
Theorem [Stein, 1981]: Under mild differentiability conditions on the function $A \circ f$, the following holds:
$\mathbb{E}_{y,u}\{\mathcal{L}_{MC}(f)\} = \mathbb{E}_y\{\mathcal{L}_{SURE}(f)\}$
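A hedged PyTorch sketch of the Gaussian SURE loss with the Monte Carlo divergence estimator of [Ramani, 2008], $\mathrm{div}\,h(y) \approx b^T (h(y + \epsilon b) - h(y)) / \epsilon$ for a random probe $b$ (the `sure_loss` helper name is an assumption):

```python
# Gaussian SURE loss with a Monte Carlo divergence estimate of h = A ∘ f.
import torch

def sure_loss(model, A, y, sigma, eps=1e-3):
    h = lambda u: A(model(u))                        # h = A ∘ f
    hy = h(y)
    b = torch.randn_like(y)                          # random probe vector
    div = (b * (h(y + eps * b) - hy)).sum() / eps    # MC estimate of div(h)(y)
    m = y.numel()
    return ((y - hy) ** 2).sum() - sigma**2 * m + 2 * sigma**2 * div
```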
Robust EI: SURE+EI
Robust Equivariant Imaging
$\arg\min_f \; \mathcal{L}_{SURE}(f) + \mathcal{L}_{EI}(f)$
• $\mathcal{L}_{SURE}(f)$: unbiased estimator of the oracle measurement consistency
  - noise dependent
  - Gaussian, Poisson, Poisson-Gaussian
• $\mathcal{L}_{EI}(f)$: enforces equivariance of $f \circ A$
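Combining the two sketches above gives a hedged version of the Robust EI objective (reusing the hypothetical `sure_loss` helper from the previous block):

```python
# Robust EI: swap the noise-sensitive L_MC for its SURE estimate.
def robust_ei_loss(model, A, T, y, sigma, alpha=1.0):
    x1 = model(y)
    x2 = T(x1)                              # T_g f(y)
    ei = ((model(A(x2)) - x2) ** 2).sum()   # equivariance term
    return sure_loss(model, A, y, sigma) + alpha * ei
```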
Experiments
Tasks:
• Magnetic resonance imaging (Gaussian noise)
• Image inpainting (Poisson noise)
• Computed tomography (Poisson-Gaussian noise)
Network
• $f = g_\theta \circ A^\dagger$ where $g_\theta$ is a U-net CNN
Comparison
• Meas. consistency: $A f(y_i) = y_i$
• Fully supervised loss: $f(y_i) = x_i$
• Equivariant imaging (unsupervised): $A f(y_i) = y_i$ and equivariant $f \circ A$
Magnetic resonance imaging
[Plot: reconstruction quality vs. Gaussian noise standard deviation]
Magnetic resonance imaging
• Operator 𝐴 is a subset of Fourier measurements (x4 downsampling)
• Gaussian noise (𝜎 = 0.2)
• Dataset is approximately rotation invariant
[Figure: measurements $y$, signal $x$, and reconstructions: supervised, measurement consistency, Robust EI]
Inpainting
• Operator $A$ is an inpainting mask (30% of pixels dropped)
• Poisson noise (rate = 10)
• Dataset is approximately shift invariant
[Figure: signal $x$, measurements $y$, and reconstructions: supervised, measurement consistency, Robust EI]
Computed tomography
• Operator $A$ is a (non-linear variant of the) sparse Radon transform (50 views)
• Mixed Poisson-Gaussian noise
• Dataset is approximately rotation invariant
[Figure: clean signal $x$, noisy measurements $y$, and reconstructions: supervised, measurement consistency, Robust EI]
Conclusions
Novel unsupervised learning framework
• Theory: Necessary conditions for learning
• Number of measurements
• Interplay between the forward operator and data invariance
• Practice: deep learning approach
• Unsupervised loss which can be applied to any model
Conclusions
Novel unsupervised learning framework
• Ongoing/future work
• Sufficient conditions for learning
• More inverse problems
• Other signal domains
• Learning continuous representations
Papers
[1] “Equivariant Imaging: Learning Beyond the Range Space” (2021), Chen, Tachella and Davies, ICCV 2021
[2] “Robust Equivariant Imaging: a fully unsupervised framework for learning to image from noisy and partial measurements” (2021), Chen, Tachella and Davies, arXiv
[3] “Sampling Theorems for Unsupervised Learning in Inverse Problems” (2022), Tachella, Chen and Davies, to appear.
Thanks for your attention!
Tachella.github.io
• Codes
• Presentations
• … and more