Image Super-Resolution via Iterative Refinement (2022 TPAMI)
 What are Diffusion Models?
 Forward diffusion process
$x_0 \to x_T$: noise is added step by step (a Markov process); the diffusion step size is controlled by a variance schedule $\{\beta_t \in (0,1)\}_{t=1}^{T}$.
As $T \to \infty$, $x_T \to \mathcal{N}(0, I)$.
Define $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$.
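With these definitions, $q(x_t \mid x_0) = \mathcal{N}\big(\sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) I\big)$, so $x_t$ can be drawn in one shot from $x_0$. A minimal NumPy sketch (the linear schedule endpoints follow Ho et al., 2020; the names are ours):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear variance schedule (Ho et al., 2020)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # \bar{alpha}_t = prod_{i<=t} alpha_i

rng = np.random.default_rng(0)

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t)*x0, (1 - abar_t)*I) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
```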
 What are Diffusion Models?
 Reverse diffusion process
$x_T \to x_0$: noise is removed step by step by a learned reverse Markov chain.
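Each reverse step is Gaussian with a mean predicted by the network. A minimal sketch of one DDPM-style denoising step, reusing `betas`, `alphas`, `alpha_bars`, and `rng` from the forward-process sketch above (`eps_model` is a hypothetical trained noise predictor $\epsilon_\theta(x_t, t)$):

```python
def p_sample_step(eps_model, x_t, t):
    """One reverse step x_t -> x_{t-1}, with the fixed variance choice sigma_t^2 = beta_t."""
    eps_hat = eps_model(x_t, t)  # predicted noise epsilon_theta(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean              # no noise is added at the final step
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
```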
 Loss Function
( Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models[J]. Advances in Neural Information
Processing Systems, 2020, 33: 6840-6851. )
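Ho et al. train with the simplified objective $L_{\text{simple}} = \mathbb{E}_{t,\, x_0,\, \epsilon}\big[\lVert \epsilon - \epsilon_\theta(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon,\ t) \rVert^2\big]$. A one-function sketch, reusing the arrays defined above:

```python
def loss_simple(eps_model, x0):
    """L_simple (Ho et al., 2020): MSE between the injected and the predicted noise."""
    t = int(rng.integers(0, T))            # t ~ Uniform over timesteps
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return np.mean((eps - eps_model(x_t, t)) ** 2)
```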
 Improved Denoising Diffusion Probabilistic Models
( Nichol A Q, Dhariwal P. Improved denoising diffusion probabilistic models[C]//International Conference
on Machine Learning. PMLR, 2021: 8162-8171. )
Helps diffusion models obtain a lower negative log-likelihood (NLL).
 Improving the Noise Schedule: a cosine-based variance schedule for $\beta_t$
(a small offset $s$ prevents $\beta_t$ from being too small near $t = 0$)
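Nichol and Dhariwal define $\bar{\alpha}_t = f(t)/f(0)$ with $f(t) = \cos^2\!\big(\frac{t/T + s}{1 + s} \cdot \frac{\pi}{2}\big)$ and recover $\beta_t = 1 - \bar{\alpha}_t / \bar{\alpha}_{t-1}$, clipped for numerical stability. A sketch:

```python
import numpy as np

def cosine_beta_schedule(T, s=0.008, max_beta=0.999):
    """Cosine schedule (Nichol & Dhariwal, 2021): abar_t = f(t)/f(0),
    f(t) = cos^2(((t/T + s) / (1 + s)) * pi/2). The offset s keeps
    beta_t from being too small near t = 0."""
    t = np.arange(T + 1)
    f = np.cos(((t / T + s) / (1 + s)) * np.pi / 2) ** 2
    alpha_bars = f / f[0]
    betas = 1.0 - alpha_bars[1:] / alpha_bars[:-1]
    return np.clip(betas, 0.0, max_beta)   # clip to avoid singularities near t = T
```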
 Learning the variance $\Sigma_\theta(x_t, t)$
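They parameterize the reverse-process variance as a log-space interpolation between the two extremes $\beta_t$ and $\tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\beta_t$, with an extra per-dimension network output $v$. A sketch of that parameterization:

```python
import numpy as np

def learned_variance(v, beta_t, beta_tilde_t):
    """Improved DDPM variance (Nichol & Dhariwal, 2021):
    Sigma_theta(x_t, t) = exp(v * log(beta_t) + (1 - v) * log(beta_tilde_t)),
    where v (one value per dimension) is predicted by the network alongside the noise."""
    return np.exp(v * np.log(beta_t) + (1.0 - v) * np.log(beta_tilde_t))
```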
 Conditional Denoising Diffusion Model
 Gaussian Diffusion Process
 Optimizing the Denoising Model
 Conditional Denoising Diffusion Model
 Inference via Iterative Refinement
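A sketch of one SR3 refinement step (Algorithm 2 in the paper), assuming a trained denoiser `f_theta(x_lr, y_t, gamma_t)` conditioned on the upsampled low-resolution image, and SR3-notation schedule arrays `alphas` and `gammas` with $\gamma_t = \prod_{i=1}^{t}\alpha_i$; the variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def sr3_refine_step(f_theta, x_lr, y_t, t, alphas, gammas):
    """One iterative-refinement step y_t -> y_{t-1}: subtract the predicted noise,
    rescale, and (except at the last step) re-inject Gaussian noise.
    Arrays are 0-indexed over t = 0, ..., T-1, so gammas[0] = alphas[0] < 1."""
    eps_hat = f_theta(x_lr, y_t, gammas[t])
    mean = (y_t - (1.0 - alphas[t]) / np.sqrt(1.0 - gammas[t]) * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    return mean + np.sqrt(1.0 - alphas[t]) * rng.standard_normal(y_t.shape)
```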
 SR3 Model Architecture
 Noise Schedules
( Chen N, Zhang Y, Zen H, et al. WaveGrad: Estimating gradients for waveform generation[J].
arXiv preprint arXiv:2009.00713, 2020. )
During training the noise level is sampled directly: $\gamma \sim p(\gamma)$, where $p(\gamma) = \sum_{t=1}^{T} \frac{1}{T}\, U(\gamma_{t-1}, \gamma_t)$, i.e. first $t \sim U(\{1, 2, \dots, T\})$, then $\gamma \sim U(\gamma_{t-1}, \gamma_t)$.
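A minimal sketch of this hierarchical sampling (the `gammas` array, $[\gamma_0, \gamma_1, \dots, \gamma_T]$, is assumed given):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gamma(gammas):
    """gamma ~ p(gamma): pick a segment t uniformly, then sample gamma
    uniformly from the interval between gamma_{t-1} and gamma_t."""
    T = len(gammas) - 1
    t = int(rng.integers(1, T + 1))              # t ~ Uniform({1, ..., T})
    lo, hi = sorted((gammas[t - 1], gammas[t]))  # gamma_t decreases with t in SR3
    return rng.uniform(lo, hi)
```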
 Experiments
Super-resolution results (64×64 → 256×256) for SR3 and a Regression baseline on ImageNet test images.
 Experiments
Results of an SR3 model (64×64 → 512×512), trained on FFHQ and applied to images outside of the training set.
 Experiments
2AFC (two-alternative forced-choice) study: human raters choose which of two images they believe is the real photograph, and results are reported as the rate at which model outputs fool them.
 Cascaded High-Resolution Image Synthesis
 Ablation Studies
Thank you for your attention!
