Review DRCN
1. Deeply-Recursive Convolutional Network for Image Super-Resolution
CVPR 2016 (accepted) (arXiv version)
Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee
Presenter: 정우진
Computer Vision and Pattern Recognition Lab, Hanyang University
Super-resolution results of “baby” (Set5) with scale factor ×2. Red: winner; blue: runner-up.
2. / 20
• Introduction
• Related Work
• Proposed Method
– Basic Model
– Advanced Model
– Training
• Experimental Results
• Conclusion
Contents
3. / 20
Single-Image Super-Resolution
• Statistical image prior
– J. Sun, Z. Xu, and H.-Y. Shum. Image super-resolution using gradient profile prior. In CVPR, 2008.
– K. I. Kim and Y. Kwon. Single-image super-resolution using sparse regression and natural image prior. TPAMI, 2010.
• Internal patch recurrence
– D. Glasner, S. Bagon, and M. Irani. Super-resolution from a single image. In ICCV, 2009.
– J.-B. Huang, A. Singh, and N. Ahuja. Single image super-resolution using transformed self-exemplars. In CVPR, 2015.
• Neighbor embedding
– H. Chang, D.-Y. Yeung, and Y. Xiong. Super-resolution through neighbor embedding. In CVPR, 2004.
– M. Bevilacqua, A. Roumy, C. Guillemot, and M.-L. Alberi-Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In BMVC, 2012.
• Sparse coding
– J. Yang, J. Wright, T. S. Huang, and Y. Ma. Image super-resolution via sparse representation. TIP, 2010.
– R. Zeyde, M. Elad, and M. Protter. On single image scale-up using sparse-representations. In Curves and Surfaces. Springer, 2012.
– R. Timofte, V. De Smet, and L. Van Gool. Anchored neighborhood regression for fast example-based super-resolution. In ICCV, 2013.
– R. Timofte, V. De Smet, and L. Van Gool. A+: Adjusted anchored neighborhood regression for fast super-resolution. In ACCV, 2014.
• Convolutional neural network
– C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. TPAMI, 2014.
• Random forest
– S. Schulter, C. Leistner, and H. Bischof. Fast and accurate image upscaling with super-resolution forests. In CVPR, 2015.
Related Work
4. / 20
SRCNN
• Super-Resolution Convolutional Neural Network
• Only 3 convolution layers
• A shallow network
• No fully-connected layers (a minimal sketch follows the figures below)
Introduction
Figures: structure of SRCNN; PSNR curve on Set5
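For concreteness, a minimal PyTorch sketch of the 9-1-5 SRCNN described above. The channel widths (64, 32) follow the SRCNN paper; the padding is added here to preserve spatial size (the original uses none):

```python
import torch.nn as nn

class SRCNN(nn.Module):
    """Three convolution layers, no pooling, no fully-connected layers."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),             # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):  # x: bicubic-upscaled luminance, shape (N, 1, H, W)
        return self.body(x)
```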
5. / 20
Weakness of SRCNN
• Shallow
– Performance: a 4-layer SRCNN is no better than the 3-layer one
– Reasonable solution: stacking identical blocks
• Small receptive field
– Receptive field: the set of input pixels that influence one output pixel
– A larger receptive field performs at least as well as a smaller one (a worked example follows the captions below)
Introduction
3 layers vs. 4 layers
SRCNN(9-1-5) vs. SRCNN(9-3-5) vs. SRCNN(9-5-5)
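Why the receptive field tracks depth: for a stride-1 stack of convolutions with kernel sizes $k_1, \dots, k_L$, the receptive field is

$$\text{RF} = 1 + \sum_{i=1}^{L} (k_i - 1)$$

SRCNN(9-1-5) gives $\text{RF} = 1 + 8 + 0 + 4 = 13$, i.e. a 13×13 field. Twenty 3×3 layers (16 recursions plus, assuming two embedding and two reconstruction convolutions) give $\text{RF} = 1 + 20 \cdot 2 = 41$, matching the 41×41 patch size used later in training.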
6. / 20
Overview
• Embedding network: extracts feature maps from the input image
• Inference network: the main recursive component
• Reconstruction network: maps features back to an image (a code sketch follows the figure caption below)
Proposed Method >> Basic Model
Architecture of basic model
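A minimal sketch of the basic model (256 channels, 16 recursions, as stated on the following slides). The exact number of convolutions in the embedding and reconstruction nets is an assumption here, chosen to be consistent with the 41×41 receptive field:

```python
import torch.nn as nn

def conv_relu(in_ch=256, out_ch=256):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(inplace=True))

class DRCNBasic(nn.Module):
    def __init__(self, D=16):
        super().__init__()
        self.D = D
        self.embed = nn.Sequential(conv_relu(1, 256), conv_relu())  # feature extraction
        self.block = conv_relu()          # ONE recursive block, reused D times
        self.reconstruct = nn.Sequential(conv_relu(),
                                         nn.Conv2d(256, 1, 3, padding=1))

    def forward(self, x):                 # x: bicubic-upscaled input image
        h = self.embed(x)
        for _ in range(self.D):           # unfolded inference network
            h = self.block(h)             # same weights at every step
        return self.reconstruct(h)
```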
7. / 20
Description for a Block
• Convolution layer
– Filter size: 3×3, 256 channels
• No pooling layers
– The main role of pooling is to drop details
– Not useful for super-resolution
• ReLU activation
Proposed Method >> Basic Model
A block of DRCN
8. / 20
Unfolding Inference Network
• Shared filters
– Identical weights in every block
• Recursions
– 16, chosen by experiment (a parameter count follows the figure caption below)
Proposed Method >> Basic Model
Unfolding inference network
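To make the weight sharing concrete: one 256-channel 3×3 convolution applied 16 times has 16× fewer parameters than 16 distinct convolutions, so depth (and receptive field) grows without growing the model. A quick check:

```python
import torch.nn as nn

shared = nn.Conv2d(256, 256, 3, padding=1)        # one block, applied 16 times
stacked = nn.Sequential(*[nn.Conv2d(256, 256, 3, padding=1) for _ in range(16)])

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(shared))    # 590,080  (~0.59M)
print(count(stacked))   # 9,441,280 (~9.4M), 16x more
```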
9. / 20
Advanced Model
• Skip connection
– In SR, the input image already contains most of the information needed to reconstruct the output
– The network therefore only has to model what the input lacks (the residual)
• Output integration
– The outputs of all recursions are combined by a weighted sum (see the sketch after the figure caption below)
Proposed Method >> Advanced Model
Architecture of Advanced model
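A sketch of the advanced model, extending the DRCNBasic sketch above. The skip connection is modeled here as adding the input image back to each block's reconstruction (each block predicts a residual), and the integration weights are learned; both are hedged readings of the slide, not the paper's exact wiring:

```python
import torch
import torch.nn as nn

class DRCNAdvanced(DRCNBasic):                    # reuses the basic-model sketch
    def __init__(self, D=16):
        super().__init__(D)
        self.w = nn.Parameter(torch.full((D,), 1.0 / D))  # learned integration weights

    def forward(self, x):
        h = self.embed(x)
        preds = []
        for _ in range(self.D):
            h = self.block(h)
            preds.append(self.reconstruct(h) + x)  # skip connection: add input back
        preds = torch.stack(preds)                 # (D, N, 1, H, W)
        final = (self.w.view(-1, 1, 1, 1, 1) * preds).sum(dim=0)
        return preds, final   # intermediates kept for recursive supervision
```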
10. / 20
Recursive Supervision
• Eases the difficulty of training deep recursions
• The same HR target image supervises the output of every block
• The tool that lets the network go beyond SRCNN's 3-layer limit
Proposed Method >> Training
Architecture of Advanced model
11. / 20
Loss Function
• A term supervising each block's output
• A term for the weighted integration producing the final output
• The total loss function combines both with weight decay (reconstructed below)
Proposed Method >> Training
Output of each block
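The equations on this slide were images in the original deck; as given in the DRCN paper, with $y^{(i)}$ the HR target, $\hat{y}_d^{(i)}$ the prediction of the $d$-th recursion, $D$ recursions, and $N$ training samples:

$$l_1(\theta) = \sum_{d=1}^{D} \sum_{i=1}^{N} \frac{1}{2DN} \left\| y^{(i)} - \hat{y}_d^{(i)} \right\|^2 \quad \text{(per-block supervision)}$$

$$l_2(\theta) = \sum_{i=1}^{N} \frac{1}{2N} \left\| y^{(i)} - \sum_{d=1}^{D} w_d\, \hat{y}_d^{(i)} \right\|^2 \quad \text{(final weighted integration)}$$

$$L(\theta) = \alpha\, l_1(\theta) + (1 - \alpha)\, l_2(\theta) + \beta \|\theta\|^2$$

where $\alpha$ balances the two objectives and $\beta$ is the weight-decay coefficient.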
12. / 20
Training Setups
• Recursions: 16
• Momentum and weight decay: 0.9 and 0.0001 (typical values)
• Patch size: 41×41 (equal to the receptive field)
• Learning rate
– Initial value: 0.01
– Decreasing factor: 10
– Decreasing condition: error does not decrease for 5 epochs
– Termination: learning rate falls below 10^-6 (a sketch of this schedule follows below)
Proposed Method >> Training
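A minimal sketch of the learning-rate schedule described above; the function name and signature are illustrative, not from the authors' code:

```python
def adjust_learning_rate(lr, val_errors, factor=10, patience=5, min_lr=1e-6):
    """Divide lr by `factor` when the error has not decreased for
    `patience` epochs; stop training once lr falls below `min_lr`."""
    plateaued = (len(val_errors) > patience and
                 min(val_errors[-patience:]) >= min(val_errors[:-patience]))
    if plateaued:
        lr /= factor
    return lr, lr >= min_lr   # (new lr, keep-training flag)

# Example: called once per epoch with the error history so far.
# Here the error has been flat for 5 epochs, so lr drops to 0.001.
lr, keep_going = adjust_learning_rate(0.01, [0.30] * 6)
```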
18. / 20
Recursion versus Performance
• Log-like relationship: performance grows roughly logarithmically with the number of recursions
Experimental Results
Recursion versus performance for scale factor ×3 on Set5
19. / 20
Ensemble
• A single output is clearly worse than the ensemble of outputs (Single << Ensemble)
• Deep network versus shallow network
– Before the critical point, deeper is better
– After the critical point, deeper is worse
– Each upscale factor has its own critical point
Experimental Results
Ensemble effect