Single Image Super-Resolution
Supervision: Eng. Fadi Taki Al-Deen
Students:
Abdulrahman Baqdunes Mhd Wajeeh Ajajeh
Manaf Alabd Alrahim Nour Eddin Ramadan
What is Single Image Super-Resolution (SISR)?
• Estimating a high-resolution image from its low-resolution counterpart.
• SISR is more flexible and has a wider range of applications than Multiple-Image Super-Resolution, since it needs only one picture as input, but it is also more challenging.
Applications
• Improves the resolution of ordinary video.
• Improves surveillance-video resolution.
• Medical applications.
• Improves satellite-image resolution.
Bicubic Interpolation
• A fixed interpolation filter.
• Very fast.
• Oversimplifies the picture.
• Yields solutions with overly smooth textures.
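The kernel behind this filter can be sketched in a few lines. A minimal, illustrative 1-D version (the helper names are ours, and a = −0.5 is one common choice, not necessarily what any particular library uses):

```python
def bicubic_kernel(x, a=-0.5):
    """Standard bicubic kernel; interpolating (weight 1 at 0, 0 at integers)."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def upscale_1d(samples, factor):
    """Naively resample a 1-D signal by `factor` using the 4 nearest samples."""
    n = len(samples)
    out = []
    for i in range(n * factor):
        src = i / factor              # position in source coordinates
        base = int(src)
        val = 0.0
        # weight the 4 neighbouring source samples (clamped at the borders)
        for k in range(base - 1, base + 3):
            j = min(max(k, 0), n - 1)
            val += samples[j] * bicubic_kernel(src - k)
        out.append(val)
    return out

print(upscale_1d([0.0, 1.0, 1.0, 0.0], 2))
```

Because the kernel weights sum to 1 and vanish at nonzero integers, original samples are reproduced exactly at their own positions; the in-between values are the smooth interpolants that give bicubic its "overly smooth" look.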
SRCNN
• Three convolutional layers.
• Fully feed-forward, and therefore fast.
• The low-resolution input is first upscaled to the desired size using bicubic interpolation.
• The first convolutional layer extracts a set of feature maps. The second layer maps these feature maps nonlinearly to high-resolution patch representations. The last layer combines the predictions within a spatial neighborhood to produce the final high-resolution image.
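The three-layer pipeline above can be sketched with plain numpy. This is an illustrative forward pass only, with toy channel counts and random weights (no training); the 9-1-5 kernel sizes follow the SRCNN paper's default configuration:

```python
import numpy as np

def conv2d(x, w, b):
    """Naive 'valid' convolution: x is (C_in, H, W), w is (C_out, C_in, k, k)."""
    c_out, _, k, _ = w.shape
    h, wd = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[o]) + b[o]
    return out

def srcnn_forward(y, params):
    """Patch extraction -> non-linear mapping -> reconstruction."""
    (w1, b1), (w2, b2), (w3, b3) = params
    f1 = np.maximum(conv2d(y, w1, b1), 0)   # ReLU
    f2 = np.maximum(conv2d(f1, w2, b2), 0)  # ReLU
    return conv2d(f2, w3, b3)               # linear reconstruction layer

rng = np.random.default_rng(0)
n1, n2 = 8, 4  # toy widths; the paper uses 64 and 32
params = [
    (rng.normal(0, 0.01, (n1, 1, 9, 9)), np.zeros(n1)),   # 9x9 extraction
    (rng.normal(0, 0.01, (n2, n1, 1, 1)), np.zeros(n2)),  # 1x1 mapping
    (rng.normal(0, 0.01, (1, n2, 5, 5)), np.zeros(1)),    # 5x5 reconstruction
]
y = rng.random((1, 33, 33))  # a bicubic-upscaled luminance patch
sr = srcnn_forward(y, params)
print(sr.shape)  # valid convolutions shrink 33x33 to 21x21
```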
SRCNN cont.
• The loss function is Mean Squared Error (MSE).
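MSE directly determines PSNR, the metric that MSE-trained models score well on; a small sketch (helper names are ours):

```python
import numpy as np

def mse(sr, hr):
    """Pixel-wise mean squared error between two images."""
    return np.mean((sr - hr) ** 2)

def psnr(sr, hr, peak=1.0):
    """PSNR in dB; minimising MSE is equivalent to maximising PSNR."""
    return 10 * np.log10(peak ** 2 / mse(sr, hr))

hr = np.zeros((4, 4))
sr = np.full((4, 4), 0.1)
print(mse(sr, hr), psnr(sr, hr))
```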
DRCN
• The network takes an input image interpolated to the desired size.
• Consists of three sub-networks: embedding, inference, and reconstruction networks.
• The embedding net represents the given image as feature maps ready for inference.
• The inference net solves the task, applying a single shared (recursive) layer repeatedly.
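The key idea of the inference net is weight sharing across recursions. A toy numpy sketch, where a small linear map stands in for the recursive convolution (all names and sizes are illustrative):

```python
import numpy as np

def recursive_inference(features, w, depth):
    """Apply one shared mapping `depth` times (DRCN-style weight sharing)."""
    outputs = []
    h = features
    for _ in range(depth):
        h = np.tanh(w @ h)   # the SAME weights are reused at every recursion
        outputs.append(h)    # keep each intermediate state: DRCN reconstructs
                             # an output from every recursion and supervises all
    return outputs

rng = np.random.default_rng(1)
w = rng.normal(0, 0.5, (6, 6))
feats = rng.normal(size=(6,))
states = recursive_inference(feats, w, depth=4)
print(len(states))  # one intermediate output per recursion
```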
DRCN Cont.
• One loss term is computed over the intermediate outputs of each recursion.
• A second loss term is computed over the final output.
DRCN Cont.
• The final loss function L(θ) combines both terms. Training is regularized by weight decay (an L2 penalty multiplied by β).
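The loss expressions referred to above can be written out as follows (a reconstruction following the DRCN paper by Kim et al.; symbols may differ from the original slides):

```latex
% Intermediate loss, averaged over D recursions and N training samples:
l_1(\theta) = \sum_{d=1}^{D} \sum_{i=1}^{N} \frac{1}{2DN}
  \left\lVert \mathbf{y}^{(i)} - \hat{\mathbf{y}}_d^{(i)} \right\rVert^2

% Final-output loss:
l_2(\theta) = \sum_{i=1}^{N} \frac{1}{2N}
  \left\lVert \mathbf{y}^{(i)} - \hat{\mathbf{y}}^{(i)} \right\rVert^2

% Combined objective with weight decay (L2 penalty scaled by beta):
L(\theta) = \alpha\, l_1(\theta) + (1-\alpha)\, l_2(\theta)
          + \beta \lVert \theta \rVert^2
```

Here α balances the intermediate and final reconstructions, and β controls the strength of the weight-decay regularizer mentioned above.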
SRResNet
• A 16-block-deep ResNet.
• Optimized for MSE.
• Has skip connections.
SRGAN
• Generator:
• A deep network with B residual blocks.
• Two trained sub-pixel convolutional layers with small 3×3 kernels.
• Batch-normalization layers.
• The activation function is Parametric ReLU (PReLU).
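A sub-pixel convolution upscales by rearranging channels into spatial positions ("pixel shuffle"). A minimal sketch of that rearrangement step for upscale factor r, assuming numpy arrays (the function name is ours):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) -> (C, H*r, W*r): the depth-to-space step
    behind sub-pixel convolution."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)      # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16, dtype=float).reshape(4, 2, 2)  # 4 channels of 2x2
y = pixel_shuffle(x, 2)
print(y.shape)  # (1, 4, 4): four channels folded into a 2x-larger plane
```

Each output 2×2 block draws one value from each of the four input channels, so the convolution preceding the shuffle effectively learns the upsampling filter.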
SRGAN Cont.
• Discriminator:
• Eight convolutional layers.
• The final activation function is a sigmoid.
SRGAN Cont.
• Introduces a perceptual loss function.
SRGAN Cont.
• Classic pixel-wise content loss (MSE).
• Perceptual (VGG feature-based) content loss.
SRGAN Cont.
• Adversarial loss function.
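Putting the pieces together, the SRGAN objective can be sketched as follows (a reconstruction in the notation of the SRGAN paper, not copied from the slides):

```latex
% Perceptual loss: content term plus a small adversarial term
l^{SR} = l^{SR}_{X} + 10^{-3}\, l^{SR}_{Gen}

% VGG-based content loss on feature maps \phi_{i,j}:
l^{SR}_{VGG/i.j} = \frac{1}{W_{i,j} H_{i,j}}
  \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}}
  \left( \phi_{i,j}(I^{HR})_{x,y}
       - \phi_{i,j}\!\left(G_{\theta_G}(I^{LR})\right)_{x,y} \right)^2

% Adversarial (generative) loss over N samples:
l^{SR}_{Gen} = \sum_{n=1}^{N}
  -\log D_{\theta_D}\!\left(G_{\theta_G}(I^{LR})\right)
```

The content term keeps the output faithful to the ground truth (in pixel or VGG feature space), while the adversarial term pushes the generator toward images the discriminator accepts as natural.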
SRGAN Cont.
• Introduces the Mean Opinion Score (MOS).
• 26 raters were asked to assign an integer score from 1 (bad quality) to 5 (excellent quality) to the super-resolved images.
• Each rater rated 1128 images from the given datasets.
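Computing an MOS is simply averaging the integer ratings; a toy sketch with made-up scores:

```python
def mean_opinion_score(ratings):
    """Average of integer ratings on the 1 (bad) to 5 (excellent) scale."""
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be between 1 and 5")
    return sum(ratings) / len(ratings)

print(mean_opinion_score([5, 4, 4, 3, 5]))  # 4.2
```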
Datasets
• The models were tested on three of the best-known benchmark datasets for SISR: Set5, Set14, and BSD100.
Results
State of the Art
• https://www.shamra.sy/academia/show/5b0cf3d685112
Thank You!