Texture Synthesis
• Visual texture synthesis infers a generating process from an example texture, which then allows one to produce arbitrarily many new samples of that texture.
• Goal
The output should have the size given by the user.
The output should be as similar as possible to the sample.
The output should not have visible artifacts such as seams, blocks, and misfitting edges.
The output should not repeat (the same structures should not appear in multiple places in the output image).
Texture Synthesis
Texture Analysis
1. The original texture is passed through a pretrained CNN.
2. The Gram matrix is computed at each layer.
Texture Synthesis
1. A white-noise image is passed through the same pretrained CNN.
2. The Gram matrix is computed at each layer.
3. The noise image is updated by gradient descent to minimize the mismatch between the two sets of Gram matrices (see the sketch below).
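As a minimal sketch of the Gram-matrix step, assuming PyTorch feature maps from a pretrained network (the normalization by the number of positions is one common convention):

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Gram matrix of one layer's feature maps.

    features: (1, C, H, W) activations from a pretrained CNN.
    Returns the (C, C) matrix of inner products between vectorized
    feature maps; entry (i, j) measures how often channels i and j
    co-activate, with all spatial information averaged out.
    """
    _, c, h, w = features.shape
    f = features.view(c, h * w)   # one row per channel, positions flattened
    return f @ f.t() / (h * w)    # (C, C), normalized by number of positions
```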
Texture Synthesis
[Figure: original texture vs. syntheses matched up to a low layer and up to a high layer]
For loss terms above a certain layer we set the weights $w_l = 0$, while for loss terms at and below that layer we set $w_l = 1$ (see the sketch below).
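A sketch of that weighting scheme, reusing `gram_matrix` from above (layer ordering and the hard 0/1 cutoff follow the description; everything else is illustrative):

```python
def texture_loss(gen_feats, target_grams, cutoff: int) -> torch.Tensor:
    """Weighted sum of Gram-matrix mismatches over layers.

    gen_feats:    feature maps of the image being optimized,
                  ordered from low to high layers.
    target_grams: precomputed Gram matrices of the example texture.
    cutoff:       layers at or below this index get weight 1,
                  layers above it get weight 0.
    """
    loss = gen_feats[0].new_zeros(())
    for l, (f, g_target) in enumerate(zip(gen_feats, target_grams)):
        w = 1.0 if l <= cutoff else 0.0
        loss = loss + w * ((gram_matrix(f) - g_target) ** 2).sum()
    return loss
```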
Texture Synthesis
For each layer we computed the Gram-matrix representation of each image in the ImageNet training set and trained a linear soft-max classifier to predict object identity.
We expect that object information can be read out independently of the spatial information in the feature maps (a sketch follows).
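A hedged sketch of that experiment, reusing `gram_matrix` and the imports from the earlier sketches: a single linear layer over the flattened Gram matrix of one layer (the channel and class counts here are illustrative):

```python
import torch.nn as nn

c = 256                                  # channel count of the chosen layer
classifier = nn.Linear(c * c, 1000)      # 1000 ImageNet classes

def gram_logits(features: torch.Tensor) -> torch.Tensor:
    # The Gram matrix discards spatial layout, so any object identity
    # the classifier recovers must be readable without that layout.
    g = gram_matrix(features).flatten()
    return classifier(g)                 # soft-max is applied in the loss
```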
Image Style Transfer
The key finding of this paper is that the representations of content and style in a Convolutional Neural Network are well separable.
Tradeoff between content and style matching (the combined objective is written out below).
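The tradeoff is set by the weights in the combined objective of Gatys et al., where $\vec{p}$ is the photograph, $\vec{a}$ the artwork, and $\vec{x}$ the image being generated:

$$\mathcal{L}_{total}(\vec{p}, \vec{a}, \vec{x}) = \alpha \, \mathcal{L}_{content}(\vec{p}, \vec{x}) + \beta \, \mathcal{L}_{style}(\vec{a}, \vec{x})$$

A large ratio $\alpha / \beta$ preserves the photograph's content; a small one lets the artwork's style dominate.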
Image Style Transfer
Matching the content on a lower layer
→ the texture of the artwork is merely blended over the photograph.
Matching the content on a higher layer
→ the texture of the artwork and the content of the photograph are properly merged.
Image Style Transfer
Why the Gram Matrix?
Demystifying Neural Style Transfer
❖ Domain Adaptation
• Domain adaptation belongs to the area of transfer learning.
• It aims to transfer a model learned on a source domain to a target domain.
• In domain adaptation, the source and target domains share the same feature space (but have different distributions).
• A key component of domain adaptation is to measure and minimize the difference between the source and target distributions.
Demystifying Neural Style Transfer
❖ Maximum Mean Discrepancy (MMD)
• Maximum Mean Discrepancy (MMD) is a popular test statistic for two-sample testing.
• It measures the difference between sample means in a Reproducing Kernel Hilbert Space (RKHS).
• The MMD statistic can be used to measure the difference between two distributions (written out below).
• Null hypothesis ($H_0$): $p = q$
$$k(x, y) = \langle \phi(x), \phi(y) \rangle$$
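Written out for samples $X = \{x_i\}_{i=1}^{n} \sim p$ and $Y = \{y_j\}_{j=1}^{m} \sim q$, the (biased) squared statistic compares sample means in the RKHS and can be evaluated with the kernel alone:

$$\mathrm{MMD}^2[X, Y] = \Bigl\| \tfrac{1}{n}\textstyle\sum_i \phi(x_i) - \tfrac{1}{m}\sum_j \phi(y_j) \Bigr\|^2 = \tfrac{1}{n^2}\textstyle\sum_{i,i'} k(x_i, x_{i'}) - \tfrac{2}{nm}\sum_{i,j} k(x_i, y_j) + \tfrac{1}{m^2}\sum_{j,j'} k(y_j, y_{j'})$$

Under $H_0$: $p = q$, the statistic tends to zero.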
Demystifying Neural Style Transfer
❖ Loss
• Total Loss
• Content Loss
• Style Loss (each written out below)
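Written out (following Gatys et al., whose notation the paper reuses): $F^l$ and $P^l$ are the feature maps of the generated and content images, $G^l$ and $A^l$ the Gram matrices of the generated and style images, and $N_l$, $M_l$ the number of channels and spatial positions at layer $l$:

$$\mathcal{L}_{content} = \frac{1}{2} \sum_{i,j} \bigl(F^l_{ij} - P^l_{ij}\bigr)^2, \qquad \mathcal{L}_{style} = \sum_l \frac{w_l}{4 N_l^2 M_l^2} \sum_{i,j} \bigl(G^l_{ij} - A^l_{ij}\bigr)^2, \qquad \mathcal{L}_{total} = \alpha \, \mathcal{L}_{content} + \beta \, \mathcal{L}_{style}$$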
Demystifying Neural Style Transfer
❖ Reformulation of the Style Loss
• The style loss can be rewritten as an MMD with the second-order polynomial kernel $k(x, y) = (x^\top y)^2$ (derivation below).
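The paper's key identity: since $G^l_{ij} = \sum_k F^l_{ik} F^l_{jk}$, expanding the squared Gram difference turns the style loss at layer $l$ into a biased MMD estimate between two sets of per-position activation vectors, with the second-order polynomial kernel:

$$\mathcal{L}^l_{style} = \frac{1}{4 N_l^2 M_l^2} \sum_{i,j} \bigl(G^l_{ij} - A^l_{ij}\bigr)^2 = \frac{1}{4 N_l^2} \, \mathrm{MMD}^2\bigl[\mathcal{F}^l, \mathcal{S}^l\bigr], \qquad k(x, y) = (x^\top y)^2$$

where $\mathcal{F}^l$ and $\mathcal{S}^l$ collect the $M_l$ column vectors (one per spatial position) of the generated and style feature maps.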
Demystifying Neural Style Transfer
❖ Reformulation of the Style Loss
• We consider the style of one image at a certain layer of the CNN as a "domain".
• The activations at each position of the feature maps are considered as individual samples (see the sketch below).
• The style loss ignores the positions of the features, which is desirable for style transfer.
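A minimal sketch of that sample view, assuming PyTorch: each of the $H \cdot W$ positions contributes one $C$-dimensional sample, and the kernel sums mix all pairs of positions, which is exactly why the loss is position-free:

```python
def poly_mmd2(fx: torch.Tensor, fy: torch.Tensor) -> torch.Tensor:
    """Biased MMD^2 with k(x, y) = (x^T y)^2.

    fx, fy: (1, C, H, W) feature maps; each spatial position
    is treated as one C-dimensional sample from its "domain".
    """
    x = fx.flatten(2).squeeze(0).t()   # (H*W, C) samples, positions dropped
    y = fy.flatten(2).squeeze(0).t()
    k_xx = (x @ x.t()) ** 2            # kernel matrices over all sample pairs
    k_yy = (y @ y.t()) ** 2
    k_xy = (x @ y.t()) ** 2
    return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()
```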
Demystifying Neural Style Transfer
❖ Different Adaptation Methods for Neural Style Transfer
• Kernel: the choice of kernel in the MMD, e.g. $k(x, y) = (x^\top y)^2$
• Statistics
• MMD → Batch Normalization (BN) statistics
• Batch Normalization (BN) layers contain the traits of different domains (see the sketch below)
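A sketch of the BN-statistics variant, assuming PyTorch: instead of the full Gram matrix, match only the per-channel mean and standard deviation, i.e. the statistics a BN layer tracks (the exact normalization in the paper may differ):

```python
def bn_stats_loss(fx: torch.Tensor, fy: torch.Tensor) -> torch.Tensor:
    """Match per-channel mean and std of two feature maps.

    fx, fy: (1, C, H, W) feature maps of generated and style images.
    """
    x, y = fx.flatten(2), fy.flatten(2)                 # (1, C, H*W)
    mu = ((x.mean(dim=2) - y.mean(dim=2)) ** 2).sum()   # mean mismatch
    sigma = ((x.std(dim=2) - y.std(dim=2)) ** 2).sum()  # std mismatch
    return (mu + sigma) / fx.shape[1]                   # average over C
```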
Demystifying Neural Style Transfer
❖ Result
Reference
• https://en.wikipedia.org/wiki/Texture_synthesis
• http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture12.
• https://github.com/YBIGTA/data-science-2018/blob/master/DLCV/2018-02-02-Neural-style-transefer.md
• Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, "Texture Synthesis Using Convolutional Neural Networks", Neural Information Processing Systems (NIPS), 2015
• Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, "Image Style Transfer Using Convolutional Neural Networks", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2414-2423
• Yanghao Li, Naiyan Wang, Jiaying Liu, Xiaodi Hou, "Demystifying Neural Style Transfer", International Joint Conference on Artificial Intelligence (IJCAI), 2017