Prisma uses deep learning techniques like neural style transfer to transform photos into artworks. Neural style transfer uses convolutional neural networks to extract features from content and style images, then finds an image that minimizes differences in these features. Early work used iterative optimization, but real-time style transfer trains a generative CNN on a dataset to synthesize stylized images with one forward pass. Prisma's offline mode likely uses a similar generative approach to enable fast stylization on mobile.
1. Deep Learning behind Prisma
——Image style transfer with Convolutional Neural Network
lostleaf
2. Agenda
• Introduce deep learning models for image style transfer via recent papers
• Prisma is something of a stunt, but it most likely uses similar techniques
• Agenda:
• A brief introduction to convolutional neural networks
• Neural style
• Real-time style transfer
3. Prisma
• A Russian mobile app
• Turns your photos into
awesome artworks
• With Deep Learning!!!
Hotel Ukraine rendered by Prisma from Premier Medvedev’s
Instagram
9. Convolution
• The brown numbers in the yellow region are called the convolution kernel / filter
• Convolve the filter with the image: slide it over the image spatially, computing dot products (a code sketch follows below)
• Right: A 3*3 convolution sums
up the diagonals
From Prof. Andrew Ng’s UFLDL tutorial
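The sliding dot product can be written in a few lines of NumPy. This is a minimal illustrative sketch (the image and kernel values are made up; strictly speaking it computes cross-correlation, which is what deep learning libraries call convolution); an identity kernel reproduces the "sums up the diagonals" example on the slide.

import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image, taking a dot product at each position
    # ('valid' padding, stride 1).
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # toy 5*5 "image"
kernel = np.eye(3)                                # 3*3 filter that sums up a diagonal
print(conv2d(image, kernel))                      # 3*3 activation map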
13. Convolutional layer
• A convolutional layer
consists of several filters
• For example, if we have 6 filters of size 5*5 (on a 32*32 input), we get 6 separate activation maps (shape check below)
• Stack these up to get a
tensor of size 28*28*6
• May add padding to
obtain same output size
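A quick check of these shapes, using PyTorch purely for illustration (the slides themselves don't name a framework): 6 filters of size 5*5 applied to a 32*32 single-channel input give a 28*28*6 output, and padding of 2 keeps the output at 32*32.

import torch
import torch.nn as nn

x = torch.randn(1, 1, 32, 32)                     # one 32*32 single-channel image
conv = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
print(conv(x).shape)                              # torch.Size([1, 6, 28, 28])

same = nn.Conv2d(1, 6, kernel_size=5, padding=2)  # padding preserves the spatial size
print(same(x).shape)                              # torch.Size([1, 6, 32, 32])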
14. Why convolution?
• Each value in an activation map can be viewed as the output of a neuron
• Features of image data:
• pixels only related to small
neighborhood (local connection)
• repeat pattern & content move around
(weight sharing)
• Reduces the complexity and computation of the network by exploiting these properties of images; see the parameter comparison below
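A back-of-the-envelope comparison of the savings from local connections and weight sharing, reusing the 32*32 input and 6 filters of 5*5 from the previous slide (my own illustration, not a figure from the talk):

# Fully connected: every output value gets its own weight for every input pixel.
fc_weights = (32 * 32) * (28 * 28 * 6)   # ~4.8 million weights
# Convolutional: 6 shared 5*5 filters, no matter how large the image is.
conv_weights = 6 * (5 * 5)               # 150 weights
print(fc_weights, conv_weights)          # 4816896 150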
15. Pooling Layer
• Right: max pooling for example
• Operate independently on every
depth slice of the input
• Reduce the spatial size of the activation map (fewer parameters and less computation); a toy example follows below
• Increase the shift invariance
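A small NumPy sketch of 2*2 max pooling with stride 2 on a single depth slice (the values are made up):

import numpy as np

def max_pool_2x2(x):
    # Split the slice into non-overlapping 2*2 blocks and keep the maximum of each.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 3, 2, 4],
              [5, 6, 7, 8],
              [3, 2, 1, 0],
              [1, 2, 3, 4]], dtype=float)
print(max_pool_2x2(x))   # [[6. 8.]
                         #  [3. 4.]]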
16. Case study 1: MNIST & LeNet
• MNIST handwritten digit recognition
• The “hello world” of deep learning
17. LeNet
LeCun, Yann, et al. "Gradient-based learning applied to document recognition." Proceedings of the IEEE 86.11 (1998): 2278-2324.
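For reference, a compact PyTorch sketch of a LeNet-style network for 28*28 MNIST digits; the ReLU activations and max pooling are modern substitutions of mine, not the exact 1998 architecture from the paper cited above.

import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28*28*1 -> 28*28*6
    nn.ReLU(),
    nn.MaxPool2d(2),                            # -> 14*14*6
    nn.Conv2d(6, 16, kernel_size=5),            # -> 10*10*16
    nn.ReLU(),
    nn.MaxPool2d(2),                            # -> 5*5*16
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),
    nn.ReLU(),
    nn.Linear(120, 84),
    nn.ReLU(),
    nn.Linear(84, 10),                          # 10 digit classes
)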
18. Case study 2: ImageNet & VGGNet
• ImageNet: a large image dataset with thousands of classes
19. VGGNet (VGG-19)
Image by Mark Chang
• Runner-up of the ImageNet challenge 2014
• 19 trainable layers
• 16 convolutional layers (3*3)
• 5 max pooling layers (2*2)
• 3 fully connected layers
Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition."
arXiv preprint arXiv:1409.1556 (2014).
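The style-transfer slides that follow only need the convolutional part of VGG-19 as a frozen feature extractor. With torchvision (my assumption; the slides don't specify an implementation) that is roughly:

import torchvision.models as models

vgg = models.vgg19(pretrained=True).features.eval()  # convolution + pooling layers only
for p in vgg.parameters():
    p.requires_grad_(False)                           # frozen: used only to extract features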
21. Neural style
Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. "A neural algorithm of artistic style."
arXiv preprint arXiv:1508.06576 (2015).
Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. "Image style transfer using convolutional neural networks."
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
22. Intuition
• Convolutional neural networks trained well on large datasets (e.g. VGGNet) can serve as powerful feature extractors, somewhat like human brains
• Human painters are talented in combining content and style
23. Goal
• Given a content image p and a style image a
• Find an image x that
• Similar to p in content
• Similar to a in style
[Diagram: content image p, style image a, and the synthesized image x, with x ≈ p in content and x ≈ a in style]
24. Formulation
• Use VGG-19 (convolutional part) for feature extraction
• Two loss functions:
• Content loss: difference in content between x and p
• Style loss: difference in style between x and a
• Find an image x that minimizes the weighted sum of the content and style losses (sketched below)
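A hedged PyTorch sketch of this formulation. The layer indices, loss weight, optimizer, and iteration count are my assumptions (the Gatys et al. papers cited above give the exact settings, including conv4_2 for content and the conv*_1 layers for style), and real code would load and ImageNet-normalize actual images instead of the random placeholders below.

import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg19(pretrained=True).features.eval()
for param in vgg.parameters():
    param.requires_grad_(False)

CONTENT_LAYER = 21                  # conv4_2 in torchvision's indexing (assumed)
STYLE_LAYERS = [0, 5, 10, 19, 28]   # conv1_1 ... conv5_1 (assumed)

def features(img, layers):
    # Run img through vgg and collect activations at the requested indices.
    feats, out = {}, img
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i in layers:
            feats[i] = out
    return feats

def gram(f):
    # Gram matrix of a feature map: channel correlations, independent of location.
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

p = torch.rand(1, 3, 256, 256)       # content image (placeholder tensor)
a = torch.rand(1, 3, 256, 256)       # style image (placeholder tensor)
x = p.clone().requires_grad_(True)   # start the search from the content image

content_target = features(p, [CONTENT_LAYER])[CONTENT_LAYER].detach()
style_targets = {i: gram(f).detach() for i, f in features(a, STYLE_LAYERS).items()}

opt = torch.optim.Adam([x], lr=0.02)
for step in range(500):              # iterative optimization over the image itself
    opt.zero_grad()
    fx = features(x, [CONTENT_LAYER] + STYLE_LAYERS)
    content_loss = F.mse_loss(fx[CONTENT_LAYER], content_target)
    style_loss = sum(F.mse_loss(gram(fx[i]), style_targets[i]) for i in STYLE_LAYERS)
    loss = content_loss + 1e4 * style_loss   # weighted sum; the weight is an assumption
    loss.backward()
    opt.step()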
30. Drawbacks
• Iterative optimization
• Slow: 65s to render the 600 * 400 arch image with GTX 980M
• Power-hungry: not acceptable for mobile apps like Prisma
32. Intuition
• Style transfer is essentially an image transformation problem: image in, image out
• Generative CNNs have proved powerful in many other image transformation problems
33. Goal
• For a specific style image a, train a CNN that
• Accepts a content image p as input
• Outputs a synthesized image x whose content is similar to p and whose style is similar to a
34. Generative CNN
• Pre-trained VGGNet for formulating the loss function
• Style target: a fixed style image, e.g. Starry Night
• Input image & content target: images sampled from a large dataset
• Image Transform Net: a fully convolutional network (plus some fancy new tricks); a training sketch follows below
Johnson, Justin, Alexandre Alahi, and Li Fei-Fei. "Perceptual losses for real-time style transfer and super-resolution."
arXiv preprint arXiv:1603.08155 (2016).
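A hedged sketch of the training setup, following the same perceptual-loss idea: the small transform net below is a stand-in of mine, not Johnson et al.'s residual architecture, and the layer indices, loss weight, and random placeholder batches are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

# Stand-in for the Image Transform Net; the real one is deeper, with residual blocks.
transform_net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=9, padding=4),
)

vgg = models.vgg19(pretrained=True).features.eval()   # fixed loss network
for param in vgg.parameters():
    param.requires_grad_(False)

CONTENT_LAYER, STYLE_LAYERS = 21, [0, 5, 10, 19, 28]  # assumed indices, as before

def features(img, layers):
    feats, out = {}, img
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i in layers:
            feats[i] = out
    return feats

def gram(f):
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

a = torch.rand(1, 3, 256, 256)        # fixed style image, e.g. Starry Night (placeholder)
style_targets = {i: gram(f).detach() for i, f in features(a, STYLE_LAYERS).items()}

opt = torch.optim.Adam(transform_net.parameters(), lr=1e-3)
for step in range(40000):             # tens of thousands of training iterations
    p = torch.rand(1, 3, 256, 256)    # placeholder for a content image sampled from a large dataset
    x = transform_net(p)              # one forward pass produces the stylized image
    fx = features(x, [CONTENT_LAYER] + STYLE_LAYERS)
    content_target = features(p, [CONTENT_LAYER])[CONTENT_LAYER].detach()
    loss = F.mse_loss(fx[CONTENT_LAYER], content_target) \
         + 1e4 * sum(F.mse_loss(gram(fx[i]), style_targets[i]) for i in STYLE_LAYERS)
    opt.zero_grad()
    loss.backward()
    opt.step()
# At test time, stylizing a new photo is a single call: transform_net(photo)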
35. Details & Improvements
• Image size 256 * 256
• Trained on a large image dataset for 4h with GTX Titan X
• 200 ~ 1000X rendering speedup
38. Comparison
• Original neural style: hundreds of optimization iterations
• Generative CNN: tens of thousands of training iterations, then a single forward pass per synthesized image
• Prisma's offline mode probably uses similar technologies
39. Parallel work — Texture Network
Ulyanov, Dmitry, et al. "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images."
arXiv preprint arXiv:1603.03417 (2016).
40. Take home
• What makes up a CNN
• Convolution, pooling, fully connected layers...
• How neural style works
• CNN for feature extraction & iterative optimization
• Fast style transfer
• Train a generative CNN for a specific style
42. Some open course resources
• Introduction to Computer Vision, Udacity
• Deep Learning, Udacity
• Convolutional Neural Networks for Visual Recognition, Stanford CS231n
• Deep Learning for Natural Language Processing, Stanford
CS224d