


Few-shot unsupervised image-to-image translation

Published in: Technology

  1. Few-Shot Unsupervised Image-to-Image Translation. Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, Jan Kautz. Bingwen Hu, 2019-05-19.
  2. Problems. While unsupervised/unpaired image-to-image translation methods (e.g., Liu and Tuzel; Liu et al.; Zhu et al.; Huang et al.) have achieved remarkable success, they are still limited in two aspects. • First, they generally require seeing many images of the target class at training time, and they produce poor translation outputs if only a few target-class images are available. • Second, a model trained for one translation task cannot be repurposed for another task at test time; the learned model can only translate images between the two classes it was trained on.
  3. FUNIT. • The proposed FUNIT framework aims at mapping an image of a source class to an analogous image of an unseen target class by leveraging a few target-class images that are made available at test time. • At training time, the FUNIT model learns to translate images between any two classes sampled from a set of source classes. At test time, the model is presented with a few images of a target class it has never seen before. The model leverages these few example images to translate an input image of a source class into the target class.
  4. We assume the content image x belongs to object class c_x, while each of the K class images y_1, ..., y_K belongs to object class c_y. In general, K is a small number and c_x is different from c_y.
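The few-shot interface on this slide can be sketched in a few lines: a content code is extracted from x, the K class images are encoded and averaged into a single class code, and a decoder combines the two. The encoders and the decoder below are hypothetical toy stand-ins (simple means and a sum), not the paper's networks; only the call structure mirrors FUNIT.

```python
import numpy as np

rng = np.random.default_rng(0)

def content_encoder(x):
    # Hypothetical stand-in for the content encoder: a per-channel mean.
    return x.mean(axis=(0, 1))

def class_encoder(y):
    # Hypothetical stand-in for the class encoder: a per-channel mean.
    return y.mean(axis=(0, 1))

def translate(x, class_images):
    """Toy FUNIT-style translation call: the K class latents are averaged
    into one class code, which conditions the decoder (here, just a sum)."""
    z_x = content_encoder(x)  # content code from x (class c_x)
    z_y = np.mean([class_encoder(y) for y in class_images], axis=0)  # class code
    return z_x + z_y  # stand-in for the decoder

x = rng.random((8, 8, 3))                        # content image of class c_x
ys = [rng.random((8, 8, 3)) for _ in range(5)]   # K = 5 images of unseen class c_y
out = translate(x, ys)
print(out.shape)
```

Averaging the K class codes is what lets the same trained model accept any number of example images of a never-seen class at test time.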
  5. Learning. The training objective is a minimax game over three terms:

  min_D max_G  L_GAN(D, G) + λ_R L_R(G) + λ_F L_F(G)

  where L_GAN, L_R, and L_F are the GAN loss, the content image reconstruction loss, and the feature matching loss.

  GAN loss: L_GAN(G, D) = E_x[log D^{c_x}(x)] + E_{x, {y_1, ..., y_K}}[log(1 - D^{c_y}(x̄))], where x̄ = G(x, {y_1, ..., y_K}) is the translation output and D^c denotes the discriminator output for class c.

  Content reconstruction loss: L_R(G) = E_x[||x - G(x, {x})||_1], i.e., when the content image also serves as the (single) class image, the generator should reconstruct the input.

  Feature matching loss: L_F(G) = E_{x, {y_1, ..., y_K}}[||D_f(x̄) - (1/K) Σ_k D_f(y_k)||_1], where D_f denotes the discriminator with its last layer removed, used as a feature extractor.
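The three loss terms can be illustrated numerically. The sketch below uses flat numpy vectors as stand-ins for images and discriminator features, scalar probabilities as stand-ins for discriminator outputs, and illustrative weight values; none of these numbers come from the paper, only the shape of the combined objective does.

```python
import numpy as np

rng = np.random.default_rng(1)

def l1(a, b):
    # Mean L1 distance, used by both the reconstruction and feature matching terms.
    return np.abs(a - b).mean()

# Hypothetical toy tensors standing in for images and features:
x          = rng.random(16)        # content image
x_recon    = rng.random(16)        # G(x, {x}): should reconstruct x
d_real     = 0.9                   # D^{c_x}(x): score on a real image
d_fake     = 0.2                   # D^{c_y}(x_bar): score on a translation
feat_fake  = rng.random(8)         # D_f(x_bar)
feat_class = rng.random((5, 8))    # D_f(y_k) for K = 5 class images

# GAN loss in log form: reward real images scored high, translations scored low.
loss_gan = -np.log(d_real) - np.log(1.0 - d_fake)
# Content reconstruction loss: L1 between x and G(x, {x}).
loss_rec = l1(x, x_recon)
# Feature matching loss: L1 between D_f(x_bar) and the mean class feature.
loss_fm = l1(feat_fake, feat_class.mean(axis=0))

lambda_r, lambda_f = 0.1, 1.0      # illustrative weights, not the paper's values
total = loss_gan + lambda_r * loss_rec + lambda_f * loss_fm
print(total)
```

The feature matching term is what ties the translation to the few class examples: it pulls the discriminator features of x̄ toward the average features of the K provided target-class images.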