This document describes a method for image synthesis from reconfigurable layout and style, using a layout- and style-based neural network architecture called LostGAN. It introduces the task of image synthesis from layout, provides background on image-to-image translation, and explains how LostGAN allows reconfiguration of image style, object style, and layout. LostGAN embeds object labels, performs per-object-instance projections, predicts object masks, and computes spatially-adaptive normalization parameters to generate images conditioned on layout and style codes. The document concludes by discussing experiments in which an object classifier is trained on synthesized images to evaluate synthesis quality.
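To make the conditioning path concrete, below is a minimal, hypothetical PyTorch sketch of that idea: object labels are embedded, combined with per-object style codes, projected to per-instance parameters, turned into soft masks, and pasted into the layout boxes to form spatially-adaptive normalization maps. The class name `LayoutStyleNorm`, all dimensions, the box-pasting scheme, and the use of instance normalization are illustrative assumptions for this sketch, not the actual LostGAN (ISLA-Norm) implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LayoutStyleNorm(nn.Module):
    """Toy layout/style-conditioned normalization (hypothetical sketch):
    per-object (label, style) pairs become soft masks and spatial gamma/beta maps."""

    def __init__(self, num_classes=10, embed_dim=64, style_dim=32,
                 feat_ch=64, mask_size=16):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, embed_dim)            # embed object labels
        self.style_proj = nn.Linear(embed_dim + style_dim, feat_ch * 2)    # per-instance projection to (gamma, beta)
        self.mask_head = nn.Linear(embed_dim + style_dim, mask_size ** 2)  # predict a soft mask per object
        self.norm = nn.InstanceNorm2d(feat_ch, affine=False)               # assumption: instance norm stands in for ISLA-Norm
        self.mask_size = mask_size

    def forward(self, feat, labels, styles, boxes):
        # feat:   (B, C, H, W) feature map to modulate
        # labels: (B, O) object class indices
        # styles: (B, O, style_dim) per-object style codes
        # boxes:  (B, O, 4) normalized boxes (x, y, w, h) in [0, 1]
        B, C, H, W = feat.shape
        O = labels.shape[1]
        emb = torch.cat([self.label_embed(labels), styles], dim=-1)   # (B, O, embed_dim + style_dim)
        gamma_beta = self.style_proj(emb)                             # (B, O, 2C)
        masks = torch.sigmoid(self.mask_head(emb))                    # (B, O, mask_size**2)
        masks = masks.view(B, O, 1, self.mask_size, self.mask_size)

        gamma_map = torch.zeros(B, C, H, W, device=feat.device)
        beta_map = torch.zeros(B, C, H, W, device=feat.device)
        weight = torch.zeros(B, 1, H, W, device=feat.device)
        for b in range(B):
            for o in range(O):
                # Paste each object's mask into its box to weight gamma/beta spatially.
                x, y, w, h = boxes[b, o]
                x0, y0 = int(x * W), int(y * H)
                x1 = min(W, max(x0 + 1, int((x + w) * W)))
                y1 = min(H, max(y0 + 1, int((y + h) * H)))
                m = F.interpolate(masks[b:b + 1, o], size=(y1 - y0, x1 - x0),
                                  mode="bilinear", align_corners=False)[0]
                g, be = gamma_beta[b, o, :C], gamma_beta[b, o, C:]
                gamma_map[b, :, y0:y1, x0:x1] += g[:, None, None] * m
                beta_map[b, :, y0:y1, x0:x1] += be[:, None, None] * m
                weight[b, :, y0:y1, x0:x1] += m
        weight = weight.clamp(min=1e-6)
        gamma_map, beta_map = gamma_map / weight, beta_map / weight
        # Normalize the features, then modulate with the layout/style-derived maps.
        return self.norm(feat) * (1 + gamma_map) + beta_map


# Example usage with random inputs (shapes are illustrative).
norm = LayoutStyleNorm()
feat = torch.randn(2, 64, 32, 32)
labels = torch.randint(0, 10, (2, 3))
styles = torch.randn(2, 3, 32)
boxes = torch.rand(2, 3, 4) * 0.4        # small random (x, y, w, h) boxes inside the image
out = norm(feat, labels, styles, boxes)  # (2, 64, 32, 32)
```

Because the style codes enter only through the per-instance projection and the layout only through the pasted masks, either one can be resampled or edited independently, which is the reconfigurability the document emphasizes.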