This document summarizes an image-based virtual try-on network that synthesizes multiple garments onto a person, trained on unpaired data with generative adversarial networks (GANs). The network uses a geometric matching module to generate segmentation maps and DensePose to map image pixels onto the 3D body surface. An appearance generation network inspired by SwapNet, combined with online optimization, then accurately reproduces textures, logos, and embroidery. Evaluation uses the Fréchet Inception Distance (FID), Inception Score (IS), and human judgments, demonstrating that the network can realistically render full outfits from single images.
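As a concrete illustration of one of the evaluation metrics mentioned, FID models real and generated image features (typically Inception activations) as Gaussians and measures the Fréchet distance between them: FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^(1/2)). A minimal NumPy sketch follows; the feature matrices here are random stand-ins, since the actual Inception feature extraction is outside the scope of this summary:

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """FID between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    Uses the identity Tr((S1 S2)^1/2) = Tr((S1^1/2 S2 S1^1/2)^1/2),
    which keeps the matrix square roots on symmetric PSD matrices.
    """
    diff = mu1 - mu2
    # Symmetric PSD square root of sigma1 via eigendecomposition.
    vals, vecs = np.linalg.eigh(sigma1)
    s1_half = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    inner = s1_half @ sigma2 @ s1_half
    tr_covmean = np.sqrt(np.clip(np.linalg.eigvalsh(inner), 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * tr_covmean)

# Random features stand in for Inception activations of real and
# generated images (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 8))
fake_feats = rng.normal(loc=0.5, size=(500, 8))

mu_r, sig_r = real_feats.mean(axis=0), np.cov(real_feats, rowvar=False)
mu_f, sig_f = fake_feats.mean(axis=0), np.cov(fake_feats, rowvar=False)

fid = frechet_distance(mu_r, sig_r, mu_f, sig_f)
identical = frechet_distance(mu_r, sig_r, mu_r, sig_r)
print(fid, identical)  # distance of a distribution to itself is ~0
```

Lower FID indicates generated images whose feature statistics are closer to the real distribution; in the try-on setting, the "real" set would be photographs of people wearing garments and the "generated" set the synthesized try-on results.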