This document describes a deep learning approach to facial in-painting that generates plausible facial structure for missing pixels in face images. The method exploits the observable region of a face together with inferred attributes such as gender, ethnicity, and expression, and selects a guidance face with matching attributes to in-paint missing regions such as the mouth or eyes. In-painting is performed on intrinsic image layers rather than directly on RGB values, which makes the method robust to illumination differences between the input face and the guidance face. The pipeline combines face detection, alignment, representation, and attribute-based matching against datasets of real faces to fill missing or masked regions with realistic synthesized content.