[DL輪読会] Generative Models of Visually Grounded Imagination (Deep Learning JP)
The presentation covers a model for visually grounded semantic imagination: generating images from linguistic descriptions of concepts specified by attributes. The model is a variational autoencoder with three inference networks covering images, attributes, and the case where a modality is missing. The attribute inference distribution is represented as a product of Gaussian experts, which lets the model generate concepts never seen during training by composing attributes it has learned individually. The paper also introduces three criteria for evaluating such models: correctness, coverage, and compositionality.
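The product-of-experts combination of Gaussian posteriors has a closed form: precisions add, and the mean is the precision-weighted average of the expert means. A minimal sketch of this combination rule (function name and NumPy formulation are illustrative, not the paper's code):

```python
import numpy as np

def product_of_gaussian_experts(mus, sigmas_sq):
    """Combine per-expert Gaussians N(mu_i, sigma_i^2) into one Gaussian.

    For a product of Gaussian densities, the result is again Gaussian:
    total precision is the sum of expert precisions, and the mean is the
    precision-weighted average of expert means. A missing modality is
    handled by simply leaving its expert out of the product.
    """
    mus = np.asarray(mus, dtype=float)
    precisions = 1.0 / np.asarray(sigmas_sq, dtype=float)
    total_precision = precisions.sum(axis=0)
    mean = (precisions * mus).sum(axis=0) / total_precision
    return mean, 1.0 / total_precision

# Two equally confident experts that disagree on the mean:
mean, var = product_of_gaussian_experts([0.0, 2.0], [1.0, 1.0])
# The product sits halfway between them and is more confident
# (smaller variance) than either expert alone.
```

This precision-weighting is why the product sharpens as attributes are added: each observed attribute contributes its precision, narrowing the latent distribution toward the described concept.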