This paper proposes a method for multi-label, multi-class text classification using polylingual embeddings. Document embeddings are generated in each language by pooling word embeddings, and a shared cross-language representation is then learned with an autoencoder. Experiments on a dataset of 12,670 instances spanning 100 classes show that these distributed representations outperform bag-of-words models when labeled data is limited. Neighborhood-based classifiers such as k-NN outperform SVMs on the polylingual embeddings, likely because semantically similar documents lie close together in the embedding space, which favors distance-based methods. The authors conclude that further work is needed on composition functions for word representations and on efficiently combining them with bag-of-words models.
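The summary names two concrete steps: composing document embeddings by pooling word vectors, and learning a shared cross-language space with an autoencoder. The sketch below illustrates one plausible reading of that pipeline, assuming mean pooling as the composition function and a single-hidden-layer autoencoder over concatenated per-language document vectors; the paper's actual architecture, dimensions, and hyperparameters are not specified here, so every name and number in the code is an illustrative assumption.

```python
# Hedged sketch: mean-pooled document embeddings plus a small autoencoder
# that maps documents from two languages into a shared representation.
# All dimensions, names, and hyperparameters are assumptions for
# illustration, not the paper's actual configuration.
import numpy as np
import torch
import torch.nn as nn

EMB_DIM = 100  # assumed word-embedding dimensionality

def pool_document(word_vectors: np.ndarray) -> np.ndarray:
    """Compose one document embedding from its word vectors by mean
    pooling (one possible pooling method the summary refers to)."""
    return word_vectors.mean(axis=0)

class CrossLingualAutoencoder(nn.Module):
    """Tiny autoencoder: encodes the concatenation of a document's
    embeddings in two languages into a shared code, then reconstructs
    the input from that code."""
    def __init__(self, emb_dim: int = EMB_DIM, code_dim: int = 50):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2 * emb_dim, code_dim), nn.Tanh())
        self.decoder = nn.Linear(code_dim, 2 * emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    # Toy training loop over random stand-ins for pooled document pairs.
    rng = np.random.default_rng(0)
    docs_a = np.stack([pool_document(rng.normal(size=(20, EMB_DIM))) for _ in range(64)])
    docs_b = np.stack([pool_document(rng.normal(size=(20, EMB_DIM))) for _ in range(64)])
    pairs = torch.tensor(np.hstack([docs_a, docs_b]), dtype=torch.float32)

    model = CrossLingualAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(pairs), pairs)
        loss.backward()
        opt.step()

    # The encoder output is the polylingual document embedding; a
    # neighborhood-based classifier such as k-NN would consume these
    # vectors, matching the summary's observation that distance-based
    # methods suit the semantic structure of the space.
    shared = model.encoder(pairs).detach().numpy()
    print(shared.shape)  # (64, 50)
```

The choice of mean pooling and a reconstruction objective is only one instantiation; any pooling function and cross-lingual training signal that yields comparable documents in a shared space would fit the pipeline the summary describes.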