This document summarizes a research paper that proposes enriching word embedding models with subword information by incorporating character n-grams. The paper finds that a skip-gram model augmented with character n-grams outperforms traditional skip-gram and CBOW models on word similarity and analogy tasks, especially for morphologically rich languages. Experiments also show that the character n-gram approach is more robust to training-data size and learns better representations for rare and out-of-vocabulary words, since an unseen word's vector can be composed from the vectors of its constituent n-grams.
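To make the core idea concrete, here is a minimal sketch of how a word is decomposed into character n-grams and how an out-of-vocabulary word could be represented as the sum of its n-gram vectors. The function names, the boundary markers `<` and `>`, and the toy embedding table are illustrative assumptions, not the paper's actual implementation (which additionally hashes n-grams into a fixed-size table):

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Extract all character n-grams of length n_min..n_max.

    The word is wrapped in boundary markers so that prefixes and
    suffixes are distinguished from word-internal substrings.
    """
    token = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(token) - n + 1):
            grams.append(token[i:i + n])
    return grams


def compose_vector(word, ngram_vectors, dim, n_min=3, n_max=6):
    """Sum the vectors of a word's n-grams (hypothetical helper).

    Unknown n-grams are simply skipped, so even an unseen word
    gets a (possibly partial) representation.
    """
    vec = [0.0] * dim
    for g in char_ngrams(word, n_min, n_max):
        if g in ngram_vectors:
            for j, x in enumerate(ngram_vectors[g]):
                vec[j] += x
    return vec
```

For example, with `n_min = n_max = 3`, the word "where" decomposes into `['<wh', 'whe', 'her', 'ere', 're>']`; note that `her` as a trigram inside "where" is distinct from the standalone word `<her>`, which is one reason the boundary markers matter.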