This document discusses language modeling for Amharic, a morphologically rich language spoken in Ethiopia. It explores the use of subword units such as morphemes and roots to address the data sparsity that Amharic's productive morphology causes in word-based language models. The authors develop root-based and factored language models that represent each word as a bundle of linguistic features. In evaluation, the root-based models achieve lower perplexity than word-based models but do not improve speech recognition accuracy when used to rescore lattices. Future work is needed to integrate the root-based models more effectively while preserving word-level dependencies.
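To make the factored idea concrete, the following is a minimal sketch (not the authors' implementation) of a factored bigram model in which each word is a bundle of features and the model backs off from surface-word statistics to root statistics when a word bigram is unseen. The factor names (`word`, `root`, `pos`), the example Amharic forms, and the segmentations are illustrative assumptions; a real system would obtain the factors from a morphological analyzer.

```python
from collections import defaultdict

def factors(word, root, pos):
    # Represent a word as a bundle of linguistic features (a "factored" word).
    # The particular factors chosen here are illustrative, not the paper's.
    return {"word": word, "root": root, "pos": pos}

class FactoredBigram:
    """Toy bigram model that backs off from the word factor to the root
    factor when a word-level bigram has never been observed."""

    def __init__(self):
        self.word_bigrams = defaultdict(int)
        self.root_bigrams = defaultdict(int)
        self.word_unigrams = defaultdict(int)
        self.root_unigrams = defaultdict(int)

    def train(self, sentence):
        # sentence: list of factor bundles produced by factors().
        for prev, cur in zip(sentence, sentence[1:]):
            self.word_bigrams[(prev["word"], cur["word"])] += 1
            self.root_bigrams[(prev["root"], cur["root"])] += 1
            self.word_unigrams[prev["word"]] += 1
            self.root_unigrams[prev["root"]] += 1

    def prob(self, prev, cur):
        # Prefer the word-level estimate; on an unseen word context,
        # back off to the (much less sparse) root-level estimate.
        if self.word_bigrams[(prev["word"], cur["word"])]:
            return (self.word_bigrams[(prev["word"], cur["word"])]
                    / self.word_unigrams[prev["word"]])
        if self.root_unigrams[prev["root"]]:
            return (self.root_bigrams[(prev["root"], cur["root"])]
                    / self.root_unigrams[prev["root"]])
        return 0.0

model = FactoredBigram()
# Hypothetical training sentence: "sebere bet" with assumed roots/POS tags.
model.train([factors("sebere", "sbr", "V"), factors("bet", "bt", "N")])

# Seen word bigram: estimated at the word level.
p_word = model.prob(factors("sebere", "sbr", "V"), factors("bet", "bt", "N"))
# Unseen surface form "yisebral" sharing the root "sbr": root-level backoff
# still assigns it a nonzero probability, illustrating reduced sparsity.
p_root = model.prob(factors("yisebral", "sbr", "V"), factors("bet", "bt", "N"))
```

The point of the sketch is the backoff path: an inflected form never seen in training still receives probability mass through its root, which is how root-based models reduce sparsity at the cost of losing some word-level dependency information.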