This document discusses using locally-trained word embeddings for query expansion. It shows that training word embeddings on documents relevant to a query (a local model) yields a representation better suited to retrieval than training a single global model on the entire corpus. In experiments on three datasets, the local model improved average NDCG@10 scores over both global embeddings and no expansion, because it identifies query and expansion terms more closely related to the relevant documents. Future work could improve the effectiveness and efficiency of the local-model approach.
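To make the expansion step concrete, here is a minimal sketch of how expansion terms might be chosen once local embeddings exist. Everything here is a hypothetical illustration: the toy vocabulary, vectors, and the `expand_query` helper are assumptions, and in practice the embeddings would come from training (e.g., word2vec) on documents retrieved for the query rather than being hand-specified.

```python
import numpy as np

# Toy stand-in for "locally trained" embeddings. In a real system these
# vectors would be learned from the top-ranked documents for the query;
# the words and values below are invented purely for illustration.
embeddings = {
    "cancer":    np.array([0.9, 0.1, 0.0]),
    "tumor":     np.array([0.85, 0.15, 0.05]),
    "oncology":  np.array([0.8, 0.2, 0.1]),
    "treatment": np.array([0.7, 0.3, 0.2]),
    "weather":   np.array([0.0, 0.9, 0.1]),
}

def expand_query(query_terms, embeddings, k=2):
    """Return the k vocabulary terms closest (by cosine similarity)
    to the mean vector of the query terms."""
    q = np.mean([embeddings[t] for t in query_terms if t in embeddings], axis=0)

    def cos(v):
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))

    # Score every non-query word and keep the top k.
    scored = [(w, cos(v)) for w, v in embeddings.items() if w not in query_terms]
    scored.sort(key=lambda pair: -pair[1])
    return [w for w, _ in scored[:k]]

print(expand_query(["cancer"], embeddings))  # → ['tumor', 'oncology']
```

The same selection procedure works with either a global or a local embedding table; the document's point is that vectors trained locally place relevance-specific neighbors (here, "tumor" rather than "weather") closer to the query.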