V. Malykh presents an approach for creating robust word vectors for the Russian language that relies on neither a predefined vocabulary nor word co-occurrence matrices. The approach uses an LSTM neural network over character-level begin-middle-end (BME) representations of words to learn word embeddings. Experiments on Russian corpora for paraphrase identification and plagiarism detection show that the approach outperforms standard word2vec models, especially under noisy conditions with character substitutions, insertions, and deletions.
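
To make the architecture concrete, below is a minimal sketch of one plausible reading of the pipeline: each word is encoded as a begin/middle/end character decomposition (first character, unordered bag of middle characters, last character), and an LSTM over these vectors produces context-aware word embeddings. This is an illustration under stated assumptions, not the paper's exact implementation; all names (`BMEEncoder`-style classes, `emb_dim`, the Cyrillic alphabet choice) are hypothetical.

```python
# A sketch of a character-level BME word encoder feeding an LSTM.
# Assumption (not from the summary): BME = Begin/Middle/End, i.e. a word is
# split into its first character, an unordered bag of middle characters,
# and its last character. All names and dimensions are illustrative.
import torch
import torch.nn as nn

ALPHABET = "абвгдеёжзийклмнопрстуфхцчшщъыьэюя"
CHAR2IDX = {c: i for i, c in enumerate(ALPHABET)}
NUM_CHARS = len(ALPHABET)


def bme_vector(word: str) -> torch.Tensor:
    """Encode a word as [one-hot(begin) | bag(middle) | one-hot(end)]."""
    begin = torch.zeros(NUM_CHARS)
    middle = torch.zeros(NUM_CHARS)
    end = torch.zeros(NUM_CHARS)
    chars = [c for c in word.lower() if c in CHAR2IDX]
    if chars:
        begin[CHAR2IDX[chars[0]]] = 1.0
        end[CHAR2IDX[chars[-1]]] = 1.0
        for c in chars[1:-1]:
            middle[CHAR2IDX[c]] += 1.0  # bag of characters, order-free
    return torch.cat([begin, middle, end])  # shape: (3 * NUM_CHARS,)


class RobustWordLSTM(nn.Module):
    """LSTM over BME word vectors; its hidden states serve as word
    embeddings, requiring no fixed word-level vocabulary."""

    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(3 * NUM_CHARS, emb_dim)
        self.lstm = nn.LSTM(emb_dim, emb_dim, batch_first=True)

    def forward(self, sentence: list[str]) -> torch.Tensor:
        bme = torch.stack([bme_vector(w) for w in sentence]).unsqueeze(0)
        out, _ = self.lstm(torch.tanh(self.proj(bme)))
        return out.squeeze(0)  # one embedding per word in the sentence


# Usage: a misspelling ("превет" vs "привет") shares most of its BME
# vector with the correct form, so embeddings degrade gracefully under
# the character-level noise the experiments target.
model = RobustWordLSTM()
emb = model(["привет", "мир"])
print(emb.shape)  # torch.Size([2, 128])
```

Because the middle of the word is an unordered bag of characters, single-character substitutions, insertions, or deletions perturb the input vector only slightly, which is one way such a model can stay robust where a fixed-vocabulary word2vec lookup would fail on out-of-vocabulary misspellings.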