This master's thesis investigates methods for computing multiple sense embeddings per polysemous word. The author extends the word2vec model into a sense assignment model that simultaneously selects word senses in sentences and updates the corresponding sense embedding vectors, trained in an unsupervised fashion. The model is implemented in Spark to enable training on large corpora. Sense vectors are trained on a Wikipedia corpus of nearly 1 billion tokens and evaluated on word similarity tasks using the SCWS and WordSim-353 datasets. Hyperparameter tuning analyzes the effect of vector size, learning rate, and other parameters on model performance and training time. Nearest neighbors and example sentences are also examined to assess the quality of the computed sense embeddings.
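To make the sense-selection step concrete, the following is a minimal Scala sketch. It assumes, as in multi-sense skip-gram variants, that each occurrence of a word is assigned the sense whose vector is most similar (by cosine similarity) to the averaged embedding of its context words; the selected sense vector would then be updated by the usual word2vec gradient step. All names and the cosine criterion are illustrative assumptions, not the thesis's exact implementation.

```scala
// Hypothetical sketch of sense selection for one token occurrence.
// Assumption: the sense closest to the averaged context embedding wins.
object SenseSelection {
  type Vec = Array[Float]

  def dot(a: Vec, b: Vec): Float = {
    var s = 0f
    var i = 0
    while (i < a.length) { s += a(i) * b(i); i += 1 }
    s
  }

  def cosine(a: Vec, b: Vec): Float = {
    val d = math.sqrt(dot(a, a) * dot(b, b)).toFloat
    if (d == 0f) 0f else dot(a, b) / d
  }

  /** Average the context embeddings surrounding a token. */
  def contextMean(context: Seq[Vec], dim: Int): Vec = {
    val mean = new Array[Float](dim)
    for (v <- context; i <- 0 until dim) mean(i) += v(i) / context.length
    mean
  }

  /** Return the index of the sense vector closest to the context mean. */
  def selectSense(senses: Array[Vec], context: Seq[Vec]): Int = {
    val mean = contextMean(context, senses(0).length)
    senses.indices.maxBy(s => cosine(senses(s), mean))
  }
}
```

Coupling selection and training this way lets sense assignments and sense vectors improve jointly over passes through the corpus, which is what allows the approach to remain unsupervised.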