This document walks through building a next-word prediction model with deep learning methods such as LSTMs: it takes raw text as input, preprocesses and tokenizes the data, and then trains a model that predicts the next word from the preceding words. As a baseline, it also covers simple n-gram models, which estimate conditional word probabilities from occurrence counts in a text corpus; bigram and trigram models predict the next word from the previous one or two words in a sequence.
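To make the n-gram idea concrete, here is a minimal sketch of a bigram model: it counts how often each word follows another in a toy corpus (the corpus text and function name are illustrative, not from the original) and turns those counts into conditional probabilities.

```python
from collections import Counter, defaultdict

# Toy corpus; in practice this would be a much larger body of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram occurrences: counts[w1][w2] = times w2 follows w1.
counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def next_word_probs(word):
    """Conditional probabilities P(next word | word) from counts."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

# "the" is followed by cat (2x), mat (1x), fish (1x),
# so P(cat | the) = 2/4 = 0.5.
print(next_word_probs("the"))
```

A trigram model works the same way but conditions on the previous two words, which sharpens predictions at the cost of needing far more data to observe each two-word context often enough.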