Word embedding is a technique in natural language processing in which words are represented as dense vectors in a continuous vector space. These representations are designed to capture semantic and syntactic relationships between words based on their distributional properties in large text corpora. Two popular word embedding models are Word2Vec and GloVe. Word2Vec trains a shallow neural network to predict a word from its surrounding context (the CBOW variant) or the context from a word (the skip-gram variant), which places words with similar meanings close to each other in the vector space. GloVe is an unsupervised learning algorithm that fits word vectors to global word-word co-occurrence statistics gathered from a corpus.
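As a minimal illustration, the sketch below trains a small skip-gram Word2Vec model and queries nearest neighbors in the learned vector space. It assumes Python with gensim 4.x installed; the toy corpus and hyperparameters are placeholders chosen for brevity, not a recommended configuration.

```python
# Minimal sketch: training a skip-gram Word2Vec model on a toy corpus
# (assumes gensim 4.x; the corpus and hyperparameters are illustrative only).
from gensim.models import Word2Vec

# A tiny tokenized corpus; real models are trained on millions of sentences.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "cat", "sits", "on", "the", "mat"],
    ["the", "dog", "sits", "on", "the", "rug"],
]

model = Word2Vec(
    sentences=sentences,
    vector_size=50,   # dimensionality of the dense word vectors
    window=2,         # context window size
    min_count=1,      # keep every word in this toy corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=200,       # extra passes help on such a small corpus
    seed=42,
)

# Each word is now a dense 50-dimensional vector.
print(model.wv["king"].shape)            # (50,)

# Words that appear in similar contexts end up close in the vector space.
print(model.wv.most_similar("king", topn=3))
```

Pretrained GloVe vectors are distributed as plain-text files of word-vector pairs and can be loaded into the same kind of keyed lookup, so downstream code can treat the two models' outputs interchangeably.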