word2vec is a family of machine learning approaches in natural language processing (NLP) for learning word embeddings. It maps each word to a fixed-length vector and relies on one of two model architectures: continuous bag-of-words (CBOW), which predicts the target word from its surrounding context, and skip-gram, which predicts the context words from the target.
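
The two architectures can be illustrated with a minimal sketch using the gensim library (assuming gensim 4.x); the toy corpus and parameter values below are illustrative choices, not canonical settings:

```python
# A minimal sketch using gensim (assuming gensim 4.x); corpus and
# hyperparameters are illustrative, not canonical.
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "log"],
]

# sg=0 selects the CBOW architecture (predict target from context);
# sg=1 would select skip-gram (predict context from target).
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)

# Each word now maps to a fixed-length vector.
print(model.wv["cat"].shape)            # (50,)
print(model.wv.similarity("cat", "dog"))  # cosine similarity of two words
```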

word2vec works best on local context: it learns word similarities from the words that appear near each other in text, within a fixed-size sliding window (compare [[n-gram|n-grams]]). It excels at capturing semantic and syntactic relationships between words that occur close together, as the sketch below shows.
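
To make the notion of a local context window concrete, the following sketch (plain Python; the helper name and window size are hypothetical, not part of word2vec itself) slides a fixed-size window over a tokenized sentence and emits the (target, context) pairs that a skip-gram model would be trained on:

```python
# A minimal sketch of fixed-size context windows; the function name
# and window size are illustrative choices, not part of word2vec.
def context_pairs(tokens, window=2):
    """Yield (target, context_word) pairs within +/- `window` positions."""
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                yield target, tokens[j]

tokens = ["the", "cat", "sat", "on", "the", "mat"]
for target, context in context_pairs(tokens):
    print(target, "->", context)
# e.g. "sat" pairs only with "the", "cat", "on", "the":
# words outside the window never form a training pair.
```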

== See also ==