Word Embedding (word2vec)


There is something I don't fully understand about the role of the two submodels in word2vec. When training a word2vec model, where each word is mapped to a single real-valued vector, should we use both submodels, or just pick one of them?

We usually just use one of them. CBOW and skip-gram are alternative architectures, not components that get combined: you pick one, train it, and take the learned word vectors from that single model.
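A minimal sketch of how the two submodels differ, in pure Python with hypothetical helper names (the sentence and window size are made up for illustration): skip-gram predicts each context word from the center word, while CBOW predicts the center word from its bag of context words. Only the way training pairs are formed changes.

```python
def skipgram_pairs(tokens, window=2):
    # Skip-gram: one (center, context) pair per context word.
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

def cbow_pairs(tokens, window=2):
    # CBOW: one (context-bag, center) pair per position.
    pairs = []
    for i, center in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window), min(len(tokens), i + window + 1))
                   if j != i]
        pairs.append((context, center))
    return pairs

sentence = ["the", "quick", "brown", "fox"]
print(skipgram_pairs(sentence, window=1))
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#  ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
print(cbow_pairs(sentence, window=1))
# [(['quick'], 'the'), (['the', 'brown'], 'quick'),
#  (['quick', 'fox'], 'brown'), (['brown'], 'fox')]
```

If you use gensim (an assumption, not required by the original answer), this choice is a single flag: `Word2Vec(sentences, sg=1)` trains skip-gram, `sg=0` (the default) trains CBOW.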