Pretraining word2vec

What exactly does the nn.Embedding layer do? It seems to be just a linear layer. Also, the skip_gram() function looks different from the flow shown in the paper; I did not understand this part. I looked at some other skip_gram implementations, and this one matches the flow shown in the paper exactly:

nn.Embedding is essentially a lookup table: given token indices, it returns the corresponding rows of its weight matrix. With that in mind, the implementation is the same as the flow shown in the paper.
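A minimal sketch of this idea (assumed names and shapes, not the book's exact code): the embedding "layer" is a weight matrix, its forward pass is a row lookup, and the skip-gram forward pass is just two lookups followed by dot products.

```python
import numpy as np

# An embedding layer stores a weight matrix of shape (vocab_size, embed_dim);
# the forward pass is just a row lookup by token index.
vocab_size, embed_dim = 20, 4
rng = np.random.default_rng(0)
weight_v = rng.normal(size=(vocab_size, embed_dim))  # center-word table
weight_u = rng.normal(size=(vocab_size, embed_dim))  # context-word table

def embedding(table, token_ids):
    return table[token_ids]  # nn.Embedding-style lookup

# The lookup equals a linear layer applied to one-hot inputs, which is why
# an embedding "seems like just a linear layer" -- the lookup simply skips
# the wasteful one-hot matrix multiply.
ids = np.array([3, 7, 7])
assert np.allclose(np.eye(vocab_size)[ids] @ weight_v,
                   embedding(weight_v, ids))

# Skip-gram forward pass as in the paper: look up the center and context
# vectors, then score each (center, context) pair with a dot product.
center = np.array([3])            # one center word
contexts = np.array([5, 7, 11])   # its context (and negative) words
v = embedding(weight_v, center)   # (1, embed_dim)
u = embedding(weight_u, contexts) # (3, embed_dim)
scores = u @ v.T                  # (3, 1) dot-product logits
print(scores.shape)               # (3, 1)
```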

There is a typo in cell 2:

print(f'Parameter embedding_weight ({embed.weight.shape}, ' 'dtype={embed.weight.dtype})')

should be:

print(f'Parameter embedding_weight ({embed.weight.shape}, ' f'dtype={embed.weight.dtype})')

The second string literal is missing the f prefix.

Currently {embed.weight.dtype} sits inside a plain (non-f) string literal, so it is printed verbatim instead of being interpolated.
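A minimal illustration of the bug: only strings with the f prefix interpolate expressions in braces, so a plain string prints the braces literally.

```python
x = 3.14
print('value={x}')   # no f prefix: prints the braces literally -> value={x}
print(f'value={x}')  # f-string: interpolates the expression -> value=3.14
```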