Sequence-Aware Recommender Systems

https://d2l.ai/chapter_recommender-systems/seqrec.html

Why is the examples/sec value so small? I get 9.4 here vs. 8220.1 in the Factorization Machines chapter :roll_eyes:

Your examples/second is small because you are running this on CPU, not GPU. Try running it in Google Colab if you do not have access to a GPU. Also, that value is not the number of examples per epoch; it is the throughput at which examples pass through the model during training. It changes wall-clock time, which is why it won't significantly affect your final AUC and hit-rate results.
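To make that concrete: the epoch size below is a made-up illustration, while the two throughput numbers are the ones quoted in this thread. Both runs see exactly the same examples per epoch; only the time per epoch differs.

```python
# Hypothetical epoch size for illustration; the throughput numbers
# (9.4 on CPU, 8220.1 on GPU) are the ones quoted above.
examples_per_epoch = 100_000
cpu_rate = 9.4      # examples/sec on CPU
gpu_rate = 8220.1   # examples/sec on GPU

# Both devices process the same examples per epoch, so the model
# converges to the same AUC / hit-rate; only wall-clock time differs.
cpu_seconds = examples_per_epoch / cpu_rate
gpu_seconds = examples_per_epoch / gpu_rate
print(f"CPU epoch: {cpu_seconds:.0f}s, GPU epoch: {gpu_seconds:.0f}s")
```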


Hi, the paper doesn't say that W' in the output layer comes from another embedding matrix; in the paper it is purely a fully connected layer. In this chapter, however, the v' in the output layer comes from an embedding matrix. Could you elaborate on that? Many thanks!
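For what it's worth, the two views describe the same computation: a fully connected output layer whose weight matrix W' has one row per item is identical to taking an inner product of the hidden state with a per-item output embedding v'_i (plus a bias). A toy numpy sketch of that equivalence (all sizes and values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 5                      # toy hidden size and item count

z = rng.normal(size=d)           # hidden representation from the model
W = rng.normal(size=(n, d))      # "fully connected" output weights W'
b = rng.normal(size=n)           # output bias

# Paper's view: one dense layer scoring all n items at once.
scores_fc = W @ z + b

# Chapter's view: row i of W is the output embedding v'_i of item i,
# and each item's score is an inner product with z.
scores_emb = np.array([W[i] @ z for i in range(n)]) + b

assert np.allclose(scores_fc, scores_emb)
```

So whether you call the parameters a dense layer or an item embedding matrix is a matter of interpretation; framing them as embeddings also lets you score only a sampled subset of items during training instead of all n.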