Deep Factorization Machines

https://d2l.ai/chapter_recommender-systems/deepfm.html


Why is my CPU quicker (more examples per second) than your two GPUs?



Hi @StevenJokes, great question! It is always worth pointing out that a CPU may outperform a GPU on small networks with low-dimensional feature inputs.

So you may wonder how to define "a small network"? A simple example: a 100-unit MLP trained on 10 input features may count as a small network, since only 100 x 10 = 1000 weights need to be trained. However, if the input feature dimension increases to 1 million, then the network may need the extra compute resources a GPU provides.
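
One way to check this yourself is to time the forward pass at different input sizes. Below is a minimal sketch assuming MXNet/Gluon (the framework used in this chapter); `throughput` is an illustrative helper, not part of d2l, and the exact numbers will depend on your hardware.

```python
import time
from mxnet import cpu, gpu, init, nd
from mxnet.gluon import nn

def throughput(ctx, num_features, batch_size=256, steps=100):
    """Measure forward-pass examples/second for a 100-unit Dense layer."""
    net = nn.Dense(100)
    net.initialize(init.Normal(0.01), ctx=ctx)
    X = nd.random.normal(shape=(batch_size, num_features), ctx=ctx)
    net(X)            # warm-up; triggers lazy shape inference
    nd.waitall()
    start = time.time()
    for _ in range(steps):
        net(X)
    nd.waitall()      # MXNet is asynchronous; wait for all work to finish
    return steps * batch_size / (time.time() - start)

print('CPU,  10 features:', throughput(cpu(), 10))
print('CPU, 1e6 features:', throughput(cpu(), 1000000))
# On a machine with a GPU, compare:
# print('GPU,  10 features:', throughput(gpu(0), 10))
# print('GPU, 1e6 features:', throughput(gpu(0), 1000000))
```

With 10 features the kernel-launch and transfer overhead tends to dominate on the GPU, while at 1 million features the GPU's parallelism usually wins.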

Sorry, I missed a number…
The GPU is quicker after all.

I found an issue. When I initialized the parameters your way, I got a result similar to yours. But if I did nothing to the network's parameters, the accuracy was much higher. Also, for a binary classification
problem, shouldn't the accuracy be at least 0.5?

I don't know how doing nothing affects the parameters.
Are all the parameters zero at the start?

OK! I found it!

MXNet will use its default random initialization: each weight parameter is sampled from a uniform distribution over [-0.07, 0.07], and all bias parameters are set to zero.
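
For reference, here is a quick way to see that default in action (a sketch assuming MXNet/Gluon, where the default initializer is `Uniform(0.07)`):

```python
from mxnet.gluon import nn

net = nn.Dense(3, in_units=4)
net.initialize()            # no initializer given: MXNet's default applies
print(net.weight.data())    # uniform samples, each in [-0.07, 0.07]
print(net.bias.data())      # all zeros
```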
This isn't a binary classification question. :neutral_face: