TensorFlow consistency in d2l

Just a note: I am trying to teach this class without hiding all of the functions in the d2l package (e.g., in the tensorflow.py file), so that my students can get a better grasp and don't have all of the details hidden from them. However, I ran across this (in the tensorflow.py file), which seems strange:

```python
d2l.reshape
tf.reshape
reshape
```

But `reshape` and `d2l.reshape` just call `tf.reshape`. Am I missing something here, or is there simply a naming-convention inconsistency with `tf.reshape`?

@number9 Most, if not all, objects in the d2l package are first introduced with full implementations in dedicated chapters; subsequent chapters then use them without re-declaring them where needed. So I wouldn’t say that they are “hidden”.
As for the `reshape` function, the d2l wrapper makes the usage uniform across all the backends, so you don’t have to remember whether it was `x.reshape(shape)`, `tf.reshape(x, shape)`, or something else - just follow the d2l usage. The same goes for other d2l objects such as `Module`, `batch_matmul`, and others.
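To illustrate the idea, here is a minimal sketch of such a backend-agnostic alias. The helper name `_np_reshape` is made up for illustration and is not the actual d2l source; for the TensorFlow backend the alias in tensorflow.py amounts to `reshape = tf.reshape`. NumPy is used here only so the sketch runs on its own:

```python
import numpy as np

# Hypothetical backend-specific implementation (illustrative name).
def _np_reshape(x, shape):
    return np.reshape(x, shape)

# The package-level alias: callers always write reshape(x, shape),
# regardless of which backend supplies the implementation.
reshape = _np_reshape

x = np.arange(6)
y = reshape(x, (2, 3))
print(y.shape)  # (2, 3)
```

Each backend file binds the same public name to its own implementation, which is why user code looks identical across backends.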

OK, so for the MLP, what is the easiest way to print out the loss in each epoch, as we did in the previous linear regression, so we can see progress? I suppose what I am asking is: I could of course shove a print statement into `train_ch3`, but is there a cleaner way to show that training is working, such as printing the loss after each epoch?
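One common pattern is simply to print (or record) the mean loss at the end of each epoch inside the training loop. Here is a hedged, self-contained sketch in NumPy rather than the book's actual `train_ch3`; the data and model are made up for illustration, but a `print` at the same point in the epoch loop of `train_ch3` achieves the same effect:

```python
import numpy as np

# Toy linear-regression data (illustrative, not from the book).
rng = np.random.default_rng(0)
true_w, true_b = 2.0, -1.0
X = rng.normal(size=(100, 1))
y = true_w * X[:, 0] + true_b + 0.01 * rng.normal(size=100)

w, b, lr = 0.0, 0.0, 0.1
losses = []
for epoch in range(5):
    pred = w * X[:, 0] + b
    err = pred - y
    loss = np.mean(err ** 2)           # mean squared error
    losses.append(loss)
    # Gradient-descent update on the MSE.
    w -= lr * 2 * np.mean(err * X[:, 0])
    b -= lr * 2 * np.mean(err)
    # The key line: report progress once per epoch.
    print(f'epoch {epoch + 1}, loss {loss:.4f}')
```

The per-epoch print makes decreasing loss visible without changing the training logic; the same one-liner can be dropped into the epoch loop of any trainer.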