The loss was lower for the net trained with train_with_data_aug(no_aug, no_aug), but at the same time the test accuracy was worse. With train_augs the loss was a bit higher, but the test accuracy was better. I would say the higher training loss with train_augs means the model didn’t overfit and generalized better (hence the better test accuracy).
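For context, a rough sketch of what I ran (train_with_data_aug is the book’s helper from this section, assumed to be defined in the notebook; no_aug is just my name for a ToTensor-only pipeline):

    import torchvision

    # Augmented training pipeline: random horizontal flip, then convert to tensor
    train_augs = torchvision.transforms.Compose([
        torchvision.transforms.RandomHorizontalFlip(),
        torchvision.transforms.ToTensor()])

    # No augmentation: convert to tensor only
    no_aug = torchvision.transforms.Compose([
        torchvision.transforms.ToTensor()])

    # Run 1: no augmentation -> lower training loss, worse test accuracy
    train_with_data_aug(no_aug, no_aug)
    # Run 2: augmentation on the training set only -> higher training loss, better test accuracy
    train_with_data_aug(train_augs, no_aug)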
In the “Flipping and Cropping” section, the function names in the text are wrong for the PyTorch version: you are writing the MXNet-version function names.
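For example, as far as I can tell the PyTorch code cells in that section use the torchvision names below, while the text keeps referring to the MXNet ones:

    import torchvision

    # torchvision (PyTorch) names used in the code cells
    flip_lr = torchvision.transforms.RandomHorizontalFlip()  # text calls it RandomFlipLeftRight (MXNet)
    flip_tb = torchvision.transforms.RandomVerticalFlip()    # text calls it RandomFlipTopBottom (MXNet)
    shape_aug = torchvision.transforms.RandomResizedCrop(
        (200, 200), scale=(0.1, 1), ratio=(0.5, 2))          # same class name in both frameworks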
def train_ch13(net, train_iter, test_iter, loss, trainer, num_epochs,
               devices=d2l.try_all_gpus()):
    net = nn.DataParallel(net, device_ids=devices).to(devices[0])

Can anybody tell me why nn.DataParallel() here is followed by .to(devices[0])?
Don’t we train the model on multiple GPUs in the function train_ch13()?
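For reference, a minimal standalone sketch of the pattern I am asking about (toy model, purely illustrative, not the book’s code):

    import torch
    from torch import nn

    devices = [torch.device(f'cuda:{i}') for i in range(torch.cuda.device_count())]

    # Toy network, just to show the wrapping pattern
    net = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))

    if devices:
        # DataParallel keeps the parameters on devices[0] and replicates the module
        # onto the other device_ids during forward(), scattering the input batch
        net = nn.DataParallel(net, device_ids=devices).to(devices[0])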