I want to know why we have not used with torch.no_grad(): while calculating the loss in the cross_entropy() function or while evaluating the model, as we did in the linear regression from scratch. Why are we not using it in this chapter?
def evaluate_accuracy(net, data_iter):  #@save
    """Compute the accuracy for a model on a dataset."""
    if isinstance(net, torch.nn.Module):
        net.eval()  # Set the model to evaluation mode
    metric = Accumulator(2)  # No. of correct predictions, no. of predictions
    for _, (X, y) in enumerate(data_iter):
        metric.add(accuracy(net(X), y), d2l.size(y))
    return metric[0] / metric[1]
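For what it's worth, evaluation needs no gradient tracking, so the loop can be wrapped in torch.no_grad(). A minimal sketch (using a plain nn.Module and hand-rolled counters instead of the chapter's Accumulator/accuracy helpers):

```python
import torch
from torch import nn

def evaluate_accuracy_no_grad(net, data_iter):
    """Compute accuracy without building the autograd graph."""
    if isinstance(net, nn.Module):
        net.eval()  # evaluation mode (affects dropout, batch norm)
    correct, total = 0.0, 0
    with torch.no_grad():  # no gradient bookkeeping during evaluation
        for X, y in data_iter:
            preds = net(X).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
    return correct / total

# Tiny usage example with random data standing in for a real data iterator
net = nn.Linear(4, 3)
X = torch.randn(10, 4)
y = torch.randint(0, 3, (10,))
acc = evaluate_accuracy_no_grad(net, [(X, y)])
```

Skipping gradient tracking here saves memory and time; the results are identical because accuracy only needs forward passes.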
Hi all, has anyone encountered the issue – “The kernel appears to have died. It will restart automatically.” – when running train_ch3? I checked that it is the code below in train_ch3 that causes the issue:
@Gavin
Have you googled it?
You can try pip uninstall numpy, then pip install -U numpy, as suggested in https://www.youtube.com/watch?reload=9&v=RhpkTBvb-WU
If you have any other questions, try to solve them by googling first.
If you still have the problem, then post all your code or give me a GitHub URL, along with more information about your environment.
In train_epoch_ch3(), I wonder why we can call l.backward() without passing a tensor as an argument even though l is non-scalar, and why l.sum() is called in the else block before .backward().
def train_epoch_ch3(net, train_iter, loss, updater):  #@save
    """The training loop defined in Chapter 3."""
    # Set the model to training mode
    if isinstance(net, torch.nn.Module):
        net.train()
    # Sum of training loss, sum of training accuracy, no. of examples
    metric = Accumulator(3)
    for X, y in train_iter:
        # Compute gradients and update parameters
        y_hat = net(X)
        l = loss(y_hat, y)
        if isinstance(updater, torch.optim.Optimizer):
            updater.zero_grad()
            l.backward()
            updater.step()
            metric.add(float(l) * len(y), accuracy(y_hat, y),
                       y.size().numel())
        else:
            l.sum().backward()
            updater(X.shape[0])
            metric.add(float(l.sum()), accuracy(y_hat, y), y.numel())
    # Return training loss and training accuracy
    return metric[0] / metric[2], metric[1] / metric[2]
Thanks for your reply, but I still don’t get it. I think the .backward() method has a default argument of torch.tensor(1.) for a scalar, but when it is called on a non-scalar the argument is required – am I right? What’s the difference between the built-in modules and custom ones?
The built-in loss criterion in PyTorch used here automatically reduces the loss to a scalar via the argument reduction="mean"/"sum" (the default is "mean"). You can check this in the PyTorch docs. For our custom loss we need to achieve the same reduction, hence the l.sum() before calling backward().
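To see the difference concretely, here is a small sketch comparing the built-in criterion’s default reduction with an unreduced, per-example loss (the tensors are made up for illustration):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3, requires_grad=True)
targets = torch.tensor([0, 2, 1, 1])

# Built-in criterion: reduction='mean' by default, so the result is a scalar
l_mean = F.cross_entropy(logits, targets)  # shape: ()
l_mean.backward()                          # fine: scalar loss

# With reduction='none' we get one loss per example, like the custom loss
logits2 = logits.detach().clone().requires_grad_(True)
l_vec = F.cross_entropy(logits2, targets, reduction='none')  # shape: (4,)
# l_vec.backward() would raise "grad can be implicitly created only for
# scalar outputs", hence the l.sum() before backward()
l_vec.sum().backward()
```

Note that sum- and mean-reduction only differ by a constant factor (the batch size), so the gradients agree up to that factor.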
It’s great fun to study with this material – quite amazing, a lot of good stuff.
I wanted to ask:
3.6.9
Solution 3.)
How to overcome the problem of overflow for the softmax probabilities: since we’re dealing with the exponential function, we normalize everything. I mean take z_i = (x_i - mu(x)) / std(x) and plug it into the exponential function, so we can compute exp(z_i) without overflow.
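For reference, the trick most commonly used in practice is a shift rather than standardization: subtract the per-row maximum before exponentiating. Softmax is invariant under this shift, and it bounds every argument of exp above by 0, so nothing overflows. A minimal sketch:

```python
import torch

def stable_softmax(X):
    """Softmax with the per-row max subtracted to avoid overflow in exp."""
    X_max = X.max(dim=1, keepdim=True).values
    X_exp = torch.exp(X - X_max)  # all arguments are <= 0, so no overflow
    return X_exp / X_exp.sum(dim=1, keepdim=True)

X = torch.tensor([[1000.0, 1001.0], [0.0, 1.0]])
naive = torch.exp(X) / torch.exp(X).sum(dim=1, keepdim=True)  # overflows to nan
stable = stable_softmax(X)  # finite, each row sums to 1
```

The shift cancels in the ratio, so stable_softmax returns exactly the same probabilities wherever the naive version does not overflow.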
@goldpiggy, in the accuracy function, why don’t we simply use the mean instead of using the sum and dividing by the length later?
I mean use tf.math.reduce_mean(cmp.type(y.dtype))
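One reason for summing and dividing only at the end: batches can have different sizes (the last one is often smaller), and averaging per-batch means would weight every batch equally instead of every example. A small sketch of the discrepancy, with made-up correct/total counts:

```python
# Two batches: 8 of 10 correct, then 1 of 2 correct (a small last batch)
batches = [(8, 10), (1, 2)]

# Accumulate sums, divide once at the end (what evaluate_accuracy does)
correct = sum(c for c, n in batches)
total = sum(n for c, n in batches)
acc_sum = correct / total  # 9 / 12 = 0.75

# Averaging per-batch means over-weights the tiny last batch
acc_mean_of_means = sum(c / n for c, n in batches) / len(batches)  # 0.65
```

Within a single batch the two are equivalent; the difference only shows up when accumulating over batches of unequal size.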
def train_epoch_ch3(net, train_iter, loss, updater):  #@save
    """The training loop defined in Chapter 3."""
    # Set the model to training mode
    if isinstance(net, torch.nn.Module):
        net.train()
    # Sum of training loss, sum of training accuracy, no. of examples
    metric = Accumulator(3)
    for X, y in train_iter:
        # Compute gradients and update parameters
        y_hat = net(X)
        l = loss(y_hat, y)
        if isinstance(updater, torch.optim.Optimizer):
            # Using PyTorch in-built optimizer & loss criterion
            updater.zero_grad()
            l.backward()
            updater.step()
            metric.add(float(l) * len(y), accuracy(y_hat, y),
                       y.size().numel())
        else:
            # Using custom built optimizer & loss criterion
            l.sum().backward()
            updater(X.shape[0])
            metric.add(float(l.sum()), accuracy(y_hat, y), y.numel())
    # Return training loss and training accuracy
    return metric[0] / metric[2], metric[1] / metric[2]
In the above code, the custom-updater branch looks fine. But it will raise an error if we use the built-in optimizer together with a custom (unreduced) loss, since the l.backward() call in that branch assumes the loss has already been reduced to a scalar by a built-in loss function. Please update the block for better clarification.
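To illustrate the mismatch: with a per-example (unreduced) loss, the built-in-optimizer branch calls l.backward() on a vector, which PyTorch rejects. A hedged sketch of both the failure and the fix (the model and data here are made up):

```python
import torch
from torch import nn
import torch.nn.functional as F

net = nn.Linear(4, 3)
updater = torch.optim.SGD(net.parameters(), lr=0.1)
X, y = torch.randn(5, 4), torch.randint(0, 3, (5,))

# A custom-style loss that returns one value per example, like the
# chapter's cross_entropy, rather than a reduced scalar
l = F.cross_entropy(net(X), y, reduction='none')

updater.zero_grad()
try:
    l.backward()        # vector loss: raises RuntimeError
except RuntimeError:
    l.sum().backward()  # reduce first, as the custom branch does
updater.step()
```

So the isinstance(updater, torch.optim.Optimizer) check is really a proxy for "the loss is already a scalar"; calling l.sum().backward() (or l.mean().backward()) would work in both branches.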