The built-in loss criterion in PyTorch used here automatically reduces the loss to a scalar value via the reduction argument, reduction='mean' or 'sum' (the default is 'mean'). You can check this out here. For our custom loss we need to achieve the same reduction, and hence we call l.sum() before calling backward().
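For illustration, a minimal sketch of that difference (the tensors below are made up just to show the shapes):

import torch

loss_mean = torch.nn.CrossEntropyLoss()                   # reduction='mean' (default) -> scalar
loss_none = torch.nn.CrossEntropyLoss(reduction='none')   # per-example losses, like the custom loss

y_hat = torch.randn(4, 10, requires_grad=True)
y = torch.tensor([1, 0, 3, 2])

l_scalar = loss_mean(y_hat, y)   # 0-dim tensor: l_scalar.backward() works directly
l_vector = loss_none(y_hat, y)   # shape (4,): needs l_vector.sum() (or .mean()) before backward()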
It's a lot of fun to study with this material. It's quite amazing; there's a lot of good stuff.
I wanted to ask:
3.6.9
Solution 3.)
How to overcome the problem of overflow for the softmax probabilities: since we are dealing with an exponential function, we normalize the inputs first. I mean, take z_i = (x_i - mu(x)) / std(x) and plug z_i into the exponential function, so we can compute exp(z_i) without overflow.
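For reference, here is a minimal sketch of the shift-based variant of this normalization: softmax is unchanged when the same constant is subtracted from every logit in a row, so subtracting the row-wise maximum keeps every exponent at or below zero and rules out overflow (the function name below is just illustrative).

import torch

def stable_softmax(X):
    # subtracting the row-wise max leaves softmax unchanged (shift invariance)
    # but keeps all exponents <= 0, so exp() cannot overflow
    X = X - X.max(dim=1, keepdim=True).values
    X_exp = torch.exp(X)
    return X_exp / X_exp.sum(dim=1, keepdim=True)

stable_softmax(torch.tensor([[50.0, 100.0, 1000.0]]))   # no inf/nan from overflow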
@goldpiggy, in the accuracy function, why don't we simply use the mean instead of taking the sum and dividing by the length later?
I mean, use tf.math.reduce_mean(cmp.type(y.dtype)).
def train_epoch_ch3(net, train_iter, loss, updater):  #@save
    """The training loop defined in Chapter 3."""
    # Set the model to training mode
    if isinstance(net, torch.nn.Module):
        net.train()
    # Sum of training loss, sum of training accuracy, no. of examples
    metric = Accumulator(3)
    for X, y in train_iter:
        # Compute gradients and update parameters
        y_hat = net(X)
        l = loss(y_hat, y)
        if isinstance(updater, torch.optim.Optimizer):
            # Using PyTorch in-built optimizer & loss criterion
            updater.zero_grad()
            l.backward()
            updater.step()
            metric.add(float(l) * len(y), accuracy(y_hat, y),
                       y.size().numel())
        else:
            # Using custom built optimizer & loss criterion
            l.sum().backward()
            updater(X.shape[0])
            metric.add(float(l.sum()), accuracy(y_hat, y), y.numel())
    # Return training loss and training accuracy
    return metric[0] / metric[2], metric[1] / metric[2]
The code above works fine with the custom updater, but it will raise an error if we use the built-in optimizer together with the custom loss function: that branch calls l.backward() directly, which assumes the built-in criterion has already reduced the loss to a scalar. Please update the block for better clarification.
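To spell out the two pairings the branches above assume, here is a hedged sketch; the layer sizes, learning rate, and names such as scratch_updater are illustrative placeholders, not the book's exact code:

import torch

# Built-in pair: an nn.Module net, the mean-reduced (scalar) built-in loss,
# and a torch.optim optimizer -- this is what the first branch expects.
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
loss = torch.nn.CrossEntropyLoss()                    # reduction='mean' -> l is a scalar
updater = torch.optim.SGD(net.parameters(), lr=0.1)

# From-scratch pair: a per-example loss vector and a custom updater,
# which is why the second branch calls l.sum().backward().
W = torch.normal(0, 0.01, size=(784, 10), requires_grad=True)
b = torch.zeros(10, requires_grad=True)
lr = 0.1

def cross_entropy(y_hat, y):
    return -torch.log(y_hat[range(len(y_hat)), y])

def scratch_updater(batch_size):
    with torch.no_grad():
        for param in (W, b):
            param -= lr * param.grad / batch_size
            param.grad.zero_()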
In this section, we directly implemented the softmax function based on the mathematical
definition of the softmax operation. What problems might this cause? Hint: try to calculate
the size of exp(50).
The number is too big: exp(50) is roughly 5.2 × 10^21, and for somewhat larger logits (around 89 in single precision) exp overflows to inf, after which the normalization step produces nan.
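A quick check of this in PyTorch (a minimal sketch):

import torch

torch.exp(torch.tensor([50.0, 100.0]))
# tensor([5.1847e+21, inf]) -- exp(100) already overflows single precision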
The function cross_entropy in this section was implemented according to the definition of
the cross-entropy loss function. What could be the problem with this implementation? Hint:
consider the domain of the logarithm.
The domain of the logarithm is the strictly positive numbers. If a predicted probability underflows to 0 (which easily happens after exponentiating large negative logits), log(0) evaluates to -inf, and the loss and its gradients become useless.
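A quick illustration of that failure mode (a minimal sketch):

import torch

p = torch.exp(torch.tensor(-150.0))   # underflows to 0.0 in single precision
torch.log(p)                          # tensor(-inf)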
What solutions can you think of to fix the two problems above?
Normalise the logits first: subtract the row-wise maximum before exponentiating (softmax is unchanged by this shift), and compute the log-probabilities with the LogSumExp trick so that the logarithm is never applied to an underflowed zero.
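A minimal sketch of that combined fix, assuming 2-D logits and integer class labels (the function name is just illustrative):

import torch

def stable_cross_entropy(logits, y):
    # log softmax via LogSumExp:
    #   log(exp(o_j) / sum_k exp(o_k)) = (o_j - max_o) - log(sum_k exp(o_k - max_o))
    # so we never exponentiate large values or take the log of an underflowed zero
    shifted = logits - logits.max(dim=1, keepdim=True).values
    log_probs = shifted - shifted.exp().sum(dim=1, keepdim=True).log()
    return -log_probs[range(len(y)), y]

stable_cross_entropy(torch.tensor([[1000.0, 0.0]]), torch.tensor([0]))  # finite, where the naive version gives nan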
Is it always a good idea to return the most likely label? For example, would you do this for
medical diagnosis?
No. In medical diagnosis we need high confidence in our predictions, so we should only return the most likely label when its predicted probability is above a confidence threshold.
Assume that we want to use softmax regression to predict the next word based on some
features. What are some problems that might arise from a large vocabulary?
If the vocabulary is large, the one-hot encoding of the labels becomes very long and sparse, and we cannot cheaply apply softmax over all possible values of y, whose number equals the size of the vocabulary.
y_hat[0] returns the first row [0.1, 0.3, 0.6], and y_hat[1] returns the second. We want the probability assigned to the true label of each row, i.e. 0.1 and 0.5, and those true labels are exactly y = [0, 2]. So y_hat[[0, 1], [0, 2]] returns the first probability of the first row and the third probability of the second row.
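A minimal sketch of that indexing, using the values from the book's example:

import torch

y = torch.tensor([0, 2])
y_hat = torch.tensor([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y_hat[[0, 1], y]   # tensor([0.1000, 0.5000])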
@anirudh Why do we always check for the instance type (torch.nn.Module) of net? For the training and evaluation method we can just directly use net.train() and net.eval() respectively. Am I missing something here? Thanks!
Hi @Debanjan_Das,
We also have a few models that are built from scratch, and those models do not have the train or eval attributes since they do not subclass nn.Module. The check is just a way to reuse the saved functions, making them compatible with both the from-scratch and the concise versions of the PyTorch code.
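To make that concrete, a small sketch (the names are just placeholders):

import torch

def scratch_net(X):
    # a from-scratch model is just a Python function, not an nn.Module
    return X

isinstance(scratch_net, torch.nn.Module)             # False -> no .train()/.eval() to call
isinstance(torch.nn.Linear(4, 2), torch.nn.Module)   # True  -> net.train() is available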
Hello. First of all, thanks for your work. I have compared the English version of this textbook with the Chinese version, and I find that there are many differences between the two versions' code. What I want to verify is whether the code in the English version has been changed, because it always uses classes for encapsulation.