Hi @machine_machine, please check my response at Full pytorch code book for d2l.ai [help]. Thanks for your patience.
While creating the parameters we multiply the tensor by 0.01. Can anyone explain why we do so?
Also, when initializing weights, is it required that they lie between 0 and 1?
Ans 1. According to the PyTorch docs, torch.randn samples from a standard normal distribution (mean 0, variance 1). We multiply the tensor by 0.01 to scale the parameters down to small values.
Ans 2. Initializing with small numbers is required for stable training; the range does not have to be 0 to 1 in particular, and we could draw from a distribution over -1 to 1 as well. The main thing to keep in mind for stable training of deep neural nets is to set the parameters in a way that avoids both exploding and vanishing gradients.
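A quick numerical check of the scaling in Ans 1 (sketched here with NumPy as a stand-in for torch.randn; the idea is identical): multiplying a standard-normal sample by 0.01 shrinks its standard deviation from roughly 1 to roughly 0.01.

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for torch.randn(num_inputs, num_hiddens)
w = rng.standard_normal((784, 256))

print(f"before scaling: std ~ {w.std():.3f}")           # close to 1
print(f"after  scaling: std ~ {(w * 0.01).std():.4f}")  # close to 0.01
```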
My answers
Exercises
- Change the value of the hyperparameter num_hiddens and see how this hyperparameter influences your results. Determine the best value of this hyperparameter, keeping all others constant.
- Try adding an additional hidden layer to see how it affects the results.
# adding an additional hidden layer (two hidden layers instead of one)
W1 = nn.Parameter(torch.randn(num_inputs, 128) * 0.01, requires_grad=True)
b1 = nn.Parameter(torch.zeros(128), requires_grad=True)
W2 = nn.Parameter(torch.randn(128, 64) * 0.01, requires_grad=True)
b2 = nn.Parameter(torch.zeros(64), requires_grad=True)
W3 = nn.Parameter(torch.randn(64, num_outputs) * 0.01, requires_grad=True)
b3 = nn.Parameter(torch.zeros(num_outputs), requires_grad=True)

def net(X):
    X = X.reshape(-1, num_inputs)
    out = relu(torch.matmul(X, W1) + b1)    # first hidden layer
    out = relu(torch.matmul(out, W2) + b2)  # second hidden layer
    return torch.matmul(out, W3) + b3       # output layer
- How does changing the learning rate alter your results? Fixing the model architecture and other hyperparameters (including number of epochs), what learning rate gives you the best results?
- It changes the rate of convergence.
- What is the best result you can get by optimizing over all the hyperparameters (learning rate,
number of epochs, number of hidden layers, number of hidden units per layer) jointly?
- loss of 0.5
- Describe why it is much more challenging to deal with multiple hyperparameters.
- Combinatorial explosion: the number of combinations grows exponentially with the number of hyperparameters.
- What is the smartest strategy you can think of for structuring a search over multiple hyperparameters?
- Creating a matrix of all parameter values and then training over the combinations to find the best result; some heuristic may be required.
- Pass
- For me it was better when I increased the number of hidden layers from 1 to 2.
- pass
- How do we define best result? Is it the minimum test loss, or maximum accuracy on test dataset? Do we keep epochs constant?
- If we have multiple hyperparameters, the mix-and-match of all their values creates an exponential number of combinations to optimize over.
- Grid search can be a good way to go about it: increase the values exponentially, or maybe try binary search. The idea is to not step linearly but in orders of magnitude (log scale).
Thanks
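The grid-search idea above can be sketched like this. The evaluate function here is a hypothetical stand-in for "train the model and return its test accuracy", and the grid values are just examples; in practice each call would be a full training run.

```python
import itertools

# Hypothetical objective: stands in for training the model with these
# hyperparameters and returning test accuracy. This toy surrogate peaks
# at lr=0.2, num_hiddens=256, num_layers=2.
def evaluate(lr, num_hiddens, num_layers):
    return -abs(lr - 0.2) - abs(num_hiddens - 256) / 1000 - abs(num_layers - 2)

grid = {
    "lr": [0.05, 0.1, 0.2, 0.4],   # roughly log-spaced, as suggested above
    "num_hiddens": [64, 128, 256],
    "num_layers": [1, 2, 3],
}

# Exhaustively evaluate every combination and keep the best one.
best = max(itertools.product(*grid.values()), key=lambda combo: evaluate(*combo))
print(dict(zip(grid, best)))
```

With real training runs this exhaustive loop gets expensive fast, which is exactly the combinatorial explosion described above; random search or coarse-to-fine refinement are common ways to cut the cost.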
My answers: (I tried to maximize test accuracy)
- The knee of the curve is between 8 and 128 neurons. 8 neurons produced a surprisingly high test accuracy of 82.55%, whereas 128 neurons achieved a test accuracy of 85.33%. There was very little gain in test accuracy (if any) for numbers of neurons > 128.
- Additional hidden layers did not increase my test accuracy at h = 256 with the other parameters at their defaults.
- Slower learning rates appear to achieve the same test accuracy as 0.1; however, they require more epochs. Learning rates > 0.4 can become numerically unstable.
- About 86% with lr = 0.2, epochs = 20, # hidden layers = 2, h = 256
- Optimizing multiple hyperparameters is difficult because the optimization function is not necessarily convex and a sensitivity study requires a lot of computation time to run multiple trainings.
- Except when adjusting the learning rate, keep epochs constant at a low number like 5, then iterate through the hyperparameters and try to find a maximum for test accuracy?
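The learning-rate observations above can be illustrated with a toy 1-D gradient descent on f(w) = w² (a hypothetical illustration, not the book's training loop): smaller learning rates reach the same minimum but need many more steps.

```python
def steps_to_converge(lr, w=1.0, tol=1e-3, max_steps=10_000):
    """Run gradient descent on f(w) = w**2 and count steps until |w| < tol."""
    for step in range(1, max_steps + 1):
        w -= lr * 2 * w  # gradient of w**2 is 2*w
        if abs(w) < tol:
            return step
    return None  # did not converge within max_steps

for lr in (0.01, 0.1, 0.4):
    print(lr, steps_to_converge(lr))
```

The step counts shrink sharply as the learning rate grows, but past a critical value (here lr >= 1) the iterates oscillate or diverge, mirroring the numerical instability reported above for large rates.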
def net(X):
    X = X.reshape((-1, num_inputs))
    H = relu(X @ W1 + b1)  # Here '@' stands for matrix multiplication
    return H @ W2 + b2
For this block of code, I understand we want to use the reshape method to flatten the input, but I don't understand why we need the -1 in reshape to let the computer automatically match the first axis. I think the shape will always be (1, num_inputs), so there should be no need to auto-match the first axis.
Please let me know where my thinking is wrong.
Thank you.
@kevinmo Keep in mind that the first axis (the batch axis) can vary in size. In the present example, it is 256 (not 1), but it’s best if we let it be automatically inferred.
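To make that concrete, here is a small sketch (using NumPy; torch.Tensor.reshape behaves the same way) showing that -1 lets the batch axis be inferred, whatever the batch size happens to be:

```python
import numpy as np

num_inputs = 784  # 28 * 28, as for Fashion-MNIST

for batch_size in (1, 256):
    X = np.zeros((batch_size, 28, 28))  # a batch of images
    flat = X.reshape(-1, num_inputs)    # -1 is inferred as batch_size
    print(X.shape, "->", flat.shape)
```

With a hard-coded first axis of 1, the reshape would fail for any minibatch larger than a single image, which is why inferring it with -1 is the safer choice.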