@kwang this is already fixed in master. It will be updated in the next release.
Also, it will just be
net = MySequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
net(x)
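For a quick sanity check, x can be any batch of 20-dimensional inputs (the shape here is just an assumed example):

x = torch.rand(2, 20)   # assumed example batch: 2 samples, 20 features
net(x).shape            # torch.Size([2, 10])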
class ParallelBlock(nn.Module):
    def __init__(self, net1, net2):
        super().__init__()
        # Assign the sub-blocks as attributes so nn.Module registers them
        self.net1 = net1
        self.net2 = net2
        self.linear = nn.Linear(10 + 10, 10)

    def forward(self, X, X2):
        X = self.net1(X)
        X2 = self.net2(X2)
        return self.linear(torch.cat((X, X2), dim=1))

p = ParallelBlock(MLP(), MLP())
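A quick shape check, assuming each MLP maps 20-dimensional inputs to 10-dimensional outputs as defined elsewhere in this thread:

X1 = torch.rand(2, 20)
X2 = torch.rand(2, 20)
p(X1, X2).shape  # torch.Size([2, 10]): two 10-dim outputs concatenated, then projected back to 10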
Can someone explain the following code to me?
class MLP(nn.Module):
    # Declare a layer with model parameters. Here, we declare two fully
    # connected layers
    def __init__(self):
        # Call the constructor of the MLP parent class `Module` to perform
        # the necessary initialization. In this way, other function arguments
        # can also be specified during class instantiation, such as the model
        # parameters, `params` (to be described later)
        super().__init__()
        self.hidden = nn.Linear(20, 256)  # Hidden layer
        self.out = nn.Linear(256, 10)  # Output layer

    # Define the forward propagation of the model, that is, how to return
    # the required model output based on the input `X`
    def forward(self, X):
        # Note here we use the functional version of ReLU defined in the
        # nn.functional module
        return self.out(F.relu(self.hidden(X)))
How do we use the forward function directly with the object of MLP? As in:

net = MLP()
net(X)

How are we calling forward through net(X) directly rather than through net.forward(X)?
The reason for not calling forward explicitly by net.forward is that hooks are dispatched in nn.Module.__call__: calling net(X) runs any registered forward pre-hooks, then forward, then any forward hooks, whereas net.forward(X) would skip the hooks.
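A minimal sketch of the difference (the hook itself is just for illustration):

net = MLP()
X = torch.rand(2, 20)

def my_hook(module, inputs, output):
    print('forward hook fired:', output.shape)

handle = net.register_forward_hook(my_hook)
net(X)          # routes through nn.Module.__call__, so the hook fires
net.forward(X)  # bypasses __call__, so the hook does not fire
handle.remove()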
Thanks so much @anirudh
def __init__(self, *args):
    super().__init__()
    for block in args:
        # Here, `block` is an instance of a `Module` subclass. We save it
        # in the member variable `_modules` of the `Module` class, and its
        # type is OrderedDict
        self._modules[block] = block
Is it a typo to use the module as both the key and the value at the same time, or is there a better reason for it?
It’s a typo. Keys are supposed to be ids of str type, not Modules.
class MySequential(nn.Module):
    def __init__(self, *args):
        super().__init__()
        for idx, block in enumerate(args):
            self._modules[str(idx)] = block
    ...
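For reference, the elided forward simply chains the registered blocks in insertion order; in the book it reads:

def forward(self, X):
    for block in self._modules.values():
        X = block(X)
    return X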
Thanks for raising the issue, this is now fixed here.
I tried to use a Python list to implement class MySequential. Here is the code:
class MySequential(nn.Module):
    def __init__(self, *args):
        super().__init__()
        self.list = [block for block in args]

    def forward(self, X):
        for block in self.list:
            X = block(X)
        return X

net = MySequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
net(torch.rand(2, 20))
tensor([[ 0.2324,  0.0579, -0.0106, -0.0143,  0.1208, -0.2896, -0.0271, -0.1762,
          ...],
        [ 0.3362,  0.0312, -0.0852, -0.1253,  0.1525, -0.1945,  0.0685,  0.0335,
         -0.1404, -0.0617]], grad_fn=<AddmmBackward0>)
And the output looks fine to me. I am curious: what is the problem with using a list to replace self._modules when implementing MySequential?
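One concrete consequence you can check (a quick sketch; the zero count assumes the three-layer net above): blocks kept in a plain Python list are invisible to nn.Module's bookkeeping.

net = MySequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
print(len(list(net.parameters())))  # 0 -- nothing was registered
print(net.state_dict())             # OrderedDict() -- nothing to save
# An optimizer built from net.parameters() would update nothing, and
# net.to(device) would not move the layers; nn.ModuleList is the
# container that registers blocks the way _modules does.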
Hi! I found a typo in Section 5.1.1 of the PyTorch version. In the code snippet used to define class MLP, inside the __init__() function there is a comment that reads “# Call the constructor of the MLP parent class ‘Block’ to perform […]”. The correct name of the parent class is ‘Module’ (‘Block’ is the name used in the MXNet version).
Great book! Thanks!