@sushmit86 forward is called inside the built-in method __call__. You can look at this function in the PyTorch source code and see that self.forward is called inside it.
The reason for not calling forward explicitly via net.forward is that hooks are dispatched in the __call__ method.
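To make the difference concrete, here is a minimal sketch (using a plain nn.Linear as the module) showing that a forward hook fires when the module is invoked as net(X) but not when forward is called directly:

import torch
from torch import nn

net = nn.Linear(4, 2)
# register_forward_hook attaches a hook that __call__ dispatches after forward runs
net.register_forward_hook(lambda module, inp, out: print('hook fired'))

X = torch.rand(3, 4)
net(X)          # prints 'hook fired', because __call__ dispatches the hook
net.forward(X)  # same output, but the hook is skipped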
class MySequential(nn.Module):
    def __init__(self, *args):
        super().__init__()
        for block in args:
            # Here, block is an instance of a Module subclass. We save it
            # in the member variable _modules of the Module class, and its
            # type is OrderedDict
            self._modules[block] = block
Is it a typo to use the module instance as both the key and the value at the same time, or is there a better reason for it?
thx
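For comparison, here is a minimal sketch of the same container keyed by string indices, which is what nn.Module.add_module expects (the class name MySequentialStrKeys is just for illustration):

import torch
from torch import nn

class MySequentialStrKeys(nn.Module):
    def __init__(self, *args):
        super().__init__()
        for idx, block in enumerate(args):
            # add_module registers block in self._modules (an OrderedDict)
            # under the string name str(idx)
            self.add_module(str(idx), block)

    def forward(self, X):
        for block in self._modules.values():
            X = block(X)
        return X

net = MySequentialStrKeys(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
net(torch.rand(2, 20)).shape  # torch.Size([2, 10])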
Hi! I found a typo in Section 5.1.1 of the PyTorch version. In the code snippet used to define class MLP, inside the __init__() function there is a comment that reads “# Call the constructor of the MLP parent class ‘Block’ to perform […]”. The correct name of the parent class is ‘Module’ (‘Block’ is the mxnet version).
For example, the first fully-connected layer in our model above ingests an input of arbitrary dimension but returns an output of dimension 256.
The “model above” is defined as: nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10)) .
From what I understand, the first fully connected layer in this model is nn.Linear(20, 256), and it takes an input of dimension exactly 20, no more and no less.
Why is it stated that this layer takes an input of arbitrary dimension? What am I missing here?
Thanks, @gphilip, for raising this. Most of the book shares common text across frameworks, and we are trying to fix issues like this where the frameworks differ in design. Feel free to raise any other issues if you find something similar in other sections, either on the forum or in the GitHub repo. Really appreciate it!
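For what it's worth, PyTorch can also defer the input dimension, much like the text describes, via nn.LazyLinear (available in recent PyTorch versions); a minimal sketch:

import torch
from torch import nn

# nn.LazyLinear takes only the output dimension; the input dimension is
# inferred from the first batch that passes through the layer
net = nn.Sequential(nn.LazyLinear(256), nn.ReLU(), nn.LazyLinear(10))
X = torch.rand(2, 20)
net(X).shape  # torch.Size([2, 10]); the first layer's weight is now (256, 20)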
Hi,
In the custom MySequential class, why do we need idx to be a str in 'self._modules[str(idx)] = module'?
Also, in the comment on that line, you meant that '_modules' is of type OrderedDict rather than module, right?
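A quick check (on a recent PyTorch) illustrates both points: submodule names are stored as strings, and _modules is an OrderedDict; add_module even raises a TypeError if the name is not a string:

import torch
from torch import nn

net = nn.Sequential(nn.Linear(20, 256), nn.ReLU())
print(type(net._modules))  # <class 'collections.OrderedDict'>
print(list(net._modules))  # ['0', '1'] -- the submodule names are strings
# net.add_module(2, nn.Linear(256, 10))  # would raise TypeError: module name should be a string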
import torch
from torch import nn

class DaisyX(nn.Module):
    def __init__(self, genericModule: nn.Module, chain_length=5):
        super().__init__()
        for idx in range(chain_length):
            self: nn.Module
            # register each instance under a unique string name, e.g. '0Increment'
            self.add_module(str(idx) + genericModule.__name__, genericModule())

    def forward(self, X):
        for m in self.children():
            X = m(X)
        return X

class Increment(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, X):
        return X + 1

net = DaisyX(Increment, 5)
X = torch.zeros((2, 2))
net(X)
I don’t really understand this question. We use the self.add_module method of nn.Module to store modules, right? If we want to store modules in a Python list instead, does that mean we have to rewrite the whole structure?
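To see what actually goes wrong with a plain Python list, here is a small sketch: layers held in a list are not registered as submodules, so parameters() (and hence any optimizer, state_dict, .to(device), etc.) never sees them; nn.ModuleList restores the registration without changing the rest of the structure:

import torch
from torch import nn

class ListNet(nn.Module):
    def __init__(self):
        super().__init__()
        # plain Python list: the Linear layers are NOT registered as submodules
        self.layers = [nn.Linear(4, 4), nn.Linear(4, 4)]

    def forward(self, X):
        for layer in self.layers:
            X = layer(X)
        return X

class ModuleListNet(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList registers each layer, so parameters() can find them
        self.layers = nn.ModuleList([nn.Linear(4, 4), nn.Linear(4, 4)])

    def forward(self, X):
        for layer in self.layers:
            X = layer(X)
        return X

print(len(list(ListNet().parameters())))        # 0
print(len(list(ModuleListNet().parameters())))  # 4 (two weights, two biases)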
class parallelModule(nn.Module):
    def __init__(self, net1, net2, dim):
        super().__init__()
        self.net1 = net1
        self.net2 = net2
        self.dim = dim
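As quoted here, the class stops at __init__; presumably forward would run both sub-networks on the same input and concatenate their outputs along self.dim (that completion is my assumption). A sketch:

import torch
from torch import nn

class parallelModule(nn.Module):
    def __init__(self, net1, net2, dim):
        super().__init__()
        self.net1 = net1
        self.net2 = net2
        self.dim = dim

    def forward(self, X):
        # feed the same input to both sub-networks and concatenate the results
        return torch.cat((self.net1(X), self.net2(X)), dim=self.dim)

net = parallelModule(nn.Linear(20, 8), nn.Linear(20, 8), dim=1)
net(torch.rand(2, 20)).shape  # torch.Size([2, 16])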
My layer factory:

def layer_factory(num_layers):
    layers = []
    for _ in range(num_layers):
        layers.append(MLP())
    return layers
My deep network:

class DeepMLP(nn.Module):
    def __init__(self, num_layers):
        super().__init__()
        layers = layer_factory(num_layers)
        for idx, layer in enumerate(layers):
            self.add_module(str(idx), layer)

    def forward(self, X):
        for module in self.children():
            X = module(X)
        return X
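The same chain can also be built more compactly with nn.Sequential, which registers each layer under its string index for you. Note that stacking only works if each block's output size matches the next block's input size (if MLP here is the MLP from this section, which maps 20 inputs to 10 outputs, it cannot be chained as-is). A self-contained sketch using a stand-in block whose input and output sizes match:

import torch
from torch import nn

class Block(nn.Module):
    # stand-in for MLP(); input and output sizes match so blocks can be chained
    def __init__(self, dim=20):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, X):
        return self.net(X)

def layer_factory(num_layers):
    return [Block() for _ in range(num_layers)]

# nn.Sequential registers each block under its string index ('0', '1', ...)
net = nn.Sequential(*layer_factory(4))
net(torch.rand(2, 20)).shape  # torch.Size([2, 20])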