As stated in the text:
In all, if we feed X into a convolutional layer f to output Y=f(X) and create a transposed convolutional layer g with the same hyperparameters as f except for the number of output channels being the number of channels in X, then g(Y) will have the same shape as X. This can be illustrated in the following example.
This does not hold in general, as the following code shows:
import torch
from torch import nn

X = torch.rand(size=(1, 10, 16, 16))
# Convolution with stride 2: spatial size goes from 16 to floor((16 + 2*1 - 3) / 2) + 1 = 8
conv = nn.Conv2d(10, 20, kernel_size=3, padding=1, stride=2)
# Transposed convolution with the same hyperparameters, channel counts swapped
tconv = nn.ConvTranspose2d(20, 10, kernel_size=3, padding=1, stride=2)
tconv(conv(X)).shape == X.shape  # False
Here conv(X) yields a tensor with shape [1, 20, 8, 8], while tconv(conv(X)) yields a tensor with shape [1, 10, 15, 15] rather than the original [1, 10, 16, 16], so the comparison evaluates to False.
At least this is the behavior in the PyTorch 1.7.1 and 1.9.1 implementations, which I have tested with. Please correct me if I am wrong about this.
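For what it's worth, the original shape can be recovered in this example by passing output_padding=1 to nn.ConvTranspose2d (output_padding is an actual parameter of that layer; the value 1 is specific to this configuration, not a general rule):

import torch
from torch import nn

X = torch.rand(size=(1, 10, 16, 16))
conv = nn.Conv2d(10, 20, kernel_size=3, padding=1, stride=2)
# output_padding adds one extra row and column to the output, compensating
# for the size lost to flooring in the forward convolution
tconv = nn.ConvTranspose2d(20, 10, kernel_size=3, padding=1, stride=2,
                           output_padding=1)
print(tconv(conv(X)).shape == X.shape)  # True

In general the value that restores the shape is (n + 2*padding - kernel_size) % stride for input size n, so a single fixed output_padding only works when all inputs share the same remainder modulo the stride.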
It’s because of rounding when dividing integers. For real numbers, (X/Y) * Y = X, but that identity does not always hold under integer (floor) division. For example, let X = 5 and Y = 2. Then X/Y = 5 // 2 = 2, and (X/Y) * Y = 2 * 2 = 4, which is not equal to the original X.
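To make this concrete, here is a minimal sketch (the helper names conv_out and tconv_out are mine) that plugs the numbers from the example above into the output-size formulas from the PyTorch docs: a convolution maps spatial size n to floor((n + 2p - k)/s) + 1, and a transposed convolution maps n' to (n' - 1)*s - 2p + k + output_padding.

def conv_out(n, k, p, s):
    # Conv2d output size: the floor division is where information is lost
    return (n + 2 * p - k) // s + 1

def tconv_out(n, k, p, s, output_padding=0):
    # ConvTranspose2d output size
    return (n - 1) * s - 2 * p + k + output_padding

m = conv_out(16, k=3, p=1, s=2)                       # 15 // 2 + 1 = 8
print(m)                                              # 8
print(tconv_out(m, k=3, p=1, s=2))                    # 7 * 2 - 2 + 3 = 15
print(tconv_out(m, k=3, p=1, s=2, output_padding=1))  # 16, shape recovered

Note that conv_out(15, k=3, p=1, s=2) is also 8, so input sizes 15 and 16 are indistinguishable after the convolution; the transposed convolution alone cannot know which one to restore, which is exactly the ambiguity output_padding resolves.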