http://d2l.ai/chapter_convolutional-neural-networks/conv-layer.html
Hey @anirudh, in Section 6.2.4 (Learning a Kernel), when printing this at the bottom of our for loop:
if (i + 1) % 2 == 0:
    print(f'batch {i+1}, loss {l.sum():.3f}')
should it be "batch" or "epoch"? I thought it was epoch; could you explain why it's batch instead?
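For context, the surrounding training loop from that section looks roughly like this (reproduced from memory, so treat the details as approximate):

import torch
from torch import nn
from d2l import torch as d2l

X = torch.ones((6, 8))
X[:, 2:6] = 0
Y = d2l.corr2d(X, torch.tensor([[1.0, -1.0]]))   # target edge map
X, Y = X.reshape((1, 1, 6, 8)), Y.reshape((1, 1, 6, 7))

conv2d = nn.Conv2d(1, 1, kernel_size=(1, 2), bias=False)
for i in range(10):
    Y_hat = conv2d(X)
    l = (Y_hat - Y) ** 2
    conv2d.zero_grad()
    l.sum().backward()
    conv2d.weight.data[:] -= 3e-2 * conv2d.weight.grad
    if (i + 1) % 2 == 0:
        # each iteration consumes the full (single-image) dataset,
        # which is why "epoch" arguably fits better than "batch"
        print(f'batch {i+1}, loss {l.sum():.3f}')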
When you try to automatically find the gradient for the Conv2D class we created, what kind of error message do you see?
Got this error message: "Inplace operations are not supported using autograd."
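A similar complaint can be reproduced in PyTorch with an in-place update on a leaf tensor that requires gradients; a minimal sketch (my example, not the book's code):

import torch

x = torch.ones(3, requires_grad=True)
try:
    x += 1.0   # in-place update on a leaf tensor that requires grad
except RuntimeError as e:
    print(e)   # "a leaf Variable that requires grad is being used in an in-place operation"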
How do you represent a cross-correlation operation as a matrix multiplication by changing the input and kernel tensors?
–> Flip the two-dimensional kernel tensor both horizontally and vertically, and then perform the cross-correlation operation with the input tensor:
import torch
import matplotlib.pyplot as plt
from d2l import torch as d2l   # for the corr2d saved from the section

X = torch.ones((6, 8))
X[:, 2:6] = 0                      # the edge image from the section
K = torch.tensor([[1.0, -1.0]])    # the section's edge detector, shape (1, 2)
# flip horizontally, then vertically; cross-correlating with the flipped
# kernel is the same as convolving with the original one
K = torch.flip(K, [1])
K = torch.flip(K, [0])
print(K)
Y = d2l.corr2d(X, K)
plt.imshow(Y, cmap="gray")
plt.show()
What is the minimum size of a kernel to obtain a derivative of degree d?
–> I have no idea about this. Can someone clarify?
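One way to reason about it (my sketch, not from the book): the d-th finite difference comes from convolving the first-difference stencil [1, -1] with itself d times, which yields binomial coefficients on d + 1 taps, so a 1D kernel needs size at least d + 1.

def diff_kernel(d):
    # build the d-th difference stencil by repeatedly convolving with [1, -1]
    k = [1.0]
    for _ in range(d):
        k = [a - b for a, b in zip(k + [0.0], [0.0] + k)]
    return k

print(diff_kernel(1))   # [1.0, -1.0]
print(diff_kernel(2))   # [1.0, -2.0, 1.0]
print(diff_kernel(3))   # [1.0, -3.0, 3.0, -1.0]  -- always d + 1 taps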
For Exercise 2, l.sum().backward() is already computing the gradient, is it not?
How did you try to automatically find the gradient?
Yes, through backpropagation; the gradient of the leaf tensor is stored in net.weight.grad.
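To make that concrete, here is a minimal sketch (my reconstruction: it assumes the section's Conv2D class, reproduced from memory, and the corr2d saved in the d2l package, so treat the details as approximate):

import torch
from torch import nn
from d2l import torch as d2l

class Conv2D(nn.Module):   # the section's custom layer, from memory
    def __init__(self, kernel_size):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(kernel_size))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return d2l.corr2d(x, self.weight) + self.bias

net = Conv2D(kernel_size=(1, 2))
X = torch.ones((6, 8))
X[:, 2:6] = 0
l = net(X).sum()
l.backward()              # backpropagation populates .grad on the parameters
print(net.weight.grad)    # the gradient lives on the parameter, not on the module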
Exercises

Construct an image X with diagonal edges.

What happens if you apply the kernel K in this section to it?
–> Zero matrix.

What happens if you transpose X?
–> No change.

What happens if you transpose K?
–> Zero matrix.
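For what it's worth, here is a quick empirical check (my sketch, assuming the corr2d saved in the d2l package and the section's kernel K = [[1.0, -1.0]]); on a lower-triangular image the outputs come out non-zero along the diagonal:

import torch
from d2l import torch as d2l

X = torch.tril(torch.ones((6, 6)))   # lower-triangular image: one diagonal edge
K = torch.tensor([[1.0, -1.0]])
print(d2l.corr2d(X, K))       # +1 entries along the diagonal: the edge is detected
print(d2l.corr2d(X.t(), K))   # -1 entries along the (transposed) diagonal: still detected
print(d2l.corr2d(X, K.t()))   # the transposed kernel responds to the diagonal too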


When you try to automatically find the gradient for the Conv2D class we created, what kind of error message do you see?
* I am able to do `net.weight.grad`; when I try `net.grad` I get the error `'Conv2d' object has no attribute 'grad'`.
How do you represent a cross-correlation operation as a matrix multiplication by changing the input and kernel tensors?
* Cross-correlation is basically a matrix multiplication between kernel-shaped slices of the tensor X and summing over them, as in the sketch below.
* It can be done by padding K and X based on what is needed to multiply.
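One concrete way to see this in code (my sketch, using PyTorch's F.unfold to perform the im2col trick on a made-up 3×3 input and 2×2 kernel): unfold lays each kernel-sized window of X out as a column, and multiplying by the flattened kernel reproduces the cross-correlation.

import torch
import torch.nn.functional as F
from d2l import torch as d2l

X = torch.arange(9.0).reshape(1, 1, 3, 3)    # (N, C, H, W)
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])

cols = F.unfold(X, kernel_size=2)            # (1, 4, 4): one column per window
Y = (K.reshape(1, -1) @ cols).reshape(2, 2)  # flattened kernel times im2col matrix

print(Y)                                     # tensor([[19., 25.], [37., 43.]])
print(d2l.corr2d(X.reshape(3, 3), K))        # same result via the section's corr2d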

Design some kernels manually.

What is the form of a kernel for the second derivative?
One way would be to manually compute the second derivative and then let a kernel be learned using backpropagation; see also
https://dsp.stackexchange.com/questions/10605/kernels-to-compute-second-order-derivative-of-digital-image
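For reference, the standard finite-difference answer (my addition, not from the book): in 1D the second-derivative kernel is [1, -2, 1], and the 2D analogue is the Laplacian stencil.

import torch
from d2l import torch as d2l

# 1D: the second-difference stencil applied to the quadratic ramp x^2,
# whose second derivative is the constant 2
X = torch.tensor([[0.0, 1.0, 4.0, 9.0, 16.0, 25.0]])   # x^2 for x = 0..5
K = torch.tensor([[1.0, -2.0, 1.0]])
print(d2l.corr2d(X, K))    # tensor([[2., 2., 2., 2.]])

# 2D: the Laplacian stencil, the sum of second differences along both axes
L = torch.tensor([[0.0,  1.0, 0.0],
                  [1.0, -4.0, 1.0],
                  [0.0,  1.0, 0.0]])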

What is the kernel for an integral?
 How do you actually make one manually?
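My guess at a manual construction (not from the book): an all-ones kernel computes a windowed integral (a Riemann sum over the window); a full running integral would need a kernel as wide as the signal, which is why torch.cumsum is the more practical tool.

import torch
from d2l import torch as d2l

X = torch.arange(6.0).reshape(1, -1)   # a 1D signal as a (1, 6) tensor
K = torch.ones((1, 3))                 # all-ones kernel: a windowed integral
print(d2l.corr2d(X, K))                # tensor([[3., 6., 9., 12.]])
print(torch.cumsum(X, dim=1))          # the full running integral, for comparison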


What is the minimum size of a kernel to obtain a derivative of degree d?
* Don't know.
I think so too; "epoch" reads better than "batch" here, since each iteration of the loop processes the full (single-image) dataset.
Construct an image X with diagonal edges.

What happens if you apply the kernel K in this section to it?
It detects the diagonal edges.

What happens if you transpose X?
Same.

What happens if you transpose K?
Same also.
How do you represent a cross-correlation operation as a matrix multiplication by changing the input and kernel tensors?
Transforming the kernel into a matrix:
Km = torch.zeros((9, 5))
kv = torch.tensor([0.0, 1.0, 0.0, 2.0, 3.0])  # kernel [[0, 1], [2, 3]] flattened, with a gap for the row stride
for i in range(5):  # one shifted copy per column (range(4) here was a bug: it zeroed out the last output)
    Km[i:i+5, i] = kv
Km = Km.t()
Km = Km[torch.arange(Km.size(0)) != 2]  # drop the shift whose 2x2 window would wrap across a row boundary
Km = Km.t()
print(Km)
tensor([[0., 0., 0., 0.],
        [1., 0., 0., 0.],
        [0., 1., 0., 0.],
        [2., 0., 0., 0.],
        [3., 2., 1., 0.],
        [0., 3., 0., 1.],
        [0., 0., 2., 0.],
        [0., 0., 3., 2.],
        [0., 0., 0., 3.]])
Transforming the input X into a vector:
X = torch.tensor([[float(i) for i in range(9)]])  # the 3x3 input 0..8, flattened
print(X)
tensor([[0., 1., 2., 3., 4., 5., 6., 7., 8.]])
X @ Km  # matrix multiplication
result: tensor([[19., 25., 37., 43.]])  # matches corr2d of the 3x3 input with kernel [[0, 1], [2, 3]]
Quoting the book: "We reprint a key figure in Fig. 7.2.2 to illustrate the striking similarities."
What similarities is Fig. 7.2.2 trying to illustrate? I mean, what is being compared to what?