Calculus

http://d2l.ai/chapter_preliminaries/calculus.html

I think it would help to add a note saying that you need to install these packages before running this code, as in the code block below. Anaconda comes with them pre-installed, but Miniconda doesn't. I mention this because the book suggests downloading Miniconda.

Also, in that example this didn't work:

from d2l import mxnet as d2l

but this works:

import mxnet as d2l

Hey @gpk2000, thanks for the suggestion. We recommend installing Miniconda because it includes most of the necessary libraries without being as heavyweight as Anaconda.

As for your question, please make sure you have the latest versions of D2L and MXNet installed. :wink:
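As a quick sanity check (a minimal sketch; the exact version numbers depend on the book release you follow), you can confirm that both packages are installed and importable:

# If either import fails, install the packages first, e.g. pip install d2l mxnet
import d2l
import mxnet

print(d2l.__version__)    # the book's bundle of helper functions
print(mxnet.__version__)  # the MXNet deep learning framework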


Is exercise 4 in section 2.4 possible?
How would we do it?

Hey @smizerex, yes, the chain rule can be applied here. Feel free to share your idea and discuss it here.


Is the first part of the answer to Q.4 in 2.4: du/da = (du/dx)(dx/da) + (du/dy)(dy/da) + (du/dz)(dz/da)?

Hi @asadalam, I believe you are right!
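For anyone who wants to verify this symbolically, here is a quick check with SymPy; the inner functions x(a, b), y(a, b), z(a, b) and the outer function f are arbitrary examples picked for illustration, not part of the exercise:

import sympy as sp

a, b = sp.symbols('a b')
X, Y, Z = sp.symbols('X Y Z')

# Arbitrary example inner functions x(a, b), y(a, b), z(a, b):
x = a + 2*b
y = a*b
z = sp.sin(a)

# Arbitrary example outer function u = f(x, y, z):
f = X**2 + X*Y + Z

# Direct differentiation after substituting x, y, z into f:
direct = sp.diff(f.subs({X: x, Y: y, Z: z}), a)

# Chain rule: du/da = (du/dx)(dx/da) + (du/dy)(dy/da) + (du/dz)(dz/da)
chain = (sp.diff(f, X)*sp.diff(x, a)
         + sp.diff(f, Y)*sp.diff(y, a)
         + sp.diff(f, Z)*sp.diff(z, a)).subs({X: x, Y: y, Z: z})

print(sp.simplify(direct - chain))  # prints 0, so the two expressions agree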

Hey, can you please provide a deeper explanation of the matrix differentiation rules given?


Hi @anant_jain, great question. We have an appendix chapter that covers the math in depth: https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/multivariable-calculus.html
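As a concrete illustration of one rule from that section, here is a numeric check of grad_x (x^T A x) = (A + A^T) x using finite differences; the particular A and x are made-up example values:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # example matrix
x = rng.standard_normal(4)       # example point

f = lambda v: v @ A @ v  # the quadratic form x^T A x

# Central-difference estimate of the gradient of f at x:
eps = 1e-6
grad_fd = np.array([(f(x + eps*e) - f(x - eps*e)) / (2*eps) for e in np.eye(4)])

print(np.allclose(grad_fd, (A + A.T) @ x, atol=1e-6))  # True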

I just wanted to thank the authors of this book. This book is a godsend!


http://d2l.ai/chapter_preliminaries/calculus.html#gradients
In the above section, how is the derivative defined? I may be wrong, but I think the derivative of Ax w.r.t. x should be A instead of A transpose, assuming the usual definitions of matrix differentiation. I did it in a way similar to Matrix Derivative (page 5).
Thanks in advance.

Hi @Abhinav_Raj, great question! There are two layout conventions in matrix calculus, the numerator layout and the denominator layout, and we are using the denominator layout. See the "Layout conventions" section in https://en.wikipedia.org/wiki/Matrix_calculus for more details.
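To make the difference concrete, here is a small numeric sketch (the example A and x are assumptions). The Jacobian J[i, j] = d(Ax)_i / d(x_j) equals A in the numerator layout, so the denominator-layout derivative used in the book is its transpose, A^T:

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))  # example matrix
x = rng.standard_normal(4)       # example point

# Estimate the Jacobian J[i, j] = d(Ax)_i / d(x_j) by finite differences:
eps = 1e-6
J = np.empty((3, 4))
for j in range(4):
    dx = np.zeros(4)
    dx[j] = eps
    J[:, j] = (A @ (x + dx) - A @ x) / eps

print(np.allclose(J, A, atol=1e-4))  # True: the numerator layout recovers A
# Under the denominator layout, d(Ax)/dx is J.T, i.e., A^T.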

@goldpiggy, I see, thanks for clearing that up. So basically we lay out y as a row vector and then calculate the gradient in the usual way (column-wise). Also, is there a specific reason for choosing this convention, say that it makes calculations easier, or that it's how MXNet/PyTorch/TF do it?
Thanks again.