Calculus

http://d2l.ai/chapter_preliminaries/calculus.html

I think it would be better to add a note that you need to install these packages before running this code, such as in the code block below. Anaconda comes with them pre-installed but Miniconda doesn't. I am saying this because the book suggests installing Miniconda.

Also, in that example this didn't work:

from d2l import mxnet as d2l

but this works:

import mxnet as d2l
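A minimal sanity check of the environment, assuming d2l and MXNet were installed beforehand (e.g. with pip, roughly the book's suggested setup; exact package versions are not pinned here):

from d2l import mxnet as d2l   # the book's import; needs a reasonably recent d2l release
import mxnet as mx

print(mx.__version__)          # confirms MXNet is importable
print(hasattr(d2l, 'plot'))    # confirms the d2l MXNet module loaded correctly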


Hey @gpk2000, thanks for the suggestion about Anaconda. We recommend installing Miniconda as it provides most of the necessary libraries without being as heavyweight as Anaconda.

As for your question, please make sure you have the latest versions of D2L and MXNet installed. :wink:


Is exercise 4 in section 2.4 possible?
How would we do it?

Hey @smizerex, yes, the chain rule can be applied here. Feel free to share your idea and discuss it here.


Is the first part of the answer for Q.4 in 2.4: du/da = (du/dx)(dx/da) + (du/dy)(dy/da) + (du/dz)(dz/da)?


Hi @asadalam, I believe you are right!
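For reference, writing out the full chain rule, assuming exercise 4 asks about u = f(x, y, z) where x = x(a, b), y = y(a, b), and z = z(a, b):

∂u/∂a = (∂u/∂x)(∂x/∂a) + (∂u/∂y)(∂y/∂a) + (∂u/∂z)(∂z/∂a)
∂u/∂b = (∂u/∂x)(∂x/∂b) + (∂u/∂y)(∂y/∂b) + (∂u/∂z)(∂z/∂b)

Since x, y, and z all depend on both a and b, partial derivatives are used throughout.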

Hey, can you please provide a deeper explanation of the matrix differentiation rules given here?


Hi @anant_jain, great question. We have a math appendix covering the in-depth details, for example https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/multivariable-calculus.html
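To complement that, here is a quick numerical sanity check (my own sketch in NumPy, not from the book) of one rule from this section, the gradient of x^T A x being (A + A^T)x, using central finite differences:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

f = lambda v: v @ A @ v  # scalar-valued f(x) = x^T A x
eps = 1e-6
# estimate each partial derivative df/dx_j with a central difference
numeric = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                    for e in np.eye(4)])
analytic = (A + A.T) @ x
print(np.allclose(numeric, analytic, atol=1e-4))  # expected: True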

I just wanted to thank the authors of this book. This book is a godsend!


http://d2l.ai/chapter_preliminaries/calculus.html#gradients
In the above section, how is the derivative defined? I may be wrong, but I think the derivative of Ax w.r.t. x should be A instead of A transpose, assuming the usual definitions of matrix differentiation. I did it in a similar way to Matrix Derivative (page 5).
Thanks in advance.

Hi @Abhinav_Raj, great question! There are two layouts in matrix calculus, numerator layout and denominator layout, and we use the denominator layout. Check the “Layout conventions” section in https://en.wikipedia.org/wiki/Matrix_calculus for more details.

@goldpiggy, I see, thanks for clearing that up. So basically we lay out y as a row vector and then calculate the gradient in the usual way (column-wise). Also, is there a specific reason for choosing this convention, say that it makes calculations easier, or is that how it is done in MXNet/PyTorch/TF?
Thanks again.

Hey @Abhinav_Raj, it doesn't matter that much for programming. The difference is between a row vector and a column vector, but both are just vectors in code.
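To make the layout point concrete, here is a small NumPy sketch (my own illustration, not from the book): the finite-difference Jacobian of f(x) = Ax, laid out with one row per output, recovers A (numerator layout); the book's denominator-layout result, the derivative of Ax being A transpose, is just that Jacobian transposed.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

f = lambda v: A @ v
eps = 1e-6
# column j of the Jacobian is approximated by a central difference along e_j
J = np.stack([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
              for e in np.eye(4)], axis=1)
print(np.allclose(J, A))  # True: numerator layout gives A; J.T is the book's A^T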

http://d2l.ai/chapter_preliminaries/calculus.html#gradients
In the above section the gradient is defined in (2.4.9) for a function f: ℝⁿ → ℝ,
but Ax and xᵀA are a column vector and a row vector respectively, so in those cases f(x) is f: ℝⁿ → ℝᵐ. How do you define ∇f(x) in this case? Is it the transposed Jacobian?

I may be wrong, but I believe that the nabla (or del) operator is not appropriate for the derivatives of Ax and xᵀA: it is used for gradient vectors, but these functions are ℝⁿ → ℝᵐ, so their derivatives are matrices (Jacobian matrices). I think denoting these by ∂(Ax)/∂x and ∂(xᵀA)/∂x would be better. The same could be said for the squared Frobenius norm at the end.

If my understanding is right, I will happily contribute. If not, at least I learned something new. :slight_smile:
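For reference, the definition I have in mind (assuming the standard numerator-layout Jacobian): for f: ℝⁿ → ℝᵐ,

J = ∂f/∂x ∈ ℝ^(m×n), with entries J_ij = ∂f_i/∂x_j,

so for f(x) = Ax we get J = A, and the book's denominator-layout derivative is the transpose, Aᵀ.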

display.set_matplotlib_formats('svg') is deprecated; it is recommended to use matplotlib_inline.backend_inline.set_matplotlib_formats() instead.
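A sketch of the suggested replacement (assuming the matplotlib_inline package is available, which it is wherever Jupyter/IPython is installed):

from matplotlib_inline import backend_inline

# render matplotlib figures as SVG in the notebook, replacing the deprecated call
backend_inline.set_matplotlib_formats('svg')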