Auto Differentiation

http://d2l.ai/chapter_preliminaries/autograd.html

My solution to question 5:

import numpy as np
import torch
from d2l import torch as d2l

x = np.linspace(-np.pi, np.pi, 100)
x = torch.tensor(x, requires_grad=True)
y = torch.sin(x)
# backward() needs a scalar, so call it once per element,
# keeping the graph alive between calls
for i in range(100):
    y[i].backward(retain_graph=True)

d2l.plot(x.detach(), (y.detach(), x.grad), legend=['sin(x)', 'grad w.r.t. x'])
[plot of sin(x) and its gradient]
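
Calling backward once per element works, but it re-walks the graph 100 times. A shorter route, assuming the same setup (just a sketch, not the book's reference solution): since each y[i] depends only on x[i], summing y to a scalar first gives the same gradient in a single pass.

import math
import torch

# same grid as above, built directly in torch so everything stays in one framework
x = torch.linspace(-math.pi, math.pi, 100, requires_grad=True)
y = torch.sin(x)
# each y[i] depends only on x[i], so differentiating the scalar sum(y)
# puts d sin(x_i)/d x_i = cos(x_i) into every slot of x.grad
y.sum().backward()
print(torch.allclose(x.grad, torch.cos(x).detach()))  # True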

I think it would be cool if section 2.5.1 (and later sections where it occurs) wrote the derivative out explicitly, something like $\frac{d}{dx}[2x^Tx]$.
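
For reference, with $x$ a vector as in that section, the derivative works out to $\frac{d}{dx}\left[2x^Tx\right] = 4x$, which matches the x.grad value computed there.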

Any take on this question? :slight_smile:

Why is the second derivative much more expensive to compute than the first derivative?

I know second derivatives can give extra information about critical points found with the first derivative, but why are second derivatives so expensive to compute?

@rammy_vadlamudi

chain rule
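
To make the "chain rule" answer concrete: one backward pass already gives all $n$ entries of the gradient, but the second derivative is the Hessian, i.e. the Jacobian of that gradient with $n^2$ entries, and in reverse mode each of its rows needs an extra backward-style pass. A rough PyTorch sketch with an arbitrary example function (my own illustration, not from the book):

import torch

def f(x):
    # arbitrary scalar-valued example: f(x) = sum(x^3)
    return (x ** 3).sum()

n = 5
x = torch.randn(n, requires_grad=True)

# first derivative: a single backward pass yields the whole gradient (n numbers)
grad, = torch.autograd.grad(f(x), x, create_graph=True)

# second derivative: each of the n rows of the Hessian needs its own
# backward pass through the graph of `grad`, so roughly n times the work
hessian = torch.stack([torch.autograd.grad(grad[i], x, retain_graph=True)[0]
                       for i in range(n)])
print(hessian)  # diagonal matrix with 6 * x on the diagonal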


My solution:

from mxnet import np, npx, autograd
from d2l import mxnet as d2l
npx.set_np()

def f(x):
    return np.sin(x)

x = np.linspace(-np.pi, np.pi, 100)
x.attach_grad()
with autograd.record():
    y = f(x)
# y is a vector, so backward() implicitly sums it before differentiating
y.backward()
d2l.plot(x, (y, x.grad), legend=['sin(x)', 'cos(x)'])

@asadalam
You can use an image or a GitHub URL to show code.
And we should avoid importing torch and mxnet at the same time; it is confusing.

Is there a specific reason not to use both torch and mxnet? And how can we use functions like attach_grad() and build a graph without mxnet, using only plain numpy ndarrays?

@asadalam
For your first question: your code doesn't actually use anything from PyTorch, so importing torch is unnecessary.
For your second question, check the source code behind autograd:
https://mxnet.apache.org/versions/1.6/api/python/docs/api/autograd/index.html#mxnet.autograd.backward
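
As far as I know, plain numpy ndarrays don't record a computational graph at all, so there is no attach_grad equivalent outside mxnet/torch; the closest numpy-only substitute is numerical differentiation. A minimal sketch (the helper numerical_grad below is just my own illustration):

import numpy as np

def numerical_grad(f, x, eps=1e-6):
    # central-difference estimate of df/dx for an elementwise function f
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = np.linspace(-np.pi, np.pi, 100)
print(np.allclose(numerical_grad(np.sin, x), np.cos(x), atol=1e-5))  # True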


I don’t understand now…

What do these mean?
@szha

Oh yes, sorry, I initially used torch for its sine function but couldn't integrate it with attach_grad and building the graph. I switched to the numpy function but forgot to remove import torch :slightly_smiling_face:

You can try pytorch code too. It will work.

I did a few examples and discovered a problem:

from mxnet import autograd, np, npx
import math # function exp()
npx.set_np()
x = np.arange(5)
print(x)
def f(a):
    # return 2 * a * a # works fine
    return math.exp(a) # produces error
print(f(1)) # shows 2.71828...
x.attach_grad()
print(x.grad)
with autograd.record():
    fstrich = f(x)
fstrich.backward()
print(x.grad)

It works fine with the function 2 * a * a (and other polynomials) but produces an error with exp(x):
TypeError: only size-1 arrays can be converted to Python scalars

[0. 1. 2. 3. 4.]
2.718281828459045
[0. 0. 0. 0. 0.]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-78-cea180af7b0c> in <module>
     11 print(x.grad)
     12 with autograd.record():
---> 13     fstrich = f(x)
     14 fstrich.backward()
     15 print(x.grad)

<ipython-input-78-cea180af7b0c> in f(a)
      6 def f(a):
      7     # return 2 * a * a # works fine
----> 8     return math.exp(a) # produces error
      9 print(f(1)) # shows 2.71828...
     10 x.attach_grad()

c:\us....e\lib\site-packages\mxnet\numpy\multiarray.py in __float__(self)
    791         num_elements = self.size
    792         if num_elements != 1:
--> 793             raise TypeError('only size-1 arrays can be converted to Python scalars')
    794         return float(self.item())
    795 

TypeError: only size-1 arrays can be converted to Python scalars

I have no idea why exp() doesn't work.
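
The culprit is math.exp: it accepts only a single Python scalar (hence the "only size-1 arrays" error when it tries to convert the whole ndarray to a float), and being plain Python it would be invisible to autograd even for scalars. A minimal fix, keeping the rest of your snippet unchanged, is to use mxnet's np.exp:

from mxnet import autograd, np, npx
npx.set_np()

def f(a):
    return np.exp(a)  # elementwise and recorded by autograd

x = np.arange(5)
x.attach_grad()
with autograd.record():
    y = f(x)
y.backward()
print(x.grad)  # equals exp(x), since d/dx exp(x) = exp(x)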

