A quick question: does MXNet's NumPy interface implement all the functions available in the standard NumPy library? I was looking for a matrix inverse function and tried something like this

```
from mxnet import np

M = np.array([[1, 2], [1, 4]])
M_inv = np.linalg.inv(M)
```

However, it throws an error.

Hi @sushmit86, great question! Most of the NumPy operators are covered or will be covered soon. Please check the existing ones at https://mxnet.apache.org/versions/1.6/api/python/docs/api/ndarray/ndarray.html
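In the meantime, if a routine like `np.linalg.inv` is not yet available in your MXNet version, one workaround (a sketch using the standard NumPy library rather than MXNet's `np` module) is to do that step in plain NumPy:

```python
import numpy as onp  # standard NumPy, used here as a fallback

M = onp.array([[1.0, 2.0], [1.0, 4.0]])
M_inv = onp.linalg.inv(M)

# Sanity check: M @ M_inv should be close to the identity matrix
print(onp.allclose(M @ M_inv, onp.eye(2)))  # True
```

For an MXNet array, you could convert with `.asnumpy()` before the call, at the cost of leaving MXNet's computation graph for that step.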

Thanks a lot for your reply. While reading the docs I got a little confused: what is the difference between MXNet ndarrays and the NumPy extension from MXNet?

```
from mxnet import gluon, np, npx
import mxnet as mx

b = mx.nd.array([[1, 2, 3], [2, 3, 4]])
print(type(b))

M = np.array([[1, 2], [1, 4]])
print(type(M))
```

Hi @sushmit86. MXNet's ndarray has extended functionality for deep learning (such as autograd). Check out more details at https://medium.com/apache-mxnet/a-new-numpy-interface-for-apache-mxnet-incubating-dbb4a4096f9f

Hi, I’m new to deep learning.

What should I learn?

I want to choose a deep learning framework, but I'm confused about how to select one of these frameworks (TensorFlow, MXNet, PyTorch).

What is the main difference between these frameworks? And which should I learn first?

Hey @Iman_Jowkar, great question! The fundamental syntax of the three frameworks is similar, so don't worry too much about which one to choose. Once you dive into a deeper level, you will see more differences in computation performance, ease of use, etc.

Alex has taught a deep learning crash course in PyTorch; in total it is less than 4 hours, and it will give you a fundamental understanding of deep learning. Have fun learning DL!