## Description
Using **HuberLoss()** (with or without parameters) from the `mxnet.gluon.loss` module raises a **TypeError** with the message below when it is used in a simple regression computation where, for example, `L2Loss` or `L1Loss` raise no exception or problem.
### Error Message
> TypeError: Operator `abs` registered in backend is known as `abs` in Python. This is a legacy operator which can only accept legacy ndarrays, while received an MXNet numpy ndarray. Please call `as_nd_ndarray()` upon the numpy ndarray to convert it to a legacy ndarray, and then feed the converted array to this operator.
Stack Trace:
```
Traceback (most recent call last):
<ipython-input-13-46119558e5f5> in <module>
3 for X, y in data_iter:
4 with autograd.record():
----> 5 l = loss(net(X), y)
6 l.backward()
7 trainer.step(batch_size)
~/.conda/envs/d2l/lib/python3.7/site-packages/mxnet/gluon/block.py in __call__(self, *args)
691 hook(self, args)
692
--> 693 out = self.forward(*args)
694
695 for hook in self._forward_hooks.values():
~/.conda/envs/d2l/lib/python3.7/site-packages/mxnet/gluon/block.py in forward(self, x, *args)
1156 params = {k: v.data(ctx) for k, v in self._reg_params.items()}
1157
-> 1158 return self.hybrid_forward(ndarray, x, *args, **params)
1159
1160 params = {i: j.var() for i, j in self._reg_params.items()}
~/.conda/envs/d2l/lib/python3.7/site-packages/mxnet/gluon/loss.py in hybrid_forward(self, F, pred, label, sample_weight)
605 def hybrid_forward(self, F, pred, label, sample_weight=None):
606 label = _reshape_like(F, label, pred)
--> 607 loss = F.abs(label - pred)
608 loss = F.where(loss > self._rho, loss - 0.5 * self._rho,
609 (0.5 / self._rho) * F.square(loss))
~/.conda/envs/d2l/lib/python3.7/site-packages/mxnet/ndarray/register.py in abs(data, out, name, **kwargs)
~/.conda/envs/d2l/lib/python3.7/site-packages/mxnet/ndarray/register.py in _verify_all_legacy_ndarrays(op_name, func_name, args, out)
97 'convert it to a legacy ndarray, and then feed the converted '
98 'array to this operator.'
---> 99 .format(op_name, func_name))
100 if out is None:
101 return
TypeError: Operator `abs` registered in backend is known as `abs` in Python. This is a legacy operator which can only accept legacy ndarrays, while received an MXNet numpy ndarray. Please call `as_nd_ndarray()` upon the numpy ndarray to convert it to a legacy ndarray, and then feed the converted array to this operator.
```
## To Reproduce
One of the exercises in [d2l-course chapter 3.3](https://d2l.ai/chapter_linear-neural-networks/linear-regression-gluon.html) asks to substitute the L2Loss with the HuberLoss. Doing so triggers the error above (a sketch of the substituted code is shown below).
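The synthetic data and network setup in this sketch are rough stand-ins for the book's `d2l` helpers (an assumption on my part); the training loop itself is the one visible in the stack trace:

```
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn, loss as gloss

npx.set_np()

# Synthetic linear-regression data (stand-in for d2l.synthetic_data).
true_w = np.array([2, -3.4])
true_b = 4.2
features = np.random.normal(0, 1, (1000, 2))
labels = np.dot(features, true_w) + true_b + np.random.normal(0, 0.01, (1000,))

batch_size = 10
dataset = gluon.data.ArrayDataset(features, labels)
data_iter = gluon.data.DataLoader(dataset, batch_size, shuffle=True)

net = nn.Sequential()
net.add(nn.Dense(1))
net.initialize(init.Normal(sigma=0.01))

loss = gloss.HuberLoss()  # the exercise: substitute this for gloss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.03})

for X, y in data_iter:
    with autograd.record():
        l = loss(net(X), y)  # <- raises the TypeError from `abs`
    l.backward()
    trainer.step(batch_size)
```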
It also fails outside the training loop, with a minimal snippet like the following:
```
from mxnet import np, npx
from mxnet.gluon import loss as gloss

npx.set_np()  # NumPy interface enabled, as in the d2l notebooks

loss = gloss.HuberLoss()
input_scalar = np.array([5])
output_scalar = np.array([6])
loss(input_scalar, output_scalar)  # raises the TypeError shown above
```
### Steps to reproduce
1. Install MXNet 1.6.0 and enable the NumPy interface with `npx.set_np()`.
2. Run the snippet above (or the d2l chapter 3.3 training loop with `HuberLoss()` substituted for `L2Loss()`).
3. The `loss(...)` call raises the `TypeError` shown above.
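For comparison, the other regression losses handle the very same inputs without complaint under the same setup:

```
from mxnet import np, npx
from mxnet.gluon import loss as gloss

npx.set_np()

input_scalar = np.array([5])
output_scalar = np.array([6])

# L1Loss and L2Loss accept MXNet numpy ndarrays without raising.
print(gloss.L1Loss()(input_scalar, output_scalar))
print(gloss.L2Loss()(input_scalar, output_scalar))

# HuberLoss raises the TypeError on the very same inputs.
# print(gloss.HuberLoss()(input_scalar, output_scalar))
```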
## What have you tried to solve it?
1. I reinstalled the latest version of MXNet; the problem persists.
2. I tried other losses to check whether this is a more general problem: with `L1Loss` and `L2Loss` the error does not occur (see the check above).
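The error message itself suggests calling `as_nd_ndarray()` on the inputs. A minimal sketch of that direction (legacy NDArrays, without `npx.set_np()`) runs without error, although it is unclear how this would fit into the numpy-mode d2l workflow:

```
from mxnet import np
from mxnet.gluon import loss as gloss

loss = gloss.HuberLoss()
input_scalar = np.array([5])
output_scalar = np.array([6])

# Conversion suggested by the error message: feed legacy NDArrays instead
# of MXNet numpy ndarrays. This runs without raising (npx.set_np() not set).
print(loss(input_scalar.as_nd_ndarray(), output_scalar.as_nd_ndarray()))
```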
## Environment
We recommend using our script for collecting the diagnostic information. Run the following command and paste the outputs below:
```
curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python
# ----------Python Info----------
Version : 3.7.5
Compiler : Clang 4.0.1 (tags/RELEASE_401/final)
Build : ('default', 'Oct 25 2019 10:52:18')
Arch : ('64bit', '')
------------Pip Info-----------
Version : 19.3.1
Directory : /Users/gonzalopolo/.conda/envs/d2l/lib/python3.7/site-packages/pip
----------MXNet Info-----------
Version : 1.6.0
Directory : /Users/gonzalopolo/.conda/envs/d2l/lib/python3.7/site-packages/mxnet
Num GPUs : 0
Commit Hash : 4da14a22385622c35e9a5c9c3e8a17c07f718cad
----------System Info----------
Platform : Darwin-19.2.0-x86_64-i386-64bit
system : Darwin
node : Gonzalos-MacBook-Pro.local
release : 19.2.0
version : Darwin Kernel Version 19.2.0: Sat Nov 9 03:47:04 PST 2019; root:xnu-6153.61.1~20/RELEASE_X86_64
----------Hardware Info----------
machine : x86_64
processor : i386
b'machdep.cpu.brand_string: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz'
b'machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C'
b'machdep.cpu.leaf7_features: RDWRFSGS TSC_THREAD_OFFSET SGX BMI1 AVX2 SMEP BMI2 ERMS INVPCID FPU_CSDS MPX RDSEED ADX SMAP CLFSOPT IPT SGXLC MDCLEAR TSXFA IBRS STIBP L1DF SSBD'
b'machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT PREFETCHW RDTSCP TSCI'
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0147 sec, LOAD: 1.0370 sec.
Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0005 sec, LOAD: 0.9149 sec.
Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.0511 sec, LOAD: 0.5621 sec.
Timing for D2L: http://d2l.ai, DNS: 0.0405 sec, LOAD: 0.0580 sec.
Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0492 sec, LOAD: 0.4651 sec.
Timing for FashionMNIST: https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0528 sec, LOAD: 0.8639 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0340 sec, LOAD: 0.5830 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0264 sec, LOAD: 0.0592 sec.
```