Hello guys,
In Section 2.1.6 (Conversion to Other Python Objects), what do you mean by the line in bold below? Unfortunately I can't understand it, and I need some help. Thanks!
Converting to a NumPy tensor, or vice versa, is easy. The converted result does not share memory. This minor inconvenience is actually quite important: when you perform operations on the CPU or on GPUs, you do not want to halt computation, waiting to see whether the NumPy package of Python might want to be doing something else with the same chunk of memory.
Does this mean that NumPy and PyTorch are completely independent and may conflict with each other?
Edit: According to the PyTorch documentation:
Converting a torch Tensor to a NumPy array and vice versa is a breeze. The torch Tensor and NumPy array will share their underlying memory locations, and changing one will change the other.
It says this conversion shares memory, but the book says it does not. Am I right that these two statements contradict each other?
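For reference, here is a small sketch one can run to check the behaviour (my own experiment, not from the book; it assumes CPU tensors, since GPU tensors can never share memory with NumPy):

```python
import numpy as np
import torch

# Tensor -> NumPy via .numpy(): the array is a view of the tensor's storage.
x = torch.ones(3)
a = x.numpy()
a[0] = 42
print(x)  # tensor([42., 1., 1.]) -- the tensor sees the change

# NumPy -> Tensor via torch.from_numpy(): also shares memory.
b = np.zeros(3)
y = torch.from_numpy(b)
b[1] = 7
print(y)  # tensor([0., 7., 0.], dtype=torch.float64)

# NumPy -> Tensor via torch.tensor(): always copies the data.
c = np.zeros(3)
z = torch.tensor(c)
c[2] = 5
print(z)  # tensor([0., 0., 0.], dtype=torch.float64) -- unchanged
```

So whether memory is shared seems to depend on which conversion function is used, which might explain the different wording in the book and in the docs.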