Data Manipulation

http://d2l.ai/chapter_preliminaries/ndarray.html


Hi, I had an issue while trying to follow along with this.
I had installed everything as per the instructions in the installation chapter, but when I try to import tensorflow in a new Jupyter notebook, it gives the following error:


Can someone help me with this?

Hi @ikjot-2605, you may need to run conda activate d2l before you start jupyter notebook. If you did follow the installation instructions, check pip list and see whether tensorflow is properly installed there.

Thank you so much @goldpiggy, the issue is resolved.

Hi,

The documentation states that by default the tensor will hold values of dtype float.
But in TensorFlow, the resulting tensor has dtype int32, which is an integer type.
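For reference, here is a small check I put together myself (assuming the tensor is created with tf.range(12) as in the book). The default dtype follows from how the tensor is created, and you can request floats explicitly:

```python
import tensorflow as tf

x = tf.range(12)                    # integer inputs -> int32 by default
print(x.dtype)                      # <dtype: 'int32'>

y = tf.range(12, dtype=tf.float32)  # ask for floating point explicitly
print(y.dtype)                      # <dtype: 'float32'>

z = tf.constant([1.0, 2.0, 3.0])    # float literals -> float32
print(z.dtype)                      # <dtype: 'float32'>
```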

" Here, we produced the vector-valued F:Rd,Rd→RdF:Rd,Rd→Rd by lifting the scalar function to an elementwise vector operation." Can someone elaborate on what is meant by lifting?

In our case, instead of calling x.reshape(3, 4) , we could have equivalently called x.reshape(-1, 4) or x.reshape(3, -1). Can someone elaborate more on this?

That means you specify the dimension you care about ((3, -1) or (-1, 4)) and set the other one to -1, and the framework infers the missing dimension from the total number of elements.
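A small illustration (my own sketch, using tf.reshape as in the TensorFlow version of the chapter):

```python
import tensorflow as tf

x = tf.range(12)                  # 12 elements in total

a = tf.reshape(x, (3, 4))         # explicit shape
b = tf.reshape(x, (-1, 4))        # number of rows inferred: 12 / 4 = 3
c = tf.reshape(x, (3, -1))        # number of columns inferred: 12 / 3 = 4

print(a.shape, b.shape, c.shape)  # (3, 4) (3, 4) (3, 4)
```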


Section: Saving Memory (TensorFlow)

How will Z be pruned from the computation function?
I didn’t understand how the compiler determines which variables won’t be used.
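For context, the function the section refers to looks roughly like this (a sketch modeled on the book's Saving Memory example, not an explanation of the compiler internals). Z is computed but never feeds into the returned value, so when TensorFlow traces the decorated function it can drop that node from the computation graph:

```python
import tensorflow as tf

@tf.function
def computation(X, Y):
    Z = tf.zeros_like(Y)  # never used below, so it can be pruned from the graph
    A = X + Y
    B = A + Y
    C = B + Y
    return C + Y          # only the chain leading to the return value is kept
```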

Section: Conversion to other Python objects (TensorFlow)

I didn’t understand this line “This minor inconvenience is actually quite important: when you perform operations on the CPU or on GPUs, you do not want to halt computation, waiting to see whether the NumPy package of Python might want to be doing something else with the same chunk of memory.”

Can anyone please explain this?


@Chandan_Kumar
As far as I understand, it means that the TensorFlow tensor and the NumPy array should not share memory. If they both performed operations on the same memory space, it would lead to consistency issues and errors. Computation is not halted in this process, since neither TensorFlow nor NumPy has to wait for the other to finish with shared memory; each uses its own memory location.
Hope that answers your question :)
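A quick sketch of the conversions that section refers to (my own example; per the book, the converted result does not share memory with the original):

```python
import tensorflow as tf

X = tf.constant([[1.0, 2.0], [3.0, 4.0]])

A = X.numpy()         # TensorFlow tensor -> NumPy ndarray
B = tf.constant(A)    # NumPy ndarray -> TensorFlow tensor

print(type(A), type(B))  # numpy.ndarray and an eager tf.Tensor
```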