Data Manipulation

https://zh.d2l.ai/chapter_preliminaries/ndarray.html

Can someone explain the result of exercise 1?

It works the same way as X == Y: the comparison operator is applied elementwise.

Tensors in TensorFlow can be updated with the tf.tensor_scatter_nd_update function, so they aren't completely immutable; updating them is just more cumbersome.
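A minimal sketch of such an update (my own example, not from the book); note that tf.tensor_scatter_nd_update returns a new tensor rather than modifying the input in place:

```python
import tensorflow as tf

# Replace X[0, 0] with 5 and X[1, 2] with 9.
# The call returns a new tensor; the original X is unchanged.
X = tf.zeros((3, 4))
X_new = tf.tensor_scatter_nd_update(
    X, indices=[[0, 0], [1, 2]], updates=[5.0, 9.0])
print(X_new)
```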

exercise 1:
X > Y:
<tf.Tensor: shape=(3, 4), dtype=bool, numpy=
array([[False, False, False, False],
       [ True,  True,  True,  True],
       [ True,  True,  True,  True]])>

X < Y:
<tf.Tensor: shape=(3, 4), dtype=bool, numpy=
array([[ True, False,  True, False],
       [False, False, False, False],
       [False, False, False, False]])>
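For reference, a sketch to reproduce the outputs above, assuming X and Y as defined earlier in the chapter:

```python
import tensorflow as tf

# X and Y as defined in the chapter's comparison example.
X = tf.reshape(tf.range(12, dtype=tf.float32), (3, 4))
Y = tf.constant([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
print(X > Y)
print(X < Y)
```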

exercise 2:
input:
c = tf.reshape(tf.range(3), (3, 1, 1))
d = tf.reshape(tf.range(2), (1, 2, 1))
c, d

c+d

output:
(<tf.Tensor: shape=(3, 1, 1), dtype=int32, numpy=
array([[[0]],

       [[1]],

       [[2]]])>,

<tf.Tensor: shape=(1, 2, 1), dtype=int32, numpy=
array([[[0],
        [1]]])>)

<tf.Tensor: shape=(3, 2, 1), dtype=int32, numpy=
array([[[0],
        [1]],

       [[1],
        [2]],

       [[2],
        [3]]])>

So the two tensors are broadcast to the common shape (3, 2, 1) and then added elementwise, as expected.
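The broadcasting step can be made explicit with tf.broadcast_to (a small check of my own, not from the thread):

```python
import tensorflow as tf

c = tf.reshape(tf.range(3), (3, 1, 1))
d = tf.reshape(tf.range(2), (1, 2, 1))
# Expand both operands to the common shape (3, 2, 1) by hand,
# then verify the explicit sum matches the implicit broadcast.
c_full = tf.broadcast_to(c, (3, 2, 1))
d_full = tf.broadcast_to(d, (3, 2, 1))
same = tf.reduce_all(c_full + d_full == c + d)
print(bool(same))  # True
```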

If you get a different output in exercise 1, check the value of Y: it is modified by a later command in the book.

exercise 2:
code:

a = tf.reshape(tf.range(6), (3, 1, 2))
print(a)
b = tf.reshape(tf.range(2), (1, 2))
print(b)
a + b

out:

tf.Tensor(
[[[0 1]]

 [[2 3]]

 [[4 5]]], shape=(3, 1, 2), dtype=int32)
tf.Tensor([[0 1]], shape=(1, 2), dtype=int32)
<tf.Tensor: shape=(3, 1, 2), dtype=int32, numpy=
array([[[0, 2]],

       [[2, 4]],

       [[4, 6]]])>

Just like above, a has shape (3, 1, 2) and b has shape (1, 2).
Under the broadcasting mechanism, to compute a + b, both operands are first broadcast to a common shape, here (3, 1, 2). Specifically, b is broadcast to

tf.Tensor(
[[[0 1]]

 [[0 1]]

 [[0 1]]], shape=(3, 1, 2), dtype=int32)
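This intermediate tensor can be checked with tf.broadcast_to (my own verification sketch, not from the thread):

```python
import tensorflow as tf

a = tf.reshape(tf.range(6), (3, 1, 2))
b = tf.reshape(tf.range(2), (1, 2))
# b is first expanded to the common shape (3, 1, 2); adding the
# expanded copy gives the same result as the implicit broadcast.
b_full = tf.broadcast_to(b, (3, 1, 2))
same = tf.reduce_all(a + b_full == a + b)
print(bool(same))  # True
```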

Question: why does the book say that "gradients in TensorFlow do not propagate backwards through a Variable"? Can't variables be updated dynamically?
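A sketch that may clarify the quoted sentence (my own example, under the assumption that it refers to assign() updates): gradients do flow through *reads* of a Variable, but an assignment is not recorded as a differentiable operation, so the gradient chain is cut there.

```python
import tensorflow as tf

w = tf.Variable(1.0)
v = tf.Variable(2.0)

# Reading a Variable is differentiable: dy/dv = 2v = 4.
with tf.GradientTape() as tape:
    y = v * v
g_read = tape.gradient(y, v)

# Assigning to a Variable is not differentiated through:
# the tape cannot trace y back to w across v.assign(...).
with tf.GradientTape() as tape:
    v.assign(w * 3.0)
    y = v * v
g_through_assign = tape.gradient(y, w)
print(g_read)            # 4.0
print(g_through_assign)  # None
```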