### 1.

```
x = torch.arange(12, dtype=torch.float32).reshape((3,4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
x, y, x == y, x < y, x > y
```

```
(tensor([[ 0.,  1.,  2.,  3.],
         [ 4.,  5.,  6.,  7.],
         [ 8.,  9., 10., 11.]]),
 tensor([[2., 1., 4., 3.],
         [1., 2., 3., 4.],
         [4., 3., 2., 1.]]),
 tensor([[False,  True, False,  True],
         [False, False, False, False],
         [False, False, False, False]]),
 tensor([[ True, False,  True, False],
         [False, False, False, False],
         [False, False, False, False]]),
 tensor([[False, False, False, False],
         [ True,  True,  True,  True],
         [ True,  True,  True,  True]]))
```

### 2.

```
a = torch.arange(1, 6, dtype=torch.float32).reshape((5, 1))
b = torch.arange(1, 3).reshape((1, 2))
a, b
```

```
(tensor([[1.],
         [2.],
         [3.],
         [4.],
         [5.]]),
 tensor([[1, 2]]))
```

```
a + b
```

```
tensor([[2., 3.],
        [3., 4.],
        [4., 5.],
        [5., 6.],
        [6., 7.]])
```

```
a - b
```

```
tensor([[ 0., -1.],
        [ 1.,  0.],
        [ 2.,  1.],
        [ 3.,  2.],
        [ 4.,  3.]])
```

```
a * b
```

```
tensor([[ 1.,  2.],
        [ 2.,  4.],
        [ 3.,  6.],
        [ 4.,  8.],
        [ 5., 10.]])
```

```
a / b
```

```
tensor([[1.0000, 0.5000],
        [2.0000, 1.0000],
        [3.0000, 1.5000],
        [4.0000, 2.0000],
        [5.0000, 2.5000]])
```

```
a // b
```

```
tensor([[1., 0.],
        [2., 1.],
        [3., 1.],
        [4., 2.],
        [5., 2.]])
```

```
a \ b
```

```
  File "", line 1
    a \ b
        ^
SyntaxError: unexpected character after line continuation character
```

```
a ** b
```

```
tensor([[ 1.,  1.],
        [ 2.,  4.],
        [ 3.,  9.],
        [ 4., 16.],
        [ 5., 25.]])
```

@StevenJokes There is no `\` operator in PyTorch. The backslash is a special character in Python, also called the "escape" character, hence the error.

Let me know if this is not clear.
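To illustrate, here is a minimal sketch of the backslash's two legitimate roles in Python: as an escape character inside string literals, and as a line-continuation character at the end of a line.

```python
# Inside a string literal the backslash introduces an escape sequence.
s = "col1\tcol2"          # \t is a tab character, not a literal backslash

# At the end of a line it continues the statement onto the next line.
total = 1 + 2 \
        + 3
print(total)  # 6
```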

I got it from the doc, but thanks anyway.

I'm a newbie to PyTorch.

`a % b`

```
tensor([[0., 1.],
        [0., 0.],
        [0., 1.],
        [0., 0.],
        [0., 1.]])
```

When having PyTorch selected:

2.1.5. Saving Memory, §3:

`Fortunately, performing in-place operations in MXNet is easy.`

Is it intentional to discuss MXNet even though PyTorch is selected for the code examples?
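For comparison, the same in-place idioms work in PyTorch as well; a minimal sketch (the variable names are illustrative):

```python
import torch

x = torch.arange(4, dtype=torch.float32)
y = torch.ones(4)

before = id(x)
x[:] = x + y   # slice assignment writes into x's existing storage
x += y         # augmented assignment is also performed in place on tensors
print(id(x) == before)  # True: no new tensor was allocated for x
```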

Allow me to point out a small error in Section 2.1.2:

"For stylistic convenience, we can write `x.sum()` as `np.sum(x)`."

This should not appear in the PyTorch version, because it is not possible to run `np.sum(x)` if `x` is a PyTorch tensor.

```
x = torch.arange(12)
np.sum(x)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-30-1393831a87e1> in <module>
1 x = torch.arange(12)
----> 2 np.sum(x)
```
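For the record, the tensor's own `sum` method, or the functional form `torch.sum`, are the PyTorch-native equivalents; a small sketch:

```python
import torch

x = torch.arange(12, dtype=torch.float32)
print(x.sum())       # tensor(66.)
print(torch.sum(x))  # the functional form gives the same result
```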

Thanks @hehao98 for pointing that out. We have already fixed that line in this commit and it will be updated with our next release.


Is it only possible to use broadcasting when one of the two arrays has a dimension of size one?

Hi @jairo.venegas, the rule is a little more general: for each dimension, the sizes must either match or one of them must be 1 (or be missing entirely). This example may give you a better idea!
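To make the rule concrete, a minimal sketch: two dimensions broadcast when they are equal, when one of them is 1, or when one is missing entirely; otherwise the operation fails.

```python
import torch

a = torch.zeros(5, 1)
b = torch.zeros(1, 2)
print((a + b).shape)   # torch.Size([5, 2]): each size-1 axis is stretched

c = torch.zeros(2, 3)
d = torch.zeros(3)     # missing leading dim is treated as size 1
print((c + d).shape)   # torch.Size([2, 3])

e = torch.zeros(2, 3)
f = torch.zeros(2, 4)
try:
    e + f              # 3 vs 4: neither is 1, so broadcasting fails
except RuntimeError:
    print("incompatible shapes")
```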

in 2.1.4. Indexing and Slicing

`X[0:2, :]`

This code is supposed to take the 1st and 2nd rows, so why isn't it written as `X[0:1, :]`?

thanks for responding

In the slicing code, we want to take the 1st row (index = 0) and the 2nd row (index = 1) in the example, but the code runs from index 0 to index 2 (`0:2`).

Shouldn't it be `0:1`?

Slicing is an indexing syntax that extracts a portion of a tensor. `X[m:n]` returns the portion of `X`:

- starting at position `m`
- up to but not including position `n`
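The half-open interval is easy to check directly; a small sketch:

```python
import torch

X = torch.arange(12).reshape(3, 4)
print(X[0:2, :])  # rows at indices 0 and 1; the row at index 2 is excluded
print(X[0:1, :])  # only the first row, still 2-D with shape (1, 4)
```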

thanks man, "up to but not including `n`" is the key that I was looking for