Linear Algebra

http://d2l.ai/chapter_preliminaries/linear-algebra.html

  1. Pass.

  2. Pass.

  3. Yes, pass.

  4. First dimension: 2.

  5. Always the first dimension (axis 0).

  6. The shapes do not match! See the code below.
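
For answers 4 and 5, a quick check (a sketch; X is assumed to be the (2, 3, 4) tensor from the exercises):

import torch

X = torch.arange(24).reshape(2, 3, 4)

len(X)  # 2: len() returns the size of the first axis (axis 0)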


A = torch.arange(20, dtype=torch.float32).reshape(5, 4)

A / A.sum(axis=1)  # A.sum(axis=1) has shape (5,), which cannot broadcast against (5, 4)

RuntimeError: The size of tensor a (4) must match the size of tensor b (5) at non-singleton dimension 1


With a square matrix, however, the division runs without error, since a (5,)-shaped vector can broadcast against a (5, 5) matrix.


B = torch.arange(25, dtype=torch.float32).reshape(5, 5)

B / B.sum(axis=1)

tensor([[0.0000, 0.0286, 0.0333, 0.0353, 0.0364],
        [0.5000, 0.1714, 0.1167, 0.0941, 0.0818],
        [1.0000, 0.3143, 0.2000, 0.1529, 0.1273],
        [1.5000, 0.4571, 0.2833, 0.2118, 0.1727],
        [2.0000, 0.6000, 0.3667, 0.2706, 0.2182]])
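
Note, however, that the (5,)-shaped sum vector broadcasts along the last dimension, so entry (i, j) is divided by the sum of row j rather than row i. For a proper row normalization, keep the reduced axis (a sketch):

B / B.sum(axis=1, keepdim=True)  # entry (i, j) divided by the sum of row i; each row sums to 1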

  1. Walking: Manhattan distance, i.e., the ℓ1 norm.

# distances of avenues and streets

dist_ave = 30.0

dist_str = 40.0

dis_2pt = torch.tensor([dist_ave, dist_str])

torch.abs(dis_2pt).sum()
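
tensor(70.)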

  2. Flying: you can travel straight along the diagonal, i.e., the ℓ2 norm.


torch.norm(dis_2pt)

tensor(50.)

  1. The shape is the shape of the original tensor with the summed axis removed.

Assuming X = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4):

X.sum(axis=0).size()  # torch.Size([3, 4])

X.sum(axis=1).size()  # torch.Size([2, 4])

X.sum(axis=2).size()  # torch.Size([2, 3])

  1. $\|\mathbf{x}\|_{2}=\sqrt{\sum_{i=1}^{n} x_{i}^{2}}$. For a tensor of arbitrary shape, torch.norm computes this over all of its entries (the Frobenius norm).

Y = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4)

torch.norm(Y)

tensor(65.7571)


import math

# sum of squares of 0, 1, ..., 23
total = 0
for j in range(24):
    total += j ** 2

print(math.sqrt(total))

65.75712889109438

The numbers are the same.
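
Equivalently, the check can be done without the loop (a quick sketch):

torch.sqrt((Y ** 2).sum())  # tensor(65.7571), matches torch.norm(Y)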


For more:

  1. https://pytorch.org/docs/master/generated/torch.norm.html

  2. https://www.cnblogs.com/wanghui-garcia/p/11266298.html

http://d2l.ai/chapter_preliminaries/linear-algebra.html#equation-chapter-preliminaries-linear-algebra-0

The matrix should be indexed $a_{nm}$ instead of $a_{mn}$, since the original matrix is indexed $a_{mn}$ and this one is its transpose.

Hi @manuel-arno-korfmann, could you specify which matrix you are referring to?

Hey @goldpiggy, the given link includes a # fragment that links directly to “the matrix”. Does that work for you?

Hi @manuel-arno-korfmann, you mean the matrix indices like $a_{12}$ and $a_{21}$? The index positions are flipped, while the entries keep their original values. In general, $a_{mn}$ and $a_{nm}$ are different values in the original matrix.

@manuel-arno-korfmann
I think @goldpiggy is right.
$\mathbf{A}$, indexed $a_{mn}$: [matrix image]
$\mathbf{A}^\top$, indexed $a_{nm}$: [matrix image]
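
A small check makes the transpose indexing concrete (a sketch):

A = torch.arange(6).reshape(2, 3)

A[0, 2], A.T[2, 0]  # both tensor(2), since A.T[n, m] == A[m, n]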

It's said that “By default, invoking the function for calculating the sum reduces a tensor along all its axes to a scalar. We can also specify the axes along which the tensor is reduced via summation. Take matrices as an example. To reduce the row dimension (axis 0) by summing up elements of all the rows, we specify axis=0 when invoking the function.” Are you sure? Look at my code:

A = torch.arange(25).reshape(5, 5)

A, A.sum(axis=1)

(tensor([[ 0,  1,  2,  3,  4],
         [ 5,  6,  7,  8,  9],
         [10, 11, 12, 13, 14],
         [15, 16, 17, 18, 19],
         [20, 21, 22, 23, 24]]),
 tensor([ 10,  35,  60,  85, 110]))

When axis=1, all elements in row 0 are added (and likewise for the other rows).
Otherwise (axis=0), all elements in each column are added.

I didn’t understand that. Could you explain it with more details, please?

Maybe this reply is too late, but here you go.

The confusion is between “summing the rows” and “summing along the row axis”.

In

tensor([[ 0,  1,  2,  3,  4],
        [ 5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14],
        [15, 16, 17, 18, 19],
        [20, 21, 22, 23, 24]])

[ 0, 1, 2, 3, 4] is a row and [ 0, 5, 10, 15, 20] is a column. axis=0 collapses the row axis by summing the rows elementwise, leaving one sum per column; axis=1 collapses the column axis, leaving one sum per row, which is the tensor([ 10, 35, 60, 85, 110]) you got. So your output is consistent with the book's description.
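
Concretely, for the matrix above (a sketch):

A.sum(axis=0)  # tensor([50, 55, 60, 65, 70]): one sum per column

A.sum(axis=1)  # tensor([ 10,  35,  60,  85, 110]): one sum per row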

It feels like the following solution is more appropriate for question 6, as there is no change to the actual values of A like in your answer.

A = torch.arange(20, dtype=torch.float32).reshape(5, 4)

A / A.sum(axis=1, keepdim=True)

After using keepdim=True, the sum keeps shape (5, 1), broadcasting happens, and the dimensionality error disappears.
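
Each row of the normalized result then sums to 1 (a quick check):

A = torch.arange(20, dtype=torch.float32).reshape(5, 4)

(A / A.sum(axis=1, keepdim=True)).sum(axis=1)  # tensor([1., 1., 1., 1., 1.])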