# Linear Algebra

1. Pass.

2. Pass.

3. Yes, pass.

4. First dimension: 2.

5. First dimension.

6. The shapes do not match!


A = torch.arange(20, dtype=torch.float32).reshape(5, 4)

A / A.sum(axis=1)



RuntimeError: The size of tensor a (4) must match the size of tensor b (5) at non-singleton dimension 1
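A minimal sketch (standard PyTorch broadcasting rules) of why the shapes clash, and how keeping the reduced axis fixes it:

```python
import torch

A = torch.arange(20, dtype=torch.float32).reshape(5, 4)

# A.sum(axis=1) has shape (5,): its trailing dimension (5) cannot be
# broadcast against A's trailing dimension (4), hence the RuntimeError.
row_sums = A.sum(axis=1)
print(A.shape, row_sums.shape)  # torch.Size([5, 4]) torch.Size([5])

# keepdim=True keeps the reduced axis as size 1, so (5, 4) / (5, 1) broadcasts.
normalized = A / A.sum(axis=1, keepdim=True)
print(normalized.sum(axis=1))   # each row now sums to 1
```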

With a square matrix, it runs fine:


B = torch.arange(25, dtype=torch.float32).reshape(5, 5)

B / B.sum(axis=1)



tensor([[0.0000, 0.0286, 0.0333, 0.0353, 0.0364],
        [0.5000, 0.1714, 0.1167, 0.0941, 0.0818],
        [1.0000, 0.3143, 0.2000, 0.1529, 0.1273],
        [1.5000, 0.4571, 0.2833, 0.2118, 0.1727],
        [2.0000, 0.6000, 0.3667, 0.2706, 0.2182]])
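Note that although the square-matrix version runs without error, the (5,) sum vector is broadcast along the last axis, so each entry ends up divided by the sum of the row whose index matches its *column*. A quick sketch to check this:

```python
import torch

B = torch.arange(25, dtype=torch.float32).reshape(5, 5)
C = B / B.sum(axis=1)            # runs, but broadcasts along the wrong axis

row_sums = B.sum(axis=1)         # tensor([ 10.,  35.,  60.,  85., 110.])
# C[i, j] == B[i, j] / row_sums[j], i.e. division by a row vector of sums
print(torch.allclose(C, B / row_sums.reshape(1, 5)))   # True
print(torch.allclose(C.sum(axis=1), torch.ones(5)))    # False: rows don't sum to 1
```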


1. Walk: Manhattan distance, the ℓ1 norm.

# distances of avenues and streets

dist_ave = 30.0

dist_str = 40.0

dis_2pt = torch.tensor([dist_ave, dist_str])

torch.abs(dis_2pt).sum()



Yes, you can: flying straight along the diagonal gives the ℓ2 norm.


torch.norm(dis_2pt)



tensor(50.)

1. The resulting shape is the original tensor's shape with the summed axis removed.

X.sum(axis=0).size()  # torch.Size([3, 4])

X.sum(axis=1).size()  # torch.Size([2, 4])

X.sum(axis=2).size()  # torch.Size([2, 3])
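Assuming `X` is the (2, 3, 4) tensor used in the chapter, the "drop the summed axis" rule can be verified directly:

```python
import torch

# Assumption: X is the (2, 3, 4) tensor from the chapter's examples
X = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4)

# Summing along an axis removes exactly that axis from the shape
assert X.sum(axis=0).shape == torch.Size([3, 4])
assert X.sum(axis=1).shape == torch.Size([2, 4])
assert X.sum(axis=2).shape == torch.Size([2, 3])
```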

1. $\|\mathbf{x}\|_{2}=\sqrt{\sum_{i=1}^{n} x_{i}^{2}}$

Y = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4)

torch.norm(Y)



tensor(65.7571)


import math

# Sum of squares of 0..23, then take the square root
i = 0
for j in range(24):
    i += j**2

print(math.sqrt(i))



65.75712889109438

The numbers are the same.
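The same cross-check can be done entirely in torch, without the Python loop (a sketch using the tensor from above):

```python
import torch

Y = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4)

# The Frobenius/L2 norm is the square root of the sum of squared elements
manual = torch.sqrt((Y ** 2).sum())
print(torch.allclose(torch.norm(Y), manual))  # True
```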

For more:

http://d2l.ai/chapter_preliminaries/linear-algebra.html#equation-chapter-preliminaries-linear-algebra-0

The matrix should be indexed via $a_{nm}$ instead of $a_{mn}$, since the original matrix uses $a_{mn}$ and this is the transposed version.

Hi @manuel-arno-korfmann, could you specify which matrix you are referring to?

Hey @goldpiggy, the link given includes a # fragment that links directly to “the matrix”. Does that work for you?

Hi @manuel-arno-korfmann, you mean the matrix indices like $a_{12}$ and $a_{21}$? The positions of the indices are flipped, while the entries keep their original values. Ultimately, $a_{mn}$ and $a_{nm}$ refer to different entries of the original matrix.

@manuel-arno-korfmann
I think @goldpiggy is right.
A_mn:

AT_nm:
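The index flip under transposition can be checked with a small sketch (a hypothetical 2×3 example, not the matrix from the linked page):

```python
import torch

A = torch.arange(6).reshape(2, 3)   # a 2x3 matrix
AT = A.T                             # its 3x2 transpose

# Entry (i, j) of A equals entry (j, i) of A.T: the indices swap,
# but each value stays the same.
for i in range(2):
    for j in range(3):
        assert A[i, j] == AT[j, i]
```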

It’s said that “By default, invoking the function for calculating the sum reduces a tensor along all its axes to a scalar. We can also specify the axes along which the tensor is reduced via summation. Take matrices as an example. To reduce the row dimension (axis 0) by summing up elements of all the rows, we specify axis=0 when invoking the function.” Are you sure? Look at my code:

A = torch.arange(25).reshape(5,5)

A, A.sum(axis = 1)

(tensor([[ 0,  1,  2,  3,  4],
         [ 5,  6,  7,  8,  9],
         [10, 11, 12, 13, 14],
         [15, 16, 17, 18, 19],
         [20, 21, 22, 23, 24]]),
 tensor([ 10,  35,  60,  85, 110]))

When axis = 1, all elements in row 0 are added.
Otherwise (axis = 0), all elements in the columns are added.

I didn’t understand that. Could you explain it with more details, please?

Maybe this reply is too late but here you go.

Your understanding of row vs column is wrong.

In,

tensor([[ 0,  1,  2,  3,  4],
        [ 5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14],
        [15, 16, 17, 18, 19],
        [20, 21, 22, 23, 24]])


[ 0, 1, 2, 3, 4] is a column and [ 0, 5, 10, 15, 20] is a row.

It feels like the following solution is more appropriate for question 6, as there is no change to the actual values of A as in your answer.

A = torch.arange(20, dtype=torch.float32).reshape(5, 4)

A / A.sum(axis=1, keepdim=True)


After using keepdim=True, broadcasting happens and the dimensionality error disappears.

Your understanding of row vs column is wrong.

Are you sure you don’t have it backwards?

A = torch.arange(25, dtype=torch.float32).reshape(5, 5)

"""
>>> A
tensor([[ 0.,  1.,  2.,  3.,  4.],
        [ 5.,  6.,  7.,  8.,  9.],
        [10., 11., 12., 13., 14.],
        [15., 16., 17., 18., 19.],
        [20., 21., 22., 23., 24.]])

>>> A[0]
tensor([0., 1., 2., 3., 4.])

>>> A[:,0] # Fix column 0, run across rows
tensor([ 0.,  5., 10., 15., 20.])

>>> A[0, :] # Fix row 0, run across columns
tensor([0., 1., 2., 3., 4.])
"""



About the exercises: are the questions meant to be solved in code or mathematically?

Can anybody solve No. 9 in the exercises?

Hi, I gave it a shot and here is what I found:

The torch.linalg.norm function computes the L2-norm no matter the rank of a Tensor. In other words, it squares all the elements of a Tensor, sums them up, and reports the square root of the sum.

I hope this helps. Thanks.
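That claim can be verified with a small sketch: with default arguments, `torch.linalg.norm` flattens the tensor and returns the 2-norm of the resulting vector.

```python
import torch

# With no ord/dim arguments, torch.linalg.norm flattens the tensor
# and computes the vector 2-norm: sqrt of the sum of squared elements.
T = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4)
flat_l2 = torch.sqrt((T ** 2).sum())
print(torch.allclose(torch.linalg.norm(T), flat_l2))  # True
```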