Information Theory

In the cross-entropy implementation

import torch

def cross_entropy(y_hat, y):
    # Pick out the predicted probability of the true class for each example
    ce = -torch.log(y_hat[range(len(y_hat)), y])
    return ce.mean()

why are we returning the mean instead of the sum?


Hi @sushmit86, great question! Cross-entropy loss is defined as an expectation over the distribution of a random variable X. Averaging over the batch gives an estimate of that expectation (the expected loss per example), whereas the sum would grow with the batch size. That's why we use the mean instead of the sum.
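To make the difference concrete, here is a small sketch (with made-up example values, not from the book) comparing the two reductions; it also checks the manual version against PyTorch's built-in `F.nll_loss`, which computes the same per-example negative log-likelihood and defaults to `reduction='mean'`:

```python
import torch
import torch.nn.functional as F

# Toy batch: 3 examples, 4 classes. Each row of y_hat is a valid
# probability distribution (illustrative values only).
y_hat = torch.tensor([[0.1, 0.6, 0.2, 0.1],
                      [0.7, 0.1, 0.1, 0.1],
                      [0.2, 0.2, 0.5, 0.1]])
y = torch.tensor([1, 0, 2])  # true class indices

# Negative log-probability of the true class, per example
per_example = -torch.log(y_hat[range(len(y_hat)), y])

mean_loss = per_example.mean()  # estimate of the expected loss per example
sum_loss = per_example.sum()    # scales linearly with the batch size

# F.nll_loss takes log-probabilities and averages by default,
# matching the mean version of cross_entropy above.
builtin_mean = F.nll_loss(torch.log(y_hat), y)
```

The mean is what you want when comparing losses across runs with different batch sizes, since the sum would change just by adding more examples.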

From my point of view, in one-hot encoding (0, 1, …, 0, 0), the entry y_i = 1 is a probability, and the expression sum_i -y_i * log(y_hat_i) is the cross-entropy for a single example (a sum over classes, not a mean), as in equation (3.4.8) in the 'loss function' subsection of "softmax regression" in chapter 3, "linear neural networks."
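This also shows why the indexing trick in the question's code works: with a one-hot label, every term of the sum over classes vanishes except the one for the true class. A minimal sketch with illustrative values of my own:

```python
import torch

# One example with 3 classes; values are illustrative, not from the book.
y_hat = torch.tensor([0.2, 0.7, 0.1])      # predicted distribution
y_onehot = torch.tensor([0.0, 1.0, 0.0])   # true label = class 1, one-hot

# Cross-entropy as in (3.4.8): sum over classes of -y_i * log(y_hat_i)
ce_sum = -(y_onehot * torch.log(y_hat)).sum()

# Only the true-class term survives, so this equals -log(y_hat[1]),
# which is exactly what y_hat[range(len(y_hat)), y] selects in a batch.
ce_pick = -torch.log(y_hat[1])
```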

Great article! And it can be slightly improved:

- Properties of Entropy: fix LaTeX rendering
- Applications of Mutual Information: fix typo in the 1st sentence: "in it pure definition" --> "in its pure definition"