Probability

As a note for the authors: the citation of Revels et al. (2016) in 2.6.7 looks wrong. Perhaps this was supposed to be Kaplan et al. (2020) or Hoffmann et al. (2022).

My answers to these exercises:

  1. Any entirely deterministic process - for example, determining the weight in kilograms of an arbitrary quantity of lithium.
  2. Any process with stochastic components - for example, predicting tomorrow’s stock prices. One can reach a certain level of accuracy by closely following news events and filings and by getting good at modeling, but there’s always uncertainty as to the exact decisions others will make. One might argue against such processes existing on fatalist grounds!
  3. The variance of \hat p is p(1-p)/n, which scales as 1/n, where n is the number of observations. Using Chebyshev’s inequality, we can bound \hat p with P\left(|\hat p - p| \ge k \cdot \sqrt{\frac{p(1-p)}{n}}\right) \le \frac{1}{k^2} (with p = 0.5 in our case, assuming the coin is fair). Chebyshev’s inequality gives a distribution-free bound, but as n grows (typically for n > 30), the central limit theorem tells us that \hat p \approx \mathcal{N}\left(p, \frac{p(1-p)}{n}\right), which is much tighter.
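As a quick sanity check (a sketch - the number of flips and the trial count are arbitrary choices of mine), the empirical variance of \hat p matches p(1-p)/n closely:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, trials = 0.5, 1000, 100_000

# Each draw is the number of heads in n fair-coin flips; dividing by n gives p-hat.
p_hat = rng.binomial(n, p, size=trials) / n

print(p_hat.var())       # empirical variance of the estimator
print(p * (1 - p) / n)   # theoretical value: 0.00025
```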
  4. I’m not sure if I’m interpreting the phrase “compute the averages” correctly, but I wrote the following snippet, which computes all the running means z_m in one pass:

import numpy as np

n = 100
z = np.random.randn(n).cumsum() / np.arange(1, n + 1)  # z[m-1] is the mean of the first m draws

As for the second part of the question - Chebyshev’s inequality always holds for any single random variable with finite variance, so you can apply it to each specific z_m on its own. What you cannot do is treat the resulting bounds as independent events (e.g., multiply them to bound all the z_m simultaneously). This is because the z_ms are not independent - they share most of the same underlying terms!
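To make the dependence concrete, here’s a small simulation (variable names are my own) estimating the correlation between z_{50} and z_{100} across independent runs; for i.i.d. standard normal draws it should be about \sqrt{50/100} \approx 0.71, nowhere near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
runs, n = 10_000, 100

# Each row is an independent sequence of draws; z[:, m-1] is the running mean z_m.
x = rng.standard_normal((runs, n))
z = x.cumsum(axis=1) / np.arange(1, n + 1)

# Correlation between z_50 and z_100 across runs; theory predicts sqrt(50/100) ≈ 0.71.
corr = np.corrcoef(z[:, 49], z[:, 99])[0, 1]
print(corr)
```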
  5. For P(A \cup B), the lower bound is max(P(A), P(B)) and the upper bound is min(1, P(A) + P(B)). For P(A \cap B), the lower bound is max(0, P(A) + P(B) - 1) (remember that P(A) + P(B) can be larger than 1) and the upper bound is min(P(A), P(B)).
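These bounds are easy to check with a quick Monte Carlo sketch (the two events here are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(1_000_000)

# Two overlapping events defined on a uniform sample.
A = u < 0.5                 # P(A) = 0.5
B = (u > 0.2) & (u < 0.8)   # P(B) = 0.6
pA, pB = A.mean(), B.mean()
p_union = (A | B).mean()
p_inter = (A & B).mean()

print(max(pA, pB) <= p_union <= min(1, pA + pB))      # True
print(max(0, pA + pB - 1) <= p_inter <= min(pA, pB))  # True
```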
  6. By the chain rule, the joint probability factors as P(A, B, C) = P(C|A, B) * P(B|A) * P(A). If C is conditionally independent of A given B (i.e., the variables form a Markov chain), the first factor simplifies to P(C|B), giving P(C|B) * P(B|A) * P(A). I suspect that conditional-independence simplification is what the question is looking for.
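The chain-rule factorization can be verified numerically on a random discrete joint distribution (a sketch; the 2×2×2 shape is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random joint distribution P(A, B, C) over three binary variables.
joint = rng.random((2, 2, 2))
joint /= joint.sum()

p_a = joint.sum(axis=(1, 2))                             # P(A)
p_b_given_a = joint.sum(axis=2) / p_a[:, None]           # P(B|A)
p_c_given_ab = joint / joint.sum(axis=2, keepdims=True)  # P(C|A,B)

# Reassembling P(A) * P(B|A) * P(C|A,B) recovers the joint exactly.
rebuilt = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_ab
print(np.allclose(rebuilt, joint))  # True
```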
  7. We know that the joint probabilities involving D_i = 1 must sum to each test’s false positive rate of 0.1, and that the four joint probabilities must sum to 1. So the mixed probabilities are P(D_1=1, D_2=0 | H=0) = P(D_1=0, D_2=1 | H=0) = 0.08, and P(D_1=0, D_2=0 | H=0) = 0.82. For 7.2, I obtained 1.47%. For 7.3, I obtained 6.86%.
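To double-check the arithmetic (assuming, as I read the exercise’s table, that P(D_1=1, D_2=1 | H=0) = 0.02):

```python
# Completed conditional table P(D_1, D_2 | H=0). The 0.02 entry for both tests
# coming back positive is my reading of the exercise's table (an assumption);
# the other three entries are the derived ones.
table = {
    (1, 1): 0.02,
    (1, 0): 0.08,
    (0, 1): 0.08,
    (0, 0): 0.82,
}

total = sum(table.values())
fpr_1 = table[(1, 1)] + table[(1, 0)]  # marginal false positive rate of test 1
fpr_2 = table[(1, 1)] + table[(0, 1)]  # marginal false positive rate of test 2
print(total, fpr_1, fpr_2)  # the entries sum to 1; each marginal is 0.1
```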
  8. The expected return of a given portfolio \boldsymbol \alpha is \boldsymbol \alpha^\top \boldsymbol \mu. To maximize the expected return alone, one should find the largest entry \mu_i and invest the entire portfolio in the corresponding asset - \boldsymbol \alpha should have a single non-zero entry at that index. The variance of the portfolio is \boldsymbol \alpha^\top \Sigma \boldsymbol \alpha. So the optimization problem described can be formalized as: maximize \boldsymbol \alpha^\top \boldsymbol \mu subject to \boldsymbol \alpha^\top \Sigma \boldsymbol \alpha \le \sigma^2_{\max} for some maximum tolerated variance \sigma^2_{\max}, with \sum_{i=1}^n \alpha_i = 1 and \alpha_i \ge 0.
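This constrained problem can be sketched numerically, e.g. with SciPy’s SLSQP solver; the values of \boldsymbol \mu, \Sigma, and the variance budget below are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy inputs (my own made-up numbers): expected returns and a covariance
# matrix for three assets, plus a maximum tolerated portfolio variance.
mu = np.array([0.05, 0.10, 0.07])
sigma = np.array([
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.05],
])
var_budget = 0.05

res = minimize(
    lambda a: -mu @ a,        # maximize expected return (minimize its negation)
    x0=np.full(3, 1 / 3),     # start from the equal-weight portfolio
    method="SLSQP",
    bounds=[(0, 1)] * 3,      # alpha_i >= 0 (the upper bound is implied by the sum)
    constraints=[
        {"type": "eq", "fun": lambda a: a.sum() - 1},                   # weights sum to 1
        {"type": "ineq", "fun": lambda a: var_budget - a @ sigma @ a},  # variance cap
    ],
)
alpha = res.x
print(alpha, mu @ alpha, alpha @ sigma @ alpha)
```

If the variance budget is loosened past the variance of the single best asset, the solver concentrates the whole portfolio there, recovering the unconstrained answer above.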