As a note for the authors: the citation of Revels et al. 2016 in 2.6.7 looks wrong. Maybe this was supposed to be Kaplan et al. 2020 or Hoffmann et al. 2022.
My answers to these exercises:
1. Any entirely deterministic process, for example determining the weight in kilograms of an arbitrary quantity of lithium.
2. Any process with stochastic components, for example predicting tomorrow's stock prices. One can reach a certain level of accuracy by closely following news events and filings and getting good at modeling, but there's always uncertainty about the exact decisions others will make. One might argue against such processes existing on fatalist grounds!
3. The variance is equal to $p(1-p)/n$. This means the variance scales with $1/n$, where $n$ is the number of observations. Using Chebyshev's inequality, we can bound $\hat{p}$ with $P\left(|\hat p - p| \ge k \cdot \sqrt{\frac{p(1-p)}{n}}\right) \le \frac{1}{k^2}$ (with $p=0.5$ in our case, assuming the coin is fair). Chebyshev's inequality gives us a distribution-free bound, but as $n$ grows (typically for $n > 30$), the central limit theorem tells us that $\hat p \approx \mathcal{N}\left(p, \frac{p(1-p)}{n}\right)$.
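This scaling can be checked empirically. A minimal sketch (the trial count, sample size, and $k$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, trials = 0.5, 1000, 10_000

# Each row is one experiment of n fair-coin flips; p_hat is the per-experiment mean.
p_hat = rng.binomial(1, p, size=(trials, n)).mean(axis=1)

sigma = np.sqrt(p * (1 - p) / n)  # std of p_hat predicted by the variance formula
k = 2.0

# Chebyshev guarantees at most 1/k^2 = 25% of estimates land k sigmas from p;
# the normal approximation predicts about 4.6%, and the empirical rate sits near
# the latter, showing how loose the distribution-free bound is.
frac_outside = np.mean(np.abs(p_hat - p) >= k * sigma)
print(frac_outside)
```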
4. I'm not sure if I'm interpreting the phrase "compute the averages" correctly, but I wrote the following snippet:

```python
import numpy as np

n = 100
# z_m = average of the first m draws, for m = 1, ..., n
z = np.random.randn(n).cumsum() / np.arange(1, n + 1)
```
As for the second part of the question: Chebyshev's inequality always holds for a single random variable with a finite variance, so you can apply it to a specific $z_m$, but you cannot apply it to each $z_m$ independently. This is because the $z_m$s are not i.i.d.: they share most of the same underlying terms!
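The dependence is easy to see empirically: across repeated trials, the running averages $z_{10}$ and $z_{100}$ are strongly correlated because they share the first ten draws. A small sketch (the trial count is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
trials, n = 10_000, 100

x = rng.standard_normal((trials, n))
z = x.cumsum(axis=1) / np.arange(1, n + 1)  # z[:, m-1] is z_m for each trial

# For m < n, Corr(z_m, z_n) = sqrt(m/n); here sqrt(10/100) is about 0.316.
corr = np.corrcoef(z[:, 9], z[:, 99])[0, 1]
print(corr)
```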
5. For $P(A \cup B)$, the lower bound is $\max(P(A), P(B))$, and the upper bound is $\min(1, P(A) + P(B))$. For $P(A \cap B)$, the lower bound is $\max(0, P(A)+P(B)-1)$ (remember that $P(A)+P(B)$ can be larger than 1) and the upper bound is $\min(P(A), P(B))$.
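These bounds can be sanity-checked by simulation. A sketch, assuming interval-shaped events on a uniform variable so the overlap between $A$ and $B$ can be dialed from maximal to minimal:

```python
import numpy as np

rng = np.random.default_rng(0)
pa, pb = 0.6, 0.7

# Slide interval events on a uniform variable: A = [0, 0.6), B = [s, s+0.7) mod 1.
# Every shift s yields a valid joint distribution with the same marginals.
unions, inters = [], []
for s in np.linspace(0, 1, 11):
    u = rng.random(100_000)
    a = u < pa
    b = (u - s) % 1.0 < pb
    unions.append(np.mean(a | b))
    inters.append(np.mean(a & b))

print(min(unions), max(unions))  # stays between max(pa, pb) and min(1, pa + pb)
print(min(inters), max(inters))  # stays between max(0, pa + pb - 1) and min(pa, pb)
```

Both extremes are attained here: at shift $s=0$, $A \subset B$ (union at its lower bound, intersection at its upper bound); at $s=0.3$ the two intervals cover all of $[0,1)$ and overlap as little as possible.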
6. One could factor the joint probability $P(A, B, C)$ as $P(C \mid B) \cdot P(B \mid A) \cdot P(A)$. Compared to the general chain rule $P(C \mid A, B) \cdot P(B \mid A) \cdot P(A)$, the only simplification is that $C$ is conditioned on $B$ alone, so this isn't much simpler. I'm not sure what exactly this question is looking for.
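A minimal numeric check of the factorization, assuming the Markov structure ($C$ depends on $B$ only) that makes it valid, with made-up two-state distributions:

```python
import numpy as np

# Hypothetical distributions: P(A), P(B|A), P(C|B), each over two states.
p_a = np.array([0.3, 0.7])
p_b_given_a = np.array([[0.9, 0.1],   # row: value of A, column: value of B
                        [0.2, 0.8]])
p_c_given_b = np.array([[0.6, 0.4],   # row: value of B, column: value of C
                        [0.5, 0.5]])

# Joint built from the factorization P(A) P(B|A) P(C|B); axes are (A, B, C).
joint = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_b[None, :, :]
print(joint.sum())  # a valid joint distribution: sums to 1

# Recovering P(C | A, B) from the joint gives back P(C | B), independent of A,
# which is exactly the Markov property the factorization encodes.
p_c_given_ab = joint / joint.sum(axis=2, keepdims=True)
print(np.allclose(p_c_given_ab, p_c_given_b[None, :, :]))  # True
```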
7. We know that the false positive probabilities for each test must add up to 0.1, and that the four joint probabilities conditioned on $H=0$ must add up to 1. So the mixed probabilities $P(D_1=1, D_2=0 \mid H=0)$ and $P(D_1=0, D_2=1 \mid H=0)$ are each 0.08, and $P(D_1=0, D_2=0 \mid H=0) = 0.82$. For 7.2, I obtained 1.47%. For 7.3, I obtained 6.86%.
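Both sub-questions are applications of Bayes' rule. A sketch with placeholder numbers (the prior and the test table below are my assumptions, not necessarily the book's exact values, so the printed posteriors only approximate the percentages above):

```python
# Placeholder inputs (assumptions for illustration; substitute the book's values):
p_h1 = 0.0015                   # prior P(H=1)
p_d1_pos = {1: 1.0, 0: 0.1}     # P(D1=1 | H=h)
p_both_pos = {1: 1.0, 0: 0.02}  # P(D1=1, D2=1 | H=h)

# P(H=1 | D1=1) by Bayes' rule.
num = p_d1_pos[1] * p_h1
post_one = num / (num + p_d1_pos[0] * (1 - p_h1))

# P(H=1 | D1=1, D2=1), using the joint table rather than (wrongly) multiplying
# the marginal false positive rates as if the tests were independent.
num = p_both_pos[1] * p_h1
post_both = num / (num + p_both_pos[0] * (1 - p_h1))

print(post_one, post_both)
```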
8. The expected return for a given portfolio $\boldsymbol \alpha$ is $\boldsymbol \alpha^\top \boldsymbol \mu$. To maximize the expected return of the portfolio, one should find the largest entry $\mu_i$ and invest the entire portfolio in it: $\boldsymbol \alpha$ should have a single non-zero entry at the corresponding index. The variance of the portfolio is $\boldsymbol \alpha^\top \Sigma \boldsymbol \alpha$. So the optimization problem described can be formalized as: maximize $\boldsymbol \alpha^\top \boldsymbol \mu$ subject to a maximum allowed variance $\boldsymbol \alpha^\top \Sigma \boldsymbol \alpha \le c$, with $\sum_{i=1}^n \alpha_i = 1$ and $\alpha_i \ge 0$.
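This formalization can be sketched numerically. A toy example with made-up $\boldsymbol \mu$ and $\Sigma$ for three assets, where a brute-force search over the simplex stands in for a proper quadratic-program solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: expected returns mu and covariance Sigma for three assets.
mu = np.array([0.05, 0.12, 0.08])
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.05]])

# Unconstrained maximum expected return: everything in argmax(mu).
best = np.zeros(3)
best[np.argmax(mu)] = 1.0
print(best @ mu)  # 0.12

# With a variance cap, brute-force over random points on the simplex.
cap = 0.05
alphas = rng.dirichlet(np.ones(3), size=100_000)             # alpha_i >= 0, sum to 1
variances = np.einsum('ij,jk,ik->i', alphas, sigma, alphas)  # alpha^T Sigma alpha
ok = variances <= cap
alpha_star = alphas[ok][np.argmax(alphas[ok] @ mu)]
print(alpha_star @ mu)
```

With these numbers the cap is binding: the all-in portfolio has variance 0.09, above the cap, so the constrained optimum trades some expected return for diversification.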