1. Interpretation of a simulation experiment (i.e., a parametric bootstrap). Recall from Chapter 5, slide 17, the following interpretation of a simulation experiment where we repeat the estimation procedure on simulations carried out with parameter value \(\hat\theta\).

    “For large Monte Carlo sample size \(J\), the coverage of the proposed confidence interval (CI) is well approximated, for models in a neighborhood of \(\hat\theta\), by the proportion of the intervals \(\big[\hat\theta^{[j]}_{1,\mathrm{lo}}, \, \hat\theta^{[j]}_{1,\mathrm{hi}}\big]\) that include \(\hat\theta_1\).”

    How does checking the number of times our estimated parameter value falls within our Monte Carlo simulation CIs tell us anything about the performance of our original CIs? A sketch of the procedure is given below.
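
    A minimal sketch of this coverage check in R, assuming an AR(1) model and a CI built from Fisher standard errors (both are stand-ins for whatever model and interval construction are actually under study):

    ```r
    set.seed(1)
    y <- arima.sim(n = 100, model = list(ar = 0.6))  # stand-in for the real data
    fit <- arima(y, order = c(1, 0, 0))
    theta1_hat <- coef(fit)["ar1"]                   # the point estimate under study

    J <- 500                                         # Monte Carlo sample size
    covered <- replicate(J, {
      # simulate from the fitted model, then repeat the estimation procedure
      y_sim <- arima.sim(n = length(y), model = list(ar = theta1_hat))
      fit_sim <- arima(y_sim, order = c(1, 0, 0))
      est <- coef(fit_sim)["ar1"]
      se  <- sqrt(fit_sim$var.coef["ar1", "ar1"])
      lo  <- est - 1.96 * se                         # assumed Fisher-information CI
      hi  <- est + 1.96 * se
      lo <= theta1_hat && theta1_hat <= hi           # does the j-th CI include theta1_hat?
    })
    mean(covered)  # approximates coverage for models in a neighborhood of theta1_hat
    ```

    The proportion returned on the last line is exactly the quantity referred to in the quoted interpretation.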

1. Interpretation of statistical tests. Suppose you carry out a statistical test and obtain a p-value \(p = 0.182\), so you fail to reject the null hypothesis at all the usual levels of statistical significance. What is the correct interpretation, (1) or (2) below? When might it matter which interpretation you use? A small simulation after the two options illustrates the distinction.
    1. You have shown that the null hypothesis is correct.

    2. You have shown that this particular test does not reveal the null hypothesis to be false, which gives some justification for continuing to work under the assumption that the null is correct.
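
    As a concrete illustration (a hedged sketch, not part of the original question): suppose the data actually come from a slightly non-null model but the test has low power. Failing to reject is then the typical outcome even though the null is false, so interpretation (1) overreaches while (2) remains defensible.

    ```r
    # One-sample t-test of H0: mu = 0 when the true mean is 0.2 and n = 25.
    set.seed(3)
    pvals <- replicate(1000, t.test(rnorm(25, mean = 0.2), mu = 0)$p.value)
    mean(pvals > 0.05)  # most tests fail to reject, yet H0 is false
    power.t.test(n = 25, delta = 0.2, sd = 1, type = "one.sample")$power  # low power
    ```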


1. Improved likelihood maximization using arima2. The arima2::arima function in the arima2 R package uses multiple starting values to improve on the optimization carried out by stats::arima. This can lead to greater consistency in AIC tables, for example fewer cases where adding a parameter appears to decrease the maximized log likelihood, which is impossible for properly maximized nested models. It does not necessarily protect against roots that cancel or lie close to the unit circle. If you choose to experiment with arima2, it would be interesting to hear about any favorable or unfavorable experiences; a usage sketch follows.
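
    A minimal sketch, assuming the drop-in interface described in the package documentation (arima2::arima taking the same main arguments as stats::arima):

    ```r
    # install.packages("arima2")  # if not already installed
    set.seed(2)
    y <- arima.sim(n = 200, model = list(ar = c(0.5, -0.3), ma = 0.4))
    fit1 <- stats::arima(y, order = c(2, 0, 1))
    fit2 <- arima2::arima(y, order = c(2, 0, 1))  # multiple random restarts
    # The restarted search should find a log likelihood at least as high:
    c(stats = fit1$loglik, arima2 = fit2$loglik)
    # arima2 does not guard against near-unit or cancelling roots; check them:
    abs(polyroot(c(1, -fit2$coef[c("ar1", "ar2")])))  # AR roots (want modulus > 1)
    abs(polyroot(c(1, fit2$coef["ma1"])))             # MA root  (want modulus > 1)
    ```

    Comparing the AR and MA root moduli also flags approximate cancellation, which neither optimizer rules out.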