Interpretation of a simulation experiment (i.e., a parametric bootstrap). Recall from Chapter 5, slide 17, the following interpretation of a simulation experiment in which we repeat the estimation procedure on data simulated from the model with parameter value \(\hat\theta\).
“For large Monte Carlo sample size \(J\), the coverage of the proposed confidence interval (CI) is well approximated, for models in a neighborhood of \(\hat\theta\), by the proportion of the intervals \(\big[\hat\theta^{[j]}_{1,\mathrm{lo}}, \, \hat\theta^{[j]}_{1,\mathrm{hi}}\big]\) that include \(\hat\theta_1\).”
How does checking the proportion of times our estimated parameter value \(\hat\theta_1\) falls within the Monte Carlo simulation CIs tell us anything about the performance of our original CI?
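As a concrete illustration, here is a minimal sketch of such a coverage experiment in a toy setting. It assumes i.i.d. Normal(\(\theta_1\), 1) data with the usual Wald interval for the mean; the names `J`, `n`, and `theta_hat` and their values are illustrative assumptions, not taken from the notes.

```python
# Sketch of the Monte Carlo coverage check described above (toy model:
# X_1,...,X_n iid Normal(theta_1, 1), 95% Wald CI for theta_1).
import numpy as np

rng = np.random.default_rng(1)
theta_hat = 0.7   # estimate from the (hypothetical) original data
n, J = 50, 1000   # data size and Monte Carlo sample size

covered = 0
for _ in range(J):
    # simulate a dataset from the fitted model (parametric bootstrap)
    x = rng.normal(theta_hat, 1.0, size=n)
    # re-run the estimation procedure and build the 95% CI
    est = x.mean()
    half = 1.96 / np.sqrt(n)
    lo, hi = est - half, est + half
    # does this interval cover the parameter used for simulation?
    covered += (lo <= theta_hat <= hi)

print(f"estimated coverage: {covered / J:.3f}")  # should be near 0.95
```

If the estimated coverage is close to the nominal 95%, the CI procedure behaves as advertised, at least for models in a neighborhood of \(\hat\theta\); a large discrepancy is evidence that the original CI cannot be trusted at its nominal level.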
Interpretation of statistical tests. Suppose you carry out a statistical test and obtain a p-value \(p = 0.182\), so you fail to reject the null hypothesis at all the usual levels of statistical significance. What is the correct interpretation, (a) or (b) below? When might it matter which interpretation you use?
(a) You have shown that the null hypothesis is correct.
(b) You have shown that this particular test does not reveal the null hypothesis to be false, which gives some justification for continuing to work under the assumption that the null is correct.
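A hedged sketch of when the distinction matters: with a small sample, a test can fail to reject a false null simply because it has low power, so reading the result as (a) would be unwarranted. The model, sample size, and effect size below are illustrative assumptions, not from the notes.

```python
# Illustration (not from the notes): a one-sample t-test with n = 10
# often fails to reject even though the null H0: mu = 0 is false.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu_true, n, reps = 0.3, 10, 2000  # true mean is 0.3, so H0 is false

pvals = np.array([
    stats.ttest_1samp(rng.normal(mu_true, 1.0, size=n), 0.0).pvalue
    for _ in range(reps)
])
# power: fraction of tests that correctly reject at the 5% level
print(f"power = {np.mean(pvals < 0.05):.2f}")  # low, roughly 0.15 here
```

Since most replications here yield \(p > 0.05\) despite the null being false, a non-rejection supports interpretation (b) only: the data are compatible with the null, not proof of it.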