Solutions for this homework can be hand-written or typed. Later in the course, we will be using Rmarkdown for projects, so you may wish to use it for homeworks as well. In that case, the source code for this assignment is available on GitHub and may help get you started.
Recall from the syllabus that homework will be graded on completeness, together with the sources and feedback statements. If necessary, you can read solutions from the 531w16 course. In this case, please include in your source statement how far you got before resorting to this.
Question 1.1. Recall the basic properties of covariance, \({\mathrm{Cov}}(X,Y) = E\big[(X-{\mathbb{E}}[X])(Y-{\mathbb{E}}[Y])\big]\), following the convention that upper case letters are random variables and lower case letters are constants:
P1. \(\quad {\mathrm{Cov}}(X,X)= {\mathrm{Var}}(X)\),
P2. \(\quad {\mathrm{Cov}}(X,Y)={\mathrm{Cov}}(Y,X)\),
P3. \(\quad {\mathrm{Cov}}(aX,bY)= ab\,{\mathrm{Cov}}(X,Y)\),
P4. \(\quad {\mathrm{Cov}}\left(\sum_{m=1}^M X_m,\sum_{n=1}^N Y_n\right)= \sum_{m=1}^M \sum_{n=1}^N {\mathrm{Cov}}(X_m,Y_n)\).
Let \(X_{1:N}\) be a covariance stationary time series model with autocovariance function \(\gamma_h\) and constant mean function, \(\mu_n=\mu\). Consider the sample mean as an estimator of \(\mu\), \[\hat\mu(x_{1:N}) = \frac{1}{N}\sum_{n=1}^N x_n.\] Show how the basic properties of covariance can be used to derive the expression, \[{\mathrm{Var}}\big(\hat\mu(X_{1:N})\big) = \frac{1}{N}\gamma_0 + \frac{2}{N^2}\sum_{h=1}^{N-1}(N-h)\gamma_h.\]
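Before attempting the algebraic derivation, it can be reassuring to check the identity numerically. The following R sketch is not part of the question; it assumes a hypothetical AR(1) model, whose autocovariance function \(\gamma_h = \sigma^2\phi^h/(1-\phi^2)\) is known in closed form, and compares the right-hand side of the identity with a Monte Carlo estimate of \({\mathrm{Var}}\big(\hat\mu(X_{1:N})\big)\). The choices of N, phi, sigma and reps are illustrative only.

set.seed(531)
N     <- 100      # length of each simulated series
phi   <- 0.6      # hypothetical AR(1) coefficient
sigma <- 1        # innovation standard deviation
reps  <- 5000     # number of Monte Carlo replications

# theoretical autocovariances of the stationary AR(1): gamma_h = sigma^2 phi^h / (1 - phi^2)
gamma <- sigma^2 * phi^(0:(N-1)) / (1 - phi^2)

# right-hand side of the identity to be derived
h <- 1:(N-1)
var_formula <- gamma[1]/N + (2/N^2) * sum((N - h) * gamma[h+1])

# Monte Carlo estimate of Var(mu_hat) from repeated simulation of the sample mean
sample_means <- replicate(reps, mean(arima.sim(list(ar=phi), n=N, sd=sigma)))
c(formula = var_formula, monte_carlo = var(sample_means))

The two numbers should agree up to Monte Carlo error; the identity itself, of course, still needs to be derived from properties P1-P4.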
Question 1.2. The sample autocorrelation is perhaps the second most common type of plot in time series analysis, after simply plotting the data. We investigate how R represents chance variation in the plot of the sample autocorrelation function produced by the acf function. We seek to check what R actually does when it constructs the dashed horizontal lines in this plot. What approximation is being made? How should the lines be interpreted statistically?
If you type acf in R, you get the source code for the acf function. You'll see that the plotting is done by a service function, plot.acf. This service function lives inside the package namespace and is not immediately accessible to you. Nevertheless, you can check its source code as follows:
Notice, either from the help documentation ?acf or from the last line of the source code for acf, that this function resides in the package stats.
Now, you can access this namespace directly to list the source code:
stats:::plot.acf
Now we can see how the horizontal dashed lines are constructed. The critical line of code seems to be
clim0 <- if (with.ci) qnorm((1 + ci)/2)/sqrt(x$n.used)
This appears to correspond to a normal distribution approximation for the sample autocorrelation estimator, with mean zero and standard deviation \(1/\sqrt{N}\).
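As a quick check of this reading of the code, we can evaluate the same expression directly. The values below (a hypothetical series length of 100 and the default coverage ci = 0.95) are illustrative only:

N  <- 100
ci <- 0.95
qnorm((1 + ci)/2) / sqrt(N)   # about 1.96/sqrt(N) = 0.196

So for a series of length 100, the dashed lines would sit at roughly ±0.196.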
A. Write an argument to justify the use of \(1/\sqrt{N}\) as an approximation to the standard deviation of the sample autocorrelation estimator under the null hypothesis that the time series is a sequence of independent, identically distributed (IID) mean zero random variables.
Hint: you will probably want to make an argument based on linearization. You can reason at whatever level of math-stat formalization you're happy with. According to Mathematical Statistics and Data Analysis by John Rice, a textbook used for the undergraduate upper-level math stats course, STATS 426,
“When confronted with a nonlinear problem we cannot solve, we linearize. In probability and statistics, this method is called propagation of errors or the \(\delta\) method. Linearization is carried out through a Taylor Series expansion.”
Rice then proceeds to describe the delta method in a way very similar to the Wikipedia article on this topic. In summary, suppose \(X\) is a random variable with mean \(\mu^{}_X\) and small variance \(\sigma^2_X\), and \(g(x)\) is a nonlinear function with derivative \(g^\prime(x)=dg/dx\). To study the random variable \(Y=g(X)\) we can make a Taylor series approximation, \[ Y \approx g(\mu^{}_X) + (X-\mu^{}_X) g^\prime(\mu^{}_X).\] This approximates \(Y\) as a linear function of \(X\), so we have
\(\quad \mu^{}_Y = {\mathbb{E}}[Y]\approx g(\mu^{}_X)\).
\(\quad \sigma^2_Y = {\mathrm{Var}}(Y) \approx \sigma^2_X \big\{g^\prime(\mu^{}_X)\big\}^2\).
\(\quad\)If \(X\sim N\big[\mu^{}_X,\sigma_X^2\big]\), then \(Y\) approximately follows a \(N\big[g(\mu^{}_X), \sigma^2_X \big\{g^\prime(\mu^{}_X)\big\}^2\big]\) distribution.
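Whatever level of formality you choose for part A, a small simulation can at least check the plausibility of the \(1/\sqrt{N}\) claim. The sketch below assumes Gaussian IID noise and arbitrarily chosen lag and series length; it compares the spread of the simulated sample autocorrelations with \(1/\sqrt{N}\).

set.seed(18)
N    <- 200       # hypothetical series length
reps <- 5000      # number of simulated IID series
lag  <- 1         # lag at which to examine the sample autocorrelation

# sample autocorrelation at the chosen lag for each simulated IID series
rho_hat <- replicate(reps, acf(rnorm(N), plot = FALSE)$acf[lag + 1])

c(simulated_sd = sd(rho_hat), one_over_sqrt_N = 1/sqrt(N))

This simulation does not replace the linearization argument, but it indicates what that argument should deliver.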
B. It is often asserted that the horizontal dashed lines on the sample ACF plot represent a confidence interval. For example, in the documentation produced by ?plot.acf we read
ci: coverage probability for confidence interval. Plotting of the confidence interval is suppressed if ‘ci’ is zero or negative.
Use a definition of a confidence interval to explain how these lines do, or do not, construct a confidence interval.