——————————————————————————–

1.Introduction

As one of the most successful and famous tech companies in the world, Apple has been a focus of financial practitioners ever since its stock began trading publicly on NASDAQ in 1980. More interestingly, Apple (AAPL) went through a very big boom last year and three major market meltdowns recently. This makes the past year a great case for studying stocks, especially for Apple, a representative company involved in both high-end hardware and software. Aside from market-environment effects, Apple’s stock performance is also affected by the performance of its products; for example, the launches of the iPhone 11 and iPad Pro were hits in the market and boosted Apple’s stock. Given that the US stock markets just went through three meltdowns, this instability has become a hot topic that people would like to investigate. Volatility, as a measure of market instability, is therefore an interesting quantity to model with statistical tools. In this paper, we model Apple stock prices with both a classic statistical model and a Markov-property-based model and compare their results. We hope this project can provide some insight for investors in understanding stock volatility in a time series sense.


1.1 Questions

Following the discussion above, I would like to raise one question that I want to answer in this report:

  • Question: Could a POMP model help us describe stock price volatility better than a GARCH model?

——————————————————————————–

2.Data

2.1 Data Source and Setting up

The data were collected from the open-source Yahoo! Finance. They cover Apple’s daily stock information from Nov. 13, 2018 to Apr. 28, 2020, and include the timestamp, open price, daily high/low prices, close price, adjusted close price, and volume. Since we are interested in volatility, we will use the adjusted close prices, i.e., prices adjusted for splits and dividends; more detailed information can be found at Investopedia. Let’s first take a look at the original data attributes:

##            Date       Open       High        Low      Close  Adj.Close
## 9564 2018-11-13 191.630005 197.179993 191.449997 192.229996 188.936081
## 9565 2018-11-14 193.899994 194.479996 185.929993 186.800003 183.599152
## 9566 2018-11-15 188.389999 191.970001 186.899994 191.410004 188.130142
## 9567 2018-11-16 190.500000 194.970001 189.460007 193.529999 190.213806
## 9568 2018-11-19 190.000000 190.699997 184.990005 185.860001 182.675247
## 9569 2018-11-20 178.369995 181.470001 175.509995 176.979996 173.947418
##        Volume
## 9564 46882900
## 9565 60801000
## 9566 46478800
## 9567 36928300
## 9568 41925300
## 9569 67825200

2.2 Exploratory Data Analysis

First, we can look at the general distribution via the summary statistics and a plot of the time series, along with the time series on the log scale (Figure 1):

##             closingPrice
## nobs          365.000000
## NAs             0.000000
## Minimum       139.753540
## Maximum       327.200012
## 1. Quartile   183.715332
## 3. Quartile   262.470001
## Mean          221.352705
## Median        205.394424
## Sum         80793.737187
## SE Mean         2.582177
## LCL Mean      216.274847
## UCL Mean      226.430562
## Variance     2433.687616
## Stdev          49.332420
## Skewness        0.512615
## Kurtosis       -0.847412
Figure 1: Adjusted Closing Price of Apple from 2018 to 2020 (Daily)

From the summary statistics, we can see that the daily adjusted closing prices for Apple have a very wide range over the period (from $139 to $327), with a large variance of 2433.6876. Together with the plots on the right, we can see that the variance is large across the whole time series. In general, the stock has an obvious increasing trend, with some sudden spikes and dips occurring during the gradual increases in its adjusted prices. After taking the log of the adjusted closing prices, the data pattern does not change; only the magnitude becomes smaller.

However, since we are dealing with a volatility-related problem, we need to introduce the concept of return, as defined in class note 14:

\(Return_{n}=\log \left(Price_{n}\right)-\log \left(Price_{n-1}\right)\)

where \(n\) denotes the timestamp as day.

In order to make the series more stationary, we de-mean \(Return_{n}\):

\(DemeanedReturn_{n}=Return_{n} - \frac{1}{N}\sum_{k=1}^{N} Return_{k}\)
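In R, both steps are one-liners; a minimal sketch, assuming the adjusted close prices are stored in a numeric vector `adjClose` (a hypothetical name):

```r
# Sketch: compute log returns and de-mean them.
LogDiffApple <- diff(log(adjClose))                     # Return_n
LogDiffAppleDmean <- LogDiffApple - mean(LogDiffApple)  # de-meaned return
```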

Then we could look at the plot of the time series data (Figure 2):

Figure 2: LogDiffAppleDmean Closing Price from 2018 to 2020 (Daily)

We can see that, after taking the difference of the log adjusted closing prices and de-meaning the data, the time series is much closer to weakly stationary. We can clearly observe the volatility across different sections of the series, with a heavier tail due to the effects of the market meltdowns and the coronavirus. The variance looks fairly random while the mean is roughly constant around 0, which still supports the weak-stationarity assumption.

Before we move on to modeling the data, we would also like to check the ACF of the de-meaned return (Figure 3).

Following class note 2, the ACF is defined as:

\(\rho_h = \frac{\gamma_h}{\gamma_0}\)

where \(\gamma_h\) and \(\gamma_0\) are \(\gamma_{n,n+h}\) and \(\gamma_{n,n}\) respectively; they are the autocovariances, and \(h\) is the lag between time points.

For random variables \(Y_{1:N}\), the autocovariances are \(\gamma_{m, n}=\mathbb{E}\left[\left(Y_{m}-\mu_{m}\right)\left(Y_{n}-\mu_{n}\right)\right]\)

By plotting the autocorrelation against the lags, we get the ACF plot for the de-meaned return:
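A sketch of the call that produces this plot, using base R’s acf function (the plot title is illustrative):

```r
# Sample ACF of the de-meaned returns; the dashed lines mark the
# approximate 95% acceptance band under a white-noise null.
acf(LogDiffAppleDmean, main = "Demeaned Return for Apple Stock")
```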

Figure 3: ACF plot of the Demeaned Return for Apple Stock

We can observe several spikes indicating autocorrelation between some of the lags, which we need to pay attention to in the later modeling. Although some time points appear correlated, this could come from the relatively short period we sample from, or from the recent frequent emergencies happening around the world. It is hard to conclude from a simple ACF that the autocorrelation is robust and valid, but knowing the potential relationships between data points can help us draw careful conclusions later on.

One way to sanity-check the result is to compare it against a mature, off-the-shelf package. For example, the decompose function fits an additive model that decomposes a time series into a seasonal component (seasonal pattern), a trend (long-term tendency), and a short-term remainder, using moving averages. The additive model used is:

\(Y_{t}=T_{t}+S_{t}+e_{t}\)
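A sketch of how such a decomposition can be produced, assuming 5 trading days per week as the seasonal frequency (the weekly pattern inferred from the ACF above) and the hypothetical `adjClose` vector from the earlier sketch:

```r
# Treat the adjusted closing prices as a weekly-seasonal series
# (frequency = 5 trading days) and decompose additively.
apple_ts <- ts(adjClose, frequency = 5)
plot(decompose(apple_ts, type = "additive"))  # trend + seasonal + random
```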
Figure 4: Decompose of the Apple Stock Price by Seasonality and Trend

I chose to assume a weekly seasonal pattern (inferred from the ACF above) for the stock and extracted the long-term upward trend to check the remaining errors (Figure 4). The “random” panel shows the residuals, which we interpret as random volatility. Around the beginning of 2020 we see data with high variance, and the fluctuation at the end of 2018 is higher than in the middle of the period. These patterns match the de-meaned return shown in Figure 2.

——————————————————————————–

3.Model Fitting

3.1 GARCH Model Fitting

From class note 14, we learned the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model; GARCH(p,q) has the form:

\(Y_{n}=\epsilon_{n} \sqrt{V_{n}}\)
\(V_{n}=\alpha_{0}+\sum_{j=1}^{p} \alpha_{j} Y_{n-j}^{2}+\sum_{k=1}^{q} \beta_{k} V_{n-k}\)

where \(\epsilon_{1:N}\) is a white noise process.
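The fit below comes from the garch function in the tseries package, as the Call line in the output shows; a minimal sketch of the fitting code:

```r
library(tseries)

# GARCH(1,1) is the default order; trace = FALSE suppresses the
# optimizer's iteration log.
fit.garch <- garch(LogDiffAppleDmean, grad = "numerical", trace = FALSE)
logLik(fit.garch)  # maximized log likelihood (df = 3)
```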

## 
## Call:
## garch(x = LogDiffAppleDmean, grad = "numerical", trace = FALSE)
## 
## Coefficient(s):
##        a0         a1         b1  
## 1.612e-05  1.831e-01  7.954e-01
## 'log Lik.' 916.0265 (df=3)

The three-parameter GARCH model has a maximized log likelihood of 916.0265, with fitted parameters \(\alpha_{0} = 1.612 \times 10^{-5}\), \(\alpha_{1} = 1.831 \times 10^{-1}\), and \(\beta_{1} = 7.954 \times 10^{-1}\). This GARCH model’s likelihood will serve as a benchmark for the later comparison with POMP. However, we need to keep in mind that GARCH is hard to interpret: its parameters have no clear meaning, so it is difficult to draw conclusions directly from GARCH’s output.

3.2 POMP Model

3.2.1 Model Background

The POMP model first needs a volatility model. Using the concept of leverage introduced in class note 14, we define \(R_{n}\) as the correlation between the return on day \(n-1\) and the increase in log volatility from day \(n-1\) to day \(n\). The idea is from Bretó (2014), who uses a transformed scale to model an AR(1) random walk.

\(R_{n}=\frac{\exp \left\{2 G_{n}\right\}-1}{\exp \left\{2 G_{n}\right\}+1}\)

where \(G_{n}\) is the random walk with Gaussian property.

Given that \(G_{n}\) defines \(R_{n}\), we would have

\(Y_{n}=\exp \left\{H_{n} / 2\right\} \epsilon_{n}\)
\(H_{n}=\mu_{h}(1-\phi)+\phi H_{n-1}+\beta_{n-1} R_{n} \exp \left\{-H_{n-1} / 2\right\}+\omega_{n}\)
\(G_{n}=G_{n-1}+\nu_{n}\)

where we would have:

\(H_{n}\) is the log volatility of the return we are modeling

\(\beta_{n}=Y_{n} \sigma_{\eta} \sqrt{1-\phi^{2}}\), which indicates that \(\beta_{n}\) depends on \(Y_{n}\)

\(\epsilon_{n}\) is an iid \(N(0,1)\) sequence

\(\nu_{n}\) is an iid \(N\left(0, \sigma_{\nu}^{2}\right)\) sequence

\(\omega_{n}\) is an iid \(N\left(0, \sigma_{\omega}^{2}\right)\) sequence

3.2.2 Model Structure

Given the above-mentioned background, we have constructed our volatility model. Then, by transforming the latent variables from \((G_{n},H_{n})\) to \((G_{n+1},H_{n+1})\), we can use the state variable \(X_{n}=\left(G_{n}, H_{n}, Y_{n}\right)\) to model the process, with \(Y_{n}\) as the observation. The derivation can be found in the appendix of class note 14. In this way, we build a pomp object that can do filtering and parameter estimation.

In other words, we define a recursive particle filter by defining particle \(j\) at time \(n-1\):

\(X_{n-1, j}^{F}=\left(G_{n-1, j}^{F}, H_{n-1, j}^{F}, y_{n-1}\right)\)

Then we can build prediction particles:

\(\left(G_{n, j}^{P}, H_{n, j}^{P}\right) \sim f_{G_{n}, H_{n} | G_{n-1}, H_{n-1}, Y_{n-1}}\left(g_{n}, h_{n} | G_{n-1, j}^{F}, H_{n-1, j}^{F}, y_{n-1}\right)\) with weight \(w_{n, j}=f_{Y_{n} | G_{n}, H_{n}}\left(y_{n} | G_{n, j}^{P}, H_{n, j}^{P}\right)\)

The notation follows class note 12. To combine the above-mentioned idea with Monte Carlo methods, we start by specifying the POMP model parameters and the volatility model, which gives us two pomp objects (one for filtering and one for simulation). Per the report’s requirements, I show the model structure below so readers do not need to review the code themselves. The model is built on the lecture notes, with my own parameter choices for fine-tuning the results.
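Below is a sketch of the latent-process code, following the class-note template; the AAPL_-prefixed names are illustrative, and the covaryt covariate trick (feeding the observed return back into the filtering process) comes from that template. Note that \(R_{n}=\tanh(G_{n})\), which is how the transform appears in the C snippets.

```r
library(pomp)

AAPL_statenames <- c("H", "G", "Y_state")
AAPL_rp_names   <- c("sigma_nu", "mu_h", "phi", "sigma_eta")
AAPL_ivp_names  <- c("G_0", "H_0")
AAPL_paramnames <- c(AAPL_rp_names, AAPL_ivp_names)

# One step of the latent process: update G by a Gaussian random walk,
# then update the log volatility H; note R_n = tanh(G_n).
rproc1 <- "
  double beta, omega, nu;
  omega = rnorm(0, sigma_eta * sqrt(1 - phi*phi) *
                   sqrt(1 - tanh(G)*tanh(G)));
  nu = rnorm(0, sigma_nu);
  G += nu;
  beta = Y_state * sigma_eta * sqrt(1 - phi*phi);
  H = mu_h*(1 - phi) + phi*H + beta * tanh(G) * exp(-H/2) + omega;
"
# Simulation draws a fresh return; filtering plugs in the observed one.
rproc2.sim  <- "Y_state = rnorm(0, exp(H/2));"
rproc2.filt <- "Y_state = covaryt;"
AAPL_rproc.sim  <- paste(rproc1, rproc2.sim)
AAPL_rproc.filt <- paste(rproc1, rproc2.filt)
```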

Moreover, we need to initialize Y, G, and H for the pomp model, and then define the rmeasure and dmeasure components.
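A sketch of those components, again following the class-note template (names illustrative):

```r
# Initial values of the latent states, and the measurement model:
# the observation y is a perfect read-out of Y_state.
AAPL_rinit <- "
  G = G_0;
  H = H_0;
  Y_state = rnorm(0, exp(H/2));
"
AAPL_rmeasure <- "y = Y_state;"
AAPL_dmeasure <- "lik = dnorm(y, 0, exp(H/2), give_log);"

# Transformations keep the optimizer unconstrained:
# sigmas on a log scale, phi on a logit scale.
AAPL_partrans <- parameter_trans(
  log   = c("sigma_eta", "sigma_nu"),
  logit = "phi"
)
```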

Here we build the main body of the POMP model by substituting in all the components defined above.
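A sketch of the filtering object, assuming the de-meaned returns are stored in LogDiffAppleDmean as above:

```r
AAPL.filt <- pomp(
  data = data.frame(y = LogDiffAppleDmean,
                    time = 1:length(LogDiffAppleDmean)),
  times = "time", t0 = 0,
  # The observed returns, lagged by one, enter as the covariate
  # consumed by rproc2.filt above.
  covar = covariate_table(time = 0:length(LogDiffAppleDmean),
                          covaryt = c(0, LogDiffAppleDmean),
                          times = "time"),
  rmeasure = Csnippet(AAPL_rmeasure),
  dmeasure = Csnippet(AAPL_dmeasure),
  rprocess = discrete_time(step.fun = Csnippet(AAPL_rproc.filt),
                           delta.t = 1),
  rinit = Csnippet(AAPL_rinit),
  partrans = AAPL_partrans,
  statenames = AAPL_statenames,
  paramnames = AAPL_paramnames
)
```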

3.2.4 Global Search for Likelihood with Randomized Starting Value

In order to address potential problems and double-check our model’s validity, we run a global search for the likelihood with randomized starting values, drawing each starting parameter from a fixed range; a sketch of the setup follows.
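The sketch below uses a hypothetical parameter box (the original ranges are not reproduced here) and illustrative tuning values for Np and Nmif; the final line summarizes the replicated likelihood estimates in the format of the output below.

```r
library(doParallel)  # also attaches foreach
registerDoParallel()

# Hypothetical search box; each IF2 replication starts from an
# independent uniform draw inside these ranges.
AAPL_box <- rbind(
  sigma_nu  = c(0.005, 0.05),
  mu_h      = c(-1.0, 0.0),
  phi       = c(0.95, 0.99),
  sigma_eta = c(0.5, 1.0),
  G_0       = c(-2, 2),
  H_0       = c(-1, 1)
)

if2.global <- foreach(i = 1:20, .packages = "pomp") %dopar% {
  mif2(AAPL.filt,
       params = apply(AAPL_box, 1, function(x) runif(1, x[1], x[2])),
       Np = 1000, Nmif = 100,
       cooling.fraction.50 = 0.5,
       rw.sd = rw.sd(sigma_nu = 0.02, mu_h = 0.02, phi = 0.02,
                     sigma_eta = 0.02,
                     G_0 = ivp(0.1), H_0 = ivp(0.1)))
}
# Evaluate each search's likelihood with replicated particle filters.
L.global <- foreach(m = if2.global, .packages = "pomp",
                    .combine = rbind) %dopar% {
  logmeanexp(replicate(10, logLik(pfilter(m, Np = 1000))), se = TRUE)
}
summary(L.global[, 1])
```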

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   904.7   924.4   925.8   926.0   930.3   932.8

From the summary, we can see that our model improves the maximum likelihood a little, reaching 932.8, slightly better than the stochastic leverage model from the local search.

Figure 7: Pair Plot for Global Search

However, from Figure 7, we can see that convergence might be fairly poor, given that parameters like \(\mu_{h}\) and \(\phi\) show non-convergent patterns.

Figure 8: Trajectory of Parameter Convergence over Iterations for Global Search

Contrary to what the pair plot suggested, we can see from Figure 8 that the parameters converge fairly well as the iterations grow. \(\sigma_{\nu}\), \(G_{0}\), \(H_{0}\), and \(\sigma_{\eta}\) show clear trends toward convergence, while \(\mu_{h}\) and \(\phi\) behave similarly to the local search.

Moreover, we see some random dips in the effective sample size along the iterations, but the average level seems good.

3.3 Model Results Comparison

Recall from the output above that the GARCH model yields a maximized log likelihood of 916.0265, lower than both the local search (932.6) and the global search (932.8) of the POMP models. Hence, based on how AIC works, as introduced in class note 5, we would pick the POMP models over the GARCH model for modeling Apple’s volatility.
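Concretely, with \(AIC = 2k - 2\ell\), and assuming the POMP model estimates six parameters (\(\sigma_{\nu}\), \(\mu_{h}\), \(\phi\), \(\sigma_{\eta}\), \(G_{0}\), \(H_{0}\)):

\(AIC_{GARCH} = 2 \times 3 - 2 \times 916.0265 \approx -1826.1\)

\(AIC_{POMP} = 2 \times 6 - 2 \times 932.8 = -1853.6\)

The lower AIC again favors the POMP model.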

The global search attains a higher maximum likelihood than the local search, but the difference is not guaranteed to be statistically significant.

Both POMP models show stable effective sample sizes, but both suffer from convergence problems for some parameters (\(\mu_{h}\) and \(\phi\)).

——————————————————————————–

4.Conclusion and Limitations

4.1 Conclusion

In a nutshell, we can draw the following conclusion from our analysis and answer the question raised in Section 1.1:

  • Question 1: Could a POMP model help us describe stock price volatility better than a GARCH model?

By conducting a likelihood comparison based on AIC, we saw that the maximum likelihood from the POMP model did beat the GARCH model; in other words, the POMP model does a better job of modeling Apple stock’s volatility in our case study. Specifically, the random-walk-leverage volatility POMP model with randomized starting parameters from the global search did the best job of modeling Apple’s stock volatility. However, the convergence of the model parameters needs more attention, which is discussed in the following section.


4.2 Limitations and Next Steps

4.2.1 Limitations

Apple’s stock is subject to market effects, government effects, and COVID-19, with many interesting chain reactions. The volatility model we chose might need some adaptations to better describe the stock. Given that this is a case study of a single stock, we could introduce more stock-specific parameters into the model to capture domain knowledge. Limited by the coding period and learning depth, I did not dig into more detailed models that might improve the POMP model’s performance.

Given that several parameters did not converge as expected, it is not statistically reliable to conclude a “best” model. Even though I tried parameter tuning with many different combinations, I was not able to balance time complexity against space complexity, which left my jobs using too much runtime. If I could find a better way to monitor the dynamic relationships between parameters, finding the “best” model parameters would be much more efficient.

4.2.2 Next Steps

Given that the code uses many data frames, I would try a more fundamental data structure to carry out the computation in higher dimensions and save some runtime for the project.

I would like to try optimization methods that encourage POMP model parameter convergence. I was thinking about adding penalties to some parameters to balance the performance across parameters, which would lead to a more robust POMP model.

Instead of only modeling volatility, I want to leverage the POMP model in portfolio building in order to maximize investment returns. Learning POMP in the time series domain opened up a different angle for me to handle time information. I want to apply pomp to generate more features from time series, which could benefit my study in data science a lot.

——————————————————————————–

5.Sources

5.1 Packages

All packages used are from cran.r-project.org


5.3 Data

The data set is obtained from Yahoo! Finance


5.4 Out-of-Class References

Bretó, Carles. 2014. “On Idiosyncratic Stochasticity of Financial Leverage Effects.” Statistics & Probability Letters 91 (August): 20–26. https://doi.org/10.1016/j.spl.2014.04.003.