This package contains several convenient wrappers for tswge, VAR and nnfor, useful for Dr. Sadler's time series class.
git clone https://github.com/josephsdavid/tswgewrapped.git
R CMD INSTALL tswgewrapped
or
if (!require(devtools)) {
install.packages("devtools")
}
devtools::install_github("josephsdavid/tswgewrapped", build_vignettes = TRUE)
NOTE: If you face issues with vignettes during installation, try with build_vignettes = FALSE
or
if (!require(remotes)) {
install.packages("remotes")
}
remotes::install_github("josephsdavid/tswgewrapped", build_vignettes = TRUE)
NOTE: If you face issues with vignettes during installation, try with build_vignettes = FALSE
- This README provides a brief overview of how to run some functions in this library; for detailed examples, please refer to the 'Articles' section of this website.
- GDP Prediction using tswgewrapped.
The generate(type, ...) function generates time series, while the fcst(type, ...) function forecasts them.
library(tswgewrapped)
armats <- generate(arma, 100, phi = 0.2, theta = 0.4, plot = F)
armafore <- fcst(type = arma, x = armats, phi = 0.2, theta = 0.4, n.ahead = 20)
sznlts <- generate(aruma, n = 100, phi = -.9, s = 12)
sznlfore <- fcst(aruma, sznlts, phi = -.9, s = 12, n.ahead = 20)
arimats <- generate(arima, n = 100, d = 4)
arimafore <- fcst(arima, arimats, d = 4, n.ahead = 20)
We can also transform seasonal and arima time series with the difference(type, x, n) function:
no_more_seasons <- difference(seasonal, sznlts, 12)
no_more_wandering <- difference(arima, arimats, 4)
Note that difference accepts either quoted strings or bare words as the type argument; the possible values are "arima", "Arima", "ARIMA", "Aruma", "ARUMA", "aruma", "seasonal", and "Seasonal". It is also important to note that when transforming arima data more than once (n > 1), it will output the plots for each transformation step. This is on purpose! Part of good, consistent time series analysis is to examine these plots.
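For example, a bare word and a quoted string should behave the same way here (a small sketch reusing the sznlts series from above):

# type given as a bare word
difference(seasonal, sznlts, 12)
# type given as a quoted string
difference("seasonal", sznlts, 12)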
It is often useful to look not only at the AIC but also the BIC when identifying the ARMA order of a model. For this we have the aicbic() function, which runs both aic5 and bic5 with the same arguments and returns the results as a list, where [[1]] is the AIC and [[2]] is the BIC (hence the name aicbic).
aicbic(no_more_seasons)
aicbic(no_more_wandering, p = 0:13, q = 0:5)
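Since aicbic() returns an ordinary list, the two tables can be pulled out directly (a small sketch reusing the call above):

ab <- aicbic(no_more_wandering, p = 0:13, q = 0:5)
ab[[1]]  # the aic5 results
ab[[2]]  # the bic5 results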
We can calculate the ASE of a model with the ase function:
sznback <- fcst(aruma, sznlts, s = 12, phi = -.9, n.ahead = 20, lastn = T)
ase(sznlts, sznback)
We can also use the assess function as a convenient wrapper for ASE; it produces the back-cast forecast (lastn = TRUE) and calculates the ASE all in one go:
assess(aruma, sznlts, s = 12, phi = -.9, n.ahead = 20)
Here we present an example workflow for time series analysis, from model identification to forecasting:
examplets <- generate(aruma, n = 500, s = 12, d = 3, phi = c(.2,.4,-.2), theta = c(-.6))
First we examine our data with either plotts.sample.wge or individual tswge plotting functions:
plotts.sample.wge(examplets)
# or
plotts.wge(examplets)
parzen.wge(examplets)
acf(examplets)
Next we have multiple tools to identify the order of the model: Tiao-Tsay overfitting and Box-Jenkins-style methods. For overfitting, we simply call est.ar.wge with a high value of p and examine the printed factor table for roots near the unit circle:
estim <- est.ar.wge(examplets, p = 20)
First, however, we likely want to find the ARIMA (integrated) order. We can do this quickly, Box-Jenkins style, with difference:
difference(arima, examplets, 5)
Since this transforms recursively, we can watch until the wandering component goes away and use that number of differences as our order. We can also eliminate the seasonal component first, with a fun guess-and-check technique:
lapply(1:20, difference, x = examplets, type = seasonal)
We then pick whichever seasonal order fits best. These qualitative checks, combined with the quantitative Tiao-Tsay method of overfitting, make up a complete, thorough, and well-founded framework for determining the nonstationary orders of a time series, with syntax that is hopefully terse enough not to tax the analyst more than they already are. After applying either of these graphical methods to determine one order, it is probably wise to overfit for the other, and then double-check by assessing the other component qualitatively and overfitting the first, as sketched below.
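As a minimal sketch of that cross-check (the seasonal period of 12 and the overfitting order p = 20 are carried over from the examples above and are assumptions here):

# remove the seasonal component we identified, then overfit the remainder and
# inspect the printed factor table for roots near +1 (wandering factors)
no_szn <- difference(seasonal, examplets, 12)
est.ar.wge(no_szn, p = 20)

Once the nonstationary orders are settled, we can quickly assess the ARMA order: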
library(magrittr)
szns <- 12
int_order <- 3
examplets %>% difference(arima, x = ., n = int_order) %>%
difference(seasonal, x = ., n = szns) -> transts
aics <- aicbic(transts, p = 0:20, q = 0:5)
library(pander)
pander(aics[[1]])
pander(aics[[2]])
pqs <- estimate(transts, p = 3, q = 1)
Next we can forecast ahead with the fcst function:
fcst(aruma, examplets, phi = pqs$phi,
theta = pqs$theta, s = szns, d = int_order, n.ahead = 20)
Finally, we can assess our forecast with the assess (ASE) function:
assess(type = aruma, x = examplets, phi = pqs$phi,
theta = pqs$theta, s = szns, d = int_order, n.ahead = 20)
Generate a random time series to test yourself with playground(n):
xs <- playground(400)
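From there, one might treat xs as unknown data and run the identification workflow shown above (a sketch reusing the same calls):

# examine the mystery series, then overfit to hunt for nonstationary factors
plotts.sample.wge(xs)
est.ar.wge(xs, p = 20)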
Check out the vignette 'ModelCompareUnivariate'
vignette("ModelCompareUnivariate")
- Supports comparing the performance of multiple univariate models (ARMA, ARIMA, and Seasonal ARIMA)
- Support for simple forecasts and plotting
- Support for Batch ASE calculations and plotting
- Statistical Comparison of models (when using batch ASE method)
- Boxplot of model comparison (ASE values)
- Tabular metrics for manual analysis (if needed)
Vignette pending
- Supports plotting of realizations
- Supports a scatterplot matrix to check for correlations between variables and independence of dependent variables (assumption of the MLR model with correlated errors)
- Support for plotting and analyzing cross-correlation data (CCF function)
- Support for lag plots (pending)
Check out the vignette 'ModelBuildMultivariateVAR'
vignette("ModelBuildMultivariateVAR")
- VAR models are prone to overfitting if we use a lot of exogenous variables with a large lag order.
- This class supports building multiple VAR models at the same time and provides recommendations for which features to keep in the final model.
- User can then choose to keep all original variables or use the recommended model alternative.
- The output from this class works well with the ModelCompareMultivariateVAR class, so all of the original and recommended models can be compared with each other.
Check out the vignette 'ModelCompareMultivariateVAR'
vignette("ModelCompareMultivariateVAR")
- Supports comparing the performance of multiple multivariate VAR models
- Support for simple forecasts and plotting
- Support for Batch ASE calculations and plotting
- Statistical Comparison of models (when using batch ASE method)
- Boxplot of model comparison (ASE values)
- Tabular metrics for manual analysis (if needed)
Vignette pending
- Builds the model with the caret framework (currently supports only multivariate datasets, but a workaround exists for a univariate dataset - see below).
- Support for predefined or random grid search
- Supports parallel processing using multiple cores to speed up the grid search (see the sketch after the workaround example below)
- Support for sliding ASE while building models
- The workaround for a univariate dataset is to add a dummy noise column (making the dataset multivariate) and use that in the build object:
# example of how to add a dummy noise column to dataset
NUM_OBS = nrow(train_data)
train_data$dummy = rnorm(NUM_OBS, 0, 0.0001)
# Then use train_data in the ModelBuildNNforCaret$new call
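For the parallel grid search mentioned above, caret generally picks up any registered foreach backend; the following is a generic sketch (not specific to tswgewrapped, and the core count is an assumption):

# register a parallel backend before building; caret's grid search can use it
library(doParallel)
cl <- makeCluster(parallel::detectCores() - 1)
registerDoParallel(cl)
# ... build the caret-based model here ...
stopCluster(cl)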
Check out the vignette 'ModelCompareNNforCaret'
vignette("ModelCompareNNforCaret")
- Supports comparing the performance of multiple nnfor::mlp() submodels built by caret
- Does not support simple forecasts and plotting yet (planned for the future)
- Support for Batch ASE calculations and plotting
- Statistical Comparison of models (when using batch ASE method)
- Boxplot of model comparison (ASE values)
- Tabular metrics for manual analysis (if needed)