fix equations in documentation #697

Merged
merged 1 commit on Nov 13, 2023
27 changes: 21 additions & 6 deletions nbs/docs/models/ARCH.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -83,8 +83,9 @@
"\n",
"Specifically, we give the definition of the ARCH model as follows. \n",
"\n",
"**Definition 1.** An $\\text{ARCH}(p)$ model with order $p≥1$ is of the form\n",
"\n",
"$$\n",
"\\begin{equation}\n",
" \\left\\{\n",
"\t \\begin{array}{ll}\n",
Expand All @@ -93,6 +94,7 @@
"\t \\end{array}\n",
"\t\\right.\n",
"\\end{equation}\n",
"$$\n",
"\n",
"where $\\omega ≥ 0, \\alpha_i ≥ 0$, and $\\alpha_p > 0$ are constants, $\\varepsilon_t \\sim \\text{iid}(0, 1)$, and $\\varepsilon_t$ is independent of $\\{X_k;k ≤ t − 1 \\}$. A stochastic process $X_t$ is called an $\\text{ARCH}(p)$ process if it satisfies Eq. (1).\n",
"\n",
Expand All @@ -101,11 +103,17 @@
"Let $\\mathscr{F}_s$ denote the information set generated by $\\{X_k;k ≤ s \\}$, namely, the sigma field $\\sigma(X_k;k ≤ s)$. It is easy to see that $\\mathscr{F}_s$ is independent of $\\varepsilon_t$ for any $s <t$. According to Definition 1 and the properties of the conditional mathematical\n",
"expectation, we have that\n",
"\n",
"$$\n",
"\\begin{equation}\n",
" E(X_t|\\mathscr{F}_{t−1}) = E(\\sigma_t \\varepsilon_t|\\mathscr{F}_{t−1}) = \\sigma_t E(\\varepsilon_t|\\mathscr{F}_{t−1}) = \\sigma_t E(\\varepsilon_t) = 0 \\tag{2}\n",
"\\end{equation}\n",
"$$\n",
"\n",
"and\n",
"\n",
"$$\n",
"\\text{Var}(X_{t}| \\mathscr{F}_{t−1}) = E(X_{t}^2|\\mathscr{F}_{t−1}) = E(\\sigma_{t}^2 \\varepsilon_{t}^2|\\mathscr{F}_{t−1}) = \\sigma_{t}^2 E(\\varepsilon_{t}^2|\\mathscr{F}_{t−1}) = \\sigma_{t}^2 E(\\varepsilon_{t}^2) = \\sigma_{t}^2.\n",
"$$\n",
"\n",
"This implies that $\\sigma_{t}^2$ is the conditional variance of $X_t$ and it evolves according to the previous values of $\\{X_{k}^2; t −p ≤ k ≤ t −1\\}$ like an $\\text{AR}(p)$ model. And so Model (1) is named an $\\text{ARCH}(p)$ model.\n"
]
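The definition above lends itself to a quick numerical sanity check. The following is a minimal pure-Python sketch — not part of the notebook; the function name and the values of $\omega$ and $\alpha_i$ are illustrative — that simulates an $\text{ARCH}(2)$ path and confirms the zero unconditional mean implied by Eq. (2):

```python
import random
import statistics

def simulate_arch(omega, alphas, n, seed=0):
    """Simulate an ARCH(p) path per Definition 1:
    X_t = sigma_t * eps_t,  sigma_t^2 = omega + sum_i alphas[i] * X_{t-i}^2."""
    rng = random.Random(seed)
    p = len(alphas)
    lags = [0.0] * p  # X_{t-1}, ..., X_{t-p}, zero-initialized
    xs = []
    for _ in range(n):
        sigma2 = omega + sum(a * x * x for a, x in zip(alphas, lags))
        x = sigma2 ** 0.5 * rng.gauss(0.0, 1.0)
        xs.append(x)
        lags = [x] + lags[:-1]  # shift the lag window
    return xs

xs = simulate_arch(omega=0.1, alphas=[0.3, 0.2], n=50_000)
print(abs(statistics.fmean(xs)) < 0.05)  # sample mean is near 0, as Eq. (2) implies
```

The coefficients are chosen with $\alpha_1 + \alpha_2 < 1$ so that the process is covariance stationary and the sample mean converges.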
Expand All @@ -116,6 +124,7 @@
"source": [
"As an example of $\\text{ARCH}(p)$ models, let us consider the $\\text{ARCH(1)}$ model\n",
"\n",
"$$\n",
"\\begin{equation}\n",
" \\left\\{ \n",
"\t \\begin{array}{ll} \\tag 3\n",
Expand All @@ -124,16 +133,22 @@
"\t \\end{array}\n",
"\t\\right.\n",
"\\end{equation}\n",
"$$\n",
"\n",
"Explicitly, the unconditional mean is\n",
"$$E(X_t) = E(\\sigma_t \\varepsilon_t) = E(\\sigma_t) E(\\varepsilon_t) = 0. $$\n",
"\n",
"Additionally, the ARCH(1) model can be expressed as\n",
"\n",
"$$X_{t}^2 =\\sigma_{t}^2 +X_{t}^2 − \\sigma_{t}^2 =\\omega +\\alpha_1 X_{t-1}^2 +\\sigma_{t}^2 \\varepsilon_{t}^2 −\\sigma_{t}^2 =\\omega +\\alpha_1 X_{t-1}^2 +\\eta_t,$$\n",
"\n",
"that is,\n",
"\n",
"$$\n",
"\\begin{equation}\n",
" X_{t}^2 =\\omega +\\alpha_1 X_{t-1}^2 +\\eta_t \\tag{4}\n",
"\\end{equation}\n",
"$$\n",
"\n",
"where $\\eta_t = \\sigma_{t}^2(\\varepsilon_{t}^2 − 1)$. It can be shown that $\\eta_t$ is a new white noise, which is left as an exercise for the reader. Hence, if $0 < \\alpha_1 < 1$, Eq. (4) is a stationary $\\text{AR}(1)$ model for the series $X_{t}^2$. Thus, the unconditional variance\n",
"\n",
Expand Down Expand Up @@ -196,7 +211,7 @@
"source": [
"### Autoregressive Conditional Heteroskedasticity (ARCH) Applications\n",
"\n",
"* **Finance** - The ARCH model is widely used in finance to model volatility in financial time series, such as stock prices, exchange rates, interest rates, etc.\n",
"\n",
"* **Economics** - The ARCH model can be used to model volatility in economic data, such as GDP, inflation, unemployment, among others.\n",
"\n",
Expand Down
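Eq. (4)'s reading of $X_t^2$ as an AR(1) process suggests a direct numerical check: the least-squares slope of $X_t^2$ on $X_{t-1}^2$ should estimate $\alpha_1$. Below is a self-contained pure-Python sketch (not part of the notebook; parameter values and helper names are illustrative):

```python
import random

def simulate_arch1(omega, alpha1, n, seed=1):
    """X_t = sigma_t * eps_t with sigma_t^2 = omega + alpha1 * X_{t-1}^2."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        sigma2 = omega + alpha1 * x * x
        x = sigma2 ** 0.5 * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def ols_slope(x, y):
    """Least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

xs = simulate_arch1(omega=0.1, alpha1=0.3, n=100_000)
sq = [v * v for v in xs]
slope = ols_slope(sq[:-1], sq[1:])  # regress X_t^2 on X_{t-1}^2, per Eq. (4)
print(0.0 < slope < 0.6)            # the slope estimates alpha1 = 0.3
```

Because the squared series has heavy tails, the slope estimate converges slowly; the assertion uses a deliberately wide band around the true $\alpha_1$.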
29 changes: 17 additions & 12 deletions nbs/docs/models/ARIMA.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -101,13 +101,13 @@
" - d is the number of differencing operations required to make the time series stationary\n",
"\n",
"\n",
"- **AR(p) Autoregression** - a regression model that utilizes the dependent relationship between a current observation and observations over a previous period. An auto regressive (AR(p)) component refers to the use of past values in the regression equation for the time series.\n",
"\n",
"\n",
"- **I(d) Integration** - uses differencing of observations (subtracting an observation from observation at the previous time step) in order to make the time series stationary. Differencing involves the subtraction of the current values of a series with its previous values d number of times.\n",
"\n",
"\n",
"- **MA(q) Moving Average** - a model that uses the dependency between an observation and a residual error from a moving average model applied to lagged observations. A moving average component depicts the error of the model as a combination of previous error terms. The order q represents the number of terms to be included in the model.\n",
"\n",
"\n"
]
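The I(d) component described in the list above is just repeated first differencing. A small illustrative sketch (the helper name is ours, not from the notebook) shows a quadratic trend becoming constant after two differences:

```python
def difference(series, d=1):
    """Apply d-th order differencing: y'_t = y_t - y_{t-1}, repeated d times."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

y = [t ** 2 for t in range(6)]   # quadratic trend: 0, 1, 4, 9, 16, 25
print(difference(y, d=1))        # [1, 3, 5, 7, 9]  -- still trending
print(difference(y, d=2))        # [2, 2, 2, 2]     -- constant after second difference
```

Here d = 2 removes the trend entirely, which is exactly the sense in which differencing makes a series stationary.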
Expand Down Expand Up @@ -173,18 +173,23 @@
"\n",
"Thus, an autoregressive model of order p can be written as\n",
"\n",
"$$\n",
"\\begin{equation}\n",
"y_{t} = c + \\phi_{1}y_{t-1} + \\phi_{2}y_{t-2} + \\dots + \\phi_{p}y_{t-p} + \\varepsilon_{t} \\tag{1}\n",
"\\end{equation}\n",
"$$\n",
"\n",
"where $\\epsilon_t$ is white noise. This is like a multiple regression but with lagged values of $y_t$ as predictors. We refer to this as an AR(p) model, an autoregressive model of order p.\n",
"\n",
"\n",
"### MA model\n",
"Rather than using past values of the forecast variable in a regression, a moving average model uses past forecast errors in a regression-like model,\n",
"\n",
"$$\n",
"\\begin{equation}\n",
"y_{t} = c + \\varepsilon_t + \\theta_{1}\\varepsilon_{t-1} + \\theta_{2}\\varepsilon_{t-2} + \\dots + \\theta_{q}\\varepsilon_{t-q} \\tag{2}\n",
"\\end{equation}\n",
"$$\n",
"\n",
"where $\\epsilon_t$ is white noise. We refer to this as an MA(q) model, a moving average model of order q. Of course, we do not observe the values of $\\epsilon_t$, so it is not really a regression in the usual sense.\n",
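The MA(q) definition can be illustrated numerically: for an MA(1) process, the lag-1 autocorrelation is $\theta_1/(1+\theta_1^2)$, which a simulation recovers. A pure-Python sketch with illustrative values (not from the notebook):

```python
import random

def simulate_ma1(c, theta1, n, seed=7):
    """y_t = c + eps_t + theta1 * eps_{t-1}  (Eq. 2 with q = 1)."""
    rng = random.Random(seed)
    eps_prev = rng.gauss(0.0, 1.0)
    ys = []
    for _ in range(n):
        eps = rng.gauss(0.0, 1.0)
        ys.append(c + eps + theta1 * eps_prev)
        eps_prev = eps
    return ys

def acf1(y):
    """Lag-1 sample autocorrelation."""
    n = len(y)
    m = sum(y) / n
    num = sum((y[t] - m) * (y[t - 1] - m) for t in range(1, n))
    den = sum((v - m) ** 2 for v in y)
    return num / den

ys = simulate_ma1(c=0.0, theta1=0.8, n=100_000)
rho1 = acf1(ys)
# theory: rho_1 = theta1 / (1 + theta1^2) = 0.8 / 1.64, about 0.488
print(abs(rho1 - 0.8 / 1.64) < 0.02)
```

Autocorrelations at lags greater than q vanish for an MA(q) process, which is one practical way the order q is identified.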
Expand Down Expand Up @@ -233,11 +238,11 @@
"autoregressive model|\n",
"|ARIMA (0, 1, 1)|0 1 1 |$\hat Y_t = Y_{t-1} - \Theta_1 e_{t-1}$|Simple exponential smoothing|\n",
"|ARIMA (0, 0, 1)|0 0 1 |$\hat Y_t = \mu_0 + \epsilon_t - \omega_1 \epsilon_{t-1}$|MA(1): First-order regression model|\n",
"|ARIMA (0, 0, 2)|0 0 2 |$\hat Y_t = \mu_0 + \epsilon_t - \omega_1 \epsilon_{t-1} - \omega_2 \epsilon_{t-2}$|MA(2): Second-order regression model|\n",
"|ARIMA (1, 0, 1)|1 0 1 |$\hat Y_t = \Phi_0 + \Phi_1 Y_{t-1}+ \epsilon_t - \omega_1 \epsilon_{t-1}$|ARMA model|\n",
"|ARIMA (1, 1, 1)|1 1 1 |$\\Delta Y_t = \\Phi_1 Y_{t-1} + \\epsilon_t - \\omega_1 \\epsilon_{t-1}$ |ARIMA model|\n",
"|ARIMA (1, 1, 2)|1 1 2 |$\hat Y_t = Y_{t-1} + \Phi_1 (Y_{t-1} - Y_{t-2}) - \Theta_1 e_{t-1} - \Theta_2 e_{t-2}$|Damped-trend linear exponential smoothing|\n",
"|ARIMA (0, 2, 1) OR (0,2,2) |0 2 1 |$\\hat Y_t = 2 Y_{t-1} - Y_{t-2} - \\Theta_1 e_{t-1} - \\Theta_2 e_{t-2}$|Linear exponential smoothing|\n",
Expand Down
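The table's claim that ARIMA(0, 1, 1) reproduces simple exponential smoothing can be verified directly: expanding the one-step forecast $\hat Y_t = Y_{t-1} - \Theta_1 e_{t-1}$ with $e_{t-1} = Y_{t-1} - \hat Y_{t-1}$ gives the SES recursion with smoothing weight $\alpha = 1 - \Theta_1$. A small deterministic check (function names, data, and the $\Theta_1$ value are illustrative, not from the notebook):

```python
def ses_forecasts(y, alpha, init):
    """Simple exponential smoothing: yhat_{t+1} = alpha*y_t + (1-alpha)*yhat_t."""
    yhat = [init]
    for obs in y[:-1]:
        yhat.append(alpha * obs + (1 - alpha) * yhat[-1])
    return yhat

def arima011_forecasts(y, theta, init):
    """ARIMA(0,1,1) one-step forecasts: yhat_{t+1} = y_t - theta*e_t, e_t = y_t - yhat_t."""
    yhat = [init]
    for obs in y[:-1]:
        e = obs - yhat[-1]          # one-step forecast error
        yhat.append(obs - theta * e)
    return yhat

y = [10.0, 12.0, 11.0, 13.0, 12.5]
theta = 0.4
a = ses_forecasts(y, alpha=1 - theta, init=10.0)
b = arima011_forecasts(y, theta=theta, init=10.0)
print(all(abs(u - v) < 1e-9 for u, v in zip(a, b)))  # the two recursions coincide
```

The same algebra is why the other rows of the table pair ARIMA orders with named smoothing methods.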
6 changes: 3 additions & 3 deletions nbs/docs/models/AutoARIMA.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -105,9 +105,9 @@
"|ARIMA (2, 0, 0)|2 0 0 |$\hat Y_t = \Phi_0 + \Phi_1 Y_{t-1} + \Phi_2 Y_{t-2} + \epsilon_t$| AR(2): Second-order regression model|\n",
"|ARIMA (1, 1, 0)|1 1 0 |$\\hat Y_t = \\mu + Y_{t-1} + \\Phi_1 (Y_{t-1}- Y_{t-2})$ | Differenced first-order autoregressive model|\n",
"|ARIMA (0, 1, 1)|0 1 1 |$\hat Y_t = Y_{t-1} - \Theta_1 e_{t-1}$|Simple exponential smoothing|\n",
"|ARIMA (0, 0, 1)|0 0 1 |$\hat Y_t = \mu_0 + \epsilon_t - \omega_1 \epsilon_{t-1}$|MA(1): First-order regression model|\n",
"|ARIMA (0, 0, 2)|0 0 2 |$\hat Y_t = \mu_0 + \epsilon_t - \omega_1 \epsilon_{t-1} - \omega_2 \epsilon_{t-2}$|MA(2): Second-order regression model|\n",
"|ARIMA (1, 0, 1)|1 0 1 |$\hat Y_t = \Phi_0 + \Phi_1 Y_{t-1}+ \epsilon_t - \omega_1 \epsilon_{t-1}$|ARMA model|\n",
"|ARIMA (1, 1, 1)|1 1 1 |$\\Delta Y_t = \\Phi_1 Y_{t-1} + \\epsilon_t - \\omega_1 \\epsilon_{t-1}$ |ARIMA model|\n",
"|ARIMA (1, 1, 2)|1 1 2 |$\hat Y_t = Y_{t-1} + \Phi_1 (Y_{t-1} - Y_{t-2}) - \Theta_1 e_{t-1} - \Theta_2 e_{t-2}$ | Damped-trend linear Exponential smoothing|\n",
"|ARIMA (0, 2, 1) OR (0,2,2) |0 2 1 |$\\hat Y_t = 2 Y_{t-1} - Y_{t-2} - \\Theta_1 e_{t-1} - \\Theta_2 e_{t-2}$|Linear exponential smoothing|\n",
Expand Down