\hypertarget{AdvancedSettings}{}
\section[Advanced Stock Synthesis Configuration Settings and Advice]{\protect\hyperlink{AdvancedSettings}{Advanced Stock Synthesis Configuration Settings and Advice}}
\hypertarget{TVpara}{}
\subsection[Using Time-Varying Parameters]{\protect\hyperlink{TVpara}{Using Time-Varying Parameters}}
\hypertarget{tvOrder}{}
\subsubsection[Time-Varying Parameters]{\protect\hyperlink{tvOrder}{Time-Varying Parameters}}
Starting in v.3.30, mortality-growth, some stock-recruitment, catchability, and selectivity base parameters can be time-varying. Note that as of v.3.30.16, time-varying parameters cannot be used with tagging parameters. There are four ways a parameter can be time-varying in SS3:
\begin{enumerate}
\item Environmental or density-dependent linkages: Links the base parameter with environmental data or a model-derived quantity.
\item Parameter deviations: Creates annual deviations from the base parameter during a user-specified range of years.
\item Time blocks: The base parameter is changed during a ``block'' (or ``blocks'') of time (i.e., one or more consecutive years) as specified by the user.
\item Trends: A trend (shape: cumulative normal distribution function) is applied to the parameter. Trends are specified using the same input column as time blocks, but with different codes. This means that trends and time blocks cannot be used simultaneously for the same base parameter.
\end{enumerate}
Environmental and density dependent linkages, parameter deviations, and either time blocks or trends can be applied to the same base parameter. The model processes each time-varying parameter specification (first time blocks and trends, then environmental linkages, then parameter deviations) and creates a time series of intermediate values that are used as the model subsequently loops through years.
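This processing order can be illustrated with a short sketch. This is an illustration only, not SS3 source code; the additive link functions and all names are assumptions made for clarity, and SS3 offers several link options:
\begin{verbatim}
# Illustrative layering of time-varying adjustments onto a base
# parameter (additive links assumed; SS3 supports other links).
def intermediate_values(base, years, blocks=(), env=None, devs=None):
    values = {}
    for y in years:
        p = base
        # 1. time blocks (or trends) applied first
        for (y1, y2, offset) in blocks:
            if y1 <= y <= y2:
                p += offset
        # 2. environmental/density linkage applied next
        if env is not None:
            link_coef, series = env
            p += link_coef * series[y]
        # 3. annual parameter deviations applied last
        if devs and y in devs:
            p += devs[y]
        values[y] = p
    return values
\end{verbatim}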
\begin{figure}[ht]
\begin{center}
\includegraphics[alt={Some examples of time-varying setups.},scale = 0.4]{TimeVarying}\\
\end{center}
\caption{Some examples of time-varying setups.}
\label{fig:timevarying}
\end{figure}
\pagebreak
\input{tv_parameter_description}
\hypertarget{tvgrowth}{}
\subsubsection[Time-Varying Growth Considerations]{\protect\hyperlink{tvgrowth}{Time-Varying Growth Considerations}}
When time-varying growth is used, there are some additional considerations to be aware of:
\begin{itemize}
\item Growth in the forecast with time blocks: Growth deviations propagate into the forecast because growth occurs by cohort according to the growth parameters in effect in each year. The user can select which growth parameters get used during the forecast by setting the end year of the last block, if using time blocks. If the last block ends in the model's end year, then the growth parameters in effect during the forecast will be the base parameters. By setting the end year of the last block to one year past the model end year (endyr), the model will continue the last block's growth parameter levels throughout the forecast.
\item The equilibrium benchmark quantities (\gls{msy}, $F_{40\%}$, etc.) previously used the model end year's (endyr) body size-at-age, which is not in equilibrium. Through the forecast file, it is possible to specify a range of years over which to average the size-at-age used in the benchmark calculations. Creating equilibrium growth from averaged growth parameters would be a more realistic option and is under consideration, but is not yet available.
% Which input in forecast?? The benchmark years input? I couldn't find this option...
% Details about a potentially better solution.
%\item The mean length of fish in the plus group in the start year takes into account the infinity tail of numbers of fish at age and the continued growth of these tail fish out to 3 times the age of the plus group. This has nil effect if fish have already reached Linfinity, but can be noticeable if they have not. This detail is displayed only in echoinput.sso. Then in each year starting with the first year with time-varying growth parameters, the size in the plus group is updated according to the weighted mean of fish already there and fish just entering the plus group. If fish have not yet reached Linfinity by the time they reach the plus group, then you will see a decline over time in mean length of fish in the plus group simply because the numbers of fish in the plus group goes down over time. However, this is only done in years with time-varying growth. This is a topic for future Stock Synthesis evolution.
\end{itemize}
\hypertarget{tv-sr}{}
\subsubsection[Time-Varying Stock-Recruitment Considerations]{\protect\hyperlink{tv-sr}{Time-Varying Stock-Recruitment Considerations}}
\begin{itemize}
\item The $\sigma_R$ and autocorrelation parameters cannot be time-varying.
% Is this true? Or can they be time varying but it is not advisable because it doesn't make much sense?
\item The autocorrelation parameter cannot be estimated accurately within SS3 \citep{johnson-can-2016}, so estimation external to SS3 is currently recommended for selecting an autocorrelation value. The autocorrelation of the recruitment deviations appears in the report file, which can aid in selecting the value; a minimal calculation is sketched after this list.
\item The values of $R_{0}$ and steepness in the initial year are used in the virgin calculations and, within the benchmarks, in the calculation of the denominator of depletion estimates. The average values of $R_{0}$ and steepness over the range of years specified as benchmark years inputs 9 and 10 (see \hyperlink{fore-specify}{the forecast file specifications}) are used for \gls{msy}-type calculations.
%So, for example, a long-term climate effect could cause $R_{0}$ to change over time and B\textsubscript{MSY} could now be calculated for some future range of years.
\item The spawner-recruit regime parameter is a modifier on $R_{0}$. The regime shift parameter line allows for multi-year or environmentally driven deviations from $R_{0}$ without changing $R_{0}$ itself. The regime shift base parameter should have a base value of 0.0 and not be estimated (i.e., have a negative phase). Similar to the cohort-growth deviation, it serves simply as a base for adding time-varying adjustments.
\item The same algebraic effect on the calculated recruitment can be achieved by different combinations of spawner-recruit parameter options (e.g., changing $R_{0}$ directly instead of the regime shift parameter). It is recommended to use block, trend, or environmental effects on $R_{0}$ only for long-term effects, and to use time-varying effects on the regime shift parameter for transitory but multi-year deviations from $R_{0}$.
\item If the $R_{0}$ or steepness parameters are time-varying, then the model uses the current year's parameter values to calculate the expected recruitment as a function of spawning biomass and then applies the recruitment deviations. If the regime shift parameter is time-varying, then the model applies the change in the regime shift parameter \textbf{after} calculating the expected recruitment as a function of spawning biomass (see the schematic after this list).
\end{itemize}
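Because external estimation of the recruitment autocorrelation is recommended, a minimal calculation from a vector of recruitment deviations (e.g., read from the report file; this is an illustration, and all names are assumptions) might look like:
\begin{verbatim}
import numpy as np

def lag1_autocorr(devs):
    # lag-1 autocorrelation of a vector of recruitment deviations
    d = np.asarray(devs) - np.mean(devs)
    return np.sum(d[1:] * d[:-1]) / np.sum(d ** 2)
\end{verbatim}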
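As a schematic of the order of operations for the regime shift parameter (assuming the Beverton-Holt steepness form for concreteness and omitting the log-normal bias adjustment for brevity):
\begin{equation*}
R_t = \frac{4h_t R_{0,t} SB_t}{SB_0(1-h_t) + SB_t(5h_t-1)} \cdot e^{\tilde{R}_t} \cdot e^{\delta_t}
\end{equation*}
where the expected recruitment is computed from the current year's $R_{0,t}$ and steepness $h_t$, $\tilde{R}_t$ is the recruitment deviation, and the regime adjustment $\delta_t$ is applied last.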
\hypertarget{ForecastTV}{}
\subsubsection[Forecast Considerations with Time-Varying Parameters]{\protect\hyperlink{ForecastTV}{Forecast Considerations with Time-Varying Parameters}}
Users should judiciously consider which parameter values are applied during forecast years. By default, SS3 uses the base parameter values during the forecast period, but the user can specify in the \hyperlink{fore-specify}{forecast file} which years of selectivity, relative F, and recruitment are used during the forecast.
Time-varying parameters can extend into the forecast period. For example, a parameter with a time block that stops at the model end year will revert to the base parameter value for the forecast, but when the block definition extends to include some or all forecast years, the last block will apply to the forecast. A good practice is to use 9999 as the terminal year for the last block to ensure that all forecast years are included. If a parameter has deviations and the deviation year range includes the forecast years, then the parameter will have process uncertainty in the forecast years, and \gls{mcmc} draws (if used) will include that variability.
\hypertarget{2DAR}{}
\subsection[Parameterizing the Two-Dimensional Autoregressive Selectivity]{\protect\hyperlink{2DAR}{Parameterizing the Two-Dimensional Autoregressive Selectivity}}
When the two-dimensional autoregressive selectivity (2DAR) feature is turned on for a fleet, the selectivity is calculated as the product of the assumed selectivity pattern and a non-parametric term describing deviations from this assumed pattern:
\begin{equation}
\hat{S}_{a,t} = S_a e^{\epsilon_{a,t}}
\end{equation}
where $S_a$ is specified in the corresponding age/length selectivity types section, and it can be either parametric (recommended) or non-parametric (including any of the existing selectivity options in SS3); $\epsilon_{a,t}$ is simulated as a two-dimensional first-order autoregressive (2DAR) process:
\begin{equation}
vec(\epsilon) \sim MVN(\mathbf{0},\sigma_s^2\mathbf{R_{total}})
\end{equation}
where $\epsilon$ is the two-dimensional deviation matrix and $\sigma_s^2\mathbf{R_{total}}$ is the covariance matrix for the 2DAR process. More specifically, $\sigma_s^2$ quantifies the variance in selectivity deviations and $\mathbf{R_{total}}$ is equal to the Kronecker product ($\otimes$) of the two correlation matrices for the among-age and among-year AR processes:
\begin{equation}
\mathbf{R_{total}}=\mathbf{R}\otimes\mathbf{\tilde{R}}
\end{equation}
\begin{equation}
\mathbf{R}_{a,\tilde{a}}=\rho_a^{|a-\tilde{a}|}
\end{equation}
\begin{equation}
\mathbf{\tilde{R}}_{t,\tilde{t}}=\rho_t^{|t-\tilde{t}|}
\end{equation}
where $\rho_a$ and $\rho_t$ are the among-age and among-year AR coefficients, respectively. When both of them are zero, $\mathbf{R}$ and $\mathbf{\tilde{R}}$ are identity matrices and their Kronecker product, $\mathbf{R_{total}}$, is also an identity matrix. In this case the selectivity deviations are independent and identically distributed:
\begin{equation}
\epsilon_{a,t}\sim N(0,\sigma_s^2)
\end{equation}
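The structure of $\mathbf{R_{total}}$ can be verified numerically. A minimal sketch (an illustration, not SS3 code; the dimensions and coefficient values are hypothetical) constructing the covariance matrix from the two AR(1) correlation matrices:
\begin{verbatim}
import numpy as np

def ar1_corr(n, rho):
    # AR(1) correlation matrix: R[i, j] = rho ** |i - j|
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

R_age  = ar1_corr(10, 0.5)   # among-age correlations (rho_a)
R_year = ar1_corr(25, 0.3)   # among-year correlations (rho_t)
sigma_s = 0.4

# covariance of vec(epsilon); reduces to sigma_s^2 * I when both
# correlation coefficients are zero
Sigma = sigma_s ** 2 * np.kron(R_age, R_year)
\end{verbatim}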
\myparagraph{Using the Two-Dimensional Autoregressive Selectivity}
See \citet{xu-new-2019} and \citet{xu-comparing-2020} for information on tuning the 2DAR selectivity parameters. There is not yet a generalized method to automate the tuning, so the information below provides a general framework. Additionally, the stand-alone \gls{tmb} code used in \citet{xu-new-2019} to estimate the two correlation coefficients for selectivity deviations ($\rho_a$ and $\rho_t$) outside SS3 is \href{https://github.com/HaikunXu/2DAR4ss/tree/main}{available on GitHub}, along with a \href{https://github.com/HaikunXu/2DAR4ss/blob/main/User\%20Manual/User-Manual.pdf}{user manual}.
First, fix the two AR coefficients ($\rho_a$ and $\rho_t$) at 0 and tune $\sigma_s$ iteratively to match the relationship:
\begin{equation}
\sigma_s^2=SD(\epsilon)^2+\frac{1}{(a_{max}-a_{min}+1)(t_{max}-t_{min}+1)}\sum_{a=a_{min}}^{a_{max}}\sum_{t=t_{min}}^{t_{max}}SE(\epsilon_{a,t})^2
\end{equation}
The minimal and maximal ages/lengths and years for the 2DAR process can be freely specified by users in the control file. However, we recommend specifying the minimal and maximal ages and years to cover only the relatively ``data-rich'' age/length and year ranges. In particular, we introduce:
\begin{equation}
b=1-\frac{\frac{1}{(a_{max}-a_{min}+1)(t_{max}-t_{min}+1)}\sum_{a=a_{min}}^{a_{max}}\sum_{t=t_{min}}^{t_{max}}SE(\epsilon_{a,t})^2}{\sigma_s^2}
\end{equation}
as a measure of how informative the composition data are for estimating selectivity deviations. We also recommend using the Dirichlet-multinomial method to ``weight'' the corresponding composition data while $\sigma_s$ is iteratively tuned in this step.
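A small numerical sketch of this tuning check and the data-richness measure $b$, assuming the deviation estimates and their standard errors have already been collected from model output into arrays (an illustration only; all names are assumptions):
\begin{verbatim}
import numpy as np

def tuning_stats(eps_hat, eps_se, sigma_s):
    # eps_hat: estimated deviations over the chosen age x year range
    # eps_se:  their standard errors, same shape
    mean_se2 = np.mean(eps_se ** 2)
    # tuning target: sigma_s^2 should match SD(eps)^2 + mean(SE^2)
    target_sigma2 = np.var(eps_hat) + mean_se2
    # data-richness measure: b near 1 means informative comps
    b = 1.0 - mean_se2 / sigma_s ** 2
    return target_sigma2, b
\end{verbatim}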
Second, fix $\sigma_s$ at the value iteratively tuned in the previous step and estimate $\epsilon_{a,t}$. Plot both the Pearson residuals and $\epsilon_{a,t}$ on the age-year surface to check for two-dimensional patterns. If their distributions appear to be autocorrelated rather than random (deviation estimates have the same sign several ages and/or years in a row), users should consider estimating and then including the autocorrelations in $\epsilon_{a,t}$.
\hypertarget{continuous-seasonal-recruitment-sec}{}
\subsection[Continuous seasonal recruitment]{\protect\hyperlink{continuous-seasonal-recruitment-sec}{Continuous seasonal recruitment}}
Setting up a seasonal model such that recruitment can occur with similar and independent probability in any season of any year is awkward in SS3. Instead, SS3 can be set up so that each quarter appears as a year (i.e., a seasons-as-years model). All the data and parameters are set up to treat quarters as if they were years. Note that setting up a seasons-as-years model also requires that all rate parameters be re-scaled to correctly account for the quarters being treated as years.
Other adjustments to make when using seasons as years include the following (a short numerical sketch follows the list):
\begin{itemize}
\item Re-index all ``year seas'' inputs to be in terms of quarter-year because all are now season 1; increase end year (endyr) value in sync with this.
\item Increase max age because age is now in quarters.
\item In the age error definitions, increase the number of entries to reflect that age is now in quarters.
\item In the age error definitions, re-code so that each quarter-age gets assigned to the correct age bin. This is because the age data are still in terms of age bins; i.e., the first 4 entries for quarter-ages 1 through 4 will all be assigned to age bin 1.5; the next four to age bin 2.5; you cannot accomplish the same result by editing the age bin values because the standard deviation of ageing error is in terms of age bin.
\item In the control file, multiply the natural mortality age breakpoints and growth Amin and Amax values by 1/season duration.
\item Decrease the $R_{0}$ parameter starting value because it is now the average number of recruitments per quarter year.
\item Edit the recruitment deviation (rec\_dev) start and end years to be in terms of quarter year.
\item Edit any age selectivity parameters that refer to age, because they are now in terms of quarter age.
\item If there needs to be some degree of seasonality to a parameter, then you could create a cyclic pattern in the environmental input and make the parameter a function of this cyclic pattern.
\end{itemize}
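As a concrete numerical illustration of this rescaling (all values are hypothetical):
\begin{verbatim}
import math

season_duration = 0.25       # one quarter, as a fraction of a year

# hypothetical annual inputs
M_annual, max_age, ln_R0 = 0.4, 20, 10.0

M_quarter   = M_annual * season_duration        # 0.1 per quarter-"year"
max_age_qtr = int(max_age / season_duration)    # 80 quarter-ages
ln_R0_qtr   = ln_R0 + math.log(season_duration) # recruits per quarter
\end{verbatim}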
\pagebreak
\hypertarget{SS3Processes}{}
\section[Detailed Information on Stock Synthesis Processes]{\protect\hyperlink{SS3Processes}{Detailed Information on Stock Synthesis Processes}}
The processes and calculations within SS3 can be complex and are not always transparent from the model input files alone. Here, additional information on processes within SS3 is provided.
\hypertarget{Jitter}{}
\subsection[Jitter]{\protect\hyperlink{Jitter}{Jitter}}
The following steps are performed to determine the jittered starting parameter values (illustrated in Figure \ref{fig:jitter} and sketched in code below):
\begin{enumerate}
\item A normal distribution is calculated such that the $pr(P_{MIN}) = 0.1\%$ and the $pr(P_{MAX}) = 99.9\%$.
\item A jitter shift value, termed ``$K$'', is calculated from the distribution equal to $pr(P_{CURRENT})$.
\item A random value, ``$J$'', is drawn from the range $K-\text{jitter}$ to $K+\text{jitter}$.
\item Any value of $J$ which falls outside the 0--1 range (in cumulative normal space) is mapped back from the bound to a point one-tenth of the way from the bound to the initial value.
\item $J$ is the new cumulative normal probability value.
\item Calculate a new parameter value, $P_{JITTERED}$, such that $pr(P_{JITTERED}) = J$.
\end{enumerate}
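A minimal sketch of these steps (an illustration, not the SS3 implementation; \texttt{scipy} is assumed to be available):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def jitter_param(p_init, p_min, p_max, jitter, rng=None):
    rng = rng or np.random.default_rng()
    # 1. normal such that pr(p_min) = 0.001 and pr(p_max) = 0.999
    mean  = (p_min + p_max) / 2.0
    sigma = (p_max - p_min) / 6.18     # 6.18 = 2 * z(0.999)
    # 2. K = cumulative probability of the current value
    K = norm.cdf(p_init, loc=mean, scale=sigma)
    # 3. draw J uniformly from K - jitter to K + jitter
    J = rng.uniform(K - jitter, K + jitter)
    # 4. out-of-range values map one-tenth of the way from the
    #    bound back toward the original initial value
    if J < 0.001:
        return p_min + 0.1 * (p_init - p_min)
    if J > 0.999:
        return p_max - 0.1 * (p_max - p_init)
    # 5./6. J is a new cumulative probability; invert it
    return norm.ppf(J, loc=mean, scale=sigma)
\end{verbatim}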
\begin{figure}[ht]
\begin{center}
\includegraphics[alt={Plot showing parameter space on the x-axis along and transformed space on the y-axis. A cumulative normal line is shown where the 0.001 and 0.999 quantiles are set to min and max respectively. A vertical stack of horizontal bars show the distribution of transformed initial values plus U. The distribution is shown on the parameter space axis with the initial input value in gray and the new init in red. Red arrows on the cumulative normal line show the random U written as negative jitter value comma positive jitter value.},scale = 0.75]{jitter_illustration}\\
\caption{Illustration of the jitter algorithm.}
\label{fig:jitter}
\end{center}
\end{figure}
In SS3, the jitter fraction defines a uniform distribution in cumulative normal space +/- the jitter fraction from the initial value (in cumulative normal space). The normal distribution for each parameter, for this purpose, is defined such that the minimum bound is at 0.001, and the maximum at 0.999, of the cumulative distribution. If the jitter fraction and original initial value are such that a portion of the uniform distribution goes beyond 0.001 or 0.999 of the cumulative normal, the new value is set to one-tenth of the way from the bound to the original initial value.
Therefore, $\sigma = (\text{max}-\text{min})/6.18$. For parameters that are on the log scale, $\sigma$ may be the correct measure of variation for jitters; for real-space parameters, \gls{cv} $= \sigma/(\text{original initial value})$ may be a better measure. For example, a parameter bounded on $(0.01, 2.0)$ has $\sigma = 1.99/6.18 \approx 0.32$, which for an initial value of 0.5 corresponds to a \gls{cv} of about 0.64.
If the original initial value is at or near the middle of the min-max range, then for each 0.1 of jitter, the range of jitters extends about 0.25 sigmas to either side of the original value (though as the total jitter increases the relationship varies more than this), and the average absolute jitter is about half of that. For values far from the middle of the min-max range, the resulting jitter is skewed in parameter space, and may hit the bound, invoking the resetting mentioned above.
To evaluate the jittering, the bounds, and the original initial values, a jitter\_info table is available from \texttt{r4ss}, including sigma, \gls{cv}, and InitLocation columns (the latter referring to the location within the cumulative normal; values too close to 0 or 1 indicate a potential issue).
Note: parameters with min $\leq$ -99 or max $\geq$ 999 are not jittered to avoid unreasonable values (a warning is produced to indicate this).
\hypertarget{PriorDescrip}{}
\subsection[Parameter Priors]{\protect\hyperlink{PriorDescrip}{Parameter Priors}}
Priors on parameters fulfill two roles in SS3. First, for parameters provided with an informative prior, SS3 receives additional information about the true value of the parameter. This information works with the information in the data, through the overall log likelihood function, to arrive at the final parameter estimate. Second, diffuse priors provide only weak information about the value of a parameter and serve to manage model performance during execution. For example, some selectivity parameters may become unimportant depending upon the values of other parameters of that selectivity function. In the double normal selectivity function, the parameters controlling the width of the peak and the slope of the descending side become redundant if the parameter controlling the final selectivity moves to a value indicating asymptotic selectivity. The width and slope parameters would no longer have any effect on the log likelihood, so they would have no gradient in the log likelihood and would drift aimlessly. A diffuse prior then steers them towards a central value and keeps them from crashing into the bounds. Another benefit of diffuse priors is the control of parameters that are given unnaturally wide bounds. When a parameter is given too broad a bound, early in a model run it could drift into this tail and potentially get into a situation where the gradient with respect to that parameter approaches zero even though it is not at its global best value. Here the diffuse prior helps move the parameter back towards the middle of its range, where it presumably will be more influential and estimable.
The options for parameter priors are described as a function of $P_\text{init}$, the value of the parameter for which a prior is being calculated, the parameter bounds in the case of the beta distribution ($P_\text{UB}$ and $P_\text{LB}$), and the input values $P_\text{PR}$ and $P_\text{PRSD}$, which in some cases are the mean and standard deviation, although the interpretation depends on the prior type. The Prior Likelihoods below represent the negative log likelihood in all cases.
\myparagraph{Prior Types}
Note that the numbering in v.3.30 is different from that used in v.3.24 (where, confusingly, -1 indicated no prior and 0 indicated a normal prior). The calculation of the negative log likelihood is provided below for each prior type, as a function of the following inputs (a short numerical sketch of the normal and log-normal cases follows the list):
\begin{tabular}{ll}
$P_\text{init}$ & The value of the parameter for which a prior is being calculated where init can either be \\
& the initial un-estimated value or the estimated value (3rd column in control or \\
& control.ss\_new file) \\
$P_\text{LB}$ & The lower bound of the parameter (1st column in control file) \\
$P_\text{UB}$ & The upper bound of the parameter (2nd column in control file) \\
$P_\text{PR}$ & The input value for the prior input (4th column in control file) \\
$P_\text{PRSD}$ & The standard deviation input value for the prior (5th column in control file) \\
\end{tabular}
\begin{itemize}
\item \textbf{Prior Type = 0 = No prior applied} \\
In a Bayesian context this is equivalent to a uniform prior between the parameter bounds.
\item \textbf{Prior Type = 1 = Symmetric beta prior} \\
The symmetric beta is scaled between the parameter bounds, imposing a larger penalty near the bounds. A prior standard deviation of 0.05 is very diffuse, while a value of 5.0 provides a smooth U-shaped prior. The prior input is ignored for this prior type.
\begin{equation}
\mu = -P_\text{PRSD} \cdot ln\left(\frac{P_\text{UB}+P_\text{LB}}{2} - P_\text{LB} \right) - P_\text{PRSD} \cdot ln(0.5)
\end{equation}
\begin{equation}
\begin{split}
\text{Prior Likelihood} = &-\mu -P_\text{PRSD} \cdot ln\left(P_\text{init}-P_\text{LB}+0.0001\right) - \\
& P_\text{PRSD} \cdot ln\left(1-\frac{P_\text{init}-P_\text{LB}-0.0001}{P_\text{UB}-P_\text{LB}}\right)
\end{split}
\end{equation}
\begin{figure}[ht]
\begin{center}
\includegraphics[alt={The shape of the symmetric beta prior across alternative standard deviation values and the change in the negative log likelihood.},scale = 0.6]{SymetricBeta}\\
\end{center}
\caption{The shape of the symmetric beta prior across alternative standard deviation values and the change in the negative log likelihood.}
\end{figure}
\item \textbf{Prior Type = 2 = Beta prior} \\
The definition of $\mu$ is consistent with \href{https://casal2.github.io/}{\gls{casal}}'s formulation with the $\beta_\text{PR}$ and $\alpha_\text{PR}$ corresponding to the $m$ and $n$ parameters.
\begin{equation}
\mu = \frac{P_\text{PR}-P_\text{LB}}{P_\text{UB}-P_\text{LB}}
\end{equation}
\begin{equation}
\tau = \frac{(P_\text{PR}-P_\text{LB})(P_\text{UB}-P_\text{PR})}{P_\text{PRSD}^2}-1
\end{equation}
\begin{equation}
\beta_\text{PR} = \tau \cdot \mu
\end{equation}
\begin{equation}
\alpha_\text{PR} = \tau (1-\mu)
\end{equation}
\begin{equation}
\begin{split}
\text{Prior Likelihood} = &(1 - \beta_\text{PR}) \cdot ln(0.0001 + P_\text{init} - P_\text{LB}) + \\
&(1 - \alpha_\text{PR}) \cdot ln(0.0001 + P_\text{UB} - P_\text{init}) - \\
&(1 - \beta_\text{PR}) \cdot ln(0.0001 + P_\text{PR} - P_\text{LB}) - \\
&(1 - \alpha_\text{PR}) \cdot ln(0.0001 + P_\text{UB} - P_\text{PR})
\end{split}
\end{equation}
\item \textbf{Prior Type 3 = Log-normal prior} \\
Note that this is undefined for $P_\text{init} \leq 0$, so the lower bound on the parameter must be $> 0$. The prior value is input on the parameter line in natural log space, while the initial parameter value is defined in normal space (e.g., init = 0.20, prior = -1.609438).
\begin{equation}
\text{Prior Likelihood} = \frac{1}{2} \left(\frac{ln(P_\text{init})-P_\text{PR}}{P_\text{PRSD}}\right)^2
\end{equation}
\item \textbf{Prior Type 4 = Log-normal prior with bias correction} \\
This option allows the prior mean value to be entered as the natural log of the mean. Note that this is undefined for $P_\text{init} \leq 0$, so the lower bound on the parameter must be $> 0$. The $\frac{1}{2}P_\text{PRSD}^2$ term shifts the prior so that $P_\text{PR}$ corresponds to the mean of the log-normal distribution rather than its median.
\begin{equation}
\text{Prior Likelihood} = \frac{1}{2} \left(\frac{ln(P_\text{init})-P_\text{PR} + \frac{1}{2}{P_\text{PRSD}}^2}{P_\text{PRSD}}\right)^2
\end{equation}
\item \textbf{Prior Type 5 = Gamma prior} \\
The lower bound should be 0 or greater.
\begin{equation}
\text{scale} = \frac{{P_\text{PRSD}}^2}{P_\text{PR}}
\end{equation}
\begin{equation}
\text{shape} = \frac{P_\text{PR}}{\text{scale}}
\end{equation}
\begin{equation}
\text{Prior Likelihood} = -\text{shape} \cdot ln(\text{scale}) - ln\big(\Gamma(\text{shape})\big) + (\text{shape} - 1) \cdot ln(P_\text{init}) - \frac{P_\text{init}}{\text{scale}}
\end{equation}
\item \textbf{Prior Type 6 = Normal prior} \\
Note that this function is independent of the parameter bounds.
\begin{equation}
\text{Prior Likelihood} = \frac{1}{2} \left(\frac{P_\text{init} - P_\text{PR}}{P_\text{PRSD}}\right)^2
\end{equation}
\end{itemize}
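For reference, a small sketch (illustrative only; function names are assumptions) of the normal and log-normal negative log likelihood calculations above:
\begin{verbatim}
import math

def nll_normal(p, prior, prior_sd):
    # Prior type 6: normal prior
    return 0.5 * ((p - prior) / prior_sd) ** 2

def nll_lognormal(p, prior, prior_sd, bias_correct=False):
    # Prior types 3 and 4: log-normal prior; 'prior' is entered in
    # natural log space and p must be > 0
    adj = 0.5 * prior_sd ** 2 if bias_correct else 0.0
    return 0.5 * ((math.log(p) - prior + adj) / prior_sd) ** 2

# e.g., a parameter with a real-space prior mean of 0.20:
# nll_lognormal(0.25, math.log(0.20), 0.3)
\end{verbatim}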
%=========Forecast Module
\input{_forecast_module}
%=========F mortality in SS3
% \input{_f_mortality}
\pagebreak