Commit: Revive SSPKnoth

Sbozzolo committed May 31, 2024
1 parent b3dbb71 commit bbe3545
Showing 16 changed files with 940 additions and 331 deletions.
468 changes: 200 additions & 268 deletions docs/Manifest.toml

Large diffs are not rendered by default.

2 changes: 2 additions & 0 deletions docs/Project.toml
@@ -11,8 +11,10 @@ Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
DocumenterCitations = "daee34ce-89f3-4625-b898-19384cb65244"
Krylov = "ba0b0d4f-ebba-5204-a429-3ac8c609bfb7"
LaTeXStrings = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
NVTX = "5da4648a-3479-48b8-97b9-01cb529c0a1f"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
19 changes: 19 additions & 0 deletions docs/make.jl
@@ -1,5 +1,22 @@
using Documenter, DocumenterCitations
using ClimaTimeSteppers
using Literate

tutorial_basedir = "tutorials"
tutorial_basedir_from_here = joinpath(@__DIR__, "src", tutorial_basedir)

jl_files_in_basedir = filter(endswith(".jl"), readdir(tutorial_basedir_from_here))

println("Building literate tutorials...")
generated_tutorials = String[]
for filename in jl_files_in_basedir
    Literate.markdown(
        joinpath(tutorial_basedir_from_here, filename),
        tutorial_basedir_from_here;
        flavor = Literate.CommonMarkFlavor(),
    )
    push!(generated_tutorials, joinpath(tutorial_basedir, replace(filename, ".jl" => ".md")))
end

# https://github.com/jheinen/GR.jl/issues/278#issuecomment-587090846
ENV["GKSwstype"] = "nul"
@@ -12,12 +29,14 @@ pages = [
"Algorithm Formulations" => [
"ODE Solvers" => "algorithm_formulations/ode_solvers.md",
"Newtons Method" => "algorithm_formulations/newtons_method.md",
"Rosenbrock Method" => "algorithm_formulations/rosenbrock.md",
"Old LSRK Formulations" => "algorithm_formulations/lsrk.md",
"Old MRRK Formulations" => "algorithm_formulations/mrrk.md",
],
"Test problems" => [
"test_problems/index.md",
],
"Tutorials" => generated_tutorials,
"API docs" => [
"ODE Solvers" => "api/ode_solvers.md",
"Newtons Method" => "api/newtons_method.md",
122 changes: 122 additions & 0 deletions docs/src/algorithm_formulations/rosenbrock.md
@@ -0,0 +1,122 @@
# Rosenbrock methods

On this page, we introduce Rosenbrock-type methods for solving ordinary
differential equations. In doing so, we roughly follow Chapter IV.7 of "Solving
Ordinary Differential Equations II" by Hairer and Wanner (beware: our notation
is not the same). In this spirit, let us motivate the need for Rosenbrock-type
methods the same way Hairer and Wanner do, by quoting Rosenbrock himself:

> When the functions are non-linear, implicit equations can in general be solved
> only by iteration. This is a severe drawback, as it adds to the problem of
> stability, that of convergence of the iterative process. An alternative, which
> avoids this difficulty, is ...

...the Rosenbrock method!

Before reading this page, we recommend first reading the page on ODE solvers
[TODO: Add link].

## Introduction to the formalism

Let us consider an ordinary differential equation of the form

$$ \frac{d}{dt}u(t) = T(u(t), t)\,,$$

where $u$ is the state, $T$ is the tendency, and $t$ is the time. For the sake
of simplicity, let us ignore the explicit time dependence of the tendency for
now (we will come back to it at the end).

The simplest way to introduce the Rosenbrock method is to start from a
diagonally implicit Runge-Kutta (DIRK) scheme (see the page on DIRK). In an
implicit Runge-Kutta method with $s$ stages and tableau coefficients
$\alpha_{ij}$ and $b_i$, the updated value $u_1$ for a step of size $\Delta t$
is obtained from the known value $u_0$ with

$$ u_1 = u_0 + \sum_{i=1}^s b_i k_i\,, $$
with
$$ k_i = \Delta t T ( u_0 + \sum_{j=1}^{i-1}\alpha_{ij}k_j + \alpha_{ii} k_i)\,. $$
where $\alpha_{ij}$ and $b_i$ are carefully chosen coefficients.

Rosenbrock's idea consists in linearizing $T$. This operation can be
interpreted as taking a single iteration of a Newton solver and yields

$$ k_i = \Delta t T(g_i) + \Delta t T'(g_i) \alpha_{ii} k_i $$

with $T'(g_i)$ the Jacobian of the tendency evaluated at $g_i$, and with

$$ g_i = u_0 + \sum_{j=1}^{i-1}\alpha_{ij} k_j $$

In Rosenbrock-type methods, the Jacobian $T'(g_i)$ is replaced with a linear
combination involving only the Jacobian evaluated at $u_0$, denoted $J$:
$$ T'(g_i) \alpha_{ii} k_i \approx J \sum_{j=1}^{i-1}\gamma_{ij} k_j + J \gamma_{ii} k_i\,,$$
where $\gamma_{ij}$ are additional coefficients that can be chosen.

Now, each stage consists of solving a linear system for $k_i$ with matrix
$I - J \Delta t \gamma_{ii}$:
$$ (I - J \Delta t \gamma_{ii}) k_i = \Delta t T(g_i) + \Delta t J \sum_{j=1}^{i-1}\gamma_{ij} k_j$$
for each $i$ in $1, \dots, s$. Once the $k_i$ are known, $u_1$ is easily
computed and the process is repeated for the next step.
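
To make the stage-by-stage structure concrete, here is a minimal Julia sketch of
one Rosenbrock step in the $k_i$ variables for the autonomous case. The tendency,
Jacobian, and tableau below are illustrative placeholders, not the coefficients
of any particular method in this package.

```julia
using LinearAlgebra

# One Rosenbrock step in the k_i variables (autonomous case), following the
# equations above. T is the tendency, J its Jacobian at u0, and (α, Γ, b) is an
# illustrative tableau.
function rosenbrock_step(T, J, u0, Δt, α, Γ, b)
    s = length(b)
    k = [zero(u0) for _ in 1:s]
    for i in 1:s
        g_i = u0 + sum((α[i, j] * k[j] for j in 1:(i - 1)); init = zero(u0))
        rhs = Δt * T(g_i) + Δt * J * sum((Γ[i, j] * k[j] for j in 1:(i - 1)); init = zero(u0))
        k[i] = (I - Δt * Γ[i, i] * J) \ rhs  # one linear solve per stage
    end
    return u0 + sum(b[i] * k[i] for i in 1:s)
end

# Toy problem du/dt = A u, for which J = A is the exact Jacobian.
A = [-1.0 0.0; 0.0 -10.0]
T(u) = A * u
α = [0.0 0.0; 1.0 0.0]  # placeholder 2-stage coefficients
Γ = [0.5 0.0; -0.5 0.5]
b = [0.5, 0.5]
u1 = rosenbrock_step(T, A, [1.0, 1.0], 0.1, α, Γ, b)
```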

In practice, there are computational advantages to implementing a slight
variation of what is presented here.

Let us define $\tilde{k}_{i} = \sum_{j=1}^{i} \gamma_{ij} k_j$. If the matrix
$\Gamma = (\gamma_{ij})$ is invertible, we can move freely between the $k_i$
and the $\tilde{k}_i$. It is also convenient to define the matrix $C = (c_{ij})$ as

$$ C = \mathrm{diag}(\gamma_{11}^{-1}, \gamma_{22}^{-1}, \dots, \gamma_{ss}^{-1}) - \Gamma^{-1}\,, $$

which establishes that

$$ k_i = \frac{\tilde{k}_i}{\gamma_{ii}} - \sum_{j=1}^{i-1} c_{ij} \tilde{k}_j\,.$$
Substituting this, we obtain the following equations

$$ (J \Delta t \gamma_{ii} - I) \tilde{k}_i = - \Delta t \gamma_{ii} T(g_i) - \gamma_{ii} \sum_{j=1}^{i-1}c_{ij} \tilde{k}_j \,, $$

$$ g_i = u_0 + \sum_{j=1}^{i-1}a_{ij} \tilde{k}_j \,,$$

$$ u_1 = u_0 + \sum_{j=1}^{s} m_j \tilde{k}_j\,,$$
with
$$ (a_{ij}) = (\alpha_{ij}) \Gamma^{-1} \qquad \text{and} \qquad (m_i) = (b_i) \Gamma^{-1}\,. $$
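
As a small sketch, the transformed coefficients $(a_{ij})$, $(m_i)$, and
$(c_{ij})$ can be computed from a tableau $(\alpha, b, \Gamma)$ directly from
the definitions above; the numerical values below are placeholders used only
for illustration.

```julia
using LinearAlgebra

# Derive (a_ij), (m_i), and (c_ij) from a tableau (α, b, Γ), following the
# definitions given above.
function transformed_coefficients(α, b, Γ)
    Γinv = inv(Γ)
    C = Diagonal(1 ./ diag(Γ)) - Γinv  # C = diag(γ_11⁻¹, …, γ_ss⁻¹) - Γ⁻¹
    a = α * Γinv                       # (a_ij) = (α_ij) Γ⁻¹
    m = vec(b' * Γinv)                 # (m_i)  = (b_i) Γ⁻¹, b as a row vector
    return a, m, C
end

# Placeholder 2-stage tableau, for illustration only.
α = [0.0 0.0; 1.0 0.0]
Γ = [0.5 0.0; -0.5 0.5]
b = [0.5, 0.5]
a, m, C = transformed_coefficients(α, b, Γ)
```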

Finally, small changes are required to support an explicit time dependence in the tendency:

$$ (J \Delta t \gamma_{ii} - I) \tilde{k}_i = - \Delta t \gamma_{ii} T(t_0 + \alpha_i \Delta t, g_i) - \gamma_{ii} \sum_{j=1}^{i-1}c_{ij} \tilde{k}_j - \Delta t^2 \gamma_{ii} \gamma_i \frac{\partial T}{\partial t}(t_0, u_0)\,, $$

where we defined

$$ \alpha_{i} = \sum_{j=1}^{i-1}\alpha_{ij}\,, \qquad \gamma_{i} = \sum_{j=1}^{i}\gamma_{ij}\,. $$

> :note: You may wonder what is up with all these ugly minus signs, and why we
> do not cancel them out. The reason is that we want to keep the prefactor
> $(J \Delta t \gamma_{ii} - I)$, which is called `Wfact` in the language of
> `DifferentialEquations.jl`. Most other algorithms specify this quantity, so
> it is convenient to be consistent.

## Implementation

In `ClimaTimeSteppers.jl`, we implement the last equation presented in the
previous section. Currently, the only Rosenbrock-type algorithm implemented is
`SSPKnoth`, a 3-stage, second-order accurate Rosenbrock method with

$$ \alpha = \begin{bmatrix}
0 & 0 & 0 \\
1 & 0 & 0 \\
\frac{1}{4} & \frac{1}{4} & 0 \\
\end{bmatrix} $$

$$ \Gamma = \begin{bmatrix}
1 & 0 & 0 \\
1 & 1 & 0 \\
-\frac{3}{4} & \frac{3}{4} & 1 \\
\end{bmatrix} $$

and $$ b = (\frac{1}{6}, \frac{1}{6}, \frac{2}{3}) $$.

At each stage, the state is updated to $g_i$ and the precomputed quantities are
updated accordingly. The tendencies are then computed and, unless it is the
last stage, DSS is applied to the sum of all the tendencies. After the last
stage, DSS is applied to the incremented state.
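
For reference, a minimal usage sketch is shown below. The algorithm object is
constructed exactly as in `docs/src/plotting_utils.jl` from this commit; how it
is then passed to the usual `ClimaTimeSteppers` solve workflow (for example,
`SciMLBase.solve` on a problem built with `ClimaODEFunction`) is only indicated
in the comments and should be read as an assumption, not as part of this commit.

```julia
using ClimaTimeSteppers

# Construct the SSPKnoth Rosenbrock algorithm, mirroring docs/src/plotting_utils.jl.
alg = ClimaTimeSteppers.RosenbrockAlgorithm(ClimaTimeSteppers.tableau(ClimaTimeSteppers.SSPKnoth()))

# `alg` can then be used wherever other ClimaTimeSteppers algorithm objects are
# accepted, e.g. as the algorithm argument of `SciMLBase.solve` on an ODE
# problem built with `ClimaODEFunction` (see the test problems in the docs).
```
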
1 change: 1 addition & 0 deletions docs/src/api/ode_solvers.md
@@ -51,6 +51,7 @@ HOMMEM1
ARK2GKC
ARK437L2SA1
ARK548L2SA2
SSPKnoth
```

## Explicit Algorithm Names
6 changes: 6 additions & 0 deletions docs/src/dev/report_gen.jl
@@ -17,6 +17,10 @@ let # Convergence
title = "All Algorithms"
algorithm_names = map(T -> T(), all_subtypes(ClimaTimeSteppers.AbstractAlgorithmName))
algorithm_names = filter(name -> !(name isa ARK437L2SA1 || name isa ARK548L2SA2), algorithm_names) # too high order

# NOTE: Some imperfections in the convergence order for SSPKnoth are to be
# expected because we are not using the exact Jacobian

verify_convergence(title, algorithm_names, ark_analytic_nonlin_test_cts(Float64), 200)
verify_convergence(title, algorithm_names, ark_analytic_sys_test_cts(Float64), 400)
verify_convergence(title, algorithm_names, ark_analytic_test_cts(Float64), 40000; super_convergence = (ARS121(),))
@@ -29,6 +33,8 @@ let # Convergence
num_steps_scaling_factor = 4,
numerical_reference_algorithm_name = ARS343(),
)
rosenbrock_schemes = filter(name -> name isa ClimaTimeSteppers.RosenbrockAlgorithmName, algorithm_names)
verify_convergence(title, rosenbrock_schemes, climacore_1Dheat_test_implicit_cts(Float64), 400)
verify_convergence(
title,
algorithm_names,
4 changes: 4 additions & 0 deletions docs/src/plotting_utils.jl
@@ -59,6 +59,8 @@ imex_convergence_orders(::ARK548L2SA2) = (5, 5, 5)
imex_convergence_orders(::SSP22Heuns) = (2, 2, 2)
imex_convergence_orders(::SSP33ShuOsher) = (3, 3, 3)
imex_convergence_orders(::RK4) = (4, 4, 4)
# SSPKnoth is not really an IMEX method
imex_convergence_orders(::SSPKnoth) = (2, 2, 2)

# Compute a confidence interval for the convergence order, returning the
# estimated convergence order and its uncertainty.
@@ -113,6 +115,8 @@ function verify_convergence(
default_dt = t_end / num_steps

algorithm(algorithm_name::ClimaTimeSteppers.ERKAlgorithmName) = ExplicitAlgorithm(algorithm_name)
algorithm(algorithm_name::ClimaTimeSteppers.SSPKnoth) =
ClimaTimeSteppers.RosenbrockAlgorithm(ClimaTimeSteppers.tableau(ClimaTimeSteppers.SSPKnoth()))
algorithm(algorithm_name::ClimaTimeSteppers.IMEXARKAlgorithmName) =
IMEXAlgorithm(algorithm_name, NewtonsMethod(; max_iters = linear_implicit ? 1 : 2))
