
Separate implementation of time integration schemes #83

Closed · ranocha opened this issue Aug 18, 2020 · 12 comments · Fixed by #200
Labels: enhancement (New feature or request)

Comments

ranocha (Member) commented Aug 18, 2020

In GitLab by @ranocha on Jun 10, 2020, 13:25

We want to be able to choose different time integration schemes easily. Useful classes of low-storage schemes seem to be

  • Williamson 2N (like the classical Kennedy-Carpenter we use right now)
  • Ketcheson 3S

We could implement these classes in a general way and only adapt the coefficients for each different scheme. Then, we also need to separate the storage necessary for the RK scheme from the DG struct (but be able to change the buffers/caches when AMR is used).
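
To make this concrete, a generic Williamson-2N step could look roughly like the sketch below. This is only an illustration, not Trixi code: the names LowStorage2N, step_2N!, and rhs! are made up, and a real 2N implementation would fuse the RHS evaluation into the register update instead of keeping a separate du buffer.

# Illustrative sketch only (not Trixi code): a Williamson-2N scheme is fully
# determined by its coefficient vectors, so one stepping routine can serve
# all schemes of that class.
struct LowStorage2N{T}
  a::Vector{T}  # "A" coefficients (a[1] == 0 for the first stage)
  b::Vector{T}  # "B" coefficients
  c::Vector{T}  # abscissae
end

# One time step; rhs!(du, u, t) is the spatial semidiscretization. For clarity
# this sketch keeps a separate du buffer; a true 2N implementation would fuse
# the RHS evaluation into the register update to really use only two registers.
function step_2N!(u, du, tmp, rhs!, t, dt, scheme::LowStorage2N)
  fill!(tmp, zero(eltype(tmp)))
  for stage in eachindex(scheme.a)
    rhs!(du, u, t + scheme.c[stage] * dt)
    @. tmp = scheme.a[stage] * tmp + dt * du
    @. u   = u + scheme.b[stage] * tmp
  end
  return u
end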

ChrisRackauckas commented:

Also, they can be implemented on the common interface, and then you'd be able to swap out between: https://diffeq.sciml.ai/stable/solvers/ode_solve/#Low-Storage-Methods
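
For reference, swapping between those methods on the common interface might look roughly like this; rhs!, u0, and p are placeholders for whatever the DG semidiscretization provides, not existing code.

using OrdinaryDiffEq

# Sketch under the assumption that rhs!(du, u, p, t) wraps the DG right-hand
# side and that u0/p hold the initial state and parameters (not defined here).
prob = ODEProblem(rhs!, u0, (0.0, 1.0), p)

# Low-storage integrators can then be swapped freely, e.g. a Williamson-2N method
sol = solve(prob, CarpenterKennedy2N54(williamson_condition = false),
            dt = 1.0e-3, adaptive = false, save_everystep = false)
# or a Ketcheson-3S method
sol = solve(prob, ParsaniKetchesonDeconinck3S94(),
            dt = 1.0e-3, adaptive = false, save_everystep = false)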

sloede (Member) commented Sep 1, 2020

@ChrisRackauckas That sounds interesting. Is there an example (or specification) available as to which interface we would have to support explicitly to be able to do this?

ranocha (Member, Author) commented Sep 1, 2020

Yes, that's also on our list (and what I've suggested some months ago). We just need to figure out a convenient way to perform the custom operations we currently have between RK steps using (discrete) callbacks (a sketch of such a hook follows after the list below).

From my point of view, all of that is more or less just some work, not something fundamentally new. More difficult are:

  • local time stepping
  • multiphysics simulations having multiple solvers coupled together: we would need to wrap the DG solvers in some closures to make them work with *DiffEq
  • MPI
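
The per-step operations mentioned above could be hooked in via a DiscreteCallback; a rough sketch, where analyze_solution is a hypothetical placeholder for whatever custom work needs to run between RK steps:

using OrdinaryDiffEq

# Sketch only: fire after every accepted step and run some custom work there.
condition(u, t, integrator) = true
function affect!(integrator)
  analyze_solution(integrator.u, integrator.t)  # e.g. error analysis, AMR hook, ...
  u_modified!(integrator, false)                # the state itself was not changed
end
between_steps_callback = DiscreteCallback(condition, affect!,
                                          save_positions = (false, false))
# solve(prob, CarpenterKennedy2N54(), dt = dt, callback = between_steps_callback)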

ChrisRackauckas commented:

As a first step, you can take your methods and implement them as a dispatch like https://github.com/SciML/SimpleDiffEq.jl. I assume you have some cool stuff in your loops, given all of the papers I've seen on your adapting-to-manifolds work, so it would be nice to have that kind of functionality on the interface, and we'll add you to that page. Once you're calling into there, you might need to use some callbacks to interface with the other methods. Clima is also building and using a bunch of low-storage methods, so it would be cool to figure out exactly what your strategy is and then try it on a full climate model as well.
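
That dispatch pattern essentially amounts to defining an algorithm type and a __solve method for it. Very roughly (the algorithm name is hypothetical, only the DiffEqBase functions are real API, and the trivial Euler update merely stands in for the actual 2N stage loop):

using DiffEqBase

# Sketch of the SimpleDiffEq.jl-style pattern; Trixi2N and the stepping logic
# are placeholders.
struct Trixi2N <: DiffEqBase.AbstractODEAlgorithm end

function DiffEqBase.__solve(prob::DiffEqBase.AbstractODEProblem, alg::Trixi2N;
                            dt, kwargs...)
  t, tf = prob.tspan
  u = copy(prob.u0)
  du = similar(u)
  ts, us = [t], [copy(u)]
  while t < tf
    dtstep = min(dt, tf - t)
    prob.f(du, u, prob.p, t)   # a real method would run its 2N stage loop here
    @. u += dtstep * du        # forward Euler just keeps the sketch self-contained
    t += dtstep
    push!(ts, t); push!(us, copy(u))
  end
  return DiffEqBase.build_solution(prob, alg, ts, us)
end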

sloede (Member) commented Sep 1, 2020

There's also

  • multi-physics coupling (one ODE solver iterating during another's RK stage)
  • local time stepping

that should go on that list.

ranocha (Member, Author) commented Sep 1, 2020

What we've implemented in this repo is mostly just the basic stuff, nothing fancy concerning time integration etc.; instead, we're focusing on the spatial part. We would need to be able to fuse some operations to get maximal efficiency, e.g. for low-storage 2N methods we would like to compute the update of the solution and of the derivative parts in one pass instead of two. Looking at https://github.com/SciML/OrdinaryDiffEq.jl/blob/master/src/perform_step/low_storage_rk_perform_step.jl#L54-L62

  for i in eachindex(A2end)
    if williamson_condition
      f(ArrayFuse(tmp, u, (A2end[i], dt, B2end[i])), u, p, t+c2end[i]*dt)
    else
      @.. tmp = A2end[i]*tmp
      f(k, u, p, t+c2end[i]*dt)
      @.. tmp += dt * k
      @.. u   = u + B2end[i]*tmp
    end
  end

It's okay for us not to use the williamson_condition, but we would like to perform

      @.. tmp += dt * k
      @.. u   = u + B2end[i]*tmp

in one loop instead of going over the arrays two times, i.e. to use something like

for idx in eachindex(u)
  tmp[idx] += dt * k[idx]
  u[idx]   = u[idx] + B2end[i]*tmp[idx]
end

instead of

for idx in eachindex(u)
  tmp[idx] += dt * k[idx]
end
for idx in eachindex(u)
  u[idx]   = u[idx] + B2end[i]*tmp[idx]
end

I know that kernel fusing is on your list, @ChrisRackauckas - basically because of the clima project? But I don't know the current status.
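
A single @.. broadcast cannot express the fused version because there are two destination arrays (tmp and u), but wrapping the one-pass loop in a small kernel is straightforward; a sketch (the function name is made up):

# Sketch of a hand-fused 2N update: both registers are updated in a single
# sweep over the arrays instead of two separate broadcasts.
function fused_2N_update!(u, tmp, k, dt, B)
  @inbounds for idx in eachindex(u, tmp, k)
    tmp[idx] += dt * k[idx]
    u[idx]   += B * tmp[idx]
  end
  return nothing
end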

ranocha added the enhancement label on Sep 1, 2020
ranocha (Member, Author) commented Sep 1, 2020

Concerning the manifold stuff, I'm still trying to figure out a nice abstraction that's performant and easy to implement. Currently, I'm mostly using discrete callbacks for relaxation methods etc. It's just difficult to find a compromise between flexibility and simplicity/efficiency for simple cases.
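
Just to illustrate the kind of hook that is currently available (this is not the relaxation method itself, which needs access to the stage values): a step-postprocessing callback can modify integrator.u in place, e.g. rescaling the state to restore a quadratic invariant. All names are illustrative.

using OrdinaryDiffEq
using LinearAlgebra: norm

# Toy illustration only: after every step, rescale u so that its norm matches
# the initial value. This is *not* relaxation Runge-Kutta, just the hook.
function make_projection_callback(u0)
  target = norm(u0)
  condition(u, t, integrator) = true
  function affect!(integrator)
    integrator.u .*= target / norm(integrator.u)
    u_modified!(integrator, true)   # the state was changed inside the callback
  end
  return DiscreteCallback(condition, affect!, save_positions = (false, false))
end
# solve(prob, CarpenterKennedy2N54(), dt = dt, callback = make_projection_callback(u0))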

ChrisRackauckas commented:

Yeah we might need another hook in there.

For fusing, @kanav99 did a bunch of stuff, but I don't think it has been documented yet, so we should update the docs first and then link them. There's an "incrementing form" that now allows this kind of fusion.

ranocha (Member, Author) commented Sep 1, 2020

Okay, sounds interesting. I'm looking forward to that feature. Do you have an idea of how to set up local time stepping methods in the DiffEq ecosystem?

ChrisRackauckas commented:

Local time stepping? Like, multi-rate?

gregorgassner (Contributor) commented:

Yes, multi-rate, but not for different physics; rather for different grid cell sizes (due to AMR) and the corresponding local CFL estimates.

ranocha (Member, Author) commented Sep 9, 2020

Yes, that's also on our list (and what I've suggested some months ago). We just need to figure out a convenient way to perform the custom operations we currently have between RK steps using (discrete) callbacks.

From my point of view, all of that is more or less just some work, not something fundamentally new. More difficult are:

  • local time stepping
  • multiphysics simulations having multiple solvers coupled together: we would need to wrap the DG solvers in some closures to make them work with *DiffEq
