distributed_hydrostatic_turbulence.jl yields NaNs #4068
I don't think you can actually adapt the time step right now. Try with a fixed time step. Also, the setup looks weird to me:

```julia
model = HydrostaticFreeSurfaceModel(; grid,
                                    momentum_advection = VectorInvariant(vorticity_scheme=WENO(order=9)),
                                    free_surface = SplitExplicitFreeSurface(grid, substeps=10),
                                    tracer_advection = WENO(),
                                    buoyancy = nothing,
                                    coriolis = FPlane(f = 1),
                                    tracers = :c)
```

This is WENO for vorticity but second order for everything else? And no other closure. Can you modify the physics so that we have a hope the simulation will run? You want to use …
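As a rough sketch of the kind of change being suggested (not code from this thread): a fixed time step instead of an adaptive one, and WENO used for momentum and tracers alike. The grid dimensions, `WENOVectorInvariant()` choice, and `Δt` value below are illustrative assumptions, not values from the validation script.

```julia
using Oceananigans

# Illustrative grid; the actual validation script defines its own.
grid = RectilinearGrid(size = (64, 64, 1), extent = (2π, 2π, 1),
                       topology = (Periodic, Periodic, Bounded))

# WENO for momentum and tracers alike, keeping the rest of the original setup.
model = HydrostaticFreeSurfaceModel(; grid,
                                    momentum_advection = WENOVectorInvariant(),
                                    free_surface = SplitExplicitFreeSurface(grid, substeps=10),
                                    tracer_advection = WENO(),
                                    buoyancy = nothing,
                                    coriolis = FPlane(f = 1),
                                    tracers = :c)

# Fixed time step instead of an adaptive one (the Δt value is a placeholder).
simulation = Simulation(model; Δt = 0.01, stop_iteration = 1000)
run!(simulation)
```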
Very good idea! I just noticed that it was proceeding with a time step of 1e-84 before it produced a `NaN`.
Might make sense to try it in serial and make sure the setup runs before trying to distribute it.
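As a rough illustration of that workflow (an assumption on my part, not code from this thread), the same grid can be built on a plain CPU architecture first and only swapped onto a distributed architecture once the serial run is verified:

```julia
using Oceananigans

# Serial run first: plain CPU architecture, small grid, just to check the setup.
serial_grid = RectilinearGrid(CPU(); size = (64, 64, 1), extent = (2π, 2π, 1),
                              topology = (Periodic, Periodic, Bounded))

# Once the serial setup runs cleanly, swap in a distributed architecture
# (requires MPI and launching with mpiexec), e.g.:
#
#   using MPI
#   using Oceananigans.DistributedComputations
#   MPI.Init()
#   arch = Distributed(CPU(); partition = Partition(2))
#   distributed_grid = RectilinearGrid(arch; size = (64, 64, 1), extent = (2π, 2π, 1),
#                                      topology = (Periodic, Periodic, Bounded))
```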
This is my serial version of the hydrostatic script. It fails for me after 100 iterations with `NaN`s. Maybe the suggestions that @glwagner made will fix this up, and then we can go to the distributed version.
Oh, this validation is a little old and not up to date. I'll open a PR to correct it.
Thank you @simone-silvestri!
For reference, the script is here: https://github.com/CliMA/Oceananigans.jl/blob/main/validation/distributed_simulations/distributed_hydrostatic_turbulence.jl
I have tried running all the scripts in `distributed_simulations` and they all failed for me. I thought I would point out the problems I have found here so we can clean them up. First, let's start with this one: `distributed_hydrostatic_turbulence.jl`.

It starts fine but then I get `NaN`s, which suggests to me that the time step is too large. I reduced the `cfl` parameter from `0.2` to `0.1`, and instead of dying at iteration 200 it died at iteration 6100. Better, but not great. I am going to try `0.05`, but is it a concern that it ran several months ago with these parameters and now it doesn't? Also, why does the cfl need to be so small? I would think that a cfl of `0.2` should be fine for pretty much any simulation.
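For readers unfamiliar with how the `cfl` parameter enters such scripts, here is a hedged sketch of the usual Oceananigans pattern with a `TimeStepWizard`; the grid, model, numbers, and callback interval below are illustrative placeholders, not the validation script's actual values.

```julia
using Oceananigans

# Minimal stand-in setup just so there is a `simulation` to attach the wizard to;
# the real validation script builds its own grid and model.
grid = RectilinearGrid(size = (32, 32, 1), extent = (2π, 2π, 1),
                       topology = (Periodic, Periodic, Bounded))

model = HydrostaticFreeSurfaceModel(; grid,
                                    momentum_advection = VectorInvariant(),
                                    tracer_advection = WENO(),
                                    buoyancy = nothing,
                                    coriolis = FPlane(f = 1),
                                    tracers = :c)

simulation = Simulation(model; Δt = 0.01, stop_iteration = 100)

# Adaptive Δt driven by a CFL target; lowering `cfl` from 0.2 to 0.1 roughly
# halves the largest Δt the wizard will allow. These numbers are placeholders.
wizard = TimeStepWizard(cfl = 0.1, max_change = 1.1, max_Δt = 0.1)
simulation.callbacks[:wizard] = Callback(wizard, IterationInterval(10))

run!(simulation)
```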