feat req: Memory scalable static data in DART - both static across the ensemble and per-ensemble member static data #744
Comments
WRF PHB is read from a wrfinput template file, but is PHB in every wrf file?
Here's a question that might influence our choices. Here's a framework for thinking about names for the kinds of data filter needs to store.
I prefer leaving "prognostic" and "diagnostic" for classifying variables in models.
Yes, for sure it is "easy" - it is just counting things.
As far as I know PHB (base state geopotential) is in every wrfinput file. It is static both across ensemble members and in time. It needs to be summed with PH (perturbation geopotential) to provide the actual geopotential.
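For reference (the PH + PHB sum is stated in the comment above; the conversion to height is the standard WRF convention, added here for context):

$$\Phi = \mathrm{PHB} + \mathrm{PH}, \qquad z = \frac{\Phi}{g}, \qquad g \approx 9.81\ \mathrm{m\,s^{-2}}$$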
There are several model_mods and core DART modules that have a fixed-size memory requirement on each processor. The total memory usage is static_mem * num_procs (it does not shrink as you add processors), and this is a hard limit on the model size in DART.
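To make the scaling concrete (illustrative numbers, not taken from the issue): a single 8-byte static field with 5 x 10^7 elements is ~0.4 GB per task, so replicating it across 1,000 MPI tasks consumes ~400 GB in aggregate, even though only one copy is ever needed.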
Goal: static memory usage that is independent of the number of processors, rather than the current static_mem * num_procs.
Note the code may need to be sensible about which static data is tiny (fine to keep on every core) vs. large (worth sharing or distributing).
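One possible mechanism for the large case (a sketch, not an existing DART interface; the program name, field_len, and static_field are all illustrative) is an MPI-3 shared-memory window, so each shared-memory node stores one copy of a static field instead of one copy per MPI task:

```fortran
program shared_static
   use mpi_f08
   use iso_c_binding, only : c_ptr, c_f_pointer
   implicit none

   type(MPI_Comm) :: node_comm
   type(MPI_Win)  :: win
   type(c_ptr)    :: base
   integer(kind=MPI_ADDRESS_KIND) :: win_size
   integer :: node_rank, disp_unit
   integer, parameter :: field_len = 1000000   ! illustrative size
   real(8), pointer :: static_field(:)

   call MPI_Init()

   ! Group the ranks that share a memory node
   call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                            MPI_INFO_NULL, node_comm)
   call MPI_Comm_rank(node_comm, node_rank)

   ! Only rank 0 on each node contributes storage; others allocate 0 bytes
   win_size = 0_MPI_ADDRESS_KIND
   if (node_rank == 0) win_size = 8_MPI_ADDRESS_KIND * field_len
   call MPI_Win_allocate_shared(win_size, 8, MPI_INFO_NULL, node_comm, base, win)

   ! Non-root ranks look up the address of rank 0's segment
   if (node_rank /= 0) call MPI_Win_shared_query(win, 0, win_size, disp_unit, base)
   call c_f_pointer(base, static_field, [field_len])

   ! Rank 0 fills the field once (in practice: read PHB from the template
   ! file); fences make the data visible to the other ranks on the node
   call MPI_Win_fence(0, win)
   if (node_rank == 0) static_field = 0.0_8      ! placeholder for file read
   call MPI_Win_fence(0, win)

   ! ... every rank on the node can now read static_field directly ...

   call MPI_Win_free(win)
   call MPI_Finalize()
end program shared_static
```

With this layout the aggregate cost drops from static_mem * num_procs to static_mem * num_nodes.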
Static data in DART:

- Static data that is the same across the ensemble (for example, WRF's PHB base-state geopotential, discussed in the comments above).
- Per-ensemble-member static data. This gets put into the state at the moment, so it is inflated (and maybe should not be). An example (I think) is the CLM fields that are 'no-update'; see bug: inflation files when using 'no copy back' variables #276
In addition (going as a separate issue): observation sequence files are held on every core, which is particularly costly for external forward operators, whose values are stored in the obs sequence.
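The same node-level sharing sketched above could in principle apply here as well: one rank per node reads the obs sequence and the other ranks access it through a shared window. This is an assumption about a possible approach, not an existing DART feature.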