
GSI parallel IO for FV3-LAM #248

Closed


TingLei-daprediction (Contributor)

This PR implements parallel IO for FV3-LAM, making use of parallel HDF5 underneath netCDF.
See #218.
The basic strategy is to use the well-wrapped sub2grid interface in the GSI to handle each vertical level of the fields in parallel over the whole horizontal domain, both for IO and for the conversion to/from the analysis grid; a sketch of this level-distributed reading pattern is given below.
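For illustration, here is a minimal, self-contained sketch of the underlying pattern (not the GSI code itself): each MPI rank opens the same netCDF file in parallel mode and reads its own contiguous band of full horizontal levels. The file name, variable name, and dimensions are hypothetical placeholders, and a netCDF library built against parallel HDF5 is assumed.

```fortran
program par_level_read
  use mpi
  use netcdf
  implicit none
  ! Hypothetical dimensions of a 3d field; not taken from the PR.
  integer, parameter :: nx=200, ny=200, nz=64
  integer :: ierr, rank, nprocs, ncid, varid
  integer :: nz_per_rank, kbgn
  real, allocatable :: slab(:,:,:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! Open the file for parallel access (requires netCDF built with parallel HDF5).
  call check( nf90_open('fv3_dynvars.nc', ior(NF90_NOWRITE, NF90_MPIIO), ncid, &
                        comm=MPI_COMM_WORLD, info=MPI_INFO_NULL) )
  call check( nf90_inq_varid(ncid, 'T', varid) )
  ! Collective access lets the HDF5 layer coordinate the ranks' reads.
  call check( nf90_var_par_access(ncid, varid, NF90_COLLECTIVE) )

  ! Each rank reads a contiguous band of full horizontal levels
  ! (assume nz is divisible by nprocs for this sketch).
  nz_per_rank = nz / nprocs
  kbgn = rank*nz_per_rank + 1
  allocate(slab(nx, ny, nz_per_rank))
  call check( nf90_get_var(ncid, varid, slab, &
                           start=(/1,1,kbgn/), count=(/nx,ny,nz_per_rank/)) )

  call check( nf90_close(ncid) )
  call MPI_Finalize(ierr)
contains
  subroutine check(status)
    integer, intent(in) :: status
    if (status /= nf90_noerr) then
      print *, trim(nf90_strerror(status))
      call MPI_Abort(MPI_COMM_WORLD, 1, ierr)
    end if
  end subroutine check
end program par_level_read
```

In the GSI itself, the per-level fields read this way would then be scattered to subdomains through the sub2grid interface rather than kept as whole-domain slabs.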
To reuse the existing interface for distribution/transfer between the subdomains and the whole domain, a group of GSI bundle objects is created:
`public :: fv3lam_io_dynmetvars3d_nouv, fv3lam_io_tracermetvars3d_nouv`
`public :: fv3lam_io_dynmetvars2d_nouv, fv3lam_io_tracermetvars2d_nouv`
u and v are kept out of these bundles and treated separately because, at a given vertical level, the two components must be processed together in the conversion between grid-relative and earth-relative winds, as sketched below.
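To see why u and v must travel together, note that the conversion to earth-relative winds mixes the two components at every grid point of a level. A minimal sketch, assuming a hypothetical precomputed local rotation angle `alpha` between the grid x-axis and geographic east (the actual FV3-LAM conversion in the GSI uses its own precomputed rotation coefficients on the staggered grid):

```fortran
subroutine grid2earth_winds(u_g, v_g, alpha, u_e, v_e)
  ! Rotate one level of grid-relative winds to earth-relative components.
  ! alpha is a hypothetical precomputed rotation-angle field; both wind
  ! components at the level are required at every point, which is why
  ! u and v cannot be distributed independently of each other.
  implicit none
  real, intent(in)  :: u_g(:,:), v_g(:,:), alpha(:,:)
  real, intent(out) :: u_e(:,:), v_e(:,:)
  u_e = u_g*cos(alpha) - v_g*sin(alpha)
  v_e = u_g*sin(alpha) + v_g*cos(alpha)
end subroutine grid2earth_winds
```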
Fairly complete verification has been done (see the GSI issue), and a significant speedup was demonstrated in a 3 km CONUS-domain GSI run, especially for the write step when fv3sar_bg_opt=1.
Thanks to @junpark217 for providing the namelist block and anavinfo file used to test l_use_direct_dbz=.true. (though no dbz obs are actually used).
emc-work-gsi-parallel IO performance.pptx
