Implemented parallel IO for FV3-LAM, which makes use of the parallel HDF5 layer underlying netCDF.
See #218.
The basic implementation strategy is to use the well-wrapped sub2grid interface in the GSI to handle each vertical level of the fields over the whole horizontal domain in parallel, both for the IO itself and for the conversion to/from the analysis grids.
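The per-level parallelization idea can be illustrated with a minimal sketch (the actual GSI sub2grid code is Fortran; the function name and round-robin mapping here are hypothetical, for illustration only): each MPI task is assigned a subset of vertical levels and reads or writes whole-domain horizontal slabs only for those levels, after which the sub2grid/grid2sub machinery redistributes data between whole-domain levels and per-task subdomains.

```python
def assign_levels(nlevels, ntasks):
    """Round-robin assignment of vertical levels to MPI tasks.

    Hypothetical illustration of the per-level parallel IO idea;
    this is not GSI code. Each task handles the full horizontal
    domain for its assigned levels.
    """
    return {rank: [k for k in range(nlevels) if k % ntasks == rank]
            for rank in range(ntasks)}

# Example: 65 model levels spread over 4 IO tasks.
levels = assign_levels(nlevels=65, ntasks=4)
```

Each task then performs its netCDF/HDF5 reads and writes only on its own level slabs, so the per-file IO work scales with the number of tasks rather than being serialized on one task.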
To make use of the existing interface for the distribution/transfer between the subdomains and the whole domain, a group of GSI bundle objects is created and declared `public`: `fv3lam_io_dynmetvars3d_nouv`, `fv3lam_io_tracermetvars3d_nouv`, `fv3lam_io_dynmetvars2d_nouv`, and `fv3lam_io_tracermetvars2d_nouv`. u and v are treated separately because, at a given vertical level, they need to be handled together in the conversion from grid-relative to earth-relative winds.
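The reason u and v cannot be split across tasks can be seen from the rotation itself: converting grid-relative winds to earth-relative winds mixes both components at each point. The sketch below uses a generic 2-D rotation with a local rotation angle `alpha` as an assumption; the actual angles for the FV3-LAM grid come from the model's grid metadata, and this is not the GSI routine.

```python
import math

def grid_to_earth_wind(u_grid, v_grid, alpha):
    """Rotate grid-relative winds to earth-relative winds.

    Generic 2-D rotation, illustrative only: alpha (radians) stands in
    for the local grid rotation angle. Because u_earth and v_earth each
    depend on BOTH u_grid and v_grid, the two components at a given
    level must be processed together, unlike the other fields.
    """
    u_earth = u_grid * math.cos(alpha) - v_grid * math.sin(alpha)
    v_earth = u_grid * math.sin(alpha) + v_grid * math.cos(alpha)
    return u_earth, v_earth
```

This coupling is why the `*_nouv` bundles exclude u and v, and why the winds are distributed so that both components of a level land on the same task.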
Fairly complete verification has been done (see the GSI issue), and in a 3 km CONUS domain GSI run a significant speedup was demonstrated, especially for the write step when fv3sar_bg_opt=1.
Thanks to @junpark217 for providing the namelist block and anavinfo file used to test l_use_direct_dbz=.true. (though no dbz obs are actually used).
emc-work-gsi-parallel IO performance.pptx