
Enable AWS Parallel Works platform and Add Comprehensive End-To-End Tests #333

Merged 17 commits from feature/ci_tests into ufs-community:develop on Sep 29, 2022

Conversation

jessemcfarland
Collaborator

jessemcfarland commented Jul 26, 2022

DESCRIPTION OF CHANGES:

First, the AWS Parallel Works platform has been activated. A couple of additional minor changes were necessary in order to get a successful build and test run: mapping the Parallel Works cluster names to noaacloud and ensuring that the PROJ_LIB environment variable is set.
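A minimal sketch of that mapping (the cluster-name patterns and the fallback path are assumptions, not the exact code):

```bash
# Collapse any recognized Parallel Works cluster name to the common
# "noaacloud" platform label used by the rest of the pipeline.
# The *-aws/*-azure/*-gcp patterns are illustrative assumptions.
case "${SRW_PLATFORM}" in
  *-aws|*-azure|*-gcp)
    platform='noaacloud'
    ;;
  *)
    platform="${SRW_PLATFORM}"
    ;;
esac

# Ensure PROJ_LIB is set so proj-based tools can locate their data files;
# the fallback path is an assumption.
export PROJ_LIB="${PROJ_LIB:-${CONDA_PREFIX}/share/proj}"
```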

Second, support for the comprehensive workflow/end-to-end tests was added to the Jenkins pipeline and the unified test script. A boolean parameter, SRW_WE2E_COMPREHENSIVE_TESTS, was added to the Jenkins pipeline; it can be used to execute the comprehensive test suite manually on the desired branches. In addition, logic was added to the test stage to scan Pull Request labels for a specific label, run_we2e_comprehensive_tests; if the label is present, it overrides the value of the SRW_WE2E_COMPREHENSIVE_TESTS parameter. The list of comprehensive workflow/end-to-end tests was added to the unified test script.
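Inside the unified test script, the selection then reduces to a single branch on that parameter; a sketch, assuming the two test arrays are already declared:

```bash
# SRW_WE2E_COMPREHENSIVE_TESTS arrives from the Jenkins pipeline as the
# string "true" or "false"; the run_we2e_comprehensive_tests PR label can
# override it upstream.
declare -a we2e_tests
if [[ "${SRW_WE2E_COMPREHENSIVE_TESTS}" == 'true' ]]; then
  we2e_tests=("${we2e_comprehensive_tests[@]}")
else
  we2e_tests=("${we2e_fundamental_tests[@]}")
fi
```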

TESTS CONDUCTED:

This PR is the test. Two new labels should be created: run_we2e_fundamental_tests and run_we2e_comprehensive_tests. First, the run_we2e_fundamental_tests label should be applied to the PR, which should result in the pipeline executing the default set of tests. Next, the run_we2e_fundamental_tests label should be removed, the run_we2e_comprehensive_tests label should be added, and a comment with the text "REBUILD" should be added to the PR; this should result in the pipeline executing the comprehensive set of tests. NOTE: The test results can be found under the S3 Artifacts section of a Jenkins build.

@JeffBeck-NOAA
Collaborator

@jessemcfarland, thanks for getting the full suite of comprehensive tests integrated into Jenkins! Can we use a run_we2e_fundamental_tests label instead of run_we2e_default_tests? This will match the 'fundamental' vs 'comprehensive' paradigm. Thanks!

@jessemcfarland
Collaborator Author

> @jessemcfarland, thanks for getting the full suite of comprehensive tests integrated into Jenkins! Can we use a run_we2e_fundamental_tests label instead of run_we2e_default_tests? This will match the 'fundamental' vs 'comprehensive' paradigm. Thanks!

Yes! That's actually a copy/paste mistake; it works as you described, though.


```bash
# The fundamental set of end-to-end tests to run.
declare -a we2e_fundamental_tests
we2e_fundamental_tests=('grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16'
```
Collaborator

I think there are more fundamental tests than these. Same for comprehensive tests. Will comment more soon.

Collaborator

@jessemcfarland Here's the latest set of fundamental tests:

grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta
grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_RAP_suite_HRRR
grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_HRRR
grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_RRFS_v1beta
grid_RRFS_CONUScompact_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_HRRR
grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR
grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta
grid_SUBCONUS_Ind_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_HRRR
grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta
nco_grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR
community_ensemble_2mems
custom_ESGgrid
deactivate_tasks
inline_post
MET_ensemble_verification
MET_verification
nco_ensemble
pregen_grid_orog_sfc_climo
specify_DOT_OR_USCORE
specify_DT_ATMOS_LAYOUT_XY_BLOCKSIZE
specify_RESTART_INTERVAL
specify_template_filenames

Also, you can remove subhourly_post and subhourly_post_ensemble_2mems from the comprehensive tests because I think the subhourly post feature (which really should be called "subhourly forecast output" or something similar) is broken.

Please go ahead and update the list of tests here. I know I said it during the meeting, but I'll mention it again here for the record: @mkavulich is going to add a variable to the WE2E test configuration files that indicates whether a given test is part of the fundamental list, and he will add a flag to run_WE2E_tests.sh that runs just the fundamental set of tests. Once that feature is in, you won't have to maintain a list in this script.

Finally, I just checked the run_WE2E_tests.sh script, and I don't see any if-statements, etc., that remove tests that shouldn't be run on certain platforms, e.g. removing get_from_HPSS_... on platforms that don't have access to NOAA HPSS. I remember @mkavulich mentioned that he wanted to add that to the script (so you won't have to do it in this script, and the capability will be available to users who run tests from the command line), but I don't know where that stands.

Collaborator Author

@gsketefian I updated the fundamental tests list (and, subsequently, the comprehensive tests list). NOTE: I have logic in the script that adds the MET_ensemble_verification, MET_verification, and pregen_grid_orog_sfc_climo tests only when the platform is NOT Gaea or one of the Parallel Works clusters. Is this correct?

Can you verify that the comprehensive tests list is correct as well? Is there a definitive list of all tests somewhere in the repository?
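For reference, the platform check described above is conceptually just a guard before appending those tests; a sketch, with the platform variable name assumed:

```bash
# Skip the MET verification and pregen-grid tests on Gaea and on the
# Parallel Works clusters (which map to "noaacloud").
if [[ "${platform}" != 'gaea' && "${platform}" != 'noaacloud' ]]; then
  we2e_fundamental_tests+=('MET_ensemble_verification'
                           'MET_verification'
                           'pregen_grid_orog_sfc_climo')
fi
```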

Collaborator

Hi @jessemcfarland, I just checked on Gaea, and it looks like METplus is not installed there (and the MET version is pretty old; it's v8.1, but the latest is v10.1.1), so you're right, the MET_ensemble_verification and MET_verification tests cannot be run there. For pregen_grid_orog_sfc_climo, I cannot tell whether the files needed to run that test are in the staging directory because the staging directory does not have read permission for "other" (so I can't see its contents). I assume you tried running pregen_grid_orog_sfc_climo and it failed, is that right?

For now, let's do what you're doing and remove these 3 tests on gaea and noaacloud, but we should discuss this at the next code management meeting. I've put some points to discuss in the meeting notes (feel free to raise other concerns there). One outstanding question is what to do with tests that work on some platforms but not others. Do we still run the test so we're reminded that there's a failure that needs to be fixed? Removing them here in this script is a bit sneaky in that the user may not realize it's happening and will assume all tests passed.

For the comprehensive tests, the idea is that we don't need a list and should run all available tests, so eventually (after @mkavulich is done with his changes) we shouldn't need a list here. For now, below is a list that you can use for the develop branch (I think the one you have was for the last release branch). I've removed subhourly_post and subhourly_post_ensemble_2mems from the list, as well as a couple of entries that are actually alternate names (nco_inline_post and template_vars, which are just symlinks to nco_grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR and deactivate_tasks, respectively).

MET_ensemble_verification
MET_verification
community_ensemble_008mems
community_ensemble_2mems
custom_ESGgrid
custom_GFDLgrid
custom_GFDLgrid__GFDLgrid_USE_NUM_CELLS_IN_FILENAMES_eq_FALSE
custom_GFDLgrid__GFDLgrid_USE_NUM_CELLS_IN_FILENAMES_eq_TRUE
deactivate_tasks
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_grib2_2019061200
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_grib2_2019101818
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_grib2_2020022518
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_grib2_2020022600
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_grib2_2021010100
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio_2019061200
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio_2019101818
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio_2020022518
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio_2020022600
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio_2021010100
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_netcdf_2021062000
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_netcdf_2022060112_48h
get_from_HPSS_ics_GSMGFS_lbcs_GSMGFS
get_from_HPSS_ics_HRRR_lbcs_RAP
get_from_HPSS_ics_RAP_lbcs_RAP
get_from_NOMADS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio
inline_post
nco_ensemble
pregen_grid_orog_sfc_climo
specify_DOT_OR_USCORE
specify_DT_ATMOS_LAYOUT_XY_BLOCKSIZE
specify_EXTRN_MDL_SYSBASEDIR_ICS_LBCS
specify_RESTART_INTERVAL
specify_template_filenames
grid_CONUS_25km_GFDLgrid_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_CONUS_3km_GFDLgrid_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_AK_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_AK_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2
grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_HRRR
grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta
grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_2017_gfdlmp
grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_2017_gfdlmp_regional
grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2
grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_HRRR
grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta
grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_RAP_suite_HRRR
grid_RRFS_CONUS_25km_ics_GSMGFS_lbcs_GSMGFS_suite_GFS_2017_gfdlmp
grid_RRFS_CONUS_25km_ics_GSMGFS_lbcs_GSMGFS_suite_GFS_v15p2
grid_RRFS_CONUS_25km_ics_GSMGFS_lbcs_GSMGFS_suite_GFS_v16
grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_HRRR
grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_RRFS_v1beta
grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15_thompson_mynn_lam3km
grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2
grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_HRRR
grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta
grid_RRFS_CONUScompact_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_HRRR
grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta
grid_RRFS_CONUScompact_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_HRRR
grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta
grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR
grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta
grid_RRFS_CONUScompact_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2
grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_HRRR
grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta
grid_RRFS_NA_13km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta
grid_RRFS_NA_3km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta
grid_RRFS_SUBCONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_RRFS_SUBCONUS_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2
grid_SUBCONUS_Ind_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_HRRR
grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta
nco_grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16
nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15_thompson_mynn_lam3km
nco_grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR

jessemcfarland force-pushed the feature/ci_tests branch 2 times, most recently from e52ca97 to 5476f88 on July 28, 2022 16:30
@gsketefian
Collaborator

@jessemcfarland Please let me know if/when you'd like me to take another look at this PR. Thanks.

jessemcfarland added the run_we2e_comprehensive_tests label (Run the comprehensive set of SRW end-to-end tests) on Aug 17, 2022
@jessemcfarland
Collaborator Author

REBUILD

mkavulich added a commit to mkavulich/ufs-srweather-app that referenced this pull request Aug 26, 2022
* Fix to post flat file.

* Create MET and METplus config files under ush/templates/parm

* Added script to pull and reorg ccpa data. Added a script to run gridstat with METplus. Updated MET and METplus config files.

* Added new jjob for running grid-stat vx. Updated setup.sh to include grid-stat vx. Updated run_gridstatvx script.

* Fixed typo on script name from ksh to sh

* Moved some hard coded items out from the script to the XML

* Updates to get METplus to run with fewer hard-coded paths.

* Updates to add grid-stat task to XML generation.

* Bug fixes for adding grid-stat to XML generation

* Updates to remove hard-coded paths in config files

* Change log dir to put master_metplus log file with other logs under log/, rather than default logs/.

* Updates to generate xml without hard-coded paths for MET

* Add hera gridstat module file

* Add METplus point-stat task for both sfc and upper air

* Small tweaks to remove hard coded paths and add some flexibility

* Updates for adding point-stat into auto-generated xml

* Add in function to set point-stat task to FALSE

* Final tweaks to get it to generate the xml correctly

* Minor updates to ensure runs at 0, 6, 12, 18

* Tweaks to var list for Point-Stat

* Add METplus settings to config_defaults

* Move quote for end of settings and fix extra comment.

* Fix typos to populate templates correctly

* Updated to include SCRIPTSDIR and other MET specific settings along with updates to FHR syntax

* Update module loads on hera

* Fixed comment for BOTH_VARn_THRESH to avoid syntax issues

* Added files to run grid_stat for a variety of accumulation intervals, including 3, 6, and 24h

* Added module load hpss

* Remove module load information from these scripts

* Updated the method of turning on/off vx tasks using jinja template if statement

* Remove commented out lines of code. Fixed typo. Removed gen_wflow.out file.

* Updated pull scripts to have file names dependent on the date to pull from HPSS. Updated scripts to export a few more local variables that the METplus conf needed. Updated workflow to use the service queue (for now) for 1h grid_stat and point_stat runs and the default queue for 3+h accumulation grid_stat runs.

* Moved common_hera.conf to common.conf; it includes no platform-specific information that needs to be handled.

* Remove common_hera.conf

* Add scripts to pull and process MRMS data from NOAA HPSS

* Updates for REFC vx tasks

* updates to obs pull scripts

* Update for adding in reflectivity verification using MRMS analyses and updating name of model output to RRFS rather than HRRR

* Updates to account for CCPA issues on HPSS (the 00-05 UTC directories are off by a day)

* Verification mods to feature/add metplus (#1)

* Remove unused/outdated code (ufs-community#313)

## DESCRIPTION OF CHANGES:
* In setup.sh and generate_FV3LAM_wflow.sh, remove temporary code that fixes bugs in the FV3_GFS_2017_gfdlmp_regional suite definition file because those bugs have been fixed (in the ufs-weather-model repo).
* In setup.sh, remove block of code that is no longer necessary because chgres_cube can now initialize from external model data with either 4 or 9 soil levels, and run with LSMs of either 4 or 9 soil levels.
* Remove modifications to LD_LIBRARY_PATH in exregional_run_fcst.sh.
* For the make_ics and make_lbcs tasks, move the setting of APRUN and other machine-specific actions from the J-job to the ex-script in order to be consistent with the other workflow tasks.
* Fix indentation and edit comments.
* Remove unused file load_fv3gfs_modules.sh.

## TESTS CONDUCTED: 
Ran two WE2E tests on hera, new_ESGgrid and new_GFDLgrid:
* new_ESGgrid uses the FV3_GFS_2017_gfdlmp_regional suite.  The test was successful.
* new_GFDLgrid uses the FV3_GFS_2017_gfdlmp suite.  The test was successful.

## ISSUE (optional): 
This resolves issue ufs-community#198.

* Add and call a function that checks for use of Thompson microphysics parameterization in the SDF and if so, adjusts certain workflow arrays to contain the names and other associated values of the fixed files needed by this parameterization so that those files are automatically copied and/or linked to. (ufs-community#319)

## DESCRIPTION OF CHANGES: 
Add and call a function that checks for use of Thompson microphysics parameterization in the suite definition file (SDF).  If not, do nothing.  If so, add to the appropriate workflow arrays the names and other associated values of the fixed files needed by this parameterization so that they are automatically copied and/or linked to instead of being regenerated from scratch in the run_fcst task.

## TESTS CONDUCTED: 
On hera, ran two WE2E tests, one in NCO mode (nco_RRFS_CONUS_25km_HRRRX_RAPX) and the other in community mode (suite_FV3_GSD_v0).  These use suites FV3_GSD_SAR and FV3_GSD_v0, respectively, and both of these call Thompson microphysics.  Both succeeded.

## ISSUE (optional):
This PR resolves issue ufs-community#297.

* RRFS_v1beta SDF changes after reverting from GSL to GFS GWD suite (ufs-community#322) (ufs-community#327)

## DESCRIPTION OF CHANGES:
Removed checks on the RRFS_v1beta SDF implemented for use with the GSL GWD suite (now uses the GFS GWD suite).  No longer copies staged orography files necessary for the GSL GWD suite.

## TESTS CONDUCTED:
Runs to completion on Hera. End-to-end runs DOT_OR_USCORE and suite_FV3_RRFS_v1beta succeeded on Cheyenne.

Co-authored-by: JeffBeck-NOAA <55201531+JeffBeck-NOAA@users.noreply.github.com>

* Update FV3.input.nml for fhzero = 1.0

* Updated conf files for file name conventions.

* Updated MET scripts and MRMS pull scripts.

* Adjust RRFS_CONUS_... grids (ufs-community#294)

## DESCRIPTION OF CHANGES: 
* Adjust RRFS_CONUS_25km, RRFS_CONUS_13km, and RRFS_CONUS_3km grid parameters so that:
  * All grids, including their 4-cell-wide halos, lie completely within the HRRRX domain.
  * All grids have dimensions nx and ny that factor "nicely", i.e. they don't have factors greater than 7.
  * The write-component grids corresponding to these three native grids cover as much of the native grids as possible without going outside of the native grid boundaries.  The updated NCL scripts (see below) were used to generate the write-component grid parameters.
* For the RRFS_CONUS_13km grid, reduce the time step (DT_ATMOS) from 180sec to 45sec.  This is necessary to get a successful forecast with the GSD_SAR suite, and thus likely also the RRFS_v1beta suite.
* Modify WE2E testing system as follows:
  * Add new tests with the RRFS_CONUS_25km, RRFS_CONUS_13km, and RRFS_CONUS_3km grids that use the GFS_v15p2 and RRFS_v1beta suites (which are now the ones officially supported in the first release of the short-range weather app) instead of the GFS_v16beta and GSD_SAR suites, respectively.
  * For clarity, rename the test configuration files that use the GFS_v16beta and GSD_SAR suites so they include the suite name.
  * Update list of WE2E tests (baselines_list.txt).
* Update the NCL plotting scripts to be able to plot grids with the latest version of the workflow.

## TESTS CONDUCTED: 
On hera, ran tests with all three grids with the GFS_v15p2 and RRFS_v1beta suites (a total of 6 tests).  All were successful.

* Remove redundant model_configure.${CCPP_PHYS_SUITE} template files; use Jinja2 to create model_configure (ufs-community#321)

## DESCRIPTION OF CHANGES:
* Remove model_configure template files whose names depend on the physics suite, i.e. files with names of the form model_configure.${CCPP_PHYS_SUITE}.  Only a single template file is needed because the contents of the model_configure file are not suite dependent.  This leaves just one template file (named model_configure).
* Change the function create_model_configure_file.sh and the template file model_configure so they use jinja2 instead of sed to replace placeholder values.
* Absorb the contents of the write-component template files wrtcmp_lambert_conformal, wrtcmp_regional_latlon, and wrtcmp_rotated_latlon into the new jinja2-compliant model_configure file.  We can do this because Jinja2 allows use of if-statements in the template file.
* In the new model_configure jinja2 template file, include comments to explain the various write-component parameters.

## TESTS CONDUCTED:
On Hera, ran the two WE2E tests new_ESGgrid and new_GFDLgrid.  The first uses a "lambert_conformal" type of write-component grid, and the second uses a "rotated_latlon" type of write-component grid.  (The write-component also allows "regional_latlon" type grids, which is just the usual earth-relative latlon coordinate system, but we do not have any cases that use that.)  Both tests succeeded.

## ISSUE (optional): 
This PR resolves issue ufs-community#281.

* Add Thompson ice- and water-friendly aerosol climo file support (ufs-community#332)

* Add if statement in set_thompson_mp_fix_files.sh to source Thompson climo file when using a combination of a Thompson-based SDF and non-RAP/HRRR external model data

* Modify if statement based on external models for Thompson climo file

* Remove workflow variable EMC_GRID_NAME (ufs-community#333)

## DESCRIPTION OF CHANGES: 
* Remove the workflow variable EMC_GRID_NAME.  Henceforth, PREDEF_GRID_NAME is the only variable that can be used to set the name of the predefined grid to use.
* Make appropriate change of variable name (EMC_GRID_NAME --> PREDEF_GRID_NAME) in the WE2E test configuration files.
* Change anywhere the "conus" and "conus_c96" grids are specified to "EMC_CONUS_3km" and "EMC_CONUS_coarse", respectively.
* Rename WE2E test configuration files with names containing the strings "conus" and "conus_c96" by replacing these strings with "EMC_CONUS_3km" and "EMC_CONUS_coarse", respectively.
* Update the list of WE2E test names (tests/baselines_list.txt).
* Bug fixes not directly related to grids:
  * In config.nco.sh, remove settings of QUEUE_DEFAULT, QUEUE_HPSS, and QUEUE_FCST since these are now set automatically (due to another PR).
  * In the template file FV3LAM_wflow.xml, add the ensemble member name after RUN_FCST_TN in the dependency of the run_post metatask.

## TESTS CONDUCTED: 
Since this change only affects runs in NCO mode, the following NCO-mode WE2E tests were rerun on hera, all successfully:
```
nco_EMC_CONUS_3km                                       SUCCESS
nco_EMC_CONUS_coarse                                    SUCCESS
nco_EMC_CONUS_coarse__suite_FV3_GFS_2017_gfdlmp         SUCCESS
nco_RRFS_CONUS_25km_HRRRX_RAPX                          SUCCESS
nco_RRFS_CONUS_3km_FV3GFS_FV3GFS                        SUCCESS
nco_RRFS_CONUS_3km_HRRRX_RAPX                           SUCCESS
nco_ensemble                                            SUCCESS
```

* Port workflow to Orion (ufs-community#309)

## DESCRIPTION OF CHANGES:
* Add stanzas for Orion where necessary.
* Add new module files for Orion.
* On Orion, both the slurm partition and the slurm QOS need to be specified in the rocoto XML in order to be able to have wall times longer than 30 mins (the partition needs to be specified because it is by default "debug", which has a limit of 30 mins).  Thus, introduce modifications to more easily specify slurm partitions:
    * Remove the workflow variables QUEUE_DEFAULT_TAG, QUEUE_HPSS_TAG, and QUEUE_FCST_TAG that are currently used to determine whether QUEUE_DEFAULT, QUEUE_HPSS, and QUEUE_FCST specify the names of queue/QOS's or slurm partitions.
    * Add the workflow variables PARTITION_DEFAULT_TAG, PARTITION_HPSS_TAG, and PARTITION_FCST_TAG.  These will be used to specify slurm partitions only, and the variables QUEUE_DEFAULT, QUEUE_HPSS, and QUEUE_FCST will be used to specify queues/QOS's only.

IMPORTANT NOTE:
On Orion, in order to load the regional_workflow environment needed for generating an experiment, the user must first issue the following commands:
```
module use -a /apps/contrib/miniconda3-noaa-gsl/modulefiles
module load miniconda3
conda activate regional_workflow
```

## TESTS CONDUCTED:
Ran 11 WE2E tests on Orion, Hera, and Cheyenne.

Results on Orion:
```
community_ensemble_2mems          SUCCESS
DOT_OR_USCORE                     SUCCESS
grid_GSD_HRRR_AK_50km             FAILURE - In the run_fcst task.
  * Error message:
  !!! (1) Error in subr radiation_aerosols: unrealistic surface pressure =
           1                     NaN
new_ESGgrid                       SUCCESS
new_GFDLgrid                      SUCCESS
regional_001                      SUCCESS
regional_002                      SUCCESS
suite_FV3_GFS_v15p2               SUCCESS
suite_FV3_GFS_v16beta             SUCCESS
suite_FV3_GSD_SAR                 SUCCESS
suite_FV3_GSD_v0                  SUCCESS
```
Results on Hera:
```
community_ensemble_2mems          SUCCESS
DOT_OR_USCORE                     SUCCESS
grid_GSD_HRRR_AK_50km             SUCCESS
new_ESGgrid                       SUCCESS
new_GFDLgrid                      SUCCESS
regional_001                      SUCCESS
regional_002                      SUCCESS
suite_FV3_GFS_v15p2               SUCCESS
suite_FV3_GFS_v16beta             SUCCESS
suite_FV3_GSD_SAR                 SUCCESS
suite_FV3_GSD_v0                  SUCCESS
```
Results on Cheyenne:
```
community_ensemble_2mems          SUCCESS
DOT_OR_USCORE                     SUCCESS
grid_GSD_HRRR_AK_50km             FAILURE - In run_fcst task.
  * Error message:
  !!! (1) Error in subr radiation_aerosols: unrealistic surface pressure =
           1                     NaN
new_ESGgrid                       SUCCESS
new_GFDLgrid                      SUCCESS
regional_001                      SUCCESS
regional_002                      SUCCESS
suite_FV3_GFS_v15p2               SUCCESS
suite_FV3_GFS_v16beta             SUCCESS
suite_FV3_GSD_SAR                 SUCCESS
suite_FV3_GSD_v0                  SUCCESS
```
All succeeded except grid_GSD_HRRR_AK_50km on Orion and Cheyenne. It is not clear why grid_GSD_HRRR_AK_50km fails on Orion and Cheyenne but not on Hera; this seems to point to a bug in the forecast model. These two failures are not so important since this grid will soon be deprecated.

Also tested successfully on Jet by @JeffBeck-NOAA and on Odin and Stampede by @ywangwof.

## ISSUE:
This resolves Issue ufs-community#152.

## CONTRIBUTORS:
@JeffBeck-NOAA @ywangwof @christinaholtNOAA

* Removed comments from exregional_get_mrms_files.sh and removed fhzero from FV3.input.yml

* Update FV3.input.nml for fhzero = 1.0

* Updated conf files for file name conventions.

* Updated MET scripts and MRMS pull scripts.

* Removed comments from exregional_get_mrms_files.sh and removed fhzero from FV3.input.yml

Co-authored-by: gsketefian <31046882+gsketefian@users.noreply.github.com>
Co-authored-by: Michael Kavulich <kavulich@ucar.edu>
Co-authored-by: JeffBeck-NOAA <55201531+JeffBeck-NOAA@users.noreply.github.com>
Co-authored-by: Jamie Wolff <jwolff@ucar.edu>

* Change cov_thresh for REFL to be a true max in nbrhood as SPC does.

* Job script for get_obs_ccpa

* Jobs script for get_obs_mrms

* Jobs script for get_obs_ndas

* Added external variables necessary to get_ccpa script

* Updated workflow template with separate get obs tasks

* Separated pull scripts from run scripts

* Added necessary defaults/values for defining pull tasks

* Added module files, default config.sh options, and changed dependencies for vx tasks

* Changed name of new workflow to FV3LAM_wflow.xml

* Added task get_obs_tn, removed config.sh, updated config_defaults and config.community.sh

* Adjusted the community and default config files based on comments

* Updated FV3LAM workflow

* Fixed discrepancies in config.community.sh

* Fixed discrepancies in config_defaults.sh

* Fixed discrepancies in config_defaults.sh round 2

* Fixed discrepancies in config_defaults.sh round 3

* Fixed discrepancies in config_defaults.sh round 4

* Fixed discrepancies in config.community.sh round 2

* Fixed discrepancies in config.community.sh round 3

* Fixed discrepancies in generate_FV3LAM_wflow.sh

* Fixed discrepancies in generate_FV3LAM_wflow.sh round 2

* Fixed discrepancies in generate_FV3LAM_wflow.sh round 3

* Updated FV3LAM_wflow template

* Separated Pull Data Scripts from Run Vx Scripts: Feature/add_metplus (ufs-community#2)

* Job script for get_obs_ccpa

* Jobs script for get_obs_mrms

* Jobs script for get_obs_ndas

* Added external variables necessary to get_ccpa script

* Updated workflow template with separate get obs tasks

* Separated pull scripts from run scripts

* Added necessary defaults/values for defining pull tasks

* Added module files, default config.sh options, and changed dependencies for vx tasks

* Changed name of new workflow to FV3LAM_wflow.xml

* Added task get_obs_tn, removed config.sh, updated config_defaults and config.community.sh

* Adjusted the community and default config files based on comments

* Updated FV3LAM workflow

* Fixed discrepancies in config.community.sh

* Fixed discrepancies in config_defaults.sh

* Fixed discrepancies in config_defaults.sh round 2

* Fixed discrepancies in config_defaults.sh round 3

* Fixed discrepancies in config_defaults.sh round 4

* Fixed discrepancies in config.community.sh round 2

* Fixed discrepancies in config.community.sh round 3

* Fixed discrepancies in generate_FV3LAM_wflow.sh

* Fixed discrepancies in generate_FV3LAM_wflow.sh round 2

* Fixed discrepancies in generate_FV3LAM_wflow.sh round 3

* Updated FV3LAM_wflow template

* Fixed the dependencies of the vx tasks

* Fixed Vx Task Dependencies in Workflow: Feature/add metplus (ufs-community#3)

* Job script for get_obs_ccpa

* Jobs script for get_obs_mrms

* Jobs script for get_obs_ndas

* Added external variables necessary to get_ccpa script

* Updated workflow template with separate get obs tasks

* Separated pull scripts from run scripts

* Added necessary defaults/values for defining pull tasks

* Added module files, default config.sh options, and changed dependencies for vx tasks

* Changed name of new workflow to FV3LAM_wflow.xml

* Added task get_obs_tn, removed config.sh, updated config_defaults and config.community.sh

* Adjusted the community and default config files based on comments

* Updated FV3LAM workflow

* Fixed discrepancies in config.community.sh

* Fixed discrepancies in config_defaults.sh

* Fixed discrepancies in config_defaults.sh round 2

* Fixed discrepancies in config_defaults.sh round 3

* Fixed discrepancies in config_defaults.sh round 4

* Fixed discrepancies in config.community.sh round 2

* Fixed discrepancies in config.community.sh round 3

* Fixed discrepancies in generate_FV3LAM_wflow.sh

* Fixed discrepancies in generate_FV3LAM_wflow.sh round 2

* Fixed discrepancies in generate_FV3LAM_wflow.sh round 3

* Updated FV3LAM_wflow template

* Fixed the dependencies of the vx tasks

* Manual merge with develop that didn't seem to work before. Trying to get feature branch updated so it will run again!

* Add local module files

* Add environment variable for SCRIPTSDIR

* Remove echo statement

* Remove old module files

* Update to config_default for walltime for ndas pull. Update to metplus parm for obs file template. Update to FV3LAM xml to not include 00 hour for verification

* Update template to remove full path

* Verification changes for obs. (ufs-community#4)

* Verification changes for obs.

* Update config_defaults.sh for vx description

* Update config_defaults.sh to remove extraneous MET info.

Co-authored-by: Michelle Harrold <Michelle.Harrold@noaa.gov>

* Initial METplus .confs and MET config files for EnsembleStat APCP

* J-Job script for running ensemble stat

* Exregional script for ensemble-stat

* Added EnsembleStat.conf for A6 and A24. Added PCPCombine to A3, A6, and A24.

* Added EnsembleStatConfig files for 6 and 24h

* Copy of workflow template with precipitation ensemble tasks added. Will become main template when testing is complete

* Added export statement for number of ensemble members

* Added necessary task definitions in ush

* Updated workflow to included ENTITY definitions for ensstat

* Fixed typo

* Added ens vx configs

* Pull in updates from develop that were not merging properly. Small change to config.community to turn off vx tasks by default.

* Added/mod files for point ens vx.

* Updated metplus conf files for ens point vx

* Did manual merge of these files because it was not handled properly automatically

* Adding additional variables to METplus for regional workflow (ufs-community#5)

* Changes made based on meeting with Michelle and Jamie

* Updating fork

* Cleanup after merge

* Added additional ens vx

* Ensemble point vx mods

* Additional updates for ens and det vx

* ensgrid_mean and ensgrid_prob .conf files for APCP

* Updates for ensemble vx.

* Added mean and prob point-stat configs

* Updates to ensgrid_vx

* Updates for mean/prob vx.

* Updates to FV3LAM_wflow.xml

* Deterministic and ensemble vx updates.

* Ensgrid mean

* Update setup.sh

* Changed workflow template title

* Updates to deterministic and ensemble verification

* Created EnsembleStat METplus conf and MET config files for REFC

* Added reflectivity mean and prob METplus and MET config files. Updated APCP mean and prob METplus and MET config files.

* Added all J-job scripts, exregional scripts, and necessary definitions for workflow generation for all ensgrid_mean and ensgrid_prob tasks

* Updates to workflow to add ensgrid_vx

* Changes I made to account for runtime errors.

* Made changes to directory structures

* Made changes to directory structures and variables

* Changed log files and stage dir.

* Changes for grid- and point-vx.

* Updated METplus ensemble precip conf files.

* Mods for ensemble and deterministic vx.

* Change to GridStatConfig_REFC_mean

* Updated EnsembleStat_REFC.conf

* Updated to METv10.0.0

* Updated conf files for paths.

* Updated FV3LAM_wflow.xml template.

* Mods for vx dependencies

* Updated for censor thresh in METplus conf files; changes to FV3LAM_wflow.xml after sync with develop.

* Updated exregional_run_fcst.sh generate_FV3LAM_wflow.sh to address merge with develop.

* Mods for ensemble precip vx, handling padded/non-padded ensemble member names, fixes for python environment for obs pull.

* Changes to RETOP (units) and REFC (naming and level) verification.

* Fix OUTPUT_BASE for deterministic vx.

* Changes to some verification ex-scripts for syntax and path fixes. Included start end dates of incorrect 01-h CCPA data. Removed some extra lines in FV3LAM_wflow.xml template.

* Changed comp. ref. variable name in GridStat_REFC_prob.conf

* Changed comp. ref. level in GridStat_REFC_prob.conf

* Updated logic for number padding in the directory name when running in ensemble mode.

* Added MET ensemble vx WE2E test.

* Modified location of obs to live outside cycle dir, allowing for obs to be shared across cycles.

* Mods to address comments on PR575.

* Updated ensemble METPlus conf files for changes to post output name.

* Addressed comments in PR and mods for 10-m WIND.

* Addressing final comments in PR.

Co-authored-by: Jamie Wolff <jwolff@ucar.edu>
Co-authored-by: gsketefian <31046882+gsketefian@users.noreply.github.com>
Co-authored-by: Michael Kavulich <kavulich@ucar.edu>
Co-authored-by: JeffBeck-NOAA <55201531+JeffBeck-NOAA@users.noreply.github.com>
Co-authored-by: lindsayrblank <lblank@ucar.edu>
Co-authored-by: Michelle Harrold <Michelle.Harrold@noaa.gov>
Co-authored-by: PerryShafran-NOAA <62255233+PerryShafran-NOAA@users.noreply.github.com>
@jessemcfarland
Collaborator Author

REBUILD

@JeffBeck-NOAA
Collaborator

@jessemcfarland, do you have any updates on this PR? Thanks!

Jesse McFarland added 15 commits September 20, 2022 12:19
Set the value of platform to 'noaacloud' when SRW_PLATFORM matches a
Parallel Works cluster name.
This change allows the platform filter to work correctly; otherwise, the
Parallel Works clusters would block indefinitely waiting to execute the
matrix on an agent/node that was not started.
Some platforms do not recognize quoted variables within an arithmetic
expression. This change removes the quotes.
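A minimal illustration of the portability issue (the variable name is hypothetical):

```bash
n=1
# Some shells reject quoted variables inside arithmetic expansion:
#   total=$(( "${n}" + 1 ))   # syntax error on some platforms
# The unquoted form works everywhere:
total=$(( n + 1 ))
echo "${total}"
```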
* Add a parameter to the Jenkins pipeline that allows the comprehensive
set of workflow and end-to-end tests to be executed during the test
stage.
* Add logic to the Jenkins pipeline that checks for a specific Pull
Request label, then overrides the comprehensive end-to-end test
parameter's value if set.
The experiments directory uses a lot of disk space. Removing it after
the end-to-end tests complete will allow us to keep the workspaces
longer. However, the test logs should be preserved. This change creates
a tarball containing the test logs in the workspace, which is archived,
then removes the experiments directory.
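A sketch of that cleanup step; the directory names and log locations are assumptions:

```bash
# Archive the per-test log files into the Jenkins workspace, then delete
# the large experiments directory to reclaim disk space.
cd "${WORKSPACE}"
tar -czf we2e_test_logs.tar.gz expt_dirs/*/log
rm -rf expt_dirs
```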
Prevent Jenkins from executing multiple pipelines at the same time for a
given branch or change request.
@jessemcfarland
Collaborator Author

> @jessemcfarland, do you have any updates on this PR? Thanks!

@JeffBeck-NOAA I'm working on rebasing the branch and making adjustments to paths related to the incorporation of the regional_workflow repo now. I'll schedule a run after that to see where we are, but I think there have been some changes to the platforms that will require other code updates, and vice versa.

@jessemcfarland
Collaborator Author

REBUILD

@jessemcfarland
Collaborator Author

@JeffBeck-NOAA Here is the current summary of the failures; the full log for each platform/stage can be viewed in Jenkins. I'll need some assistance in determining the appropriate fix for each failure. NOTE: All platforms were working (executing the tests) when this pipeline was tested on the latest release branch.

  • Cheyenne: Fails during the test stage; fails to load the conda module because the python module is already loaded.
  • Gaea: Fails during test stage; see error below

The directory (COMIN) that needs to be specified when running the workflow in NCO mode (RUN_ENVIR set to "nco") AND using the FV3GFS or the GSMGFS as the external model for ICs and/or LBCs has not been specified for this machine (MACHINE): MACHINE= "GAEA"

  • Hera: Comprehensive tests execute successfully; however, there are 19 failures. I need someone to investigate these failures and determine if any tests should be omitted for Hera.
  • Jet: Comprehensive tests execute successfully; however, there are 24 failures. I need someone to investigate these failures and determine if any tests should be omitted for Jet.
  • Orion: Fails during test stage; OCL_ICD_FILENAMES_RESET: unbound variable error during conda initialization.
  • Parallel Works AWS: Fails during build stage; fails to load libpng module.

@danielabdi-noaa
Collaborator

danielabdi-noaa commented Sep 21, 2022

@jessemcfarland I would ignore 17 of those failures (get_from_HPSS*) on Hera/Jet due to issue #349. A manual re-run solves all of them, most likely because it was a timeout issue:

get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_grib2_2019061200 FAILURE
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_grib2_2019101818 FAILURE
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_grib2_2020022518 FAILURE
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_grib2_2020022600 FAILURE
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_grib2_2021010100 FAILURE
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio FAILURE
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio_2019061200 FAILURE
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio_2019101818 FAILURE
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio_2020022518 FAILURE
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio_2020022600 FAILURE
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio_2021010100 FAILURE
get_from_HPSS_ics_FV3GFS_lbcs_FV3GFS_fmt_netcdf_2021062000 FAILURE
get_from_HPSS_ics_GSMGFS_lbcs_GSMGFS FAILURE
get_from_HPSS_ics_HRRR_lbcs_RAP FAILURE
get_from_HPSS_ics_RAP_lbcs_RAP FAILURE
get_from_NOMADS_ics_FV3GFS_lbcs_FV3GFS_fmt_nemsio FAILURE

That leaves two failures for Hera:

nco_grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16 FAILURE
nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15_thompson_mynn_lam3km FAILURE

These should not have failed, compared to the last comprehensive test done during the merge in #343.

For Jet, besides the above 19 failures, there are:

nco_ensemble FAILURE
pregen_grid_orog_sfc_climo FAILURE
grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta FAILURE
grid_RRFS_CONUS_25km_ics_GSMGFS_lbcs_GSMGFS_suite_GFS_v16 FAILURE
grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2 FAILURE
grid_RRFS_NA_3km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta FAILURE

The first two are known failures on Jet, compared to #343. The rest are most likely due to the forecast running out of time, given that they are mostly 3km runs. You would have to look at the log files.

For Orion, I believe I have had that same issue for months, since I last tried to incorporate your changes from release to develop in PR #778. The Jenkins pipeline page no longer seems to be accessible, but here is the link:
https://jenkins-epic.woc.noaa.gov/blue/organizations/jenkins/ufs-srweather-app%2Fpipeline/detail/PR-309/6/pipeline/145/
Gaea and Cheyenne were working fine at the time, though there has been an upgrade to conda on Cheyenne since then.

In my opinion, it is going to be hard to get a green on the comprehensive tests, given that some test cases fail the first time they are run but a re-run works just fine.

@MichaelLueken
Collaborator

As discussed in this afternoon's meeting, since there are two approvals, I will merge this work into the official repository.

MichaelLueken merged commit dd0677b into ufs-community:develop on Sep 29, 2022