Is your feature request related to a problem? Please describe.
During the meeting on the recipe testing strategy (#2259) on October 14, 2021, we discussed the automated testing of all recipes for the releases of ESMValTool. This is needed to check that all recipes run fine with the versions of the Core/Tool to be released and that the output produced has not changed compared to the previous release.
This process has been partially automated in #2219 and a website is available to scroll through the recipe runs (for example). Some aspects remain to be clarified:
- Who runs this and what computational effort is needed? Can we record the computational cost (runtime, memory, CPU usage) of each recipe in one place?
- Who is responsible for checking the outcome of these tests? The authors and maintainers could play an active role and make a first check of what failed; core developers can take over whenever needed.
- How do we reach out to recipe maintainers more easily? Would it help to record their GitHub IDs in the config-references.yml file?
- Which measures can be taken for recipe failures due to missing data, in particular CMIP5 data? One option is to use the automatic download option to recover the missing data. Another is to ask maintainers to update their recipes with CMIP6 data or revise the list of datasets needed.
- Which measures can be taken for recipes that have been known to fail for a long time and are no longer maintained? Shall we move those to a "legacy" subfolder of esmvaltool/recipes with a note "recipe not maintained anymore"? New ESMValTool users may assume that all recipes on the main branch run fine and may not check the related GitHub issues.
- To reduce the computational cost of some recipes, could we ask maintainers to reduce the number of datasets/years used? This is not always possible and depends on how the diagnostic scripts have been implemented.
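The cost-recording question above could be approached with a small wrapper that runs each recipe under GNU `time -v` and collects the figures in one CSV. This is only a sketch of the idea, not existing project tooling; the `esmvaltool run` invocation and the chosen fields are assumptions:

```python
import csv
import re
import subprocess


def parse_time_v(text):
    """Extract runtime, peak memory and CPU usage from `/usr/bin/time -v` output."""
    patterns = {
        "elapsed": r"Elapsed \(wall clock\) time.*: (.+)",
        "max_rss_kb": r"Maximum resident set size \(kbytes\): (\d+)",
        "cpu_percent": r"Percent of CPU this job got: (\d+)%",
    }
    stats = {}
    for key, pattern in patterns.items():
        match = re.search(pattern, text)
        stats[key] = match.group(1) if match else None
    return stats


def run_recipe(recipe):
    """Run a single recipe under GNU time and return its resource usage."""
    result = subprocess.run(
        ["/usr/bin/time", "-v", "esmvaltool", "run", recipe],
        capture_output=True,
        text=True,
    )
    stats = parse_time_v(result.stderr)  # GNU time reports on stderr
    stats["recipe"] = recipe
    stats["returncode"] = result.returncode
    return stats


def record_costs(recipes, csv_path="recipe_costs.csv"):
    """Collect the resource usage of all recipes in a single CSV file."""
    rows = [run_recipe(recipe) for recipe in recipes]
    with open(csv_path, "w", newline="") as file:
        writer = csv.DictWriter(file, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```

On an HPC system the batch scheduler's accounting (e.g. what the job log reports) would likely be a more reliable source for the same numbers, but the idea of funnelling everything into one table stays the same.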
Feel free to edit this summary or to comment in this issue.
I will work on implementing automated regression tests for recipes in 2022. This will reduce the workload for recipe maintainers because, before we make the release, they will only need to look at the output of recipes whose output changed since the previous release.
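The core of such a regression check could be as simple as comparing the output trees of two release runs. A minimal sketch, assuming two output directories and using bitwise SHA-256 checksums (the directory layout is an assumption, not the actual implementation):

```python
import hashlib
from pathlib import Path


def checksums(output_dir):
    """Map each file path (relative to the run directory) to its SHA-256 digest."""
    return {
        path.relative_to(output_dir).as_posix():
            hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(Path(output_dir).rglob("*"))
        if path.is_file()
    }


def compare_runs(previous_dir, current_dir):
    """Report files that are new, missing, or changed since the previous release run."""
    previous = checksums(previous_dir)
    current = checksums(current_dir)
    return {
        "added": sorted(current.keys() - previous.keys()),
        "removed": sorted(previous.keys() - current.keys()),
        "changed": sorted(
            name for name in previous.keys() & current.keys()
            if previous[name] != current[name]
        ),
    }
```

In practice a bitwise comparison is probably too strict for NetCDF and image output (metadata such as creation timestamps differ between runs), so a real implementation would likely need to compare the data variables themselves rather than raw bytes.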