Add an option in the compare dataset drilldown called 'Compare model errors'.
The second selector will contain the various datasets connected as input (outputs of calibrate operators plus one ground-truth dataset). The user will select the one that corresponds to the 'ground truth'. It will be used the same way it is used for comparing interventions, i.e. as the baseline.
The only new element here is the mapping UI. The first column will be the ground-truth dataset. Note that the variables from that dataset will be used to select what to plot in the output settings.
The table will show the selected error metric. The MAE metric is computed the same way as in Calibrate. The script to compute WIS will be provided by @liunelson. The 'Overall' column in the table will be computed by averaging the error metric over all variables (not just the ones displayed).
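The per-variable MAE and the 'Overall' average could be sketched as below. This is a minimal illustration, not the actual Calibrate implementation; the `Series` type and both function names are hypothetical, and variable alignment is assumed to come from the mapping UI.

```typescript
// A series keyed by timepoint, as produced by one dataset column.
// (Hypothetical representation; the real datasets may use a different shape.)
type Series = Map<number, number>; // timepoint -> value

// Mean absolute error over the timepoints shared by both series.
function meanAbsoluteError(truth: Series, predicted: Series): number {
  let sum = 0;
  let n = 0;
  for (const [t, actual] of truth) {
    const pred = predicted.get(t);
    if (pred !== undefined) {
      sum += Math.abs(actual - pred);
      n++;
    }
  }
  return n > 0 ? sum / n : NaN;
}

// 'Overall' averages the metric over ALL mapped variables,
// not just the ones displayed in the chart.
function overallError(perVariable: number[]): number {
  const valid = perVariable.filter(Number.isFinite);
  return valid.reduce((a, b) => a + b, 0) / valid.length;
}
```

For example, a truth series `{0: 1, 1: 2}` against predictions `{0: 2, 1: 4}` gives an MAE of 1.5.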
The chart should contain all timepoints where the datasets overlap with the ground truth (i.e. where an error can be computed).
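Restricting the chart to timepoints where an error is defined could look like the following sketch (the function name is hypothetical):

```typescript
// Keep only the timepoints present in both the ground-truth dataset and a
// simulation output, i.e. the points where an error can be computed.
function overlappingTimepoints(truthTimes: number[], datasetTimes: number[]): number[] {
  const available = new Set(datasetTimes);
  return truthTimes.filter((t) => available.has(t));
}
```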
shawnyama changed the title from "[FEAT]: Compare model errors in 'Compare datasets' operator" to "[FEAT](compare datasets): MAE table in Compare model errors" on Jan 31, 2025.