In addition to #93, it would be nice to give users the ability to expose arbitrary metrics like code coverage, benchmark scores, or the number of TODOs.
Context
Cirrus can comment on PRs with a comparison of such stats against the PR's base, and it can also expose badges for them.
Each stat should have properties like:
unit to define whether it's a percentage or an absolute number
merge_strategy to define how to merge stats with the same name from different tasks: either override or average (the latter might be useful for benchmarking: run a couple of tasks in parallel and take the average). Syntax might look like this:
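A rough sketch of what this could look like in `.cirrus.yml`, assuming a hypothetical `metrics` block and `value_script` key (only `unit` and `merge_strategy` come from the proposal above; none of this is implemented syntax):

```yaml
# Hypothetical sketch only: `metrics` and `value_script` are assumed names,
# not an existing Cirrus CI feature.
test_task:
  container:
    image: golang:latest
  script: go test -coverprofile=coverage.out ./...
  metrics:
    coverage:
      unit: percent            # percent or absolute number
      merge_strategy: override # override or average across tasks
      value_script: go tool cover -func=coverage.out | tail -1 | awk '{print $3}'
```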
on_success would be a key under a task, right? If so, the metric name should be concatenated with the task name by default, and metrics should only be merged across tasks when explicitly requested.
Interesting! Will keep an eye on @seriesci! Thanks for sharing!
I was thinking about scoping metrics by build (aka SHA). That's why there is merge_strategy: to merge metrics of the same name produced by different tasks within a build (you can run a benchmark several times and take the average as the result).
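To illustrate the averaging case with the same hypothetical syntax: two parallel tasks report a metric with the same name, and within one build (SHA) the values would be combined according to `merge_strategy`:

```yaml
# Hypothetical sketch: both tasks report `requests_per_second` for the same
# build; with merge_strategy: average the build-level value would be the mean
# of the two task values (e.g. 1200 and 1300 -> 1250).
benchmark_a_task:
  script: ./run-benchmark.sh
  metrics:
    requests_per_second:
      unit: absolute
      merge_strategy: average

benchmark_b_task:
  script: ./run-benchmark.sh
  metrics:
    requests_per_second:
      unit: absolute
      merge_strategy: average
```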