Improve Codecov config for unexpected coverage changes #621
Conversation
Hello, I'm Franklin Bot and I will run some test suites that validate the page speed.
Codecov Report
@@           Coverage Diff            @@
##             main     #621       +/-   ##
==========================================
+ Coverage   95.86%   96.01%     +0.14%
==========================================
  Files          95       95
  Lines       25629    25633         +4
==========================================
+ Hits        24570    24611        +41
+ Misses       1059     1022        -37

see 8 files with indirect coverage changes
I respect the initiative!
Couple of comments:
- I'm not sure we should be setting a Milo-specific Codecov config. Milo more or less sets the baseline for our other adobecom projects, so we should probably make configuration changes in our global Codecov config instead - https://docs.codecov.com/docs/codecov-yaml
- I don't know the history of this PR, but in my opinion a Jira task and link are missing. That would show this change was verified with the Milo team first before it was built. - Submitting PRs
patch:
  default:
    target: 100%
    threshold: 0.1%
Are we adding this threshold because the coverage might be irregular? IMO we shouldn't be affected by coverage irregularities, and allowing a threshold of 0.1% is not the way to fix them.
We always upload our coverage reports, our tests should not be time-sensitive, etc. If we are seeing irregularities, I'd like to investigate their root cause first before allowing thresholds.
> IMO we shouldn't be affected by coverage irregularities

We are. Look at the report on this very PR: I added a single non-code file, and look how the coverage changed.

> allowing a threshold of 0.1% is not the way to fix it.

It is. I made the change with a 100% coverage target on patch, which prevents the threshold from decreasing total coverage over time.

> We're always uploading our coverage reports

We are not. Example: https://app.codecov.io/gh/adobecom/milo/commit/8fa7b0aa06758cf0c65d7e43a078de7ee57bd347

> If we are having irregularities I'd like to investigate what the root cause of them is first before allowing for thresholds.

I already have. That is the point of this PR, after reading Unexpected Coverage Changes and conducting my own investigation, with my findings outlined here.
@hparra I've taken a deeper dive, and it seems to me that this difference in code coverage is caused by some of our tests not being idempotent. In my opinion we need to identify which tests do not always give the same result and fix them, instead of allowing a threshold of 0.1%. Note that if your test coverage had gone down by 0.13% instead of going up, Codecov would have failed the check. The threshold therefore seems like a bandage over a bigger underlying problem.
The Codecov link you attached describes how to investigate this issue further, but it does not describe setting thresholds as an acceptable solution. In this other GitHub issue, a team with similar problems also had to investigate the root cause of their tests giving different results.
I'm still against setting these arbitrary thresholds.
project:
  default:
    target: auto
    threshold: 0.1%
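For context, the two hunks under review would sit inside Codecov's coverage-status section. A sketch of the combined codecov.yaml under that assumption (the `coverage.status` nesting follows the Codecov YAML reference; only the target and threshold values come from this PR's diff):

```yaml
# Sketch of the proposed status checks, assuming the standard
# coverage.status layout from the Codecov YAML reference.
coverage:
  status:
    project:
      default:
        target: auto      # compare total coverage against the base commit
        threshold: 0.1%   # allow up to a 0.1% drop before the check fails
    patch:
      default:
        target: 100%      # every line changed in the PR must be covered
        threshold: 0.1%   # small allowance for report non-determinism
```

With this split, the strict 100% patch target is what keeps overall coverage from drifting downward, while the 0.1% thresholds absorb the small report-to-report noise discussed above.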
I'd remove the threshold here too.
Concerns come from a discussion that was started by one of the org's lead architects 64 days ago and that no one had responded to until now.
I disagree strongly. Projects and teams have their own standards and exceptions; we should never couple them. I have never seen coupling like that at scale (5000+ project repos), even after those projects were moved to a monorepo.
I could not test this change without opening the actual PR, since Codecov and the other checks are not enabled on my fork, and adding them might affect whatever non-determinism is happening here. The intent of these changes was mentioned in a 1:1 with @auniverseaway and with Milo Contributors last week. I could not bring it up again this week because the meeting was canceled. My hope was that the discussion would lead to the creation of a new epic covering multiple changes to make checks green and improve developer velocity.
Fair - you still should have a Jira issue though. Regarding the global config, good point. 👍
After discussion today, we've agreed to try this out; separate tasks will be created to investigate the flaky tests.
Resolves one issue causing red builds. See https://github.com/orgs/adobecom/discussions/626
Resolves: MWPW-129597
Test URL:
Validation:
curl -X POST --data-binary @codecov.yaml https://codecov.io/validate
Documentation: https://docs.codecov.com/docs/codecovyml-reference