Rep001 1 (rez-test improvements) #807
- fix bug where the test was expanding root to the first variant, even though the first variant may not be the actual variant the test is running in
- better output (summary added)
- added handling of the `on_variants.requires` case
- added `--inplace` support
So I had time to look at the PR. In general I like your changes. I left some questions.

I'll also leave two general questions here:
- Should we add support for doing `rez-test .` or `rez-test /path/to/package` (can be useful for CI) in this PR?
- Are you planning to add a variant selection flag to the CLI?
```python
if returncode:
    sys.exit(returncode)
```
We no longer exit with non-zero exit codes? Is that intentional?
It's intentional, as it isn't clear what this would mean anymore.

Previously, rez-test raised as soon as a test failed. It doesn't do that anymore (unless you specify `--stop-on-fail`), and it also iterates over variants. So now you get a summary of all the tests, showing whether they passed, failed, or were skipped. In that context it isn't really clear what the exit code should be, or whether it's useful to have one at all.
I have the feeling it becomes unclear because the tests are skipped instead of being marked as failed. I've been thinking about it even more today, and I really think tests that fail to resolve should be marked as failed. That would avoid hiding possible issues, and would probably make it easier to use a meaningful exit code.

A lot of CI systems rely on the exit code of a process to know whether a step failed. If we don't return a proper exit code, it's going to be quite difficult to know when a test failed and mark a build as failed.
OK, I'll mark those as failed, and set the exit code to match the exit code of the first failed test.
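For illustration, a minimal sketch of the agreed behaviour - not the actual rez code; `test_names` and `run_test()` are hypothetical names standing in for the runner's internals:

```python
import sys

# Run every test, remembering the return code of the first failure,
# then exit with it (0 if nothing failed).
exit_code = 0
for test_name in test_names:
    returncode = run_test(test_name)  # hypothetical helper
    if returncode and not exit_code:
        exit_code = returncode  # remember the first failure only
sys.exit(exit_code)
```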
```python
Optional("on_variants"): Or(
    bool,
    {
        "requires": [package_request_schema]
    }
)
```
I'm curious here, how would you use this? If I understand correctly, this is a selection mechanism. So you can select the variants that the test should run on. So you can only have one filter?

Also, why not call the attribute `selection` or `filter`? `requires` sounds too much like the package `requires` and the tests `requires`. Or maybe `matches`?
"on_variants" is the scope that you would consider as the filter, whereas the fields within ("requires") can be thought of as the type of filter. So this reads like "run the test on variants that require X". For eg, I haven't added it here (and probably won't) but in theory it would make sense to also have an "index" filter option, within the on_variants dict. And "on_variants" can't be named "filter" because that's too generic. We could go for "variants_filter", but is that any better than "on_variants"?
Example of use: on_variants.requires=["maya"] will run only on variants that (directly) require maya; on_variants.requires=["!nuke"] would run on all non-nuke variants.
I suppose you could have multiple 'requires' filters that would OR together, but I'm not sure that would ever be useful, and it would get quite confusing with conflict ops present. Ie, what would "!maya OR !nuke: mean? That would be the same as no requires filter at all. ANDed options aren't necessary, because you only need a single 'requires' field to describe that (eg ["maya", "!python-3"] will select any maya variant not using python-3).
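To make that concrete, here's a hedged sketch of how this could look in a package.py, using the `requires` form under discussion (the test names and commands are made up for illustration):

```python
tests = {
    "maya_unit": {
        "command": "python -m pytest {root}/python/tests",
        "on_variants": {
            "requires": ["maya"]  # run only on variants that directly require maya
        }
    },
    "non_nuke_unit": {
        "command": "python -m pytest {root}/python/tests",
        "on_variants": {
            "requires": ["!nuke"]  # run on all variants that do not require nuke
        }
    }
}
```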
Note that my comment was not on the `on_variants` key but on the `requires` key (from your comment I have the feeling you thought I was talking about the `on_variants` name...).

What about:

```python
"on_variants": {
    "filter_type": "requirements",  # or just "type"; eventually could be "index" too
    "filter": ["maya"]
}
```

I feel this would make the intention clearer here than having yet another property called `requires`.
I think this is OK, and if different filter types are added, this will avoid some confusion.

I would go with `type`/`value` though, rather than `filter_type`/`filter` (the `on_variants` dict is the filter).
Great. I'm fine with `type` and `value`.
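Under the agreed naming, the earlier example would presumably end up looking something like this (again, just a sketch):

```python
tests = {
    "maya_unit": {
        "command": "python -m pytest {root}/python/tests",
        "on_variants": {
            "type": "requires",  # the kind of filter
            "value": ["maya"]    # run only on variants that directly require maya
        }
    }
}
```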
```python
else:
    tests_dict = package.tests

return sorted((tests_dict or {}).keys())
```
Technically you can just do `sorted(tests_dict or {})`.
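A quick illustration of why `.keys()` is redundant here - sorting a dict iterates over its keys, and the `or {}` guard still covers the `None` case:

```python
>>> sorted({"b": 1, "a": 2})
['a', 'b']
>>> sorted({"b": 1, "a": 2}.keys())  # equivalent; keys() adds nothing
['a', 'b']
>>> sorted(None or {})               # the guard handles a missing dict
[]
```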
```python
else:
    command = map(variant.format, command)

if reqlist.conflict:
    continue
```
As a user, I would probably like to know that a variant was filtered out by something. In your second PR you are adding multiple verbosity levels; that could probably help, I guess. At least a small log would help debugging. Same for the next `if`.
Yeah, I wrestled with this a bit. It didn't quite feel right to keep telling the developer that their test has been skipped on a variant, when they explicitly specified that anyway. However, I agree it probably makes sense to add it when verbosity is enabled. I'll take a look in the second PR.
The way I see it, this is like skipped tests in a test suite: your test framework will tell you which tests were skipped, even if they were skipped manually. But maybe others have a different view. I guess we can just add the log and adjust if it's too verbose.
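Something along these lines is presumably what's being asked for - a hedged sketch extending the excerpt above, where `verbose` and `test_name` are illustrative names rather than the actual rez API:

```python
if reqlist.conflict:
    if verbose:
        # tell the user why this variant was filtered out
        print("skipping test '%s' on variant %d: requirement conflict (%s)"
              % (test_name, variant.index, str(reqlist)))
    continue
```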
"""Should we add support for doing rez-test . or rez-test /path/to/package (can be useful for CI) in this PR?""" I wanted to keep that as a separate PR because I think there are some outstanding questions that need to be answered, so I wanted to wait til we get consensus. See #759 """Are you planning to add a variant selection flag to the CLI?""" Yes. I didn't want this PR to be too large, so was leaving this for next. Will also add the |
I'll reply to your comments when I get home (in about 2 hours).
"-s", "--stop-on-fail", action="store_true", | ||
help="stop on first test failure") | ||
parser.add_argument( | ||
"--inplace", action="store_true", |
Should `--extra-packages` and `--inplace` be mutually exclusive? In theory it doesn't make much sense to let a user use `--extra-packages` if it's going to be ignored... Or would it be ignored right now?
Not sure, but agreed, they should be mutually exclusive to avoid confusion.
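For reference, a minimal argparse sketch of how that mutual exclusion could be wired up (the help texts and `nargs` choice here are assumptions, not the actual rez CLI definitions):

```python
import argparse

parser = argparse.ArgumentParser(prog="rez-test")
group = parser.add_mutually_exclusive_group()
group.add_argument(
    "--extra-packages", nargs="+", metavar="PKG",
    help="extra packages to add to the test environment")
group.add_argument(
    "--inplace", action="store_true",
    help="run the test in the current environment")

# "rez-test --inplace --extra-packages foo" would now fail with:
#   error: argument --extra-packages: not allowed with argument --inplace
args = parser.parse_args()
```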
OK, I think after I make these changes we're on the same page. It's going to be a lot simpler to do this over in #811. Let's take the conversation over there, and when that's ready, I'll do both merges into the same release.
I also think we are now on the same page. So if you feel it's going to be easier to do the changes in #811, I'm fine with that.
Sounds good. I should have these changes in #811 within the next 24 hours.
This updates rez-test behaviour wrt #665.

Not all of the REP is addressed here. This PR includes:
- `run_on` tests field (however this is not fully acted on yet - `pre_release` does nothing atm)
- `--inplace` option, to run a test in the current env (if the necessary requirements are present)
- `on_variants` tests field to give granular control over this
- `tests` package attrib

Note: It would also make sense to have tests for the `PackageTestRunner` class. However, I'd like to hold off until we port to `pytest` - I think it's going to be significantly simpler to do something like have a fixture for installing a set of packages into a fake fs. The way packages are currently built and tested is haphazard, and I'd rather not increase our technical debt for the sake of this fairly non-critical part of the code. #808.
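For what it's worth, the kind of fixture described above might look roughly like this - purely illustrative; `install_test_packages` is a hypothetical helper, not rez API:

```python
import pytest

@pytest.fixture
def package_repo(tmp_path):
    """Install a known set of test packages into a throwaway repo path."""
    repo_path = tmp_path / "packages"  # tmp_path is pytest's built-in fixture
    repo_path.mkdir()
    install_test_packages(repo_path)   # hypothetical helper
    return str(repo_path)
```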