Declarative PIXITs in python tests #395 #36817
base: master
def validate_bool(value: Any) -> bool:
    if isinstance(value, str):
        value_lower = value.lower()
        if value_lower not in ('true', 'false', '1', '0', 'yes', 'no'):
Only 'true'/'false' should be supported.
fixed
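The fix described above can be sketched as follows. This is an illustrative version only, not the PR's exact code: a bool validator narrowed to accept just "true"/"false" (case-insensitive), per the review comment.

```python
from typing import Any


def validate_bool(value: Any) -> bool:
    # Sketch: only "true"/"false" (case-insensitive) are accepted for
    # string inputs; "1"/"0"/"yes"/"no" are rejected, per the review.
    if isinstance(value, bool):
        return True
    if isinstance(value, str):
        return value.lower() in ('true', 'false')
    return False
```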
@staticmethod
def validate_float(value: Any) -> bool:
    float(value)
The value is a str here. Also, for all validators, rename validate_xxx --> is_xxx_pixit_value_valid.
fixed
@staticmethod
def validate_string(value: Any) -> bool:
    str(value)
For all of these you need to wrap the possibly-failing conversion in a try/except ValueError before True/False is returned.
Also, when returning False you lose the reason the validation failed, which would have come out of the exception.
I recommend making all of these validators raise exceptions rather than return a bool. In that case, they could keep the validate_xxx naming.
fixed and converted bool returns to exceptions
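The resulting shape can be sketched like this. The names here (PIXITValidationError in particular) are assumptions for illustration, not taken verbatim from the PR: the conversion is wrapped in try/except so the failure reason travels in the exception instead of being dropped with a bare False.

```python
from typing import Any


class PIXITValidationError(ValueError):
    """Assumed error type for this sketch; the PR's class may differ."""


def validate_float(value: Any) -> None:
    # Wrap the possibly-failing conversion; re-raise with the reason kept.
    try:
        float(value)
    except (TypeError, ValueError) as e:
        raise PIXITValidationError(f"{value!r} is not a valid float: {e}") from e
```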
    return True

@classmethod
def validate_value(cls, value: Any, pixit_def: PIXITDefinition) -> tuple[bool, Optional[str]]:
Avoid raw tuple returns. Raise an exception with message on failure
Thank you for this; I was afraid readability degraded a bit with the tuple. Removed the tuple return.
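The change can be sketched as below. The PIXITDefinition here is a simplified stand-in and the names are illustrative: instead of returning (False, reason), the validator raises with the reason in the exception message.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class PIXITDefinition:
    # Simplified stand-in for the PR's PIXITDefinition.
    name: str
    pixit_type: str  # "int", "float", or "str"


class PIXITValidationError(ValueError):
    pass


def validate_value(value: Any, pixit_def: PIXITDefinition) -> None:
    # Raises on failure instead of returning tuple[bool, Optional[str]].
    converters = {"int": int, "float": float, "str": str}
    try:
        converters[pixit_def.pixit_type](value)
    except KeyError:
        raise PIXITValidationError(f"Unknown PIXIT type: {pixit_def.pixit_type}")
    except (TypeError, ValueError) as e:
        raise PIXITValidationError(
            f"{pixit_def.name}: {value!r} is not a valid {pixit_def.pixit_type}") from e
```

Callers then use a plain try/except instead of unpacking a tuple, which keeps the happy path flat.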
if missing or invalid:
    error = ""
    if missing:
        error += "\nMissing required PIXITs:\n" + "\n".join(missing)
Please avoid multi-line messages.
fixed
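A single-line formulation of that report might look like the following sketch (the helper name is illustrative): the same information, but joined with separators instead of embedded newlines.

```python
def format_pixit_errors(missing, invalid):
    # Collapse the error report into one line, per the review comment.
    parts = []
    if missing:
        parts.append("Missing required PIXITs: " + ", ".join(missing))
    if invalid:
        parts.append("Invalid PIXIT values: " + ", ".join(invalid))
    return "; ".join(parts)
```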
default: Any = None

@staticmethod
def is_pixit(arg_name: str) -> bool:
actually, we probably want to validate all the flags. There are some flags that should be exposed that aren't called PIXIT.whatever
Do you suggest renaming all the methods to use a more generic term?
src/python_testing/matter_testing_infrastructure/chip/testing/matter_testing.py (outdated; comment resolved)
# Validate PIXITs before proceeding
validator = PIXITValidator()
valid, error = validator.validate_pixits(test_info[0].pixits, matter_test_config.global_test_params)
Does this validate only the first test in the list?
It might be better in the setup_test function, run for each test.
Yeah, thank you for noticing this issue, I've moved validation to setup_test
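A minimal sketch of the per-test placement (class and attribute names are assumptions, not the real matter_testing.py API): each test's declared PIXITs are checked in setup_test, which runs before every test, so validation is no longer tied to test_info[0].

```python
from dataclasses import dataclass, field


@dataclass
class PIXITDef:
    # Hypothetical stand-in for the PR's PIXIT declaration.
    name: str
    required: bool = True


@dataclass
class TestSketch:
    pixits: list
    global_test_params: dict = field(default_factory=dict)

    def setup_test(self):
        # Runs before each test, so every test's own PIXITs get checked.
        missing = [p.name for p in self.pixits
                   if p.required and p.name not in self.global_test_params]
        if missing:
            raise ValueError("Missing required PIXITs: " + ", ".join(missing))
```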
@@ -612,6 +612,142 @@ def test_skipped(self, filename: str, name: str):
        logging.info(f"Skipping test from (unknown): {name}")


class PIXITType(Enum):
    INT = "int"
This approach in general is OK, but I do want to consider this in the larger context of verifying that the test invocations are correct in a broader way. We can land this in stages, but it is important to consider how the other checks are going to be integrated here.
In particular, test authors need an easy way to declare certain other test requirements that also need verification. For example:
- Endpoint (project-chip/matter-test-scripts#450), which we should consider mandating with no default for tests that run against a single endpoint, and no --endpoint flag for tests that run against the entire node at once.
- A way to declare that a test requires a commissioned device, whether or not that commissioning happens as part of the test invocation.
- Some tests require passcodes, discriminators, etc., even for tests where the commissioning happens external to the test.
- Tests that DO use PICS (hopefully fewer and fewer) should have a PICS file specified (and this gets extra difficult in the context of the test harness).
Anyway, worth thinking through how these extended checks would fit in with this design. I don't see anything in particular blocking these options right now.
It's also worth thinking through how to expose this properly to test authors. I.e., is there a way we can alert test authors who are using test flags that they should be providing these declarations?
@@ -612,6 +612,142 @@ def test_skipped(self, filename: str, name: str):
        logging.info(f"Skipping test from (unknown): {name}")


class PIXITType(Enum):
I know I used PIXIT in the issue, but I think this applies generally to test flags. But I can't think of a great name for just "Flags" that doesn't sound too generic. Ideas welcome.
To be honest, I also don't have many ideas. TestFlag? TestFlagType? TestParamType?
Would you mind adding some tests here verifying that the validators correctly catch errors? We have some examples of how to test these framework bits in TestMatterTestingSupport.py et al.
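Such a test might look like the sketch below, loosely following the unittest style used by TestMatterTestingSupport.py. The validate_int here is a hypothetical exception-raising validator standing in for the real ones, not the PR's code.

```python
import unittest


def validate_int(value):
    # Hypothetical exception-raising validator, as discussed above.
    try:
        int(value)
    except (TypeError, ValueError) as e:
        raise ValueError(f"{value!r} is not a valid int") from e


class TestPixitValidators(unittest.TestCase):
    def test_valid_int_passes(self):
        validate_int("7")  # must not raise

    def test_invalid_int_raises(self):
        with self.assertRaises(ValueError):
            validate_int("not-a-number")
```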
… validation would run only for first test
Co-authored-by: C Freeman <cecille@google.com>
project-chip/matter-test-scripts#395
Added a common mechanism for tests to declare their optional and required PIXIT values and override flags, and to ensure the required values are supplied. Also added information about optional override flags, with descriptions, to the error output of tests.
Example: