Describe the bug
When we run our unit tests, everything grinds to a halt on the semantic validator tests.
This is potentially caused by our test construction - most of these tests do something like:
```python
with pytest.raises(...):
    # run all validations
```
That means we run every validation on every test input model in every test case, even though most, if not all, of these test cases are targeted at highly specific validation rules.
A good start here would be to migrate these test cases to only run the specific rule we care to test.
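For illustration, a migrated test might look roughly like the sketch below. This is a minimal, self-contained sketch: `ValidationError`, `ElementNamingRule`, and the fixture are stand-in names, not MetricFlow's actual validator API.

```python
import pytest


class ValidationError(Exception):
    """Stand-in for the real model validation exception type."""


class ElementNamingRule:
    """Stand-in for a single semantic validation rule with its own entry point."""

    def validate(self, model) -> None:
        for name in getattr(model, "element_names", []):
            if not name.isidentifier():
                raise ValidationError(f"invalid element name: {name}")


@pytest.fixture
def model_with_bad_name():
    # Stands in for the shared model fixture used by the real tests.
    return type("Model", (), {"element_names": ["valid_name", "bad-name"]})()


def test_element_naming_rule_only(model_with_bad_name) -> None:
    # Run just the rule under test instead of every validation on the model.
    with pytest.raises(ValidationError):
        ElementNamingRule().validate(model_with_bad_name)
```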
Another possible optimization - both for runtime and readability - is to use smaller, more targeted models defined locally in the test case, so converting these cases away from the shared model fixtures onto local model shims would be great as well. See https://github.com/transform-data/metricflow/blob/main/metricflow/test/model/validations/test_validity_param_definitions.py#L51-L71 for an example of a locally defined model with a specific failure state written into it.
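A rough sketch of that local-model pattern is below (again with stand-in names: `LocalModelShim`, `check_primary_identifier`, and `ValidationError` are illustrative only; the linked test shows the real YAML-based version).

```python
from dataclasses import dataclass, field
from typing import List

import pytest


class ValidationError(Exception):
    """Stand-in for the real model validation exception type."""


@dataclass
class LocalModelShim:
    """Minimal local model carrying only the fields the rule under test reads."""

    measures: List[str] = field(default_factory=list)
    primary_identifiers: List[str] = field(default_factory=list)


def check_primary_identifier(model: LocalModelShim) -> None:
    """Illustrative single rule: measures require a primary identifier."""
    if model.measures and not model.primary_identifiers:
        raise ValidationError("measures defined without a primary identifier")


def test_measure_without_primary_identifier() -> None:
    # The failure state is written directly into the tiny local model,
    # so no large shared fixture needs to be built or fully validated.
    broken = LocalModelShim(measures=["bookings"], primary_identifiers=[])
    with pytest.raises(ValidationError):
        check_primary_identifier(broken)
```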
Steps To Reproduce
Steps to reproduce the behavior:
1. Run `make test`
2. Watch the semantic validations test path grind to a halt

Expected behavior

These tests should be much faster.