
Remove warnings pytest models #6593

Merged
16 commits merged into pytorch:main on Sep 20, 2022

Conversation

ambujpawar
Contributor

@ambujpawar ambujpawar commented Sep 16, 2022

test/test_models.py Outdated
@datumbox
Contributor

@ambujpawar Thanks for the PR.

The tests are failing and look related. Could you please have a look? Thanks!

@ambujpawar
Contributor Author

Thanks for the review. I tried to resolve those errors by modifying only the tests, but that was apparently not possible for the inceptionv3 model. Please have a look at the solution :)

@ambujpawar ambujpawar marked this pull request as draft September 16, 2022 10:12
@pmeier pmeier self-requested a review September 16, 2022 10:16
@ambujpawar ambujpawar marked this pull request as ready for review September 16, 2022 10:21
torchvision/models/quantization/inception.py Outdated
test/test_models.py Outdated
Collaborator

@pmeier pmeier left a comment


LGTM if CI is green. Thanks @ambujpawar! The current failing jobs

  • binary_linux_conda_py3.9_cu116
  • cmake_macos_cpu
  • cmake_windows_cpu
  • cmake_windows_gpu

are unrelated.

@ambujpawar
Contributor Author

The failing tests seem unrelated to this PR, and I see that the corresponding tasks are already tracked in issues.

@pmeier
Collaborator

pmeier commented Sep 16, 2022

The following could be related, but I'll leave that up to @datumbox since he knows the model tests better than I do.

_____________ test_detection_model[cuda-fasterrcnn_resnet50_fpn] ______________
Traceback (most recent call last):
  File "C:\Users\circleci\project\test\test_models.py", line 776, in check_out
    _assert_expected(output, model_name, prec=prec)
  File "C:\Users\circleci\project\test\test_models.py", line 117, in _assert_expected
    torch.testing.assert_close(output, expected, rtol=rtol, atol=atol, check_dtype=False, check_device=False)
  File "C:\Users\circleci\project\env\lib\site-packages\torch\testing\_comparison.py", line 1342, in assert_close
    assert_equal(
  File "C:\Users\circleci\project\env\lib\site-packages\torch\testing\_comparison.py", line 1093, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!

Mismatched elements: 61 / 80 (76.2%)
Greatest absolute difference: 182.7935028076172 at index (9, 1) (up to 0.01 allowed)
Greatest relative difference: inf at index (1, 0) (up to 0.01 allowed)

The failure occurred for item [0]['boxes']

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\circleci\project\test\test_models.py", line 804, in test_detection_model
    full_validation &= check_out(out)
  File "C:\Users\circleci\project\test\test_models.py", line 784, in check_out
    torch.testing.assert_close(
  File "C:\Users\circleci\project\env\lib\site-packages\torch\testing\_comparison.py", line 1342, in assert_close
    assert_equal(
  File "C:\Users\circleci\project\env\lib\site-packages\torch\testing\_comparison.py", line 1093, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!

Mismatched elements: 1 / 20 (5.0%)
Greatest absolute difference: 0.019545435905456543 at index (19,) (up to 0.01 allowed)
Greatest relative difference: 0.026511636142229424 at index (19,) (up to 0.01 allowed)

@ambujpawar
Contributor Author

ambujpawar commented Sep 16, 2022

I think it is unrelated, as it was already happening in #6589 before this PR was even submitted.

Contributor

@datumbox datumbox left a comment


The highlighted failure on the fasterrcnn model is not related. The changes LGTM.

Only one question, which we could resolve elsewhere.

@@ -142,6 +134,8 @@ def __init__(
QuantizableInceptionE,
QuantizableInceptionAux,
],
*args,
Contributor (@datumbox)


@pmeier Since you are the most Pythonic person I know, any idea why this works? We pass the inception_blocks keyword parameter before *args, and we do exactly the same thing on QuantizableGoogLeNet. Could this be because we don't actually have any positional args in these classes? I was expecting the call to raise an error for putting positional arguments after named params, but it doesn't.

Collaborator (@pmeier)


This is one of those Python quirks that is syntactically allowed but should never be used because of its very weird behavior. TL;DR: foo(baz="baz", "bar") is not valid syntax, while foo(baz="baz", *["bar"]) is. See python/cpython#82741.

I would advise putting *args before the inception_blocks=... argument to avoid confusion, and doing the same for all other occurrences of this pattern. I don't know of a linter that checks this for us, though.
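
To make the quirk concrete, here is a minimal sketch in plain Python (foo, bar and baz are just the placeholder names from the TL;DR above, not torchvision code):

def foo(bar, baz):
    return bar, baz

# foo(baz="baz", "bar")          # SyntaxError: positional argument follows keyword argument
print(foo(baz="baz", *["bar"]))  # prints ('bar', 'baz'): the unpacked value still fills the
                                 # first positional slot, even though it is written last

So a call like super().__init__(inception_blocks=..., *args, **kwargs) is accepted by the parser; it just reads as if the keyword were bound first.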

Contributor Author


Done! 👍
LGTM now?

Collaborator

@pmeier pmeier Sep 19, 2022


Not sure if we want to do it in this PR or split it off into a follow-up, cc @datumbox. Nevertheless, this bothered me a bit too much TBH, so I wrote a small flake8 plugin to find all occurrences of this. Put the following in a nuakw.py file in the root of the project:

import ast
import contextlib


class NodeVisitor(ast.NodeVisitor):
    def __init__(self):
        self.errors = []

    def visit_Call(self, node: ast.Call):
        with contextlib.suppress(StopIteration):
            # Find the first starred (unpacked) positional argument, e.g. *args.
            starred = next(starred for starred in node.args if isinstance(starred, ast.Starred))
            # Flag the call if any keyword argument is written before that unpacking.
            if any(
                (keyword.value.lineno, keyword.value.col_offset) < (starred.lineno, starred.col_offset)
                for keyword in node.keywords
            ):
                self.errors.append(
                    (node.lineno, node.col_offset, "NUAKW Argument unpacking after keyword argument", type(self))
                )

        self.generic_visit(node)


class NoUnpackingAfterKeyword:
    name = "no_unpacking_after_keyword"
    version = "0.0.1"

    def __init__(self, tree: ast.AST):
        self._tree = tree

    def run(self):
        visitor = NodeVisitor()
        visitor.visit(self._tree)
        yield from visitor.errors

Now, put the following at the bottom of setup.cfg

[flake8:local-plugins]
extension =
    NUAKW = nuakw:NoUnpackingAfterKeyword
paths =
    ./

Running flake8 torchvision yields the following errors:

torchvision/models/quantization/inception.py:44:9: NUAKW Argument unpacking after keyword argument
torchvision/models/quantization/inception.py:55:9: NUAKW Argument unpacking after keyword argument
torchvision/models/quantization/inception.py:66:9: NUAKW Argument unpacking after keyword argument
torchvision/models/quantization/inception.py:77:9: NUAKW Argument unpacking after keyword argument
torchvision/models/quantization/inception.py:88:9: NUAKW Argument unpacking after keyword argument
torchvision/models/quantization/inception.py:122:9: NUAKW Argument unpacking after keyword argument
torchvision/models/quantization/googlenet.py:42:9: NUAKW Argument unpacking after keyword argument
torchvision/models/quantization/googlenet.py:53:9: NUAKW Argument unpacking after keyword argument
torchvision/models/quantization/googlenet.py:77:9: NUAKW Argument unpacking after keyword argument
torchvision/models/quantization/mobilenetv3.py:86:9: NUAKW Argument unpacking after keyword argument

If we fix one, we should fix all.
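
For reference, the call pattern the plugin flags and the reordering suggested above look roughly like this (a sketch with the block list elided, not the exact torchvision signatures):

# Flagged (NUAKW): the *args unpacking comes after a keyword argument.
super().__init__(
    inception_blocks=[...],  # list of quantizable block classes
    *args,
    **kwargs,
)

# Preferred: unpack the positional arguments first, then pass the keywords.
super().__init__(
    *args,
    inception_blocks=[...],
    **kwargs,
)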

Contributor Author


This seemed like a quick fix, so I have implemented it in this PR itself :)

Contributor Author


cc @pmeier @datumbox Shall we go to master now?

Contributor

@datumbox datumbox left a comment


LGTM (again) @ambujpawar. Thank you very much for being ultra patient and flexible with us. You ended up fixing many more issues than originally described. I do apologise for that, but I think your PR leaves our codebase in a much better state than before.

@pmeier Thanks a lot for the explanation you gave in the thread above. I agree that the aforementioned syntax is problematic and should be avoided. We can decide offline whether to enforce this via a linter or via code review, so as not to delay this work further.

I just kicked off CI once more; we should be able to merge this once everything looks green. :)

@datumbox datumbox merged commit 3a1f05e into pytorch:main Sep 20, 2022
@github-actions

Hey @datumbox!

You merged this PR, but no labels were added. The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py

@ambujpawar
Contributor Author

Thanks for the kind words. Looking forward to solving more issues.
Thanks for your patience.

@ambujpawar ambujpawar deleted the remove_warnings_pytest_models branch September 20, 2022 11:21
facebook-github-bot pushed a commit that referenced this pull request Sep 23, 2022
Summary:
* ADD: init_weights config for googlenet

* Fix: Inception and googlenet warnings

* Fix: warning in test_datasets.py

* Fix: Formatting error with ufmt

* Fix: Failing tests in quantized_classification_model

* Update test/test_models.py to make googlenet in 1 line

* Refactor: Change inception quantisation class initialization to use args/kwargs

* Resolve mypy issue

* Move *args before inception_blocks

* Move args keywords before other arguments

Reviewed By: NicolasHug

Differential Revision: D39765307

fbshipit-source-id: 2b52fefd4c6c71a07d180247f098e44458e66e74

Co-authored-by: Philip Meier <github.pmeier@posteo.de>
Co-authored-by: Ambuj Pawar <your_email@abc.example>
Co-authored-by: Philip Meier <github.pmeier@posteo.de>
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
Development
Successfully merging this pull request may close these issues: Fix warnings in the tests
4 participants