
logging in test runner / CI : higher log level but only print failing tests #1120

jku opened this issue Sep 8, 2020 · 3 comments

jku commented Sep 8, 2020

We should do what in-toto does: have the test runner produce much more logging, but show it only for failing tests. This should be especially useful for CI, but also for developers.

Assuming we hard-code a good log level (see #1093 for potential alternatives), I think we only need roughly this in aggregate_tests.py:

import logging
import sys

class TestHandler(logging.StreamHandler):
  """StreamHandler that looks up sys.stderr dynamically instead of
  caching it, so the test runner can swap sys.stderr between tests."""
  def __init__(self):
    super().__init__()

  @property
  def stream(self):
    # Always resolve the current sys.stderr at emit time
    return sys.stderr

  @stream.setter
  def stream(self, value):
    # Ignore the stream value StreamHandler.__init__ tries to cache
    pass

if __name__ == '__main__':
  logging.basicConfig(level=logging.INFO, handlers=[TestHandler()])
  # current testrunner code here...

This is needed because the test runner's buffering will not otherwise work with long-lived loggers: the default handler caches the sys.stderr value, but the test runner wants to modify sys.stderr between test runs (and buffering is required to print output only for failing tests).
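The caching problem can be demonstrated in a few lines. This is a minimal sketch (not project code) showing that a default StreamHandler keeps pointing at the sys.stderr object it saw at construction time, even after sys.stderr is swapped out the way buffered unittest does:

```python
import io
import logging
import sys

# The default StreamHandler caches the sys.stderr object at
# construction time, so after unittest's buffering swaps sys.stderr,
# log records still go to the old stream and escape the buffer.
cached = logging.StreamHandler()   # caches the current sys.stderr object

original = sys.stderr
sys.stderr = io.StringIO()         # roughly what buffered unittest does

print("handler still holds the old stream:", cached.stream is original)
sys.stderr = original              # restore

# → handler still holds the old stream: True
```

A handler whose `stream` property re-reads sys.stderr on every access, as in the snippet above, sidesteps this.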

jku pushed a commit to jku/python-tuf that referenced this issue Sep 15, 2020
Fix the aggregate_tests.py TextTestRunner buffering so that
output from failing tests is now printed below the failure. This
is done by adding a logging handler that allows TextTestRunner
to modify sys.stderr value between tests.

aggregate_tests.py now accepts the same verbosity flag as individual
test files (e.g. '-vvv' for DEBUG level).

tox sets '-vvv' so failing tests will now get DEBUG output: this
probably makes sense on CI at least.

Fixes theupdateframework#1120

Signed-off-by: Jussi Kukkonen <jkukkonen@vmware.com>
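The verbosity handling the commit describes ('-vvv' mapping to a log level) might be sketched roughly like this. This is a hypothetical illustration, not the actual aggregate_tests.py code; the function name and the exact level mapping are assumptions:

```python
import argparse
import logging

# Hypothetical mapping from a repeated -v flag count to a log level,
# in the spirit of the commit message ('-vvv' for DEBUG level).
def verbosity_to_level(count):
    levels = [logging.ERROR, logging.WARNING, logging.INFO, logging.DEBUG]
    return levels[min(count, len(levels) - 1)]

parser = argparse.ArgumentParser()
parser.add_argument("-v", action="count", default=0, dest="verbosity")
args = parser.parse_args(["-vvv"])
print(logging.getLevelName(verbosity_to_level(args.verbosity)))  # → DEBUG
```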
@jku jku mentioned this issue Sep 15, 2020
jku pushed a commit to jku/python-tuf that referenced this issue Sep 16, 2020
Fix the aggregate_tests.py TextTestRunner buffering so that
output from failing tests is now printed below the failure. This
is done by adding a logging handler that allows TextTestRunner
to modify sys.stderr value between tests.

aggregate_tests.py now accepts the same verbosity flag as individual
test files (e.g. '-vvvv' for DEBUG level).

tox sets '-vvv' so failing tests will now get INFO output: this
probably makes sense on CI at least.

Fixes theupdateframework#1120

Signed-off-by: Jussi Kukkonen <jkukkonen@vmware.com>
jku commented Sep 17, 2020

I have a commit for this (see link above), but it might require some discussion and work:

  • My original idea was to increase the default tox log level significantly (to INFO or even DEBUG), and that is what the branch does. The logic: on CI, successful test output should be hidden, but failing tests should output as much as possible to provide the needed context. However, I'm not sure whether other developers use tox in their development workflows; maybe tox should not print that much by default, and CI should instead have a separate tox config with a higher log level?
  • The unittest buffering does not seem to work properly on Python 2. This means logs from successful tests are not hidden, so there is a huge amount of output, even at INFO level.
  • Failing test output is currently printed twice: first all of the output from all failing tests is printed together, then each test's output is printed again, nicely separated by test names and results. We would like to suppress the former and keep only the latter, but this seems to be a unittest issue: in-toto has the same problem.

I should maybe make a draft PR of this and make some tests fail to showcase this...

MVrachev commented Sep 18, 2020

My original idea was to increase the default tox log level significantly (to INFO or even DEBUG), and that is what the branch does. The logic: on CI, successful test output should be hidden, but failing tests should output as much as possible to provide the needed context. However, I'm not sure whether other developers use tox in their development workflows; maybe tox should not print that much by default, and CI should instead have a separate tox config with a higher log level?

We cannot assess how all developers are using tox, but probably the best we can do is to ask about it at a TUF meeting.
I think that locally the default level should be above INFO, the same way you decided to use ERROR as the default level when running with python <test_file> in PR #1145. On a CI test run we do want more information, so I agree INFO or DEBUG is the right choice.
A separate configuration for the CI seems to be the right way to achieve both of those goals.
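Such a split could be sketched roughly as follows. This is a hypothetical tox.ini fragment; the env names, commands, and flags are illustrative assumptions, not the project's actual configuration:

```ini
# Hypothetical tox.ini fragment: a quiet default env for local runs
# and a verbose env intended for CI.
[testenv]
commands = python aggregate_tests.py

[testenv:ci]
commands = python aggregate_tests.py -vvv
```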

The unittest buffering does not seem to work properly on Python 2. This means logs from successful tests are not hidden, so there is a huge amount of output, even at INFO level.

Another reason why we want a log level above INFO locally.

Failing test output is currently printed twice: first all of the output from all failing tests is printed together, then each test's output is printed again, nicely separated by test names and results. We would like to suppress the former and keep only the latter, but this seems to be a unittest issue: in-toto has the same problem.

Yes, I have encountered that problem multiple times. Have you checked whether that is a known issue for this module?

jku commented Sep 18, 2020

bugs.python.org is not the easiest to search, but I haven't found a bug for this.
