Disable individual tests #597
Conversation
Force-pushed from 8948bef to 98c8d12
Would love to see this make it in. It's especially useful as a guide in the code for others. We use the pre-commit framework and have bandit in use with it, so skipping one check on one line is way better than skipping all checks for a single line, or one check for the entire run.
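For context, running bandit under the pre-commit framework looks roughly like this; the `rev` pin below is illustrative, not a recommendation:

```yaml
# .pre-commit-config.yaml (sketch)
repos:
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.0  # illustrative pin; use a current release
    hooks:
      - id: bandit
```

In this setup a blanket `# nosec` is the only escape hatch without this PR, which is why per-test skipping matters here.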
Force-pushed from 98c8d12 to 01def46
Isn't it cleaner to have …?
@sbrunner I think it's probably best to follow your first suggestion and be in line with @lukehinds' or @ericwb's thoughts?
Force-pushed from 01def46 to 674e5de
+1. Requesting a review on this PR.
Devs, please re-review this PR. Thanks @mikespallino for the patch.
👍 please add this, it would help us out. |
Force-pushed from e73f5f9 to f67796f
Second this PR, I was looking for this exact feature! @lukehinds @ericwb @ghugo @sigmavirus24 can you please review and merge? |
This is presently not mergeable. I'm happy to review once it's up to date.
Force-pushed from b9d406c to 433fb15
Should be ready for review now @sigmavirus24 |
bandit/core/tester.py (Outdated)

    'nosecs_by_tests': 0,
    'failed_nosecs_by_test': 0
Why are we adding these here? We don't report nosec here; from what I can see in this file, we only use that to exclude those lines from checks.
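The exclusion idea referenced here — nosec line numbers are collected only to suppress findings, never reported themselves — can be sketched with the stdlib tokenizer. The function name and shape below are hypothetical, not bandit's actual code:

```python
import io
import tokenize

def collect_nosec_lines(source):
    """Hypothetical sketch: find line numbers carrying a '# nosec' comment.

    These line numbers are only used to *exclude* findings from checks;
    nothing here reports nosec usage itself.
    """
    nosec_lines = set()
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    for tok in tokens:
        if tok.type == tokenize.COMMENT and "nosec" in tok.string:
            nosec_lines.add(tok.start[0])  # start row of the comment token
    return nosec_lines

src = (
    "import subprocess\n"
    "subprocess.call('ls', shell=True)  # nosec\n"
)
print(collect_nosec_lines(src))
```

A checker would then drop any result whose line number is in that set.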
I sort of shoehorned this in here because I couldn't think of any existing structures that were better suited for tracking these until we make it to metric collection. The trouble being that we check for test failures in this module, but we report the metrics separately and I needed a way to tie these results over to metrics so that they could be reported. This is complicated by the way that this method is called within BanditNodeVisitor, so here I'm pretty much calculating these metrics on the fly while visiting the AST.
Very open to ideas on changing this around.
So our tester class has other attributes on it. In all of the visitor functions we update scores, but we could also update metrics too, because we have the tester and the metrics gatherer:

bandit/core/node_visitor.py, lines 35 to 46 at e0a12a9:

    self.tester = b_tester.BanditTester(
        self.testset, self.debug, nosec_lines)
    # in some cases we can't determine a qualified name
    try:
        self.namespace = b_utils.get_module_qualname_from_path(fname)
    except b_utils.InvalidModulePath:
        LOG.info('Unable to find qualified name for module: %s',
                 self.fname)
        self.namespace = ""
    LOG.debug('Module qualified name: %s', self.namespace)
    self.metrics = metrics

So if we changed our tester to store these metrics you care about, perhaps in a `metrics = {}` default dictionary, we could update metrics from there before/after lines like

bandit/core/node_visitor.py, line 78 at e0a12a9:

    self.update_scores(self.tester.run_tests(self.context, 'FunctionDef'))
I ended up just passing metrics into BanditTester's constructor when we instantiate it in BanditNodeVisitor's constructor, so we can just call the metric function there directly. Is this okay? I originally did it this way to avoid changing any return types or function signatures.
Force-pushed from b4b7222 to 3595ab0
Force-pushed from 4cc9247 to abb233d
Some of these comments I meant to post weeks ago, but I think they're still valid. Some may be duplicates, because the GitHub UI is confusing in this matter and I can't tell how to remove old duplicates.
This PR allows disabling specific tests by ID (e.g. B602, B607). It also supports using the test name (e.g. subprocess_popen_with_shell_equals_true), as requested in #418.
The nosec comment behaves nicely with others and can appear anywhere in a string of comments.
If nosec is used without specific tests, it acts as before, blanket-ignoring all tests for that line.
If nosec is used with specific tests and a test not listed in the comment is triggered, the line will still be caught and reported.
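A usage sketch of the behavior described above. The commands run are harmless on a POSIX system; B602 and B607 are real bandit test IDs, but honoring them per-line depends on this change being merged:

```python
import subprocess

# Blanket nosec: suppresses every bandit finding on this line (old behavior).
rc1 = subprocess.call("true", shell=True)  # nosec

# Targeted nosec: suppresses only B602 (shell=True) and B607 (partial
# executable path); any other test firing on this line is still reported.
rc2 = subprocess.call("true", shell=True)  # nosec B602 B607

# Test-name form, as requested in #418:
rc3 = subprocess.call("true", shell=True)  # nosec subprocess_popen_with_shell_equals_true

print(rc1, rc2, rc3)
```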