🐛 Pinned-Dependencies continues on error #3515
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##             main    #3515      +/-   ##
==========================================
- Coverage   76.01%   70.40%   -5.62%
==========================================
  Files         206      206
  Lines       14003    14053      +50
==========================================
- Hits        10645     9894     -751
- Misses       2727     3585     +858
+ Partials      631      574      -57
```
I think this is fine. There's the chance of missing dangerous workflows this way, but if we log it (either in the execution, or as a detail) then hopefully we still get bug reports.
I think the simplest is to just throw in a log that mentions it, but I can see the value in having it in the details. We currently handle some parse errors as just a silent debug statement; see scorecard/checks/evaluation/pinned_dependencies.go, lines 80 to 89 (at 7034306).
These days, it's usually done with `errors.As`. Of course, the other option is to fix the root cause of #3316 instead of skipping over it, but that may be a more complicated fix. Any thoughts @laurentsimon @raghavkaul?
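For context, a minimal, self-contained sketch of the `errors.As` pattern being suggested; the `jobOSParseError` type and its field are purely illustrative, not actual Scorecard types:

```go
package main

import (
	"errors"
	"fmt"
)

// jobOSParseError is a hypothetical error type carrying the job that could
// not be analyzed; the real error types in Scorecard may look different.
type jobOSParseError struct {
	Job string
}

func (e *jobOSParseError) Error() string {
	return fmt.Sprintf("unable to determine OS for job: %s", e.Job)
}

func main() {
	// Errors are often wrapped as they bubble up through the analysis.
	err := fmt.Errorf("parsing workflow: %w", &jobOSParseError{Job: "Build python wheels"})

	// errors.As walks the wrapped chain and extracts the typed error if present,
	// so the caller can skip the offending job instead of failing the check.
	var osErr *jobOSParseError
	if errors.As(err, &osErr) {
		fmt.Println("skipping job:", osErr.Job)
	}
}
```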
I've just pushed some commits such that the skipped steps are displayed in the details.

Currently:

```json
{
  "details": null,
  "score": -1,
  "reason": "internal error: internal error: unable to determine OS for job: Build python wheels on ${{matrix.arch}} for ${{ matrix.os_python.os }}"
}
```

With the proposed change, the score is now correctly calculated (skipping the problematic steps), and everything that had to be skipped is logged in the details:

```json
{
  "details": [
    "Warn: Possibly incomplete results: error parsing job operating system: .github/workflows/build_wheels.yml's job 'Build python wheels on ${{matrix.arch}} for ${{ matrix.os_python.os }}' (step 5)",
    "Warn: Possibly incomplete results: error parsing job operating system: .github/workflows/build_wheels.yml's job 'Build python wheels on ${{matrix.arch}} for ${{ matrix.os_python.os }}' (step 7)",
    "Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/assign_milestone.yml:34: update your workflow using https://app.stepsecurity.io/secureworkflow/apache/beam/assign_milestone.yml/master?enable=pin",
    "... 1000+ other warnings ..."
  ],
  "score": 0,
  "reason": "dependency not pinned by hash detected -- score normalized to 0"
}
```

It was also quite straightforward to do the same for the apache/arrow error, so I've implemented that as well. Before:

```json
{
  "details": null,
  "score": -1,
  "reason": "internal error: error parsing shell code: ci/docker/python-wheel-windows-test-vs2017.dockerfile:1:2: \"if <cond>\" must be followed by \"then\""
}
```

Now:

```json
{
  "details": [
    "Warn: Possibly incomplete results: error parsing shell code: ci/docker/python-wheel-windows-test-vs2017.dockerfile:1:2: \"if <cond>\" must be followed by \"then\"",
    "Warn: Possibly incomplete results: error parsing shell code: ci/docker/python-wheel-windows-vs2017.dockerfile:1:2: \"if <cond>\" must be followed by \"then\"",
    "Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/archery.yml:53: update your workflow using https://app.stepsecurity.io/secureworkflow/apache/arrow/archery.yml/main?enable=pin",
    "... 300+ warnings ..."
  ],
  "score": 0,
  "reason": "dependency not pinned by hash detected -- score normalized to 0"
}
```

I've seen a few other places where this could perhaps also be applied, but I haven't dug into them to be sure yet.
Thanks for the tip, I used it! (I'm still getting my golang sea legs...)
The issue here isn't necessarily in Scorecard, though.
I'd lean towards `Info` instead of `Warn`, just so it doesn't end up as something in the security dashboard?
I have a change I was playing around with, but haven't had time to finish testing.
Because we don't have a shell parser for PowerShell. So it's partially Scorecard's fault.
I'm fine with logging only for now, until we have better parsing code.
Related discussion here: #1327
I've just pushed a few changes.
`ElementError` is now in checker/raw_result, where it should've been from the start. Its `.Element` field is no longer a string, but a `Location`, making it easier to register precisely where the error occurred.
Due to this change, we also have improved logging of the errors, which now shows the specific lines and follows the styling of the other details (since we're using the same plumbing).
Lastly, `PinningDependenciesData.Incomplete []error` has been replaced with `.ProcessingErrors []ElementError`. The name is better, and using a struct makes more sense in this case.
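As a rough sketch of the shape being described here (field names beyond those mentioned in this thread, such as `Err` and the `Location` members, are assumptions rather than the exact Scorecard definitions):

```go
// Sketch only: approximates the ElementError / ProcessingErrors shape
// discussed above, not the actual scorecard types.
package checker

// Location points at the file (and optionally line) where analysis stumbled.
type Location struct {
	Path      string
	LineStart *uint
}

// ElementError describes an element (workflow job, script, Dockerfile, ...)
// that could not be fully analyzed, which may make the results incomplete.
type ElementError struct {
	Location Location
	Err      error
}

func (e *ElementError) Error() string {
	return "possibly incomplete results: " + e.Err.Error()
}

// Unwrap lets errors.Is / errors.As inspect the underlying cause.
func (e *ElementError) Unwrap() error { return e.Err }

// PinningDependenciesData carries the processing errors alongside the usual
// dependency results (other fields omitted here).
type PinningDependenciesData struct {
	ProcessingErrors []ElementError
}
```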
I'd thought of using the `error` interface in case other error types come up which aren't a good fit for `ElementError`, but using the interface only makes sense if we actually work with the errors as errors, i.e. using `.Error()` to display them. However, since our logging relies on passing `LogMessage` structs to the logger, the string returned by `.Error()` isn't a good fit.
Unless we want to create our own error interface (or, more broadly, a something-to-be-logged interface) that expects a `LogMessage` instead of a string?
```go
// name still open: ScorecardError or LogMessenger
type LogMessenger interface {
	Message() checker.LogMessage
}
```
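For illustration, continuing the `ElementError` sketch above, an error type could satisfy such an interface and be logged without any string round-trip. The `LogMessage` fields and `DetailLogger` methods below are minimal stand-ins, not the real scorecard definitions:

```go
// Sketch only: minimal stand-ins for checker.LogMessage / checker.DetailLogger,
// showing how ElementError (from the earlier sketch) could satisfy the
// proposed interface.
package checker

type LogMessage struct {
	Path string
	Text string
}

type DetailLogger interface {
	Info(msg *LogMessage)
	Warn(msg *LogMessage)
}

// Message converts the processing error into a loggable message, so callers
// never have to go through the plain .Error() string.
func (e *ElementError) Message() LogMessage {
	return LogMessage{
		Path: e.Location.Path,
		Text: "Possibly incomplete results: " + e.Err.Error(),
	}
}

// logProcessingErrors logs every recorded processing error uniformly.
func logProcessingErrors(dl DetailLogger, errs []ElementError) {
	for i := range errs {
		msg := errs[i].Message()
		dl.Info(&msg) // or dl.Warn, per the discussion above
	}
}
```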
The errors are still displayed as `Info`, not `Error` as suggested in #1327. I don't know if that change (adding a new logger function) fits in this PR or if it should come in a follow-up.
Please fix the unit tests too.
- Replace ElementError's `Element *finding.Location` with `Location finding.Location`
- Rename ErrorJobOSParsing to ErrJobOSParsing to satisfy linter
- Fix unit test

Signed-off-by: Pedro Kaj Kjellerup Nacht <pnacht@google.com>
* Continue on error detecting OS
* Add tests for error detecting OS
* Add ElementError to identify elements that errored
* Add Incomplete field to PinningDependenciesData: will store all errors handled during analysis, which may lead to incomplete results
* Register job steps that errored out
* Add tests that incomplete steps are caught
* Add warnings to details about incomplete steps
* Add tests that incomplete steps generate warnings
* Register shell files skipped due to parser errors
* Add tests showing when parser errors affect analysis: Dockerfile pinning is not affected (everything in a 'broken' Dockerfile RUN block is ignored); everything in a 'broken' shell script is ignored; testdata/script-invalid.sh modified to demonstrate the above
* Incomplete results logged as Info, not Warn
* Remove `Type` from logging of incomplete results
* Update tests after rebase
* Add Unwrap for ElementError, improve its docs
* Add ElementError case to evaluation unit test
* Move ElementError to checker/raw_result: checker/raw_result defines types used to describe analysis results, and ElementError is meant to describe potential flaws in the analysis and is therefore a sort of analysis result itself
* Use finding.Location for ElementError.Element
* Use an ElementError for script parser errors
* Replace .Incomplete []error with .ProcessingErrors []ElementError
* Adopt from reviewer comments: replace ElementError's `Element *finding.Location` with `Location finding.Location`; rename ErrorJobOSParsing to ErrJobOSParsing to satisfy linter; fix unit test

Signed-off-by: Pedro Kaj Kjellerup Nacht <pnacht@google.com>
Signed-off-by: Diogo Teles Sant'Anna <diogoteles@google.com>
What kind of change does this PR introduce?
What is the current behavior?
Whenever Pinned-Dependencies hits a runtime error (e.g. it can't detect a job's OS or can't parse a Dockerfile), the check crashes entirely and the project gets an inconclusive score for the check.
What is the new behavior (if this is a feature change)?
If Pinned-Dependencies hits a runtime error, the "element" that caused the error (e.g. a workflow job or Dockerfile) is skipped and the check progresses as well as possible.
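Conceptually, the change replaces "return the error and abort the check" with "record the error and continue". A minimal, self-contained illustration of that pattern (none of the names below are actual Scorecard APIs):

```go
package main

import (
	"errors"
	"fmt"
)

var errNoOS = errors.New("unable to determine OS for job")

// detectOS stands in for the real OS-detection logic.
func detectOS(job string) (string, error) {
	if job == "Build python wheels" {
		return "", errNoOS
	}
	return "ubuntu-latest", nil
}

func main() {
	jobs := []string{"Lint", "Build python wheels", "Test"}
	var processingErrors []string

	for _, job := range jobs {
		osName, err := detectOS(job)
		if err != nil {
			// Old behavior: return err here, failing the whole check.
			// New behavior: record the problem and keep analyzing other jobs.
			processingErrors = append(processingErrors, fmt.Sprintf("%s: %v", job, err))
			continue
		}
		fmt.Printf("analyzing job %q on %s\n", job, osName)
	}

	for _, e := range processingErrors {
		fmt.Println("Warn: Possibly incomplete results:", e)
	}
}
```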
TO-DO
At this stage of the PR, the proposed change only applies to failures to detect a job's operating system (as happens with apache/beam). The idea is for this to cover other similar cases (e.g. apache/arrow, caused by an error parsing a Dockerfile).

Also, at this stage, the failure to detect a job's operating system isn't logged anywhere; the job is simply skipped "invisibly". This is useful information that should be displayed as a warning in the check's details. However, there's currently no place to store this information, since `PinningDependenciesData` only stores data on problematic dependencies, not on problems encountered while investigating.

The most straightforward solution I see is to modify `PinningDependenciesData` to also contain new `SkippedWorkflowJobs` and `SkippedFiles` fields (each a struct containing the job/filename and skip reason). The data in these fields can later be added to the logs as warnings. The job/filenames can either be parsed from the error message, or a new error type can be defined which also includes this metadata; a type assertion can then be used to identify such cases and extract the data from the error. (I don't know where I'd store this new error type.)
However, I'm no expert in either Go or the Scorecard codebase, so I'd like to get feedback on this solution before implementing it.
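A rough sketch of the fields proposed above. This reflects the original proposal only (the PR later landed on `ProcessingErrors []ElementError` instead, as discussed in the review), and the struct and field names beyond `SkippedWorkflowJobs`/`SkippedFiles` are assumptions:

```go
// Sketch of the proposed (later superseded) fields; not actual scorecard code.
package checker

type skippedWorkflowJob struct {
	File   string // workflow file containing the job
	Job    string // name of the job that was skipped
	Reason string // why it had to be skipped
}

type skippedFile struct {
	Path   string
	Reason string
}

// Under this proposal, PinningDependenciesData would track skipped elements
// alongside the usual dependency data (other fields omitted), so they can
// later be surfaced as warnings in the check's details.
type PinningDependenciesData struct {
	SkippedWorkflowJobs []skippedWorkflowJob
	SkippedFiles        []skippedFile
}
```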
I have added the same test workflow to multiple test functions in raw/pinned-dependencies-test.go. The results show that a problematic job affects non-GHA-pinning scores, but GHA-pinning is unaffected.

Notes for the reviewer
I couldn't see where to neatly add anything regarding this change to the Pinned-Deps documentation (which is somewhat high-level, without much detail on how the score is calculated), so I have left the docs as-is. Let me know if you'd like a comment on this error handling added to the check's docs.
Which issue(s) this PR fixes
Fixes #3316.
Does this PR introduce a user-facing change?