18F-originated code is scanned for vulnerabilities before being allowed into production #260
Does 18F / GSA have access to any (white box) code scanning services today? Most of the open source tooling in this area is either not very good or tied to a specific language (or even a specific framework), and given the number of languages currently used in the various cloud.gov components, it could be very difficult to implement scanning for all of them in a reasonable amount of time using open source tools.
We did some evaluation of various proprietary scanning tools as part of the Compliance Toolkit project, but at least for web projects, didn't find anything that did any better than the open source options. We could scan things like the Cloud Controller for web vulnerabilities, but there isn't any such thing as a generic scanner for any type of code, that I'm aware of. /cc @jacobian
Just heard from the GSA CISO office that they use Checkmarx and Fortify, so we can get access to those if desired. Checkmarx was one of the tools we evaluated, and passed on:
Do any of the proprietary tools know golang? I didn't see much available on the open-source side.
I haven't heard of any scanners for Go, no.
Identifying a list of languages we need to scan is probably a good start. I know for sure that Cloud Foundry itself has at least:
Our eventing is Riemann, so Clojure. What else is out there? Do we have a full inventory? From my experience with these types of tools, a human always has to review the report, and frequently has to do a deep-dive code analysis to really get anything out of the results of a scan. For example, I've attached dependency-check-report.html.zip. As you can see, it includes several libraries with known HIGH/MEDIUM vulnerabilities, BUT those vulns are only actually vulns if the library is used in specific ways. For example:
Knowing that pretty much all open source software is going to produce a report that looks like this (or worse), the technology needed to automate our scans is the easy part. The hard part is going to be defining, and supporting with enough resources, an ongoing process to review, certify, and remediate (which may require forking) pretty much everything we depend on, based on the results from whatever tool(s) we use to scan our codebase.
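One concrete way to support that review process: OWASP Dependency-Check (the tool behind the report attached above) supports a suppression file, so each dismissed finding carries a written justification alongside it. A minimal sketch, where the library pattern, CVE number, and review note are all hypothetical placeholders (and the schema version may differ from what your Dependency-Check release expects):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<suppressions xmlns="https://jeremylong.github.io/DependencyCheck/dependency-suppression.1.1.xsd">
  <suppress>
    <!-- The justification lives next to the suppression, so the "why" stays auditable -->
    <notes>Reviewed: the vulnerable code path is never invoked by our usage;
    see the write-up in the tracking issue. (Hypothetical example.)</notes>
    <gav regex="true">org\.example:example-lib:.*</gav>
    <cve>CVE-2016-99999</cve>
  </suppress>
</suppressions>
```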
@afeld what do we need to do to get access to HP Fortify? It's probably at least worth evaluating vs. the open-source tools. I was hoping we might have access to Veracode, which is usually the gold standard for this type of thing, but I'll work with what we've got :)
FWIW I have some experience with Fortify and I've had good results, albeit over there in Java-land. I think it's worth a shot, especially if we get it for "free" (as part of the GSA license).
I'm pretty sure @dlapiduz was chasing down Fortify with @nvembar. We can always start with Fortify and add other scanners later if there's one that gives better results. The priority right now is to make sure there's a place in our pipeline where these sorts of scans run at all, in order to address the immediate compliance blocker... Evaluating and responding to the results, reducing noise/false positives, getting deeper coverage, etc. is all stuff that can be improved over time at a less urgent pace.
We spoke with them last Friday, and while they were eager to help, it seemed like a multi-week process to get up and running, which doesn't work with our deadline. There's also the issue that a large portion of the OSS code we rely on is Go, and neither tool supports that language. Their ask to move forward was a list of the repos and languages we wanted to scan. I generated a first draft of that list yesterday based on what we are currently deploying via Concourse, and it's surprisingly large, although I'm hoping it will let us get started with them, even if it's on a subset of our dependencies. I'm looking at what little exists for Go scanners this afternoon so we can say we are running something, but right now they all appear to be research/toy projects :/
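For reference, one of the few Go options that existed around this time was gas (the Go AST Scanner, later renamed gosec). A minimal sketch of running it across a repo, assuming a working Go toolchain; the import path and invocation below reflect my understanding of the pre-rename project and should be verified against its README:

```sh
# Fetch the scanner (pre-gosec import path; verify against the project README)
go get github.com/GoASTScanner/gas

# Walk every package in the current repo and report potentially unsafe patterns
gas ./...
```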
Have reached out to FedRAMP PMO for information about getting access to Fortify on Demand.
Change in plans... After a discussion with @NoahKunin, we will be following the Before You Ship guide for 18F-originated repositories, in keeping with the practices observed in our leveraged dependency ATOs. I've edited the story definition up top accordingly with @cnelson.
Just to sanity check before I enable these tools for all repos in scope: this is what we are after, correct? Same report after introducing a "defect": https://codeclimate.com/github/18F/cg-uaa-invite/pull/11
@cnelson TLDR: yes! Full answer: actually, incorporating it as a hard CI block is even better than a simple alert, of course. That's not (yet) a formal requirement of Before You Ship. Right now the threshold would be to set up Code Climate (or whatever) continuous monitoring and ensure at minimum that there's a badge code snippet in the README (usually one is available from the provider; see example), and that the system sends some alerts on some cadence to the team responsible for fixes (emails, Slack notifications, whatever; I don't micromanage team notification processes).
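For illustration, the README badge snippet Code Climate hands out looks roughly like the markdown below (repo path borrowed from the PR linked earlier; the exact snippet comes from the repo's own Code Climate page):

```markdown
[![Code Climate](https://codeclimate.com/github/18F/cg-uaa-invite/badges/gpa.svg)](https://codeclimate.com/github/18F/cg-uaa-invite)
```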
We're going to avoid going down the broadcast-notification rabbit hole for now... The process as it stands will make sure that checks run when new code is proposed, and our policy of not accepting our own pull requests means someone will have to deal with issues noted via PRs as they happen.
Code Climate can send weekly summary emails of the current state of your code. There's your MVP: regularly scheduled reminders on top of the scanning during each PR.
Just sent them an email:
...because I'm not sure that the summary emails would actually trigger new scans of inactive repositories. |
Do they have an API where you can trigger a scan?
@dlapiduz, yep: https://codeclimate.com/docs/api
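A sketch of what triggering a re-scan could look like; note that the endpoint and token parameter here are my assumptions, not confirmed against those docs:

```sh
# Hypothetical call to re-run analysis on a repo; the refresh endpoint and
# auth parameter are assumptions -- confirm against the Code Climate API docs
curl -X POST "https://codeclimate.com/api/repos/$REPO_ID/refresh?api_token=$API_TOKEN"
```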
Also, you can run these tools locally during development (requires Docker):
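For anyone who wants to try that, a sketch of the local invocation using the Code Climate CLI's Docker wrapper, with mounts per the codeclimate/codeclimate README (double-check against the current docs):

```sh
# Analyze the current checkout with Code Climate's engines, all inside Docker
docker run \
  --interactive --tty --rm \
  --env CODECLIMATE_CODE="$PWD" \
  --volume "$PWD":/code \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --volume /tmp/cc:/tmp/cc \
  codeclimate/codeclimate analyze
```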
Is there work here intended for Liberator? We are wondering what, if anything, we are supposed to do with the results. We've gone through the findings indicating that we have some vulnerabilities, but we're not sure whether they're applicable to the front-end application. @msecret and I just spoke briefly and are wondering what the process is to either address these or set them up to be ignored. Marco could speak more to this, but I believe the current changes are setting our tests to fail.
@suprenant you can go into Code Climate and mark them as false positives if they don't indicate anything demonstrably wrong with the code.
First, come to a final answer on how you see the results. If you concur that a finding is real, fix it. If you can't fix it immediately, or think it's not applicable, or just disagree with the finding, mark it as such and document why. If consensus can still not be reached, forward the whole Issue to me for a final call.
Noah S. Kunin
The above doesn't contradict what @mogul said, but regardless of why we are ignoring a finding (false positive or not applicable), we need a short written explanation somewhere.
Looks done. Will ensure all controls are updated to reflect reality.
In order to mitigate the risk of a subtle code vulnerability being introduced into production cloud.gov by a careless/malicious commit, code included in cloud.gov originating from 18F goes through a scan for vulnerabilities before we allow it into production.
Acceptance Criteria
Implementation sketch
Repos in Scope