
18F-originated code is scanned for vulnerabilities before being allowed into production #260

Closed
12 tasks done
mogul opened this issue Aug 31, 2016 · 27 comments

Comments


mogul commented Aug 31, 2016

In order to mitigate the risk of a subtle vulnerability being introduced into production cloud.gov by a careless or malicious commit, code included in cloud.gov that originates from 18F goes through a scan for vulnerabilities before we allow it into production.

Acceptance Criteria

  • We demonstrate that code is tested for vulnerabilities before it is merged into the mainline of 18F-originated repositories (e.g., at pull-request time)
  • We can demonstrate that intentionally bad commits cause an addition to the scanning report

Implementation sketch

  • Follow the Before You Ship guide to implement an appropriate static analysis tool on each 18F-originated repo that is included in cloud.gov (Gemnasium, Hakiri, Code Climate)
  • Deliberately make a bad commit, and note in the PR comments a pointer to how the problem was detected in the automatically generated report (see the example below)
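
For illustration, here is a hypothetical "bad commit" of the kind this sketch calls for, in Python. None of this is real cloud.gov code, and the exact rule names depend on which tool above we pick, but hardcoded credentials and shell injection are staple findings for these scanners:

```python
# hypothetical_bad_commit.py -- a planted example, not real cloud.gov code.
import subprocess

DB_PASSWORD = "hunter2"  # hardcoded credential: should show up in the scan report

def dump_table(table_name):
    # Untrusted input interpolated into a shell command: classic command
    # injection, and the kind of finding the PR-time scan must surface.
    subprocess.call("pg_dump --table=%s prod_db" % table_name, shell=True)
```

Opening a PR with something like this should cause a new entry in the scan report, satisfying the second acceptance criterion.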

Repos in Scope

@mogul added the Atlas and compliance (Compliance, security, and accessibility issues) labels and removed the compliance label Aug 31, 2016
@mogul mogul added this to the FedRAMP JAB review remediations milestone Sep 7, 2016

cnelson commented Sep 7, 2016

Does 18F / GSA have access to any (white box) code scanning services today?

Most of the open source tooling in this area is not very good, or is tied to a specific language (or even a specific framework). Given the number of languages currently used in the various cloud.gov components, it could be very difficult to implement scanning for all of them in a reasonable amount of time using open source tools.


afeld commented Sep 8, 2016

We did some evaluation of various proprietary scanning tools as part of the Compliance Toolkit project, but at least for web projects, we didn't find anything that did any better than the open source options. We could scan things like the Cloud Controller for web vulnerabilities, but there isn't any such thing as a generic scanner for any type of code that I'm aware of.

/cc @jacobian


afeld commented Sep 8, 2016

Just heard from the GSA CISO office that they use Checkmarx and Fortify, so we can get access to those if desired. Checkmarx was one of the tools we evaluated, and passed on:


jmcarp commented Sep 8, 2016

Do any of the proprietary tools know golang? I didn't see much available on the open-source side.


afeld commented Sep 8, 2016

I haven't heard of any scanners for Go, no.


cnelson commented Sep 8, 2016

Identifying a list of languages we need to scan is probably a good start. I know for sure that Cloud Foundry itself has at least:

  • Ruby
  • Java
  • Go

Our eventing is Riemann, so add Clojure to the list. What else is out there? Do we have a full inventory?

In my experience with these types of tools, a human always has to review the report, and frequently has to do a deep-dive code analysis, to really get anything out of the results of a scan.

For example, I've attached dependency-check-report.html.zip, a report on the current release of UAA from OWASP's dependency-check (which works against the NVD).

As you can see, it includes several libraries with known HIGH/MEDIUM vulnerabilities, BUT those vulns are only actually exploitable if the library is used in specific ways.

For example:

  • You only need to worry about the BouncyCastle vulns if you aren't using the Java unlimited-strength encryption policy and are also using elliptic curve crypto. (I know that in this case this doesn't apply to us, so we can ignore it, but it's going to be flagged every time UAA is scanned.)
  • You only need to worry about the Apache Commons issues if you are accepting untrusted serialized data (does UAA do that? I don't know; in this case we need to do a code review). See the sketch just below for the same pattern.
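
To make that "only if used in specific ways" point concrete, here's a minimal sketch in Python (the UAA case is Java serialization, but the shape is identical; the function names here are made up for illustration). The same deserialization call is a real finding in one context and a documentable false positive in another:

```python
import json
import pickle

def parse_user_payload(raw_bytes):
    # Untrusted input: stick to a data-only format. Calling pickle.loads()
    # here would be a genuine finding, since unpickling attacker-controlled
    # bytes can execute arbitrary code.
    return json.loads(raw_bytes)

def restore_internal_cache(raw_bytes):
    # Same API, different exposure: these bytes never leave our own
    # infrastructure, so a scanner hit on this line is a candidate
    # false positive to document rather than a vuln to fix.
    return pickle.loads(raw_bytes)
```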

Knowing that pretty much all open source software is going to produce a report that looks like this (or worse), the technology part of automating our scans is the easy part. The hard part is going to be defining, and supporting with enough resources, an ongoing process to review, certify, and remediate (which may require forking) pretty much everything we depend on, based on the results from whatever tool(s) we use to scan our codebase.

@cnelson cnelson self-assigned this Sep 8, 2016

cnelson commented Sep 8, 2016

@afeld, what do we need to do to get access to HP Fortify? It's probably at least worth evaluating against the open-source tools.

I was hoping we might have access to Veracode, which is usually the gold standard for this type of thing, but I'll work with what we've got :)


jacobian commented Sep 8, 2016

FWIW I have some experience with Fortify and I've had good results, albeit over there in Java-land. I think it's worth a shot, especially if we get it for "free" (as part of the GSA license).


mogul commented Sep 11, 2016

I'm pretty sure @dlapiduz was chasing down Fortify with @nvembar.

We can always start with Fortify and add other scanners later if there's one that gives better results. The priority right now is to make sure there's a place in our pipeline where these sorts of scans run at all, in order to address the immediate compliance blocker. Evaluating and responding to the results, reducing noise/false positives, getting deeper coverage, etc. is all stuff that can be improved over time at a less urgent pace.


cnelson commented Sep 13, 2016

We spoke with them last Friday, and while they were eager to help, it sounded like a multi-week process to get up and running, which doesn't work with our deadline. We also have the issue that a large portion of the OSS code we rely on is Go, and neither tool supports that language.

Their ask to move forward was a list of the repos and languages we want to scan. I generated a first draft of that list yesterday based on what we are currently deploying via Concourse; it's surprisingly large, but I'm hoping it will let us get started with them, even if only on a subset of our dependencies.

I'm looking at what little exists for Go scanners this afternoon, so we can say we are running something, but right now they all appear to be research/toy projects :/


mogul commented Sep 14, 2016

I've reached out to the FedRAMP PMO for information about getting access to Fortify on Demand.


mogul commented Sep 15, 2016

Still some uncertainty due to lack of access to scanning tools. Actual technical integration doesn't seem difficult. Once we have access to tools it's about 3-4 days of remaining work. @cnelson is concentrating on docs now, but thinks anyone on the team could tackle this; @jmcarp looks interested.


mogul commented Sep 20, 2016

Change of plans: after a discussion with @NoahKunin, we will be following the Before You Ship guide for 18F-originated repositories, in keeping with the practices observed in our leveraged dependency ATOs. I've edited the story definition up top accordingly with @cnelson.


cnelson commented Sep 20, 2016

Just to sanity-check before I enable these tools for all repos in scope: this is what we're after, correct?

Same report after introducing a "defect": https://codeclimate.com/github/18F/cg-uaa-invite/pull/11
PR shows Code Climate failed because it identified new issues: cloud-gov/uaa-extras#11

@cnelson cnelson changed the title Upstream code is scanned for vulnerabilities before being allowed into production 18F-originated code is scanned for vulnerabilities before being allowed into production Sep 20, 2016

afeld commented Sep 20, 2016

/cc 18F/before-you-ship#213

NoahKunin commented:

@cnelson TLDR: yes!

Full answer: actually, hard-incorporating it as a CI block is even better than a simple alert, of course. That's not (yet) a formal requirement of Before You Ship.

Right now the threshold would be to set up Code Climate (or whatever) for continuous monitoring, ensure at minimum that there's a badge code snippet in the README (usually one is available from the provider; see example), and ensure the system sends some alerts on some cadence to the team responsible for fixes (emails, Slack notifications, whatever; I don't micromanage team notification processes).


mogul commented Sep 20, 2016

and the system sends some alerts on some cadence to the team responsible for fixes (emails, Slack notifications, whatever, I don't micromanage team notification processes)

We're going to avoid going down the broadcast-notification rabbit-hole for now... The process as it stands will make sure that checks run when new code is proposed, and our policy of not accepting our own pull-requests means someone will have to deal with issues noted via PRs as they happen.


cnelson commented Sep 20, 2016

Code Climate can send weekly summary emails about the current state of your code. There's your MVP for regularly scheduled reminders, on top of the scanning during each PR.


afeld commented Sep 21, 2016

Just sent them an email:

Hello!
I work at 18F, and we are big users of Code Climate. We have a large number of repositories that we have Code Climate hooked up to, particularly for security stuff.

https://pages.18f.gov/before-you-ship/security/static-analysis/

Anyway, I know the main use case for Code Climate is doing analysis on new commits/pull requests, but wanted to ask about repositories that aren't actively maintained. Systems like Gemnasium (which we also use) will do periodic scans of project dependencies, which is useful for projects that might still be running/used but aren't active. Just wondering if Code Climate offers an equivalent, or if there's a recommended way to do it.

Thanks!


afeld commented Sep 21, 2016

...because I'm not sure that the summary emails would actually trigger new scans of inactive repositories.

dlapiduz commented:

Do they have an API where you can trigger a scan?


cnelson commented Sep 21, 2016

@dlapiduz, yep: https://codeclimate.com/docs/api

POST /api/repos/:repo_id/branches/:branch_name/refresh

Triggers an immediate scan of your repository. (Note that in most cases, especially if you are Github-linked, this is unnecessary)
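
Here's a minimal sketch of driving that endpoint from a script, assuming Python with requests. The api_token parameter name and repo-ID scheme are assumptions drawn from Code Climate's classic API docs of this era, so treat the auth details as unverified:

```python
import requests

CODECLIMATE_TOKEN = "..."  # from your Code Climate account settings
REPO_ID = "..."            # Code Climate's internal ID for the repo, not the GitHub slug

# Force a rescan of a branch; cron this per repo to cover projects
# that rarely see new commits.
resp = requests.post(
    "https://codeclimate.com/api/repos/%s/branches/master/refresh" % REPO_ID,
    params={"api_token": CODECLIMATE_TOKEN},
)
resp.raise_for_status()  # a non-2xx response means the refresh wasn't queued
```

Cron something like this per repo and you'd also cover @afeld's question above about rescanning repositories that aren't actively maintained.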

Also, you can run these tools locally during development (requires Docker):
https://github.com/codeclimate/codeclimate


suprenant commented Sep 21, 2016

Is there work here intended for Liberator?

We are wondering what, if anything, we are supposed to do with the results. We've gone through them; they indicate that we have some vulnerabilities, but we are not sure whether those are applicable to the front-end application.

@msecret and I just briefly spoke, and we are wondering what the process is to either address these or set them up to be ignored. Marco could speak more to this, but I believe the current changes are causing our tests to fail.


mogul commented Sep 21, 2016

@suprenant you can go into Code Climate and mark them as false positives if they don't indicate anything demonstrably wrong with the code.

NoahKunin commented:

First, come to a final answer on how you see the results. If you concur that they should be fixed, and you can do so immediately, just do it.

If you can't fix something immediately, or think it's not applicable, or just don't know, it should be sent to the ISSO (Randy B) and the DAO (Diego) for adjudication. If you don't think it is applicable, include a short explanation and justification. Randy will ensure it ends up on a POAM.

If consensus still can't be reached, forward the whole issue to me for final and complete adjudication as the AO.

NoahKunin commented:

The above doesn't contradict what @mogul said, but regardless of why we are ignoring something (false positive or not applicable), we need a short written explanation somewhere.

NoahKunin commented:

Looks done. Will ensure all controls are updated to reflect reality.
