Hardened Tekton! #87
Comments
I would certainly be interested in looking at the TPM part (I just need to get my head around the mapping from an abstracted container layer to the host's hardware, and how we retain the hardware root of trust all the way up). Tagging @lkatalin
This would have to go through a TEP, but one other idea to attest build node identity (through, say, SPIRE) would be for the Tekton pipelines controller to inject some kind of challenge token as an env variable into each TaskRun. The entrypoint could generate a keypair and get an x509 SVID from SPIRE. The key would sign the challenge token, generating a response. The response and SVID bundle could be output as a result of the run and verified by the controller. The controller challenge could be a random nonce kept in memory, or (for HA) a fixed secret HMACed against the TaskRun selfLink/UID. Chains could then verify all of this - the orchestrator workload identity AND the build workload identity - and include it in the final signed provenance.
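To make the HA variant of that idea a bit more concrete, here is a minimal sketch (not an existing Tekton or Chains API; the secret and UID values are placeholders) of how a controller could derive a deterministic challenge from a shared secret and the TaskRun UID, so any replica can recompute and verify it without per-run state:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// challengeFor derives a per-TaskRun challenge token by HMACing the TaskRun's
// UID (or selfLink) with a secret shared by all controller replicas. Because
// the derivation is deterministic, any replica can recompute it at verification
// time instead of keeping a nonce in memory.
func challengeFor(controllerSecret []byte, taskRunUID string) string {
	mac := hmac.New(sha256.New, controllerSecret)
	mac.Write([]byte(taskRunUID))
	return hex.EncodeToString(mac.Sum(nil))
}

// verifyChallenge recomputes the expected token and compares it in constant time.
func verifyChallenge(controllerSecret []byte, taskRunUID, presented string) bool {
	expected := challengeFor(controllerSecret, taskRunUID)
	return hmac.Equal([]byte(expected), []byte(presented))
}

func main() {
	secret := []byte("controller-shared-secret") // illustrative only; would come from a Secret
	uid := "example-taskrun-uid"

	token := challengeFor(secret, uid) // injected into the TaskRun as an env var
	fmt.Println("challenge:", token)
	fmt.Println("verified:", verifyChallenge(secret, uid, token))
}
```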
My very loose understanding here is that most k8s workloads go through the SPIRE API for things like this. The SPIRE API can be configured to do TPM roots of trust and issue SVIDs to actual running workloads in the cluster. Here's one example: https://github.com/bloomberg/spire-tpm-plugin
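A rough sketch of the build-side half, assuming a SPIRE agent Workload API socket is mounted into the TaskRun pod (the socket path and the CHALLENGE_TOKEN env var name below are placeholders, not anything Tekton defines today): the entrypoint fetches an X.509 SVID via go-spiffe and signs the injected challenge with the SVID's private key.

```go
package main

import (
	"context"
	"crypto"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Fetch an X.509 SVID from the SPIRE agent's Workload API.
	// The socket path is an assumption about how the agent is deployed.
	svid, err := workloadapi.FetchX509SVID(ctx,
		workloadapi.WithAddr("unix:///run/spire/sockets/agent.sock"))
	if err != nil {
		log.Fatalf("fetching SVID: %v", err)
	}

	// CHALLENGE_TOKEN is a placeholder for whatever env var the controller injects.
	challenge := os.Getenv("CHALLENGE_TOKEN")
	digest := sha256.Sum256([]byte(challenge))

	// The SVID private key satisfies crypto.Signer, so it can sign the challenge digest.
	sig, err := svid.PrivateKey.Sign(rand.Reader, digest[:], crypto.SHA256)
	if err != nil {
		log.Fatalf("signing challenge: %v", err)
	}

	// The signature plus the SVID (identity and cert chain) would then be emitted
	// as TaskRun results for the controller / Chains to verify.
	fmt.Println("spiffe id:", svid.ID)
	fmt.Println("response:", base64.StdEncoding.EncodeToString(sig))
}
```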
@dlorenc @lukehinds https://github.com/boxboat/spire-tpm-plugin is our fork of that plugin -- we do plan to upstream it.
Nice!!!
One other possibility:
Projected service account tokens (audience-bound tokens mounted through a projected volume) can help here: https://cloud.google.com/community/tutorials/gke-workload-id-clientserver
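If that route were taken, the verifying side could check such a bound token with a TokenReview. A rough sketch, assuming the token is projected into the build pod at /var/run/secrets/tokens/build-token with a dedicated audience (both are placeholders, not an existing convention); in practice the build workload would present the token and a separate verifier with TokenReview permission would validate it, combined here only for brevity:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	authv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Read the projected, audience-bound service account token.
	token, err := os.ReadFile("/var/run/secrets/tokens/build-token")
	if err != nil {
		log.Fatalf("reading projected token: %v", err)
	}

	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("cluster config: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("client: %v", err)
	}

	// Ask the API server to validate the token and its audience binding.
	review, err := clientset.AuthenticationV1().TokenReviews().Create(context.Background(),
		&authv1.TokenReview{
			Spec: authv1.TokenReviewSpec{
				Token:     string(token),
				Audiences: []string{"tekton-build-verifier"}, // placeholder audience
			},
		}, metav1.CreateOptions{})
	if err != nil {
		log.Fatalf("token review: %v", err)
	}

	fmt.Println("authenticated:", review.Status.Authenticated)
	fmt.Println("identity:", review.Status.User.Username)
}
```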
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@priyawadhwa: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'm imagining a Tekton Cat but with the chains necklace on, hanging out with the Falco logo. The cat will be holding some kind of shield with the SPIFFE logo on it, and we can throw in some sigstore/cosign logos too when we get them.
Ref cncf/tag-security#625 (comment)
But anyway - the real goal is to try to put together an end-to-end hardened build system: use Falco policies to detect runtime issues, something like SPIRE to tie builds to specific machines, and some kind of TPM attestation for the integrity of the build system itself and the individual build instances.
In-toto, sigstore, and hermekton can provide secure end-to-end provenance for build artifacts, which could include these build-system and build-node attestations all the way back to a hardware root of trust.
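Purely as an illustration of where those pieces could land in one signed statement (the struct and field names below are made up for this sketch, not the in-toto or SLSA schema), the combined provenance might carry something like:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// hardenedProvenance is an illustrative payload shape showing where the extra
// attestations discussed above could sit alongside the usual subject/builder
// information before the whole statement is signed by Chains.
type hardenedProvenance struct {
	SubjectDigest  string `json:"subjectDigest"`  // the built artifact
	BuilderID      string `json:"builderId"`      // e.g. the pipeline controller's SPIFFE ID
	BuildNodeID    string `json:"buildNodeId"`    // identity of the pod/node that ran the TaskRun
	ChallengeProof string `json:"challengeProof"` // signed controller challenge from the run
	TPMQuote       string `json:"tpmQuote"`       // hardware root-of-trust evidence
}

func main() {
	p := hardenedProvenance{
		SubjectDigest:  "sha256:<digest>",
		BuilderID:      "spiffe://example.org/tekton/controller",
		BuildNodeID:    "spiffe://example.org/cluster/taskrun-pod",
		ChallengeProof: "<base64 signature>",
		TPMQuote:       "<base64 quote>",
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}
```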