
Add "volume" PipelineResource 🔊 #1417

Status: Closed (wants to merge 1 commit)
10 changes: 5 additions & 5 deletions docs/install.md
@@ -124,16 +124,16 @@ or a [GCS storage bucket](https://cloud.google.com/storage/)
The PVC option can be configured using a ConfigMap with the name
`config-artifact-pvc` and the following attributes:

-- size: the size of the volume (5Gi by default)
+- `size`: the size of the volume (5Gi by default)
-- storageClassName: the [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) of the volume (default storage class by default). The possible values depend on the cluster configuration and the underlying infrastructure provider.
+- `storageClassName`: the [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) of the volume (default storage class by default). The possible values depend on the cluster configuration and the underlying infrastructure provider.
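For illustration, a `config-artifact-pvc` ConfigMap using these attributes might look like the following sketch. The `tekton-pipelines` namespace and the specific values are assumptions, not part of this change:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-pvc
  namespace: tekton-pipelines   # assumed install namespace
data:
  size: 10Gi                    # overrides the 5Gi default
  storageClassName: standard    # assumed; any class available in the cluster works
```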

The GCS storage bucket can be configured using a ConfigMap with the name
`config-artifact-bucket` and the following attributes:

-- location: the address of the bucket (for example gs://mybucket)
+- `location`: the address of the bucket (for example gs://mybucket)
-- bucket.service.account.secret.name: the name of the secret that will contain
+- `bucket.service.account.secret.name`: the name of the secret that will contain
   the credentials for the service account with access to the bucket
-- bucket.service.account.secret.key: the key in the secret with the required
+- `bucket.service.account.secret.key`: the key in the secret with the required
   service account json.
- It is recommended to configure the bucket with a retention policy, after which
  files will be deleted.
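A `config-artifact-bucket` ConfigMap following these attributes might be sketched as below. The namespace, secret name, and key shown are assumptions for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
  namespace: tekton-pipelines   # assumed install namespace
data:
  location: gs://mybucket
  bucket.service.account.secret.name: gcs-creds              # assumed secret name
  bucket.service.account.secret.key: service_account.json    # assumed key in the secret
```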
139 changes: 53 additions & 86 deletions docs/resources.md
@@ -17,6 +17,15 @@ For example:

- [Syntax](#syntax)
- [Resource types](#resource-types)
- [Git Resource](#git-resource)
- [Pull Request Resource](#pull-request-resource)
- [Image Resource](#image-resource)
- [Cluster Resource](#cluster-resource)
- [Storage Resource](#storage-resource)
- [GCS Storage Resource](#gcs-storage-resource)
- [BuildGCS Storage Resource](#buildgcs-storage-resource)
- [Volume Resource](#volume-resource)
- [Cloud Event Resource](#cloud-event-resource)
- [Using Resources](#using-resources)

## Syntax
@@ -119,94 +128,8 @@ spec:
value: /workspace/go
```

### Overriding where resources are copied from

When specifying input and output `PipelineResources`, you can optionally specify
`paths` for each resource. For an input resource, `TaskRun` uses `paths` as the
resource's new source paths, i.e., it copies the resource from the specified list
of paths. `TaskRun` expects the folder and its contents to already be present at
the specified paths. The `paths` feature can be used to provide extra files or an
altered version of an existing resource before the steps execute.

An output resource includes a name, a reference to a pipeline resource, and
optionally `paths`. For an output resource, `TaskRun` uses `paths` as the
resource's new destination paths, i.e., it copies the resource entirely to the
specified paths. `TaskRun` is responsible for creating the required directories
and copying the contents over. The `paths` feature can be used to inspect the
results of a `TaskRun` after its steps have executed.

The `paths` feature for input and output resources is heavily used to pass the
same version of a resource across tasks in the context of a `PipelineRun`.

In the following example, a Task and a TaskRun are defined with an input
resource, an output resource, and a step that builds a war artifact. After the
TaskRun (`volume-taskrun`) executes, the `custom` volume will contain the entire
`java-git-resource` resource (including the war artifact) copied to the
destination path `/custom/workspace/`.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
name: volume-task
namespace: default
spec:
inputs:
resources:
- name: workspace
type: git
outputs:
resources:
- name: workspace
steps:
- name: build-war
image: objectuser/run-java-jar #https://hub.docker.com/r/objectuser/run-java-jar/
command: jar
args: ["-cvf", "projectname.war", "*"]
volumeMounts:
- name: custom-volume
mountPath: /custom
```

```yaml
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
name: volume-taskrun
namespace: default
spec:
taskRef:
name: volume-task
inputs:
resources:
- name: workspace
resourceRef:
name: java-git-resource
outputs:
resources:
- name: workspace
paths:
- /custom/workspace/
resourceRef:
name: java-git-resource
volumes:
- name: custom-volume
emptyDir: {}
```

## Resource Types

The following `PipelineResources` are currently supported:

- [Git Resource](#git-resource)
- [Pull Request Resource](#pull-request-resource)
- [Image Resource](#image-resource)
- [Cluster Resource](#cluster-resource)
- [Storage Resource](#storage-resource)
- [GCS Storage Resource](#gcs-storage-resource)
- [BuildGCS Storage Resource](#buildgcs-storage-resource)
- [Cloud Event Resource](#cloud-event-resource)

### Git Resource

The Git resource represents a [git](https://git-scm.com/) repository that contains
@@ -770,6 +693,50 @@ the container image
[gcr.io/cloud-builders//gcs-fetcher](https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/gcs-fetcher)
does not support configuring secrets.

#### Volume Resource

The Volume `PipelineResource` creates and manages an underlying
[PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) (PVC).

To create a Volume resource:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: volume-resource-1
spec:
type: storage
params:
- name: type
value: volume
- name: size
value: 5Gi
- name: subPath
value: some/path/on/the/pvc
- name: storageClassName
value: regional-disk
```

Supported `params` are:

* `size` - **Required.** The size of the underlying PVC, expressed as a
  [Quantity](https://godoc.org/k8s.io/apimachinery/pkg/api/resource#Quantity).
* `subPath` - By default, data is placed at the root of the PVC. This parameter
  allows data to be placed in a subfolder of the PVC instead.
* `storageClassName` - The [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/)
  that the PVC should use. For example, this is how you can use multiple Volume PipelineResources
  [with GKE regional clusters](#using-with-gke-regional-clusters).

##### Using with GKE Regional Clusters

When using GKE regional clusters, newly created PVCs are assigned to zones in
round-robin fashion. This means that if one Task uses two Volume PipelineResources,
you must specify a
[`regional-pd`](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd) storage class; otherwise the PVCs could be created in different zones,
making it impossible to schedule a pod for the Task that can use both.
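Concretely, the `regional-disk` storage class referenced by the Volume resource above can be defined as follows (this mirrors the StorageClass used in this PR's example):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-disk
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd   # provisions regional persistent disks
```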

[See the volume PipelineResource example.](../examples/pipelineruns/volume-output-pipelinerun.yaml)

### Cloud Event Resource

The Cloud Event Resource represents a [cloud event](https://github.com/cloudevents/spec)
Expand Down
183 changes: 183 additions & 0 deletions examples/pipelineruns/volume-output-pipelinerun.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,183 @@
# This example uses multiple PVCs and is run against a regional GKE cluster.
# This means we have to make sure that the PVCs aren't created in different zones,
# and the only way to do this is to create regional PVCs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: regional-disk
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
replication-type: regional-pd
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: volume-resource-1
spec:
type: storage
params:
- name: type
value: volume
- name: storageClassName
value: regional-disk
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: volume-resource-2
spec:
type: storage
params:
> **Review comment:** I'm getting an error running this from the webhook:
>
>     Error from server (InternalError): error when creating "./examples/pipelineruns/volume-output-pipelinerun.yaml": Internal error occurred: admission webhook "webhook.tekton.dev" denied the request: mutation failed: missing field(s): spec.params.size
>     Error from server (InternalError): error when creating "./examples/pipelineruns/volume-output-pipelinerun.yaml": Internal error occurred: admission webhook "webhook.tekton.dev" denied the request: mutation failed: missing field(s): spec.params.size
>
> **Collaborator (author):** waaaat
>
> **Collaborator (author):** Shoot, makes one wonder how they ever got this to run XD XD XD
>
> I'm guessing I added that validation after running it 🤦‍♀ oh well XD

- name: type
value: volume
- name: path
value: special-folder
- name: storageClassName
value: regional-disk
---
# Task writes data to a predefined path
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
name: create-files
spec:
outputs:
# This Task uses two volume outputs to ensure that multiple volume
# outputs can be used
resources:
- name: volume1
type: storage
- name: volume2
type: storage
steps:
- name: write-new-stuff-1
image: ubuntu
command: ['bash']
args: ['-c', 'echo stuff1 > $(outputs.resources.volume1.path)/stuff1']
- name: write-new-stuff-2
image: ubuntu
command: ['bash']
args: ['-c', 'echo stuff2 > $(outputs.resources.volume2.path)/stuff2']
---
# Reads a file from a predefined path and writes as well
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
name: files-exist-and-add-new
spec:
inputs:
resources:
- name: volume1
type: storage
targetPath: newpath
- name: volume2
type: storage
outputs:
resources:
- name: volume1
type: storage
steps:
- name: read1
image: ubuntu
command: ["/bin/bash"]
args:
- '-c'
- '[[ stuff1 == $(cat $(inputs.resources.volume1.path)/stuff1) ]]'
- name: read2
image: ubuntu
command: ["/bin/bash"]
args:
- '-c'
- '[[ stuff2 == $(cat $(inputs.resources.volume2.path)/stuff2) ]]'
- name: write-new-stuff-3
image: ubuntu
command: ['bash']
args: ['-c', 'echo stuff3 > $(outputs.resources.volume1.path)/stuff3']
---
# Reads a file from a predefined path and writes as well
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
name: files-exist
spec:
inputs:
resources:
- name: volume1
type: storage
steps:
- name: read1
image: ubuntu
command: ["/bin/bash"]
args:
- '-c'
- '[[ stuff1 == $(cat $(inputs.resources.volume1.path)/stuff1) ]]'
- name: read3
image: ubuntu
command: ["/bin/bash"]
args:
- '-c'
- '[[ stuff3 == $(cat $(inputs.resources.volume1.path)/stuff3) ]]'
---
# The first Task writes files to two volumes. The next Task ensures these files exist,
# then writes a third file to the first volume. The last Task ensures both expected
# files exist on this volume.
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
name: volume-output-pipeline
spec:
resources:
- name: volume1
type: storage
- name: volume2
type: storage
tasks:
- name: first-create-files
taskRef:
name: create-files
resources:
outputs:
- name: volume1
resource: volume1
- name: volume2
resource: volume2
- name: then-check-and-write
taskRef:
name: files-exist-and-add-new
resources:
inputs:
- name: volume1
resource: volume1
from: [first-create-files]
- name: volume2
resource: volume2
from: [first-create-files]
outputs:
- name: volume1
# This Task uses the same volume as an input and an output to ensure this works
resource: volume1
- name: then-check
taskRef:
name: files-exist
resources:
inputs:
- name: volume1
resource: volume1
from: [then-check-and-write]
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
name: volume-output-pipeline-run
spec:
pipelineRef:
name: volume-output-pipeline
serviceAccount: 'default'
resources:
- name: volume1
resourceRef:
name: volume-resource-1
- name: volume2
resourceRef:
name: volume-resource-2
17 changes: 10 additions & 7 deletions pkg/apis/pipeline/v1alpha1/artifact_pvc.go
@@ -57,7 +57,7 @@ func (p *ArtifactPVC) GetCopyFromStorageToSteps(name, sourcePath, destinationPat
}}}
}

// GetCopyToStorageFromSteps returns a container used to upload artifacts for temporary storage
func (p *ArtifactPVC) GetCopyToStorageFromSteps(name, sourcePath, destinationPath string) []Step {
return []Step{{Container: corev1.Container{
Name: names.SimpleNameGenerator.RestrictLengthWithRandomSuffix(fmt.Sprintf("source-mkdir-%s", name)),
@@ -86,13 +86,16 @@ func GetPvcMount(name string) corev1.VolumeMount {
}
}

// CreateDirStep returns a container step to create a dir
func CreateDirStep(bashNoopImage string, name, destinationPath string) Step {
// CreateDirStep returns a container step that creates a dir at destinationPath. The name
// of the step will include name. It will optionally mount the provided volumeMounts if
// the dir is to be created on a volume.
func CreateDirStep(bashNoopImage string, name, destinationPath string, volumeMounts []corev1.VolumeMount) Step {
return Step{Container: corev1.Container{
Name: names.SimpleNameGenerator.RestrictLengthWithRandomSuffix(fmt.Sprintf("create-dir-%s", strings.ToLower(name))),
Image: bashNoopImage,
Command: []string{"/ko-app/bash"},
Args: []string{"-args", strings.Join([]string{"mkdir", "-p", destinationPath}, " ")},
Name: names.SimpleNameGenerator.RestrictLengthWithRandomSuffix(fmt.Sprintf("create-dir-%s", strings.ToLower(name))),
Image: bashNoopImage,
Command: []string{"/ko-app/bash"},
Args: []string{"-args", strings.Join([]string{"mkdir", "-p", destinationPath}, " ")},
VolumeMounts: volumeMounts,
}}
}
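The updated `CreateDirStep` threads volume mounts through to the generated step. Below is a minimal, self-contained Go sketch of the same pattern; the `VolumeMount`, `Container`, and `Step` structs are simplified stand-ins for the corev1/Tekton types, and the real implementation also appends a random suffix to the step name via `names.SimpleNameGenerator`:

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified stand-ins for the corev1/Tekton types used by the real CreateDirStep.
type VolumeMount struct {
	Name      string
	MountPath string
}

type Container struct {
	Name         string
	Image        string
	Command      []string
	Args         []string
	VolumeMounts []VolumeMount
}

type Step struct{ Container }

// CreateDirStep mirrors the new signature: it now accepts volumeMounts so the
// directory can be created directly on a mounted volume. (The real code also
// adds a random suffix to the step name to keep it unique.)
func CreateDirStep(bashNoopImage, name, destinationPath string, volumeMounts []VolumeMount) Step {
	return Step{Container{
		Name:         fmt.Sprintf("create-dir-%s", strings.ToLower(name)),
		Image:        bashNoopImage,
		Command:      []string{"/ko-app/bash"},
		Args:         []string{"-args", strings.Join([]string{"mkdir", "-p", destinationPath}, " ")},
		VolumeMounts: volumeMounts,
	}}
}

func main() {
	step := CreateDirStep("bash-noop:latest", "Volume1", "/pvc/output",
		[]VolumeMount{{Name: "custom-volume", MountPath: "/pvc"}})
	fmt.Println(step.Name)
	fmt.Println(step.Args[1])
}
```

Passing `nil` for `volumeMounts` preserves the old behavior, which is why existing callers that create directories on the PVC workspace keep working unchanged.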
