Add DAG to docs πŸ“ŽπŸ‘£
Update the documentation such that Tasks no longer execute in the order
they are declared in the Pipeline, order is now controlled by `from` AND
`runAfter`.

We need to add `runAfter` as part of #168 because without it, all
ordering must be expressed with `from`. That would make our examples
really gross in the interim (not to mention a lot of work and slower
test execution), only for it to be removed again as soon as we add
`runAfter`, so we might as well do it all at once.
bobcatfish committed Feb 27, 2019
1 parent d76325c commit e5f125f
Showing 4 changed files with 167 additions and 23 deletions.
15 changes: 7 additions & 8 deletions docs/pipelineruns.md
@@ -3,9 +3,8 @@
This document defines `PipelineRuns` and their capabilities.

On its own, a [`Pipeline`](pipelines.md) declares what [`Tasks`](tasks.md) to
run, and dependencies between [`Task`](tasks.md) inputs and outputs via
[`from`](pipelines.md#from). To execute the `Tasks` in the `Pipeline`, you must
create a `PipelineRun`.
run, and [the order they run in](pipelines.md#ordering). To execute the `Tasks`
in the `Pipeline`, you must create a `PipelineRun`.

Creation of a `PipelineRun` will trigger the creation of
[`TaskRuns`](taskruns.md) for each `Task` in your pipeline.
@@ -43,15 +42,15 @@ following fields:
object that enables your build to run with the defined authentication
information.
- `timeout` - Specifies the timeout after which the `PipelineRun` will fail.
- [`nodeSelector`] - a selector which must be true for the pod to fit on a
- [`nodeSelector`] - A selector which must be true for the pod to fit on a
node; it must match a node's labels for the pod to be scheduled on that
node (see the sketch below). More info:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
- [`affinity`] - the pod's scheduling constraints. More info:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/>
- [`affinity`] - The pod's scheduling constraints. More info:
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature>
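
As a quick illustration, the optional scheduling and execution fields above might
be combined on a `PipelineRun` like the sketch below. The API version, `pipelineRef`
field, and label keys/values are assumptions for illustration, and any required
fields listed in the collapsed part of this section are omitted:

```yaml
apiVersion: tekton.dev/v1alpha1        # assumed API group/version
kind: PipelineRun
metadata:
  name: demo-pipeline-run
spec:
  pipelineRef:
    name: demo-pipeline                # the Pipeline to run (assumed field/name)
  serviceAccount: demo-builder         # run with this ServiceAccount's credentials
  timeout: 1h0m0s                      # fail the PipelineRun after this duration
  nodeSelector:
    disktype: ssd                      # pods only fit nodes carrying this label (assumed label)
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: beta.kubernetes.io/os   # assumed label key for this era of Kubernetes
                operator: In
                values:
                  - linux
```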

[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
<https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields>

### Resources

150 changes: 142 additions & 8 deletions docs/pipelines.md
@@ -9,6 +9,8 @@ This document defines `Pipelines` and their capabilities.
- [Parameters](#parameters)
- [Pipeline Tasks](#pipeline-tasks)
- [From](#from)
- [RunAfter](#runafter)
- [Ordering](#ordering)
- [Examples](#examples)

## Syntax
@@ -31,9 +33,16 @@ following fields:
- [`resources`](#declared-resources) - Specifies which
[`PipelineResources`](resources.md) of which types the `Pipeline` will be
using in its [Tasks](#pipeline-tasks)
- `tasks`
- `resources`
- `inputs/outputs`
- [`from`](#from) - Used when the content of the [`PipelineResource`](resources.md)
should come from the [output](tasks.md#output) of a previous [Pipeline Task](#pipeline-tasks)
- [`runAfter`](#runafter) - Used when the [Pipeline Task](#pipeline-tasks) should be executed
after another Pipeline Task, but there is no [output linking](#from) required

[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
<https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields>

### Declared resources

@@ -119,9 +128,9 @@ spec:

### Pipeline Tasks

A `Pipeline` will execute a sequence of [`Tasks`](tasks.md) in the order they
are declared in. At a minimum, this declaration must include a reference to the
`Task`:
A `Pipeline` will execute a graph of [`Tasks`](tasks.md) (see [ordering](#ordering)
for how to express this graph). At a minimum, this declaration must include a
reference to the [`Task`](tasks.md):

```yaml
tasks:
@@ -165,16 +174,21 @@ spec:

#### from

Sometimes you will have `Tasks` that need to take as input the output of a
previous `Task`, for example, an image built by a previous `Task`.
Sometimes you will have [Pipeline Tasks](#pipeline-tasks) that need to take as
input the output of a previous `Task`, for example, an image built by a previous `Task`.

Express this dependency by adding `from` on `Resources` that your `Tasks` need.
Express this dependency by adding `from` on [`PipelineResources`](resources.md)
that your `Tasks` need.

- The (optional) `from` key on an `input source` defines a set of previous
`PipelineTasks` (i.e. the named instance of a `Task`) in the `Pipeline`
- When the `from` key is specified on an input source, the version of the
resource produced by the listed tasks is used
- The name of the `PipelineResource` must correspond to a `PipelineResource`
- `from` can support fan in and fan out
- The `from` clause [expresses ordering](#ordering), i.e. the
[Pipeline Task](#pipeline-tasks) which provides the `PipelineResource` must run
_before_ the Pipeline Task which needs that `PipelineResource` as an input
- The name of the `PipelineResource` must correspond to a `PipelineResource`
from the `Task` that the referenced `PipelineTask` gives as an output

For example, see this `Pipeline` spec:
@@ -201,6 +215,126 @@ The resource `my-image` is expected to be given to the `deploy-app` `Task` from
the `build-app` `Task`. This means that the `PipelineResource` `my-image` must
also be declared as an output of `build-app`.

This also means that the `build-app` Pipeline Task will run before `deploy-app`,
regardless of the order they appear in the spec.
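
The `Pipeline` spec that paragraph refers to is collapsed out of this diff view.
A minimal sketch of the shape it describes — the `taskRef` names are assumptions
for illustration; the Pipeline Task names and `my-image` come from the text above:

```yaml
- name: build-app
  taskRef:
    name: build-push            # assumed Task name
  resources:
    outputs:
      - name: image
        resource: my-image      # my-image is declared as an output of build-app
- name: deploy-app
  taskRef:
    name: deploy-kubectl        # assumed Task name
  resources:
    inputs:
      - name: my-image
        from:
          - build-app           # my-image must come from build-app's output
```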

#### runAfter

Sometimes you will have [Pipeline Tasks](#pipeline-tasks) that need to run in
a certain order, but they do not have an explicit [output](tasks.md#outputs) to
[input](tasks.md#inputs) dependency (which is expressed via [`from`](#from)). In this case
you can use `runAfter` to indicate that a Pipeline Task should be run after one or more
previous Pipeline Tasks.

For example, see this `Pipeline` spec:

```yaml
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: my-repo
- name: build-app
  taskRef:
    name: kaniko-build
  runAfter:
    - test-app
  resources:
    inputs:
      - name: my-repo
```

In this `Pipeline`, we want to test the code before we build from it, but there is no output
from `test-app`, so `build-app` uses `runAfter` to indicate that `test-app` should run before
it, regardless of the order they appear in the spec.

## Ordering

The [Pipeline Tasks](#pipeline-tasks) in a `Pipeline` can be connected and run in a graph,
specifically a *Directed Acyclic Graph* or DAG. Each Pipeline Task is a node; nodes can be
connected to one another (i.e. a *Graph*) such that one will run before another (i.e. *Directed*),
and the execution will eventually complete (i.e. *Acyclic*, it will not get caught in infinite
loops).

This is done using:

- [`from`](#from) clauses on the [`PipelineResources`](#resources) needed by a `Task`
- [`runAfter`](#runafter) clauses on the [Pipeline Tasks](#pipeline-tasks)

For example, see this `Pipeline` spec:

```yaml
- name: lint-repo
  taskRef:
    name: pylint
  resources:
    inputs:
      - name: my-repo
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: my-repo
- name: build-app
  taskRef:
    name: kaniko-build-app
  runAfter:
    - test-app
  resources:
    inputs:
      - name: my-repo
    outputs:
      - name: image
        resource: my-app-image
- name: build-frontend
  taskRef:
    name: kaniko-build-frontend
  runAfter:
    - test-app
  resources:
    inputs:
      - name: my-repo
    outputs:
      - name: image
        resource: my-frontend-image
- name: deploy-all
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: my-app-image
        from:
          - build-app
      - name: my-frontend-image
        from:
          - build-frontend
```

This will result in the following execution graph:

```none
        |            |
        v            v
    test-app     lint-repo
    /       \
   v         v
build-app   build-frontend
    \        /
     v      v
    deploy-all
```

1. The `lint-repo` and `test-app` Pipeline Tasks will begin executing simultaneously.
(They have no `from` or `runAfter` clauses.)
1. Once `test-app` completes, both `build-app` and `build-frontend` will begin
executing simultaneously (both `runAfter` `test-app`).
1. When both `build-app` and `build-frontend` have completed, `deploy-all` will
execute (it requires `PipelineResources` from both Pipeline Tasks).
1. The entire `Pipeline` will be finished executing after `lint-repo` and `deploy-all`
have completed.
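
For completeness, the `tasks` list in the example above would sit under the `spec` of a
full `Pipeline` object. A sketch of that enclosing object follows — the API version is
an assumption, and the `PipelineResource` declarations are inferred from the resource
names used above:

```yaml
apiVersion: tekton.dev/v1alpha1   # assumed API group/version
kind: Pipeline
metadata:
  name: dag-pipeline
spec:
  resources:
    - name: my-repo
      type: git
    - name: my-app-image
      type: image
    - name: my-frontend-image
      type: image
  tasks:
    # ... the five Pipeline Tasks shown above ...
```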

## Examples

For complete examples, see
24 changes: 17 additions & 7 deletions docs/tutorial.md
@@ -11,7 +11,7 @@ This tutorial will walk you through creating and running some simple

For more details on using `Pipelines`, see [our usage docs](README.md).

**[This tutorial can be run on a local workstation](#local-development)**<br>
**[This tutorial can be run on a local workstation](#local-development)**<br/>

## Task

@@ -326,12 +326,14 @@ The status of type `Succeeded = True` shows the Task ran successfully and you
can also validate the Docker image is created in the location specified in the
resource definition.

# Pipeline
## Pipeline

A [`Pipeline`](pipelines.md) defines a list of tasks to execute, while also
indicating if any outputs should be used as inputs of a following task by using
[the `from` field](pipelines.md#from). The same templating you used in tasks is
also available in pipeline.
A [`Pipeline`](pipelines.md) defines a list of tasks to execute,
indicating when outputs should be used as inputs of a following task
by using [the `from` field](pipelines.md#from), and expressing [the
order of execution (using the `runAfter` and `from`
fields)](pipelines.md#ordering). The same templating you used in
tasks is also available in pipelines.

For example:

@@ -603,7 +605,7 @@ Tekton Pipelines is known to work with:

Elasticsearch, Beats and Kibana can be deployed locally as a means to view logs:
an example is provided at
https://github.com/mgreau/tekton-pipelines-elastic-tutorials.
<https://github.com/mgreau/tekton-pipelines-elastic-tutorials>.

## Experimentation

@@ -612,6 +614,14 @@ annotation applies to subjects such as Docker registries, log output locations
and other nuances that may be specific to particular cloud providers or
services.

The `TaskRuns` have been created in the following [order](pipelines.md#ordering):

1. `tutorial-pipeline-run-1-build-skaffold-web` - This runs the [Pipeline Task](pipelines.md#pipeline-tasks)
`build-skaffold-web` first, because it has no [`from` or `runAfter` clauses](pipelines.md#ordering)
1. `tutorial-pipeline-run-1-deploy-web` - This runs `deploy-web` second, because its [input](tasks.md#inputs)
`web-image` comes [`from`](pipelines.md#from) `build-skaffold-web` (therefore `build-skaffold-web`
must run before `deploy-web`).
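
A sketch of how the two Pipeline Tasks behind those `TaskRuns` might be wired
together in the tutorial `Pipeline` follows. Only `build-skaffold-web`,
`deploy-web`, and the `web-image` input are taken from the text above; the API
version, `taskRef` names, and the `source-repo` resource are assumptions for
illustration:

```yaml
apiVersion: tekton.dev/v1alpha1                    # assumed API group/version
kind: Pipeline
metadata:
  name: tutorial-pipeline
spec:
  resources:
    - name: source-repo                            # assumed git resource name
      type: git
    - name: web-image
      type: image
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-docker-image-from-git-source   # assumed Task name
      resources:
        inputs:
          - name: source-repo
        outputs:
          - name: image
            resource: web-image
    - name: deploy-web
      taskRef:
        name: deploy-using-kubectl                 # assumed Task name
      resources:
        inputs:
          - name: source-repo
          - name: web-image
            from:
              - build-skaffold-web                 # forces build-skaffold-web to run first
```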

---

Except as otherwise noted, the content of this page is licensed under the
1 change: 1 addition & 0 deletions test/dag_test.go
@@ -113,6 +113,7 @@ func TestDAGPipelineRun(t *testing.T) {
}
// FIXME(vdemeester) do the rest :)
/*
// TODO(christiewilson) can't actually get the logs reliably at this point, maybe write to a volume instead?
logger.Infof("Getting logs from results validation task")
// The volume created with the results will have the same name as the TaskRun
validationTaskRunName := "dag-pipeline-run-pipeline-task-4-validate-results"
