include opinionated but configurable pipeline logic #3008

Open
cgwalters opened this issue Jul 26, 2022 · 4 comments

@cgwalters
Member

cgwalters commented Jul 26, 2022

We have an effort to deduplicate the rhcos/fcos pipeline logic. But even if that happens, there's still a duplicate "pipeline" today in e.g. the Prow logic. I think we're not going to get away from that.

One aspect of gangplank was trying to include some notion of "dependencies" between tasks that could be chained.

I propose that we include something Makefile-like in coreos-assembler itself that expresses an opinionated default pipeline that can be executed reliably both locally/manually and automatically in a pipeline.

I also strongly think that we should move architecture conditionals into this file, and not have them in the pipeline.

Something like this as a starting point, where each set of commands within a target is executed in parallel:

$ cat /usr/lib/coreos-assembler/pipeline-base.yaml
build:
  - cosa build metal metal4k
tier0: build
  - cosa kola run --basic-qemu-scenarios
  - kola testiso -S --scenarios pxe-offline-install,iso-offline-install
tier1: tier0
  - cfgarch(["x86_64", "aarch64"]): kola testiso -S --qemu-firmware uefi --scenarios iso-live-login,iso-as-disk
  - cosa kola run

Notice the strawman for cfgarch.
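
As a rough sketch of how that spec might be modeled internally (the names and fields here are hypothetical, not an existing coreos-assembler interface), each target would reduce to a stage with an optional dependency, a list of commands, and an optional architecture filter for the cfgarch case:

package pipeline

// Hypothetical data model for the strawman above; names are illustrative,
// not part of coreos-assembler today.

// Command is a single shell invocation, optionally restricted to a set of
// architectures (the cfgarch strawman).
type Command struct {
	Arches []string `json:"arches,omitempty"` // empty means "all architectures"
	Run    string   `json:"run"`
}

// Stage is one named target; its commands run in parallel once the stage
// it requires has completed.
type Stage struct {
	Name     string    `json:"name"`
	Requires string    `json:"requires,omitempty"` // e.g. tier1 requires tier0
	Commands []Command `json:"commands"`
}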

Now, just starting from a plain cosa shell (in a manually scheduled pod, or run via podman), without any use of the pipeline code, I could type:

$ cosa init https://github.com/coreos/fedora-coreos-config/
$ cosa pipeline-run tier1

And do everything on that list, automatically parallelizing.
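
To make the "automatically parallelizing" part concrete, here is a minimal sketch of what a runner could look like, building on the hypothetical Stage type above (again, none of this is existing cosa code): resolve the dependency chain for the requested target, then run each stage's commands concurrently and wait before moving on to the next stage:

package pipeline

import (
	"context"
	"fmt"
	"os"
	"os/exec"

	"golang.org/x/sync/errgroup"
)

// RunTarget walks the dependency chain for target (e.g. tier1 -> tier0 ->
// build; no cycle detection in this sketch) and executes each stage's
// commands in parallel, failing fast if any command fails.
func RunTarget(ctx context.Context, stages map[string]Stage, target string) error {
	var chain []Stage
	for name := target; name != ""; {
		st, ok := stages[name]
		if !ok {
			return fmt.Errorf("unknown stage %q", name)
		}
		chain = append([]Stage{st}, chain...)
		name = st.Requires
	}
	for _, st := range chain {
		g, gctx := errgroup.WithContext(ctx)
		for _, cmd := range st.Commands {
			cmd := cmd
			g.Go(func() error {
				c := exec.CommandContext(gctx, "/bin/sh", "-c", cmd.Run)
				c.Stdout, c.Stderr = os.Stdout, os.Stderr
				return c.Run()
			})
		}
		if err := g.Wait(); err != nil {
			return fmt.Errorf("stage %s: %w", st.Name, err)
		}
	}
	return nil
}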

But you might ask: if we just change the Jenkins pipeline to do this, won't we lose the nice visualization of stages? Yes. But I propose we also have:

$ cosa pipeline-render --json tier1 > pipeline.json

This would output a JSON form of the pipeline (after processing the architecture conditionals) that could be read dynamically by the pipeline Groovy code and turned into dynamic invocations of stage().
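
For illustration only, the rendering side could be little more than filtering the commands by architecture and marshaling the resolved chain of stages; the JSON shape in the comment below is a guess at what pipeline-render might emit, not a defined format:

package pipeline

import "encoding/json"

// RenderJSON takes the resolved chain of stages (root first, target last),
// drops commands that don't apply to arch, and emits JSON such as:
//   [{"name":"build","commands":[{"run":"cosa build metal metal4k"}]},
//    {"name":"tier0","requires":"build","commands":[...]}, ...]
// The Jenkins Groovy side would read this and map each entry to a stage().
func RenderJSON(chain []Stage, arch string) ([]byte, error) {
	out := make([]Stage, 0, len(chain))
	for _, st := range chain {
		var cmds []Command
		for _, c := range st.Commands {
			if len(c.Arches) == 0 || containsArch(c.Arches, arch) {
				cmds = append(cmds, c)
			}
		}
		st.Commands = cmds
		out = append(out, st)
	}
	return json.MarshalIndent(out, "", "  ")
}

func containsArch(arches []string, arch string) bool {
	for _, a := range arches {
		if a == arch {
			return true
		}
	}
	return false
}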

Today, Prow unfortunately does not support that level of dynamism. One approach we could take is some custom glue that periodically updates the CI flow in openshift/release. Alternatively, and simpler to start with, we could just manually keep the "targets" in sync, assuming we don't change them often.

@jlebon
Member

jlebon commented Jul 28, 2022

I think this sounds good in principle, but the devil is in the details. :) There's a lot of business logic in the FCOS pipeline beyond "build artifacts", and it's going to be tricky to make it all declarative without ending up with a spec that has extensive knobs (e.g. I have reservations about the current RHCOS jobspec). A simple example is that we have to build the ostree separately first for signing.

I do see the appeal, though, of extracting the logic out of Jenkins and into something that Jenkins/Prow/devs can just run. I guess I'm saying it might be better if it doesn't try to be too declarative. It also doesn't have to live in cosa.

@dustymabe
Member

> This would output a JSON form of the pipeline (after processing the architecture conditionals) that could be read dynamically by the pipeline Groovy code and turned into dynamic invocations of stage().

This is an important aspect to keep IMO and needs to be done well (high fidelity with what we have now if possible).

@cgwalters
Member Author

> A simple example is that we have to build the ostree separately first for signing.

This relates to #2685, which aims to cleanly split the ostree-container build from disk images too.

@cgwalters
Member Author

One potential option is to use Starlark as the language; specifically, we'd probably use Starlark Go.

A key advantage Starlark has over Groovy is that it is not Turing complete and cannot make arbitrary API calls etc.

(For example, "let's send a message to Slack" is not something you could do as part of the "pipeline" logic.)
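
As a minimal sketch of that sandboxing property (the stage() builtin and the pipeline.star layout are hypothetical, not a proposed interface), a Starlark Go embedding only exposes whatever builtins the host program predeclares, so the pipeline file simply has no way to reach Slack, the network, or anything else:

package main

import (
	"fmt"
	"log"

	"go.starlark.net/starlark"
)

func main() {
	// The script can only call what we hand it here: a single stage() builtin.
	stage := func(thread *starlark.Thread, b *starlark.Builtin,
		args starlark.Tuple, kwargs []starlark.Tuple) (starlark.Value, error) {
		var name, requires string
		var commands *starlark.List
		if err := starlark.UnpackArgs(b.Name(), args, kwargs,
			"name", &name, "requires?", &requires, "commands", &commands); err != nil {
			return nil, err
		}
		fmt.Printf("stage %s (requires %q): %v\n", name, requires, commands)
		return starlark.None, nil
	}

	predeclared := starlark.StringDict{
		"stage": starlark.NewBuiltin("stage", stage),
	}
	thread := &starlark.Thread{Name: "cosa-pipeline"}
	// pipeline.star might contain, e.g.:
	//   stage(name="build", commands=["cosa build metal metal4k"])
	//   stage(name="tier0", requires="build", commands=["cosa kola run --basic-qemu-scenarios"])
	if _, err := starlark.ExecFile(thread, "pipeline.star", nil, predeclared); err != nil {
		log.Fatal(err)
	}
}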
