
support cosa init --ostree docker://quay.io/coreos-assembler/fcos:testing-devel #2685

Open
cgwalters opened this issue Feb 3, 2022 · 24 comments · Fixed by openshift/os#657
Labels
jira for syncing to jira

Comments

@cgwalters
Member

cgwalters commented Feb 3, 2022

Conceptually this is part of coreos/fedora-coreos-tracker#828, which in retrospect was framed too broadly. Focus shifted to CoreOS layering, but that's really the "user experience" half; re-engineering how we build and ship FCOS (the first issue) still applies.

In particular, I think we should support a `cosa init --ostree` mode that takes a container image as input and outputs just a container. We may not even generate a `builds/` directory, and no `meta.json` stuff should be created for this.

```mermaid
flowchart TB
    quayprevious["previous ostree container"] --> ostreebuild
    subgraph ostreebuild [cosa ostree build]
    configgit["config git"]-->container
    rpms --> container["ostree container"]
    container --> quay["quay.io"]
    end
    subgraph imagebuild [cosa image build]
    quay --> qemu["qemu image"]
    quay --> metal
    metal --> iso
    qemu --> ami
    qemu --> vsphere
    qemu --> gcp
    end

    imagebuild --> S3
```

Note how the input to "cosa image build" is just the ostree container, not config git (or rpms).

Further, I want to emphasize that the "build ostree container" and "build disk images" steps can (and normally would) be separate processes. (How testing is integrated isn't depicted here; basically we'd still probably generate a qemu image to sanity-test our container builds, but it would be discarded, and regenerated by the image build process only once that image had passed other approvals.)
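To sketch roughly what that split could look like on the command line (everything here is illustrative: `--ostree` is the flag proposed in this issue and `cosa build ostree` exists today, but the push step, image names, and registry-only input to the image build are assumptions, not current cosa behavior):

```
# Half 1: "cosa ostree build" — config git + rpms in, ostree container out.
cosa init --ostree docker://quay.io/coreos-assembler/fcos:testing-devel
cosa build ostree                  # would emit only a container, no builds/ dir or meta.json
skopeo copy oci-archive:fcos.ociarchive \
  docker://quay.io/coreos-assembler/fcos:testing-devel    # hypothetical push step

# Half 2: "cosa image build" — runs separately, consuming only the pushed container.
cosa buildextend-qemu              # would pull the container from quay.io, not rebuild from config git
```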

@cgwalters
Member Author

A specific thing this would really help unblock is reworking our build/CI flow to be more like:

  • check for changes in input
  • build new container image
  • Do sanity checks on that container image as a container (perhaps systemd-in-container even)
  • Push that container image to registry

The remainder of stuff here could be parallelized/configurable:

  • Kick off upgrade tests from the previous stable release
  • Generate a fresh qemu image and run qemu basic tests
  • Do ISO/metal tests
  • Do cloud tests

And we could now much more naturally represent stages of CI with container image tags. For example we might push fcos:testing-devel-candidate or so. And then only tag to fcos:testing-devel once some of those tests have passed.
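Concretely, the promotion step could just be a registry-side copy once the gating tests pass (the repository path here is only an example):

```
# CI pushes a candidate tag first...
skopeo copy oci-archive:fcos.ociarchive \
  docker://quay.io/coreos-assembler/fcos:testing-devel-candidate

# ...and only once the tests above pass do we retag it as testing-devel.
skopeo copy docker://quay.io/coreos-assembler/fcos:testing-devel-candidate \
  docker://quay.io/coreos-assembler/fcos:testing-devel
```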

@cheesesashimi
Contributor

For example we might push fcos:testing-devel-candidate or so. And then only tag to fcos:testing-devel once some of those tests have passed.

This is my favorite part. It would enable consumers of these images to get feedback about the overall image state.

Just to clarify, when you say OCI standard keys, are you referring to https://github.com/opencontainers/image-spec/blob/main/annotations.md?

@cgwalters
Member Author

Yep! Specifically org.opencontainers.image.source and org.opencontainers.image.revision.
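These land as labels on the published image, so consumers can read them straight from the registry; e.g. with skopeo (the image reference below is just an example):

```
skopeo inspect docker://quay.io/fedora/fedora-coreos:stable \
  | jq '.Labels["org.opencontainers.image.source"], .Labels["org.opencontainers.image.revision"]'
```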

cgwalters added a commit to cgwalters/ostree-rs-ext that referenced this issue Feb 3, 2022
For coreos/coreos-assembler#2685
we want to copy e.g. `rpmostree.input-hash` into the container image.
Extend the `ExportOpts` struct to support this, and also expose
it via the CLI, e.g.
`ostree container encapsulate --copymeta=rpmostree.input-hash ...`.

And while I was thinking about this...we should by default copy
some core ostree keys, such as `ostree.bootable` and `ostree.linux`
since they are key pieces of metadata.
@cgwalters
Member Author

ostreedev/ostree-rs-ext#234 will help this

@dustymabe
Member

I'm not sure I understand the value here. Maybe we can talk about it at the next video community meeting to make it more clear for people.

@cgwalters
Member Author

ostreedev/ostree-rs#47

cgwalters added a commit to cgwalters/ostree-rs-ext that referenced this issue Feb 4, 2022
@dustymabe
Member

ostreedev/ostree-rs#47

Was that a response to me? If so I still don't understand how that answers the question.

@cgwalters
Member Author

Was that a response to me?

Nope, just keeping track of related PRs.

I'm not sure I understand the value here.

I tried to elaborate on all this in coreos/fedora-coreos-tracker#828

The simplest way to say it is that our center of gravity shifts much closer to container image builds, and away from a custom JSON schema stored in a blob store.

Right now the container image is exported from the blob store; this would flip things around: the source of truth becomes the container image, and disk image builds are secondary derivatives of that.

cgwalters added a commit to cgwalters/rpm-ostree that referenced this issue Feb 4, 2022
Builds on ostreedev/ostree-rs-ext#235

Part of coreos/coreos-assembler#2685

Note making use of this will require bumping ostree-ext here.
@cgwalters
Member Author

coreos/rpm-ostree#3402

cgwalters added a commit to cgwalters/fedora-coreos-config that referenced this issue Feb 4, 2022
Builds on coreos/rpm-ostree#3402

Relates to coreos/coreos-assembler#2685

Basically, the source of truth for the CMD moves from being hardcoded
(hackily) in cosa to being part of the ostree commit, which means
it survives a full round trip of
ostree → container → ostree → container
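A quick way to check that the CMD really survives the round trip is to read it back out of the published image config (image reference illustrative):

```
# Cmd should now come from the ostree commit, not from anything cosa hardcodes at export time.
skopeo inspect --config docker://quay.io/coreos-assembler/fcos:testing-devel | jq '.config.Cmd'
```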
@jlebon
Member

jlebon commented Feb 4, 2022

Hmm, also unsure about this. At the end of the day, we'll probably still always want public images sitting in object stores so it's convenient for users/higher-level tools to download and run without involving a container stack. Which means we'd still have something like the builds dir in S3. So there's a lot of force pulling us towards keeping it as canonical too.

@cgwalters
Member Author

we'll probably still always want public images sitting in object stores so it's convenient for users/higher-level tools to download and run without involving a container stack.

In our world, "images" is an ambiguous term. You're thinking disk/boot images, right? Yes, I agree. Wrapping those in a container is currently a bit of a weird thing to do.

Which means we'd still have something like the builds dir in S3. So there's a lot of force pulling us towards keeping it as canonical too.

I think the more interesting angle here is having disk images come after (follow, derive from) the container builds. But yes, when we go to generate a cosa build, we convert the container back into an ociarchive and store it in S3 as we do currently.
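That conversion is essentially just a transport copy, something along these lines (image name and output path are illustrative), with the resulting archive uploaded alongside the rest of the build in S3:

```
skopeo copy docker://quay.io/coreos-assembler/fcos:testing-devel \
  oci-archive:fedora-coreos-ostree.x86_64.ociarchive
```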

@jmarrero added the jira label on Mar 1, 2022
@dustymabe
Member

I feel like if we're pushing in this direction we should probably have a larger discussion about it. Would you like to bring it up at this week's meeting?

cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Jun 7, 2022
For now, we need to support having the new-format oscontainer in
`meta.json`.

Part of coreos#2685
And see coreos#2685 (comment)
in particular.
cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Jul 21, 2022
Quite a while ago we added this special case code to kola
which learned how to do in-place updates to the weird bespoke
"ostree repo in container" OCP/RHCOS-specific container image.

A huge benefit of the change to use ostree-native containers
is that this approach can now be shared across FCOS/RHCOS.

(Also, rpm-ostree natively understands this, so it's much much
 more efficient and less awkward than the wrappers we had in `pivot` around
 rpm-ostree)

But the benefits get even larger: this issue (coreos#2685)
proposes rethinking our pipeline to more cleanly split up
"build OS update container" from "build disk images".

With this, it becomes extra convenient to do a flow of:

- build OS update container, push to registry
- `kola run -p stable --oscontainer quay.io/fcos-devel/testos@sha256...`

IOW we're not generating a disk image to test the OS update - we're
using the *stable* disk image and doing an in-place update before
we run tests.

Now...as of right now nothing in the pipeline passes this flag,
so the code won't be used (except for manual testing).

Suddenly with this, the number of tests we can run roughly *doubles*.
For example, we can run e.g.
`kola run rpm-ostree` both with and without `--oscontainer`.

In most cases, things should be the same. But I think it will
be interesting to try to explicitly use this for at least some tests;
it's almost a full generalization of the `kola run-upgrades` bits.
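Concretely, the "doubling" above is just the same test selection run twice, once against the shipped disk image and once after the in-place update (image reference and elided digest as in the commit message):

```
# Baseline: the stable disk image as shipped.
kola run rpm-ostree

# Same tests, but rebase in place to the candidate OS container first.
kola run --oscontainer quay.io/fcos-devel/testos@sha256... rpm-ostree
```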
cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Sep 12, 2022
Part of coreos#2685

I'm looking at replacing the guts of `cosa build ostree` with the
new container-native `rpm-ostree compose image`.  In order
for that to work, we need two things:

- The committed overlays from `overlays/` - xref coreos/rpm-ostree#4005
- The rendered `image.json` which is also an overlay now

Basically in combination with the above PR, this works now when
invoked manually:

```
$ cosa build --prepare-only
$ sudo rpm-ostree compose image --cachedir=cache/buildimage  --layer-repo tmp/repo src/config/manifest.yaml oci:tmp/fcos.oci
```
cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Oct 20, 2022
This is a big step towards coreos#2685

I know there's a lot going on with the pipeline, and I don't
want to conflict with all that work - but at the same time,
in my opinion we are just too dependent on complex Jenkins flows
and our bespoke "meta.json in S3".

The core of CoreOS *is a container image* now.  This new command
adds an opinionated flow where one can do:

```
$ cosa init
$ cosa build-cimage quay.io/cgwalters/ostest
```

And *that's it* - we do proper change detection, reading and
writing from the remote container image.  We don't do silly things
like storing an `.ociarchive` in S3 when we have native registries
available.

Later, we can build on this and rework our disk images to
derive from that container image, as coreos#2685 calls for.

Also in the near term future, I think we can rework `cmd-build`
such that it reuses this flow, but outputs to an `.ociarchive` instead.
However, this code is going to need a bit more work to run in
supermin.
@cgwalters
Member Author

PR in #3128 which starts the ball rolling here

cgwalters added a commit to cgwalters/os that referenced this issue Jul 27, 2023
This will cause us to run through the ostree-native container
stack when generating the disk images.

Today for RHCOS we're using the "custom origin" stuff which
lets us inject metadata about the built source, but rpm-ostree
doesn't understand it.

With this in the future (particularly after coreos/coreos-assembler#2685)
`rpm-ostree status` will show the booted container and *understand it*.

We'll have the digest of the OCI archive at least...though
that may get changed if it gets converted to docker v2s2 when pushing
to a registry...

Now, in the future, what we want is to entirely rework our build
pipeline like this: coreos/coreos-assembler#2685