Stream and release metadata #98

Open
bgilbert opened this issue Dec 14, 2018 · 40 comments

@bgilbert
Contributor

bgilbert commented Dec 14, 2018

There are two types of metadata relevant to FCOS releases. We should publish each type in a unified machine-readable format.

Proposal

Release metadata

Release metadata is associated with a particular OS version. It should include:

  • The stream name, version number, and release notes
  • The Pungi compose ID from which packages were sourced
  • Image identifiers for each tuple of (platform, CPU architecture, region). For example, on AWS there is an AMI ID per region. On GCP there is one image name across the entire platform. On all platforms there is an associated image artifact; we could also include its content hash.

Release metadata is populated incrementally, but each field should be immutable once added. For example, we might need to publish an urgent security release while an AWS region is down. In this case, we'd provide the available AMI IDs, and backfill the metadata once we got an AMI published into the lagging region. Platforms with slow publishing processes (such as AWS Marketplace) might always need their image IDs backfilled after the release.

There's been some discussion of storing release metadata as ostree detached metadata (coreos/coreos-assembler#189 (comment)) or in a Git repo (coreos/coreos-assembler#203). Perhaps release metadata should be signed.
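
As a rough sketch only (hypothetical field names and placeholder values, not a proposed schema), a release metadata document covering the bullets above might look like:

stream: stable
release: 30.1.2.3
release-notes: https://example.com/release-notes/30.1.2.3
compose-id: Fedora-30-20181214.0
images:
  x86_64:
    aws:
      us-east-1: ami-0123456789abcdef
      us-east-2: ami-0123456789abcdef
    gcp: fedora-coreos-30-1-2-3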

Stream metadata

Stream metadata is associated with a stream. It should give the currently-recommended OS version, image ID, and artifact URL for the stream, separately for each platform (and region, where appropriate). Most of the time, stream metadata will just be a refactored version of the release metadata for the current release on the stream. However, operational issues may require divergence from the current release, such as:

  • Holding certain platforms or regions at a previous release
  • Reverting certain platforms or regions to a previous release after updating them to the current release

Stream metadata might also include the target OS version for updates, to be consumed by the Cincinnati graph builder (#83). Operational issues may sometimes require this to be different from the target version for new installs.

Tooling should perform on-demand sync of stream metadata from current release metadata, subject to overrides specified via explicit configuration. Those overrides should allow replacing certain stream metadata elements with corresponding elements from a specified older release, or from manually specified metadata.

Stream metadata for each production stream should be available at a well-known HTTP endpoint, perhaps in the form of a static JSON document generated by a site builder. Perhaps it should be signed. Metadata history should be recorded, perhaps in a Git repo.

cc @cgwalters

@bgilbert
Contributor Author

If we want to ship ostree static deltas, release metadata should also record the expected predecessor(s) of a release. This will usually be the previous release on the stream, not the previous release on the snapshot branch. (For example, the predecessor of 30.5.2 might be 30.4.3.) Note that "release" really means release; abandoned unreleased versions (which can occur in CL; not sure if they will with FCOS) shouldn't count as predecessors.

In some cases there will be multiple predecessors, notably when rollout of the previous release was halted due to a regression.
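
A hypothetical sketch of how this could be recorded in release metadata (the field name is invented for illustration):

release: 30.5.2
stream: stable
predecessors:
  - 30.4.3   # previous release on the stream
  - 30.4.2   # would also appear if rollout of 30.4.3 had been halted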

cc @sinnykumari

bgilbert added the jira for syncing to jira label Jan 11, 2019
@cgwalters
Member

coreos-assembler's builds.json + meta.json is one model. It's OK. The git model could also be nice although it requires more work for endpoints (it's not just a REST API to consume), though obviously one could write one on top. Modeling the releases in ostree itself has some nice properties although it's also not REST.

I think coreos-assembler should probably define a schema for "streams" that point to builds.json; obviously one can model it as just separate dirs but having it be discoverable (like ostree summary files list multiple refs) would be nice.
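
As a purely hypothetical sketch (names and URLs invented for illustration), such a discoverable streams document could be as small as:

streams:
  stable:
    builds-json: https://builds.example.com/streams/stable/builds/builds.json
  testing:
    builds-json: https://builds.example.com/streams/testing/builds/builds.json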

I guess all of this gets to Dusty's push to model as much data as possible in ostree. That's certainly possible - but it would mean that e.g. ostree commit metadata would contain AMI IDs. Which...is probably fine; in the end the data isn't going to be more than 10k I'd guess, which is way, way smaller than the statically linked binaries in the OS.

bgilbert added this to Proposed in Fedora CoreOS preview via automation Jan 22, 2019
bgilbert moved this from Proposed to Selected in Fedora CoreOS preview Jan 22, 2019
@ashcrow
Member

ashcrow commented Feb 19, 2019

/cc @dustymabe @sinnykumari

@brianredbeard
Member

Let me just add two concrete use cases we had with CoreOS/Container Linux (CL) which may be useful as a litmus test:

With CL it was challenging to discover all of the files related to an OEM, and thus to know which files one would need to sync. Looking back at the distribution repos, there were times when file name changes were necessary or when new platforms had files added.

Using the OpenStack and PXE platforms as an example: in the case of OpenStack, all that is needed is to pull down the virtual machine image, while with PXE one has to pull down the kernel and the initramfs.

While one may pull down a specific version of an OpenStack image, it will quickly get out of date, meaning that immediately after install a host has to go down for an update, needlessly wasting time during provisioning.

Our solution was to provide two scripts: a generic one which checked the ETag on version.txt, so that it could easily be run via cron and only react when version.txt changed, and a second which triggered the execution of an OEM-specific script (glance_load.sh in this case) to perform the specific logic of loading a new version into Glance*.

In the case of PXE we then had two distinct paths of execution: Either using a PXE load to then bootstrap the host or using it as a live running system. In the latter case this allowed organizations to use Container Linux as a bootstrapping platform for their own clouds. Multiple public cloud providers used Container Linux in combination with a containerized agent to perform dynamic bootstrapping of hosts (at least one of them used this with OpenStack Ironic).

This presented an interesting problem, though: if a user was not persisting data to disk, then there is no mechanism for Omaha (or Cincinnati) based updates. This meant that the synchronization of OS components MUST occur at the infrastructure level, so users would be required to synchronize down the components of an OEM. Since (throughout the life of Container Linux) metadata was not generally presented on a per-OEM basis, users had to resort to string matching of content to identify the content they needed to mirror. One example is this script - coreos_mirror.py.

  • We had differing opinions about the "right" way to go about this, so I personally had a copy of my script with additional functionality around tagging of images so that one could boot the latest image in a "channel" using the os_family, os_distro, os_release, or os_version metadata.

@bgilbert
Contributor Author

bgilbert commented May 29, 2019

Here are some sample metadata blobs for discussion. I've written them in YAML for clarity, but the published documents would be JSON.

Stream metadata

# Include stream name so the document is self-contained
stream: stable
metadata:
  last-modified: "2019-06-04T16:18:34Z"
architectures:
  x86_64:
    artifacts:
      # Some of these will be useful for many users, such as qemu or
      # openstack. Some will likely only be useful for cloud operators,
      # such as digitalocean or packet.  Some, such as aws, are useful
      # for users in special situations.
      aws:
        release: 30.1.2.3
        formats:
          # Generally one format per platform, but allow for future expansion
          # without obscuring the platform ID (as on Container Linux)
          "vmdk.xz":
            # Generally only one artifact, but not always
            disk:
              location: https://artifacts.example.com/dsB2fnzP7KhqzQ5a.vmdk.xz
              signature: https://artifacts.example.com/dsB2fnzP7KhqzQ5a.vmdk.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      azure:
        release: 30.1.2.3
        formats:
          "vdi.xz":
            disk:
              location: https://artifacts.example.com/aeng0xah6vaaVosh.vdi.xz
              signature: https://artifacts.example.com/aeng0xah6vaaVosh.vdi.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      digitalocean:
        release: 30.1.2.3
        formats:
          "raw.xz":
            disk:
              location: https://artifacts.example.com/ichaloomuHax9ahR.raw.xz
              signature: https://artifacts.example.com/ichaloomuHax9ahR.raw.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      gcp:
        release: 30.1.2.3
        formats:
          "tar.gz":
            disk:
              location: https://artifacts.example.com/ais7tah1aa7Ahvei.tar.gz
              signature: https://artifacts.example.com/ais7tah1aa7Ahvei.tar.gz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      metal:
        release: 30.1.2.3
        formats:
          "raw.xz":
            disk:
              location: https://artifacts.example.com/xTqYJZKCPNvoNs6B.raw.xz
              signature: https://artifacts.example.com/xTqYJZKCPNvoNs6B.raw.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
          iso:
            disk:
              location: https://artifacts.example.com/ADE5GO3bjAXeDcLO.iso
              signature: https://artifacts.example.com/ADE5GO3bjAXeDcLO.iso.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
          pxe:
            kernel:
              location: https://artifacts.example.com/hkIj8FkCydT3lV9h
              signature: https://artifacts.example.com/hkIj8FkCydT3lV9h.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
            initramfs:
              location: https://artifacts.example.com/a9ytS8yB4cGZpca1.cpio.gz
              signature: https://artifacts.example.com/a9ytS8yB4cGZpca1.cpio.gz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
          "installer.iso":
            disk:
              location: https://artifacts.example.com/KwKye6YW4SIIPrhY.iso
              signature: https://artifacts.example.com/KwKye6YW4SIIPrhY.iso.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
          installer-pxe:
            kernel:
              location: https://artifacts.example.com/EtqI0KsLIwZOHlCx
              signature: https://artifacts.example.com/EtqI0KsLIwZOHlCx.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
            initramfs:
              location: https://artifacts.example.com/EhoS1x66RVA2k8y6.cpio.gz
              signature: https://artifacts.example.com/EhoS1x66RVA2k8y6.cpio.gz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      openstack:
        release: 30.1.2.3
        formats:
          "qcow.xz":
            disk:
              location: https://artifacts.example.com/oKooheogobofai8l.qcow.xz
              signature: https://artifacts.example.com/oKooheogobofai8l.qcow.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      packet:
        release: 30.1.2.3
        formats:
          "raw.xz":
            disk:
              location: https://artifacts.example.com/Oofohng0xo2phai5.raw.xz
              signature: https://artifacts.example.com/Oofohng0xo2phai5.raw.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      qemu:
        release: 30.1.2.3
        formats:
          "qcow.xz":
            disk:
              location: https://artifacts.example.com/Siejeeb6ohpu8Eel.qcow.xz
              signature: https://artifacts.example.com/Siejeeb6ohpu8Eel.qcow.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      virtualbox:
        release: 30.1.2.3
        formats:
          ova:
            disk:
              location: https://artifacts.example.com/yohsh2haiquaeYah.ova
              signature: https://artifacts.example.com/yohsh2haiquaeYah.ova.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      vmware:
        release: 30.1.2.3
        formats:
          ova:
            disk:
              location: https://artifacts.example.com/quohgh8ei0uzaD5a.ova
              signature: https://artifacts.example.com/quohgh8ei0uzaD5a.ova.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

    images:
      # Cloud images to be launched directly by users.  These are in a
      # separate section because they might not always be in sync with the
      # release artifacts above.
      aws:
        regions:
          us-east-1:
            # We know the release because we uploaded it, so might as well
            # list it.
            release: 30.1.2.3
            image: ami-0123456789abcdef
          us-east-2:
            release: 30.1.2.3
            image: ami-0123456789abcdef
      azure:
        # We could give a specific image URN here, but we probably want
        # users to always use a Marketplace URN.  So this is a static
        # string, and represents advice rather than a value we might
        # change.
        image: Fedora:CoreOS:stable:latest
      gcp:
        # We could give a specific image name here, but we probably want
        # users to always use an image family.  So this is a static string,
        # and represents advice rather than a value we might change.
        image: projects/fedora-cloud/global/images/family/fedora-coreos-stable
      digitalocean:
        # We don't control platform ingest, so an image slug is probably
        # the best we can do.
        image: fedora-coreos-stable
      packet:
        # Images don't have addressable versions, so an operating system
        # slug is the best we can do.
        image: fedora_coreos_stable

    updates:
      # Primarily meant as input to Cincinnati
      release: 30.1.2.3

We could also include artifact size/uncompressed-size/uncompressed-sha256 from meta.json, if desired.

Override file

The simplest possible implementation of the override file would just be partial stream metadata, overriding corresponding parts of the automatically generated one. A fancier one would allow references to older releases, but that functionality could be added later if needed.
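
For example (hypothetical values), an override document holding one AWS region at an earlier release might contain only the fragment being overridden, mirroring the structure of the stream metadata above:

architectures:
  x86_64:
    images:
      aws:
        regions:
          us-east-1:
            release: 30.1.2.2
            image: ami-00000000000000000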

Release metadata

This seems largely redundant with meta.json, so in principle we might skip it. But it could also be a good opportunity to clean up that format a bit.

release: 30.1.2.3
stream: stable
metadata:
  last-modified: "2019-06-04T16:18:34Z"
architectures:
  x86_64:
    commit: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    media:
      aws:
        artifacts:
          "vmdk.xz":
            disk:
              location: https://artifacts.example.com/dsB2fnzP7KhqzQ5a.vmdk.xz
              signature: https://artifacts.example.com/dsB2fnzP7KhqzQ5a.vmdk.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
        images:
          us-east-1:
            image: ami-0123456789abcdef
          us-east-2:
            image: ami-0123456789abcdef
      azure:
        artifacts:
          "vdi.xz":
            disk:
              location: https://artifacts.example.com/aeng0xah6vaaVosh.vdi.xz
              signature: https://artifacts.example.com/aeng0xah6vaaVosh.vdi.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
        images:
          global:
            image: Fedora:CoreOS:Stable:30.1.2.3
      digitalocean:
        artifacts:
          "raw.xz":
            disk:
              location: https://artifacts.example.com/ichaloomuHax9ahR.raw.xz
              signature: https://artifacts.example.com/ichaloomuHax9ahR.raw.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      gcp:
        artifacts:
          "tar.gz":
            disk:
              location: https://artifacts.example.com/ais7tah1aa7Ahvei.tar.gz
              signature: https://artifacts.example.com/ais7tah1aa7Ahvei.tar.gz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
        image: projects/fedora-cloud/global/images/fedora-coreos-stable-30-1-2-3
      metal:
        artifacts:
          "raw.xz":
            disk:
              location: https://artifacts.example.com/xTqYJZKCPNvoNs6B.raw.xz
              signature: https://artifacts.example.com/xTqYJZKCPNvoNs6B.raw.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
          iso:
            disk:
              location: https://artifacts.example.com/ADE5GO3bjAXeDcLO.iso
              signature: https://artifacts.example.com/ADE5GO3bjAXeDcLO.iso.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
          pxe:
            kernel:
              location: https://artifacts.example.com/hkIj8FkCydT3lV9h
              signature: https://artifacts.example.com/hkIj8FkCydT3lV9h.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
            initramfs:
              location: https://artifacts.example.com/a9ytS8yB4cGZpca1.cpio.gz
              signature: https://artifacts.example.com/a9ytS8yB4cGZpca1.cpio.gz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
          "installer.iso":
            disk:
              location: https://artifacts.example.com/KwKye6YW4SIIPrhY.iso
              signature: https://artifacts.example.com/KwKye6YW4SIIPrhY.iso.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
          installer-pxe:
            kernel:
              location: https://artifacts.example.com/EtqI0KsLIwZOHlCx
              signature: https://artifacts.example.com/EtqI0KsLIwZOHlCx.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
            initramfs:
              location: https://artifacts.example.com/EhoS1x66RVA2k8y6.cpio.gz
              signature: https://artifacts.example.com/EhoS1x66RVA2k8y6.cpio.gz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      openstack:
        artifacts:
          "qcow.xz":
            disk:
              location: https://artifacts.example.com/oKooheogobofai8l.qcow.xz
              signature: https://artifacts.example.com/oKooheogobofai8l.qcow.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      packet:
        artifacts:
          "raw.xz":
            disk:
              location: https://artifacts.example.com/Oofohng0xo2phai5.raw.xz
              signature: https://artifacts.example.com/Oofohng0xo2phai5.raw.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      qemu:
        artifacts:
          "qcow.xz":
            disk:
              location: https://artifacts.example.com/Siejeeb6ohpu8Eel.qcow.xz
              signature: https://artifacts.example.com/Siejeeb6ohpu8Eel.qcow.xz.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      virtualbox:
        artifacts:
          ova:
            disk:
              location: https://artifacts.example.com/yohsh2haiquaeYah.ova
              signature: https://artifacts.example.com/yohsh2haiquaeYah.ova.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
      vmware:
        artifacts:
          ova:
            disk:
              location: https://artifacts.example.com/quohgh8ei0uzaD5a.ova
              signature: https://artifacts.example.com/quohgh8ei0uzaD5a.ova.sig
              sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

bgilbert moved this from Selected to In Progress in Fedora CoreOS preview May 30, 2019
@bgilbert
Copy link
Contributor Author

Some discussion in #187.

@arithx
Contributor

arithx commented May 30, 2019

In an OOB discussion we talked about having an index for the release metadata that would be consumed by tooling to determine the release metadata endpoint for specific OSTree commits / versions. This index will be generated by plume as part of the release process, and I'm currently planning on using the following JSON layout:

{
   "releases": [
        {
            "commit": "<hash>",
            "version": "<version>",
            "metadata": "<url endpoint to build release metadata>"
        },
        ...
    ],
    "metadata": {
        "last-modified": "<timestamp>"
    }
}

@lucab
Contributor

lucab commented May 31, 2019

@arithx do we plan to use the "index by hash" capability of this for something? If not, an alternate approach is to have (hash,version,metadata) at the same level in a single object, and keep them in an array with bottom-append (this is what cincinnati does).

@arithx
Contributor

arithx commented May 31, 2019

@lucab my intention was to allow tooling (like cincinnati) that primarily operates on the hash to do a direct lookup, rather than having to search through the entire array for the entry containing the right hash.

@bgilbert
Contributor Author

bgilbert commented Jun 3, 2019

Both the stream and release metadata should have some indication of freshness. A last-modified timestamp is straightforward and seems like a good idea. I like the idea of also adding a serial number for easy reference, though that would make updates more complicated (since generating an updated artifact would require fetching the current one). Thoughts?
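
For illustration only, the shared metadata block might then look roughly like this (the serial field is a hypothetical monotonic counter, not part of the current samples):

metadata:
  last-modified: "2019-06-04T16:18:34Z"
  serial: 17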

@jlebon
Member

jlebon commented Jun 4, 2019

Could we start with just the timestamp and go from there?

@arithx
Contributor

arithx commented Jun 4, 2019

I've updated my comment on the release metadata index structure (the entries are now all located under a releases key) based on some OOB discussions w/ @bgilbert

cc @lucab

@arithx
Contributor

arithx commented Jun 5, 2019

Updated the release metadata index structure again in the comment.

@bgilbert
Contributor Author

bgilbert commented Jun 5, 2019

@jlebon Yes, let's do that.

I've added timestamps to both documents, under metadata.generated. Bikeshedding welcome. I wanted to distinguish the metadata timestamp from the timestamp of the stream (whatever that might mean), and this also leaves some structure for adding e.g. a serial number later. Thoughts? @arithx, should we use the same structure for the release index?

@arithx
Contributor

arithx commented Jun 5, 2019

@bgilbert: for the release index I'd be okay with using a similar structure but I'd change the name of the key from generated to something along the lines of last_modified or last_updated.

@bgilbert
Contributor Author

bgilbert commented Jun 5, 2019

I was trying to avoid the dashes/underscores/camel-case debate. 😁 But I agree that e.g. last-modified is clearer, and we should use the same key everywhere.

@bgilbert
Contributor Author

bgilbert commented Jun 6, 2019

Updated both documents to use last-modified.

arithx added a commit to arithx/mantle that referenced this issue Jun 25, 2019
Per discussions in the [fedora-coreos-tracker](coreos/fedora-coreos-tracker#98 (comment)), modify the build metadata structure to better support multi-arch.
arithx added a commit to arithx/mantle that referenced this issue Jun 25, 2019
Per discussions in the [fedora-coreos-tracker](coreos/fedora-coreos-tracker#98 (comment)), modify the build metadata structure to better support multi-arch.
arithx added a commit to arithx/mantle that referenced this issue Jun 26, 2019
Per discussions in the [fedora-coreos-tracker](coreos/fedora-coreos-tracker#98 (comment)), modify the build metadata structure to better support multi-arch.
@lucab
Contributor

lucab commented Jun 27, 2019

Followup from a private discussion: all the manifests above are meant for automatic/machine consumption, so there is the additional topic of signing them (and related key management).

For the moment, we are ensuring via TLS that those cannot be tampered with on the wire.
Integrity of downloadable blobs (ostree commits, image artifacts) is guaranteed by direct signatures on such objects.
I think the only remaining case to cover is an overall infrastructure hijack, where somebody is able to reroute or manipulate our bucket and inject forged manifests that way. That would still not be a problem for installed machines, but may prevent new installations and auto-upgrades.

/cc @dustymabe

@bgilbert
Contributor Author

bgilbert commented Jul 8, 2019

Created #213 for #98 (comment).

@cgwalters
Member

I only partially followed this discussion originally. Is it plume that is creating this metadata from the cosa builds?

@sinnykumari
Contributor

Plume generates release metadata and fedora-coreos-stream-generator generates stream metadata. Stream metadata also gets stored in the fedora-coreos-streams repo. Some discussion related to stream metadata is in issue #193.

@cgwalters
Member

OK, a lot of useful links there, thanks! I think my high level concern here is simple: What's shared between the FCOS and RHCOS teams today is mostly https://github.com/coreos/coreos-assembler - and of that only a subset (i.e. not plume).

Or to rephrase: There's a lot of stuff that got invented in this issue that hence isn't shared.

As was noted in this comment

Release metadata...This seems largely redundant with meta.json, so in principle we might skip it. But it could also be a good opportunity to clean up that format a bit.

Hmm...a lot of this seems to be trying to support having different platforms at different build IDs, but that seems like a very unusual case?

@cgwalters
Member

And to fully xref the reason I came here is thinking about coreos/coreos-assembler#719

@bgilbert
Contributor Author

Hmm...a lot of this seems to be trying to support having different platforms at different build IDs, but that seems like a very unusual case?

It's an unusual case but a critical one. For example, we can't release Container Linux if any AWS region is down.

@cgwalters
Member

cgwalters commented Aug 20, 2019

It's an unusual case but a critical one. For example, we can't release Container Linux if any AWS region is down.

I've thought about that problem a lot too for RHCOS. But the thing is we have a clear distinction between "bootimages" and updates. And there's no reason for FCOS not to have that as well, right?
If an AWS region is down, nothing stops us from shipping an ostree.

@bgilbert
Contributor Author

But the thing is we have a clear distinction between "bootimages" and updates. And there's no reason for FCOS not to have that as well, right?

FCOS releases install and upgrade images together, as CL does. This is partly to avoid confusion, and partly because there are workflows where users deploy new OS versions solely via image launches and not via upgrades.

@cgwalters
Member

So I guess the question is to what degree do we view "strong bootimage and update binding" as a property of "FCOS tooling" (down to the cosa level e.g.) or is it something that's more of a policy defined in the FCOS pipeline?

Actually this is a very interesting discussion because today for RHCOS we do usually want "strong bootimage binding"...I think. We haven't really discussed it but my instinct is that if we allowed the bootimage versions on different platforms to float and the update stream to float...we'd be confused fast.

On the other hand, personally I'm not opposed to the idea of potentially having "per platform bootimage versions". But it's also because today we don't need to release bootimages often that we don't need to worry about a particular platform's transient availability.

@bgilbert
Contributor Author

Ah, yeah. It's completely a policy question. We do want the ability to release install images and updates separately, it'll just be an uncommon case.

dustymabe removed the jira for syncing to jira label Sep 5, 2019
@cgwalters
Member

Clearly at this point the ship has sailed for FCOS and the user-visible output is release streams. OK.

Today for RHCOS the bootimages are hidden in the installer and also published in an ad-hoc fashion to https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/

I'm looking at fixing this as part of openshift/enhancements#201 and I'm debating whether and how we introduce release streams for RHCOS.

A proposal:

@bgilbert
Contributor Author

bgilbert commented Sep 2, 2020

SGTM!

@cgwalters
Member

cgwalters commented Dec 18, 2020

I'm looking at this again and just some notes to myself; there are really 3 levels of thing (for each stream) type:

I'm struggling to articulate the rationale for having both cosa builds.json/meta.json and releases.json. Is there any reason not to simplify this so that a stream can be updated directly from cosa builds.json?

@jlebon
Member

jlebon commented Dec 18, 2020

I'm struggling to articulate the rationale for having both cosa builds.json/meta.json and releases.json. Is there any reason not to simplify this so that a stream can be updated directly from cosa builds.json?

A major factor is that not all builds are released. It's not uncommon that we do e.g. two stable builds before a release. The release index (releases.json) tracks those we did release. This is used by Cincinnati for example to build the graph.

release.json abstracts cosa's meta.json, which has lots of details we don't really need in a public API, and it also regroups all the metadata from different arches into one file.

Stream metadata could in theory be directly updated from cosa's meta.json, though it's easier to do it from release.json, since the schemas are more similar.

@cgwalters
Member

cgwalters commented Dec 18, 2020

release.json abstract cosa's meta.json which has lots of details we don't really need in a public API,

release.json isn't really a public API though right? Nevermind I meant releases.json...confusing.

@lucab
Contributor

lucab commented Dec 18, 2020

For reference, https://github.com/coreos/fedora-coreos-tracker/tree/master/metadata is the canonical place where we describe FCOS metadata.

@cgwalters
Member

For reference, https://github.com/coreos/fedora-coreos-tracker/tree/master/metadata is the canonical place where we describe FCOS metadata.

Thanks!

I created openshift/os#477 based on this so far.
