Remove clusterID from installconfig and move it to cluster #1057
Conversation
None of the options, other than Image, that were in the Libvirt MachinePool were being used, so they have been removed. The Image has been pulled up to the Libvirt Platform, since there was no way to use a different image for different machine pools. For consistency with the AWS and OpenStack platforms, the Libvirt MachinePool has been retained even though it is empty, and the DefaultMachinePlatform has been retained in the Libvirt Platform as well. The code in the Master Machines and Worker Machines assets that determines the configuration to use for the machines has been adjusted for Libvirt to reconcile the machine-pool-specific configuration with the default machine-pool configuration. This is not strictly necessary since, again, the Libvirt configuration is empty, but it keeps the logic consistent with the other platforms. https://jira.coreos.com/browse/CORS-911
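A minimal sketch of the resulting Libvirt types, assuming the field names and JSON tags shown here; the authoritative definitions live under pkg/types/libvirt in the repository:

```go
package libvirt

// Platform holds the cluster-wide Libvirt settings. Image now lives here,
// since a per-pool image was never actually supported.
type Platform struct {
	// URI is the libvirt connection URI.
	URI string `json:"URI"`
	// Image is the OS image used for every machine in the cluster.
	Image string `json:"image"`
	// DefaultMachinePlatform applies to any machine pool that does not set
	// its own (currently empty) Libvirt-specific configuration.
	DefaultMachinePlatform *MachinePool `json:"defaultMachinePlatform,omitempty"`
}

// MachinePool is intentionally empty; it is retained so Libvirt mirrors the
// AWS and OpenStack platform shapes.
type MachinePool struct{}

// Set merges per-pool configuration over the defaults. With an empty struct
// this is a no-op, but it keeps the master/worker machine asset logic
// consistent across platforms.
func (p *MachinePool) Set(required *MachinePool) {
	if p == nil || required == nil {
		return
	}
}
```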
Added an OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE environment variable, with a warning, to override the image that will be used.
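A rough sketch of how such an override could be honored with a warning; the variable name matches this PR, but the surrounding package, function, and log message are illustrative assumptions:

```go
package rhcos

import (
	"os"

	"github.com/sirupsen/logrus"
)

// osImage returns the RHCOS image to use, preferring the override variable
// when it is set but warning loudly, since overriding the image breaks the
// runtime/kubelet version guarantees the installer depends on.
func osImage(defaultImage string) string {
	if override, ok := os.LookupEnv("OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE"); ok && override != "" {
		logrus.Warnf("Using OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE=%s; this is unsupported", override)
		return override
	}
	return defaultImage
}
```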
With the baseimage user query removed from the TUI, this function is no longer required. This also:
- drops the vendored files
- updates the mocks using `hack/go-genmock.sh`
The RHCOS image used for installation must be sourced from the RHCOS build pipeline. Keeping the image-related fields in install-config would let users change these values as part of a valid configuration, but we do not want users to configure this option, because the RHCOS image controls the runtime and kubelet versions we depend on.
ClusterID is now removed from installconfig because it should not be possible for a user to override this value. The clusterID is still needed for destroy, so it is now a separate asset which gets stored in ClusterMetadata. Other assets needing the clusterID (e.g. legacy manifests) declare a separate dependency on this asset. For convenience of package dependencies, the asset still lives in the installconfig package.
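A simplified sketch of what a standalone ClusterID asset can look like, assuming the installer's asset interface (Dependencies/Generate/Name) and the pborman/uuid helper; the real asset in pkg/asset/installconfig may differ in detail:

```go
package installconfig

import (
	"github.com/pborman/uuid"

	"github.com/openshift/installer/pkg/asset"
)

// ClusterID is the unique ID of the cluster. It is generated once and
// consumed by any asset (manifests, cluster metadata) that declares a
// dependency on it, instead of being read from install-config.yaml.
type ClusterID struct {
	ClusterID string
}

// Dependencies returns no dependencies; the ID is generated, not derived.
func (a *ClusterID) Dependencies() []asset.Asset {
	return []asset.Asset{}
}

// Generate creates a fresh random UUID for the cluster.
func (a *ClusterID) Generate(asset.Parents) error {
	a.ClusterID = uuid.New()
	return nil
}

// Name returns the human-friendly name of the asset.
func (a *ClusterID) Name() string {
	return "Cluster ID"
}
```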
/lgtm. I'll pull the /hold once #1052 lands.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: staebler, wking. The full list of commands accepted by this bot can be found here; the pull request process is described here.
#1052 landed. /hold cancel
/retest
/retest
Per the last arch call, Hive and service delivery both need to know the cluster ID before we start provisioning resources. With this change it looks like we can no longer specify our own, but I'm not seeing the cluster metadata on disk when I generate manifests. Did this change make it in before that? Any advice on our best path to know that UUID before we provision? I do see a few mentions of it in the state file and cvo-overrides.yml after generating manifests.
@dgoodwin oops, I think this was merged a little prematurely. We'll need to follow up with an easy way to capture that UUID before the provisioning is started.
Instead of just the UUID, can we make a metadata.json asset?
@dgoodwin Abhinav pointed out that …
You need …
Never mind, I was confused. Bring-your-own-infrastructure must also be destroy-your-own-infrastructure. For Hive or anything that is actually calling …
The two use cases were: (1) service delivery will start receiving telemetry for the cluster while it's installing, but they have no knowledge of the UUID, which is a problem for them, and (2) if Hive fails to upload that UUID after install, we have an orphaned cluster that can't be cleaned up automatically. Writing the metadata.json as an asset is a perfect solution; we can upload it once ready, and if that fails, no harm done, we'll just keep retrying.
From Devan Goodwin [1]:

The two use cases were (1) service delivery will start receiving telemetry for the cluster while it's installing, but they have no knowledge of the UUID, which is a problem for them, and (2) if Hive fails to upload that UUID after install we have an orphaned cluster that can't be cleaned up automatically. Writing the metadata.json as an asset is a perfect solution; we can upload once ready and if it fails, no harm done, we'll just keep retrying.

Matthew recommended the no-op load [2]:

My suggestion is that, for now, Load should return false always. The installer will ignore any changes to metadata.json. In the future, perhaps we should introduce a read-only asset that would cause the installer to warn (or fail) in the face of changes.

[1]: openshift#1057 (comment)
[2]: openshift#1070 (comment)
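A hedged sketch of what writing metadata.json as an asset with a no-op Load could look like; the asset.File type, the FileFetcher signature, and the metadata fields shown here are assumptions, not the installer's exact API:

```go
package cluster

import (
	"encoding/json"

	"github.com/openshift/installer/pkg/asset"
)

const metadataFilename = "metadata.json"

// Metadata exposes the cluster metadata (including the cluster UUID) on
// disk, so tools like Hive can read and upload it before or during
// provisioning.
type Metadata struct {
	File *asset.File
}

// Files returns the metadata.json file to be written to the asset directory.
func (m *Metadata) Files() []*asset.File {
	if m.File != nil {
		return []*asset.File{m.File}
	}
	return []*asset.File{}
}

// Load always returns false: the installer ignores any user edits to
// metadata.json rather than re-consuming them, per the suggestion above.
func (m *Metadata) Load(asset.FileFetcher) (bool, error) {
	return false, nil
}

// marshalMetadata serializes illustrative metadata fields for metadata.json.
func marshalMetadata(clusterID, clusterName string) ([]byte, error) {
	return json.Marshal(map[string]string{
		"clusterID":   clusterID,
		"clusterName": clusterName,
	})
}
```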
Since 0.10.0, clusterID is no longer part of install-config.yaml. See openshift/installer#1057.
Catching up with openshift/installer@170fdc2d2c (pkg/asset/: Remove clusterID from installconfig and move it to cluster, 2018-12-04, openshift/installer#1057).
#783 rebased on top of #1052