
OCPBUGS-45218: aws: fix perm requirement for edge nodes #9256

Merged

Conversation

r4f4
Contributor

@r4f4 r4f4 commented Dec 2, 2024

If an edge machine pool is specified without an instance type, the installer needs the ec2:DescribeInstanceTypeOfferings permission to derive the correct instance type available according to the local/wavelength zones being used.

Before this change, the permission was optional and, when absent, the installer would fall back to a hard-coded non-edge instance type, which can cause unsupported-configuration errors in MAPI's output:

     providerStatus:
      conditions:
      - lastTransitionTime: "2024-11-28T15:32:09Z"
        message: 'error launching instance: The requested configuration is currently
          not supported. Please check the documentation for supported configurations.'
        reason: MachineCreationFailed
        status: "False"
        type: MachineCreation
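The selection logic at stake can be sketched as follows. This is an illustrative sketch only, not the installer's actual code: `preferredEdgeInstanceType` and the zone/offering data are hypothetical stand-ins for what `ec2:DescribeInstanceTypeOfferings` (queried per availability zone) would return.

```go
package main

import "fmt"

// preferredEdgeInstanceType returns the first candidate type that is offered
// in every requested edge zone, plus whether any candidate matched.
// offerings maps zone name -> set of instance types offered there, the kind
// of data ec2:DescribeInstanceTypeOfferings provides per availability zone.
func preferredEdgeInstanceType(candidates, zones []string, offerings map[string]map[string]bool) (string, bool) {
	for _, it := range candidates {
		supportedEverywhere := true
		for _, z := range zones {
			if !offerings[z][it] {
				supportedEverywhere = false
				break
			}
		}
		if supportedEverywhere {
			return it, true
		}
	}
	return "", false // caller should fail rather than guess a default
}

func main() {
	// Hypothetical zone names and offerings for illustration.
	offerings := map[string]map[string]bool{
		"us-east-1-bos-1a":        {"r5.2xlarge": true, "m5.2xlarge": true},
		"us-east-1-wl1-bos-wlz-1": {"m5.2xlarge": true, "t3.xlarge": true},
	}
	it, ok := preferredEdgeInstanceType(
		[]string{"m6i.xlarge", "m5.2xlarge"},
		[]string{"us-east-1-bos-1a", "us-east-1-wl1-bos-wlz-1"},
		offerings,
	)
	fmt.Println(it, ok) // m5.2xlarge true — m6i.xlarge is skipped, not offered in both zones
}
```

Without the permission, the offerings map cannot be populated, and the only options are a hard-coded guess (the pre-PR behavior) or an explicit error.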

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Dec 2, 2024
@openshift-ci-robot
Contributor

@r4f4: This pull request references Jira Issue OCPBUGS-45218, which is invalid:

  • expected the bug to target the "4.19.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot openshift-ci-robot added the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label Dec 2, 2024
@r4f4
Contributor Author

r4f4 commented Dec 2, 2024

/uncc @andfasano
/cc @mtulio

@openshift-ci openshift-ci bot requested review from mtulio and removed request for andfasano December 2, 2024 14:39
@r4f4
Contributor Author

r4f4 commented Dec 2, 2024

/jira refresh

@openshift-ci-robot openshift-ci-robot added the jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. label Dec 2, 2024
@openshift-ci-robot
Contributor

@r4f4: This pull request references Jira Issue OCPBUGS-45218, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.19.0) matches configured target version for branch (4.19.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @gpei


@openshift-ci-robot openshift-ci-robot removed the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label Dec 2, 2024
@openshift-ci openshift-ci bot requested a review from gpei December 2, 2024 14:44
@patrickdillon
Contributor

I think requiring this perm is a good idea

/approve

IIRC the permission check is a dependency of the cluster asset and runs after the machine manifests have been generated. I am on my phone right now and can’t double check, but if that is the case it seems like the following could happen:

  • installer generates machineset with bad instance types (manifests target)
  • installer throws validation error for missing perms (cluster target)
  • User adds perms
  • User reruns install with same machinesets and install fails

If I am mistaken and permission checks run earlier, then this wouldn't be possible. Also, I don't think we need to hold up this PR to solve this problem; I just wanted to point out this potential edge case for discussion.

Contributor

openshift-ci bot commented Dec 5, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: patrickdillon

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 5, 2024
@patrickdillon
Contributor

This LGTM but let’s get another reviewer to take a look

@r4f4
Contributor Author

r4f4 commented Dec 5, 2024

I think requiring this perm is a good idea

/approve

IIRC the permission check is a dependency of the cluster asset and runs after the machine manifests have been generated. I am on my phone right now and can’t double check, but if that is the case it seems like the following could happen:

  • installer generates machineset with bad instance types (manifests target)
  • installer throws validation error for missing perms (cluster target)
  • User adds perms
  • User reruns install with same machinesets and install fails

Indeed that's a problem. The Worker asset depends on the CredsCheck but not on the PermsCheck:

// Dependencies returns all of the dependencies directly needed by the
// Worker asset
func (w *Worker) Dependencies() []asset.Asset {
	return []asset.Asset{
		&installconfig.ClusterID{},
		// PlatformCredsCheck just checks the creds (and asks, if needed)
		// We do not actually use it in this asset directly, hence
		// it is put in the dependencies but not fetched in Generate
		&installconfig.PlatformCredsCheck{},
		&installconfig.InstallConfig{},
		// ...
	}
}

compared to the Cluster asset

// Dependencies returns the direct dependency for launching
// the cluster.
func (c *Cluster) Dependencies() []asset.Asset {
	return []asset.Asset{
		&installconfig.ClusterID{},
		&installconfig.InstallConfig{},
		// PlatformCredsCheck, PlatformPermsCheck, PlatformProvisionCheck, and VCenterContexts.
		// perform validations & check perms required to provision infrastructure.
		// We do not actually use them in this asset directly, hence
		// they are put in the dependencies but not fetched in Generate.
		&installconfig.PlatformCredsCheck{},
		&installconfig.PlatformPermsCheck{},
		// ...
	}
}

Isn't that an existing installer bug? Generating worker manifests involves SDK calls not only for AWS but also for other platforms. I'll open a separate bug to handle that.
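The ordering concern can be illustrated with a toy dependency walk (a hypothetical, heavily simplified graph; the real resolution lives in the installer's asset store): because PermsCheck is reachable only from the Cluster asset, a `create manifests` run finishes without it ever executing.

```go
package main

import "fmt"

// deps is an illustrative, simplified asset graph based on the snippets
// quoted above. Note that Worker (and thus the manifests target) does not
// depend on PermsCheck.
var deps = map[string][]string{
	"Manifests": {"Worker"},
	"Worker":    {"InstallConfig", "CredsCheck"}, // no PermsCheck here
	"Cluster":   {"InstallConfig", "CredsCheck", "PermsCheck", "Manifests"},
}

// generate walks dependencies depth-first, appending each asset to order
// after its dependencies, mimicking how assets are fetched before use.
func generate(asset string, done map[string]bool, order *[]string) {
	if done[asset] {
		return
	}
	for _, d := range deps[asset] {
		generate(d, done, order)
	}
	done[asset] = true
	*order = append(*order, asset)
}

func main() {
	var manifestsOrder, clusterOrder []string
	generate("Manifests", map[string]bool{}, &manifestsOrder)
	generate("Cluster", map[string]bool{}, &clusterOrder)
	fmt.Println(manifestsOrder) // PermsCheck never runs: manifests are written unchecked
	fmt.Println(clusterOrder)   // PermsCheck runs, but only at cluster time
}
```

In this toy model, moving PermsCheck into Worker's dependency list would surface missing permissions before any machine manifests are written, which is the fix direction suggested for OCPBUGS-45657.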

@r4f4
Contributor Author

r4f4 commented Dec 5, 2024

I created https://issues.redhat.com/browse/OCPBUGS-45657 to address the point raised by Patrick.

if !awsSetPreferredInstanceByEdgeZone(ctx, instanceTypes, installConfig.AWS, zones) {
	// Using the default instance type from the non-edge pool often fails.
	logrus.Warnf("failed to find preferred instance type for one or more zones in the %s pool, using default: %s", pool.Name, instanceTypes[0])
	mpool.InstanceType = instanceTypes[0]
Contributor

This seems to be the main part related to the bug. Is it ok to fail here (i.e. is this the intended behavior)?

Contributor Author

My proposal: it seems better to fail here than to have a node silently fail to come up with the error shown in the PR description. The default instance type from the non-edge pool might not be compatible with edge zones.

Contributor

LGTM to fail when a supported instance type isn't found for any of the zones.

Contributor

@barbacbd barbacbd left a comment

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Dec 6, 2024
@r4f4
Contributor Author

r4f4 commented Dec 10, 2024

@mtulio can you review this one?

@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 78adb29 and 2 for PR HEAD 55d4f83 in total

Comment on lines +332 to +357
PermissionEdgeDefaultInstance: {
// Needed to filter zones by instance type
"ec2:DescribeInstanceTypeOfferings",
},
Contributor

@r4f4 @patrickdillon I would like to exercise this comment a little more. Isn't ec2:DescribeInstanceTypeOfferings called on every installer execution to discover instance types for the pool, with a fallback to static ones regardless of the pool type? My impression is that this permission should be in the base set of required permissions, not only when edge zones are added.

Holding until I hear from you all. Feel free to drop.
/hold

Contributor Author

Not on every execution, only when the instanceType is not set and/or zones are not specified. For the former, we fall back to a hardcoded type, so the permission is optional. The problem with that logic for edge zones is that not all regions and instance types support them.

If you want to make it required for all cases, that should be addressed as a separate bug as it impacts managed services and their managed policies.
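The conditional requirement r4f4 describes can be sketched as a small, purely illustrative helper (hypothetical names; the installer's real permission groups are defined elsewhere, and the base set shown here is a stand-in, not the real one):

```go
package main

import "fmt"

// requiredPermissions returns an illustrative permission set for a given
// install-config shape. ec2:DescribeInstanceTypeOfferings is only required
// when an edge machine pool is present without an explicit instance type,
// mirroring the conditional PermissionEdgeDefaultInstance group in this PR.
func requiredPermissions(hasEdgePool, edgeInstanceTypeSet bool) []string {
	perms := []string{
		"ec2:RunInstances", // stand-in for the real base permission set
	}
	if hasEdgePool && !edgeInstanceTypeSet {
		// Needed to filter zones by instance type.
		perms = append(perms, "ec2:DescribeInstanceTypeOfferings")
	}
	return perms
}

func main() {
	fmt.Println(requiredPermissions(true, false))  // base + DescribeInstanceTypeOfferings
	fmt.Println(requiredPermissions(true, true))   // base only: explicit type, nothing to derive
	fmt.Println(requiredPermissions(false, false)) // base only: no edge pool
}
```

Making the permission unconditionally part of the base list would collapse the branch above, which is the change deferred to a separate bug because it affects managed-services policies.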

Contributor

@mtulio mtulio Dec 10, 2024

Not every execution, only when the instanceType is not set and/or when zones are not specified

Yeah, apologies. I meant the default execution flow, in which those fields aren't set/required. I am not sure we have a CI scenario for that flow, as a couple of fields were enforced in CI mainly to save costs, preventing the cluster from spreading across all zones discovered in the region.

Contributor

as it impacts managed services and their managed policies.

If this change is only to support the managed services flow for now, LGTM.

Contributor

I agree @mtulio that ec2:DescribeInstanceTypeOfferings should be required in the default permission set for all installs.

If you want to make it required for all cases, that should be addressed as a separate bug as it impacts managed services and their managed policies.

@r4f4 should we open a new bug? I believe rosa supports edge zones (@mtulio probably knows better than I do) so I would think as it stands, this would affect the rosa policy as well

Contributor

Right. So it looks like there are at least two regions that don't have M6i: eu-south-2 & ap-southeast-4. So IIUC the issue is that IFF you're installing to one of those regions & don't have this perm & don't specify an instance type, the install would fail (as it will default to the unavailable m6i instance).

Contributor

Contributor Author

Thanks, Patrick.

Contributor Author

@mtulio are you happy to proceed with this PR and make the perm always required as part of the new bug Patrick opened?

Contributor

@mtulio mtulio Dec 18, 2024

So IIUC the issue is that IFF you're installing to one of those regions & don't have this perm & don't specify an instance type, the install would fail (as it will default to the unavailable m6i instance).

Correct.

@r4f4 Yes. Patrick described the rationale behind my thoughts. When we introduced m6i, we aimed for better performance at the same price for control planes (in general, new instance generations improve performance by ~15% and are cheaper, except the 7th gen). At that time, m6i was available in only a few regions, but the algorithm covered those edge cases.

@mtulio are you happy to proceede with this PR and make the perm always required as part of the new bug Patrick opened?

Yes.
/lgtm

FWIW, some time ago I wrote an ugly script to collect and consolidate that information across zones around the globe and build the list of supported instance types in edge zones and in the region. If you are interested in exploring, this could be a starting point, along with sample output.

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 10, 2024
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 14, 2024
@r4f4 r4f4 force-pushed the aws-edge-default-instance-fix branch from 55d4f83 to 30724f9 Compare December 16, 2024 19:15
@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Dec 16, 2024
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 16, 2024
@r4f4
Contributor Author

r4f4 commented Dec 16, 2024

Update: rebased on top of master to fix merge conflicts.

@r4f4
Contributor Author

r4f4 commented Dec 17, 2024

/retest-required

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Dec 18, 2024
@r4f4
Contributor Author

r4f4 commented Dec 19, 2024

/retest-required

@r4f4
Contributor Author

r4f4 commented Dec 19, 2024

/hold cancel

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 19, 2024
@r4f4
Contributor Author

r4f4 commented Dec 19, 2024

/label acknowledge-critical-fixes-only

@openshift-ci openshift-ci bot added the acknowledge-critical-fixes-only Indicates if the issuer of the label is OK with the policy. label Dec 19, 2024
@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 27fa766 and 2 for PR HEAD 30724f9 in total

@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD ec72ce6 and 1 for PR HEAD 30724f9 in total

@r4f4
Contributor Author

r4f4 commented Dec 19, 2024

/override ci/prow/e2e-azure-ovn-upi
Few e2e failures and not affected by this PR.

Contributor

openshift-ci bot commented Dec 19, 2024

@r4f4: Overrode contexts on behalf of r4f4: ci/prow/e2e-azure-ovn-upi


@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD ec72ce6 and 2 for PR HEAD 30724f9 in total

@r4f4
Contributor Author

r4f4 commented Dec 20, 2024

/retest-required

Contributor

openshift-ci bot commented Dec 20, 2024

@r4f4: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-aws-ovn-shared-vpc-edge-zones 30724f9 link false /test e2e-aws-ovn-shared-vpc-edge-zones
ci/prow/e2e-aws-default-config 30724f9 link false /test e2e-aws-default-config
ci/prow/e2e-aws-ovn-single-node 30724f9 link false /test e2e-aws-ovn-single-node
ci/prow/e2e-aws-ovn-edge-zones 30724f9 link false /test e2e-aws-ovn-edge-zones
ci/prow/altinfra-e2e-aws-ovn 30724f9 link false /test altinfra-e2e-aws-ovn
ci/prow/e2e-external-aws-ccm 30724f9 link false /test e2e-external-aws-ccm
ci/prow/e2e-aws-ovn-heterogeneous 30724f9 link false /test e2e-aws-ovn-heterogeneous
ci/prow/e2e-aws-ovn-fips 30724f9 link false /test e2e-aws-ovn-fips
ci/prow/e2e-aws-ovn-imdsv2 30724f9 link false /test e2e-aws-ovn-imdsv2
ci/prow/okd-scos-e2e-aws-ovn 30724f9 link false /test okd-scos-e2e-aws-ovn
ci/prow/e2e-aws-ovn-shared-vpc-custom-security-groups 30724f9 link false /test e2e-aws-ovn-shared-vpc-custom-security-groups

Full PR test history. Your PR dashboard.


@r4f4
Contributor Author

r4f4 commented Dec 20, 2024

/override ci/prow/e2e-azure-ovn-upi
Not affected by PR changes.

Contributor

openshift-ci bot commented Dec 20, 2024

@r4f4: Overrode contexts on behalf of r4f4: ci/prow/e2e-azure-ovn-upi


@openshift-merge-bot openshift-merge-bot bot merged commit f7a8032 into openshift:main Dec 20, 2024
33 of 44 checks passed
@openshift-ci-robot
Contributor

@r4f4: Jira Issue OCPBUGS-45218: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-45218 has been moved to the MODIFIED state.


@openshift-bot
Contributor

[ART PR BUILD NOTIFIER]

Distgit: ose-installer-altinfra
This PR has been included in build ose-installer-altinfra-container-v4.19.0-202412201709.p0.gf7a8032.assembly.stream.el9.
All builds following this will include this PR.

@openshift-bot
Contributor

[ART PR BUILD NOTIFIER]

Distgit: ose-installer-terraform-providers
This PR has been included in build ose-installer-terraform-providers-container-v4.19.0-202412201709.p0.gf7a8032.assembly.stream.el9.
All builds following this will include this PR.

@openshift-bot
Contributor

[ART PR BUILD NOTIFIER]

Distgit: ose-baremetal-installer
This PR has been included in build ose-baremetal-installer-container-v4.19.0-202412201709.p0.gf7a8032.assembly.stream.el9.
All builds following this will include this PR.

@r4f4
Contributor Author

r4f4 commented Dec 20, 2024

/cherry-pick release-4.18

@openshift-cherrypick-robot

@r4f4: new pull request created: #9334


@openshift-bot
Contributor

[ART PR BUILD NOTIFIER]

Distgit: ose-installer-artifacts
This PR has been included in build ose-installer-artifacts-container-v4.19.0-202412201709.p0.gf7a8032.assembly.stream.el9.
All builds following this will include this PR.
