
Multi-arch node affinity patch included in auth proxy patch? #3272

Closed
joelanford opened this issue Mar 9, 2023 · 2 comments · Fixed by #3311
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@joelanford
Member

What broke? What's expected?

Since v3.7.0, it looks like kubebuilder is inserting a multi-arch-related affinity patch into my final kustomize manifests by including it in the auth proxy patch by default here:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - amd64
          - arm64
          - ppc64le
          - s390x
        - key: kubernetes.io/os
          operator: In
          values:
          - linux
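For context, that affinity block sits at the pod-spec level of the auth proxy patch. A rough sketch of the surrounding file follows; the file path, container name, and image tag reflect a typical kubebuilder scaffold and may differ between versions:

```yaml
# config/default/manager_auth_proxy_patch.yaml (sketch; exact contents vary by kubebuilder version)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      # The scaffolded node affinity lands here, at the pod level,
      # alongside the kube-rbac-proxy sidecar the patch is actually about.
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
                - ppc64le
                - s390x
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - name: kube-rbac-proxy
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0  # tag is an assumption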

However, this same patch is also included (but commented out) in the ./config/manager/manager.yaml file, originating here:

# TODO(user): Uncomment the following code to configure the nodeAffinity expression
# according to the platforms which are supported by your solution.
# It is considered best practice to support multiple architectures. You can
# build your manager image using the makefile target docker-buildx.
# affinity:
#   nodeAffinity:
#     requiredDuringSchedulingIgnoredDuringExecution:
#       nodeSelectorTerms:
#       - matchExpressions:
#         - key: kubernetes.io/arch
#           operator: In
#           values:
#           - amd64
#           - arm64
#           - ppc64le
#           - s390x
#         - key: kubernetes.io/os
#           operator: In
#           values:
#           - linux

It seems like the patch was inadvertently added to the auth proxy patch file (it doesn't seem related to the auth proxy at all). I also didn't see any discussion about this in the PR that introduced it (#2906).

Just checking to see if this is a bug or if I'm missing something. Thanks!

Reproducing this issue

Run any invocation of kubebuilder init that uses the kustomize/v1 plugin.

KubeBuilder (CLI) Version

3.7.0 (or later, it seems)

PROJECT version

3

Plugin versions

go.kubebuilder.io/v4-alpha

Other versions

No response

Extra Labels

No response

@joelanford joelanford added the kind/bug Categorizes issue or PR as related to a bug. label Mar 9, 2023
@camilamacedo86 camilamacedo86 added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Mar 11, 2023
@camilamacedo86
Member

Hi @joelanford,

I believe the idea behind it not being commented out in manager_auth_proxy_patch.go is that it applies the config for the image gcr.io/kubebuilder/kube-rbac-proxy, which supports those platforms.

However, are you saying that it doesn't seem related to the auth proxy at all? Did you find any issue with it?
Could you please tell us more, so we can see whether we just need to comment it out or whether the scaffold itself needs to change?

Thank you for your collaboration and for bringing this to our attention.

@joelanford
Member Author

joelanford commented Mar 13, 2023

If I understand correctly, the patch is applied at the pod level, not the container level, so it affects both the kube-rbac-proxy container and the manager container.

Scheduling happens for an entire pod, not for individual containers within a pod. So it seems like we should remove this from the rbac proxy patch file but leave the commented-out affinity in the manager deployment.
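To illustrate the scheduling-scope point: in a Deployment's pod template, affinity is a sibling of containers, so any patch that sets it constrains where the whole pod (manager and kube-rbac-proxy alike) can run. A minimal sketch, with illustrative container names:

```yaml
spec:
  template:
    spec:
      # Pod-level field: the scheduler evaluates this once for the entire pod.
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values: ["amd64", "arm64"]
      # Container-level fields (image, args, resources) live here, but
      # scheduling constraints cannot be expressed per container.
      containers:
      - name: manager
      - name: kube-rbac-proxy
```

This is why the affinity belongs with the manager deployment manifest (where the user opts in per their supported platforms) rather than inside a sidecar-specific patch.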

@camilamacedo86 camilamacedo86 added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Mar 20, 2023