[Bug] Issues with RayCluster CRD and kubectl apply #271
Comments
In case this helps other people using ArgoCD to deploy KubeRay, we solved this issue by using a Kustomization and patching the RayCluster CRD with the `argocd.argoproj.io/sync-options: Replace=true` annotation:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://github.com/ray-project/kuberay/manifests/cluster-scope-resources/?ref=master
  - https://github.com/ray-project/kuberay/manifests/base/?ref=master
patchesStrategicMerge:
  # The CRD rayclusters.ray.io manifest is too big to fit in the
  # `kubectl.kubernetes.io/last-applied-configuration` annotation
  # added by the `kubectl apply` that ArgoCD runs, so syncing fails.
  # https://github.com/ray-project/kuberay/issues/271
  # Annotate this CRD so ArgoCD uses `kubectl replace` instead and avoids the error when syncing it.
  - |-
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: rayclusters.ray.io
      annotations:
        argocd.argoproj.io/sync-options: Replace=true
```
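If it helps, one way to sanity-check the patch locally (assuming the Kustomization above is saved as `kustomization.yaml` in the current directory and the `kustomize` CLI is installed) is to render it and look for the annotation:

```sh
# Render the kustomization and confirm the RayCluster CRD now carries the Replace sync option
kustomize build . | grep -n "argocd.argoproj.io/sync-options"
```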
I have the same issue.
We'll start by replacing "apply" in the docs with "create". Then we'll look into shrinking the CRD.
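For anyone who hits this before the docs change lands, a minimal sketch of the create-based flow (run from a checkout of the repo; not an official recommendation):

```sh
# `kubectl create` does not write the last-applied-configuration annotation,
# so the oversized RayCluster CRD goes through.
kubectl create -k manifests/cluster-scope-resources
# The namespaced operator resources are small enough for a regular apply.
kubectl apply -k manifests/base
```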
Also, to extend this, we should have [...]; it used to be [...].
Status could make sense -- it would simply indicate the status of the head pod. We could potentially take a look at what the K8s deployment controller does.
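To make that concrete, a purely hypothetical shape for such a status block might look like the following (field names are illustrative, not the actual RayCluster schema):

```yaml
# Hypothetical RayCluster status sketch -- field names are made up for illustration
status:
  state: ready                 # derived from the head pod phase
  availableWorkerReplicas: 3   # analogous to a Deployment's availableReplicas
```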
Try using [...]
For Argo CD users, maybe we can add some instructions to the documentation?
@haoxins |
We could update the docs to mention that [...]
I think for the moment, the only actionable item is the documentation item described in the last comment.
I thought this had already been documented. |
…ray-project#302) This PR adds a warning about a known issue (ray-project#271) to the KubeRay docs.
Is it not possible to install both the CRDs and the KubeRay operator using Kustomize at the same time? When I try, it throws the following error:
```
admin@instance-1:~$ kustomize build .
Error: accumulating resources: accumulation err='accumulating resources from 'https://github.com/ray-project/kuberay/manifests/base?ref=v1.0.0&timeout=90s': URL is a git repository': recursed merging from path '/tmp/kustomize-3664874623/manifests/base': may not add resource with an already registered id: Namespace.v1.[noGrp]/ray-system.[noNs]
```
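Not an official answer, but the error suggests the ray-system Namespace ends up registered by both entries when the two bases are accumulated in one kustomization. One workaround is to keep them in separate kustomizations and install in two steps, which also sidesteps the CRD apply-size problem discussed above (the file and directory names below are placeholders):

```yaml
# crds/kustomization.yaml -- cluster-scoped resources (ray-system Namespace, CRDs, RBAC)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://github.com/ray-project/kuberay/manifests/cluster-scope-resources?ref=v1.0.0
---
# operator/kustomization.yaml -- the namespaced operator resources
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://github.com/ray-project/kuberay/manifests/base?ref=v1.0.0
```

Then install with `kubectl create -k crds/` followed by `kubectl apply -k operator/`.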
Search before asking
KubeRay Component
Others
What happened + What you expected to happen
Running `kubectl apply -k manifests/cluster-scope-resources` yields the error:
`The CustomResourceDefinition "rayclusters.ray.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes.`
Reason:
After re-generating the KubeRay CRD in #268, some pod template fields from recent versions of K8s were added. Now the CRD is too big to fit in the `kubectl.kubernetes.io/last-applied-configuration` annotation written by `kubectl apply`.
The solution I'd propose is to move the CRD out of the kustomization file and advise users to `kubectl create` the CRD before installing the rest of the cluster-scoped resources.
Reproduction script
See above.
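A rough way to see the size problem without a cluster is to count the bytes of the generated CRD manifest (path assumed from the usual kubebuilder layout; it may differ between versions):

```sh
# Anything over 262144 bytes cannot fit into the
# kubectl.kubernetes.io/last-applied-configuration annotation
# that client-side `kubectl apply` adds to the object.
wc -c ray-operator/config/crd/bases/ray.io_rayclusters.yaml
```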
Anything else
After running `kubectl apply -k`, I tried to `kubectl delete -k` so that I could subsequently `kubectl create -k`. Unfortunately, my `ray-system` namespace is hanging in a terminating state!
edit: My ray-system namespace is hanging simply because the cluster is 100% borked.
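Tangentially, when a namespace hangs in Terminating it is usually worth checking what is still inside it and whether finalizers are blocking deletion; standard kubectl is enough, e.g.:

```sh
# Show remaining finalizers/conditions on the namespace itself
kubectl get namespace ray-system -o yaml
# List every namespaced resource still left in ray-system
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --ignore-not-found --show-kind -n ray-system
```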
Are you willing to submit a PR?