Governance Policy Controller disable is not working #186
open-cluster-management-io/governance-policy-addon-controller#85 fixes this, but that was put in after the current release. As a workaround, run:

```shell
export CLUSTERNAME=cluster1 # adjust this for the managed cluster's name
kubectl create role fix-policy-fw-uninstall --verb=deletecollection --resource=policies.policy.open-cluster-management.io --namespace="${CLUSTERNAME}"
kubectl create rolebinding fix-policy-fw-uninstall --role=fix-policy-fw-uninstall --serviceaccount=open-cluster-management-agent-addon:governance-policy-framework-sa --namespace="${CLUSTERNAME}"
```
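For reference, the same workaround can be written declaratively. This is a sketch of the Role and RoleBinding that the two `kubectl create` commands above would generate (names, verbs, and service account taken from the commands; `cluster1` is assumed as the managed cluster's namespace and should be adjusted):

```yaml
# Equivalent of: kubectl create role fix-policy-fw-uninstall ...
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: fix-policy-fw-uninstall
  namespace: cluster1  # assumption: the managed cluster's name/namespace
rules:
  - apiGroups: ["policy.open-cluster-management.io"]
    resources: ["policies"]
    verbs: ["deletecollection"]
---
# Equivalent of: kubectl create rolebinding fix-policy-fw-uninstall ...
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: fix-policy-fw-uninstall
  namespace: cluster1  # assumption: same namespace as the Role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: fix-policy-fw-uninstall
subjects:
  - kind: ServiceAccount
    name: governance-policy-framework-sa
    namespace: open-cluster-management-agent-addon
```

Applying the manifests with `kubectl apply -f` has the same effect as the imperative commands and is easier to review or keep in version control.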
This issue is stale because it has been open for 120 days with no activity. After 14 days of inactivity, it will be closed. Remove the stale label or comment to keep it open.
This is fixed in 0.12.0.
@qiujian16: Closing this issue.

In response to this:

> this is fixed in 0.12.0

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Describe the bug
I'm getting an error when disabling the policy framework. I used the following command:
clusteradm addon disable --names governance-policy-framework --cluster cluster1
On the managed cluster, an uninstallation pod for governance-policy-framework is created. The pod logs contain:
2023-06-10T07:47:36.822Z error klog watch/retrywatcher.go:130 Watch failed {"error": "the server could not find the requested resource"}
However, I'm seeing the following error continuously and the addon is not disabled.
```
2023-06-10T07:47:36.822Z error klog watch/retrywatcher.go:130 Watch failed {"error": "the server could not find the requested resource"}
k8s.io/client-go/tools/watch.(*RetryWatcher).doReceive
    /go/pkg/mod/k8s.io/client-go@v0.26.4/tools/watch/retrywatcher.go:130
k8s.io/client-go/tools/watch.(*RetryWatcher).receive.func2
    /go/pkg/mod/k8s.io/client-go@v0.26.4/tools/watch/retrywatcher.go:265
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1
    /go/pkg/mod/k8s.io/apimachinery@v0.27.1/pkg/util/wait/backoff.go:259
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
    /go/pkg/mod/k8s.io/apimachinery@v0.27.1/pkg/util/wait/backoff.go:226
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
    /go/pkg/mod/k8s.io/apimachinery@v0.27.1/pkg/util/wait/backoff.go:227
k8s.io/apimachinery/pkg/util/wait.JitterUntil
    /go/pkg/mod/k8s.io/apimachinery@v0.27.1/pkg/util/wait/backoff.go:204
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext
    /go/pkg/mod/k8s.io/apimachinery@v0.27.1/pkg/util/wait/backoff.go:259
k8s.io/apimachinery/pkg/util/wait.NonSlidingUntilWithContext
    /go/pkg/mod/k8s.io/apimachinery@v0.27.1/pkg/util/wait/backoff.go:190
k8s.io/client-go/tools/watch.(*RetryWatcher).receive
```
To Reproduce
Steps to reproduce the behavior:
clusteradm addon disable --names governance-policy-framework --cluster cluster1
Expected behavior
Clean removal of the governance policy framework components from the managed cluster.
Environment (i.e. OCM version, Kubernetes version and provider):

```
$ clusteradm version
client version: v0.6.0
server release version: v1.25.5+k3s1
default bundle version: 0.11.0
```
```
$ kubectl get pods -nopen-cluster-management governance-policy-addon-controller-fb58bf69c-5h9hh -oyaml | grep image
    image: quay.io/open-cluster-management/governance-policy-addon-controller:v0.11.0
    imagePullPolicy: IfNotPresent
    image: quay.io/open-cluster-management/governance-policy-addon-controller:v0.11.0
    imageID: quay.io/open-cluster-management/governance-policy-addon-controller@sha256:00f3b0661bbc801d895470e8ab559bbf079b77ce7e0a419192a355657dedf7bd
```
Additional context
https://kubernetes.slack.com/archives/C01GE7YSUUF/p1686383711029539