Unable to run the targetAllocator in namespace mode #3086

Open
alita1991 opened this issue Jul 1, 2024 · 1 comment
Labels: area:target-allocator, enhancement, good first issue

Comments


alita1991 commented Jul 1, 2024

Component(s)

target allocator

Describe the issue you're reporting

Hi,

I'm trying to run the targetAllocator in namespace mode, but I encountered the following errors:

{"level":"error","ts":"2024-07-01T09:59:11Z","logger":"setup.prometheus-cr-watcher","msg":"Failed to create namespace informer in promOperator CRD watcher","error":"missing list/watch permissions on the 'namespaces' resource: missing \"list\" permission on resource \"namespaces\" (group: \"\") for all namespaces: missing \"watch\" permission on resource \"namespaces\" (group: \"\") for all namespaces","stacktrace":"[github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/watcher.NewPrometheusCRWatcher\n\t/home/runner/work/opentelemetry-operator/opentelemetry-operator/cmd/otel-allocator/watcher/promOperator.go:99\nmain.main\n\t/home/runner/work/opentelemetry-operator/opentelemetry-operator/cmd/otel-allocator/main.go:119\nruntime.main\n\t/opt/hostedtoolcache/go/1.22.4/x64/src/runtime/proc.go:271](http://github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/watcher.NewPrometheusCRWatcher/n/t/home/runner/work/opentelemetry-operator/opentelemetry-operator/cmd/otel-allocator/watcher/promOperator.go:99/nmain.main/n/t/home/runner/work/opentelemetry-operator/opentelemetry-operator/cmd/otel-allocator/main.go:119/nruntime.main/n/t/opt/hostedtoolcache/go/1.22.4/x64/src/runtime/proc.go:271)"}
{"level":"error","ts":"2024-07-01T09:59:14Z","msg":"pkg/mod/[k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232](http://k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232): Failed to watch *v1.PodMonitor: failed to list *v1.PodMonitor: [podmonitors.monitoring.coreos.com](http://podmonitors.monitoring.coreos.com/) is forbidden: User \"system:serviceaccount:argocd-openshift:observability-cr-argocd-openshift-sa\" cannot list resource \"podmonitors\" in API group \"[monitoring.coreos.com](http://monitoring.coreos.com/)\" at the cluster scope","stacktrace":"[k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:150\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:299\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:297\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:72](http://k8s.io/client-go/tools/cache.DefaultWatchErrorHandler/n/t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:150/nk8s.io/client-go/tools/cache.(*Reflector).Run.func1/n/t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:299/nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1/n/t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226/nk8s.io/apimachinery/pkg/util/wait.BackoffUntil/n/t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227/nk8s.io/client-go/tools/cache.(*Reflector).Run/n/t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:297/nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2/n/t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:55/nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1/n/t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:72)"}
{"level":"error","ts":"2024-07-01T12:10:48Z","msg":"pkg/mod/[k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232](http://k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232): Failed to watch *v1.ServiceMonitor: failed to list *v1.ServiceMonitor: [servicemonitors.monitoring.coreos.com](http://servicemonitors.monitoring.coreos.com/) is forbidden: User \"system:serviceaccount:argocd-openshift:observability-cr-argocd-openshift-sa\" cannot list resource \"servicemonitors\" in API group \"[monitoring.coreos.com](http://monitoring.coreos.com/)\" at the cluster scope","stacktrace":"[k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:150\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:299\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:297\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:72](http://k8s.io/client-go/tools/cache.DefaultWatchErrorHandler/n/t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:150/nk8s.io/client-go/tools/cache.(*Reflector).Run.func1/n/t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:299/nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1/n/t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226/nk8s.io/apimachinery/pkg/util/wait.BackoffUntil/n/t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227/nk8s.io/client-go/tools/cache.(*Reflector).Run/n/t/home/runner/go/pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:297/nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2/n/t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:55/nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1/n/t/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:72)"}

Config

  targetAllocator:
    allocationStrategy: consistent-hashing
    enabled: true
    filterStrategy: relabel-config
    observability:
      metrics: {}
    podSecurityContext:
      fsGroup: 1000700000
      seccompProfile:
        type: RuntimeDefault
    prometheusCR:
      enabled: true
      podMonitorSelector: {}
      scrapeInterval: 30s
      serviceMonitorSelector: {}

The collector is configured to run with a serviceAccount bound to a Role, which limits its access to a single namespace.
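For context, a namespace-scoped Role of the kind described above would look roughly like the sketch below (the Role name is a placeholder; the resources, verbs, and namespace are taken from the errors). Even with such a Role bound, the errors above persist, because the Prometheus CR watcher also wants list/watch on namespaces and lists ServiceMonitors/PodMonitors at the cluster scope.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: observability-cr-targetallocator-role  # placeholder name
  namespace: argocd-openshift                   # the single namespace the service account is limited to
rules:
  - apiGroups: ["monitoring.coreos.com"]
    resources: ["servicemonitors", "podmonitors"]
    verbs: ["get", "list", "watch"]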

@jaronoff97 (Contributor) commented
The target allocator already has a ServiceMonitor and PodMonitor namespace selector; we just need to expose it in the operator here.
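For illustration only, exposing that selector on the CR might look something like the sketch below. The serviceMonitorNamespaceSelector and podMonitorNamespaceSelector field names are hypothetical (borrowed from the Prometheus Operator's naming convention), since this issue exists precisely because the operator does not expose them yet.

  targetAllocator:
    prometheusCR:
      enabled: true
      # Hypothetical fields, not exposed by the operator at the time of this issue;
      # names mirror the Prometheus Operator's *NamespaceSelector convention.
      serviceMonitorNamespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: argocd-openshift
      podMonitorNamespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: argocd-openshift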
