
[receiver/chronyreceiver] Receiver is not scraping dial unixgram /var/run/chrony/chronyd.sock #32487

Closed
saleelshetye84 opened this issue Apr 17, 2024 · 5 comments

Comments

@saleelshetye84

Component(s)

receiver/chrony

What happened?

Description

Steps to Reproduce

Expected Result

Actual Result

Collector version

latest

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

receivers:
      chrony:
        #address: unix:///var/run/chrony/chronyd.sock
        timeout: {{ .Values.scrape.timeout.chrony }}
        collection_interval: {{ .Values.scrape.duration.chrony }}
        metrics:
          ntp.skew:
            enabled: false
          ntp.time.correction:
            enabled: false


=================================================================
podSecurityContext:
    fsGroup: 5000
  securityContext:
    allowPrivilegeEscalation: true  
    runAsNonRoot: false    
    readOnlyRootFilesystem: true
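
For reference, once the Helm placeholders in the receiver block are resolved, the configuration amounts to roughly the following (the 10s timeout and 30s collection interval are illustrative values, not the ones from our actual values file):

receivers:
  chrony:
    #address: unix:///var/run/chrony/chronyd.sock
    timeout: 10s
    collection_interval: 30s
    metrics:
      ntp.skew:
        enabled: false
      ntp.time.correction:
        enabled: false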

Log output

2024-04-08T20:44:32.646Z error scraperhelper/scrapercontroller.go:200 Error scraping metrics {"kind": "receiver", "name": "chrony", "data_type": "metrics", "error": "dial unixgram /var/run/chrony/chronyd.sock: connect: permission denied", "scraper": "chrony"}
go.opentelemetry.io/collector/receiver/scraperhelper.(*controller).scrapeMetricsAndReport
go.opentelemetry.io/collector/receiver@v0.87.0/scraperhelper/scrapercontroller.go:200
go.opentelemetry.io/collector/receiver/scraperhelper.(*controller).startScraping.func1
go.opentelemetry.io/collector/receiver@v0.87.0/scraperhelper/scrapercontroller.go:172

Additional context

We are running the otel agent as a daemonset on the EKS cluster nodes and we want the NTP values scraped.
Does the chrony receiver need to run inside the k8s pods with root privileges in order to scrape the node-level /var/run/chrony/chronyd.sock?

We are running into the permission-denied error pasted above. Is there an alternative way to scrape the node-level NTP metrics without running the k8s pods with root privileges? Or is this a bug in the chrony receiver?
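
To make the question concrete, the setup in play is essentially a hostPath mount of the node's chrony directory into the collector container, along the lines of this sketch (the volume and container names are illustrative, not anything the receiver requires):

volumes:
  - name: chrony-socket
    hostPath:
      path: /var/run/chrony
      type: Directory
containers:
  - name: otel-agent               # illustrative container name
    volumeMounts:
      - name: chrony-socket
        mountPath: /var/run/chrony # the receiver dials chronyd.sock under this path

The mounted socket keeps whatever ownership and mode it has on the node, so the collector process still has to be permitted to connect to it.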

@saleelshetye84 saleelshetye84 added bug Something isn't working needs triage New item requiring triage labels Apr 17, 2024
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@MovieStoreGuy
Contributor

Hey @saleelshetye84,

The socket file needs to be accessible by the user the container runs as. I suspect that if you're mounting it inside the collector container, it is inheriting the default file permissions that are present on the node.

You should be able to update the file permissions as part of the mount so that they match the container's user settings, and resolve the issue that way.
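
As a rough sketch of that idea (the GID below is purely an example; it has to match whatever group owns chronyd.sock on the node, e.g. chrony or _chrony depending on the distro):

securityContext:
  runAsNonRoot: true
  runAsUser: 10001                 # illustrative non-root UID
  runAsGroup: 109                  # must match the socket's owning group on the node
  allowPrivilegeEscalation: false

Note that fsGroup generally will not help here, since Kubernetes does not manage ownership on hostPath volumes; the group simply has to line up with what already exists on the node.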

@MovieStoreGuy MovieStoreGuy removed bug Something isn't working Stale needs triage New item requiring triage labels Jun 21, 2024
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Aug 20, 2024
Contributor

This issue has been closed as inactive because it has been stale for 120 days with no activity.

@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Oct 19, 2024
2 participants