
Reduce log level on detectors failing to detect #2268

Open
david-luna opened this issue Jun 7, 2024 · 1 comment · May be fixed by #2382

Comments

@david-luna
Contributor

I've noticed that some detectors use diag.info or diag.warn when they fail to detect anything. With the default log level, these messages are displayed on stdout.

A couple of places where this happens:

This is not a problem per se, but these messages appear whenever the customer enables detectors that are guaranteed to fail. Some examples:

  • the app is started with the container detector enabled but is not deployed in a container
  • the app is started with the AWS detector enabled but is deployed in another cloud
  • the app is started with OTEL_NODE_RESOURCE_DETECTORS=all when using auto-instrumentations

IMHO, seeing messages about AWS when my app is running in another cloud could be a bit misleading. I'm attaching a small repro here.

package.json

{
  "name": "detectors-test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/auto-instrumentations-node": "^0.47.0"
  }
}

index.js

// Usage:
//  OTEL_NODE_RESOURCE_DETECTORS=all node --require '@opentelemetry/auto-instrumentations-node/register' index.js

const https = require('https');

const clientReq = https.request('https://opentelemetry.io/', function (cres) {
  console.log('client response: %s %s', cres.statusCode, cres.headers);
  const chunks = [];
  cres.on('data', function (chunk) {
      chunks.push(chunk);
  });
  cres.on('end', function () {
      const body = chunks.join('');
      console.log('client response body length: %j', body.length);
  });
});
clientReq.end();

And this is what you get

OTEL_NODE_RESOURCE_DETECTORS=all node --require '@opentelemetry/auto-instrumentations-node/register' index.js

OTEL_TRACES_EXPORTER is empty. Using default otlp exporter.
OpenTelemetry automatic instrumentation started successfully
Container Detector failed to read the Container ID:  ENOENT: no such file or directory, open '/proc/self/cgroup'
Process is not running on K8S [Error: ENOENT: no such file or directory, access '/var/run/secrets/kubernetes.io/serviceaccount/token'] {
  errno: -2,
  code: 'ENOENT',
  syscall: 'access',
  path: '/var/run/secrets/kubernetes.io/serviceaccount/token'
}
client response: 200 {
  ...
  connection: 'close'
}
client response body length: 20669

The K8s message is scary if you're not familiar with the topic. The detectors simply resolved that the app is running neither in a container nor in an AWS cluster. So my proposal would be:

  • reduce the level to debug
  • rephrase the messages
@johnli-developer

I will work on this issue
