Unable to get instance details through mongodb receiver #32350

Closed
ashish121092 opened this issue Apr 12, 2024 · 4 comments · Fixed by #33714
Labels: needs triage, processor/resourcedetection, receiver/mongodb

Comments


ashish121092 commented Apr 12, 2024

Component(s)

receiver/mongodb

Describe the issue you're reporting

I have a central collector running on a Kubernetes cluster. I have configured the mongodb receiver in the collector, and the collector exports the metrics to Google Managed Prometheus. I can see MongoDB metrics such as mongo_cache_operations and mongo_collection_count, but the metrics do not have any labels indicating which instance is reporting them. Below is my collector configuration. Can you let me know how I can get MongoDB instance details into the metric labels?

Collector version: 0.97.0

receivers:
  otlp:
    protocols:
      grpc:
      http:
  mongodb:
    hosts:
      - endpoint: 10.224.10.25:27017
        transport: "tcp"
      - endpoint: 10.224.10.28:27017
        transport: "tcp"
      - endpoint: 10.224.10.31:27017
        transport: "tcp"
    collection_interval: 60s
    initial_delay: 1s
    replica_set: rs0
    tls:
      insecure: true
      insecure_skip_verify: true

processors:
  resourcedetection:
    detectors: [gcp]
    timeout: 10s
  transform:
    # "location", "cluster", "namespace", "job", "instance", and "project_id" are reserved, and
    # metrics containing these labels will be rejected.  Prefix them with exported_ to prevent this.
    metric_statements:
      - context: datapoint
        statements:
          - set(attributes["exported_location"], attributes["location"])
          - delete_key(attributes, "location")
          - set(attributes["exported_cluster"], attributes["cluster"])
          - delete_key(attributes, "cluster")
          - set(attributes["exported_namespace"], attributes["namespace"])
          - delete_key(attributes, "namespace")
          - set(attributes["exported_job"], attributes["job"])
          - delete_key(attributes, "job")
          - set(attributes["exported_instance"], attributes["instance"])
          - delete_key(attributes, "instance")
          - set(attributes["exported_project_id"], attributes["project_id"])
          - delete_key(attributes, "project_id")
          - set(attributes["host_name"], resource.attributes["host.name"])
  batch:
    # batch metrics before sending to reduce API usage
    send_batch_max_size: 200
    send_batch_size: 200
    timeout: 5s
  memory_limiter:
    # drop metrics if memory usage gets too high
    check_interval: 1s
    limit_percentage: 65
    spike_limit_percentage: 20
  probabilistic_sampler:
    hash_seed: 22
    sampling_percentage: 50
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
exporters:
  googlecloud:
    project: test
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 2500
  googlemanagedprometheus:
  logging:

connectors:
  spanmetrics:
    resource_metrics_key_attributes:
      - service.name
      - telemetry.sdk.language
      - telemetry.sdk.name

service:
  telemetry:
    logs:
      level: "debug"
  extensions: [health_check]
  pipelines:
    metrics:
      receivers: [otlp,spanmetrics,mongodb]
      processors: [transform,batch, memory_limiter,resourcedetection]
      exporters: [googlemanagedprometheus]
    traces:
      receivers: [otlp]
      processors: [filter/ottl,probabilistic_sampler]
      exporters: [googlecloud,spanmetrics]
    logs:
      receivers: [otlp]
      processors: []
      exporters: [logging]
ashish121092 added the "needs triage" label on Apr 12, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

crobert-1 added the "processor/resourcedetection" label on Apr 12, 2024

Pinging code owners for processor/resourcedetection: @Aneurysm9 @dashpole. See Adding Labels via Comments if you do not have permissions to add labels yourself.


This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.


dmitryax (Member) commented:

@ashish121092 I submitted a PR, #33714, that fixes this issue. Feel free to take a look.

dmitryax added a commit that referenced this issue Jun 25, 2024
New attributes are added to the MongoDB receiver to distinguish metrics coming from different MongoDB instances:
- `server.address`: the address of the MongoDB host, enabled by default.
- `server.port`: the port of the MongoDB host, disabled by default.

Resolves #32350 and #32810.

Co-authored-by: Curtis Robert <crobert@splunk.com>
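As a rough sketch (not taken from the thread), once a collector release containing #33714 is in use, the optional `server.port` resource attribute can be turned on through the receiver's `resource_attributes` toggles (assuming the standard mdatagen-style settings), and, mirroring the `host_name` statement already present in the configuration above, the new attributes can be copied onto datapoints so they appear as metric labels in Google Managed Prometheus. The `server_address` and `server_port` label names on the transform lines are illustrative, not prescribed by the receiver:

receivers:
  mongodb:
    hosts:
      - endpoint: 10.224.10.25:27017
    # server.address is emitted by default after the fix;
    # server.port is optional and must be enabled explicitly (assumed mdatagen-style toggle)
    resource_attributes:
      server.port:
        enabled: true

processors:
  transform:
    metric_statements:
      - context: datapoint
        statements:
          # copy the receiver's resource attributes onto each datapoint as plain metric labels
          - set(attributes["server_address"], resource.attributes["server.address"])
          - set(attributes["server_port"], resource.attributes["server.port"])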