
[8.14](backport #40367) Azure Monitor: fix metric timespan to restore Storage Account PT1H metrics #40413

Merged
merged 1 commit into 8.14 from mergify/bp/8.14/pr-40367 on Aug 1, 2024

Conversation

mergify[bot]
Contributor

@mergify mergify bot commented Aug 1, 2024

Proposed commit message

Move the timespan logic into a dedicated `buildTimespan()` function with a test for each supported use case.

Some Azure services have a longer latency between service usage and metric availability. For example, the Storage Account capacity metrics (Blob capacity, etc.) have a PT1H time grain and become available after one hour. Service X also has PT1H metrics; however, they become available after a few minutes.

This PR restores the core of the older timespan logic the Azure Monitor metricset was using before the regression introduced by PR #36823.

However, `buildTimespan()` does not restore the `interval * (-2)` part, because doubling the interval causes duplicate metric values.
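
For illustration, here is a minimal Go sketch of how such a `buildTimespan()` helper could work. The function name follows the PR, but the signature, the `asTimeGrainDuration` helper, and the `start/end` timespan format are assumptions for this sketch, not the actual Beats implementation:

```go
package azure

import (
	"fmt"
	"time"
)

// asTimeGrainDuration maps the ISO 8601 time grains used in this sketch to
// time.Duration values (hypothetical helper, not the Beats implementation).
func asTimeGrainDuration(timeGrain string) time.Duration {
	switch timeGrain {
	case "PT1M":
		return time.Minute
	case "PT5M":
		return 5 * time.Minute
	case "PT1H":
		return time.Hour
	default:
		return time.Minute
	}
}

// buildTimespan returns an Azure Monitor timespan ("start/end", RFC 3339)
// that covers the larger of the metric time grain and the collection
// interval. The interval is deliberately not doubled (no `interval * (-2)`),
// since a wider window returns duplicate metric values.
func buildTimespan(referenceTime time.Time, timeGrain string, interval time.Duration) string {
	span := interval
	if grain := asTimeGrainDuration(timeGrain); grain > span {
		span = grain
	}
	endTime := referenceTime.UTC()
	startTime := endTime.Add(-span)
	return fmt.Sprintf("%s/%s", startTime.Format(time.RFC3339), endTime.Format(time.RFC3339))
}
```

With a 5-minute collection interval, a PT1H metric gets a one-hour timespan (so the hourly Storage Account values still fall inside the window), while PT1M and PT5M metrics keep a five-minute timespan.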

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Author's Checklist

  • Test with Microsoft.ContainerRegistry/registries metrics (PT1M and PT1H)
  • Test with Microsoft.ContainerInstance/containerGroups metrics (PT5M)
  • Test with Microsoft.KeyVault/vaults metrics (PT1M)
  • Test with Storage Account metrics (PT5M and PT1H)
  • Test with Virtual Machines metrics (PT5M)

Related issues


This is an automatic backport of pull request #40367 done by [Mergify](https://mergify.com).

…trics (#40367)

Move the timespan logic into a dedicated `buildTimespan()` function with a test for each supported use case.

Some Azure services have a longer latency between service usage and metric availability. For example, the Storage Account capacity metrics (Blob capacity, etc.) have a PT1H time grain and become available after one hour. Service X also has PT1H metrics; however, they become available after a few minutes.

This PR restores the core of the [older timespan logic](https://github.com/elastic/beats/blob/d3facc808d2ba293a42b2ad3fc8e21b66c5f2a7f/x-pack/metricbeat/module/azure/client.go#L110-L116) the Azure Monitor metricset was using before the regression introduced by PR #36823.

However, `buildTimespan()` does not restore the `interval * (-2)` part, because doubling the interval causes duplicate metric values.

(cherry picked from commit 5fccb0d)
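
The commit message mentions a test for each supported use case. A table-driven test for the sketch above could look like the following (hypothetical, assuming the `buildTimespan` sketch from the comment above lives in the same package):

```go
package azure

import (
	"testing"
	"time"
)

// TestBuildTimespan checks one case per supported time grain against the
// buildTimespan sketch above (hypothetical test, not the Beats test suite).
func TestBuildTimespan(t *testing.T) {
	referenceTime := time.Date(2024, 8, 1, 10, 0, 0, 0, time.UTC)

	cases := []struct {
		name      string
		timeGrain string
		interval  time.Duration
		want      string
	}{
		{"PT1M grain, 5m interval", "PT1M", 5 * time.Minute, "2024-08-01T09:55:00Z/2024-08-01T10:00:00Z"},
		{"PT5M grain, 5m interval", "PT5M", 5 * time.Minute, "2024-08-01T09:55:00Z/2024-08-01T10:00:00Z"},
		{"PT1H grain, 5m interval", "PT1H", 5 * time.Minute, "2024-08-01T09:00:00Z/2024-08-01T10:00:00Z"},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := buildTimespan(referenceTime, tc.timeGrain, tc.interval); got != tc.want {
				t.Errorf("buildTimespan() = %q, want %q", got, tc.want)
			}
		})
	}
}
```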
@mergify mergify bot requested a review from a team as a code owner August 1, 2024 10:44
@mergify mergify bot added the backport label Aug 1, 2024
@mergify mergify bot assigned zmoog Aug 1, 2024
@botelastic botelastic bot added the needs_team (Indicates that the issue/PR needs a Team:* label) label Aug 1, 2024
@zmoog zmoog added the Team:obs-ds-hosted-services (Label for the Observability Hosted Services team) label Aug 1, 2024
@elasticmachine
Collaborator

Pinging @elastic/obs-ds-hosted-services (Team:obs-ds-hosted-services)

@botelastic botelastic bot removed the needs_team (Indicates that the issue/PR needs a Team:* label) label Aug 1, 2024
@zmoog zmoog merged commit 640f2de into 8.14 Aug 1, 2024
19 checks passed
@zmoog zmoog deleted the mergify/bp/8.14/pr-40367 branch August 1, 2024 12:56