# [RFC] Add host metric fields to ECS #950
@@ -26,6 +26,17 @@ Proposed 7 new fields are:
<!--
Stage 1: Describe at a high level how this change affects fields. Which fieldsets will be impacted? How many fields overall? Are we primarily adding fields, removing fields, or changing existing fields? The goal here is to understand the fundamental technical implications and likely extent of these changes. ~2-5 sentences.
-->
This RFC calls for the addition of host fields to collect basic monitoring metrics from a host or VM, such as CPU, network, and disk.

| field | type | description |
| --- | --- | --- |
| `host.cpu.pct` | scaled_float | Percent CPU used. This value is normalized by the number of CPU cores and it ranges from 0 to 1. |
kaiyan-sheng marked this conversation as resolved.
| `host.network.in.bytes` | long | The number of bytes received on all network interfaces by the host in a given period of time. |
> In the observer namespaces, ECS uses "ingress" and "egress". For consistency we might want to consider using those terms.

> Can we document the nature of this counter? Is it a monotonic counter that is sometimes reset (e.g. system restart)?

> @cyrille-leclerc These values will actually be gauges. For example,

> ++ on aligning on

> thanks @kaiyan-sheng I was used to ever-increasing counters for this kind of metric but I don't know what the state of the art is today.

> I agree with monotonic counters being superior here. Capturing rates may make it easier to render the data, however rates are lossy. As Cyrille pointed out, counters are more resilient to an agent missing a beat (pun intended 😄). But counters are also superior for data rollups. Rolling up initial /10s metrics to /5m or hourly percentiles with a counter is trivial. With rates it's not so easy, as the basic piece of data is already an average. Not sure if we can do better in the Elastic Stack than the tool I was using in the past, though... I'm not opposed to capturing rates if they're dramatically easier to work with in general, but I would like them to be accompanied with the counter backing them. WDYT?

> I think we're confusing two different things here.

> @webmat @cyrille-leclerc Agree on the benefit of ever-increasing counters. We will definitely keep all the counters as they are right now from the system module. We only added extra calculation using these counters to get gauges in the system module, to match whatever we get from other resource providers such as AWS, Azure, and GCP. Unfortunately, they don't provide ever-increasing counters in their monitoring metrics, as @sorantis mentioned above. Also on the UI side, it's definitely easier to have metrics as gauges than counters.
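The counter-versus-gauge discussion above can be made concrete with a small sketch (the helper name and sample values are hypothetical illustration, not Metricbeat code): deriving a per-period gauge from a monotonic counter, with a best-effort fallback when the counter resets, e.g. on host restart.

```python
def counter_delta(prev: int, curr: int) -> int:
    """Per-period delta (gauge) from two monotonic counter samples.

    If the counter went backwards, assume it was reset (e.g. host
    restart) and fall back to the current value as a best-effort delta.
    """
    if curr >= prev:
        return curr - prev
    return curr

# Hypothetical samples of a monotonic bytes-received counter,
# with a reset occurring before the last sample.
samples = [1000, 1500, 2500, 300]
gauges = [counter_delta(a, b) for a, b in zip(samples, samples[1:])]
print(gauges)  # [500, 1000, 300]
```

This is the trade-off the thread describes: the derived gauge is easy to render but lossy across a reset, whereas the raw counter preserves the full history.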
| `host.network.in.packets` | long | The number of packets received on all network interfaces by the host in a given period of time. |
| `host.network.out.bytes` | long | The number of bytes sent out on all network interfaces by the host in a given period of time. |
> We have in ES now @webmat ECS should add support for _meta.

> This is a good point, and I think we should avoid postfixes just for the sake of making the unit explicit. In the case of

> I wonder how we should differentiate all the different metrics there, especially between bytes and packets.

> Ha, interesting case. I'm tempted to say in this context the unit name might make sense. In any case, we should still add the info to _meta as this is what should be used by Kibana. Alternative
| `host.network.out.packets` | long | The number of packets sent out on all network interfaces by the host in a given period of time. |
| `host.disk.read.bytes` | long | The total number of bytes read successfully in a given period of time. |
kaiyan-sheng marked this conversation as resolved.
| `host.disk.write.bytes` | long | The total number of bytes written successfully in a given period of time. |
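Ahead of stage 2, here is a sketch of what the yml definitions for two of these fields might look like (the `level` values and the `scaling_factor` of 1000 are assumptions that are still open questions in this review, not settled decisions):

```yml
- name: host.cpu.pct
  level: extended          # assumption: level not yet decided
  type: scaled_float
  scaling_factor: 1000     # open question in this review: 100 vs 1000
  short: Percent CPU used, normalized by the number of CPU cores.
  description: >
    Percent CPU used. This value is normalized by the number of CPU
    cores and it ranges from 0 to 1.

- name: host.network.in.bytes
  level: extended          # assumption: level not yet decided
  type: long
  short: Bytes received on all network interfaces by the host.
  description: >
    The number of bytes received on all network interfaces by the
    host in a given period of time.
```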
<!--
Stage 2: Include new or updated yml field definitions for all of the essential fields in this draft. While not exhaustive, the fields documented here should be comprehensive enough to deeply evaluate the technical considerations of this change. The goal here is to validate the technical details for all essential fields and to provide a basis for adding experimental field definitions to the schema. Use GitHub code blocks with yml syntax formatting.

@@ -41,11 +52,23 @@ Stage 3: Add or update all remaining field definitions. The list should now be e

Stage 1: Describe at a high-level how these field changes will be used in practice. Real world examples are encouraged. The goal here is to understand how people would leverage these fields to gain insights or solve problems. ~1-3 paragraphs.
-->
These host metrics will be collected from different kinds of hosts, such as bare metal, virtual machines, or virtual machines on public clouds like AWS, Azure, and GCP. These host metrics will be the standard minimal set used in resource-centric UI views. For example, when a user has VMs on bare metal, AWS, and Azure, these host fields will be collected from all VMs across all platforms and displayed in a centralized location for a better monitoring experience.
## Source data

<!--
Stage 1: Provide a high-level description of example sources of data. This does not yet need to be a concrete example of a source document, but instead can simply describe a potential source (e.g. nginx access log). This will ultimately be fleshed out to include literal source examples in a future stage. The goal here is to identify practical sources for these fields in the real world. ~1-3 sentences or unordered list.
-->
* Bare metal
* VMs
* AWS EC2 instances
* GCP compute engines
* Azure compute VMs
> I'm not sure if the following is applicable, but would we eventually want to capture the same metrics for containers as well? If that's the case, perhaps we should consider defining this set of metrics independently, and make them nestable both under "host" and "container".

> Good point! We haven't got to
<!--
Stage 2: Included a real world example source document. Ideally this example comes from the source(s) identified in stage 1. If not, it should replace them. The goal here is to validate the utility of these field changes in the context of a real world example. Format with the source name as a ### header and the example document in a GitHub code block with json formatting.

@@ -65,12 +88,28 @@ Stage 2: Identifies scope of impact of changes. Are breaking changes required? S

The goal here is to research and understand the impact of these changes on users in the community and development teams across Elastic. 2-5 sentences each.
-->
No breaking changes required. These are new fields already added into Metricbeat:
* aws ec2 metricset
* googlecloud compute metricset
* azure compute_vm metricset

The only change would be that, once these fields are in ECS, we can remove these fields from the `metricbeat/_meta/fields.common.yml` file.
## Concerns

<!--
Stage 1: Identify potential concerns, implementation challenges, or complexity. Spend some time on this. Play devil's advocate. Try to identify the sort of non-obvious challenges that tend to surface later. The goal here is to surface risks early, allow everyone the time to work through them, and ultimately document resolution for posterity's sake.
-->
We need to carefully define each field, because when these metrics are collected from different platforms/services, the scope of these metrics changes. We need to make sure that when users are using these metrics, they are all collected to represent the same thing. For example, `host.network.in.bytes` needs to be an aggregated value across all network interfaces, and `host.cpu.pct` needs to be a normalized value between 0 and 1.
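To make the normalization requirement concrete, a minimal sketch (the helper names and the per-core/per-interface readings are made up for illustration):

```python
def host_cpu_pct(per_core_pcts: list[float]) -> float:
    """Normalize per-core CPU usage (each 0..1) into a single 0..1
    value, as the RFC requires: total divided by the core count."""
    return sum(per_core_pcts) / len(per_core_pcts)

def host_network_in_bytes(per_interface_bytes: dict[str, int]) -> int:
    """Aggregate bytes received across all interfaces, per the RFC."""
    return sum(per_interface_bytes.values())

# Illustration: a 4-core host and two network interfaces.
print(host_cpu_pct([0.5, 0.25, 0.25, 0.0]))                # 0.25
print(host_network_in_bytes({"eth0": 1200, "eth1": 300}))  # 1500
```

The point of the concern is that every data source must apply the same conventions: a provider reporting 100% per busy core, or per-interface byte counts, would need this normalization and aggregation before the values are comparable.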
<!--
Stage 2: Document new concerns or resolutions to previously listed concerns. It's not critical that all concerns have resolutions at this point, but it would be helpful if resolutions were taking shape for the most significant concerns.
-->
@@ -117,6 +156,7 @@ e.g.:
<!-- An RFC should link to the PRs for each of its stage advancements. -->

* Stage 0: https://github.com/elastic/ecs/pull/947
* Stage 1: https://github.com/elastic/ecs/pull/950
<!--
* Stage 1: https://github.com/elastic/ecs/pull/NNN
-->
> It's important to specify the `scaling_factor` so that we know how the value should be interpreted.

> Can we document how this CPU percentage works on a server with multiple CPU cores? For example, if I have a server with 12 CPU cores, will this "CPU percentage" max out at `1200%` or at `100%`?

> I believe the way that the endpoint team has done this in the past is that 12 CPU cores has a max of `1200.0123...`, right @ferullo?

> Thanks for the input! This `host.cpu.pct` will be the normalized value ranging from 0 to 1 only. If the server has 12 CPU cores, then this normalized value will be `total cpu percentage` / 12.

> Would that make `scaling_factor: 100`? This would be the first time a `scaled_float` is defined in ECS, and we'll need to add support for defining `scaling_factor` in the schema. I don't see any issue with the addition - just noting that dependency.

> @ebeahan I was thinking to keep `scaling_factor` at its default value, so a value of 0.12, for example, will be stored into ES as 0.12 itself. Maybe I should just use `float` here instead? Thanks!

> Unless there's disagreement on this (I'm fine with it), let's update the definition in the RFC to make this a bit more clear. I'm not sure the current phrasing "normalized by the number of cores" makes it clear enough. Could be as simple as adding an example. Or we could make it more literal and say "if there are 12 cores, this should be the average of the 12 cores, between 0 and 1".

> On the `scaling_factor`: it's a required parameter for this datatype; there's no default value here. Note that the source doc is not affected by it and would always contain the full float value. A scaling factor of 100 would mean 0.123 (for 12.3%) would only store 0.12 in the index, for aggregations & so on. If we want to aggregate on full-digit percents, a scaling_factor of 100 is appropriate. @kaiyan-sheng or @cyrille-leclerc, is 100 appropriate for this use case? Who can chime in on how observability intends to query this field?
>
> Experiment with scaled_float: I was hoping that `scaled_float` being backed by a long meant it wouldn't have float artifacts, but with the simple test below, the last two documents ingested show up with `...0000001` in the aggregation 🤷‍♂️

> @webmat Thank you for the explanation!! I think I would prefer a scaling_factor of 1000 in this case, so for CPU usage we are keeping one decimal when displaying it in percentage format. For example, I would like 0.1234 to be stored as 0.123 in the index, so it will show as 12.3%. But if saving space for storing these values is a concern, a scaling_factor of 100 is also fine for me. @exekias WDYT?

> We can try to do some testing and compare sizes; 100 has always looked a bit limiting, and some more precision would be good.
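To make the scaling_factor trade-off in this thread concrete, here is a rough sketch of the quantization `scaled_float` applies at index time (a simplification: Elasticsearch rounds value × scaling_factor to a long and divides back for aggregations, while the source document keeps the full float):

```python
def scaled_float_stored(value: float, scaling_factor: int) -> float:
    """Approximate the value a scaled_float exposes to aggregations:
    the float is multiplied by the scaling factor, rounded to a long,
    and divided back when read. (Simplified model, not ES code.)"""
    return round(value * scaling_factor) / scaling_factor

print(scaled_float_stored(0.1234, 100))   # 0.12  -> renders as 12%
print(scaled_float_stored(0.1234, 1000))  # 0.123 -> renders as 12.3%
```

This shows why a scaling_factor of 100 caps the field at whole-percent precision, while 1000 keeps one decimal place in percentage form, at some cost in index size.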