System metrics semantic conventions #937

Merged: 32 commits, merged Oct 15, 2020

Commits
1040fc2
System metrics semantic conventions
aabmass Sep 9, 2020
f7f2ef7
change process count to UpDownSumObserver
aabmass Sep 11, 2020
98d72a1
fix system.cpu.utilization, use better example
aabmass Sep 11, 2020
9d20079
first several comments
aabmass Sep 24, 2020
fd6375e
add description columns, update units to UCUM
aabmass Sep 24, 2020
9d871af
Merge branch 'master' into system-metrics-818
aabmass Sep 24, 2020
a0e3e2d
markdown-toc
aabmass Sep 24, 2020
7d02a69
Merge branch 'master' into system-metrics-818
aabmass Sep 28, 2020
4f7d3e1
clarify OS process level metrics
aabmass Sep 28, 2020
dc13aa2
clarify load average exapmle
aabmass Sep 28, 2020
5e7cde9
Merge branch 'master' into system-metrics-818
aabmass Oct 1, 2020
ceb99bb
move general conventions + OTEP 108 into README.md
aabmass Oct 1, 2020
45ae1f8
renamed swap -> paging
aabmass Oct 1, 2020
b3f7508
add addition fs labels
aabmass Oct 1, 2020
2512dd7
fix links
aabmass Oct 1, 2020
964c535
fix link
aabmass Oct 1, 2020
cde2393
Update specification/metrics/semantic_conventions/README.md
aabmass Oct 6, 2020
b758d24
Update specification/metrics/semantic_conventions/README.md
aabmass Oct 6, 2020
c9a37fb
Apply suggestions from code review
aabmass Oct 8, 2020
6c1c579
fix tigran comments
aabmass Oct 6, 2020
5ffcb58
add disk io_time and operation_time
aabmass Oct 8, 2020
1b90514
add descriptions/footnotes for dropped packets and net errors
aabmass Oct 8, 2020
5ffd8d0
Merge branch 'master' into system-metrics-818
aabmass Oct 8, 2020
7b14a93
lint, more info for net dropped packets/errors
aabmass Oct 8, 2020
a903783
"dropped_packets" -> "dropped"
aabmass Oct 9, 2020
c218cac
Apply suggestions from James' code review
aabmass Oct 12, 2020
09a31b7
comments from James' code review
aabmass Oct 12, 2020
fdea5e4
Merge branch 'master' into system-metrics-818
aabmass Oct 12, 2020
8fec8f9
clarify windows perf counter
aabmass Oct 12, 2020
aa5e16e
Update specification/metrics/semantic_conventions/README.md
aabmass Oct 15, 2020
aa28566
reflow text
aabmass Oct 15, 2020
7f808ab
Merge branch 'master' into system-metrics-818
aabmass Oct 15, 2020
2 changes: 2 additions & 0 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -43,6 +43,8 @@ New:
([#994](https://github.com/open-telemetry/opentelemetry-specification/pull/994))
- Add Metric SDK specification (partial): covering terminology and Accumulator component
([#626](https://github.com/open-telemetry/opentelemetry-specification/pull/626))
- Add semantic conventions for system metrics
([#937](https://github.com/open-telemetry/opentelemetry-specification/pull/937))

Updates:

109 changes: 108 additions & 1 deletion specification/metrics/semantic_conventions/README.md
@@ -1,7 +1,114 @@
# Metrics Semantic Conventions

TODO: Add semantic conventions for metric names and labels.
The following semantic conventions surrounding metrics are defined:

* [HTTP Metrics](http-metrics.md): Semantic conventions and instruments for HTTP metrics.
* [System Metrics](system-metrics.md): Semantic conventions and instruments for standard system metrics.
* [Process Metrics](process-metrics.md): Semantic conventions and instruments for standard process metrics.
* [Runtime Environment Metrics](runtime-environment-metrics.md): Semantic conventions and instruments for runtime environment metrics.

Apart from semantic conventions for metrics and [traces](../../trace/semantic_conventions/README.md),
OpenTelemetry also defines the concept of overarching [Resources](../../resource/sdk.md) with their own
[Resource Semantic Conventions](../../resource/semantic_conventions/README.md).

## General Guidelines

Metric names and labels exist within a single universe and a single
hierarchy. Metric names and labels MUST be considered within the universe of
all existing metric names. When defining new metric names and labels,
consider the prior art of existing standard metrics and metrics from
frameworks/libraries.

Associated metrics SHOULD be nested together in a hierarchy based on their
usage. Define a top-level hierarchy for common metric categories: for OS
metrics, like CPU and network; for app runtimes, like GC internals. Libraries
and frameworks should nest their metrics into a hierarchy as well. This aids
in discovery and ad-hoc comparison, allowing a user to find similar metrics
given a certain metric.

The hierarchical structure of metrics defines the namespacing. Supporting
OpenTelemetry artifacts define the metric structures and hierarchies for some
categories of metrics, and these can assist decisions when creating future
metrics.

Common labels SHOULD be consistently named. This aids in discoverability and
disambiguates similar labels to metric names.

["As a rule of thumb, **aggregations** over all the dimensions of a given
metric **SHOULD** be
meaningful,"](https://prometheus.io/docs/practices/naming/#metric-names) as
Prometheus recommends.

Semantic ambiguity SHOULD be avoided. Use prefixed metric names in cases
where similar metrics have significantly different implementations across the
breadth of all existing metrics. For example, every garbage collected runtime
has slightly different strategies and measures. Using a single set of metric
names for GC, not divided by the runtime, could create dissimilar comparisons
and confusion for end users. (For example, prefer `runtime.java.gc*` over
`runtime.gc.*`.) Measures of many operating system metrics are similarly
ambiguous.

For conventional metrics or metrics that have their units included in
OpenTelemetry metadata (e.g. `metric.WithUnit` in Go), the units SHOULD NOT
be included in the metric name. Units may be included when they provide
additional meaning to the metric name. Metrics MUST, above all, be
understandable and usable.

## General Metric Semantic Conventions

The following semantic conventions aim to keep naming consistent. They
provide guidelines for most of the cases in this specification and should be
followed for other instruments not explicitly defined in this document.

### Instrument Naming

- **limit** - an instrument that measures the constant, known total amount of
something should be called `entity.limit`. For example, `system.memory.limit`
for the total amount of memory on a system.

- **usage** - an instrument that measures an amount used out of a known total
(**limit**) amount should be called `entity.usage`. For example,
`system.memory.usage` with label `state = used | cached | free | ...` for the
amount of memory in each state. In many cases, the sum of **usage** over
all label values is equal to the **limit**.

A measure of the amount of an unlimited resource consumed is differentiated
from **usage**.

- **utilization** - an instrument that measures the *fraction* of **usage**
out of its **limit** should be called `entity.utilization`. For example,
`system.memory.utilization` for the fraction of memory in use. Utilization
values are in the range `[0, 1]`.

- **time** - an instrument that measures passage of time should be called
`entity.time`. For example, `system.cpu.time` with label `state = idle | user
| system | ...`. **time** measurements are not necessarily wall time and can
be less than or greater than the real wall time between measurements.

**time** instruments are a special case of **usage** metrics, where the
**limit** can usually be calculated as the sum of **time** over all label
values. **utilization** can also be calculated and useful, for example
`system.cpu.utilization`.

- **io** - an instrument that measures bidirectional data flow should be
called `entity.io` and have labels for direction. For example,
`system.network.io`.

- Other instruments that do not fit the above descriptions may be named more
freely. For example, `system.paging.faults` and `system.network.packets`.
Units do not need to be specified in the names since they are included during
instrument creation, but can be added if there is ambiguity.
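As an illustrative, non-normative sketch of the naming conventions above: the metric names below come from this specification, but the helper function and the sample values are purely hypothetical.

```python
# Illustrative sketch of the limit / usage / utilization / time
# relationships described above. Metric names follow the spec;
# the helper and values are made up for demonstration.

def utilization(usage: float, limit: float) -> float:
    """Compute an `entity.utilization` value in the range [0, 1]."""
    return usage / limit

# `system.memory.usage` broken down by the `state` label. The sum
# over all label values equals `system.memory.limit`.
memory_usage = {"used": 6.0e9, "cached": 1.5e9, "free": 0.5e9}
memory_limit = sum(memory_usage.values())  # plays the role of system.memory.limit

# `system.memory.utilization` for the "used" state: 6e9 / 8e9 = 0.75.
used_utilization = utilization(memory_usage["used"], memory_limit)

# `system.cpu.time` per `state` label (seconds). As noted above, the
# limit for a **time** instrument is the sum over all label values,
# so a derived `system.cpu.utilization` can be computed the same way.
cpu_time = {"idle": 80.0, "user": 15.0, "system": 5.0}
busy_utilization = utilization(
    cpu_time["user"] + cpu_time["system"], sum(cpu_time.values())
)
```

The same division underlies any `entity.utilization` instrument, which is why utilization values are always dimensionless and bounded by `[0, 1]`.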

### Units

Units should follow [UCUM](http://unitsofmeasure.org/ucum.html) (further
clarification is being discussed in
[#705](https://github.com/open-telemetry/opentelemetry-specification/issues/705)).

- Instruments for **utilization** metrics (that measure the fraction out of a
total) are dimensionless and SHOULD use the default unit `1` (the unity).
- Instruments that measure an integer count of something SHOULD use the
default unit `1` (the unity) and
[annotations](https://ucum.org/ucum.html#para-curly) with curly braces to
give additional meaning. For example `{packets}`, `{errors}`, `{faults}`,
etc.
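A hedged sketch of how the unit guidance above might look in practice. The instrument names and UCUM strings (`By` for bytes, `s` for seconds, `1` for the unity, `{...}` for annotations) match the conventions described here, but this particular mapping and helper are illustrative, not part of the specification.

```python
# Illustrative mapping of instrument names to UCUM unit strings per
# the guidance above. `1` is the dimensionless unity; curly braces
# are UCUM annotations that add meaning without changing dimension.
UNITS = {
    "system.memory.usage": "By",            # bytes
    "system.cpu.time": "s",                 # seconds
    "system.cpu.utilization": "1",          # dimensionless fraction
    "system.network.packets": "{packets}",  # count with annotation
    "system.paging.faults": "{faults}",     # count with annotation
}

def is_dimensionless(unit: str) -> bool:
    """True for the unity `1` and for pure UCUM annotations like `{packets}`."""
    return unit == "1" or (unit.startswith("{") and unit.endswith("}"))
```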
25 changes: 25 additions & 0 deletions specification/metrics/semantic_conventions/process-metrics.md
@@ -0,0 +1,25 @@
# Semantic Conventions for OS Process Metrics

This document describes instruments and labels for common OS process level
metrics in OpenTelemetry. Also consider the [general metric semantic
conventions](README.md#general-metric-semantic-conventions) when creating
instruments not explicitly defined in this document. OS process metrics are
not related to the runtime environment of the program, and should take
measurements from the operating system. For runtime environment metrics see
[semantic conventions for runtime environment
metrics](runtime-environment-metrics.md).

<!-- Re-generate TOC with `markdown-toc --no-first-h1 -i` -->

<!-- toc -->

- [Metric Instruments](#metric-instruments)
* [Standard Process Metrics - `process.`](#standard-process-metrics---process)

<!-- tocstop -->

## Metric Instruments

### Standard Process Metrics - `process.`

TODO
47 changes: 47 additions & 0 deletions specification/metrics/semantic_conventions/runtime-environment-metrics.md
@@ -0,0 +1,47 @@
# Semantic Conventions for Runtime Environment Metrics

This document includes semantic conventions for runtime environment level
metrics in OpenTelemetry. Also consider the [general
metric](README.md#general-metric-semantic-conventions), [system
metrics](system-metrics.md) and [OS Process metrics](process-metrics.md)
semantic conventions when instrumenting runtime environments.

<!-- Re-generate TOC with `markdown-toc --no-first-h1 -i` -->

<!-- toc -->

- [Metric Instruments](#metric-instruments)
* [Runtime Environment Metrics - `runtime.`](#runtime-environment-metrics---runtime)
+ [Runtime Environment Specific Metrics - `runtime.{environment}.`](#runtime-environment-specific-metrics---runtimeenvironment)

<!-- tocstop -->

## Metric Instruments

### Runtime Environment Metrics - `runtime.`

Runtime environments vary widely in their terminology, implementation, and
relative values for a given metric. For example, Go and Python are both
garbage collected languages, but comparing heap usage between the Go and
CPython runtimes directly is not meaningful. For this reason, this document
does not propose any standard top-level runtime metric instruments. See [OTEP
108](https://github.com/open-telemetry/oteps/pull/108/files) for additional
discussion.

#### Runtime Environment Specific Metrics - `runtime.{environment}.`

Metrics specific to a certain runtime environment should be prefixed with
`runtime.{environment}.` and follow the semantic conventions outlined in
[general metric semantic
conventions](README.md#general-metric-semantic-conventions). Authors of
runtime instrumentations are responsible for the choice of `{environment}` to
avoid ambiguity when interpreting a metric's name or values.

For example, some programming languages have multiple runtime environments
that vary significantly in their implementation, like [Python which has many
implementations](https://wiki.python.org/moin/PythonImplementations). For
such languages, consider using specific `{environment}` prefixes to avoid
ambiguity, like `runtime.cpython.` and `runtime.pypy.`.

There are other dimensions even within a given runtime environment to
consider, for example pthreads vs green thread implementations.
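As a rough sketch of the `runtime.{environment}.` prefixing convention described above (the helper function and the `gc.count` suffix are hypothetical, not names defined by this specification):

```python
# Hypothetical helper showing the `runtime.{environment}.` prefix
# convention; the specific environment names and metric suffix are
# illustrative only.
import sys

def runtime_metric_name(environment: str, suffix: str) -> str:
    """Prefix a runtime-specific metric with `runtime.{environment}.`."""
    return f"runtime.{environment}.{suffix}"

# Prefer a specific environment name ("cpython", "pypy") over an
# ambiguous one ("python") so the metric is interpretable on its own.
environment = sys.implementation.name  # "cpython" on the CPython runtime
metric_name = runtime_metric_name(environment, "gc.count")
```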