Commit

Merge branch 'main' into transform-attributes
akats7 authored Jul 31, 2023
2 parents 4c96c83 + fe3a413 commit bd615a1
Showing 90 changed files with 7,291 additions and 386 deletions.
3 changes: 3 additions & 0 deletions .github/component_owners.yml
@@ -22,6 +22,9 @@ components:
consistent-sampling:
- oertl
- PeterF778
disk-buffering:
- LikeTheSalad
- zeitlinger
samplers:
- iNikem
- trask
40 changes: 0 additions & 40 deletions .github/dependabot.yml

This file was deleted.

46 changes: 46 additions & 0 deletions .github/renovate.json5
@@ -0,0 +1,46 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:base"
],
"packageRules": [
{
"matchPackagePrefixes": ["ch.qos.logback:"],
"groupName": "logback packages"
},
{
"matchPackagePrefixes": ["com.gradle.enterprise"],
"groupName": "gradle enterprise packages"
},
{
// junit-pioneer 2+ requires Java 11+
"matchPackageNames": ["org.junit-pioneer:junit-pioneer"],
"matchUpdateTypes": ["major"],
"enabled": false
},
{
// mockito 5+ requires Java 11+
"matchPackagePrefixes": ["org.mockito:"],
"matchUpdateTypes": ["major"],
"enabled": false
},
{
// pinned version for compatibility
"matchPackageNames": ["io.micrometer:micrometer-core"],
"matchCurrentVersion": "1.1.0",
"enabled": false
},
{
// pinned version for compatibility
"matchPackagePrefixes": ["org.apache.maven:"],
"matchCurrentVersion": "3.5.0",
"enabled": false
}
]
}
1 change: 1 addition & 0 deletions .github/scripts/draft-change-log-entries.sh
@@ -28,6 +28,7 @@ component_names["aws-resources/"]="AWS resources"
component_names["aws-xray/"]="AWS X-Ray SDK support"
component_names["aws-xray-propagator/"]="AWS X-Ray propagator"
component_names["consistent-sampling/"]="Consistent sampling"
component_names["disk-buffering/"]="Disk buffering"
component_names["jfr-connection/"]="JFR connection"
component_names["jfr-events/"]="JFR events"
component_names["jmx-metrics/"]="JMX metrics"
2 changes: 1 addition & 1 deletion .github/workflows/gradle-wrapper-validation.yml
@@ -13,4 +13,4 @@ jobs:
steps:
- uses: actions/checkout@v3

- uses: gradle/wrapper-validation-action@v1.0.6
- uses: gradle/wrapper-validation-action@v1.1.0
34 changes: 0 additions & 34 deletions .github/workflows/update-gradle-wrappers-daily.yml

This file was deleted.

12 changes: 12 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,18 @@

## Unreleased

## Version 1.28.0 (2023-07-14)

### AWS X-Ray SDK support

- Generate error/fault metrics by AWS SDK status code
([#924](https://github.com/open-telemetry/opentelemetry-java-contrib/pull/924))

### Disk buffering - New 🌟

This module provides signal exporter wrappers that intercept and store telemetry signals in files,
which can be sent later on demand.

## Version 1.27.0 (2023-06-16)

### AWS X-Ray SDK support
@@ -30,7 +30,6 @@ void serviceNameOnly() {
try (SdkTracerProvider tracerProvider =
AutoConfiguredOpenTelemetrySdk.builder()
.addPropertiesSupplier(() -> props)
.setResultAsGlobal(false)
.build()
.getOpenTelemetrySdk()
.getSdkTracerProvider()) {
@@ -62,7 +61,6 @@ void setEndpoint() {
try (SdkTracerProvider tracerProvider =
AutoConfiguredOpenTelemetrySdk.builder()
.addPropertiesSupplier(() -> props)
.setResultAsGlobal(false)
.build()
.getOpenTelemetrySdk()
.getSdkTracerProvider()) {
6 changes: 3 additions & 3 deletions buildSrc/build.gradle.kts
@@ -1,7 +1,7 @@
plugins {
`kotlin-dsl`
// When updating, update below in dependencies too
id("com.diffplug.spotless") version "6.19.0"
id("com.diffplug.spotless") version "6.20.0"
}

repositories {
@@ -12,10 +12,10 @@ repositories {

dependencies {
// When updating, update above in plugins too
implementation("com.diffplug.spotless:spotless-plugin-gradle:6.19.0")
implementation("com.diffplug.spotless:spotless-plugin-gradle:6.20.0")
implementation("net.ltgt.gradle:gradle-errorprone-plugin:3.1.0")
implementation("net.ltgt.gradle:gradle-nullaway-plugin:1.6.0")
implementation("com.gradle.enterprise:com.gradle.enterprise.gradle.plugin:3.13.4")
implementation("com.gradle.enterprise:com.gradle.enterprise.gradle.plugin:3.14.1")
}

spotless {
8 changes: 4 additions & 4 deletions dependencyManagement/build.gradle.kts
@@ -14,10 +14,10 @@ rootProject.extra["versions"] = dependencyVersions
val DEPENDENCY_BOMS = listOf(
"com.fasterxml.jackson:jackson-bom:2.15.2",
"com.google.guava:guava-bom:32.1.1-jre",
"com.linecorp.armeria:armeria-bom:1.24.2",
"org.junit:junit-bom:5.9.3",
"com.linecorp.armeria:armeria-bom:1.24.3",
"org.junit:junit-bom:5.10.0",
"io.grpc:grpc-bom:1.56.1",
"io.opentelemetry.instrumentation:opentelemetry-instrumentation-bom-alpha:1.27.0-alpha",
"io.opentelemetry.instrumentation:opentelemetry-instrumentation-bom-alpha:1.28.0-alpha",
"org.testcontainers:testcontainers-bom:1.18.3"
)

@@ -49,7 +49,7 @@ val CORE_DEPENDENCIES = listOf(
)

val DEPENDENCIES = listOf(
"io.opentelemetry.javaagent:opentelemetry-javaagent:1.27.0",
"io.opentelemetry.javaagent:opentelemetry-javaagent:1.28.0",
"com.google.code.findbugs:annotations:3.0.1u2",
"com.google.code.findbugs:jsr305:3.0.2",
"com.squareup.okhttp3:okhttp:4.11.0",
56 changes: 56 additions & 0 deletions disk-buffering/CONTRIBUTING.md
@@ -0,0 +1,56 @@
# Contributor Guide

Each one of the three exporters provided by this
tool ([LogRecordDiskExporter](src/main/java/io/opentelemetry/contrib/disk/buffering/LogRecordDiskExporter.java), [MetricDiskExporter](src/main/java/io/opentelemetry/contrib/disk/buffering/MetricDiskExporter.java)
and [SpanDiskExporter](src/main/java/io/opentelemetry/contrib/disk/buffering/SpanDiskExporter.java))
is responsible for performing two actions: `write` and `read/delegate`. The `write` action happens
automatically as sets of signals are provided by the processor, while `read/delegate` has to be
triggered manually by the consumer of this library, as explained in the [README](README.md).

## Writing overview

![Writing flow](assets/writing-flow.png)

* The writing process happens automatically within each exporter's
`export(Collection<SignalData> signals)` method, which is called by the configured signal
processor.
* When a set of signals is received, these are delegated over to
the [DiskExporter](src/main/java/io/opentelemetry/contrib/disk/buffering/internal/exporters/DiskExporter.java)
class, which serializes them using an implementation
of [SignalSerializer](src/main/java/io/opentelemetry/contrib/disk/buffering/internal/serialization/serializers/SignalSerializer.java);
the serialized data is then appended to a file using an instance of
the [Storage](src/main/java/io/opentelemetry/contrib/disk/buffering/internal/storage/Storage.java)
class.
* The data is written to the file directly, without buffering, to make sure no data gets
lost in case the application ends unexpectedly.
* Each disk exporter stores its signals in its own folder, which is expected to contain files
that belong to that type of signal only.
* Each file may contain more than one batch of signals if the configured size limits allow for it.
* If the configured folder size limit has been reached and a new file needs to be created to keep
storing data, the oldest available file is removed to make room for the new one.
* The [Storage](src/main/java/io/opentelemetry/contrib/disk/buffering/internal/storage/Storage.java),
[FolderManager](src/main/java/io/opentelemetry/contrib/disk/buffering/internal/storage/FolderManager.java)
and [WritableFile](src/main/java/io/opentelemetry/contrib/disk/buffering/internal/storage/files/WritableFile.java)
classes contain more details on how data is written into a file.
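
To make the steps above concrete, here is a minimal sketch of the write path under the
assumptions described in this document (timestamp-named files, direct writes, size-based
rotation). It is illustrative only and simplified from the module's actual classes:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Illustrative sketch of the write path: append a serialized batch to the
// current file, rotating to a new timestamp-named file once the size limit
// is reached. The real implementation also honors the max file age for writing.
final class WritePathSketch {
  private final File signalFolder;
  private final long maxFileSizeBytes;
  private File currentFile;

  WritePathSketch(File signalFolder, long maxFileSizeBytes) {
    this.signalFolder = signalFolder;
    this.maxFileSizeBytes = maxFileSizeBytes;
  }

  void append(byte[] serializedBatch) throws IOException {
    if (currentFile == null
        || currentFile.length() + serializedBatch.length > maxFileSizeBytes) {
      // new files are named with their creation time in milliseconds
      currentFile = new File(signalFolder, String.valueOf(System.currentTimeMillis()));
    }
    try (FileOutputStream out = new FileOutputStream(currentFile, /* append= */ true)) {
      out.write(serializedBatch); // written directly, no in-memory buffering
    }
  }
}
```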

## Reading overview

![Reading flow](assets/reading-flow.png)

* The reading process has to be triggered manually by the library consumer as explained in
the [README](README.md).
* A single file is read at a time; after its data is successfully exported, that data is removed
from the file, until the file is emptied. Each file created during the writing process has a
timestamp in milliseconds, which is used to determine which file to start reading from: the
oldest available one.
* If the oldest available file is stale, which is determined based on the configuration provided
when creating the disk exporter, it is ignored and the next oldest (and unexpired) one is used
instead.
* All stale and empty files are removed when a new file is created.
* The [Storage](src/main/java/io/opentelemetry/contrib/disk/buffering/internal/storage/Storage.java),
[FolderManager](src/main/java/io/opentelemetry/contrib/disk/buffering/internal/storage/FolderManager.java)
and [ReadableFile](src/main/java/io/opentelemetry/contrib/disk/buffering/internal/storage/files/ReadableFile.java)
classes contain more details on the file reading process.
* Note that the reader delegates the data to the exporter exactly as it was received; it does not
try to re-batch the data (though this could be an optimization in the future).
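
As a rough illustration of the selection rules above, this sketch picks the next file to read,
assuming file names are creation timestamps in milliseconds (the minimum read age check is
omitted for brevity); it is not the module's actual code:

```java
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

// Illustrative sketch: pick the oldest unexpired file to read from.
final class ReadPathSketch {
  static File pickFileToRead(File signalFolder, long maxReadAgeMillis) {
    File[] files = signalFolder.listFiles();
    if (files == null || files.length == 0) {
      return null; // nothing stored for this signal type
    }
    // file names are assumed to be creation timestamps in milliseconds
    Arrays.sort(files, Comparator.comparingLong(f -> Long.parseLong(f.getName())));
    long now = System.currentTimeMillis();
    for (File file : files) {
      boolean stale = now - Long.parseLong(file.getName()) > maxReadAgeMillis;
      if (!stale) {
        return file; // oldest unexpired file wins
      }
      // stale files are skipped; the library removes them when new files are created
    }
    return null;
  }
}
```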
113 changes: 113 additions & 0 deletions disk-buffering/README.md
@@ -0,0 +1,113 @@
# Disk buffering

This module provides signal exporter wrappers that intercept and store signals in files, which can
be sent later on demand. At a high level, there are two separate processes in place: one for
writing data to disk, and one for reading/exporting the previously stored data.

* Each exporter automatically stores the data on disk right after it's received from its
processor.
* Reading the data back from disk and exporting it has to be done manually. At
the moment there's no automatic mechanism to do so. There's more information on how this can be
achieved under [Reading data](#reading-data).

> For more detailed information on how the whole process works, take a look at
> the [CONTRIBUTING](CONTRIBUTING.md) file.
## Configuration

The configurable parameters are provided **per exporter**. The available ones are:

* Max file size, defaults to 1MB.
* Max folder size, defaults to 10MB. All files are stored in a single folder per signal, so if
all three types of signals are stored, the total disk space taken by default would be 30MB.
* Max age for file writing, defaults to 30 seconds.
* Min age for file reading, defaults to 33 seconds. It must be greater than the max age for file
writing.
* Max age for file reading, defaults to 18 hours. After that time passes, the file will be
considered stale and will be removed when new files are created. No more data will be read from a
file past this time.
* An instance
of [TemporaryFileProvider](src/main/java/io/opentelemetry/contrib/disk/buffering/internal/files/TemporaryFileProvider.java),
defaults to calling `File.createTempFile`. This provider is used when reading from disk
in order to create a temporary file from which each line (batch of signals) is read and
sequentially removed from the original cache file right after the data has been successfully
exported.
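
For example, a custom configuration could be assembled roughly like the sketch below. Only
`StorageConfiguration.getDefault()` is documented here; the builder and setter names in this
sketch are illustrative assumptions, not a confirmed API:

```java
// Hypothetical builder usage; the setter names are assumptions for illustration.
// Values shown are the documented defaults.
StorageConfiguration config =
    StorageConfiguration.builder()
        .setMaxFileSize(1024 * 1024)                                 // 1MB per file
        .setMaxFolderSize(10 * 1024 * 1024)                          // 10MB per signal folder
        .setMaxFileAgeForWriteMillis(TimeUnit.SECONDS.toMillis(30))  // rotate files after 30s
        .setMinFileAgeForReadMillis(TimeUnit.SECONDS.toMillis(33))   // must exceed the write age
        .setMaxFileAgeForReadMillis(TimeUnit.HOURS.toMillis(18))     // older files are stale
        .build();
```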

## Usage

### Storing data

In order to use it, you need to wrap your own exporter with a new instance of one of the wrappers
provided here:

* For a LogRecordExporter, it must be wrapped within
a [LogRecordDiskExporter](src/main/java/io/opentelemetry/contrib/disk/buffering/LogRecordDiskExporter.java).
* For a MetricExporter, it must be wrapped within
a [MetricDiskExporter](src/main/java/io/opentelemetry/contrib/disk/buffering/MetricDiskExporter.java).
* For a SpanExporter, it must be wrapped within
a [SpanDiskExporter](src/main/java/io/opentelemetry/contrib/disk/buffering/SpanDiskExporter.java).

Each wrapper needs the following when instantiated:

* The exporter to be wrapped.
* A File instance of the root directory where all the data is going to be written. The same root
dir can be used for all the wrappers, since each will create its own folder inside it.
* An instance
of [StorageConfiguration](src/main/java/io/opentelemetry/contrib/disk/buffering/internal/StorageConfiguration.java)
with the desired parameters. You can create one with default values by
calling `StorageConfiguration.getDefault()`.

After wrapping your exporters, you must register the wrapper as the exporter you'll use. It will
take care of always storing the data it receives.

#### Set up example for spans

```java
// Creating the SpanExporter of our choice.
SpanExporter mySpanExporter = OtlpGrpcSpanExporter.getDefault();

// Wrapping our exporter with its disk exporter.
SpanDiskExporter diskExporter = SpanDiskExporter.create(mySpanExporter, new File("/my/signals/cache/dir"), StorageConfiguration.getDefault());

// Registering the disk exporter within our OpenTelemetry instance.
SdkTracerProvider myTraceProvider = SdkTracerProvider.builder()
.addSpanProcessor(SimpleSpanProcessor.create(diskExporter))
.build();
OpenTelemetrySdk.builder()
.setTracerProvider(myTraceProvider)
.buildAndRegisterGlobal();

```
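
The setup for metrics is analogous. This sketch assumes that `MetricDiskExporter.create` takes
the same arguments as the span example above, and registers the wrapper through the SDK's
standard `PeriodicMetricReader`:

```java
// Creating the MetricExporter of our choice.
MetricExporter myMetricExporter = OtlpGrpcMetricExporter.getDefault();

// Wrapping our exporter with its disk exporter (assumed to mirror the span example's signature).
MetricDiskExporter diskMetricExporter = MetricDiskExporter.create(myMetricExporter, new File("/my/signals/cache/dir"), StorageConfiguration.getDefault());

// Registering the disk exporter within our OpenTelemetry instance.
SdkMeterProvider myMeterProvider = SdkMeterProvider.builder()
    .registerMetricReader(PeriodicMetricReader.create(diskMetricExporter))
    .build();
OpenTelemetrySdk.builder()
    .setMeterProvider(myMeterProvider)
    .buildAndRegisterGlobal();
```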

### Reading data

Each of the exporter wrappers can read from disk and send the retrieved data over to its
wrapped exporter by calling this method on the wrapper:

```java
try {
if (diskExporter.exportStoredBatch(1, TimeUnit.SECONDS)) {
// A batch was successfully exported and removed from disk. You can call this method for as long as it keeps returning true.
} else {
// Either there was no data in the disk or the wrapped exporter returned CompletableResultCode.ofFailure().
}
} catch (IOException e) {
// Something unexpected happened.
}
```
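
Since `exportStoredBatch` returns `true` only while batches keep being exported and removed from
disk, all currently stored data can be drained with a simple loop, as in this sketch:

```java
try {
  // Keep going while there are stored batches left and the wrapped exporter succeeds.
  while (diskExporter.exportStoredBatch(1, TimeUnit.SECONDS)) {
    // each iteration exported one batch and removed it from disk
  }
} catch (IOException e) {
  // Something unexpected happened.
}
```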

Both the writing and reading processes can run in parallel without overlapping, because each is
meant to happen in different files. The reader and writer are kept from accidentally meeting in
the same file by the configurable parameters, which set non-overlapping time frames for each
action on a single file: with the defaults, a file is written to for at most 30 seconds, but only
becomes readable 33 seconds after its creation. On top of that, there's a mechanism in place to
avoid overlaps in edge cases where a time frame has ended but the resources haven't been released
yet. For that mechanism to work properly, this tool assumes that both the reading and the writing
actions are executed within the same application process.

## Component owners

- [Cesar Munoz](https://github.com/LikeTheSalad), Elastic
- [Gregor Zeitlinger](https://github.com/zeitlinger), Grafana

Learn more about component owners in [component_owners.yml](../.github/component_owners.yml).
Binary file added disk-buffering/assets/reading-flow.png
Binary file added disk-buffering/assets/writing-flow.png
