System info:
Official Telegraf 1.11 Docker image; the issue is relevant to all Telegraf versions on all operating systems.
Steps to reproduce:
Write a batch of points that contains some invalid points to Telegraf in a single request (see the sketch below).
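A minimal reproduction sketch, assuming Telegraf's influxdb_listener is reachable on localhost:8186 and InfluxDB on localhost:8086 with a database named "test" (endpoints, ports, and the database name are placeholders for illustration):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Three-point batch in line protocol; the second line is invalid
	// because "+Inf" is not a valid field value.
	batch := strings.Join([]string{
		"cpu,host=a value=1.0",
		"cpu,host=b value=+Inf",
		"cpu,host=c value=3.0",
	}, "\n")

	for _, url := range []string{
		"http://localhost:8086/write?db=test", // directly to InfluxDB
		"http://localhost:8186/write?db=test", // via the Telegraf listener
	} {
		resp, err := http.Post(url, "text/plain; charset=utf-8", strings.NewReader(batch))
		if err != nil {
			fmt.Println(url, err)
			continue
		}
		fmt.Println(url, resp.Status)
		resp.Body.Close()
	}
}
```

Writing directly to InfluxDB returns a partial write error but the two valid points are stored; the same request through the Telegraf listener returns a "bad request" response and nothing is stored.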
Expected behavior:
If I send a batch of points containing some invalid points directly to InfluxDB, I receive a partial write error, but the valid points in that batch are still stored.
Actual behavior:
If I send the same batch of points to http_listener, influxdb_listener, or http_listener_v2, with Telegraf acting as a proxy in front of my InfluxDB instance, none of the points in the batch are stored in InfluxDB. Telegraf responds with a "bad request" error and discards the entire batch before even forwarding it to InfluxDB.
Additional info:
This issue is slightly related to #4742 but has much greater impact: it is not just about output and error messages. We observe completely different behavior in terms of what is actually stored in InfluxDB when simply adding a proxy to the connection.
Our scenario:
We run Apache Flink and Apache Spark applications on a cluster that can reach our InfluxDB backend storage only via a dedicated proxy node. The applications produce tons of metrics, some containing the value +/- infinity, which neither InfluxDB nor Telegraf can handle. When we send metrics directly to InfluxDB in the test environment, everything works fine (only the invalid metrics are not written); when we use Telegraf as a proxy, no metrics are reported at all.
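As background for the scenario above: line protocol has no representation for non-finite floats, so any point carrying +/-Inf (or NaN) will be rejected by either parser. A hedged client-side sketch of dropping such fields before writing (the function name and field layout are hypothetical, and this does not address the listener behavior itself):

```go
package sketch

import "math"

// sanitizeFields drops any non-finite float fields (±Inf, NaN) from a point's
// field map before it is serialized to line protocol, since line protocol has
// no way to represent them. Purely illustrative.
func sanitizeFields(fields map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(fields))
	for k, v := range fields {
		if f, ok := v.(float64); ok && (math.IsInf(f, 0) || math.IsNaN(f)) {
			continue // skip values that cannot be written
		}
		out[k] = v
	}
	return out
}
```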
Thanks for the bug report. I took a quick look at the code: the current behavior of influxdb_listener is to parse lines until the first error and then stop, whereas InfluxDB parses all lines and skips the unparseable ones. The http_listener_v2 plugin is not meant to behave the same as InfluxDB, so let's ignore it here.
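For illustration only (not Telegraf's actual code), here is a sketch of the "parse everything, skip bad lines" behavior described above, where parseLine and Metric stand in for the listener's real parser and metric type:

```go
package sketch

import (
	"bytes"
	"fmt"
)

// Metric is a placeholder for whatever the listener turns a parsed line into.
type Metric struct{ Line string }

// parseAll parses every line of the request body, skips the ones that fail,
// and reports a partial-write style error if anything was dropped.
func parseAll(body []byte, parseLine func([]byte) (Metric, error)) ([]Metric, error) {
	var (
		metrics  []Metric
		dropped  int
		firstErr error
	)
	for _, line := range bytes.Split(body, []byte("\n")) {
		line = bytes.TrimSpace(line)
		if len(line) == 0 {
			continue
		}
		m, err := parseLine(line)
		if err != nil {
			dropped++
			if firstErr == nil {
				firstErr = err
			}
			continue // skip the unparseable line and keep going
		}
		metrics = append(metrics, m)
	}
	if dropped > 0 {
		// Accept what parsed and surface a partial-write style error.
		return metrics, fmt.Errorf("partial write: %d point(s) dropped, first error: %v", dropped, firstErr)
	}
	return metrics, nil
}
```

The listener would then still need to respond with something like InfluxDB's partial write error, so clients know some points were dropped while the valid ones were accepted.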