Feature request: support for nanosecond timestamps #10822
This is a feature we all need when dealing with a high throughput of logs. We often end up checking logs directly on the box because of the interleaving effect in Kibana that is inherent to millisecond-only precision. I've also discovered that Logstash is now the bottleneck for nanoseconds, probably because of Joda-Time. Since the ES 7.x announcement, I guess the whole community believes it's a done deal, but it's far from it. Current state:
References:
@bmerry: Can you edit your issue title to add the word "nanoseconds"? Google did not find this issue; I only found it via GitHub's issue search. If more people 👍 this request it may get more traction. In the meantime, we can only wait for some skilled programmers to improve the situation ;-)
Thanks @bmerry. Now we just have to wait...
We would love to have this.
Any known workarounds (in Logstash, but without the date filter)?
I've worked around it by adding an extra field called timestamp_precise. Here's the Elasticsearch pipeline:
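A minimal sketch of such an ingest pipeline, assuming the extra field is named timestamp_precise (as referenced later in the thread) and that @timestamp is mapped as date_nanos in the index; the pipeline id is illustrative:

```json
PUT _ingest/pipeline/nanos-workaround
{
  "description": "Copy the nanosecond-precision string into @timestamp",
  "processors": [
    { "set": { "field": "@timestamp", "value": "{{timestamp_precise}}" } }
  ]
}
```

Because the set processor just copies the string and the mapping does the parsing, no precision is lost on the Elasticsearch side.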
I've got nanoseconds working on logs with this precision, having @timestamp and @timestamp_nanos fields coexisting at the same time and telling Kibana to use the nanos field as the main timestamp:
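A sketch of index mappings for that dual-field setup; the field names follow the comment above and the index name is illustrative. date_nanos is the Elasticsearch field type that keeps nanosecond resolution:

```json
PUT logs-nanos
{
  "mappings": {
    "properties": {
      "@timestamp":       { "type": "date" },
      "@timestamp_nanos": { "type": "date_nanos" }
    }
  }
}
```

In Kibana, the index pattern (data view) can then be created with @timestamp_nanos as its primary time field, so sorting uses full nanosecond resolution.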
Logstash 8.0 will have nanosecond precision internally, keeping nano precision in inbound @timestamp values.

As mentioned in this thread, the missing piece is still Logstash's ability to parse a nano-precise timestamp and keep its granularity, which is a limitation of the date filter. I had hopes of adding precision to the date filter, but the subtle differences between Joda-Time's format strings and those provided by java.time make that impractical. To address this, I think our best course is to introduce a new nano-precise filter that only works with Java time format strings, so that a user who is looking to add precision to their pipeline approaches it in a manner that doesn't assume Joda patterns will magically work. I plan to create the specification for this new plugin in the coming week or two, and will add a link to it here.

In the meantime, on Logstash 8, it is possible to use the Ruby internals to either parse a strict ISO8601 timestamp or to convert a Ruby Time. I have created an unoptimized proof-of-concept.
For example, if the value in the source field were a no-separators
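A sketch of that Ruby-internals workaround as a Logstash ruby filter. The source field name timestamp_precise is an assumption, and wrapping a Ruby Time in LogStash::Timestamp.new is assumed to be the conversion path on Logstash 8:

```
filter {
  ruby {
    # Assumption: the nano-precise ISO8601 string arrives in "timestamp_precise".
    code => '
      ts = event.get("timestamp_precise")
      event.set("@timestamp", LogStash::Timestamp.new(Time.iso8601(ts))) if ts
    '
  }
}
```

Since Ruby's Time stores sub-second precision as a Rational, the nanoseconds survive the parse; the remaining question is only whether the Timestamp wrapper keeps them, which is what Logstash 8 changes.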
Great to hear that there is now some forward progress here. Any idea whether the workaround can be used with the gelf input plugin, or will the plugin need to be updated?
My reading of the code is that on Logstash 8, events created by the Gelf input will maintain the precision of what is available.
Thanks, that's great news.
Given that Logstash 8.0 already supports nanosecond timestamps, I'll close this issue.
Hi, sorry for reopening the discussion. I've tried the pipeline using a timestamp_precise field that contains the date with nanosecond precision in text format (e.g. "timestamp_precise":"2022-10-28T19:43:57.867631621Z"), but it does not work for me.
Hi,
The timestamp_nano field includes a timestamp with nanosecond precision.
For those waiting (like me) for the Date filter to handle nanosecond precision in 8.x, there are a couple of workarounds:
I hope it helps!
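As a sanity check for these workarounds, plain Ruby (which Logstash's ruby filter exposes) parses an ISO8601 string without losing the nanosecond part; the sample timestamp below is the one quoted earlier in the thread:

```ruby
require 'time'

# Ruby's Time stores sub-second precision as a Rational, so nothing is truncated here.
t = Time.iso8601("2022-10-28T19:43:57.867631621Z")
puts t.nsec  # nanosecond component: 867631621
```

If this prints the full nanosecond component, any precision loss in a pipeline is happening after the parse, not in Ruby itself.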
Now that Elasticsearch 7.x supports nanosecond timestamps, it would be nice if they could pass through Logstash as well (in my case I want them just for better sorting). It looks like the Timestamp class is still based on Joda rather than Java 8 time, so presumably it only supports milliseconds.
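The sorting problem can be illustrated with a hypothetical pair of events (not taken from the thread): two timestamps that differ only below the millisecond compare correctly at full precision, but become indistinguishable once truncated to milliseconds:

```ruby
require 'time'

a = Time.iso8601("2025-01-01T00:00:00.000000100Z")
b = Time.iso8601("2025-01-01T00:00:00.000000900Z")

puts a < b                                     # true: nanoseconds preserve the order
puts a.nsec / 1_000_000 == b.nsec / 1_000_000  # true: both truncate to 0 ms, order is lost
```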