Duplicate @timestamp fields in elasticsearch output #628
Comments
I have the same issue. Kibana, for example, produces logs containing @timestamp fields. My own applications I was able to fix by renaming the timestamp field.
I'm also having the same issue. Has anyone found a workaround yet?
To avoid that duplicate field you can set an alternative name for the time field (Time_Key): https://fluentbit.io/documentation/0.13/output/elasticsearch.html Let me know if that fixes the issue.
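As a rough sketch, the suggested workaround would look like the following Elasticsearch output section (the Host/Port values and the `fluentbit_time` name are assumptions for illustration); Time_Key renames the field Fluent Bit adds, so it no longer collides with an @timestamp key already present in the record:

```ini
[OUTPUT]
    Name            es
    Match           *
    Host            elasticsearch
    Port            9200
    Logstash_Format On
    # Rename the time field Fluent Bit injects, so it does not clash with
    # an "@timestamp" key that the application already put in its JSON log
    Time_Key        fluentbit_time
```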
That should do it, thanks :)
Having to rename it is patchwork; think about providing better defaults instead.
@edsiper I would reopen this. I have this config:
and I keep getting errors like this one:
I have this issue with a build from git (latest). I'm not sure what a workaround would be. I am using the Helm chart in Kubernetes, with mergeJSONLog: true enabled and an apache2 annotation on one of the pods. Since it is doing a tail -f on the Docker logs, they are first parsed as docker and then parsed as apache2, which causes a duplicate.
I used
@lxfontes this only solves the timestamp problem for me; I still have duplicate "time" fields. Why do we actually need this?
OK, so for me the fix was setting this in the Kubernetes chart:
since now it will no longer try to merge keys. I guess this is a bug, as the expected behavior should be that while doing the merge it MUST NOT append the field.
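A related mitigation, assuming a later Fluent Bit version where the kubernetes filter exposes Merge_Log_Key (the `log_processed` name is an arbitrary choice for illustration), is to nest the parsed fields under a dedicated key so they cannot collide with top-level fields like @timestamp:

```ini
[FILTER]
    Name           kubernetes
    Match          kube.*
    Merge_Log      On
    # Nest the parsed JSON fields under this key instead of merging them
    # into the record root, so duplicate top-level keys cannot occur
    Merge_Log_Key  log_processed
```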
Wouldn't it be more convenient to change the behavior of the Elasticsearch plugin so that it won't append the @timestamp key (or Time_Key in general) if it already exists?
This needs to be reopened. Consider that the official Logstash formatter for log4j, https://github.com/logstash/log4j-jsonevent-layout, is going to output @timestamp to standard out. So a developer encountering this bug is going to wonder why their Logstash-format JSON is considered invalid by something that describes itself as Logstash compatible.
I am facing the same issue when generating logs in JSON format with the standard plugin https://github.com/logstash/log4j-jsonevent-layout. I agree with @wirehead's comment.
JSON with keys of the same name is not invalid, but it makes sense that it's a restriction for Elasticsearch. Your workarounds:
https://docs.fluentbit.io/manual/filter/kubernetes
https://docs.fluentbit.io/manual/output/elasticsearch
Now, if I implement a kind of "sanitizer" option, take into account that it will affect performance. The options above should work; if they don't, please let me know.
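The point about duplicate keys is easy to demonstrate: repeated names are syntactically legal JSON, yet most parsers silently keep only one of the values, which is why the collision only surfaces when Elasticsearch rejects the document. A quick check in Python (the sample document is made up for illustration):

```python
import json

# A JSON document with a repeated "@timestamp" name parses without error:
# RFC 8259 only says object member names "should" be unique, not "must".
# Python's json module keeps the last occurrence and drops the earlier one.
doc = '{"@timestamp": "2018-06-01T00:00:00Z", "msg": "hi", "@timestamp": "2018-06-02T00:00:00Z"}'
parsed = json.loads(doc)
print(parsed["@timestamp"])  # the later value wins
```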
The 2nd option sounded cleaner to me, and it worked well too.
I think this needs to be reopened.
I had to try all the options to make it work with Kibana discovery + logs:
The only thing that worked for me was adding a prefix to the merged fields:
I fixed it with the following config: `[PARSER]`
Hi @Vfialkin, I'm currently working with Serilog too. Following your instructions it works fine, but I still cannot get all the logs in the log field like
Hi Minhnhat,
Take a look at my yml, maybe that will help. I also tried to describe the full setup process in
Hi @Vfialkin, it's really helpful. Thank you!
@Vfialkin thanks for the article, it was super helpful. One thing though:

    extraEntries:
      input: |-
        Exclude_Path /var/log/containers/kibana*.log,/var/log/containers/kube*.log,/var/log/containers/etcd-*.log,/var/log/containers/dashboard-metrics*.log

This would not work as expected, at least in the current version of the chart: https://github.com/helm/charts/blob/b71c8c665e7de2ef22e915cd2f173d680cd7636c/stable/fluent-bit/templates/config.yaml Those extra entries are appended to the end of the config, and if systemd is enabled, they end up appended to the systemd section. I sent a PR to the chart, see below.
@zerkms thanks for the feedback! Nice catch with the systemd section, so it works by coincidence because systemd defaults to false 😁 It would definitely be better to have it as a param; hope your PR gets merged soon. 🤞
I am trying to replace my fluentd installation in Kubernetes with fluent-bit 0.13.3 but ran into an issue. We currently have the standard setup:
The problem is that some of the log messages from services are JSON encoded and also include an `@timestamp` field. This then causes some errors. I tried to use `Merge_JSON_Key` to mitigate this, but the option seems to be disabled in the source code (without this being mentioned in the docs, so it took me some time to figure out why it did not work ;-)). In my opinion, `Merge_JSON_Log` should overwrite existing keys instead of producing duplicate keys.
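The overwrite-on-merge behavior the reporter asks for can be sketched in a few lines of Python (`merge_log` is a hypothetical helper for illustration, not actual Fluent Bit code):

```python
# Sketch of the requested Merge_JSON_Log semantics: keys from the decoded
# JSON payload overwrite keys already present in the record, so a field
# like "@timestamp" can never appear twice in the emitted document.
def merge_log(record, parsed_log):
    merged = dict(record)
    merged.update(parsed_log)  # later keys overwrite; a dict cannot hold duplicates
    return merged

record = {"@timestamp": "2018-06-01T00:00:00.000Z", "log": "raw line"}
parsed = {"@timestamp": "2018-06-01T00:00:01.000Z", "level": "INFO"}
merged = merge_log(record, parsed)
print(merged)
```

Because the result is a plain dict, serializing it back to JSON is guaranteed to contain each key once, at the cost of silently discarding the record's original value.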