The json_lines codec (3.2.0+) introduced a decode_size_limit_bytes parameter that caps the size of each JSON line when decoding. This change exposed a buffer flush bug in Logstash core.
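For context, a hypothetical pipeline snippet showing where the parameter lives (the tcp input and the explicit value are illustrative only; any line-oriented input applies):

```
input {
  tcp {
    port => 5044
    codec => json_lines {
      # explicit value for illustration: 512MB (536870912 bytes)
      decode_size_limit_bytes => 536870912
    }
  }
}
```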
For LS-to-LS communication, the Logstash output plugin was sending all events without a newline delimiter, so Logstash core accumulated the whole payload as one oversized token in the BufferedTokenizer.
The problematic situation happens on 8.16.0, 8.16.1, and 8.17.0, the versions that include json_lines v3.2.0+.
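A minimal sketch of the failure mode (simplified, hypothetical code, not Logstash's actual BufferedTokenizer): when the sender never writes a newline delimiter, no token is ever flushed and the accumulated buffer eventually trips the size limit.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, hypothetical tokenizer illustrating the failure mode.
class SimpleTokenizer {
    private final StringBuilder buffer = new StringBuilder();
    private final int sizeLimit;

    SimpleTokenizer(int sizeLimit) {
        this.sizeLimit = sizeLimit;
    }

    // Appends data and returns any complete newline-delimited tokens.
    List<String> extract(String data) {
        buffer.append(data);
        // If the sender never writes "\n", nothing is ever flushed and
        // the buffer only grows until it exceeds the limit.
        if (buffer.length() > sizeLimit) {
            throw new IllegalStateException(
                "input buffer full (" + buffer.length() + " > " + sizeLimit + ")");
        }
        List<String> tokens = new ArrayList<>();
        int nl;
        while ((nl = buffer.indexOf("\n")) >= 0) {
            tokens.add(buffer.substring(0, nl));
            buffer.delete(0, nl + 1);
        }
        return tokens;
    }
}
```

With undelimited input, repeated calls to extract raise the buffer-full error even though each logical event would individually fit under the limit once delimited.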
- The LS core bug is fixed; the BufferedTokenizer now correctly tracks the buffer marker.
- The LS output plugin now correctly generates event-oriented, ndjson-compatible payloads.
- json_lines has a 512MB limit for each JSON line, which is already quite large, and users can increase it as much as they want.
- No other plugin uses the BufferedTokenizer with a limit set.
Given the current situation, I don't see the benefit of adding a feature to disable the BufferedTokenizer limit via a special value (IMHO, 0 is an invalid size; intentionally using a negative sentinel such as -1 would make more sense, as other products/services offer a similar ability).
CCing for feedback: @jsvd , @robbavey, @yaauie
Enforcement of a maximum size to decode in the BufferedTokenizer was added by the introduction of a `sizeLimit` parameter. This parameter is enforced regardless of its value, even if it is 0 or negative.
We should add the ability to skip the enforcement if the `sizeLimit` is set to 0.
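A minimal sketch of what that change could look like, reusing the simplified tokenizer from above (again hypothetical; whether the sentinel is 0, as requested here, or -1, as suggested earlier, is the open design question):

```java
import java.util.ArrayList;
import java.util.List;

// Same simplified tokenizer as above, with a skippable limit:
// a non-positive sizeLimit disables enforcement entirely.
class SkippableLimitTokenizer {
    private final StringBuilder buffer = new StringBuilder();
    private final int sizeLimit; // <= 0 means "no limit"

    SkippableLimitTokenizer(int sizeLimit) {
        this.sizeLimit = sizeLimit;
    }

    List<String> extract(String data) {
        buffer.append(data);
        // Enforcement is skipped when the limit is 0 (or negative).
        if (sizeLimit > 0 && buffer.length() > sizeLimit) {
            throw new IllegalStateException("input buffer full");
        }
        List<String> tokens = new ArrayList<>();
        int nl;
        while ((nl = buffer.indexOf("\n")) >= 0) {
            tokens.add(buffer.substring(0, nl));
            buffer.delete(0, nl + 1);
        }
        return tokens;
    }
}
```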