Cache parsing of content-type header #3011
Merged
#2969 unknowingly introduced a ~10% performance regression for simple requests when the `content-type` header is present. The regression occurred because parsing the `content-type` header is fairly expensive (~1us to parse `application/json`, based on some basic benchmarking), and to make things worse, the parsing is executed within Netty's event loop, which should not be performing CPU-intensive work.

On one hand, we could simply revert the PR or parse the `content-type` header only when needed, but I doubt this is the only time someone will accidentally do this. To guard against such usages in the future, this PR adds a cache in front of the `content-type` header parser. Since the cache could potentially grow without bound, we guard against this by clearing it when it grows too large. The chosen limit (8192 entries) should never be reached in standard usage.
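The approach described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation in this PR: the class name, the `Function`-based loader, and the clear-on-overflow placement are all assumptions made for the example. The key idea is that the hot path is a cheap `ConcurrentHashMap` lookup, and instead of a sophisticated eviction policy, the entire cache is dropped once it exceeds the size limit, which is acceptable because the limit should never be hit under normal traffic.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * A size-capped cache in front of an expensive computation such as
 * content-type header parsing. Hypothetical sketch: when the cache would
 * exceed maxEntries, it is cleared entirely rather than evicting
 * individual entries, keeping the common path a single lock-free lookup.
 */
final class ClearingCache<K, V> {
    private final int maxEntries;
    private final Function<K, V> parser;
    private final Map<K, V> cache = new ConcurrentHashMap<>();

    ClearingCache(int maxEntries, Function<K, V> parser) {
        this.maxEntries = maxEntries;
        this.parser = parser;
    }

    V get(K key) {
        final V cached = cache.get(key);
        if (cached != null) {
            return cached; // fast path: no parsing on a cache hit
        }
        final V parsed = parser.apply(key);
        if (cache.size() >= maxEntries) {
            // Guard against unbounded growth (e.g. a client sending many
            // distinct header values): drop everything and start over.
            cache.clear();
        }
        cache.put(key, parsed);
        return parsed;
    }
}
```

In practice the parser function would be the real content-type parser and `maxEntries` would be 8192; clearing wholesale trades occasional re-parsing for avoiding per-request eviction bookkeeping.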