
HHH-17980 Excessive contention during getter identification in the ByteBuddy enhancer #8204

Merged 1 commit into hibernate:main on Apr 22, 2024

Conversation

@Sanne (Member) commented Apr 19, 2024

This is the first patch in a series, motivated by the analysis of bootstrap times using https://github.com/topicusonderwijs/tribe-krd-quarkus.

https://hibernate.atlassian.net/browse/HHH-17980

This patch in isolation will not dramatically improve things, as contention is currently quite low; however, this lock becomes a problem once the other bottlenecks are resolved. I'm resolving those other bottlenecks via a combination of patches on the Quarkus side and other improvements coming on the ORM side.

Not all of the improvements on the ORM side will be suitable for pre-7 versions; still, I'd apply this one to 6.5+ as well, since the combination of improvements landing on 6.5 is meaningful enough to make this lock problematic.

@Sanne (Member, Author) commented Apr 19, 2024

cc/ @yrodiere and @dashorst ;)

```java
// otherwise large models might exhibit significant contention on the map.
Map<String, MethodDescription> getters = getterByTypeMap.get( erasure );

if ( getters == null ) {
```
Member
Just doing that is not enough to remove the contention?

I had good results using this pattern when you recommended that I always check with a get first. And if that's good enough, it could be more easily backported?

Now, I suppose you didn't come up with something complicated just for fun, so I'm interested in your insights :)
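For reference, a minimal sketch of the "get first, then computeIfAbsent" shape being discussed here; the class and names are hypothetical placeholders for illustration, not code from this PR:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical illustration of the "get first" pattern: a plain get() never
// locks, so repeated lookups of an already-cached key are cheap; but every
// miss still falls through to computeIfAbsent(), which can lock the bin and
// therefore still contends when most lookups are first-time misses.
final class GetFirstCache<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    GetFirstCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        V value = map.get( key );          // lock-free fast path
        if ( value != null ) {
            return value;
        }
        return map.computeIfAbsent( key, loader ); // slow path, may contend
    }
}
```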

Member Author

Hi, so glad you’re checking this :)

Yes, indeed, the solution of just doing the 'get' first was the first thing I tried, but it made almost no difference in this particular case.

It turns out that when there is a very high chance of the current operation being the first (only?) one to ever check for that particular key, we still end up doing a lot of 'computeIfAbsent' calls, which take the very coarse lock. Things are potentially made worse, as you now have an additional, pointless 'get' first and then still fall into the slow path. In this case the 'get' wasn't always pointless, so things balanced out a little and there was a small improvement, but there was still a lot of contention: 25+ seconds in my case.

With the pattern below I got a nice balance: extremely low, negligible contention (on the order of a few dozen ms), while still avoiding duplicated work, so the cache is used effectively.
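For illustration only, here is a rough sketch of the shape described above: get, compute the value outside the map, then publish with putIfAbsent. This is an assumption about the pattern with hypothetical names, not the actual patch:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of the "get, compute outside the map, putIfAbsent" shape.
// computeIfAbsent() is never called, so its coarse bin lock is never taken;
// in a rare race two threads may compute the same value concurrently, but the
// first published entry wins and all subsequent lookups reuse the cached value.
final class PutIfAbsentCache<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    PutIfAbsentCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        V value = map.get( key );          // lock-free read
        if ( value != null ) {
            return value;
        }
        V computed = loader.apply( key );  // computed without holding any map lock
        V raced = map.putIfAbsent( key, computed );
        return raced != null ? raced : computed;
    }
}
```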

I guess it's a good reminder that such guidelines are highly dependent on context; in this case, on which outcome is expected to be the most likely.

I must say I'm a little puzzled that there isn't something out of the box in the JDK for this scenario... hopefully ConcurrentHashMap could improve on this, but for now this does the trick for us.

Member Author

I ended up doubting myself here and wondering why the ConcurrentHashMap causes such levels of contention... Looking at its source code it's unclear: it seems to make an attempt similar to what I've done here, except that it doesn't seem to be effective in practice!?
It might be interesting to investigate this further with the Java team, but I'd rather put all these pieces together and see the live-reload issue fixed first.

@Sanne merged commit e84370e into hibernate:main on Apr 22, 2024
25 checks passed
@Sanne deleted the HHH-17980 branch on April 22, 2024 at 10:13