Since version 6.9.0, a memory leak exists when using Jobs in combination with LogWatches.
I noticed it because memory usage gradually increases over time: the retained heap size in the old gen keeps getting bigger. After a day of debugging, the leak appears to come from the informer functionality, where the CompletableFuture in use is not properly torn down; this creates a lot of back references which the GC can't drop. However, I was not able to nail it down to the exact location.
I created a demo project which causes the fault to appear reliably within a few minutes.
Please also see the heap dump I attached showing the problem.
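The attached project isn't inlined here, but a minimal sketch of this kind of reproducer might look as follows. This is an assumption-laden illustration, not the actual demo: the class and job names are made up, and it presumes the task-executor override on KubernetesClientBuilder (with virtual threads) that the project reportedly uses.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.api.model.batch.v1.JobBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.LogWatch;

public class LeakDemo {

    // Create a short-lived Job, stream its log for a moment, then clean up.
    static void runJobWithLogWatch(KubernetesClient client) {
        Job job = new JobBuilder()
                .withNewMetadata().withGenerateName("leak-demo-").endMetadata()
                .withNewSpec()
                    .withNewTemplate().withNewSpec()
                        .addNewContainer()
                            .withName("main")
                            .withImage("busybox")
                            .withCommand("sh", "-c", "echo hello; sleep 1")
                        .endContainer()
                        .withRestartPolicy("Never")
                    .endSpec().endTemplate()
                .endSpec()
                .build();
        Job created = client.batch().v1().jobs().inNamespace("default").resource(job).create();
        String name = created.getMetadata().getName();
        try (LogWatch watch = client.batch().v1().jobs()
                .inNamespace("default").withName(name).watchLog(System.out)) {
            TimeUnit.SECONDS.sleep(5); // let the LogWatch run briefly before it is closed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            client.batch().v1().jobs().inNamespace("default").withName(name).delete();
        }
    }

    public static void main(String[] args) {
        // Virtual threads as the client's task executor -- the configuration
        // under which the growth was observed (Java 21).
        ExecutorService tasks = Executors.newVirtualThreadPerTaskExecutor();
        try (KubernetesClient client = new KubernetesClientBuilder()
                .withTaskExecutor(tasks)
                .build()) {
            while (true) {
                runJobWithLogWatch(client); // old-gen retained size grows over time
            }
        }
    }
}
```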
The first thing I did was convert your example to plain Java 17, removing the task executor override and using just a cached thread pool for running the jobs in main. At least for me, after 10 minutes the memory usage exhibited a normal GC pattern and the heap was not growing. So I suspect the issue is with using virtual threads for the task executor - can you confirm this?
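For comparison, that Java 17 variant could be sketched like this (again hypothetical: it reuses the made-up runJobWithLogWatch helper from the sketch above, and leaves the client's task executor at its default):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class CachedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Plain cached thread pool for running the jobs in main; no
        // withTaskExecutor(...) override, so the client keeps its default executor.
        ExecutorService pool = Executors.newCachedThreadPool();
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            for (int i = 0; i < 16; i++) { // "count 16", as in the follow-up below
                pool.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        LeakDemo.runJobWithLogWatch(client); // helper from the sketch above
                    }
                });
            }
            pool.awaitTermination(10, TimeUnit.MINUTES); // watch heap behavior meanwhile
        } finally {
            pool.shutdownNow();
        }
    }
}
```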
After running for 10 minutes with count 16, I see the client now occupies most of the heap, and it keeps growing. The absolute size of a few MB is not that big, but relative to all the other classes it becomes absurd. It's the same pattern as with the virtual threads. When switching to 6.8.1, I don't see anything like this.
Ok, upping the thread count made the issue more apparent over a short interval. The problem is in the auto-closure logic: it adds a task to ensure the informer is closed when the client is closed, but nothing cleans that registration up when the informer is closed naturally.
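As an illustration of that pattern (a generic sketch, not the actual fabric8 code; Client and Informer are hypothetical stand-ins): each informer registers a close task with the client, so unless that registration is removed when the informer shuts down on its own, the informer stays strongly reachable from the client forever.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class Client implements AutoCloseable {
    // Tasks registered so that closing the client also closes its informers.
    private final Set<AutoCloseable> onClose = ConcurrentHashMap.newKeySet();

    Informer newInformer() {
        Informer informer = new Informer(this);
        onClose.add(informer); // auto-closure registration
        return informer;
    }

    void deregister(Informer informer) {
        onClose.remove(informer); // the cleanup step that was missing
    }

    @Override
    public void close() {
        onClose.forEach(c -> {
            try { c.close(); } catch (Exception ignored) { }
        });
        onClose.clear();
    }
}

class Informer implements AutoCloseable {
    private final Client client;
    Informer(Client client) { this.client = client; }

    @Override
    public void close() {
        // Without this deregistration, a naturally-closed informer (and
        // everything it references) remains reachable from the client's
        // onClose set and can never be garbage-collected.
        client.deregister(this);
    }
}
```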
Attachments: heapdump.zip, project.zip
My workaround atm is to just stay on 6.8.1, which does not seem to have the memory leak.
Fabric8 Kubernetes Client version
6.9.2
Steps to reproduce
Run the attached demo project (project.zip); the fault appears within a few minutes.
Expected behavior
no memory leak :D
Runtime
other (please specify in additional context)
Kubernetes API Server version
1.25.3@latest
Environment
Linux
Fabric8 Kubernetes Client Logs
No response
Additional context
k3d 1.25.3