Thread exhaustion - possibly out of memory or process/resource limits reached #4310
@morph166955 you could try:
The big thing that jumps out is that over 1500 of my 1960 total threads are currently of this type. It is also interesting that the elapsed time is never more than 60 seconds on any of them, slowly decreasing from 59.x down to 0.23.
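For reference, a minimal sketch of how live, peak, and per-pool thread counts can be read from a JVM with the standard ThreadMXBean API. The totals and the 60-second elapsed pattern above come from the poster's own dump, not from this snippet, and the `upnp` name prefix is only an assumed example:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadStats {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        // Live and peak thread counts, comparable to the totals quoted above.
        System.out.println("Live threads: " + threads.getThreadCount());
        System.out.println("Peak threads: " + threads.getPeakThreadCount());

        // Count threads whose names start with a given prefix, e.g. a suspect pool.
        String prefix = "upnp"; // hypothetical prefix; adjust to the pool being inspected
        long matching = 0;
        for (ThreadInfo info : threads.getThreadInfo(threads.getAllThreadIds())) {
            if (info != null && info.getThreadName().startsWith(prefix)) {
                matching++;
            }
        }
        System.out.println("Threads starting with '" + prefix + "': " + matching);
    }
}
```

In an openHAB installation the same counters can also be reached over JMX or from the Karaf console rather than in-process.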
I believe this may be resolved. I ended up doubling the amount of memory allocated to the openHAB VM (4GB to 8GB) and that calmed this issue down. It has been up for 3 months now with no crashes. I would like to note that my max thread count ended up stabilizing thousands higher than before. I do find it interesting that there were so many threads running at the same time; that just seems excessive.
I've been having issues with jupnp failing after a random period of time. I had opened #3843 thinking it was related to that issue. I'm now wondering if this is actually an OH/Java issue, as I'm getting errors outside of jupnp as well (in this case the neeo binding also had a failure). I'm on Snapshot 4182, which is only a few builds behind right now.
My jupnp is configured with:
so the thread pools should be able to grow without limit. I have had this configuration in place for years and have only had this issue for a few months, which leads me to think this is more of a Java issue.
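The configuration block itself was not captured in this report. As an illustration only, an unbounded-pool setup of the kind described is typically placed in `$OPENHAB_USERDATA/etc/org.jupnp.cfg`; the property names below are assumptions and vary between jupnp versions, so they should be checked against the installed bundle:

```properties
# Illustrative only - not the poster's actual settings.
# A negative size is commonly used to let the jupnp thread pools grow without limit.
threadPoolSize = -1
asyncThreadPoolSize = -1
```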
When I reset the jupnp bundle via the Karaf console, the first error I see is:
Then I get a TON of neeo errors:
And then some more jupnp:
EDIT: Just crashed again. Grabbed some stats:
EDIT2: I dialed up the RAM on the OH VM from 4GB to 8GB after seeing that the free physical memory total was so low. Note the peak thread count now.