
Payara 5.182 Full stops responding to HTTP requests if only the console logger is activated in the admin panel #2942

Closed
oleelo opened this issue Jul 6, 2018 · 5 comments · Fixed by #3508

Comments

@oleelo

oleelo commented Jul 6, 2018

Description


Payara 5.182 Full (freshly downloaded, no modifications) stops responding to HTTP requests after about 5000 requests (JMeter, 20 threads in parallel) if only the console logger is activated and each request logs one simple log entry via java.util.logging. If both the file logger and the console logger are activated, or only the file logger, everything works just fine.

Expected Outcome

The server continues answering requests and writing logs.

Current Outcome

The server stops answering requests after about 5000 requests

Steps to reproduce (Only for bug reports)

  1. Download a fresh Payara 5.182 (build 303).
  2. Deploy a minimalistic REST app: just two classes, a JaxRSConfiguration and one REST resource which logs one line with java.util.logging and returns a 200 response (see the sketch after this list).
  3. Load test the endpoint with JMeter (20 threads in parallel).
  4. After about 5000 requests the server stops answering requests if only the console logger is activated.
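
For reference, a minimal sketch of the reproduction app described in step 2, assuming Payara 5's javax.* JAX-RS API and java.util.logging. Only the name JaxRSConfiguration is taken from the report; the resource class name and the /ping path are illustrative.

```java
// JaxRSConfiguration.java -- activates JAX-RS under /rest (Payara 5, javax namespace)
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("rest")
public class JaxRSConfiguration extends Application {
    // Empty on purpose: resource classes are discovered automatically.
}
```

```java
// PingResource.java -- logs one line per request via java.util.logging and returns 200
import java.util.logging.Logger;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("ping")
public class PingResource {

    private static final Logger LOG = Logger.getLogger(PingResource.class.getName());

    @GET
    public Response ping() {
        LOG.info("ping received");   // ends up on Payara's console logger when it is enabled
        return Response.ok().build();
    }
}
```

A JMeter thread group with 20 threads issuing GET requests against <context-root>/rest/ping should then reproduce the hang when only the console logger is enabled.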

Environment

  • Payara Version: 5.182 (build 303)
  • Edition: Full
  • JDK Version: Oracle 1.8_171
  • Operating System: Mac
@ghost

ghost commented Jul 16, 2018

I got the same issue.

@meliora

meliora commented Dec 11, 2018

Just to let you know, we have also had various similar issues that we think might be related to logging. The logging appears to be broken somehow, with deadlocks or race conditions present. We actually downgraded all our production servers back to 4.1.2.173, which, in our experience, seems to be the latest version that is stable when running our quite complex full-stack application. We encountered these issues with 4.1.2.181.

I'm just adding this comment to possibly help track down the changes in the logging backend (GFFileHandler is our suspect; it had the most deadlocks when we tried to resolve the issue on our servers). As said, 4.1.2.173 is stable for us; we have had issues with the versions after it.

@oleelo I'd suggest you grab a thread dump from the VM while it is deadlocked. It might help to know whether you have GFFileHandler-related deadlocks as well. http://fastthread.io/ is a great tool for analyzing the standard dumps.
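
If attaching externally with the JDK tools (jstack <pid> or jcmd <pid> Thread.print) is not an option, a dump can also be captured from inside the JVM with the standard management API. A minimal sketch, not Payara-specific; the class name is made up:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpUtil {

    /** Builds a textual dump of all live threads, including lock information. */
    public static String dumpAllThreads() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        StringBuilder dump = new StringBuilder();
        // true, true: include locked monitors and ownable synchronizers,
        // which is what exposes deadlocks around logging handler locks.
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            dump.append(info.toString());
        }
        return dump.toString();
    }
}
```

Note that ThreadInfo.toString() truncates very deep stacks, so for a complete picture a jstack/jcmd dump pasted into fastthread.io is still preferable.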

@svendiedrichsen
Contributor

svendiedrichsen commented Dec 11, 2018

This may be related to #3506.

@gmsa

gmsa commented Jul 18, 2019

We just had this same problem on Payara 5.192. We weren't able to grab a stack dump since it was a critical production server and we had to roll back quickly to our previous working GlassFish.

@josemiguel1999

GFFileHandler log pump
priority:5 - threadId:GFFileHandler log pump - state:WAITING
stackTrace:
java.lang.Thread.State: WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.7/Native Method)
    - parking to wait for <0x0000000740d2c3b0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(java.base@11.0.7/LockSupport.java:194)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11.0.7/AbstractQueuedSynchronizer.java:2081)
    at java.util.concurrent.ArrayBlockingQueue.take(java.base@11.0.7/ArrayBlockingQueue.java:417)
    at com.sun.enterprise.server.logging.GFFileHandler.log(GFFileHandler.java:948)
    at com.sun.enterprise.server.logging.GFFileHandler$1.run(GFFileHandler.java:581)
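
For context on what that trace shows: the frame in ArrayBlockingQueue.take() is the log pump thread parked while waiting for new records, i.e. the consumer side of a bounded producer/consumer queue. The hang described in this issue fits the other side of that pattern: if the pump stops draining (for example because writing to the console blocks), the queue fills up and every request thread that logs then blocks when publishing. The sketch below is not Payara's actual GFFileHandler code; names and the queue capacity are illustrative (5000 merely echoes the roughly 5000 requests after which the server stalls).

```java
// Minimal sketch (not Payara's implementation) of a bounded-queue "log pump":
// publishers enqueue LogRecords, a single pump thread drains them.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.logging.Handler;
import java.util.logging.LogRecord;

public class PumpedHandler extends Handler {

    // Bounded buffer between request threads (producers) and the pump (consumer).
    private final BlockingQueue<LogRecord> pending = new ArrayBlockingQueue<>(5000);

    public PumpedHandler() {
        Thread pump = new Thread(() -> {
            try {
                while (true) {
                    // Consumer side: parks here while the queue is empty,
                    // matching the take() frame in the dump above.
                    write(pending.take());
                }
            } catch (InterruptedException stop) {
                Thread.currentThread().interrupt();
            }
        }, "log pump");
        pump.setDaemon(true);
        pump.start();
    }

    @Override
    public void publish(LogRecord record) {
        try {
            // Producer side: if the pump never drains, this blocks the
            // request thread once the queue is full.
            pending.put(record);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void write(LogRecord record) {
        System.err.println(getFormatter() == null
                ? record.getMessage()
                : getFormatter().format(record));
    }

    @Override
    public void flush() { }

    @Override
    public void close() { }
}
```

Seen this way, the symptom would not be a classic two-lock deadlock but producers blocked behind a stuck consumer, which is consistent with the parked pump thread in the dump above.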
