All tomee related test issues #2057
I've done some local testing of external/tomee, running just the fault-tolerance sub-tests:
Jenkins Grinder:
What seems to be the main difference is that the JBoss repository downloads take four times longer on average in Jenkins. The TCK is designed to download many, many dependencies, and some of these are slow downloads, e.g.: Local:
Jenkins:
@andrew-m-leonard htmlunit-core-js is on Maven Central. I don't see a reason why we download it from jboss.org. Is there a way to override that in the tests? Otherwise we can override it at the level of settings.xml, but that would probably have to go into the playbooks.
@aahlenst The tck pom points at JBoss https://github.com/apache/tomee/blob/5f4da4236c914e249091afdd7f356e7977747e43/tck/pom.xml#L52
@smlambert Shelley, the full tomee tck is going to take a long time to "build" and "run tests", it's exaggerated by the slower dependency download time from Jenkins for doing the "build" stage, but even so the test run is going to take a long time...
Would that entail duplicating the external/tomee folder for each sub-module and updating dockerfile/tomee-test.sh to target the sub-module in each? We would only want it to run "weekly", xLinux to start with.
You could try to override that with http://maven.apache.org/settings.html#Mirrors. But that has to happen in the settings.xml. If all that does not help, or the JBoss repository contains artifacts that aren't on Maven Central, we should think about placing a proxy within Adopt.
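A minimal sketch of such a mirror entry in settings.xml, assuming the repository id declared in tck/pom.xml is `jboss` (the `mirrorOf` value must match whatever id the pom actually uses):

```xml
<!-- Sketch only: redirect requests for the JBoss repository to Maven Central.
     The mirrorOf value "jboss" is an assumption; take the real repository id
     from tck/pom.xml. -->
<settings>
  <mirrors>
    <mirror>
      <id>central-instead-of-jboss</id>
      <mirrorOf>jboss</mirrorOf>
      <url>https://repo.maven.apache.org/maven2</url>
    </mirror>
  </mirrors>
</settings>
```

A `mirrorOf` of `*` would force every repository through Central, which is simpler but would break any artifact that genuinely only exists on jboss.org.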
A single test:
Same test from job: https://ci.adoptopenjdk.net/job/Test_openjdk8_hs_sanity.external_x86-64_linux_tomee/484/consoleFull
So I've been trying to debug why the tomee microprofile-tck is very slow on some VMs. I have replicated it on my local VM, but I am seeing an odd behaviour I can't explain. The tests seem to go slow around where it is doing EJB lookups, I believe, but I am seeing the following behaviour that seems to make it go 10 times quicker!
Why?? I am wondering if it's some sort of XTerm session thingy…?
Examining "top" while running the tests provided no obvious clues.
The biggest clue we have is that it runs very fast on this node: https://ci.adoptopenjdk.net/label/test-ibmcloud-ubuntu1604-x64-1/
@smlambert I think we have at least two options for this issue:
and option 3 is to get to the core issue and figure out why they run fast on ibmcloud; if it's a 'solvable' infra issue, correct and/or improve the config of the other nodes so all machines are improved.
Yes, ideally... I've tried for several days and can't work it out, so I'm leaving it for the moment as it's frustrating me!!
gotcha, if we are to do an interim mitigation, I guess I would not vote for option 2, as:
Noting that a few other actions may be worth trying:
Thanks for the ideas @smlambert. I have been investigating the tomee community: I've searched their Jira bug database and have requested access to their mailing list; awaiting a response...
jdk8u275_openj9-0.23.0 exhibits the same slow performance |
Discovered the Catalina log file; on every test that starts the server, it shows where the long delay is spent:
It seems wasteful that the Catalina server is starting & stopping on every test... |
ah hah!
https://stackoverflow.com/questions/28201794/slow-startup-on-tomcat-7-0-57-because-of-securerandom |
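The linked thread points at Tomcat's session-ID generator: its SecureRandom seeding can block on /dev/random when the VM's entropy pool runs low, which would also explain why busy or interactive machines (generating entropy from input events) run much faster. A minimal sketch of the non-blocking path, assuming a Linux JDK where the default SecureRandom draws from /dev/urandom for `nextBytes()`:

```java
import java.security.SecureRandom;

// Sketch: on Linux the default SecureRandom (NativePRNG) reads /dev/urandom
// for nextBytes() and does not block, whereas seeding may touch the blocking
// /dev/random; -Djava.security.egd=file:/dev/./urandom forces the
// non-blocking source for seeding as well.
public class EntropySketch {
    public static void main(String[] args) {
        SecureRandom sr = new SecureRandom();
        byte[] buf = new byte[16];
        sr.nextBytes(buf); // non-blocking on a healthy setup
        System.out.println("generated " + buf.length + " random bytes");
    }
}
```

The usual mitigations discussed in that thread are starting the JVM with `-Djava.security.egd=file:/dev/./urandom` (e.g. via `CATALINA_OPTS`) or running an entropy daemon such as haveged on the VM.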
The following PR fixes the problem: https://github.com/AdoptOpenJDK/openjdk-tests/pull/2113/files |
The following may be just glitches, not seen during my many re-runs; I suggest we get these re-enabled and see if they re-occur:
My Grinder re-runs show that the gradle dependencies are all downloaded using https, so I suggest the following has already been fixed upstream:
#2113 is now merged, will close this but monitor to see if glitches re-occur. |
For #1993, which includes the openliberty tck on s390x, I ran a test and that one still takes 10 hrs, but that is down to using ZeroVM, which runs about 10x slower, so that would make sense.
Collapsing all open tomee issues into this one epic issue:
long-running tomee on xlinux sanity.external_x86-64_linux_tomee takes 10 hours to run #1997
long-running tomee and openliberty on s390x jdk8 sanity.external s390x_linux_openliberty-mp-tck and x86-64_linux_tomee taking a considerable amout of time on zLinux #1993
tomee hung on xlinux jdk8_hs_sanity.external_x86-64_linux_tomee hung #1992
tomee jdk8 failure on xlinux jdk8 sanity.external tomee_test_0 failure #1867
tomee gradle plugin access failure sanity.external tomee test failure (gradle plugin access issue) #1730
Will close each sub-issue, as all of the above can be gathered under this one issue and addressed (and if we cannot resurrect the health and stability of this set of tests, we can consider removing them).