
DipolarBarnesHutGpu and DipolarDirectSumGpu tests fail on ROCm #3895

Closed · espresso-ci opened this issue Sep 14, 2020 · 10 comments · Fixed by #3966

espresso-ci commented Sep 14, 2020

https://gitlab.icp.uni-stuttgart.de/espressomd/espresso/pipelines/13105

jngrad changed the title from "CI build failed for merged PR" to "DipolarBarnesHutGpu and DipolarDirectSumGpu tests fail on ROCm" on Sep 15, 2020

jngrad commented Sep 15, 2020

Currently affecting the following PRs: #3891 #3896

These tests have been failing randomly since yesterday:

```
The following tests FAILED:
	 32 - dawaanr-and-dds-gpu (Failed)
	 33 - dawaanr-and-bh-gpu (Failed)
	 34 - dds-and-bh-gpu (Failed)
```

The failures are reproducible with the docker image on coyote11. More details in #3891 (comment).
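
For context, these three tests cross-check the dipolar force and torque implementations against each other (CPU direct summation vs. the two GPU kernels). Below is a rough, hypothetical sketch of that kind of comparison, not the actual test code: it assumes the 4.1.x Python API and a GPU build with the ROTATION, DIPOLES, DIPOLAR_DIRECT_SUM and DIPOLAR_BARNES_HUT features, and the particle count and tolerances are illustrative only.

```python
# Hypothetical sketch of the CPU-vs-GPU dipolar cross-check, not the actual
# test code. Assumes ESPResSo 4.1.x built with ROTATION, DIPOLES,
# DIPOLAR_DIRECT_SUM and DIPOLAR_BARNES_HUT.
import numpy as np
import espressomd
import espressomd.magnetostatics as magnetostatics

system = espressomd.System(box_l=[10.0, 10.0, 10.0])
system.periodicity = [False, False, False]
system.time_step = 0.01
system.cell_system.skin = 0.4

# a few randomly placed, randomly oriented point dipoles
rng = np.random.default_rng(42)
for _ in range(10):
    system.part.add(pos=rng.random(3) * 10.0,
                    dip=rng.random(3) - 0.5,
                    rotation=[1, 1, 1])

def forces_and_torques(actor):
    """Compute forces/torques with the given dipolar actor, then remove it."""
    system.actors.add(actor)
    system.integrator.run(0)  # recalculate forces without moving particles
    forces = np.copy(system.part[:].f)
    torques = np.copy(system.part[:].torque_lab)
    system.actors.remove(actor)
    return forces, torques

# reference: CPU direct summation (dipolar all-with-all, no replica)
f_ref, t_ref = forces_and_torques(
    magnetostatics.DipolarDirectSumCpu(prefactor=1.0))

for gpu_actor in (magnetostatics.DipolarDirectSumGpu(prefactor=1.0),
                  magnetostatics.DipolarBarnesHutGpu(prefactor=1.0,
                                                     epssq=100.0,
                                                     itolsq=4.0)):
    f_gpu, t_gpu = forces_and_torques(gpu_actor)
    # the CI failures report "Forces/Torques on particle do not match" when
    # a comparison of this kind exceeds the tolerance (values illustrative)
    np.testing.assert_allclose(f_gpu, f_ref, atol=1e-3)
    np.testing.assert_allclose(t_gpu, t_ref, atol=1e-3)
```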


jngrad commented Sep 15, 2020

Reproducible as far back as 79b53f6. Before that, we used a different docker image and a different compiler (HCC).


mkuron commented Sep 15, 2020

Hmm, then why did it only start coming up in CI this week?

@psci2195 any ideas what might be causing it? I don't know enough about the algorithm to understand your code.


mkuron commented Sep 16, 2020

coyote10 was just rebooted and the issue appears resolved. Looks like a hardware or driver glitch.


jngrad commented Sep 16, 2020

Rebooting the runner seems to have fixed the issue. Before the reboot:

  • BH GPU0
    • free(): invalid pointer
    • Forces on particle do not match
    • Torques on particle do not match
  • BH GPU1
    • Forces on particle do not match
    • Torques on particle do not match
  • DDS GPU0
    • MPI deadlock
    • std::runtime_error
    • free(): invalid pointer
  • DDS GPU1: all good

After the reboot: all good


jngrad commented Sep 16, 2020

Rebooting the runner only made the issue irreproducible in Docker in an SSH terminal. It's still failing in CI pipelines.

The 4.1.4 release might be affected too. In Docker in an SSH terminal, there is a random MPI deadlock with dawaanr-and-bh-gpu.


jngrad commented Sep 16, 2020

We're temporarily running the ROCm jobs on lama to clear the backlog.


jngrad commented Oct 13, 2020

We exchanged one Vega 56 from coyote10 with the Vega 58 in lama. The ROCm job was run multiple times on the lama and coyote10 runners without failing.

We will leave the GPUs in this configuration for now. If the issue resurfaces on the Vega 56 but not on the Vega 58 in coyote10, we can pause the corresponding runner (GPU0 or GPU1).


jngrad commented Oct 23, 2020

Failing again on coyote10 GPU0 (logfile).

kodiakhq bot closed this as completed in #3966 on Oct 23, 2020
kodiakhq bot added a commit that referenced this issue on Oct 23, 2020
Closes #2973, closes #3895, follow-up to espressomd/docker#190

Due to their fast release cycle, ROCm packages are not stable enough for ESPResSo. We currently support ROCm 3.0 to 3.7, which means supporting two compilers (HCC and HIP-Clang) and keeping patches for each ROCm release in our codebase. Maintaining these patches and the CMake build system for ROCm is time-consuming.

The ROCm job also has a tendency to break the CI pipeline (#2973), sometimes due to random or irreproducible software bugs in ROCm, sometimes due to failing hardware in both the main CI runner and the backup CI runner. The frequency of false positives in CI is too high compared to the number of true positives; the last true positives were 5da80a9 (April 2020) and #2973 (comment) (July 2019). There are no known users of ESPResSo on AMD GPUs according to the [May 2020 user survey](https://lists.nongnu.org/archive/html/espressomd-users/2020-05/msg00001.html). The core team has decided to drop support for ROCm ([ESPResSo meeting 2020-10-20](https://github.com/espressomd/espresso/wiki/Espresso-meeting-2020-10-20)).

The Intel image cannot be built automatically in the espressomd/docker pipeline. Re-building it manually is a time-consuming procedure, usually several hours, due to the long response time of the licensing server and the size of the Parallel Studio XE source code. When a new Intel compiler version is available, updating the dockerfile usually requires the expertise of two people. The core team has decided to remove the Intel job from CI and to use the Fedora job to test ESPResSo with MPICH.

jngrad commented Oct 23, 2020

Failing on both coyote10 GPU0 (logfile) and coyote10 GPU1 (logfile). The lama backup has timeout issues.
