HIP issue list as discussed in the offline meeting #2973

Closed · KaiSzuttor opened this issue Jul 5, 2019 · 16 comments · Fixed by #3966

Comments

KaiSzuttor (Member) commented Jul 5, 2019


mkuron (Member) commented Jul 23, 2019

https://gitlab.icp.uni-stuttgart.de/espressomd/espresso/-/jobs/134161

That one was due to incorrect use of #pragma unroll, which also caused reduced performance (but no crash) on CUDA. Fixed by #2982.
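
The PR itself isn't quoted in this thread, so as a purely hypothetical illustration (not the kernel fixed in #2982), this is the kind of #pragma unroll misuse that degrades performance without breaking correctness: asking the compiler to unroll a loop whose trip count it cannot see at compile time.

```cpp
// Hypothetical illustration only; not the actual ESPResSo kernel.
__global__ void axpy(float *y, const float *x, float a, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  int stride = blockDim.x * gridDim.x;

  // Problematic: "n" is a runtime value, so the compiler cannot fully
  // unroll; depending on the toolchain the pragma is ignored or yields
  // slower, guard-laden code:
  //
  //   #pragma unroll
  //   for (int k = i; k < n; k += stride) { ... }

  // Well-defined: a compile-time trip count can be fully unrolled by
  // both nvcc and the HIP compilers.
  constexpr int chunk = 4;
  #pragma unroll
  for (int k = 0; k < chunk; ++k) {
    int idx = i + k * stride;
    if (idx < n)
      y[idx] = a * x[idx] + y[idx];
  }
}
```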

https://gitlab.icp.uni-stuttgart.de/espressomd/espresso/-/jobs/137077

Please merge the latest master branch; that issue has been fixed since #2937.

mkuron (Member) commented Jul 31, 2019

https://gitlab.icp.uni-stuttgart.de/espressomd/espresso/-/jobs/142467

That one was reproducible on an Nvidia 2080, but we don't have that card in CI. So this is one more case where AMD hardware actually helped find a bug that can affect Nvidia too.

jngrad (Member) commented Dec 13, 2019

jngrad (Member) commented Mar 12, 2020

The ROCm library rocFFT broke on multiple occasions:

fweik (Contributor) commented Apr 1, 2020

mkuron (Member) commented Apr 1, 2020

https://gitlab.icp.uni-stuttgart.de/espressomd/espresso/-/jobs/218922

That's an odd one. Why does that test even call into HIP code?

       Start   1: save_checkpoint_lb.cpu-p3m.cpu-lj-therm.lb_1
  1/149 Test   #1: save_checkpoint_lb.cpu-p3m.cpu-lj-therm.lb_1 ..............***Exception: Child aborted  1.33 sec
Memory access fault by GPU node-5 (Agent handle: 0x563058748d20) on address 0x7f94057e0000. Reason: Page not present or supervisor privilege.

jngrad (Member) commented Apr 1, 2020

It happened again today; dedicated ticket: #3620

fweik (Contributor) commented Apr 1, 2020

@mkuron I think the GPU initialization code is always run, to detect the devices present and so on, but I haven't checked.
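
If so, that would explain the crash: merely probing for devices initializes the runtime, so a broken driver stack can take down tests that never launch a kernel. A minimal sketch of such unconditional probing, assuming a hypothetical helper rather than ESPResSo's actual initialization code:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Hypothetical helper: querying the device count is typically the
// first runtime call a GPU-enabled build makes, and it already
// initializes the HIP runtime, even if the test itself is CPU-only.
int count_available_gpus() {
  int n_devices = 0;
  hipError_t err = hipGetDeviceCount(&n_devices);
  if (err != hipSuccess) {
    std::fprintf(stderr, "no usable GPU: %s\n", hipGetErrorString(err));
    return 0;
  }
  return n_devices;
}
```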

mkuron (Member) commented Apr 2, 2020

ROCm 3.3 was released last night (not sure what happened to 3.2). It's installed for testing on lama. It's still broken in multiple ways:

  • ln -s /opt/rocm/bin/hcc* /opt/rocm/hip/bin/ required because the hipcc_cmake_linker_helper has not been fixed.
  • tests succeed, but hang during HSA::hsa_shut_down()

At least they fixed the cudaMemcpyToSymbol issue that broke the EK and LB tests. The shutdown hang is probably our own fault, though; it is likely related to the order of destruction of static/global variables and library unloading.
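
A minimal sketch of that destruction-order hazard, with a hypothetical global owning device memory (not ESPResSo's actual code):

```cpp
#include <hip/hip_runtime.h>

// C++ gives no ordering guarantee between the destructors of our own
// globals and the runtime's teardown. If global_buffer is destroyed
// after the HIP/HSA runtime has shut down, ~DeviceBuffer() calls into
// a dead runtime and can hang or fault inside HSA::hsa_shut_down().
struct DeviceBuffer {
  void *ptr = nullptr;
  DeviceBuffer() { (void)hipMalloc(&ptr, 1024); }
  ~DeviceBuffer() { (void)hipFree(ptr); }  // may outlive the runtime
};

static DeviceBuffer global_buffer;  // destroyed after main() returns

int main() { return 0; }
```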

jngrad (Member) commented Sep 16, 2020

mkuron (Member) commented Sep 16, 2020

#3895

No idea about that one; it's either a hardware or a driver issue, and a heisenbug too. We don't have anyone here who understands the Barnes-Hut code, so our debugging abilities are rather limited.

jngrad (Member) commented Sep 23, 2020

jngrad (Member) commented Oct 16, 2020

  • the lama backup broke down several times before, during and after the summer school (Sep 30, twice on Oct 6, Oct 12), putting us in a situation where CI could not pass and PRs could not get merged

mkuron (Member) commented Oct 16, 2020

lama

The first two cases were due to a broken SSD; the other one was due to a crashed graphics driver.

KaiSzuttor (Member Author)

As long as we do not have redundancy for testing ROCm, we cannot use it in CI.

kodiakhq bot closed this as completed in #3966 on Oct 23, 2020
kodiakhq bot added a commit that referenced this issue Oct 23, 2020
Closes #2973, closes #3895, follow-up to espressomd/docker#190

Due to their fast release cycle, ROCm packages are not stable enough for ESPResSo. We are currently supporting ROCm 3.0 to 3.7, which means supporting two compilers (HCC and HIP-Clang) and keeping patches for each ROCm release in our codebase.  Maintaining these patches and the CMake build system for ROCm is time-consuming. The ROCm job also has a tendency to break the CI pipeline (#2973), sometimes due to random or irreproducible software bugs in ROCm, sometimes due to failing hardware in both the main CI runner and backup CI runner. The frequency of false positives in CI is too large compared to the number of true positives. The last true positives were 5da80a9 (April 2020) and #2973 (comment) (July 2019). There are no known users of ESPResSo on AMD GPUs according to the [May 2020 user survey](https://lists.nongnu.org/archive/html/espressomd-users/2020-05/msg00001.html). The core team has decided to drop support for ROCm ([ESPResSo meeting 2020-10-20](https://github.com/espressomd/espresso/wiki/Espresso-meeting-2020-10-20)).

The Intel image cannot be built automatically in the espressomd/docker pipeline. Re-building it manually is a time-consuming procedure, usually several hours, due to long response time from the licensing server and the size of the Parallel Studio XE source code. When a new Intel compiler version is available, it usually requires the expertise of two people to update the dockerfile. The core team has decided to remove the Intel job from CI and use the Fedora job to test ESPResSo with MPICH.