Update images and add Ubuntu 20.04 #157
Conversation
The issue in v0.16 and v0.17 seems to have been fixed in v0.17.1.
The ROCm 3.1 release has numerous bugs that can only be fixed upstream.
The new Ubuntu 20.04 image features GCC 9, Clang 9, Boost 1.71 and CMake 3.16.3.
A few notes on Clang 9:
https://gitlab.icp.uni-stuttgart.de/espressomd/docker/-/jobs/214138
@mkuron could you please have a look?
@mkuron I've changed the line in
@jngrad please don't invest too much time on this unstable feature
The ROCm apt URL was pointing to the latest ROCm release, causing a mix of ROCm 3.0 and ROCm 3.1 libraries to be installed and breaking the /opt/rocm symlink to /opt/rocm-3.1.0.
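Pinning the apt source to a version-specific ROCm repository would prevent that mix; a minimal sketch, where the exact repo.radeon.com path and the package name are assumptions rather than what our Dockerfile actually uses:

```sh
# Sketch only: use a version-specific ROCm apt repository instead of the rolling
# "latest" one, so ROCm 3.0 and 3.1 packages cannot end up mixed in one image.
# The repository path and package name below are assumptions for illustration.
echo 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/3.1/ xenial main' \
    > /etc/apt/sources.list.d/rocm.list
apt-get update && apt-get install -y rocm-dev
```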
This ROCm issue should have been investigated 10 days ago when the master CI was red. We were in the exact same situation in February with the kaniko and Intel images, and it bled into the coding day. We'll keep wasting precious time until we start taking CI seriously and fix a failing master before it collides with the regular flow of docker PRs.
@fweik and I discussed this. The CI of espresso should be completely independent of updates to any images in the docker repository. We should pin the version of the images used for CI and only change that version via pull requests.
@RudolfWeeber and I discussed that as well last month, and we wanted to have CI images follow the release workflow of espresso, e.g.
This way we can fix the distro instead of having
But this is not related to releases. We should have internal versioning at all times. This way you disentangle the docker container development from the espresso CI.
so you mean
yes
But then, how do we regenerate a corrupted/deleted image? We would have to pin every single apt and Python package to guarantee we don't get anything newer, which is tricky if you want to install a new Python package that depends on a newer version of a pinned package.
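For context, pinning every single package would mean hard-coding exact version strings for each apt and Python dependency, along the lines of the following sketch (all version numbers are purely illustrative, not the ones our images actually use):

```sh
# Illustrative only: hard-pinning apt and Python packages to exact versions.
# Any newer release of a pinned dependency is then silently ignored.
apt-get install -y --no-install-recommends cmake=3.16.3-1ubuntu1 libboost-dev=1.71.0.0ubuntu2
python3 -m pip install "cython==0.29.14" "numpy==1.17.4"
```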
I don't think manual versioning is the right approach. It's possible to pull a specific hash,
They never do. That's why we build and run Espresso tests in the docker repo too.
This wouldn't have been avoided by any kind of version pinning. If CI in the docker repo is red and you urgently need to change a Dockerfile, you first need to make the master green again either way. The only thing we could do is pin the Docker base images to specific SHA-256 digests. That's a terrible idea though, as you are not guaranteed that untagged hashes will remain available on Docker Hub forever. And distribution base images don't have a version number tag beyond the distribution number.
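For reference, pinning by digest rather than by tag would look like the following (digest left as a placeholder); the same name@sha256:... reference can also be used in a Dockerfile FROM line:

```sh
# Pull a base image by content digest instead of by mutable tag.
# <digest> is a placeholder for a real sha256 value.
docker pull ubuntu@sha256:<digest>
```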
That's true, but after offline discussion, what @fweik meant was pinning the digest of the built docker image into the espressomd/espresso
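Presumably that means recording the built image's digest in espresso's CI configuration; a minimal sketch of how such a digest could be obtained, with a placeholder registry path and image name:

```sh
# Resolve the immutable digest of the image the docker repo's CI just deployed.
# <registry> and the image name are placeholders, not our actual registry layout.
docker pull <registry>/espressomd/docker/ubuntu:20.04
docker inspect --format '{{index .RepoDigests 0}}' <registry>/espressomd/docker/ubuntu:20.04
# The printed <name>@sha256:... string would then be referenced from espresso's
# CI configuration until a pull request bumps it.
```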
This happened in the past:
With @fweik's proposal we can disable the test and deploy stages in docker. The build stage already deploys the image to a
This doesn't address all issues, though. For example, we have images that are used by multiple espresso releases, e.g.
Also, we stop using base images with tag
In the past, we have had to occasionally delete all images from the registry and re-run the build job because the registry got corrupted. This hasn't happened in at least a year, but it makes pinning the digest potentially fragile.
This happened last month: the cached layers got mixed with corrupted layers and I had to delete all of them. Some cached layers probably got silently corrupted and went live in the deploy stage; we can't know because these deployed images are not recycled.
True, we can't retry an old CI pipeline if the image is damaged or lost. But this is also true for our current infrastructure: our dockerfiles need to be valid for the duration of at least two espresso minor releases. We take a lot of care not to introduce breaking changes in a dockerfile, but we don't actually measure it. We would have to periodically retry old pipelines to make sure that our dockerfiles are fully backward-compatible, which is computationally prohibitive. How confident are we that the dockerfiles in
By having dockerfiles directly in espressomd/espresso and pinned images in
By having dockerfiles directly in espressomd/espresso, we can also have an extra CMake command that generates a docker image suitable to build espresso. There will be no guarantee that this image will be from a dockerfile that is in sync with the current commit, but we could guarantee that by having
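As a rough sketch, such an extra CMake command could boil down to a wrapper around docker build, tagging the image with the current commit so image and sources stay in sync (the Dockerfile path and image name below are hypothetical):

```sh
# Hypothetical: what a CMake-driven "build my CI image" target could run.
docker build -f maintainer/docker/Dockerfile-ubuntu \
    -t espressomd/espresso-dev:$(git rev-parse --short HEAD) .
```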
We moved the Dockerfiles to a separate repository three years ago because @KaiSzuttor got annoyed with non-Espresso pull requests and files polluting the Espresso repository.
So? This is a socially motivated argument against it. Do you have any technical criteria, pro or con?