docker: use docker buildx to build multi arch #673
Conversation
@micahsnyder Can you give me some guidance on how to test locally?
@arielmorelli when we build the images and then run the script to update the images with the latest databases, we use this:

```bash
clamav_docker_user="micasnyd"
docker_registry="registry.hub.docker.com"
# Make sure we have the latest alpine image.
docker pull alpine:latest
# Build the base image
docker build --tag "${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base" .
# Login to docker hub
echo "${_passwd:-${DOCKER_PASSWD}}" | \
docker login --password-stdin \
--username "${clamav_docker_user}" \
"${docker_registry}"
# Make a tag with the registry name in it so we can push wherever
docker image tag ${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base
# Push the image/tag
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base
# Give it some time to add the new ${CLAMAV_FULL_PATCH_VERSION}_base image.
# In past jobs, it didn't detect the new image until we re-ran this job. I suspect because it needed a little delay after pushing before pulling.
sleep 20
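# (Alternative sketch, not part of the original job: instead of a fixed sleep,
#  poll the registry until it can serve the new tag. Assumes `docker manifest
#  inspect` is available; older Docker CLIs need experimental features enabled.)
# until docker manifest inspect \
#     "${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base" > /dev/null 2>&1
# do
#     sleep 2
# done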
# Create extra tags of the base image.
docker image tag ${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}_base
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}_base
# Generate and push an image without the "_base" suffix that contains the databases.
#
# TODO: There's bug where this actually updates _all_ containers and not just the tag we specify.
# See https://jira-eng-sjc1.cisco.com/jira/browse/CLAM-1552?filter=16896
CLAMAV_DOCKER_USER="${clamav_docker_user}" \
CLAMAV_DOCKER_PASSWD="$DOCKER_PASSWD" \
DOCKER_REGISTRY="${docker_registry}" \
CLAMAV_DOCKER_IMAGE="${CLAMAV_DOCKER_IMAGE_NAME}" \
CLAMAV_DOCKER_TAG="${CLAMAV_FULL_PATCH_VERSION}" \
./dockerfiles/update_db_image.sh -t ${CLAMAV_FULL_PATCH_VERSION}
# Login to docker hub (again, because the update_db_image.sh script removed our creds in its cleanup stage)
echo "${_passwd:-${DOCKER_PASSWD}}" | \
docker login --password-stdin \
--username "${clamav_docker_user}" \
"${docker_registry}"
# Create extra tags of the main (database loaded) image.
docker image tag ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION} ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}
# log-out (again)
docker logout "${docker_registry:-}"
```

Jenkins has parameter fields that we fill in for the variables (CLAMAV_DOCKER_IMAGE_NAME, CLAMAV_FULL_PATCH_VERSION, CLAMAV_FEATURE_RELEASE, and DOCKER_PASSWD).
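If you want to run that script outside of Jenkins, you could stand in for the parameter fields with plain environment variables. A minimal sketch, where all the values are placeholders rather than the ones we actually use:

```bash
# Placeholder values standing in for the Jenkins parameter fields.
export CLAMAV_DOCKER_IMAGE_NAME="yourname/clamav"
export CLAMAV_FULL_PATCH_VERSION="0.105.1"
export CLAMAV_FEATURE_RELEASE="0.105"
export DOCKER_PASSWD="your-docker-hub-access-token"
```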
You could do something similar if you set up your own. I did some testing with `docker buildx`: I created this test branch where I switched over to Debian: val-ms@ccfbc02. I was able to build for amd64, arm64, and ppc64le (requested by @snehakpersistent in #624) using the debian-slim dockerfile. However, I found that the non-native architectures were REALLY slow. This is a problem right now in that the CTest suite takes far too long under emulation. Anyways... long story short, multi-arch builds work with buildx, but the emulated builds and tests are very slow.
(force-pushed from e9edb28 to 2a11cf5)
@micahsnyder thanks for the help. I was able to build for amd64, amd64/v3, and arm64. I just run the "pipeline" locally with:

```bash
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker buildx build --platform=linux/amd64,linux/amd64/v3,linux/arm64 -t arielmorelli/clamav:$(date +%s) --push .
```

(Of course, I'm logged in to Docker Hub.)
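One note in case anyone reproduces this: multi-platform builds require a `docker-container` builder, since the default `docker` driver cannot produce manifest lists. A quick setup sketch (the builder name `multiarch` is arbitrary):

```bash
# Create and select a docker-container builder for multi-platform builds.
docker buildx create --name multiarch --driver docker-container --use

# Bootstrap the builder and list the platforms it can target.
docker buildx inspect --bootstrap
```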
Some important thoughts here: the next step is to properly test this image by running it locally. I tried, but I could not get it working. Can you help me with this topic?
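For reference, a minimal smoke test might look like this (a sketch only; the image tag `arielmorelli/clamav:test` and clamd's default TCP port 3310 are assumptions):

```bash
# Start the image in the background, exposing clamd's default TCP port.
docker run --rm -d --name clamav-test -p 3310:3310 arielmorelli/clamav:test

# Once the databases have loaded and clamd is up, ask it for its version.
docker exec clamav-test clamdscan --version
```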
@micahsnyder can you check this PR?
@arielmorelli the freshclam failure should be resolved by now. I imagine you may have tested it too many times and it rate-limited you for downloading the same file too often. The test failure you mention is probably a knock-on effect of the same thing.

Overall I think you're on the right track with building with a debian rust image, then switching to a debian slim image for production. That said, I noticed that there is no ppc64le variant of the rust image.

In reading through the rest of the changes, they all look good to me. I also built it okay with a normal `docker build`.
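For example, one quick way to check which platforms a base image actually publishes is `buildx imagetools` (the `rust:1-slim-bullseye` tag here is just an illustration):

```bash
# Print the manifest list for a tag, including every platform it is published for.
docker buildx imagetools inspect rust:1-slim-bullseye
```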
I don't see the image listed on my computer, so I suspect the ppc64le build failed.

In short, this looks good to me outside of the ppc64le issue. I am a little worried we'll have some complaints about switching from alpine to debian, but I think we need to for the multi-arch builds to work.
The problem with freshclam probably was because I was running another clamav instance in the background (for another project). I also had this problem with `--load`; as far as I figured out, it only works for single-platform builds:

```
example$ cat Dockerfile.yaml
FROM hello-world
example$ docker images | grep new-hello
example$ docker buildx build --platform=linux/amd64,linux/amd64/v3,linux/arm64 -t new-hello -f Dockerfile.yaml .
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 1.7s (8/8) FINISHED
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile.yaml 0.0s
=> => transferring dockerfile: 59B 0.0s
=> [linux/arm64 internal] load metadata for docker.io/library/hello-world:latest 1.6s
=> [linux/amd64 internal] load metadata for docker.io/library/hello-world:latest 0.9s
=> [linux/amd64/v3 internal] load metadata for docker.io/library/hello-world:latest 0.9s
=> [linux/amd64/v3 1/1] FROM docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2 0.0s
=> => resolve docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2 0.0s
=> [linux/amd64 1/1] FROM docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2 0.0s
=> => resolve docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2 0.0s
=> [linux/arm64 1/1] FROM docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2 0.1s
=> => resolve docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2 0.0s
example$ docker images | grep new-hello
example$ docker buildx build --platform=linux/amd64,linux/amd64/v3,linux/arm64 -t new-hello -f Dockerfile.yaml . --load
[+] Building 0.0s (0/0)
error: docker exporter does not currently support exporting manifest lists
example$ docker buildx build -t new-hello -f Dockerfile.yaml . --load
[+] Building 1.0s (6/6) FINISHED
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile.yaml 0.0s
=> => transferring dockerfile: 59B 0.0s
=> [internal] load metadata for docker.io/library/hello-world:latest 0.6s
=> [1/1] FROM docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2 0.2s
=> => resolve docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2 0.0s
=> => sha256:2db29710123e3e53a794f2694094b9b4338aa9ee5c40b930cb8063a1be392c54 2.48kB / 2.48kB 0.2s
=> exporting to oci image format 0.3s
=> => exporting layers 0.0s
=> => exporting manifest sha256:2c7b4489fdf36359ce4c8e009b5ade432f116dab090fb1326142a7abb90332af 0.0s
=> => exporting config sha256:acb998384af1e101aa9d7a0154791d0c0cc023667b24d29abcb8d871963bd396 0.0s
=> => sending tarball 0.1s
=> importing to docker 0.1s
example$ docker images | grep new-hello
new-hello    latest    acb998384af1   11 months ago   13.3kB
```

I tried to build using debian and then alpine for the final stage, but I couldn't get that to work, so using slim is the best option in my head.
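A workaround for the `--load` limitation is to build one platform at a time, so the result is a plain image instead of a manifest list (a sketch reusing the hello-world example above; the `arm64` platform and tag are arbitrary):

```bash
# --load works when the build targets a single platform, because the result
# is a plain image rather than a manifest list.
docker buildx build --platform=linux/arm64 -t new-hello:arm64 -f Dockerfile.yaml --load .

# The emulated image can then be run locally through qemu.
docker run --rm --platform linux/arm64 new-hello:arm64
```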
By the way, I haven't forgotten about this. I am planning to move all our docker stuff to https://github.com/Cisco-Talos/clamav-docker and, rather than switching from alpine to debian immediately, I will include both. I am planning for the debian variant to include these changes. I'm hoping I can work on this later next week, after we publish the 1.0.0 feature release candidate materials for review. We generally do not publish docker tags for release candidates, since there is the `unstable` tag for that.
@arielmorelli I have continued your work in our new clamav-docker repo: https://github.com/Cisco-Talos/clamav-docker

See: https://github.com/Cisco-Talos/clamav-docker/tree/main/clamav/unstable/debian

I have to pause to focus on fixes for the 1.0.0 release and will resume when I get a chance. My plan is to publish image tags to Docker Hub from there.

Closing this PR now that the Docker stuff will continue in the new repo. You are welcome to help with contributions over there.
@micahsnyder Are there any updates or possible ETA on the multi-arch variant of these official images?
Use `docker buildx` to build and push. This allows multi-arch and closes #482.