
docker: use docker buildx to build multi arch #673

Closed
wants to merge 1 commit

Conversation

arielmorelli

Use docker buildx to build and push. This allows multi-arch builds and closes #482

@arielmorelli arielmorelli changed the title docker: use docker buildx to build multi architecture docker: use docker buildx to build multi arch Aug 9, 2022
@arielmorelli
Author

arielmorelli commented Aug 9, 2022

@micahsnyder Can you give me some guidance on how to test locally?

@arielmorelli arielmorelli marked this pull request as draft August 9, 2022 09:27
@val-ms
Contributor

val-ms commented Aug 11, 2022

@arielmorelli when we build the images and then run the script to update the images with the latest databases, we use this:

clamav_docker_user="micasnyd"
docker_registry="registry.hub.docker.com"

# Make sure we have the latest alpine image.
docker pull alpine:latest

# Build the base image
docker build --tag "${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base" .

# Login to docker hub
echo "${_passwd:-${DOCKER_PASSWD}}" | \
	docker login --password-stdin \
	             --username "${clamav_docker_user}" \
	             "${docker_registry}"

# Make a tag with the registry name in it so we can push wherever
docker image tag ${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base

# Push the image/tag
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base

# Give it some time to add the new ${CLAMAV_FULL_PATCH_VERSION}_base image.
# In past jobs, it didn't detect the new image until we re-ran this job. I suspect because it needed a little delay after pushing before pulling.
sleep 20

# Create extra tags of the base image. 
docker image tag ${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}_base
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}_base

# Generate and push an image without the "_base" suffix that contains the databases.
#
# TODO: There's bug where this actually updates _all_ containers and not just the tag we specify. 
#       See https://jira-eng-sjc1.cisco.com/jira/browse/CLAM-1552?filter=16896
CLAMAV_DOCKER_USER="${clamav_docker_user}" \
CLAMAV_DOCKER_PASSWD="$DOCKER_PASSWD" \
DOCKER_REGISTRY="${docker_registry}" \
CLAMAV_DOCKER_IMAGE="${CLAMAV_DOCKER_IMAGE_NAME}" \
CLAMAV_DOCKER_TAG="${CLAMAV_FULL_PATCH_VERSION}" \
  ./dockerfiles/update_db_image.sh -t ${CLAMAV_FULL_PATCH_VERSION}

# Login to docker hub (again, because the update_db_image.sh script removed our creds in its cleanup stage)
echo "${_passwd:-${DOCKER_PASSWD}}" | \
	docker login --password-stdin \
	             --username "${clamav_docker_user}" \
	             "${docker_registry}"

# Create extra tags of the main (database loaded) image.
docker image tag ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION} ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}

# log-out (again)
docker logout "${docker_registry:-}"

Where Jenkins has parameter fields we fill in for the variables:

  • CLAMAV_FULL_PATCH_VERSION (e.g. 0.104.4)
  • CLAMAV_FEATURE_RELEASE (e.g. 0.104)
  • CLAMAV_BRANCH (e.g. rel/0.104)
  • CLAMAV_DOCKER_IMAGE_NAME (e.g. micasnyd/clamav for my personal account for testing, and clamav/clamav when we publish the official images)
  • DOCKER_PASSWD for my dockerhub token.

You could do something similar if you set up your own.
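For comparison, a buildx-based equivalent of the build-and-push steps above might look like the sketch below. It reuses the same variable names as the script; the `--platform` list and the builder name `clamav-builder` are assumptions for illustration, not part of the existing Jenkins job:

```shell
# Create (or reuse) a buildx builder capable of multi-platform builds.
docker buildx create --name clamav-builder --use 2>/dev/null \
	|| docker buildx use clamav-builder

# Login first: with --push, buildx sends the result straight to the registry.
echo "${_passwd:-${DOCKER_PASSWD}}" | \
	docker login --password-stdin \
	             --username "${clamav_docker_user}" \
	             "${docker_registry}"

# Build the base image for several architectures and push in one step.
# Note: a multi-platform result is a manifest list and cannot be kept in
# the local daemon with --load; it has to be pushed.
docker buildx build \
	--platform linux/amd64,linux/arm64 \
	--tag "${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base" \
	--push \
	.

docker logout "${docker_registry}"
```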

I did some testing with docker buildx on my Windows device running Docker Desktop, as well as on the 2020 Mac Mini that would actually be our docker image builder through Jenkins. I was unable to build for arm64 on alpine:latest. I forget the exact issue. But we also have an internal interest in switching to debian as a base.

I created this test branch where I switched over to Debian: val-ms@ccfbc02

I was able to build for amd64, arm64, and ppc64le (requested by @snehakpersistent in #624) using the debian-slim dockerfile. However, I found that the non-native architectures were REALLY slow. This is a problem right now because the CTest libclamav_rust test builds the test executable when you run the test. That's annoyingly slow already, and when emulating, it's slow enough to fail because of the test timeout. So the next step, I think, is to either raise the test timeout or make it build the test executable as part of the main clamav build. Ideally rust wouldn't have to recompile all the dependencies (that's what takes the most time) when building the test executable, because it already compiles them once for libclamav_rust itself. But I don't know how to do that. So... I or someone on my team should find some time to figure that out.

Anyway, long story short: buildx isn't working for me on the systems I tested with unless I switch from alpine to debian. I'd be curious to know whether you ran into that issue at all with buildx on your side.

@arielmorelli arielmorelli force-pushed the main branch 2 times, most recently from e9edb28 to 2a11cf5 Compare August 19, 2022 07:46
@arielmorelli
Author

@micahsnyder thanks for the help.

I was able to build for linux/amd64, linux/amd64/v3 and linux/arm64, as you can see here: https://hub.docker.com/layers/clamav/arielmorelli/clamav/1660911701/images/sha256-fc045c014bef9b507d530bcec10c249b23e0f585478481486967a01e17e847be?context=repo

For linux/amd64/v2 the tests fail (though they pass for v3), and I don't know how to debug this properly.

I just ran the "pipeline" locally with:

docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker buildx build --platform=linux/amd64,linux/amd64/v3,linux/arm64 -t arielmorelli/clamav:$(date +%s) --push .

(of course I'm logged in to Docker Hub)

Some important thoughts here:

  • To build, any alpine or slim image takes forever, so using a larger image is fine with me
  • Rust is very, very slow for arm under emulation, so an image without rust preinstalled takes forever
  • To serve, using alpine or slim is fine, because the binary is already built

The next step is to properly test this image locally. I tried docker run arielmorelli/clamav:1660911701, but freshclamd fails because of DNS.

Can you help me with this topic?

This is the current output:

clamav(main)$ docker run arielmorelli/clamav:1660911701
Unable to find image 'arielmorelli/clamav:1660911701' locally
1660911701: Pulling from arielmorelli/clamav
Digest: sha256:529cf92a449926732abe0a25a05e78193e57b863184e6d97dde1c7232307adac
Status: Downloaded newer image for arielmorelli/clamav:1660911701
Updating initial database
ClamAV update process started at Fri Aug 19 16:51:50 2022
daily database available for download (remote version: 26632)
^Can't download daily.cvd from https://database.clamav.net/daily.cvd
WARNING: FreshClam received error code 429 from the ClamAV Content Delivery Network (CDN).
This means that you have been rate limited by the CDN.
 1. Run FreshClam no more than once an hour to check for updates.
    FreshClam should check DNS first to see if an update is needed.
 2. If you have more than 10 hosts on your network attempting to download,
    it is recommended that you set up a private mirror on your network using
    cvdupdate (https://pypi.org/project/cvdupdate/) to save bandwidth on the
    CDN and your own network.
 3. Please do not open a ticket asking for an exemption from the rate limit,
    it will not be granted.
WARNING: You are on cool-down until after: 2022-08-20 07:58:17
main database available for download (remote version: 62)
^Can't download main.cvd from https://database.clamav.net/main.cvd
WARNING: FreshClam received error code 429 from the ClamAV Content Delivery Network (CDN).
This means that you have been rate limited by the CDN.
 1. Run FreshClam no more than once an hour to check for updates.
    FreshClam should check DNS first to see if an update is needed.
 2. If you have more than 10 hosts on your network attempting to download,
    it is recommended that you set up a private mirror on your network using
    cvdupdate (https://pypi.org/project/cvdupdate/) to save bandwidth on the
    CDN and your own network.
 3. Please do not open a ticket asking for an exemption from the rate limit,
    it will not be granted.
WARNING: You are on cool-down until after: 2022-08-20 07:58:17
bytecode database available for download (remote version: 333)
Testing database: '/var/lib/clamav/tmp.ab12df49dd/clamav-0861c30f2160fa97ffafcc301a0a7782.tmp-bytecode.cvd' ...
Database test passed.
bytecode.cvd updated (version: 333, sigs: 92, f-level: 63, builder: awillia2)
WARNING: Clamd was NOT notified: Can't connect to clamd through /run/clamav/clamd.sock: No such file or directory
Starting ClamAV
Socket for clamd not found yet, retrying (0/1800) ...Fri Aug 19 16:51:51 2022 -> Limits: Global time limit set to 120000 milliseconds.
Fri Aug 19 16:51:51 2022 -> Limits: Global size limit set to 419430400 bytes.
Fri Aug 19 16:51:51 2022 -> Limits: File size limit set to 104857600 bytes.
Fri Aug 19 16:51:51 2022 -> Limits: Recursion level limit set to 17.
Fri Aug 19 16:51:51 2022 -> Limits: Files limit set to 10000.
Fri Aug 19 16:51:51 2022 -> Limits: MaxEmbeddedPE limit set to 41943040 bytes.
Fri Aug 19 16:51:51 2022 -> Limits: MaxHTMLNormalize limit set to 41943040 bytes.
Fri Aug 19 16:51:51 2022 -> Limits: MaxHTMLNoTags limit set to 8388608 bytes.
Fri Aug 19 16:51:51 2022 -> Limits: MaxScriptNormalize limit set to 20971520 bytes.
Fri Aug 19 16:51:51 2022 -> Limits: MaxZipTypeRcg limit set to 1048576 bytes.
Fri Aug 19 16:51:51 2022 -> Limits: MaxPartitions limit set to 50.
Fri Aug 19 16:51:51 2022 -> Limits: MaxIconsPE limit set to 100.
Fri Aug 19 16:51:51 2022 -> Limits: MaxRecHWP3 limit set to 16.
Fri Aug 19 16:51:51 2022 -> Limits: PCREMatchLimit limit set to 100000.
Fri Aug 19 16:51:51 2022 -> Limits: PCRERecMatchLimit limit set to 2000.
Fri Aug 19 16:51:51 2022 -> Limits: PCREMaxFileSize limit set to 104857600.
Fri Aug 19 16:51:51 2022 -> Archive support enabled.
Fri Aug 19 16:51:51 2022 -> AlertExceedsMax heuristic detection disabled.
Fri Aug 19 16:51:51 2022 -> Heuristic alerts enabled.
Fri Aug 19 16:51:51 2022 -> Portable Executable support enabled.
Fri Aug 19 16:51:51 2022 -> ELF support enabled.
Fri Aug 19 16:51:51 2022 -> Mail files support enabled.
Fri Aug 19 16:51:51 2022 -> OLE2 support enabled.
Fri Aug 19 16:51:51 2022 -> PDF support enabled.
Fri Aug 19 16:51:51 2022 -> SWF support enabled.
Fri Aug 19 16:51:51 2022 -> HTML support enabled.
Fri Aug 19 16:51:51 2022 -> XMLDOCS support enabled.
Fri Aug 19 16:51:51 2022 -> HWP3 support enabled.
Fri Aug 19 16:51:51 2022 -> Self checking every 600 seconds.
socket found, clamd started.
Starting Freshclamd
ClamAV update process started at Fri Aug 19 16:51:52 2022
WARNING: FreshClam previously received error code 429 or 403 from the ClamAV Content Delivery Network (CDN).
This means that you have been rate limited or blocked by the CDN.
 1. Verify that you're running a supported ClamAV version.
    See https://docs.clamav.net/faq/faq-eol.html for details.
 2. Run FreshClam no more than once an hour to check for updates.
    FreshClam should check DNS first to see if an update is needed.
 3. If you have more than 10 hosts on your network attempting to download,
    it is recommended that you set up a private mirror on your network using
    cvdupdate (https://pypi.org/project/cvdupdate/) to save bandwidth on the
    CDN and your own network.
 4. Please do not open a ticket asking for an exemption from the rate limit,
    it will not be granted.
WARNING: You are still on cool-down until after: 2022-08-20 07:58:17

@arielmorelli
Author

@micahsnyder can you check this PR?

@val-ms
Contributor

val-ms commented Sep 14, 2022

@arielmorelli the freshclam failure should be resolved by now. I imagine you may have tested it too many times and it rate limited you for downloading the same file too often.

The test failure you mention is probably because the libclamav_rust test is exceedingly slow, because it builds the test executable during the test, and because it builds it from scratch. I have a fix for this in progress, here: #694

Overall I think you're on the right track with building with a debian rust image, then switching to a debian slim image for production. That said, I noticed that there is no ppc64le version of the rust:1.62.1-bullseye image. So with this change we still won't be able to build for powerpc. Only arm64 and amd64.
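One way to check which platforms a base image actually provides (a hypothetical usage sketch; rust:1.62.1-bullseye is taken from the comment above) is to inspect its published manifest list:

```shell
# "docker manifest inspect" prints the manifest list as JSON; each entry's
# "platform" field names an os/architecture pair the image is published for.
docker manifest inspect rust:1.62.1-bullseye | grep -A 3 '"platform"'

# buildx offers the same information in a friendlier format:
docker buildx imagetools inspect rust:1.62.1-bullseye
```

If ppc64le is absent from the output, buildx cannot build a ppc64le variant from that base image.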

In reading through the rest of the changes, they all look good to me. I also built it okay with normal docker build. I tried with docker buildx the last two nights before end of day, and both times when I came back, something different had caused my (windows) computer to reboot. I'm testing this:

docker buildx build --platform linux/arm64/v8,linux/amd64 --tag pr-673 .

I don't see the image listed on my computer, so I suspect it failed with ctest due to the test timeout. But I don't have the logs. I'll have to try again.

In short, this looks good to me outside of the ppc64le issue. I am a little worried we'll have some complaints about switching from alpine to debian, but I think we need to for the multi-arch builds to work.

@arielmorelli
Author

The problem with freshclam was probably because I was running another clamav instance in the background (for another project).

I also had this problem with load: as far as I can tell, docker buildx cannot load images into the local daemon when building for multiple platforms, but it can push them to a registry.
You can build and check the output with --platform, but to load the result locally you need to pass --load. Here is an example with the same issue:

example$ cat Dockerfile.yaml 
FROM hello-world
example$ docker images | grep new-hello
example$ docker buildx build --platform=linux/amd64,linux/amd64/v3,linux/arm64 -t new-hello -f Dockerfile.yaml .
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 1.7s (8/8) FINISHED                                                                                                                                          
 => [internal] load .dockerignore                                                                                                                                    0.0s
 => => transferring context: 2B                                                                                                                                      0.0s
 => [internal] load build definition from Dockerfile.yaml                                                                                                            0.0s
 => => transferring dockerfile: 59B                                                                                                                                  0.0s
 => [linux/arm64 internal] load metadata for docker.io/library/hello-world:latest                                                                                    1.6s
 => [linux/amd64 internal] load metadata for docker.io/library/hello-world:latest                                                                                    0.9s
 => [linux/amd64/v3 internal] load metadata for docker.io/library/hello-world:latest                                                                                 0.9s
 => [linux/amd64/v3 1/1] FROM docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2                                  0.0s
 => => resolve docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2                                                 0.0s
 => [linux/amd64 1/1] FROM docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2                                     0.0s
 => => resolve docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2                                                 0.0s
 => [linux/arm64 1/1] FROM docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2                                     0.1s
 => => resolve docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2                                                 0.0s
example$ docker images | grep new-hello
example$ docker buildx build --platform=linux/amd64,linux/amd64/v3,linux/arm64 -t new-hello -f Dockerfile.yaml . --load
[+] Building 0.0s (0/0)                                                                                                                                                   
error: docker exporter does not currently support exporting manifest lists
example$ docker buildx build -t new-hello -f Dockerfile.yaml . --load
[+] Building 1.0s (6/6) FINISHED                                                                                                                                          
 => [internal] load .dockerignore                                                                                                                                    0.0s
 => => transferring context: 2B                                                                                                                                      0.0s
 => [internal] load build definition from Dockerfile.yaml                                                                                                            0.0s
 => => transferring dockerfile: 59B                                                                                                                                  0.0s
 => [internal] load metadata for docker.io/library/hello-world:latest                                                                                                0.6s
 => [1/1] FROM docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2                                                 0.2s
 => => resolve docker.io/library/hello-world@sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2                                                 0.0s
 => => sha256:2db29710123e3e53a794f2694094b9b4338aa9ee5c40b930cb8063a1be392c54 2.48kB / 2.48kB                                                                       0.2s
 => exporting to oci image format                                                                                                                                    0.3s
 => => exporting layers                                                                                                                                              0.0s
 => => exporting manifest sha256:2c7b4489fdf36359ce4c8e009b5ade432f116dab090fb1326142a7abb90332af                                                                    0.0s
 => => exporting config sha256:acb998384af1e101aa9d7a0154791d0c0cc023667b24d29abcb8d871963bd396                                                                      0.0s
 => => sending tarball                                                                                                                                               0.1s
 => importing to docker                                                                                                                                              0.1s
example$ docker images | grep new-hello
new-hello                                                                      latest                       acb998384af1   11 months ago   13.3kB

I tried to build the final image using both debian and alpine, but I couldn't, so using slim seems like the best option to me.
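The build-on-debian, serve-on-slim approach could be sketched as a multi-stage Dockerfile. This is an illustration only, assuming the rust:1.62.1-bullseye base mentioned earlier; the stage names, package list, and cmake invocation are assumptions, not the actual dockerfile from this PR:

```dockerfile
# Stage 1: build on a full debian-based rust image, which already ships the
# toolchains needed to compile clamav's C and rust components.
FROM rust:1.62.1-bullseye AS builder
RUN apt-get update && apt-get install -y cmake ninja-build  # plus clamav's other build deps
COPY . /src
WORKDIR /src/build
RUN cmake .. -G Ninja && ninja && ninja install

# Stage 2: copy only the installed artifacts into a slim runtime image,
# so the published multi-arch image stays small.
FROM debian:bullseye-slim
COPY --from=builder /usr/local /usr/local
```

With buildx, both stages are built per platform, but only the slim final stage ends up in the pushed manifest list.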

@val-ms
Contributor

val-ms commented Oct 20, 2022

By the way, I haven't forgotten about this. I am planning to move all our docker stuff to https://github.com/Cisco-Talos/clamav-docker and, rather than switching from alpine to debian immediately, I will include both.

I am planning for the debian variant to include these changes. I'm hoping I can work on this later next week after we publish 1.0.0 feature release candidate materials for review. We generally do not publish docker tags for release candidates since there is the unstable image that folks can play with if they want to test the RC through docker.

@val-ms
Contributor

val-ms commented Nov 15, 2022

@arielmorelli I have continued your work in our new clamav-docker repo: https://github.com/Cisco-Talos/clamav-docker

See: https://github.com/Cisco-Talos/clamav-docker/tree/main/clamav/unstable/debian
And: https://github.com/Cisco-Talos/clamav-docker/tree/main/clamav/0.105/debian

I have to pause to focus on fixes for the 1.0.0 release and will resume when I get a chance.

My plan is to publish image tags to clamav/clamav-debian. The original Alpine-based images will continue under clamav/clamav for now. If all goes well, then we can consider setting a date to deprecate the Alpine-based images and switch to Debian for the main images.

Closing this PR now that the Docker stuff will continue in the new repo. You are welcome to help with contributions over there.

@val-ms val-ms closed this Nov 15, 2022
@peschee

peschee commented May 30, 2023

@micahsnyder Are there any updates or possible ETA on the multi-arch variant of these official images?

Successfully merging this pull request may close these issues.

docker arm64 image is needed