
ARM64 images built and Available on AWX repository #14643

Closed
4 of 9 tasks
jon-nfc opened this issue Nov 9, 2023 · 12 comments · Fixed by #15053

Comments

@jon-nfc

jon-nfc commented Nov 9, 2023

Please confirm the following

  • I agree to follow this project's code of conduct.
  • I have checked the current issues for duplicates.
  • I understand that AWX is open source software provided for free and that I might not receive a timely response.

Feature type

New Feature

Feature Summary

The feature I'm requesting is officially built ARM64 images. I note that support has been added for building the images yourself; however, that is not an option for a lot of people out there.

Implementation of this proposal requires no additional infrastructure, just some minor changes to the build commands.

Docker now ships with buildx. This tool enables cross-platform building of containers for many architectures. Best of all, this happens within a single build command: docker buildx build --platform=. Using this command in place of the original docker build cross-compiles for each of the listed platforms and places all images within the same manifest. In addition, you can append --push to push all built images and the manifest to the container registry.

I currently use this method to cross-compile all of my container images.
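As a sketch of what that single invocation looks like (the image name, tag, and platform list below are placeholders, not AWX's real build variables; the command is composed and echoed rather than executed, since a builder and registry may not be available):

```shell
#!/bin/sh
# Sketch of the proposed single-command multi-arch build-and-push.
# IMAGE and TAG are hypothetical placeholders, not AWX's real variables.
IMAGE="example/awx"
TAG="dev"
PLATFORMS="linux/amd64,linux/arm64"

# Compose the invocation into the positional parameters, then echo it.
set -- docker buildx build \
  --platform="$PLATFORMS" \
  --push \
  -t "$IMAGE:$TAG" \
  .
echo "$@"
```

Dropping --push (or using --load for a single platform) keeps the result local instead of pushing to a registry.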

What requires changing

docker build command.

-docker build -t {{ awx_image }}:{{ awx_image_tag }} \
+docker buildx build -t {{ awx_image }}:{{ awx_image_tag }} \
        -f {{ dockerfile_name }} \
        --build-arg VERSION={{ awx_version }} \
        --build-arg SETUPTOOLS_SCM_PRETEND_VERSION={{ awx_version }} \
        --build-arg HEADLESS={{ headless }} \
+       --platform=linux/amd64,linux/arm64 \
+       --push \
        .

notes:

  • --platform=linux/amd64,linux/arm64 builds both amd64 and arm64 together and places them in a single manifest
  • --push pushes everything together to the container registry

Every FROM declaration within dockerfiles

+ARG TARGETPLATFORM

-FROM quay.io/centos/centos:stream9 as builder
+FROM --platform=$TARGETPLATFORM quay.io/centos/centos:stream9 as builder

notes:

  • ARG TARGETPLATFORM declares the variable; with BuildKit it is predefined per target platform, so no default is needed (a comma-separated list would not be a valid value for a single platform)
  • FROM --platform=$TARGETPLATFORM tells Docker to use the specified architecture for the base image. If this value is omitted from the FROM declaration, the build system's architecture is used.
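For reference, here is a minimal hypothetical Dockerfile exercising TARGETPLATFORM, written out via a heredoc so the sketch is self-contained and inspectable. With BuildKit, BUILDPLATFORM and TARGETPLATFORM are predefined in the global scope, so FROM can reference them directly; to read the value inside a stage, it must be redeclared after FROM:

```shell
#!/bin/sh
# Write a minimal, hypothetical multi-arch-aware Dockerfile to a temp path.
cat > /tmp/Dockerfile.sketch <<'EOF'
# syntax=docker/dockerfile:1
# BUILDPLATFORM/TARGETPLATFORM are predefined by BuildKit in the
# global scope, so FROM can reference them without a declared default.
FROM --platform=$TARGETPLATFORM quay.io/centos/centos:stream9 as builder
# To read the value inside a stage, redeclare it:
ARG TARGETPLATFORM
RUN echo "building for $TARGETPLATFORM"
EOF
echo "wrote /tmp/Dockerfile.sketch"
```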

For cross-compilation to work, the packages binfmt-support and qemu-user-static are required (these are the Debian package names). Together they allow running binaries built for a different architecture. Alternatively, you can do the build from a Docker container (the method I use), which contains everything required for the cross-compilation to work. Prior to building, you have to enable kernel support for running other binary formats: update-binfmts --enable
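A quick, read-only way to check whether the kernel already has foreign-architecture handlers registered (the path is the usual Linux binfmt_misc mount point; on a host without it mounted, the script simply reports that):

```shell
#!/bin/sh
# Read-only check for registered binfmt_misc handlers (e.g. qemu-aarch64).
# Does not modify anything; safe to run on any Linux host.
BINFMT_DIR=/proc/sys/fs/binfmt_misc
if [ -d "$BINFMT_DIR" ]; then
  # Filter out the control files, leaving only registered handlers.
  ls "$BINFMT_DIR" | grep -v '^register$\|^status$' \
    || echo "no extra handlers registered"
else
  echo "binfmt_misc not mounted"
fi
```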

I'm not familiar with how the GitHub CI/CD pipelines work; however, I am successfully doing cross-compilation within the GitLab ecosystem. Their runners are all AMD64. You may be able to convert this stripped-down GitLab CI job. Original here

.build_docker_container:
  stage: build
  image: 
    name: nofusscomputing/docker-buildx-qemu:dev
    pull_policy: always
  services:
    - name: docker:23-dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_DOCKERFILE: dockerfile
    # See https://github.com/docker-library/docker/pull/166
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - git submodule foreach git submodule update --init
    - if [ "0$JOB_ROOT_DIR" == "0" ]; then ROOT_DIR=gitlab-ci; else ROOT_DIR=$JOB_ROOT_DIR ; fi
    - echo "[DEBUG] ROOT_DIR[$ROOT_DIR]"
    - docker info
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - pip3 install setuptools wheel
      # see: https://gitlab.com/gitlab-org/gitlab-runner/-/merge_requests/1861 
      # on why this `docker run` is required. Without it, multiarch support doesn't work.
    - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    - update-binfmts --display
    - update-binfmts --enable # Important: Ensures execution of other binary formats is enabled in the kernel
    - docker buildx create --driver=docker-container --driver-opt image=moby/buildkit:v0.11.6 --use
    - docker buildx inspect --bootstrap

  script: 
    - update-binfmts --display
    - |
        docker buildx build --platform=$DOCKER_IMAGE_BUILD_TARGET_PLATFORMS . \
          --label org.opencontainers.image.created="$(date '+%Y-%m-%d %H:%M:%S%:z')" \
          --label org.opencontainers.image.documentation="$CI_PROJECT_URL" \
          --label org.opencontainers.image.source="$CI_PROJECT_URL" \
          --label org.opencontainers.image.revision="$CI_COMMIT_SHA" \
        --push \
        --build-arg CI_JOB_TOKEN=$CI_JOB_TOKEN --build-arg CI_PROJECT_ID=$CI_PROJECT_ID --build-arg CI_API_V4_URL=$CI_API_V4_URL \
        --file $DOCKER_DOCKERFILE \
        --tag $DOCKER_IMAGE_BUILD_REGISTRY/$DOCKER_IMAGE_BUILD_NAME:$DOCKER_IMAGE_BUILD_TAG;

        docker buildx imagetools inspect $DOCKER_IMAGE_BUILD_REGISTRY/$DOCKER_IMAGE_BUILD_NAME:$DOCKER_IMAGE_BUILD_TAG;

summary:

  • starts a Docker container nofusscomputing/docker-buildx-qemu:dev from which all commands, including the build, are run
  • as the job uses dind, it links to Docker via a socket
  • for containers that use Python packages, setuptools and wheel are required if a package needs compilation
  • update-binfmts --enable enables kernel support for foreign binary formats
  • docker buildx create --driver=docker-container --driver-opt image=moby/buildkit:v0.11.6 --use sets up buildx to use BuildKit
  • docker buildx build {etc}... a single command that cross-compiles, creates the manifest, and pushes to a container registry
  • the final inspect shows the manifest and the images it contains.
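Stripped of the GitLab-specific variables, the job boils down to a short local sequence. A dry-run sketch (commands are echoed rather than executed, since the real ones need root and a Docker daemon; registry and image names are placeholders):

```shell
#!/bin/sh
# Dry-run sketch of the cross-build sequence from the CI job above.
# run() echoes each command instead of executing it, since the real
# commands need root and a running Docker daemon. Names are placeholders.
run() { echo "+ $*"; }

# 1. Register QEMU handlers so foreign-arch binaries can execute.
run docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
run update-binfmts --enable

# 2. Create and bootstrap a BuildKit-backed builder.
run docker buildx create --driver=docker-container --use
run docker buildx inspect --bootstrap

# 3. Cross-compile, assemble the manifest, and push in one command.
run docker buildx build --platform=linux/amd64,linux/arm64 \
  --push -t registry.example.com/awx:dev .

# 4. Verify the manifest lists both architectures.
run docker buildx imagetools inspect registry.example.com/awx:dev
```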

I'm happy to assist or, if OK, start a PR. For the latter, however, I will need someone with GitHub Actions knowledge to walk me through the adjustments.

Select the relevant components

  • UI
  • API
  • Docs
  • Collection
  • CLI
  • Other

Steps to reproduce

.

Current results

.

Suggested feature result

That https://quay.io/repository/ansible/awx contains both amd64 and arm64 images. Yes, I know these are different repos; the same applies to the operator and ansible-ee images.

Additional information

No response

@jon-nfc
Author

jon-nfc commented Nov 15, 2023

Whilst we wait: https://gitlab.com/nofusscomputing/projects/ansible/awx-arm for automagic ARM builds, and https://hub.docker.com/r/nofusscomputing/awx for the location of the builds.

@fosterseth
Member

fosterseth commented Nov 15, 2023

@jon-nfc awesome work, thanks for this information. A lot of interest around arm64 builds.

I think integrating this into our CI would take some work, and we'd probably need some outside contributors willing to take up that work

Basically someone needs to port the steps you outlined into our GH workflows to build the target image and push to quay

@jon-nfc
Author

jon-nfc commented Nov 16, 2023

G'day @fosterseth,

@jon-nfc awesome work, thanks for this information. A lot of interest around arm64 builds.

I think integrating this into our CI would take some work, and we'd probably need some outside contributors willing to take up that work

The amount of work is not as much as it seems: making the changes for the build to be multi-arch took no more than an hour (mostly spent learning the layout), and on my side getting the GitLab builds to work took another 10-15 minutes. I expect the conversion for GitHub to take around the same. Although, as mentioned in the OP, I'm not familiar with GitHub CI/CD. I'm happy to raise a PR to make the required changes, but I will need someone with GitHub CI/CD knowledge to check my work, as I will have to learn how to use/implement it; this will increase the time needed to implement the changes. Who's a good POC for this knowledge and to code-review the PR?

Basically someone needs to port the steps you outlined into our GH workflows to build the target image and push to quay

From what I've seen so far, the changes are relatively small. The time needed will only increase due to having to wait for confirmation that the workflows run and complete.

@jon-nfc
Author

jon-nfc commented Jan 9, 2024

I haven't forgotten about this issue; however, I am going to wait before opening a PR, as the work from the following should be easily portable to this repo, since these repos appear to share a similar workflow

@smarthusker

+1

1 similar comment
@mimianddaniel

+1

@adpavlov

waiting for this as well

@debanjanbasu

Is this also meant to fix the built awx-ee images? Those are still failing deployment to an ARM64 cluster. Is there any work needed to make that work?

@syahrul-aiman

+1

@ibcht

ibcht commented Mar 30, 2024

Hi @jon-nfc, I'm definitely interested in this PR :)

@TheRealHaoLiu
Member

working on it...

@ibcht

ibcht commented Apr 20, 2024

Thank you @TheRealHaoLiu, I removed my custom image and it works well! (tested with the latest AWX Operator, version 2.15.0)
