
Local-only built image cannot be used as base for another Dockerfile #738

Closed · xucian opened this issue Dec 5, 2022 · 6 comments

xucian commented Dec 5, 2022

Troubleshooting

Before submitting a bug report please read the Troubleshooting doc.
Yes.

Behaviour

Steps to reproduce this issue

set the 'images' arg to an unknown, locally-valid-only image

Expected behaviour

Image should be 'dslocal/dbd.docker.be.director' and/or I should be able to use a locally-built image in a previous step as a base for the current image

Actual behaviour

Image is 'docker.io/dslocal/dbd.docker.be.director' and/or getting authentication error when building the 2nd image

Configuration

# Expecting repo to already be checked out
name: "actions.dbd.be.dockerbuild"
inputs:
  # stripped
    
runs:
  using: "composite"
  steps:  
    - name: Set up QEMU
      uses: docker/setup-qemu-action@v2.1.0
      
    - name: Set up Docker Buildx
      # v2.2.1 as of 03.12.2022
      uses: docker/setup-buildx-action@8c0edbc76e98fa90f69d9a2c020dcb50019dc325
      with:
        buildkitd-flags: --debug

    - name: Create image builder commit tag
      shell: bash
      id: image_commit_tag
      run: echo "value=buildersha-${{ inputs.builder_commit }}" >> $GITHUB_OUTPUT

    - name: Docker meta
      id: meta
      uses: docker/metadata-action@v4.1.1
      with:
        images: |
          dslocal/dbd.docker.be.dotnetbase
        tags: |
          type=schedule
          type=semver,pattern={{version}}
          type=semver,pattern={{major}}.{{minor}}
          type=semver,pattern={{major}}
          type=sha
          latest
          ${{ steps.image_commit_tag.outputs.value }}

    - name: Compose cache key
      shell: bash
      run: |
        cache_key=${{ runner.os }}-buildx-${{ github.sha }}-"$(env | md5sum)"-${{ hashFiles(format('{0}/**', inputs.build_context))}}
        echo "BDX_CACHE_KEY=$cache_key" >> $GITHUB_ENV

    # See C1
    - name: Set cache type
      shell: bash
      run: |
        if [ "${{ inputs.cache_storage }}" == "registry" ]; then
          value_from="type=registry,ref=dslocal/dbd.docker.be.dotnetbase:buildcache"
          echo "BDX_CACHE_FROM=$value_from" >> $GITHUB_ENV
          echo "BDX_CACHE_TO=$value_from,mode=max" >> $GITHUB_ENV
        else
          echo "BDX_CACHE_FROM=type=local,src=/tmp/.buildx-cache" >> $GITHUB_ENV
          echo "BDX_CACHE_TO=type=local,dest=/tmp/.buildx-cache-new,mode=max" >> $GITHUB_ENV
        fi

    # See C1
    - name: Init docker layers cache locally if requested
      uses: actions/cache@v2
      if: inputs.cache_storage != 'registry'
      with:
        path: /tmp/.buildx-cache
        key: ${{ env.BDX_CACHE_KEY }}-
        restore-keys: |
          ${{ runner.os }}-buildx-

    - name: Build and push
      uses: docker/build-push-action@v3.2.0
      with:
        context: ${{ inputs.build_context }}
        # Commented: only amd64 (desktop) is needed for now, as we only run on this architecture
        # platforms: linux/amd64,linux/arm64
        platforms: linux/amd64
        push: ${{ inputs.push == 'true' }}
        load: ${{ inputs.load == 'true' }}
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}
        cache-from: ${{ env.BDX_CACHE_FROM }}
        cache-to: ${{ env.BDX_CACHE_TO }}
        
# Ref C1
## Caching layers as per https://github.com/docker/build-push-action/blob/master/docs/advanced/cache.md#github-cache

Logs

Can only provide this part (notice the image.name property):

Metadata
  {
    "cache.manifest": "{\"mediaType\":\"application/vnd.oci.image.index.v1+json\",\"digest\":\"sha256:2db82a7e8f446a438ace84df224f76b4d850[773](https://github.com/main-quest/dbd.deploy.be/actions/runs/3620781529/jobs/6103420250#step:4:810)ada58001fccfc67ceaf8fa252\",\"size\":1555}",
    "containerimage.buildinfo": {
      "frontend": "dockerfile.v0",
      "attrs": {
        "build-arg:DBD_BUILDER_COMMIT": "3a32687d386fa435e0b4603463ad58c80e81a61b",
        "build-arg:DBD_BUILDER_COMMIT_IMAGE_TAG": "buildersha-3a32687d386fa435e0b4603463ad58c80e81a61b",
        "build-arg:DBD_COMPONENT": "",
        "build-arg:DBD_COMPONENT_ARTIFACT": "",
        "build-arg:DBD_PROJECT_ID": "dragon-blood-dungeon",
        "build-arg:DBD_STAGE": "stage3",
        "filename": "Dockerfile",
        "label:org.opencontainers.image.created": "2022-12-05T14:06:44.137Z",
        "label:org.opencontainers.image.description": "Deployment of back-end components",
        "label:org.opencontainers.image.licenses": "",
        "label:org.opencontainers.image.revision": "3a32687d386fa435e0b4603463ad58c80e81a61b",
        "label:org.opencontainers.image.source": "https://github.com/main-quest/dbd.deploy.be",
        "label:org.opencontainers.image.title": "dbd.deploy.be",
        "label:org.opencontainers.image.url": "https://github.com/main-quest/dbd.deploy.be",
        "label:org.opencontainers.image.version": "latest"
      },
      "sources": [
        {
          "type": "docker-image",
          "ref": "docker.io/library/debian:buster-slim",
          "pin": "sha256:5dbce817ee72[802](https://github.com/main-quest/dbd.deploy.be/actions/runs/3620781529/jobs/6103420250#step:4:839)025a38a388237b0ea576aa164bc90b7102b73aa42fef4d713"
        }
      ]
    },
    "containerimage.config.digest": "sha256:892cfee00d7569bfbf1111f87d6199561b7ce4fd17f6f56b63614be9b6362044",
    "containerimage.descriptor": {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:2e80d09f55a273c6e2d823e2b047a6a6ed277e91457bc975ae5d5b6c78b4678e",
      "size": 1155,
      "annotations": {
        "org.opencontainers.image.created": "2022-12-05T14:06:52Z"
      }
    },
    "containerimage.digest": "sha256:2e80d09f55a273c6e2d823e2b047a6a6ed277e91457bc975ae5d5b6c78b4678e",
    "image.name": "docker.io/dslocal/dbd.docker.be.dotnetbase:latest,docker.io/dslocal/dbd.docker.be.dotnetbase:buildersha-3a32687d386fa435e0b4603463ad58c80e81a61b,docker.io/dslocal/dbd.docker.be.dotnetbase:sha-3a32687"
  }

I tried without the 'dslocal/' prefix, but read that if I don't specify one, docker prepends 'docker.io/', so I was trying to avoid that. But it seems this action doesn't work that way.
I only want this image to be local.

docker images shows the correct names:

REPOSITORY                         TAG                                                   IMAGE ID       CREATED         SIZE
dslocal/dbd.docker.be.dotnetbase   buildersha-3a32687d386fa435e0b4603463ad58c80e81a61b   4d735a71cc04   2 hours ago     288MB
dslocal/dbd.docker.be.dotnetbase   latest                                                4d735a71cc04   2 hours ago     288MB
dslocal/dbd.docker.be.dotnetbase   sha-3a32687                                           4d735a71cc04   2 hours ago     288MB

Manually building with docker build -t toberemoved:latest . and then using that tag in the 2nd Dockerfile works fine.

I might be looking in the wrong place, as the 'docker.io/' prefix added by the metadata action might not cause this at all (it might just be ignored), but when I try to use this image as the base of another image that I want to push to a registry, it fails with (notice the image name is correct):

buildx failed with: error: failed to solve: dslocal/dbd.docker.be.dotnetbase:buildersha-3a32687d386fa435e0b4603463ad58c80e81a61b: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed

The Dockerfile of the 2nd image starts with

ARG DBD_BUILDER_COMMIT_IMAGE_TAG
FROM dslocal/dbd.docker.be.dotnetbase:${DBD_BUILDER_COMMIT_IMAGE_TAG}

where DBD_BUILDER_COMMIT_IMAGE_TAG == buildersha-3a32687d386fa435e0b4603463ad58c80e81a61b

The reason I'm creating the dotnetbase image is that multiple images depend on it, and no other solution seems useful in my case (Docker multi-stage builds, for example, seem pretty complicated and also limited for what I want to do).

@xucian xucian changed the title docker.io prepended to the image name Local-only image built cannot be used as base for another Dockerfile Dec 5, 2022
@xucian xucian changed the title Local-only image built cannot be used as base for another Dockerfile Local-only built image cannot be used as base for another Dockerfile Dec 5, 2022
jedevc (Contributor) commented Dec 5, 2022

Thanks for the report! 🎉

This looks like a duplicate of moby/buildkit#2343.

There are a couple of workarounds:

  • Use the new oci-layout:// named-context - Add OCI source moby/buildkit#2827 (this will be released as part of buildkit v0.11, which isn't yet completed at time of writing).
  • Don't use the setup-buildx-action, and just use the default buildx driver. This has some major implications, and means you can't specify the buildkit version, and can't use multi-platform images, etc.
  • Push the image to an intermediate local registry.
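For the third workaround, the build-push-action docs describe running a throwaway registry as a job service and pushing the base image there. A rough, untested sketch of how that could look in this case (service/port and context paths are illustrative; the dslocal name is taken from the config above):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      # Throwaway registry, reachable from the runner at localhost:5000
      registry:
        image: registry:2
        ports:
          - 5000:5000
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          # Host networking so the buildkit container can reach localhost:5000
          driver-opts: network=host
      - name: Build base image and push to the local registry
        uses: docker/build-push-action@v3
        with:
          context: ./base
          push: true
          tags: localhost:5000/dslocal/dbd.docker.be.dotnetbase:latest
      - name: Build dependent image, pulling the base from the local registry
        uses: docker/build-push-action@v3
        with:
          context: ./app
          push: false
          tags: dbd.final:latest
```

The dependent Dockerfile's FROM would then reference localhost:5000/dslocal/dbd.docker.be.dotnetbase:latest instead of the bare dslocal name.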

xucian (Author) commented Dec 5, 2022

Thanks for the quick follow-up!
It seems the only solution ATM is the 2nd one. Should I just remove setup-buildx-action or should I also somehow instruct docker to use its 'default' buildx driver?

jedevc (Contributor) commented Dec 7, 2022

I think you should just be able to remove the setup-buildx-action if that's the route you want to try.
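For context, without setup-buildx-action the build falls back to the default docker driver, which builds into the local image store, so a later step's FROM can resolve a locally built tag. A minimal, untested sketch (assuming the same inputs and meta step as the config above):

```yaml
    # No "Set up Docker Buildx" step: the default docker driver keeps
    # images in the local store, so the 2nd build's FROM can resolve
    # dslocal/dbd.docker.be.dotnetbase:<tag> without hitting docker.io.
    - name: Build base image locally
      uses: docker/build-push-action@v3.2.0
      with:
        context: ${{ inputs.build_context }}
        push: false
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}
```

Note that the docker driver doesn't support registry/local cache export, so the cache-from/cache-to steps above would need to be dropped or adjusted.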

I've also added a note about pushing to an intermediate local registry in the above comment, as another alternative you could consider.

I'm gonna close this for now, we have quite a few similar issues, e.g. see docker/buildx#1453.

@jedevc jedevc closed this as completed Dec 7, 2022
MGough commented Jan 24, 2023

Hey @jedevc - nice to spot another UoB'er! I see that moby/buildkit#2827 has been merged & v0.11 released.

My knowledge of buildkit is lacking, however. I can see that I can get an OCI output from this action using:

outputs: type=oci,dest=oci_output.tar

And that I can potentially feed an OCI back in as a build-context:

build-contexts: foo=oci-layout://

Where I assume foo is the desired image name based on the docs (COPY --from=foo). I'm missing the step in between, however, where I go from the .tar to a path that can be referenced by oci-layout://.

Is there an end-to-end working solution here? Or am I trying to piece together things before they're ready?

jedevc (Contributor) commented Jan 25, 2023

Heya @MGough! 👋 👋

A full use of the manual buildx commands (which map easily into the action) might look like:

$ docker buildx build ... -f intermediate.Dockerfile --output type=oci,dest=oci_output_directory,tar=false
$ docker buildx build ... -f final.Dockerfile --build-context foo=oci-layout://./oci_output_directory

where foo is a stage or a target of a COPY --from in final.Dockerfile.

The tar=false in the first command is necessary since build-context only understands unpacked tars - maybe at some point we'll fix that (docker/buildx#1553, though it seems non-trivial on first glance).
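Mapped into the action's inputs, the two commands above might translate roughly as follows (untested sketch; the file names and the foo context name come from the commands above, and tar=false requires a buildkit release that supports directory OCI output):

```yaml
    - name: Build intermediate image as an OCI layout directory
      uses: docker/build-push-action@v3
      with:
        file: intermediate.Dockerfile
        outputs: type=oci,dest=./oci_output_directory,tar=false

    - name: Build final image using the OCI layout as a named context
      uses: docker/build-push-action@v3
      with:
        file: final.Dockerfile
        build-contexts: |
          foo=oci-layout://./oci_output_directory
```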

MGough commented Jan 25, 2023

A full use of the manual buildx commands might look like (which map easily into the action):

$ docker buildx build ... -f intermediate.Dockerfile --output type=oci,dest=oci_output_directory,tar=false
$ docker buildx build ... -f final.Dockerfile --build-context foo=oci-layout://./oci_output_directory

Thanks, that clears it up; I hadn't spotted any mention of that tar flag in the buildx docs. When peeking at the tarred contents myself I was encountering a mismatch between index.json and index.json.lock, but if it's being output directly as a directory, hopefully that'll solve the issue or at least make the next steps clearer.

I've gone for the workaround:

Don't use the setup-buildx-action, and just use the default buildx driver. This has some major implications, and means you can't specify the buildkit version, and can't use multi-platform images, etc.

for now, as the specific image I'm building isn't particularly complex, so it'll survive like this until I get a chance to bring it back in line with our other images.
