
(assets): allow user to pass platform option to Docker builds #12472

Closed
nikovirtala opened this issue Jan 12, 2021 · 31 comments · Fixed by #14908, #20439 or #21405
Labels: @aws-cdk/assets · effort/small · feature/enhancement · feature-request · p1

Comments

@nikovirtala
Contributor

nikovirtala commented Jan 12, 2021

Use Case

To be able to build images for the amd64 architecture (e.g. for AWS Fargate) on a system that uses another architecture, such as arm64 (e.g. Apple M1).

Proposed Solution

Replace docker build with docker buildx build.

https://docs.docker.com/buildx/working-with-buildx/#build-multi-platform-images

With buildx you can build cross-platform images by declaring the --platform argument. For example, docker buildx build --platform linux/amd64 -t someimage:sometag . executed on an Apple M1 system results in an image that runs on systems with the amd64 architecture.

buildx also allows you to build an image for multiple platforms at once, e.g. --platform linux/amd64,linux/arm64.

Other

Currently, ContainerImage.fromAsset results in an image that works only on the same architecture it was built on. In that sense, this could also be considered a bug – the built image doesn't work on the target system (Fargate).

import { FargateTaskDefinition, ContainerImage } from '@aws-cdk/aws-ecs';

const taskDefinition = new FargateTaskDefinition(this, 'TaskDefinition');

taskDefinition
  .addContainer('Container', {
    image: ContainerImage.fromAsset(path.resolve(__dirname, '../image')),
  });
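
For illustration, a hypothetical shape for such an option (the platform property below does not exist yet; the name is only a sketch of the request) could be:

import * as path from 'path';
import { FargateTaskDefinition, ContainerImage } from '@aws-cdk/aws-ecs';

const taskDefinition = new FargateTaskDefinition(this, 'TaskDefinition');

taskDefinition
  .addContainer('Container', {
    // `platform` is hypothetical here and would map to `docker buildx build --platform`
    image: ContainerImage.fromAsset(path.resolve(__dirname, '../image'), {
      platform: 'linux/amd64',
    }),
  });
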
  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

@nikovirtala added the feature-request and needs-triage labels Jan 12, 2021
@NGL321 changed the title from "(cdk-assets): replace docker build with docker buildx build" to "(assets): replace docker build with docker buildx build" Feb 1, 2021
@github-actions bot added the @aws-cdk/assets label Feb 1, 2021
@eladb added the effort/small and p1 labels Feb 15, 2021
@bvtujo removed the needs-triage label Feb 15, 2021
@eladb removed their assignment Feb 25, 2021
@RuiLoureiro

Any progress on this?

@eladb
Contributor

eladb commented Mar 25, 2021

We are not actively working on this. Pull requests are more than welcome.

@relm923 I see you closed your pull request. Would you be interested in continuing to work on this? I am happy to help out with the review (sorry you didn't get a response for a while).

@relm923
Contributor

relm923 commented Mar 25, 2021

@eladb I'm happy to continue working on it. I closed the first PR because it appears the CodeBuild environment does not have experimental features (buildx) available. Is there someone who can help sort out the CodeBuild side? Thanks

@ericzbeard added the feature/enhancement label Apr 2, 2021
@darko-mesaros

Hello all, really looking forward to this feature. But a question from me: is buildx really necessary?

Here is what I am talking about, and mind you, I am running this on my M1 MacBook with the following Docker version installed:

docker --version
Docker version 20.10.5, build 55c4c88

I build my docker image with the --platform option set to linux/amd64:

docker build --platform linux/amd64 .

This provides me with an image that is built with the correct architecture:

# Cut down output of: docker image inspect <imageid>

            "WorkingDir": "/src",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 114102371,
        "VirtualSize": 114102371,
        "GraphDriver": {
            "Data": {

I can clearly see that this image is of amd64 architecture. Did something change in Docker so that buildx is no longer required? Is there a way to pass the --platform parameter to the AssetImage?

@nikovirtala
Contributor Author

Good catch, @darko-mesaros!

Did you happen to check whether it works the same way on the latest x86 Docker for Mac, with just the --platform argument and no buildx? I'd like this change to be implemented in a way that works on all platforms :)

@darko-mesaros

darko-mesaros commented Apr 28, 2021

It did indeed work. I was able to build the image on an Intel MacBook by using the --platform parameter, even for the arm64 architecture, oddly enough.

EDIT: Just tried it, and an arm64 image built on the Intel Mac works when I run it on the M1 Mac. 👏

@arekt

arekt commented May 15, 2021

I hit the same problem building a layer for Lambda.

WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

So do we plan to add something like a "platform" option?

const layer = new lambda.LayerVersion(this, 'ruby-common', {
      code: lambda.Code.fromAsset(path.join(__dirname, '../src/layers/ruby-common'), {
        bundling: {
          image: lambda.Runtime.RUBY_2_7.bundlingImage,
          platform: "linux/amd64",                                              // <--- new option presumably
          command: [
            'bash', '-c', `
            pwd && ls -la &&
            bundle install --standalone &&
            cp -a bundle /asset-output/
            `,
          ],
        },
      }),
      compatibleRuntimes: [lambda.Runtime.RUBY_2_7],
      description: 'Common ruby gems'
    });

Running docker manually with --platform linux/amd64 worked fine.
By the way, deploying assets to S3 and using the code below works quite nicely:

    const bucket = s3.Bucket.fromBucketName(this, 'bucket-with-layers', "this-is-fake-bucket")

    const layer = new lambda.LayerVersion(this, 'ruby-base', {
      code: lambda.Code.fromBucket(bucket, "deploy/ruby-base.zip")
    })

@alukach
Contributor

alukach commented Jun 7, 2021

I'm taking a crack at this in #14908, however I am a bit strapped for time. If anyone would like to pair on it, please take a 👀 and provide comments on that PR.

It should be good to go...

@mergify bot closed this as completed in #14908 Jun 14, 2021
mergify bot pushed a commit that referenced this issue Jun 14, 2021
Fixes #12472

Adds an option to specify the platform for the `docker build` command, which allows users to build for other targets (e.g. Apple M1 users can build for amd64 architectures).

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*

@pahud
Contributor

pahud commented Jul 5, 2021

Hi @alukach

It seems ContainerImage.fromAsset does not support a --platform property, so we still can't build linux/amd64 images for Fargate from an M1.

Looks like we need another PR, no?

@nikovirtala
Contributor Author

Hi @alukach

It seems ContainerImage.fromAsset does not support a --platform property, so we still can't build linux/amd64 images for Fargate from an M1.

Looks like we need another PR, no?

I noticed the same, and I think it is because the platform property is not implemented in https://github.com/aws/aws-cdk/blob/master/packages/@aws-cdk/aws-ecr-assets/lib/image-asset.ts

@arekt

arekt commented Jul 5, 2021

Just a note for people with the M1 chip: Docker now supports the DOCKER_DEFAULT_PLATFORM='linux/amd64' environment variable.
https://docs.docker.com/engine/reference/commandline/cli/

@alukach
Contributor

alukach commented Jul 5, 2021

@pahud @nikovirtala

Glad to see the renewed interest. Yes, I do believe that the PR I originally put together affected the wrong part of the codebase 🤦 . I am still unable to build any assets for ECR with my M1 machine.

I previously did some initial research into this and got a bit lost trying to understand how/where the CDK ECR code actually instructs Docker to build the image.

Here are some notes (largely written out for myself while I try to work this through) of what I think may be the next steps:

  1. Update the DockerImageAssetOptions to include the platform flag:
    export interface DockerImageAssetOptions extends FingerprintOptions, FileFingerprintOptions {
      /**
       * ECR repository name
       *
       * Specify this property if you need to statically address the image, e.g.
       * from a Kubernetes Pod. Note, this is only the repository name, without the
       * registry and the tag parts.
       *
       * @default - the default ECR repository for CDK assets
       * @deprecated to control the location of docker image assets, please override
       * `Stack.addDockerImageAsset`. this feature will be removed in future
       * releases.
       */
      readonly repositoryName?: string;

      /**
       * Build args to pass to the `docker build` command.
       *
       * Since Docker build arguments are resolved before deployment, keys and
       * values cannot refer to unresolved tokens (such as `lambda.functionArn` or
       * `queue.queueUrl`).
       *
       * @default - no build args are passed
       */
      readonly buildArgs?: { [key: string]: string };

      /**
       * Docker target to build to
       *
       * @default - no target
       */
      readonly target?: string;

      /**
       * Path to the Dockerfile (relative to the directory).
       *
       * @default 'Dockerfile'
       */
      readonly file?: string;
    }
  2. Update the asset hash to consider the platform flag:
    const extraHash: { [field: string]: any } = { };
    if (props.extraHash) { extraHash.user = props.extraHash; }
    if (props.buildArgs) { extraHash.buildArgs = props.buildArgs; }
    if (props.target) { extraHash.target = props.target; }
    if (props.file) { extraHash.file = props.file; }
    if (props.repositoryName) { extraHash.repositoryName = props.repositoryName; }
  3. Ensure that the platform flag makes its way to the docker build. This is where I get confused. Here's what I can see:
    1. When we construct a DockerImageAsset, we create an AssetStaging object:

      const staging = new AssetStaging(this, 'Staging', {
        ...props,
        follow: props.followSymlinks ?? toSymlinkFollow(props.follow),
        exclude,
        ignoreMode,
        sourcePath: dir,
        extraHash: Object.keys(extraHash).length === 0
          ? undefined
          : JSON.stringify(extraHash),
      });

    2. When we construct an AssetStaging object, we build a cache key from the relevant bundling props (platform will also need to be considered) and send it to an assetCache where the stageThisAsset callback is called if it's a new asset:

      const staging = new AssetStaging(this, 'Staging', {
        ...props,
        follow: props.followSymlinks ?? toSymlinkFollow(props.follow),
        exclude,
        ignoreMode,
        sourcePath: dir,
        extraHash: Object.keys(extraHash).length === 0
          ? undefined
          : JSON.stringify(extraHash),
      });

    3. The stageThisAsset callback either runs this.stageByBundling(...) or this.stageByCopying()... I'm not exactly sure why we'd want to stage by copying, however I think it's safe to assume that it won't be affected by the platform flag:

      if (props.bundling) {
        // Check if we actually have to bundle for this stack
        const bundlingStacks: string[] = this.node.tryGetContext(cxapi.BUNDLING_STACKS) ?? ['*'];
        skip = !bundlingStacks.find(pattern => minimatch(Stack.of(this).stackName, pattern));
        const bundling = props.bundling;
        stageThisAsset = () => this.stageByBundling(bundling, skip);
      } else {
        stageThisAsset = () => this.stageByCopying();
      }

    4. stageByBundling() is where things start getting hazy for me... What we do know is that it expects the first arg to comply with the BundlingOptions interface, so we should ensure that accepts the platform flag:

      private stageByBundling(bundling: BundlingOptions, skip: boolean): StagedAsset {

      export interface BundlingOptions {
      /**
      * The Docker image where the command will run.
      */
      readonly image: DockerImage;
      /**
      * The entrypoint to run in the Docker container.
      *
      * @example ['/bin/sh', '-c']
      *
      * @see https://docs.docker.com/engine/reference/builder/#entrypoint
      *
      * @default - run the entrypoint defined in the image
      */
      readonly entrypoint?: string[];
      /**
      * The command to run in the Docker container.
      *
      * @example ['npm', 'install']
      *
      * @see https://docs.docker.com/engine/reference/run/
      *
      * @default - run the command defined in the image
      */
      readonly command?: string[];
      /**
      * Additional Docker volumes to mount.
      *
      * @default - no additional volumes are mounted
      */
      readonly volumes?: DockerVolume[];
      /**
      * The environment variables to pass to the Docker container.
      *
      * @default - no environment variables.
      */
      readonly environment?: { [key: string]: string; };
      /**
      * Working directory inside the Docker container.
      *
      * @default /asset-input
      */
      readonly workingDirectory?: string;
      /**
      * The user to use when running the Docker container.
      *
      * user | user:group | uid | uid:gid | user:gid | uid:group
      *
      * @see https://docs.docker.com/engine/reference/run/#user
      *
      * @default - uid:gid of the current user or 1000:1000 on Windows
      */
      readonly user?: string;
      /**
      * Local bundling provider.
      *
      * The provider implements a method `tryBundle()` which should return `true`
      * if local bundling was performed. If `false` is returned, docker bundling
      * will be done.
      *
      * @default - bundling will only be performed in a Docker container
      *
      */
      readonly local?: ILocalBundling;
      /**
      * The type of output that this bundling operation is producing.
      *
      * @default BundlingOutput.AUTO_DISCOVER
      *
      */
      readonly outputType?: BundlingOutput;
      /**
      * [Security configuration](https://docs.docker.com/engine/reference/run/#security-configuration)
      * when running the docker container.
      *
      * @default - no security options
      */
      readonly securityOpt?: string;
      }

      I'm not entirely sure what the intention of stageByBundling() is. I see that it runs this.bundle(), which I would think is where the bundling magic ✨ happens, however I'm not really sure:

      localBundling = options.local?.tryBundle(bundleDir, options);
      if (!localBundling) {
        options.image.run({
          command: options.command,
          user,
          volumes,
          environment: options.environment,
          workingDirectory: options.workingDirectory ?? AssetStaging.BUNDLING_INPUT_DIR,
          securityOpt: options.securityOpt ?? '',
        });

      I'm not really sure how local.tryBundle(...) is ever set (I can't see it set anywhere in the files I've referenced in this chain of steps), so we're likely running image.run(...). I'm not really sure what image even represents... The interface describes it as:

      export interface BundlingOptions {
        /**
         * The Docker image where the command will run.
         */
        readonly image: DockerImage;

      So does that mean at this point we already have the docker image (implying that it's been built)?

I'm willing to put some time into creating another PR, however having some others pair with me on understanding how the bundling actually works would be helpful.
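
Putting the notes above together, a rough sketch of how a platform option might be threaded through the ECR asset code (property names here are assumptions, not a final API):

// 1. A hypothetical addition to DockerImageAssetOptions
/**
 * Platform to pass to `docker build --platform`, e.g. 'linux/amd64'.
 *
 * @default - the platform of the Docker host running the build
 */
readonly platform?: string;

// 2. Include it in the asset hash so changing the platform produces a new asset
if (props.platform) { extraHash.platform = props.platform; }

// 3. Eventually forward it to the `docker build` invocation, alongside --target and --file
// ...(props.platform ? ['--platform', props.platform] : []),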

@jogold
Contributor

jogold commented Jul 7, 2021

Docker image assets are built here

public async build(options: BuildOptions) {
  const buildCommand = [
    'build',
    ...flatten(Object.entries(options.buildArgs || {}).map(([k, v]) => ['--build-arg', `${k}=${v}`])),
    '--tag', options.tag,
    ...options.target ? ['--target', options.target] : [],
    ...options.file ? ['--file', options.file] : [],
    '.',
  ];
  await this.execute(buildCommand, { cwd: options.directory });
}
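
Following the existing pattern for --target and --file in that snippet, a minimal sketch of the change (assuming a new optional platform field on BuildOptions) would be a single extra spread in the command array:

public async build(options: BuildOptions) {
  const buildCommand = [
    'build',
    ...flatten(Object.entries(options.buildArgs || {}).map(([k, v]) => ['--build-arg', `${k}=${v}`])),
    '--tag', options.tag,
    ...options.target ? ['--target', options.target] : [],
    ...options.file ? ['--file', options.file] : [],
    ...options.platform ? ['--platform', options.platform] : [], // hypothetical new option
    '.',
  ];
  await this.execute(buildCommand, { cwd: options.directory });
}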

@lmammino
Contributor

lmammino commented Aug 14, 2021

One solution that I found (after much head banging 🙉 ) is to put --platform ... in the FROM instruction at the beginning of my Dockerfile:

FROM --platform=linux/amd64 someBaseImage:someVersion
# ... moar cool docker stuff here 🐳

You could also use a build arg to specify an arbitrary value for platform or create different target images for different platforms you need to support.

I hope this helps :)
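
To avoid hard-coding the platform in the Dockerfile, the build-arg variant mentioned above can be driven from CDK through the existing buildArgs option; a sketch, where TARGET_PLATFORM is just an example name:

// Dockerfile side (an ARG declared before the first FROM can be used in FROM):
//   ARG TARGET_PLATFORM=linux/amd64
//   FROM --platform=${TARGET_PLATFORM} someBaseImage:someVersion

import * as path from 'path';
import { ContainerImage } from '@aws-cdk/aws-ecs';

const image = ContainerImage.fromAsset(path.resolve(__dirname, '../image'), {
  buildArgs: {
    TARGET_PLATFORM: 'linux/amd64', // flip to 'linux/arm64' for local arm64 development
  },
});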

@nikovirtala
Contributor Author

Now that Lambda has released support for arm64 / Graviton2, maybe this feature request will finally get some love from the maintainers?

@pahud
Contributor

pahud commented Oct 1, 2021

Now that Lambda has released support for arm64 / Graviton2, maybe this feature request will finally get some love from the maintainers?

I am interested in exploring this. I think we need to pass the --platform flag all the way down to here:

await this.docker.build({
  directory: fullPath,
  tag: localTagName,
  buildArgs: source.dockerBuildArgs,
  target: source.dockerBuildTarget,
  file: source.dockerFile,
});
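
In other words, the flag would also need a corresponding field on the asset source so it can be forwarded at this call site; a rough sketch (the field name below is an assumption):

await this.docker.build({
  directory: fullPath,
  tag: localTagName,
  buildArgs: source.dockerBuildArgs,
  target: source.dockerBuildTarget,
  file: source.dockerFile,
  platform: source.dockerPlatform, // hypothetical: ends up as `docker build --platform <value>`
});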

@zessx

zessx commented Oct 1, 2021

One solution that I found (after much head banging 🙉 ) is to put --platform ... in the FROM instruction at the beginning of my Dockerfile:

FROM --platform=linux/amd64 someBaseImage:someVersion
# ... moar cool docker stuff here 🐳

You could also use a build arg to specify an arbitrary value for platform or create different target images for different platforms you need to support.

I hope this helps :)

This is the way I also chose, but it annoys me, as I'm using the same multi-stage Dockerfile for the local environment (some developers are using the M1 chip, others an x86 platform). This means every developer with an M1 chip needs to remember to manually edit their Dockerfile before a build.

Specifying the platform in CDK would be the perfect solution, something like these:

# Using build_args (currently ignored)
task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
    container_name="mytask",
    container_port=5000,
    image=ecs.ContainerImage.from_asset('.',
        build_args={"--platform": "linux/amd64"},
        target="production"))

# Using a dedicated argument (currently not available)
task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
    container_name="mytask",
    container_port=5000,
    image=ecs.ContainerImage.from_asset('.',
        platform="linux/amd64",
        target="production"))

@pahud
Contributor

pahud commented Oct 4, 2021

I am working on the PR now. Any comments are appreciated.

#16770

@pahud
Contributor

pahud commented Oct 7, 2021

I just noticed an important issue


I was using the same Dockerfile with the same base image, which supports multi-arch.

When I first run docker build with that base image, the docker daemon pulls the image for the correct platform and everything works great as expected. However, if I change --platform and build again, it seems docker will not pull new layers from the base image and will re-use the existing cached base image. In this case, the --platform flag will not work as expected, unless we docker rmi the base image and let it pull again.

This could be a major issue if users are using the same base image for different architectures.

Any comments?

UPDATE

The --pull flag will force downloading newer images, which helps. But in that case we would probably always use the --pull flag when we bundle the images, or shall we expose this as another property on the API surface?

docker build -t demo --pull --platform linux/amd64 .

@jogold
Contributor

jogold commented Oct 7, 2021

However, if I change --platform and build again, it seems docker will not pull new layers from the base image and will re-use the existing cached base image.

I don't have the same behavior. For me changing the platform correctly rebuilds. I'm on Docker version 20.10.8, build 3967b7d. You?

@nikovirtala
Contributor Author

nikovirtala commented Oct 7, 2021

Now I understand why you see that happening on Linux (but not on macOS / Docker Desktop?). It is because BuildKit is not the default builder on Linux, but when it is enabled your issue is not reproducible:

If you run first:

DOCKER_BUILDKIT=1 docker build -t demo --platform linux/amd64 .

and then:

DOCKER_BUILDKIT=1 docker build -t demo --platform linux/arm64 .

You'll get two different images with two different architectures 🎉

I think it would be relatively safe to enable BuildKit when we set the platform.

Some references from Docker issues:

The latter issue explains the four (4) different ways to build container images with Docker 😄 and also reveals that they're planning to make buildx the default builder.

@nikovirtala
Contributor Author

nikovirtala commented Oct 7, 2021

but not on macOS / Docker Desktop?

I don't have the same behavior. For me changing the platform correctly rebuilds. I'm on Docker version 20.10.8, build 3967b7d.

Thanks for confirming @jogold 🙌🏼

The reason I was a little unsure about this is that I run Docker with all experimental features enabled, so my experience doesn't always match what most people see.

@jogold
Contributor

jogold commented Oct 7, 2021

@pahud please see also #16858 where I suggest a "coupling" between the lambda.Architecture and the corresponding platform.
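
For clarity, that kind of coupling could look roughly like this inside the Lambda asset code (a sketch only; the names and the exact mapping are assumptions, not the wording of #16858):

import { Architecture } from '@aws-cdk/aws-lambda';
import { Platform } from '@aws-cdk/aws-ecr-assets';

// Derive the docker build platform from the function's declared architecture
// so users don't have to specify the same information twice.
function platformFor(architecture: Architecture): Platform {
  return architecture === Architecture.ARM_64 ? Platform.LINUX_ARM64 : Platform.LINUX_AMD64;
}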

@demiurg

demiurg commented Jan 1, 2022

This could be a major issue if users are using the same base image for different architectures.

Any comments?

UPDATE

The --pull flag will force downloading newer images, which helps. But in that case we would probably always use the --pull flag when we bundle the images, or shall we expose this as another property on the API surface?

docker build -t demo --pull --platform linux/amd64 .

So, two things would be ideal, but each on its own would also be very valuable:

  1. It would be great to expose a new variable like docker_args which would be appended to the docker ... command (or a future API). This is similar to how someone above tried build_args=['--platform', 'linux/amd64'], but I understand that build_args is currently expanded to BUILD TIME environment variables, which is different. The name is a bit confusingly overloaded.
  2. At the same time, you could create an abstraction like platforms=['amd64', 'arm64v8'] to automate specifying the platform explicitly, as well as potentially support multi-platform builds, which are near-impossible to do right now. This would also potentially work with non-docker or non-CLI container image building implementations.

@jcpage

jcpage commented Jan 15, 2022

For the use case of building an x86 Lambda layer on an M1 Mac, after fighting with it for quite a while I realized there was a completely trivial solution...

I just changed

image: Runtime.NODEJS_14_X.bundlingImage,

to

image: DockerImage.fromRegistry(
    `public.ecr.aws/sam/build-nodejs14.x:latest-x86_64`
  ),
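
In context, that swap looks roughly like this for a layer built on an M1 Mac (a sketch; the construct names and the bundling command are just examples):

import * as path from 'path';
import { DockerImage } from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';

const layer = new lambda.LayerVersion(this, 'NodeLayer', {
  compatibleRuntimes: [lambda.Runtime.NODEJS_14_X],
  code: lambda.Code.fromAsset(path.join(__dirname, '../layer'), {
    bundling: {
      // Pin an x86_64 build image explicitly instead of Runtime.NODEJS_14_X.bundlingImage
      image: DockerImage.fromRegistry('public.ecr.aws/sam/build-nodejs14.x:latest-x86_64'),
      // Example layer build: install production deps and copy them into the layer layout
      command: ['bash', '-c', 'npm ci --production && mkdir -p /asset-output/nodejs && cp -a node_modules /asset-output/nodejs/'],
    },
  }),
});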

@eladb
Contributor

eladb commented Jan 25, 2022

@pahud I am assigning this to you to follow up.

@eladb assigned pahud and unassigned eladb Jan 25, 2022
@vincent-dm

This issue has been renamed/repurposed a few times, but I think the CDK API should support passing any arguments to docker build. Adding support only for --platform or --pull will just lead to endless similar requests.

In our case, we need support for adding the --output arg to CDKs invocation of docker build.

We use CDK to build and deploy an ARM64 image on an x86-based (i.e. AMD64) Linux EC2 instance. Only Docker Desktop offers multi-platform builds out of the box, so we had to install buildx ourselves, as per Docker's guide. We make buildx the default builder by running docker buildx install and then configuring a builder for arm64. Invoking docker build ... (like CDK does) is then actually re-routed to use buildx instead of the legacy Docker builder.

BUT buildx by default doesn't publish the built image into the local image store and requires an additional --output flag (or a shorthand like --load) to actually store and use the image that was built. This is reflected in the warning buildx shows when invoking docker build without setting the --output flag (or any of its shorthands like --push or --load):

No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load

Because the CDK AssetImageProps builder API doesn't allow us to set the output for the docker build step, it fails after building, because it tries to tag an image which doesn't exist:

docker tag cdkasset-da3d3b814c3ba23b82b05bcac15892e50aa87059a01ba3dbada59dfef2369aaf foo.dkr.ecr.eu-west-1.amazonaws.com/cdk-hnb659fds-container-assets-foo-eu-west-1:da3d3b814c3ba23b82b05bcac15892e50aa87059a01ba3dbada59dfef2369aaf

Error response from daemon: No such image: cdkasset-da3d3b814c3ba23b82b05bcac15892e50aa87059a01ba3dbada59dfef2369aaf:latest

Workarounds like @nathanpeck's #12472 (comment) are not an option for us, since we need to be able to synthesize our CDK stack without building all the Docker images (which requires long compilation etc).

Our current workaround is to rename the docker binary to docker_real and instead create a docker bash script on the path which adds the flag:

#!/usr/bin/env bash
if [ "$1" = "build" ]; then
  echo "Adding --load param to docker build command"
  docker_real build --load "${@:2}"
else
  docker_real "$@"
fi

But this is obviously a hack; the proper solution would be for the CDK AssetImageProps builder to allow us to add the --output flag (or preferably: any flag) to its invocation of docker build.

Finally, as noted above by @nikovirtala in #12472 (comment), Docker recently merged a PR to make buildx the default builder. Unless they also change the default behavior of not outputting anything (I didn't check), that new Docker version would be unusable for building container images from CDK using fromAsset.

@guilleojeda

If you're building Docker images for Lambda functions, you can do FROM public.ecr.aws/lambda/python:3.8-x86_64 instead of FROM public.ecr.aws/lambda/python:3.8, and it should enforce the x86_64 architecture even on an M1 Mac.

Docker seems to be smart enough to build for x86_64 if the base image is x86_64, but the alias public.ecr.aws/lambda/python:3.8 is not explicit about it, so apparently Docker downloads public.ecr.aws/lambda/python:3.8-arm64 by default on my M1 Mac.

@mergify bot closed this as completed in #20439 May 26, 2022
mergify bot pushed a commit that referenced this issue May 26, 2022
…ker images (#20439)

This PR adds support for specifying the desired build platform when building docker images (i.e. build an arm64 image on an amd64/x86_64 host). Closes #12472

This PR does NOT touch Lambda builders, only ECR assets. #16770 attempted to implement support for ECR and Lambda but was abandoned. Meanwhile #16858 implemented Lambda platform support. This PR implements the ECR side.

I have run `yarn integ`

----

### All Submissions:

* [x] Have you followed the guidelines in our [Contributing guide?](https://github.com/aws/aws-cdk/blob/master/CONTRIBUTING.md)

### Adding new Unconventional Dependencies:

* [ ] This PR adds new unconventional dependencies following the process described [here](https://github.com/aws/aws-cdk/blob/master/CONTRIBUTING.md/#adding-new-unconventional-dependencies)

### New Features

* [x] Have you added the new feature to an [integration test](https://github.com/aws/aws-cdk/blob/master/INTEGRATION_TESTS.md)?
	* [x] Did you use `yarn integ` to deploy the infrastructure and generate the snapshot (i.e. `yarn integ` without `--dry-run`)?

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
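
For reference, with this change the platform can be requested directly on the ECR asset; a best-effort sketch of the resulting usage (check the aws-ecr-assets API reference for the exact property and enum names):

import * as path from 'path';
import { DockerImageAsset, Platform } from '@aws-cdk/aws-ecr-assets';

// Build an arm64 image even when synthesizing/deploying from an amd64 (x86_64) host
const asset = new DockerImageAsset(this, 'ArmImage', {
  directory: path.join(__dirname, 'image'),
  platform: Platform.LINUX_ARM64,
});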

mergify bot pushed a commit that referenced this issue Aug 3, 2022
Based on [this](#16770) PR
Adds the missing part needed to support the platform option when using Lambda `fromImageAsset`.

As we are not allowed to specify the `platform` flag for `DockerImageAsset`, users deploying CDK on the x86_64 platform are not able to bundle lambda.DockerImageFunction images for the new arm64 architecture. Similarly, users deploying CDK on an arm64 architecture like the Mac M1 are not able to bundle images for AWS Fargate, which is x86_64 only.

# builder experience with aws-lambda
For x86_64 users deploying Lambda functions with the container runtime on Lambda Graviton2 (arm64) from local container image assets built from multi-architecture Docker base images, specify the platform to ensure the image is built for the arm64 architecture.

```
 new DockerImageFunction(stack, 'Lambda', {
      code: DockerImageCode.fromImageAsset(path.join(__dirname, 'docker-arm64-handler')),
      architecture: Architecture.ARM_64,
    });
```

Fixes: #12472, #20907
----

### All Submissions:

* [x] Have you followed the guidelines in our [Contributing guide?](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md)

### Adding new Unconventional Dependencies:

* [ ] This PR adds new unconventional dependencies following the process described [here](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md/#adding-new-unconventional-dependencies)

### New Features

* [x] Have you added the new feature to an [integration test](https://github.com/aws/aws-cdk/blob/main/INTEGRATION_TESTS.md)?
	* [x] Did you use `yarn integ` to deploy the infrastructure and generate the snapshot (i.e. `yarn integ` without `--dry-run`)?

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
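
With this in place, the same option can also be set explicitly on the image asset behind a container-image Lambda function; a best-effort sketch (exact names per the current API docs):

import * as path from 'path';
import { Architecture, DockerImageCode, DockerImageFunction } from '@aws-cdk/aws-lambda';
import { Platform } from '@aws-cdk/aws-ecr-assets';

new DockerImageFunction(this, 'ArmLambda', {
  architecture: Architecture.ARM_64,
  code: DockerImageCode.fromImageAsset(path.join(__dirname, 'docker-arm64-handler'), {
    // Explicitly request an arm64 image even when building on an x86_64 host
    platform: Platform.LINUX_ARM64,
  }),
});
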
josephedward pushed a commit to josephedward/aws-cdk that referenced this issue Aug 30, 2022