
doc/proposals: Managing Dockerfiles #431

Conversation

shawn-hurley (Member)

  • Add options for managing Dockerfiles
  • Discuss the best solution and the changes needed to implement it.


xiang90 commented Aug 27, 2018

the content is wrong.

shawn-hurley (Member Author)

@xiang90 thanks for the comment. 🤦‍♂️

hasbro17 (Contributor)

@shawn-hurley This approach seems fine to me. With multiple Dockerfiles, users can choose and manage the base image and add other configuration to their operator image as needed.

One thing I wanted to bring up for consideration is making each file a multi-stage Dockerfile, which should address #167 as well.
The first stage builds the binary, and the second stage is the base image with other customizations:

# Build the operator binary
FROM golang:1.10.3 as builder

WORKDIR /go/src/github.com/example-inc/app-operator
COPY . /go/src/github.com/example-inc/app-operator

RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o /go/bin/app-operator github.com/example-inc/app-operator/cmd/app-operator/main.go

# Copy over to the base image with other customizations
FROM alpine:3.6
COPY --from=builder /go/bin/app-operator /usr/local/bin/app-operator
...
ENTRYPOINT ["app-operator"]

hasbro17 (Contributor)

@shawn-hurley You mentioned offline that we can't use multi-stage Dockerfiles:

I believe Fedora and RHEL do not, by default, support multi-stage Dockerfiles

Can you elaborate on why that's the case?

/cc @spahl Still need your input on this.

kfox1111

Multi-stage builds are a feature only in Docker versions greater than 1.13, and RHEL has stuck at 1.13.

I believe buildah supports multi-stage builds now, though, so maybe that is a good alternative?
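If buildah is the alternative, the multi-stage Dockerfile shown earlier could be driven by it directly. A rough sketch, assuming buildah is installed; the image tag and file path are placeholders, not SDK conventions:

```shell
# Sketch: build the multi-stage Dockerfile with buildah instead of docker.
# `buildah bud` (build-using-dockerfile) interprets the Dockerfile,
# including multiple FROM stages, without requiring a Docker daemon.
buildah bud -t app-operator:latest -f build/Dockerfile .

# List the resulting local images to confirm the build
buildah images
```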

@estroz estroz added kind/feature Categorizes issue or PR as related to a new feature. blocked documentation labels Oct 1, 2018
estroz added a commit to estroz/operator-sdk that referenced this pull request Oct 2, 2018
…h` instead of `docker`

`buildah` is a tool that builds OCI images, from its CLI interface or by interpreting
Dockerfiles. RHEL does not support docker-ce above v1.13, preventing the SDK from leveraging
the multi-stage build Dockerfile feature; `buildah` supports this feature, and will continue
to be supported in RHEL. See operator-framework#431 for relevant discussion.
@openshift-ci-robot openshift-ci-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Nov 20, 2018
estroz (Member) left a comment:

This solution still only works with buildah on RHEL. Many users will not be constrained by RHEL's Docker versioning, though. Should we impose an image-builder version check, e.g. docker v17.05+ || buildah, or let the image builders themselves generate errors?

shawn-hurley (Member Author)

@estroz Yeah... this feels like something we are going to have to do; otherwise we will constantly have issues. If we want to layer builds like this, I don't see a way around it.

Another option is to concatenate the files into a temp file and then use that file for the build. If we don't want to do the check, this could be an option.

AlexNPavel (Contributor)

I think a simple image-builder version check is enough for us. We specifically mention in our README that the minimum Docker version is 17.03, and `build --enable-tests` already depends on 17.05+ since it needs ARG-before-FROM support (we need to update the README to reflect that).
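A minimal sketch of such a version check, assuming Docker's `major.minor` scheme (pre-17.x versions like 1.13 sort below 17.05); `supportsMultiStage` is an illustrative helper, not an actual SDK function:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// supportsMultiStage reports whether a Docker version string such as
// "17.05.0-ce" or "1.13.1" meets the 17.05 minimum needed for
// multi-stage builds. Hypothetical helper for illustration.
func supportsMultiStage(version string) bool {
	// Strip any suffix like "-ce" before parsing.
	base := strings.SplitN(version, "-", 2)[0]
	parts := strings.SplitN(base, ".", 3)
	if len(parts) < 2 {
		return false
	}
	major, errMaj := strconv.Atoi(parts[0])
	minor, errMin := strconv.Atoi(parts[1])
	if errMaj != nil || errMin != nil {
		return false
	}
	// Multi-stage builds landed in Docker 17.05.
	return major > 17 || (major == 17 && minor >= 5)
}

func main() {
	fmt.Println(supportsMultiStage("1.13.1"))     // false: RHEL's docker
	fmt.Println(supportsMultiStage("17.03.2-ce")) // false: current README minimum
	fmt.Println(supportsMultiStage("17.05.0-ce")) // true
	fmt.Println(supportsMultiStage("18.09.2"))    // true
}
```

In practice the version string would come from the local Docker client (e.g. `docker version --format '{{.Client.Version}}'`), with buildah accepted as an alternative when the check fails.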

The `operator-sdk build` command will add an option to pass in the base image, for example: `operator-sdk build --base-dockerfile centos-base.Dockerfile`. The `new` command will generate a default base, as it does now, called `base.Dockerfile`. If no option is passed in, `base.Dockerfile` will be selected.

#### Operator-SDK Changes
The `operator-sdk` will now add an `sdk.Dockerfile`. This Dockerfile will contain all the steps to set up the operator image correctly for whichever type of operator is being built. It will be changed/updated and owned by the operator-sdk, and we will not expect or respect any changes made by a user to this file, which allows us to completely rewrite it as we see fit. The testing framework will not be changed as part of this proposal.
A Contributor left a review comment:

Just a thought, but how important is it to name the top-level Dockerfile `sdk.Dockerfile`? I'm wondering if we can keep it as `build/Dockerfile` so it doesn't break users with older projects.
I know it's a small change for existing projects to move `build/Dockerfile` to `build/sdk.Dockerfile`, but it would be nice to avoid even that if the rename isn't strictly necessary.

hasbro17 (Contributor)

Overall this approach SGTM. Just a nit on how this affects existing projects.

shawn-hurley (Member Author)

We have decided to hold on implementing this until real-world problems exist and we can further understand the problem.
