
How to use docker image of operator-sdk with 'bundle validate'? #6666

Closed
Jeansen opened this issue Jan 31, 2024 · 14 comments

Jeansen commented Jan 31, 2024

Type of question

General operator-related help

Question

What did you do?

If I run, e.g.:

docker run --rm quay.io/operator-framework/operator-sdk:latest bundle validate local-reg/some/image:tag

then this fails, because operator-sdk cannot pull the specified image from within the container.

What did you expect to see?

I'd expect the command to succeed, or some documentation about how to do it.

What did you see instead? Under which circumstances?

A workaround would be to run the above command outside the container. But especially in a CI/CD environment I'd like to run it from within the official container.

Environment

Operator type:

Kubernetes cluster type:

$ operator-sdk version

$ go version (if language is Go)

$ kubectl version

Additional context

@jberkhahn jberkhahn added the triage/support Indicates an issue that is a support question. label Feb 12, 2024
@jberkhahn jberkhahn added this to the Backlog milestone Feb 12, 2024
acornett21 (Contributor) commented:

/assign

acornett21 (Contributor) commented:

Hi @Jeansen, the issue you are running into isn't an operator-sdk issue; it's a Docker-in-Docker issue. To do Docker-in-Docker you'd need to be able to run Docker as privileged on the host system. Usually CI systems pull in the operator-sdk binary and execute against that.
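
For illustration, a minimal sketch of such a CI step, assuming a Linux amd64 runner; the release version shown (v1.33.0) is only an example and should be pinned to whatever you need:

# Fetch the operator-sdk release binary and run the validation directly,
# with no Docker-in-Docker involved (v1.33.0 and the image ref are examples).
curl -Lo operator-sdk \
  https://github.com/operator-framework/operator-sdk/releases/download/v1.33.0/operator-sdk_linux_amd64
chmod +x operator-sdk
./operator-sdk bundle validate local-reg/some/image:tag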

Jeansen (Author) commented Feb 13, 2024

Hi @acornett21, does this image even support Docker-in-Docker, then? Even with --privileged I get the same result: it does not work.

acornett21 (Contributor) commented:

That's correct, because the operator-sdk image does not ship with Docker inside it. It doesn't make sense for the application to have a hard dependency on Docker; this is why the --image-builder flag is there.

The --image-builder=none flag should accomplish what you are looking for; if not, the only option would be to use the binary.

Jeansen (Author) commented Feb 14, 2024

Oh, I see. If I build a custom Docker image where I include my CA trust and update it accordingly during image creation, then it works. Otherwise I get an x509 error. Unfortunately, it looks like there are no flags to skip TLS verification or even add a CA, as there are with bundle-upgrade. Or am I simply overlooking something?
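
For illustration, a sketch of that custom-image workaround, assuming the base image uses a RHEL-style trust store; my-ca.crt is a placeholder for the registry's CA certificate:

# Hypothetical build step: bake a private CA into a derived image.
# The base image may run as a non-root user, hence the explicit USER root.
cat > Dockerfile <<'EOF'
FROM quay.io/operator-framework/operator-sdk:latest
USER root
COPY my-ca.crt /etc/pki/ca-trust/source/anchors/
RUN update-ca-trust
EOF
docker build -t operator-sdk-with-ca .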

acornett21 (Contributor) commented:

@Jeansen I am really not following your use case, or why you would need to add a CA cert to a bundle image. A bundle image is static content, the files in an operator's /bundle folder built from a scratch image. The bundle validate command does not run the operator in a cluster; it does static analysis of the files (API, CSV, etc. YAMLs). bundle-upgrade and bundle-run, on the other hand, do require a cluster.

If we look at the example below, there are no errors:

[vagrant@localhost ~]$ docker run quay.io/operator-framework/operator-sdk:latest bundle validate --image-builder=none quay.io/opdev/simple-demo-operator-bundle:v0.0.7
time="2024-02-14T22:25:30Z" level=info msg="Unpacking image layers"
time="2024-02-14T22:25:32Z" level=warning msg="Warning: Value : (simple-demo-operator.v0.0.7) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects."
time="2024-02-14T22:25:32Z" level=info msg="All validation tests have completed successfully"

What errors are you seeing? What CI system is this? Where is it running?

Jeansen (Author) commented Feb 15, 2024

@acornett21 My use case is simple. I've got a local registry where the currently built bundle resides. Before I create a catalog, I'd like to validate the bundle. Since the registry is accessed ONLY via HTTPS, I need to either skip TLS checks or have the CA cert available. The latter I could work around by creating a custom image based on the operator-sdk image. But I do not want to have any tools installed on the CI/CD build server itself; everything is done from within different containers. So, if I use the provided image directly, I get:

time="2024-02-15T17:21:14Z" level=info msg="Unpacking image layers"
time="2024-02-15T17:21:14Z" level=info msg="trying next host" error="failed to do request: Head \"https://proxy-ng:443/v2/quarkus/cis-op-bdl/manifests/v1.2.0\": tls: failed to verify certificate: x509: certificate signed by unknown authority" host="proxy-ng:443"
time="2024-02-15T17:21:14Z" level=fatal msg="error unpacking image proxy-ng:443/quarkus/cis-op-bdl:v1.2.0: error resolving name for image ref proxy-ng:443/quarkus/cis-op-bdl:v1.2.0: failed to do request: Head \"https://proxy-ng:443/v2/quarkus/cis-op-bdl/manifests/v1.2.0\": tls: failed to verify certificate: x509: certificate signed by unknown authority"

And, my solution with the extended image aside, I have no way of telling the operator-sdk container where to find my CA file or to skip verification (although that is also not the best solution). And mounting it into the container does not work (yet), because my host is Debian whereas the image is based on RHEL, so the CA trust-store paths differ.
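
One possible mount-based sketch, untested and resting on assumptions: Go's x509 loader on RHEL-based images reads /etc/pki/tls/certs/ca-bundle.crt, so concatenating the Debian host bundle with the private CA (my-ca.crt, a placeholder) and mounting the result at that path might avoid the custom image:

# Unverified sketch: combine the Debian host CA bundle with the private
# registry CA, then mount it where the RHEL-based image expects it.
cat /etc/ssl/certs/ca-certificates.crt my-ca.crt > combined-ca.crt
docker run --rm \
  -v "$PWD/combined-ca.crt:/etc/pki/tls/certs/ca-bundle.crt:ro" \
  quay.io/operator-framework/operator-sdk:latest \
  bundle validate --image-builder=none proxy-ng:443/quarkus/cis-op-bdl:v1.2.0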

openshift-bot commented:

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 16, 2024
Jeansen (Author) commented May 16, 2024

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 16, 2024
openshift-bot commented:

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 14, 2024
openshift-bot commented:

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 14, 2024
Jeansen (Author) commented Sep 14, 2024

/remove-lifecycle rotten

openshift-bot commented:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci openshift-ci bot closed this as completed Oct 15, 2024
openshift-ci bot commented Oct 15, 2024

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
