
Can I specify namespace in "required" operators in clusterserviceversion #1191

Closed

clyang82 opened this issue Dec 12, 2019 · 5 comments
clyang82 commented Dec 12, 2019

Bug Report

What did you do?

I have an operator (OpA) that depends on another operator (OpC), so in OpA's ClusterServiceVersion file I use `required` to express this dependency. For example:

customresourcedefinitions:
    owned:
    - description: A configuration file for a Jaeger custom resource.
      displayName: Jaeger
      kind: Jaeger
      name: jaegers.jaegertracing.io
      version: v1
    required:
    - description: An Elasticsearch cluster instance
      displayName: Elasticsearch
      kind: Elasticsearch
      name: elasticsearches.logging.openshift.io
      version: v1
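For reference, OpB's ClusterServiceVersion would carry the same kind of `required` stanza (a sketch mirroring the entry above, reusing the same Elasticsearch CRD details):

```yaml
# Sketch of the equivalent required section in OpB's CSV,
# mirroring the Elasticsearch dependency shown above.
customresourcedefinitions:
  required:
  - description: An Elasticsearch cluster instance
    displayName: Elasticsearch
    kind: Elasticsearch
    name: elasticsearches.logging.openshift.io
    version: v1
```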

I have another operator (OpB) that also depends on OpC, so in OpB's ClusterServiceVersion file I use `required` in the same way.

Following this approach, OpC ends up installed twice: once in OpA's namespace and once in OpB's namespace.

What did you expect to see?

Can I have only one OpC instance installed in the cluster? In other words, if OpB finds that OpC is already installed, it should skip the OpC installation.

I did not find a way to specify the namespace for a required operator.

Any suggestions? Thanks.

@clyang82 clyang82 changed the title "required" behaviour in clusterserviceversion Can I specify namespace in "required" operators in clusterserviceversion Dec 12, 2019
@clyang82

I think one approach is to use leader election, so that one instance handles the reconciliation while the other instances stay inactive:
https://github.com/operator-framework/operator-sdk/blob/master/doc/user-guide.md#leader-election

@awgreene

Thank you for opening this issue @clyang82.

The behavior you're describing is by design.

OLM created OperatorGroups to support multitenancy. If an operator is installed into an OperatorGroup, its dependencies are installed in the same OperatorGroup. The OperatorGroup is then updated with a list of APIs that are available to operators in that OperatorGroup.

In the situation you're describing, it appears that the API associated with OpC, while on the cluster and available in OpA's OperatorGroup, was not available in OpB's OperatorGroup, likely because the targetNamespaces did not overlap. As a result, OpC was installed in both OperatorGroups.
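Following that explanation, one way to make the dependency resolve only once is to install OpA and OpB into the same namespace covered by a single OperatorGroup, so that OpC's API is visible to both. A hypothetical sketch (the namespace and OperatorGroup names are assumptions, not from this thread):

```yaml
# Hypothetical: a single OperatorGroup whose target namespace
# hosts both OpA and OpB, so OLM resolves OpC only once there.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: shared-operators   # assumed name
  namespace: shared-ns     # assumed namespace
spec:
  targetNamespaces:
  - shared-ns
```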

Closing this, as I don't believe it is an issue, but I'll suggest to the team that we confirm this behavior is documented. If you have additional concerns, please feel free to re-open the issue.

@clyang82

Good to know. Thanks @awgreene

@chenzhiwei

https://docs.openshift.com/container-platform/4.3/operators/understanding-olm/olm-understanding-operatorgroups.html#olm-operatorgroups-target-namespace_olm-understanding-operatorgroups

Listing multiple namespaces via spec.targetNamespaces or use of a label selector via spec.selector is not recommended, as the support for more than one target namespace in an OperatorGroup will likely be removed in a future release.

@awgreene It seems that support for multiple target namespaces in a single OperatorGroup will be removed in a future release. Are there any other ways to handle such dependency cases?
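One alternative that avoids listing multiple target namespaces is an all-namespaces (global) OperatorGroup, which OLM expresses as an empty spec: operators installed into it watch the whole cluster, so a shared dependency like OpC would be resolved once. A hedged sketch (names are assumptions, and the operators involved must support the AllNamespaces install mode):

```yaml
# Hypothetical: an OperatorGroup with no targetNamespaces selects
# all namespaces; a dependency installed here serves the cluster.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: global-operators   # assumed name
  namespace: openshift-operators
spec: {}
```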

@chenzhiwei

@njhale Hi Nick, here is a question related to operator dependency management.
