OLM V1 Milestone 2 #121
Coming up with this is step 1 for defining a milestone I think. My feeling is that the focus could be on some of the following:
This list is probably too big for everything to be in the milestone, but IMO, these are the next logical steps.
Another item that I think should be part of Milestone 2 is a centralized upstream OLMv1 distribution, which would:
Milestone 1 will introduce rudimentary install and uninstall commands to the kubectl operator plugin; it may make sense to begin adding more of the features described in the OLM V1 UX doc in Milestone 2 or a subsequent milestone.
Once we are finished with M1, we should look at the deppy framework for bottlenecks (especially in the design). We should decide whether it is worth switching to a generics-based framework. The other thing we should consider is resolution pipelines, and putting that framework in deppy.
I vote, like @joelanford, to focus on basic upgrade functionality - learning how to pivot from one version to another. Potentially we could already take steps toward OLM auto-choosing a default path when multiple update options exist, while at the same time allowing the user to specify a desired target version that may not be the latest. This will likely require more work on the resolution framework. While we are at it, we can also expand that framework to start resolving constraints against cluster properties or other extensions.
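The default-vs-explicit target selection described above could be sketched roughly as follows; `PickUpgrade`, the plain `X.Y.Z` parsing, and the candidate versions are illustrative assumptions for this comment, not OLM or deppy APIs:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion splits a plain "X.Y.Z" string into integer parts
// (no pre-release/build handling; a real resolver would use full semver).
func parseVersion(v string) [3]int {
	var out [3]int
	for i, p := range strings.SplitN(v, ".", 3) {
		n, _ := strconv.Atoi(p)
		out[i] = n
	}
	return out
}

// less reports whether version a precedes version b.
func less(a, b [3]int) bool {
	for i := 0; i < 3; i++ {
		if a[i] != b[i] {
			return a[i] < b[i]
		}
	}
	return false
}

// PickUpgrade returns the requested version if it is among the candidates;
// with no request, it falls back to the highest candidate (the "default path").
func PickUpgrade(candidates []string, requested string) (string, error) {
	best := ""
	for _, c := range candidates {
		if c == requested {
			return c, nil
		}
		if best == "" || less(parseVersion(best), parseVersion(c)) {
			best = c
		}
	}
	if requested != "" {
		return "", fmt.Errorf("requested version %s not available", requested)
	}
	return best, nil
}

func main() {
	v, _ := PickUpgrade([]string{"1.0.1", "1.2.0", "1.1.3"}, "")
	fmt.Println(v) // default path: 1.2.0
}
```

The point of the sketch is only the decision rule: an explicit target short-circuits resolution, otherwise the resolver chooses a default for the user.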
@joelanford is this speaking about:
We reviewed this in the community meeting and suspect it's the former, but wanted to confirm.
Essentially, I'm suggesting that we need some way of mapping an OLMv1 version to a set of versions of all of the components such that anyone could:
For example, would the subcomponents look something like this?

```yaml
- name: operator-controller
  repo: github.com/operator-framework/operator-controller
  commit: 47485a7e8e678ff7f922321d014e8ae1adeef352
- name: rukpak
  repo: github.com/operator-framework/rukpak
  commit: c527e657b4dabfd228068fbe458b72acb013bf00
```

This is totally made up and I haven't thought too much about what sort of metadata we would need (name, repo, commit are just thrown out there without much thought). But the overall point is: How do we define this mapping such that we can deterministically install a particular distribution of OLM at a particular version of that distribution? And how can we make it easy to sub in something else?
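As a rough illustration of how such a mapping could drive a deterministic install, here is a sketch; the `Distribution`/`Component` types, the version tag, and the rendered git commands are hypothetical, reusing the made-up names, repos, and commits from the example above:

```go
package main

import "fmt"

// Component pins one OLMv1 subcomponent to an exact commit, mirroring the
// made-up mapping in the comment above.
type Component struct {
	Name   string
	Repo   string
	Commit string
}

// Distribution maps one version of the OLMv1 distribution to an exact set of
// component commits, so the same version always yields the same install.
type Distribution struct {
	Version    string
	Components []Component
}

// CheckoutPlan renders the deterministic fetch steps implied by the mapping.
func (d Distribution) CheckoutPlan() []string {
	var steps []string
	for _, c := range d.Components {
		steps = append(steps,
			fmt.Sprintf("git clone https://%s && git -C %s checkout %s", c.Repo, c.Name, c.Commit))
	}
	return steps
}

func main() {
	d := Distribution{
		Version: "v1.0.0-alpha1", // made-up distribution version
		Components: []Component{
			{"operator-controller", "github.com/operator-framework/operator-controller", "47485a7e8e678ff7f922321d014e8ae1adeef352"},
			{"rukpak", "github.com/operator-framework/rukpak", "c527e657b4dabfd228068fbe458b72acb013bf00"},
		},
	}
	for _, s := range d.CheckoutPlan() {
		fmt.Println(s)
	}
}
```

Subbing in a different component would then just mean swapping one entry of the mapping, which is the "easy to sub in something else" property the comment asks for.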
Goals:
Suggested Milestone Stories:
Current prioritized list of reviewed suggestions:
When reviewing the suggestions, the upstream community decided that they should be prioritized in the following order (missing suggestions have not yet been reviewed).
-- Defining how constraints are defined in the Operator CR Spec.
--- Could a user learn about supported constraints with `kubectl explain`?
--- A GH Issue is created to discuss Generic Constraints and is added to the GA milestone.
--- The community recommends that we avoid a fully generic template as it may be difficult to use.
-- Channel: Users may specify a channel that the bundle MUST come from
-- Version Range: Users may supply a range OR a specific version that a bundle must satisfy.
-- Update kubectl operator plugin to allow users to specify new constraints.
-- Consider different user personas: Super Cluster Admin comfortable with deploying bundleDeployments, Cluster Admin relying on OLM V1 APIs
-- What are all the "force install" scenarios? Are different "knobs" required for different "forced installs"?
-- How does this work with Operator CRs that introduce constraints?
-- This would likely require use of a DeppySource other than OLMv0 CatalogSources (e.g. a "cluster" entity that provides cluster version properties to the resolver)
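A "cluster" entity feeding properties to the resolver, as suggested in the last item above, could be sketched like this; `ClusterProperties`, `Constraint`, and the `olm.kubeVersion` property name are invented for illustration and are not actual deppy or OLM APIs:

```go
package main

import "fmt"

// ClusterProperties is what a hypothetical "cluster" DeppySource might expose
// to the resolver: key/value properties such as the Kubernetes version.
type ClusterProperties map[string]string

// Constraint is a simplified property-equality constraint a bundle or
// Operator CR might declare against the cluster.
type Constraint struct {
	Property string
	Required string
}

// Satisfies reports whether the cluster satisfies every constraint,
// returning the first violation for error reporting.
func Satisfies(cluster ClusterProperties, constraints []Constraint) error {
	for _, c := range constraints {
		got, ok := cluster[c.Property]
		if !ok {
			return fmt.Errorf("cluster does not report property %q", c.Property)
		}
		if got != c.Required {
			return fmt.Errorf("property %q is %q, but bundle requires %q", c.Property, got, c.Required)
		}
	}
	return nil
}

func main() {
	cluster := ClusterProperties{"olm.kubeVersion": "1.26"} // made-up property
	err := Satisfies(cluster, []Constraint{{"olm.kubeVersion", "1.26"}})
	fmt.Println(err == nil)
}
```

A real implementation would presumably support richer matchers than equality (ranges, set membership), but the shape is the same: the resolver consumes cluster properties from a source alongside catalog entries.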