
Topology Mutation Hook: Variable discovery #6901

Closed
sbueringer opened this issue Jul 12, 2022 · 7 comments
Labels

  • area/runtime-sdk: Issues or PRs related to Runtime SDK
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@sbueringer
Member

sbueringer commented Jul 12, 2022

User Story

As a cluster operator, I would like not to be responsible for manually adding the variable definitions used by a Topology Mutation Hook extension to a ClusterClass.

(mentioned as future work here)

Detailed Description

Some background information:

  • Topology Mutation Hook CAEP
  • Today it is necessary to manually ensure that the variables used by a Topology Mutation Hook extension exist in the ClusterClass in which the extension is used
  • This means that extension authors should document which variables are used (and their types, as OpenAPI v3 schemas), and ClusterClass authors have to keep the variables in the ClusterClass in sync with the extension.

It would be nice if the variable definitions were automatically discovered from a Topology Mutation Hook extension and "added" to a ClusterClass (details TBD).

Anything else you would like to add:

Things to keep in mind:

  • The same variable could be defined with different types in multiple extensions and in the ClusterClass directly.
  • We have to figure out when the variables should be discovered (every reconcile is probably a bit too often).
  • Variables discovered from extensions should then be treated like every other variable in the ClusterClass (e.g. they should be used for validating/defaulting variables on a Cluster).
  • Do we discover variables per ClusterClass, per Cluster, or per Extension(Config)? (I lean towards per ClusterClass.)
  • We could consider only sending the discovered variables + builtin variables in a GeneratePatches call (and not all variables), to enforce that every extension has to "declare" the variables it uses.

/kind feature

@sbueringer sbueringer added the area/runtime-sdk Issues or PRs related to Runtime SDK label Jul 12, 2022
@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Jul 12, 2022
@fabriziopandini
Member

This is a super nice improvement, but also tricky to achieve.
I'm happy to pair in discussing a possible way forward.

@fabriziopandini fabriziopandini added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Jul 29, 2022
@fabriziopandini
Member

/triage accepted

@k8s-ci-robot k8s-ci-robot added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Aug 5, 2022
@fabriziopandini fabriziopandini self-assigned this Aug 5, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 3, 2022
@fabriziopandini
Member

/lifecycle frozen
/assign
I have started collecting some ideas on this topic; I will start moving this to a doc soon.

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 3, 2022
@fabriziopandini
Member

/unassign
/assign @killianmuldoon
@killianmuldoon please consider whether we should keep this issue open or close it, now that we are also tracking this work in other umbrella issues.

@killianmuldoon
Contributor

/close

The current state of this issue is covered in the proposal amendment, and the implementation issue #7985

@k8s-ci-robot
Contributor

@killianmuldoon: Closing this issue.

In response to this:

/close

The current state of this issue is covered in the proposal amendment, and the implementation issue #7985

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Projects
None yet
Development

No branches or pull requests

5 participants