
Improve the error output of the declarative plugin #2563

Closed
mariuskimmina opened this issue Mar 26, 2022 · 13 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@mariuskimmina
Contributor

What do you want to happen?

As discussed in #2562 the error output of the declarative plugin could be improved.

Errors such as:

kubebuilder init --domain my.domain --repo my.domain/guestbook --plugins declarative.go.kubebuilder.io/v1
Error: unknown flag: --domain

Can be confusing to the user.
I'll work on this myself, just creating the issue to track the work.

Extra Labels

No response

@mariuskimmina mariuskimmina added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 26, 2022
@mariuskimmina
Contributor Author

/assign

@camilamacedo86
Member

@mariuskimmina,

Could you please improve the description and add:

  • When does the error/problem occur? Under what circumstances?
  • What is the expected behaviour?
  • What is the actual behaviour?
  • What is the proposed solution?

@mariuskimmina
Contributor Author

mariuskimmina commented Apr 18, 2022

The problem occurs when people try to use the declarative plugin to initialise a new project

for example, the following command looks like it should be correct

kubebuilder init --domain my.domain --repo my.domain/guestbook --plugins declarative.go.kubebuilder.io/v1

But the user will get the following error

Error: unknown flag: --domain

Obviously the domain flag does exist; the real problem is that the declarative plugin can't be used alone. If we use the same command but add another plugin:

kubebuilder init --domain my.domain --repo my.domain/guestbook --plugins go.kubebuilder.io/v3,declarative.go.kubebuilder.io/v1

then the command works. But this might not be obvious to a new user, and the error in this case is not helpful at all.

Ideally we would want an error here indicating that the declarative plugin can't be used alone.
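The check being asked for could be sketched as follows. This is a standalone illustration, not kubebuilder's actual code: the function and type names (validatePluginChain, isDeclarative) are hypothetical, and a real fix would hook into kubebuilder's CLI plugin resolution before flag parsing happens.

```go
package main

import (
	"fmt"
	"strings"
)

// isDeclarative reports whether a plugin key refers to the declarative
// plugin. (Hypothetical helper; matching by key prefix is an assumption.)
func isDeclarative(key string) bool {
	return strings.HasPrefix(key, "declarative.")
}

// validatePluginChain returns an explanatory error when the declarative
// plugin is the only plugin requested, instead of letting flag parsing
// fail later with the confusing "unknown flag: --domain".
func validatePluginChain(keys []string) error {
	if len(keys) == 1 && isDeclarative(keys[0]) {
		return fmt.Errorf(
			"plugin %q cannot be used on its own: combine it with a base plugin, "+
				"e.g. --plugins go.kubebuilder.io/v3,%s", keys[0], keys[0])
	}
	return nil
}

func main() {
	// Reproduces the scenario from the issue: declarative alone fails fast
	// with a helpful message, while a combined chain passes validation.
	fmt.Println(validatePluginChain([]string{"declarative.go.kubebuilder.io/v1"}))
	fmt.Println(validatePluginChain([]string{"go.kubebuilder.io/v3", "declarative.go.kubebuilder.io/v1"}))
}
```

Running this prints the explanatory error for the lone declarative chain and nil for the combined chain.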

@camilamacedo86
Member

Thank you for the clarifications. Note that each plugin must only know what it requires for its own operation, not about other plugins explicitly.

For the error: Error: unknown flag: --domain

It happens because the declarative plugin does not implement this flag at all.

However, I understand that there is room for improvement here. We might be able to improve the plugin's help output and errors in order to clarify what is or is not available, and what is or is not a requirement for this plugin.

@mariuskimmina
Contributor Author

Yes, I understand. I already tried to work on this and saw that it wasn't as simple as adjusting just the plugin code itself. That being said, I will still look further into this and see if I can make some improvements; it is just going to take longer than I originally anticipated.

@rashmigottipati
Contributor

@mariuskimmina can you please propose the improvements you are suggesting to handle such scenarios?

@camilamacedo86 camilamacedo86 added the triage/needs-information Indicates an issue needs more information in order to work on it. label May 5, 2022
@mariuskimmina
Contributor Author

So, I think the problem occurs when we use the declarative plugin alone, as in this command:

kubebuilder init --domain my.domain --repo my.domain/guestbook --plugins declarative.go.kubebuilder.io/v1

This results in the error Error: unknown flag: --domain because the plugin does not have this flag. But I would suggest that the plugin shouldn't try to run at all; instead, there should be an error explaining that the declarative plugin cannot be used without another plugin.
I'm not sure how to properly implement this, though, and I am also currently more focused on another project over at CoreDNS, so I will unassign myself from this issue for now. If no one else wants to pick this up, feel free to close it.

@mariuskimmina mariuskimmina removed their assignment May 6, 2022
@NikhilSharmaWe
Member

NikhilSharmaWe commented May 9, 2022

@camilamacedo86 Do we need to work on this?

If yes, do we want to:

  • Return an error whenever the declarative plugin is used without another plugin? And when an unknown flag such as --domain is also used, should we return both errors (declarative plugin and unknown flag)?
  • Not run the command when an unknown flag is used, even if another plugin besides the declarative one is present?
    For example, it is mentioned that this command runs without any error:
    kubebuilder init --domain my.domain --repo my.domain/guestbook --plugins go.kubebuilder.io/v3,declarative.go.kubebuilder.io/v1
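The first option above, returning both errors in one run, can be sketched with the standard library's errors.Join (Go 1.20+). This is a hypothetical standalone example; validateInit and its parameters are invented for illustration and are not part of kubebuilder.

```go
package main

import (
	"errors"
	"fmt"
)

// validateInit collects every validation failure before bailing out, so the
// user sees both "declarative plugin needs a companion" and any unknown-flag
// errors in a single run, rather than one at a time.
func validateInit(declarativeAlone bool, unknownFlags []string) error {
	var errs []error
	if declarativeAlone {
		errs = append(errs, errors.New("the declarative plugin cannot be used without a base plugin"))
	}
	for _, f := range unknownFlags {
		errs = append(errs, fmt.Errorf("unknown flag: --%s", f))
	}
	// errors.Join returns nil when the slice is empty, so the happy path
	// needs no special casing.
	return errors.Join(errs...)
}

func main() {
	// Both problems from the issue reported at once.
	fmt.Println(validateInit(true, []string{"domain"}))
}
```

errors.Join keeps each wrapped error addressable via errors.Is/errors.As, so callers could still branch on a specific failure if needed.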

@camilamacedo86
Member

camilamacedo86 commented May 9, 2022

Hi @NikhilSharmaWe,

You do not need to work on any issue :-). However, if you want to contribute to this one, I am not sure what the best approach would be to address improvements for this case.

As part of this issue, we still need to do an assessment and see if we can propose any improvement/change. It might also involve seeing whether the solution could be a more generic design where we allow plugins to define prerequisites, for example. (Not sure if that would be the case.)
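The generic design floated above, letting a plugin declare its prerequisites so the CLI can verify them before running, might look something like this. Everything here (PluginWithPrerequisites, RequiresCompanion, checkPrerequisites) is an invented sketch; none of these names exist in kubebuilder's actual plugin interfaces.

```go
package main

import "fmt"

// Plugin is a minimal stand-in for a CLI plugin.
type Plugin interface {
	Name() string
}

// PluginWithPrerequisites is an optional extension interface: a plugin that
// implements it can declare that it must be combined with another plugin.
type PluginWithPrerequisites interface {
	Plugin
	RequiresCompanion() bool
}

type basePlugin struct{}

func (basePlugin) Name() string { return "go.kubebuilder.io/v3" }

type declarativePlugin struct{}

func (declarativePlugin) Name() string            { return "declarative.go.kubebuilder.io/v1" }
func (declarativePlugin) RequiresCompanion() bool { return true }

// checkPrerequisites runs before any plugin executes, rejecting chains that
// violate a plugin's declared prerequisites with an explanatory error.
func checkPrerequisites(chain []Plugin) error {
	for _, p := range chain {
		if pr, ok := p.(PluginWithPrerequisites); ok && pr.RequiresCompanion() && len(chain) == 1 {
			return fmt.Errorf("plugin %q cannot be used on its own", p.Name())
		}
	}
	return nil
}

func main() {
	fmt.Println(checkPrerequisites([]Plugin{declarativePlugin{}}))
	fmt.Println(checkPrerequisites([]Plugin{basePlugin{}, declarativePlugin{}}))
}
```

Using an optional extension interface keeps existing plugins untouched: only plugins that opt in to declaring prerequisites are inspected, which matches the comment above that each plugin should only know about its own requirements.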

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 7, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 6, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Oct 6, 2022
Development

No branches or pull requests

6 participants