OSDK->KB: Logging Flags #885

Closed
DirectXMan12 opened this issue Jul 24, 2019 · 19 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@DirectXMan12
Contributor

Optional utilities for adding flags to customize log level.

/kind feature

@DirectXMan12 DirectXMan12 added the kind/feature Categorizes issue or PR as related to a new feature. label Jul 24, 2019
@DirectXMan12 DirectXMan12 added this to Code in KubeBuilder / OperatorSDK Integration via automation Jul 24, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 22, 2019
@DirectXMan12
Contributor Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 6, 2019
@camilamacedo86
Member

Description:

It is basically to get [0] into controller-runtime. Haseeb has a PR that would make this simpler [1].
[0]: https://github.com/operator-framework/operator-sdk/blob/master/pkg/log/zap/flags.go
[1]: kubernetes-sigs/controller-runtime#639

@fejta-bot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 10, 2020
@camilamacedo86
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 10, 2020
@Adirio
Contributor

Adirio commented Feb 11, 2020

A pattern I really like about log levels is choosing a default log level (usually Info, Warning or Error) and then providing two flags (-v or --verbose and -q or --quiet) that increase/decrease the log level. These flags can be provided several times.

For example, if the default level is Warning:

  • kubebuilder -vv ... will set the level to Debug
  • kubebuilder -q ... will set the level to Error

I already have this pattern implemented in several projects, so I can contribute it if you like.

@camilamacedo86
Member

camilamacedo86 commented Feb 11, 2020

Hi @Adirio,

Sure, feel free to contribute. It sounds great 👍. Just be aware that we need to do it as described in #885 (comment)

@Adirio
Contributor

Adirio commented Feb 11, 2020

Do we need to use the flag names and behavior in that file? Or do we just need to provide all the functionality of those flags?

For example, kubebuilder does a good job not showing which logging framework it uses, but the flag names mention zap directly.

@camilamacedo86
Member

Hi @Adirio,

Note that we need to merge the PR #1116 first.

Do we need to use the flag names and behavior in that file? Or do we just need to provide all the functionality of those flags?

For example, kubebuilder does a good job not showing which logging framework it uses, but the flag names mention zap directly.

Good question. The reason for this task is the integration with SDK, so I would say that we should keep the same names. However, I would also prefer the common -vv usage. @estroz @hasbro17 wdyt?

@Adirio
Contributor

Adirio commented Feb 13, 2020

The following list describes the flags in Operator SDK and how they can be implemented with controller-runtime@0.5.0:

  • --zap-devel: development (true) or production (false) mode.
    sigs.k8s.io/controller-runtime/pkg/log/zap.UseDevMode
  • --zap-encoder: "json" (no spaces after , and :) or "console" (spaces after , and :).
    sigs.k8s.io/controller-runtime/pkg/log/zap.Encoder
  • --zap-time-encoding: time formatting in logs; valid values are "iso8601", "ISO8601", "millis" (milliseconds since epoch), "nanos" (nanoseconds since epoch), "epoch" (seconds since epoch).
    sigs.k8s.io/controller-runtime/pkg/log/zap.Encoder
  • --zap-level: "debug", "info", "error", or any positive integer.
    sigs.k8s.io/controller-runtime/pkg/log/zap.Level
  • --zap-stacktrace-level: "debug", "info", "error", or any positive integer.
    sigs.k8s.io/controller-runtime/pkg/log/zap.StacktraceLevel
  • --zap-sample: allows disabling log sampling in production mode (in development mode it is always disabled)
    NOTE: controller-runtime@0.5.0 always samples in production mode
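For illustration, the --zap-level contract above (a named level or any positive integer) can be sketched as a small parser. The numeric mapping below follows zapcore's convention (debug = -1, info = 0, error = 2, and a custom verbosity n enabling levels down to -n); that mapping is an assumption about the intended semantics, not controller-runtime's actual code.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLevel accepts "debug", "info", "error", or any positive integer,
// mirroring the --zap-level contract. Lower numbers are more verbose,
// following zapcore's level numbering.
func parseLevel(s string) (int, error) {
	switch strings.ToLower(s) {
	case "debug":
		return -1, nil
	case "info":
		return 0, nil
	case "error":
		return 2, nil
	}
	n, err := strconv.Atoi(s)
	if err != nil || n < 1 {
		return 0, fmt.Errorf("invalid log level %q", s)
	}
	// A custom verbosity n enables everything down to level -n.
	return -n, nil
}

func main() {
	for _, s := range []string{"debug", "info", "5"} {
		if v, err := parseLevel(s); err == nil {
			fmt.Println(s, "->", v)
		}
	}
}
```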

My suggestion would be:

  • --development / -d instead of --zap-devel
  • --verbose / -v and --quiet / -q instead of --zap-level with the behavior described in OSDK->KB: Logging Flags #885 (comment)
  • --log-stacktrace-level instead of --zap-stacktrace-level extending the possible values to include warnings, panics and fatals as levels.
  • Do not implement --zap-encoder. Notice that --development/--zap-devel already modifies the encoding format and the difference from "json" to "console" are just a couple spaces.
  • --log-time-format instead of --zap-time-encoding extending the options to include all zap supported ones.
  • Do not implement --zap-sample, as controller-runtime@0.5.0 doesn't allow disabling sampling in production mode.

@shawn-hurley

https://github.com/kubernetes-sigs/controller-runtime/pull/767/files

This PR has merged, with the current flags being:

  • zap-devel
  • zap-encoder
  • zap-log-level
  • zap-stacktrace-level

I believe it is important for these names to have the zap prefix, to make clear what we are manipulating. There are many systems one could use behind the logr interface; we are using zap, and I think that needs to be exposed in the flags. If we added a new package to configure klog and used the klogr interface, I would suggest that it have a different set of flags you could bind to.

I think we can add follow-ups to:

  1. Change the name of zap-devel to zap-development. As we don't have short flags, I personally feel zap-devel gets the same point across with less typing, but I'm not going to get hung up either way.

  2. Change zap-log-level to zap-verbosity. I would vote against this because it is not how zap works, but if we had something like klog flags then I would be very much in favor of that style.

@fejta-bot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 9, 2020
@camilamacedo86
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 10, 2020
@fejta-bot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 8, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 8, 2020
@camilamacedo86
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 8, 2020
@StevenACoffman

Given that kubernetes-sigs/controller-runtime#767 is merged, what remains on this?

Is it just this?

  • --zap-time-encoding: time formatting in logs; valid values are "iso8601", "ISO8601", "millis" (milliseconds since epoch), "nanos" (nanoseconds since epoch), "epoch" (seconds since epoch).
    sigs.k8s.io/controller-runtime/pkg/log/zap.Encoder
  • --zap-sample: allows disabling log sampling in production mode (in development mode it is always disabled)
    NOTE: controller-runtime@0.5.0 always samples in production mode

@camilamacedo86
Member

camilamacedo86 commented Oct 14, 2020

Hi @StevenACoffman,

We need to address the comment #885 (comment) in https://github.com/kubernetes-sigs/controller-runtime/blob/0e1344145ac34e036199f4d5abc8c71b59b7d7e5/pkg/log/zap/zap.go#L237, and the flags that you raised as well. Also, note that controller-runtime has the issue kubernetes-sigs/controller-runtime#1035.

IMHO we ought to track this need in controller-runtime, work on it there, and then close this issue with PR #1721. WDYT @joelanford?

@StevenACoffman, would you like to help with this one?

@camilamacedo86
Member

So, I think we can close this one with #1721 and then, if needed, raise a new issue with a more defined scope to pin down exactly what is still missing and should get done in KB. However, please feel free to re-open if you think it is required.

KubeBuilder / OperatorSDK Integration automation moved this from Code to Done Oct 23, 2020
7 participants