
Handle k8s.io/client-go warnings #18961

Closed
ChrsMark opened this issue Jun 4, 2020 · 5 comments · Fixed by #18964
Labels
bug · Team:Integrations (Label for the Integrations team) · Team:Platforms (Label for the Integrations - Platforms team)

Comments

@ChrsMark
Member

ChrsMark commented Jun 4, 2020

With #18817 we upgraded the k8s.io/client-go dependency, which resulted in warnings from the client appearing in the output of Beats whenever a k8s client tries to initialise. A common case is running Metricbeat, which has add_kubernetes_metadata enabled by default and produces output like this:

./metricbeat modules disable system
W0604 10:46:01.769645    8048 client_config.go:559] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0604 10:46:01.769661    8048 client_config.go:566] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Module system is already disabled

This is expected behaviour when no k8s configuration is set up, but it is quite noisy and we should handle these messages properly.

Related to kubernetes/client-go#18 kubernetes/client-go#610

cc: @exekias

ChrsMark added the bug, Team:Integrations, and Team:Platforms labels on Jun 4, 2020
@elasticmachine
Collaborator

Pinging @elastic/integrations (Team:Integrations)

@elasticmachine
Collaborator

Pinging @elastic/integrations-platforms (Team:Platforms)

@exekias
Contributor

exekias commented Jun 4, 2020

Thanks for opening this. I see from the related issues that we cannot really disable client-go logging.

I wonder if, as an alternative, there is a way to check whether inClusterConfig will work before starting the client; that way we could avoid the warning.

@ChrsMark
Member Author

ChrsMark commented Jun 4, 2020

That sounds good @exekias!

Better yet, we could "re-implement" BuildConfigFromFlags on our side in order to avoid this logging. Something like this:

import (
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func buildConfig(kubeconfigPath string) (*restclient.Config, error) {
	// Prefer the in-cluster configuration when no kubeconfig path is given,
	// but fall back silently instead of logging a warning.
	if kubeconfigPath == "" {
		kubeconfig, err := restclient.InClusterConfig()
		if err == nil {
			return kubeconfig, nil
		}
	}
	return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		&clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfigPath},
		&clientcmd.ConfigOverrides{ClusterInfo: clientcmdapi.Cluster{Server: ""}}).ClientConfig()
}

instead of calling clientcmd.BuildConfigFromFlags() at

cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)

which is implemented upstream as:

func BuildConfigFromFlags(masterUrl, kubeconfigPath string) (*restclient.Config, error) {
	if kubeconfigPath == "" && masterUrl == "" {
		klog.Warningf("Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.")
		kubeconfig, err := restclient.InClusterConfig()
		if err == nil {
			return kubeconfig, nil
		}
		klog.Warning("error creating inClusterConfig, falling back to default config: ", err)
	}
	return NewNonInteractiveDeferredLoadingClientConfig(
		&ClientConfigLoadingRules{ExplicitPath: kubeconfigPath},
		&ConfigOverrides{ClusterInfo: clientcmdapi.Cluster{Server: masterUrl}}).ClientConfig()
}

@exekias
Contributor

exekias commented Jun 4, 2020

That sounds good to me, good find! I would only expect further warnings when you are actually in a Kubernetes scenario; there they are still not ideal, but more reasonable.

4 participants