
Same ClientSet and multiple concurrent requests #564

Closed
pboiseau opened this issue Feb 22, 2019 · 2 comments


pboiseau commented Feb 22, 2019

Hello,

I have a performance issue when using the same Kubernetes Clientset for multiple concurrent requests.

Is it required to create a new Clientset for every request we want to make to the Kubernetes API?

Case 1

A single Kubernetes client shared by all requests

config, err := clientcmd.BuildConfigFromFlags("", k8s.KubeConfigDefaultPath())
if err != nil {
	panic(err)
}

clientSet, err := kubernetes.NewForConfig(config)
if err != nil {
	panic(err)
}

h.engine.GET("/test", func(context *gin.Context) {
	n, _ := clientSet.CoreV1().Namespaces().Get("pbo", metav1.GetOptions{})

	context.JSON(http.StatusOK, n)
})

Performance results

[GIN-debug] Listening and serving HTTP on :8080
[GIN] 2019/02/22 - 16:10:02 | 200 |   58.440892ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:02 | 200 |   58.881891ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:02 | 200 |   59.497636ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:02 | 200 |    59.56138ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:02 | 200 |   59.518548ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:02 | 200 |    60.31543ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:02 | 200 |   60.800629ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:02 | 200 |   61.954711ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:02 | 200 |   63.625243ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:02 | 200 |   67.860491ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:02 | 200 |  206.455916ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:03 | 200 |  408.115984ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:03 | 200 |  607.690261ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:03 | 200 |  807.993448ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:03 | 200 |  1.007878327s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:03 | 200 |  1.208085531s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:04 | 200 |   1.40728416s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:04 | 200 |  1.608727369s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:04 | 200 |  1.807163054s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:04 | 200 |  2.020470871s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:04 | 200 |   2.21428725s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:05 | 200 |  2.407558019s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:05 | 200 |  2.607879629s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:05 | 200 |  2.808593723s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:05 | 200 |  3.047961266s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:05 | 200 |  3.207485163s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:06 | 200 |  3.407720794s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:06 | 200 |  3.606962978s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:06 | 200 |  3.806729891s |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:10:06 | 200 |  4.008047368s |       127.0.0.1 | GET      /test

As you can see, the response time keeps increasing.

Case 2

A new Kubernetes client created for every request

config, err := clientcmd.BuildConfigFromFlags("", k8s.KubeConfigDefaultPath())
if err != nil {
	panic(err)
}

h.engine.GET("/test", func(context *gin.Context) {
	clientSet, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	n, _ := clientSet.CoreV1().Namespaces().Get("pbo", metav1.GetOptions{})

	context.JSON(http.StatusOK, n)
})

Performance results

[GIN-debug] Listening and serving HTTP on :8080
[GIN] 2019/02/22 - 16:15:08 | 200 |   72.396467ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   70.278917ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   70.280447ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   70.540351ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   74.080429ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   64.624353ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   64.625551ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   65.463379ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   64.596104ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   63.223988ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   64.173776ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   71.004154ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   60.494451ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |    64.89178ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   58.537382ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   60.858438ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   65.097315ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   65.678975ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   77.676868ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   75.129987ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   66.895068ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   57.573911ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   62.804671ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   63.060401ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   60.212228ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |    63.97812ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |    62.08053ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   64.246716ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   67.220965ms |       127.0.0.1 | GET      /test
[GIN] 2019/02/22 - 16:15:08 | 200 |   70.261863ms |       127.0.0.1 | GET      /test

Package versions

[[constraint]]
  branch = "release-1.10"
  name = "k8s.io/api"

[[constraint]]
  branch = "release-1.10"
  name = "k8s.io/apimachinery"

[[constraint]]
  name = "k8s.io/client-go"
  version = "7.0.0"

Thanks


liggitt commented Feb 22, 2019

In the client config struct returned by BuildConfigFromFlags, there are throttling configuration options (see the sketch after this list):

QPS: allowed queries per second in steady state. Defaults to 5, but you can set it higher if desired.
Burst: allowed burst. Defaults to 10, but you can set it higher if desired.
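
For reference, raising those limits before building the clientset might look roughly like this (a minimal sketch based on the issue's Case 1 code; the values 100 and 200 are arbitrary placeholders, not recommendations):

config, err := clientcmd.BuildConfigFromFlags("", k8s.KubeConfigDefaultPath())
if err != nil {
	panic(err)
}

// Raise the client-side throttle before creating the clientset.
config.QPS = 100   // steady-state queries per second (default 5)
config.Burst = 200 // burst allowance (default 10)

clientSet, err := kubernetes.NewForConfig(config)
if err != nil {
	panic(err)
}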


guettli commented Aug 30, 2023

Thank you very much for this hint.

I was really confused why 80 requests to the API server took 14 seconds.

The math is easy: the first 10 requests are covered by the burst, and the remaining 70 at 5 per second take 14 seconds.
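
That behaviour can be reproduced in isolation with client-go's token-bucket limiter, the mechanism behind QPS and Burst (a minimal, standalone sketch using the default values):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// client-go's default client-side throttle: 5 QPS with a burst of 10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	start := time.Now()
	for i := 0; i < 80; i++ {
		limiter.Accept() // blocks until a token is available
	}

	// The first 10 calls consume the burst; the remaining 70 trickle in at
	// 5 per second, so this prints roughly 14s.
	fmt.Println(time.Since(start))
}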

After adjusting the values it takes only a few milliseconds.

Here are the docs about Config.QPS
