Unused threads and thread pools in kubernetes.watch.Watch._api_client #545

Closed

bgagnon opened this issue Jun 1, 2018 · 12 comments

Labels
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

bgagnon commented Jun 1, 2018

TL;DR: every Watch object consumes 16 threads for no apparent reason.

kubernetes.watch.Watch creates an ApiClient in its constructor:

    def __init__(self, return_type=None):
        self._raw_return_type = return_type
        self._stop = False
        self._api_client = client.ApiClient()
        self.resource_version = 0

This client is only used for deserialization, not for making API calls:

            js['object'] = self._api_client.deserialize(obj, return_type)

The problem is that ApiClient is a rather expensive object: it creates a new ThreadPool and a new RESTClientObject every time it is instantiated:

    def __init__(self, configuration=None, header_name=None, header_value=None, cookie=None):
        if configuration is None:
            configuration = Configuration()
        self.configuration = configuration

        self.pool = ThreadPool()
        self.rest_client = RESTClientObject(configuration)

The RESTClientObject is built from a default Configuration (none is passed in), which implies:

        self.connection_pool_maxsize = multiprocessing.cpu_count() * 5

and

class RESTClientObject(object):

    def __init__(self, configuration, pools_size=4, maxsize=None):
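
The extra threads are easy to observe, because the ThreadPool is spawned eagerly when ApiClient is constructed. A minimal sketch (not part of the original report, and only meaningful on affected client versions):

    import threading
    import kubernetes.watch

    before = threading.active_count()
    w = kubernetes.watch.Watch()   # constructs its own ApiClient() -> ThreadPool()
    after = threading.active_count()

    # On affected versions the count jumps by roughly cpu_count() worker
    # threads plus the pool's handler threads.
    print("threads before:", before, "after:", after)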

bgagnon (Author) commented Jun 1, 2018

A workaround that shuts down the extra threads right after the Watch object is constructed:

watch = kubernetes.watch.Watch()
watch._api_client.pool.close()                      # let the ApiClient's ThreadPool workers exit
watch._api_client.rest_client.pool_manager.clear()  # drop urllib3 connection pools held by the REST client
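
A related (untested) variant, building on the observation above that Watch only uses its _api_client for deserialization: reuse one ApiClient across many Watch objects so that only a single pool pair is ever created. This is a sketch that relies on private attributes, not a supported API:

    import kubernetes.client
    import kubernetes.watch

    shared_api_client = kubernetes.client.ApiClient()   # one pool for the whole process

    def make_watch():
        w = kubernetes.watch.Watch()
        # Drop the per-Watch pools created by the constructor...
        w._api_client.pool.close()
        w._api_client.rest_client.pool_manager.clear()
        # ...and reuse the shared client for deserialization.
        w._api_client = shared_api_client
        return w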

minrk commented Jun 13, 2018

This is a bug in the upstream swagger-codegen project, fixed by swagger-api/swagger-codegen#8061. Once that is merged, regenerating the client with the updated swagger-codegen should fix it.

@bidyut1990

Is there any update on this? I am also trying to use many watches simultaneously and it's hogging a lot of memory. Any advice on making a client (capable of setting up watches and digesting the events) really scalable in the most efficient manner?

tomplus (Member) commented Mar 25, 2019

@bidyut1990 which version are you using? It should be fixed in v9.0.0.

If you need to handle many watches in your application, you may be interested in the asynchronous fork of this library, kubernetes-asyncio.
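
For example, running several watches concurrently with kubernetes-asyncio might look roughly like the sketch below (not from the comment; details follow the kubernetes_asyncio package's documented usage):

    import asyncio
    from kubernetes_asyncio import client, config, watch

    async def watch_namespaces():
        # Each watch runs as a coroutine on the event loop; no per-watch thread pool.
        v1 = client.CoreV1Api()
        w = watch.Watch()
        async for event in w.stream(v1.list_namespace):
            print(event["type"], event["object"].metadata.name)

    async def main():
        await config.load_kube_config()
        # Many watches can share a single event loop.
        await asyncio.gather(watch_namespaces(), watch_namespaces())

    asyncio.run(main())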

@bidyut1990

@tomplus I am using 9.0.0, but I did not realize it had been fixed. Anyway, I'll also take a look at the async-based package.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 26, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 26, 2019

yliaog (Contributor) commented Aug 13, 2019

@bidyut1990 could you please verify if the fix works?

@roycaihw roycaihw removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 13, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 11, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 11, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
