Feature: Get a long-lived client object #928
Comments
You can do multiple requests per client; the limit is set in the config object, like in botocore, via the max connections setting.
You should use one client per endpoint/config to keep your connections warm.
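As a minimal sketch of the point above, raising the pool limit when creating a client, assuming aiobotocore's AioConfig and botocore's max_pool_connections option (the limit, bucket and key here are illustrative):

from aiobotocore.config import AioConfig
from aiobotocore.session import AioSession


async def upload_with_bigger_pool(data: bytes):
    # one client = one connection pool; max_pool_connections caps how many
    # concurrent connections that pool keeps warm for reuse
    config = AioConfig(max_pool_connections=50)
    session = AioSession()
    async with session.create_client('s3', config=config) as client:
        await client.put_object(Bucket='my-bucket', Key='some/key', Body=data)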
@thehesiod -- Thanks! I could also move to a connection pool, I guess (or can I just increase max connections?). The issue I was raising here was that I had to use a private interface to get the client outside of a context manager.
huh, first time I've seen an async property, I guess it works, however it's kinda non-standard. You can increase max connections to whatever you please: https://github.com/boto/botocore/blob/develop/botocore/config.py#L57. You don't need to use a private interface; you should be storing the client with an AsyncExitStack:

from contextlib import AsyncExitStack

import aiobotocore.session


class Foo:
    def __init__(self):
        self._client = None
        self._exit_stack = AsyncExitStack()
        self._session = aiobotocore.session.AioSession()

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self._exit_stack.__aexit__(exc_type, exc_val, exc_tb)

    async def get_s3_client(self):
        # create the client lazily; the exit stack keeps its connection pool alive
        if self._client:
            return self._client
        self._client = await self._exit_stack.enter_async_context(
            self._session.create_client('s3'))
        return self._client
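A possible usage of the wrapper above, assuming Python 3.7+ for AsyncExitStack and asyncio.run (the bucket and keys are illustrative):

import asyncio


async def main():
    async with Foo() as foo:
        s3 = await foo.get_s3_client()
        # both calls reuse the same client and its warm connection pool
        await s3.put_object(Bucket='my-bucket', Key='a', Body=b'1')
        await s3.put_object(Bucket='my-bucket', Key='b', Body=b'2')


asyncio.run(main())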
Thanks! ... hmm... didn't know about AsyncExitStack.
whoops, fixed :)
your shutdown can just clean up the exit_stack; btw it does the right thing of cleaning things up in the reverse order they were added. Closing this as I think this is out of scope for this project.
Hello @thehesiod, I don't understand why we're forced to use a context manager to make S3 calls, thus recreating an S3 client each time (it does need to re-establish the connection, am I right?).

# aiohttp
session = ClientSession()

# boto
client = boto3.Session().client("s3")
because it's more correct; otherwise you're relying on the GC to clean up the connections. Creating a client doesn't automatically create connections, it creates a connection pool. One client can have multiple connections, one per request. Also, this is not forcing you to use a context manager for individual calls, it's just one per client. You should hold on to the client for as long as you need it, not create one per call.
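A minimal sketch of that point, several concurrent requests sharing one client's connection pool (it assumes an already-created client such as the one from the earlier snippet; the bucket and keys are illustrative):

import asyncio


async def fetch_many(s3_client, keys):
    # every call below reuses s3_client's connection pool;
    # no new client is created per request
    async def fetch(key):
        resp = await s3_client.get_object(Bucket='my-bucket', Key=key)
        async with resp['Body'] as stream:
            return await stream.read()

    return await asyncio.gather(*(fetch(k) for k in keys))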
So in the case of a REST API, for example, for every request I receive and need to process, do I have to recreate the context, and thus the S3 client, in order to make S3 calls (paginate, get, post, etc.)?
your server should maintain a long-lived client to avoid all the creation overhead. Here's an example:

from contextlib import AsyncExitStack

import aiohttp.web
import aiobotocore.session


class Server:
    def __init__(self):
        self._app = aiohttp.web.Application()
        self._app.router.add_get('/', self.root_handler)
        self._exit_stack = AsyncExitStack()
        self._s3_client = None
        self._app.on_startup.append(self._startup)
        self._app.on_cleanup.append(self._cleanup)

    async def _startup(self, app):
        # create the client once and keep it for the lifetime of the app
        session = aiobotocore.session.AioSession()
        self._s3_client = await self._exit_stack.enter_async_context(
            session.create_client('s3'))

    async def _cleanup(self, app):
        await self._exit_stack.__aexit__(None, None, None)

    def start(self):
        aiohttp.web.run_app(self._app, handle_signals=False)

    async def root_handler(self, req: aiohttp.web.Request):
        return aiohttp.web.json_response({})
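As a follow-up to the server example, a sketch of an extra handler method that actually uses the long-lived client (list_handler, the route and the bucket are illustrative, not from the thread):

    async def list_handler(self, req: aiohttp.web.Request):
        # reuses the client created in _startup; no per-request client setup
        resp = await self._s3_client.list_objects_v2(Bucket='my-bucket')
        keys = [obj['Key'] for obj in resp.get('Contents', [])]
        return aiohttp.web.json_response({'keys': keys})

It would be registered in __init__ alongside root_handler, e.g. self._app.router.add_get('/list', self.list_handler).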
Thanks for the example, although IMO it looks a bit complicated for a feature that every other http/s3/whatever client handles by just creating a session object. Plus, I'm stuck with python3.6, and AsyncExitStack is only available in 3.7+. As a workaround, I wrapped all my S3 calls in a custom wrapper.
you don't need to use AsyncExitStack, btw; you can replace it with:

    async def _startup(self, app):
        session = aiobotocore.session.AioSession()
        self._s3_client_ctx = session.create_client('s3')
        self._s3_client = await self._s3_client_ctx.__aenter__()

    async def _cleanup(self, app):
        # __aexit__ is a coroutine, so it has to be awaited
        await self._s3_client_ctx.__aexit__(None, None, None)
Thanks a lot!
@thehesiod last thing, it still doesn't work because you use an import that requires me to install an external dependency.
oh I just merged a PR that added that dependency, hmm
fix coming in #935
Thank you very much!
binary should be out shortly
@thehesiod You forgot the fix in the latest release.
ya I'm positive I had added it, so weird, I wonder what happened, will push new release shortly
this also raises the question of how the tests passed, hmm
aha, was totally broken, fix coming in #936
@NicoAdrian wow, the tests were never actually running on 3.6, contrary to the GH Actions setting; they'd been broken for a while :( Fixed and deployed. Let me know how it goes. We'll now be able to catch that kind of issue in the future.
(painful) btw %)
@thehesiod lol :-) thanks for the fixes
I have a singleton object that encapsulates S3 uploads/downloads as well as marshal/unmarshal for a particular type of object & destination. I would like to have it encapsulate a long-lived client object. The best I can do so far (given that it creates self.session in __init__) is to expose the client through an async property that reaches into a private interface; I call shutdown in the global cleanup from a signal handler.
Is there anything wrong with using a client in this way? It seems that getting a client per request, when there could be many requests, could be wasteful, as it (may?) do a network operation on creation (or why do we wait for it?).
If this is kosher, could there be a better interface for it?
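A rough sketch of the pattern described above, in case it helps make the question concrete. It assumes an async property that lazily enters the client's context manager via __aenter__/__aexit__ (rather than whatever private interface was originally used); names like S3Store and the property name client are illustrative, not taken from the original code:

import aiobotocore.session


class S3Store:
    # hypothetical singleton wrapping S3 uploads/downloads

    def __init__(self):
        self.session = aiobotocore.session.AioSession()
        self._client = None
        self._client_ctx = None

    @property
    async def client(self):
        # lazily enter the client's context manager and keep the client
        # alive until shutdown() is called from the global cleanup handler
        if self._client is None:
            self._client_ctx = self.session.create_client('s3')
            self._client = await self._client_ctx.__aenter__()
        return self._client

    async def shutdown(self):
        if self._client_ctx is not None:
            await self._client_ctx.__aexit__(None, None, None)
            self._client = None
            self._client_ctx = None

Usage would then look like s3 = await store.client, which is the "async property" referred to earlier in the thread.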