-
Moving a question I had on Discord to here, @cofin. Some version information:

I did some load testing for my Litestar application (2 uvicorn workers) and realized that I hit the PostgreSQL connection limit relatively easily. That limit is 100 by default, but what confuses me is that the engine's default settings should not even allow 50 connections in the first place. I tested multiple configurations; the one below is the one I ended up with.

For load testing I use Locust with 10 users and a ramp-up of 0.5 users/s. The test runs for 1 minute and is simple: log in, then query a single endpoint. What I would expect is a quick ramp-up to 10-20 sessions, 10-20 sessions for the duration of the test, and at the end a return to 10 idle sessions (2x pool size). I appended a snippet from pgAdmin below: instead, the idle sessions ramp up to 61 and stay there after the test. Before I simplified the Locust test, the session behavior was a bit more "oscillating", e.g. 22, 54, 35, 54, 44, 80, 42, 96, 45, 55, 48 (see second screenshot). Unfortunately I cannot recreate the problem with a simple example.

```python
engine_config = EngineConfig(
    pool_pre_ping=True,
    pool_size=5,
    # pool_recycle=3600,
    max_overflow=5,
    pool_use_lifo=True,
    connect_args={
        "application_name": f"LITESTAR-{os.getpid()}",
    },
)
session_config = AsyncSessionConfig(expire_on_commit=False)
sqlalchemy_config = SQLAlchemyAsyncConfig(
    connection_string=conn or get_connection_string(),
    session_config=session_config,
    engine_config=engine_config,
)

# Create the 'async_session' dependency.
# This will take care of disposing the async engine at the end of the app lifespan.
sqlalchemy_init_plugin = SQLAlchemyInitPlugin(config=sqlalchemy_config)
```
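For reference on what I expected: with these settings a single engine should be capped at pool_size + max_overflow = 10 connections, so two uvicorn workers should never exceed roughly 20. A minimal standalone sketch (plain SQLAlchemy outside of Litestar, placeholder DSN) of how I reason about the cap and inspect the pool state:

```python
from sqlalchemy.ext.asyncio import create_async_engine

# Same pool settings as above, applied to a bare engine (placeholder DSN).
# One engine should never hold more than pool_size + max_overflow = 10 connections.
engine = create_async_engine(
    "postgresql+asyncpg://user:pass@localhost/db",
    pool_pre_ping=True,
    pool_size=5,
    max_overflow=5,
    pool_use_lifo=True,
)

# The underlying queue pool reports its size and checked-out connections.
print(engine.pool.status())
```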
-
Thought some more on this and maybe there is something else that I'm using wrong. I use server-side sessions for user handling and authentication. My user handler looks like this:

```python
async def session_maker(connection: ASGIConnection) -> AsyncSession:
    """Fetch a session for the current connection."""
    plugin: SQLAlchemyInitPlugin = connection.app.plugins.get(SQLAlchemyInitPlugin)
    return plugin._config.create_session_maker()()

# ...

async def retrieve_user_handler(session: dict[str, Any], connection: ASGIConnection) -> User:
    db_session: AsyncSession = await session_maker(connection)
    async with db_session.begin():
        ...
```

I took this approach since I couldn't find any other easy way to fetch a session from the application scope. Now I'm wondering if this could create multiple engines under the hood.
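As a standalone sanity check (plain SQLAlchemy, not the plugin code, placeholder DSN): two engines created with identical settings keep two independent pools, so the per-engine limits stop bounding the total number of server connections.

```python
from sqlalchemy.ext.asyncio import create_async_engine

url = "postgresql+asyncpg://user:pass@localhost/db"  # placeholder DSN

# Each create_async_engine() call builds its own connection pool, so if a new
# engine is created per request the effective ceiling is no longer
# pool_size + max_overflow, but N_engines * (pool_size + max_overflow).
engine_a = create_async_engine(url, pool_size=5, max_overflow=5)
engine_b = create_async_engine(url, pool_size=5, max_overflow=5)

assert engine_a.pool is not engine_b.pool
```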
And that is exactly the issue: instead of fetching the session maker the plugin already created, I implicitly create a new engine per session maker. The fix is to reuse the session maker the plugin registered at startup, along the lines of the sketch below.
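A sketch of the corrected helper. The exact lookup is an assumption on my part (`sqlalchemy_config` is the config from the first post, and the `session_maker_app_state_key` attribute plus the `app.state` storage may differ between Litestar/plugin versions); the point is to reuse the one session maker created at startup instead of calling `create_session_maker()` per request.

```python
from litestar.connection import ASGIConnection
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker


async def session_maker(connection: ASGIConnection) -> AsyncSession:
    """Fetch a session bound to the engine created once at app startup."""
    # Reuse the sessionmaker the init plugin stored in the application state
    # instead of building a fresh one (and with it a fresh engine and pool).
    # NOTE: the state key below is an assumption; check your plugin version.
    maker: async_sessionmaker[AsyncSession] = connection.app.state[
        sqlalchemy_config.session_maker_app_state_key
    ]
    return maker()
```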
With this, I don't even exhaust the base pool of connections.