
Inefficient use of connection pools that leverage a Connector #416

Open
jackwotherspoon opened this issue Jun 18, 2024 · 0 comments

The Cloud SQL and AlloyDB datastore providers could improve their use of connection pools, leading to more efficient quota and resource consumption.

# Imports shown for context (Cloud SQL Python Connector, pgvector, SQLAlchemy async)
import asyncpg
from google.cloud.sql.connector import Connector
from pgvector.asyncpg import register_vector
from sqlalchemy.ext.asyncio import create_async_engine

async def getconn() -> asyncpg.Connection:
    # A new Connector is created for every connection the pool opens
    async with Connector(loop=loop) as connector:
        conn: asyncpg.Connection = await connector.connect_async(
            # Cloud SQL instance connection name
            f"{config.project}:{config.region}:{config.instance}",
            "asyncpg",
            user=f"{config.user}",
            password=f"{config.password}",
            db=f"{config.database}",
        )
        await register_vector(conn)
        return conn

pool = create_async_engine(
    "postgresql+asyncpg://",
    async_creator=getconn,
)

The creator argument (and likewise async_creator) for SQLAlchemy is called for each new database connection. This means that initializing a Cloud SQL or AlloyDB Connector object inside getconn creates a new Connector for every connection in the pool instead of sharing a single Connector across all connections.

Under heavy usage this would exhaust Cloud SQL or AlloyDB quotas and cause applications to begin erroring.

A more efficient approach is to create a single Connector outside of getconn so that it can be shared across all pooled connections.
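
A minimal sketch of that pattern, reusing the same imports, config, loop, and pgvector registration as the snippet above (the commented-out close_async() call is an assumption about where the application's shutdown hook would live):

# Create the Connector once; every pooled connection reuses it
connector = Connector(loop=loop)

async def getconn() -> asyncpg.Connection:
    conn: asyncpg.Connection = await connector.connect_async(
        # Cloud SQL instance connection name
        f"{config.project}:{config.region}:{config.instance}",
        "asyncpg",
        user=f"{config.user}",
        password=f"{config.password}",
        db=f"{config.database}",
    )
    await register_vector(conn)
    return conn

pool = create_async_engine(
    "postgresql+asyncpg://",
    async_creator=getconn,
)

# On application shutdown, close the shared Connector to release its resources:
# await connector.close_async()

The pool still calls getconn for every new connection, but all of those calls now go through the one shared Connector instance, so its client and background refresh resources are created only once.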

@Yuan325 self-assigned this Jun 18, 2024