
Adjust db.SetMaxIdleConns for backend orchestrator to a more dynamic value #210

Merged

Conversation

@sjmudd (Collaborator) commented Jun 13, 2017

Recent issues seen when talking to the backend can produce a "cannot assign requested address" error: the number of open TCP sockets to the backend went over the limit. This was triggered by the hard-coded value of db.SetMaxIdleConns(10), intended to prevent too many backend database connections being left open, which caused the Go driver to close connections and shortly afterwards open them again, triggering the issue described.

This patch adjusts the value of SetMaxIdleConns to 25% of MySQLOrchestratorMaxPoolConnections when that value is larger, which triggers fewer reconnections and helps alleviate this point of stress.

I chose not to make the setting configurable (yet) as this may confuse people. If you make MySQLOrchestratorMaxPoolConnections higher, the number of idle connections is allowed to grow.
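
For illustration, a minimal sketch of the logic described above — the helper function, its wiring, and the DSN are hypothetical; only the "25% of the pool, with the previous floor of 10" rule comes from this patch:

```go
package main

import (
	"database/sql"

	_ "github.com/go-sql-driver/mysql"
)

// openBackendDB is a hypothetical helper sketching the idle-connection
// logic described above: keep the previous floor of 10 idle connections,
// but allow up to 25% of MySQLOrchestratorMaxPoolConnections to remain
// idle when that yields a larger value.
func openBackendDB(dsn string, maxPoolConnections int) (*sql.DB, error) {
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(maxPoolConnections)

	// Previously hard-coded: db.SetMaxIdleConns(10). Scaling with the
	// pool size means spiky traffic churns through fewer close/reopen
	// cycles against the backend.
	maxIdleConns := maxPoolConnections / 4
	if maxIdleConns < 10 {
		maxIdleConns = 10
	}
	db.SetMaxIdleConns(maxIdleConns)
	return db, nil
}

func main() {
	// sql.Open does not dial, so this runs without a live server.
	db, err := openBackendDB("orchestrator:secret@tcp(127.0.0.1:3306)/orchestrator", 1000)
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```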

  • contributed code uses the same conventions as the original code
  • code is formatted via gofmt (please avoid goimports)
  • code is built via ./build.sh
  • code is tested via go test ./go/...

@shlomi-noach (Collaborator) commented Jun 13, 2017

I would go further and hard-code:

- MySQLOrchestratorMaxPoolConnections to 3
- SetMaxIdleConns to 3

This will match the changes in 9e1cd18.

@shlomi-noach (Collaborator)

I must have been sleep deprived writing my previous comment. Ignore :)

@sjmudd (Collaborator, Author) commented Jun 13, 2017

Oh, good (not that you were sleep-deprived, but that you weren't being serious ...).
For context, my current setting is:

"MySQLOrchestratorMaxPoolConnections": 1000

And with spiky backend traffic the Go driver was closing and re-opening backend connections too frequently. The proposed change tries to remedy that.
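
(With the 25% rule from this patch, that setting allows up to 1000 / 4 = 250 idle connections, rather than the previous hard-coded 10.)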

@shlomi-noach (Collaborator) commented Jun 14, 2017

How about allowing SetMaxIdleConns(config.Config.MySQLOrchestratorMaxPoolConnections)? If we're happy to allocate MySQLOrchestratorMaxPoolConnections connections, they may as well sit idle in the Go pool, with MySQL itself potentially closing them on wait_timeout (or not, depending on your config).
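
For concreteness, the suggestion amounts to this one-liner (db and config.Config being orchestrator's existing backend handle and global configuration, as referenced above):

```go
// Sketch of the suggestion: allow every pooled connection to remain idle,
// letting MySQL's wait_timeout (or the pool's own cap) reap them instead
// of the driver churning through close/reopen cycles.
db.SetMaxIdleConns(config.Config.MySQLOrchestratorMaxPoolConnections)
```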

@sjmudd (Collaborator, Author) commented Jun 26, 2017

Hi Shlomi,

Basically, seeing hundreds or thousands of idle connections that normally shouldn't be there is not ideal.

If you don't set this setting at all, you'll get the same effect.

I'd rather the number of connections used be as small as is reasonable. Every time I've seen thousands of connections, it has meant paying more attention to memory consumption, and a batch of bad queries all sent through at once can take you down.

MySQLOrchestratorMaxPoolConnections is there to provide a limit, but I'd prefer to stay away from that limit if possible.

@shlomi-noach shlomi-noach merged commit 685e502 into openark:master Jun 26, 2017
@sjmudd sjmudd deleted the more_dynamic_SetMaxIdleConns_for_backend branch December 25, 2021 20:17