introducing SkipMaxScaleCheck config #110

Merged · shlomi-noach merged 1 commit into master from skip-maxscale on Mar 23, 2017

Conversation

shlomi-noach (Collaborator) commented:

When SkipMaxScaleCheck is set to true (default is false), orchestrator skips any checks for maxscale servers.

People who never ever have any maxscale binlog server in their topology (not to be confused with the maxscale load balancer, which is completely unrelated to this PR) should set this variable to true. 99.9999999% of people don't have maxscale binlog servers in their setups.
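
For those setups, enabling the flag is a one-line addition to orchestrator's JSON configuration file. A minimal sketch (only the new key is shown; any other keys in your config stay as they are):

```json
{
  "SkipMaxScaleCheck": true
}
```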

The reason for this flag: when connecting to a server, orchestrator first checks whether it is a maxscale server, because maxscale is very picky about the queries it can respond to, so orchestrator must first determine which queries are valid to issue.

If a server is offline or down, what we get in our error logs is that the maxscale check failed. I don't have maxscale, most people don't, and this message actually bothers me.

But more importantly, it wastes a round trip. For remote servers with high latency, saving even a single round trip is a worthy change (and I will proceed to attempt to remove other round trips as well).
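
To make the saved round trip concrete, here is a minimal Go sketch of the probe flow. This is an illustration under stated assumptions: `discoverInstance`, `checkMaxScale`, and `Config` are hypothetical names, not orchestrator's actual identifiers, and the probe query shown is just one plausible way to detect a maxscale server.

```go
package discovery

import "database/sql"

// Config is a hypothetical stand-in for orchestrator's configuration;
// only the flag introduced by this PR is shown.
type Config struct {
	SkipMaxScaleCheck bool
}

// checkMaxScale is a hypothetical probe: one extra query whose only
// purpose is to find out whether the server is a maxscale binlog server.
func checkMaxScale(db *sql.DB) (bool, error) {
	var name, value string
	// A maxscale server answers with maxscale-specific variables; a plain
	// MySQL server returns an empty result set for this pattern.
	err := db.QueryRow(`show variables like 'maxscale%'`).Scan(&name, &value)
	if err == sql.ErrNoRows {
		return false, nil
	}
	return err == nil, err
}

// discoverInstance sketches the flow this PR changes: with
// SkipMaxScaleCheck enabled, the probe is skipped entirely, along with
// its round trip and the error-log noise it produces for dead servers.
func discoverInstance(db *sql.DB, cfg Config) error {
	isMaxScale := false
	if !cfg.SkipMaxScaleCheck {
		var err error
		if isMaxScale, err = checkMaxScale(db); err != nil {
			return err
		}
	}
	if isMaxScale {
		// restrict discovery to the query subset maxscale understands
		return nil
	}
	// ... proceed with regular MySQL discovery queries ...
	return nil
}
```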

This PR is backwards compatible.

cc @github/database-infrastructure @sjmudd

@shlomi-noach temporarily deployed to production on March 22, 2017.
@shlomi-noach merged commit 56ce0d2 into master on Mar 23, 2017.
@shlomi-noach deleted the skip-maxscale branch on March 23, 2017.