avoid get blocked in dumping when mysql connection is broken #190
Conversation
Do we need an extra MySQL connection?
v4/export/config.go
Outdated
dsn := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8mb4", conf.User, conf.Password, conf.Host, conf.Port, db)
// maxAllowedPacket=0 can be used to automatically fetch the max_allowed_packet variable from server on every connection.
// https://github.com/go-sql-driver/mysql#maxallowedpacket
dsn := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8mb4&readTimeout=30s&writeTimeout=30s&interpolateParams=true&maxAllowedPacket=0",
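For illustration, here is a minimal, self-contained sketch of how a DSN in this format can be assembled. The helper name `buildDSN` is hypothetical (dumpling builds the string inline in `config.go`); the query parameters `readTimeout`, `writeTimeout`, `interpolateParams`, and `maxAllowedPacket` are real go-sql-driver/mysql options.

```go
package main

import "fmt"

// buildDSN assembles a go-sql-driver/mysql DSN like the one in the diff
// above. readTimeout is a duration string such as "30s" or "15m".
// (Hypothetical helper; the actual code formats the DSN inline.)
func buildDSN(user, password, host string, port int, db, readTimeout string) string {
	return fmt.Sprintf(
		"%s:%s@tcp(%s:%d)/%s?charset=utf8mb4&readTimeout=%s&writeTimeout=30s&interpolateParams=true&maxAllowedPacket=0",
		user, password, host, port, db, readTimeout)
}

func main() {
	fmt.Println(buildDSN("root", "secret", "127.0.0.1", 3306, "test", "15m"))
}
```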
Is there any chance of MySQL taking longer than 30 seconds to start sending data when the chunk size is large? Probably not, right? Since you query by primary key?
Good point! How long do you think is appropriate? Or, are there any more suggestions?
How about keeping it at 30s and making it configurable?
I was genuinely curious whether it could take over 30 seconds to start sending data when querying directly by primary key. Since it's the time to first byte, that seems highly unlikely, so 30 seconds seems fine. I'm not sure, though. Configurable sounds safest.
I have raised this variable to 15m and made it configurable.
LGTM
…#190) * add extra conn to avoid get blocked at concurrently dumping * add readTimeout/writeTimeout to avoid get blocked at fetching data from database server * avoid wasting at finishing metadata when dumping is already failed * add read-timeout parameter and mark it hidden
What problem does this PR solve?
Fix #181
dumpling may get blocked while fetching data from the database server when the connection to the server is closed for some reason. Here is the goroutine information:
What is changed and how it works?
- Add readTimeout in the dsn. The default value is 15m, and a hidden configuration read-timeout is added.
- Add extraConn in connectionPool to avoid recreating connections while writing metadata, etc.
- When dumping has already failed, quit directly instead of calling the readUntilEOF function to read the remaining buffer, which wastes a lot of time. For the AWS client, the retry logic is in their own functions, so we needn't wait for this.

Check List
Tests
Try to kill the MySQL socket connection while dumping. dumpling on the master branch may get blocked, but dumpling built from this branch won't.
Related changes
Release note