
clientv3/integration: add TestBalancerUnderServerShutdownImmutable #8779

Merged: 1 commit merged into etcd-io:master from the shutdown-test branch on Oct 27, 2017

Conversation

@gyuho (Contributor) commented Oct 27, 2017

@gyuho gyuho changed the title clientv3/integration: add testBalancerUnderServerShutdownImmutable clientv3/integration: add TestBalancerUnderServerShutdownImmutable Oct 27, 2017
donec := make(chan struct{})
go func() {
	defer close(donec)
	for i := 0; i < 3; i++ {
Contributor

Why retry 3 times? And why do this in a goroutine, instead of trying the Get after the termination?

Contributor Author

Would it be better in a goroutine if we want to test the range-request retry logic? Also, not every range request overlaps with the node shutting down, so it retries 3 times.

Or we can send the range request right after the terminate, without the sleep?

Contributor

I think the goal here is simple: test whether the balancer switches endpoints when a machine is shut down. We should not mix retries and balancer switching together unless needed.

Contributor Author

Got it. I will simplify the tests.
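A minimal sketch of the simplified shape under discussion: a single Get in a goroutine, no retry loop, bounded by the outer select. It reuses cli, clus, t, and the 7-second budget from elsewhere in this PR; shutting the pinned member down via Terminate is an assumption, not the committed code.

// Shut down the pinned member, then issue one Get; the balancer must
// switch to a healthy endpoint for the Get to succeed in time.
clus.Members[0].Terminate(t)

donec := make(chan struct{})
go func() {
	defer close(donec)
	cctx, ccancel := context.WithTimeout(context.Background(), 7*time.Second)
	defer ccancel()
	if _, err := cli.Get(cctx, "foo"); err != nil {
		t.Errorf("Get after member shutdown failed: %v", err) // Errorf: safe to call from a goroutine
	}
}()

select {
case <-donec:
case <-time.After(14 * time.Second): // 2x the per-request budget
	t.Fatal("took too long for the balancer to switch endpoints")
}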


select {
case <-donec:
case <-time.After(2 * timeout):
Contributor

This timeout is smaller than the one at line 187, and line 187 can be triggered 3 times, so the max total retry time is 21s.
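Worked out with the 7-second per-attempt budget used elsewhere in this PR: three attempts can take up to 3 × 7s = 21s, so a 2 × timeout (14s) wait can fire first. One way to size the outer wait, using hypothetical attempts/perTryTimeout names, would be:

const (
	attempts      = 3
	perTryTimeout = 7 * time.Second
)

select {
case <-donec:
case <-time.After(attempts*perTryTimeout + time.Second):
	// Covers the worst case of every attempt timing out, plus a second of slack.
	t.Fatal("request did not finish within the retry budget")
}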

@gyuho force-pushed the shutdown-test branch 2 times, most recently from 73d6900 to 7100a26 on October 27, 2017 17:26
@xiang90 (Contributor) commented Oct 27, 2017

/cc @jpbetz

// pin eps[0]
cli, err := clientv3.New(clientv3.Config{Endpoints: []string{eps[0]}})
if err != nil {
	t.Fatal(err)
Contributor

Add context to the error: failed to create client: %v, err.
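In other words, something like the following (a sketch of what the comment asks for, not the exact committed change):

cli, err := clientv3.New(clientv3.Config{Endpoints: []string{eps[0]}})
if err != nil {
	t.Fatalf("failed to create client: %v", err)
}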

err = op(cli, cctx)
ccancel()
if err != nil {
	t.Fatal(err)
Contributor

Add context to the error. Also, this should be t.Errorf.
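For example (a sketch; the exact message text is an assumption):

err = op(cli, cctx)
ccancel()
if err != nil {
	// t.Errorf marks the test as failed but lets it keep running, unlike t.Fatal.
	t.Errorf("operation failed: %v", err)
}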

defer cli.Close()

// wait for eps[0] to be pinned
waitPinReady(t, cli)
Contributor

Does waitPinReady return an error?

Contributor Author

It's fatal inside waitPinReady

Contributor

Probably should rename it to mustWaitPinReady later.
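Following the Go convention that must*-prefixed test helpers fail the test themselves, the rename might look like the sketch below. Only the fact that the helper calls t.Fatal internally is stated in this thread, so the Get-based probe and the 10-second timeout are assumptions:

import (
	"context"
	"testing"
	"time"

	"github.com/coreos/etcd/clientv3"
)

// mustWaitPinReady blocks until the client's pinned endpoint can serve a
// request, failing the test if it cannot do so before the timeout.
func mustWaitPinReady(t *testing.T, cli *clientv3.Client) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if _, err := cli.Get(ctx, "foo"); err != nil {
		t.Fatalf("waiting for pinned endpoint: %v", err)
	}
}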

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
@xiang90 (Contributor) commented Oct 27, 2017

lgtm if CI passes

testBalancerUnderServerShutdownImmutable(t, func(cli *clientv3.Client, ctx context.Context) error {
	_, err := cli.Get(ctx, "foo")
	return err
}, 7*time.Second) // give enough time for leader election, balancer switch
Contributor

As @jpbetz mentioned in another PR, we should define the leader election timeout as a const somewhere, and the normal waiting timeout too. Then this could be rewritten to electionWait + retryWait.
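A sketch of the suggested constants; the names electionWait and retryWait come from this comment, and the concrete values are assumptions chosen to add up to the original 7 seconds:

const (
	electionWait = 5 * time.Second // budget for a new leader election
	retryWait    = 2 * time.Second // budget for the balancer to switch endpoints
)

testBalancerUnderServerShutdownImmutable(t, func(cli *clientv3.Client, ctx context.Context) error {
	_, err := cli.Get(ctx, "foo")
	return err
}, electionWait+retryWait)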

@xiang90 merged commit 2cea13b into etcd-io:master on Oct 27, 2017
@gyuho deleted the shutdown-test branch on October 27, 2017 19:24