
clientv3/integration: add TestBalancerUnderServerShutdownWatch #8758

Merged Oct 26, 2017 (1 commit)

Conversation

@gyuho (Contributor) commented Oct 25, 2017

Current Watch integration tests haven't covered the balancer
switch behavior under server failures.

#8711

@xiang90 (Contributor) commented Oct 25, 2017

/cc @jpbetz can you give this a review?


// TestBalancerUnderServerFailureWatch expects that the watch client
// switches its endpoints when the member of the pinned endpoint fails.
func TestBalancerUnderServerFailureWatch(t *testing.T) {
Contributor: UnderServerShutdown?

Contributor: failure is too generic: network partition, server shutdown, etc.

lead := clus.WaitLeader(t)

// close unnecessary clients
clus.Client(lead).Close()
Contributor: just start with no client. create one client to do put.
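A minimal sketch of that setup, assuming the integration test harness offers an option to skip creating the default per-member clients (the SkipCreatingClient field name is an assumption):

clus := integration.NewClusterV3(t, &integration.ClusterConfig{
	Size:               3,
	SkipCreatingClient: true, // assumed option: start members without default clients
})
defer clus.Terminate(t)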

@gyuho changed the title clientv3/integration: add TestBalancerUnderServerFailureWatch → clientv3/integration: add TestBalancerUnderServerShutdownWatch (Oct 25, 2017)
cli.SetEndpoints(eps...)

wch := cli.Watch(context.Background(), "foo", clientv3.WithCreatedNotify())
<-wch
Contributor: select on this. we need to have a timeout.
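A minimal sketch of the bounded wait being asked for, assuming the test's wch and t are in scope (the 5-second bound is illustrative):

// Bound the wait for the created-notify event instead of blocking forever.
select {
case <-wch:
case <-time.After(5 * time.Second): // illustrative bound for watch creation
	t.Fatal("took too long to receive watch created notification")
}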

defer cli.Close()
cli.SetEndpoints(eps...)

wch := cli.Watch(context.Background(), "foo", clientv3.WithCreatedNotify())
Contributor: make both watch key and value consts.
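A sketch of that change; the const names are hypothetical:

const (
	watchKey = "foo" // shared by the watch and the put below
	watchVal = "bar"
)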

// switch to others when eps[lead] fails
for ev := range wch {
	received := false
	for _, e := range ev.Events {
Contributor: why do we need this loop when we only put one key?
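A sketch of the simplification being hinted at: with a single key written, the response can be checked directly (watchKey here is the hypothetical const from above):

// Only one key is put, so the first event can be checked directly.
resp := <-wch
if len(resp.Events) != 1 || string(resp.Events[0].Kv.Key) != watchKey {
	t.Fatalf("expected one event for %q, got %+v", watchKey, resp.Events)
}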


// writes to eps[lead+1]
for {
	_, err = clus.Client((lead+1)%3).Put(context.Background(), "foo", "bar")
Contributor: give a timeout here instead of using context.Background.
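A sketch of a per-request timeout for each Put attempt (the 5-second bound is illustrative):

// Bound each Put attempt instead of letting it block indefinitely.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
_, err = clus.Client((lead+1)%3).Put(ctx, "foo", "bar")
cancel()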


select {
case <-donec:
case <-time.After(5 * time.Second): // enough time for leader election, balancer switch
Contributor: how long is the election timeout?

Contributor (author): 10ms for integration package.

Contributor: i mean the election timeout for the etcdserver. i want to understand why we wait for 5 seconds here, not 10s or 20s.

	t.Fatal(err)
}
defer cli1.Close()
cli1.SetEndpoints(eps...) // add all eps
Contributor: // add all eps to list, so that when the original pinned one fails the client can switch to other available eps.

lead := clus.WaitLeader(t)

// pin eps[lead]
cli1, err := clientv3.New(clientv3.Config{Endpoints: []string{eps[lead]}})
Contributor: rename cli1 to watchCli.

clus.Members[lead].Terminate(t)

// writes to eps[lead+1]
cli2, err := clientv3.New(clientv3.Config{Endpoints: []string{eps[(lead+1)%3]}})
Contributor: rename cli2 to putCli.

@xiang90 (Contributor) commented Oct 25, 2017

lgtm after resolving all the comments.

leave to @jpbetz for final approval.

// writes to eps[lead+1]
cli2, err := clientv3.New(clientv3.Config{Endpoints: []string{eps[(lead+1)%3]}})
if err != nil {
	t.Fatal(err)
Contributor: add context to the error.
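One way to add that context, as a sketch (the message wording is illustrative):

if err != nil {
	t.Fatalf("failed to create client for endpoint %q: %v", eps[(lead+1)%3], err)
}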

if err == rpctypes.ErrTimeout || err == rpctypes.ErrTimeoutDueToLeaderFail {
	continue
}
t.Fatal(err)
Contributor: add context to the error. only logging the error msg itself is not super helpful.
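A sketch of the same idea at the retry site, naming the operation and endpoint in the failure message (wording illustrative):

if err == rpctypes.ErrTimeout || err == rpctypes.ErrTimeoutDueToLeaderFail {
	continue // retriable while leadership recovers
}
t.Fatalf("put to %q failed: %v", eps[(lead+1)%3], err)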

Current Watch integration tests haven't covered the balancer
switch behavior under server failures.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
@jpbetz (Contributor) commented Oct 25, 2017

/lgtm

Test logic looks correct. I don't have any concerns. Thanks!

@xiang90 (Contributor) commented Oct 26, 2017

The SemaphoreCI failure is unrelated (gRPC issue again). So merging.

@xiang90 merged commit 995d79a into etcd-io:master on Oct 26, 2017
@gyuho deleted the failure-test branch on October 26, 2017 at 00:20