Flaky: TestAllocator #789
Conversation
Build Failed 😱 Build Id: 22b8f54e-427b-4043-a686-bec0990bdd57. To get permission to view the Cloud Build view, join the agones-discuss Google Group.
Build Failed 😱 Build Id: 9411b33c-317f-4e3a-994f-f7232fe88a11. To get permission to view the Cloud Build view, join the agones-discuss Google Group.
Build Failed 😱 Build Id: 0b808bc2-c8bf-490a-bbe0-72dd50c701d2. To get permission to view the Cloud Build view, join the agones-discuss Google Group.
Build Succeeded 👏 Build Id: 054af97a-541b-4db1-a6d2-96bb78739307 The following development artifacts have been built, and will exist for the next 30 days:
A preview of the website (the last 30 builds are retained): To install this version:
- Fix flakiness in this e2e test
- Fix weird issue where killall returns a non-zero exit code, even with -q: make sure killall always returns true
Force-pushed from 5d0ff9c to 0acbdee
@@ -31,5 +31,5 @@ echo "Waiting consul port-forward to launch on 8500..."
 timeout 60 bash -c 'until printf "" 2>>/dev/null >>/dev/tcp/$0/$1; do sleep 1; done' 127.0.0.1 8500
 echo "consul port-forward launched. Starting e2e tests..."
 consul lock -child-exit-code=true -timeout 30m -try 30m -verbose LockE2E /root/e2e.sh
-killall -q kubectl
+killall -q kubectl || true
@Kuqd for interest's sake - this was the fix for the e2e tests. Even with -q, killall was always returning a non-zero status code, so the || true forces that step's exit status to zero and keeps the script from failing on it. 🤷♂️
	return

	// wait for the allocation system to come online
	var response *http.Response
@pooneh-m for interest's sake - this was what (seems to have) solved the flakiness issue. My theory is that there was a race condition wherein the backing pods for the service would come up after this test was run, but only sometimes (probably depending on e2e test order).
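For anyone following along, a minimal sketch of the wait-for-online idea behind the fix. The function name, URL handling, and timings here are assumptions for illustration; the diff above only shows `var response *http.Response` and the comment preceding it:

```go
package e2e

import (
	"net/http"
	"time"
)

// waitForAllocator is a hypothetical helper sketching the fix's idea:
// poll the allocation endpoint until it answers, rather than assuming
// the backing pods are already serving when the test starts.
func waitForAllocator(client *http.Client, url string, deadline time.Duration) (*http.Response, error) {
	var response *http.Response
	var err error
	for start := time.Now(); time.Since(start) < deadline; {
		response, err = client.Get(url)
		if err == nil {
			// First successful response means the allocation system is online.
			return response, nil
		}
		// Not up yet (e.g. connection refused): back off briefly and retry.
		time.Sleep(time.Second)
	}
	return nil, err
}
```

Retrying until the first successful response removes the dependence on e2e test ordering that the race-condition theory points at.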
Thanks for the fix. Makes sense.
Build Succeeded 👏 Build Id: 7a0986f1-46ec-4623-813a-d7d6317b5bd7 The following development artifacts have been built, and will exist for the next 30 days:
A preview of the website (the last 30 builds are retained): To install this version:
Attempt to fix flakiness in this e2e test. Theory is that the allocation deployment is taking a little while to come online.