
multitenant: Increase timeout for kvtenantccl #98997

Merged: 1 commit into cockroachdb:master on Mar 20, 2023

Conversation

ajstorm
Collaborator

@ajstorm ajstorm commented Mar 19, 2023

The new TestTenantUpgradeInterlock test takes a long time to run (up to a minute locally) and has started timing out in nightly runs. On the suspicion that it's timing out strictly because it's long running, increase the timeout to see if that resolves the failures.

Release note: None
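
This PR's actual diff isn't shown on this page, but as a hedged illustration of the timeout concern (not the change made here): a long-running Go test can read the budget the test harness grants it via `t.Deadline()` and fail with a descriptive error shortly before the hard timeout fires. The test and helper names below are hypothetical placeholders.

```go
// Illustrative sketch only -- not the diff from this PR.
package kvtenantccl_test

import (
	"context"
	"testing"
	"time"
)

// runUpgradeInterlockSteps is a hypothetical stand-in for the real test body.
func runUpgradeInterlockSteps(ctx context.Context) error {
	// The real test would drive the tenant upgrade interlock here; the
	// placeholder only reports whether the context has already expired.
	return ctx.Err()
}

func TestLongRunningTimeoutSketch(t *testing.T) {
	if testing.Short() {
		t.Skip("long-running test; skipped under -short")
	}
	ctx := context.Background()
	if deadline, ok := t.Deadline(); ok {
		// Leave a grace period so the test can report a useful error instead
		// of being killed at the hard -timeout deadline.
		var cancel context.CancelFunc
		ctx, cancel = context.WithDeadline(ctx, deadline.Add(-30*time.Second))
		defer cancel()
	}
	if err := runUpgradeInterlockSteps(ctx); err != nil {
		t.Fatalf("upgrade interlock steps failed: %v", err)
	}
}
```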

@ajstorm ajstorm requested review from knz and healthy-pod March 19, 2023 22:46
@ajstorm ajstorm requested a review from a team as a code owner March 19, 2023 22:46
@cockroach-teamcity
Member

This change is Reviewable

Contributor

@healthy-pod healthy-pod left a comment


has started timing out in nightly runs.

Are those stress nightlies? If you meant #98986, that was a normal arm64 post-merge unit-test run, not a nightly run. If we need to increase the timeout for stress nightlies only, we can make the change here, though.
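
A hedged sketch of the distinction raised here (widening the timeout only for stress runs vs. for all runs): a helper could scale a test's budget when a stress marker is present in the environment. The environment variable and helper below are illustrative assumptions, not CockroachDB's actual mechanism.

```go
// Illustrative sketch only; STRESS_NIGHTLY is a hypothetical marker.
package kvtenantccl_test

import (
	"os"
	"time"
)

// testTimeout widens the per-test budget when running under a stress nightly.
func testTimeout(base time.Duration) time.Duration {
	if os.Getenv("STRESS_NIGHTLY") != "" {
		return 3 * base
	}
	return base
}
```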

@ajstorm
Collaborator Author

ajstorm commented Mar 20, 2023

If you meant #98986, that was a normal arm64 post-merge unit-test run, not a nightly run. If we need to increase the timeout for stress nightlies only, we can make the change here, though.

Yes, my commit message was inaccurate. I think we need the increased timeout for non-stress runs as well. I'll update the commit message before merging. I also think we should investigate tenant rate limiting as a possible cause for this timeout.

The new TestTenantUpgradeInterlock test takes a long time to run (up to a
minute locally) and has started timing out (the first instance was in a
post-merge arm64 test run). On the suspicion that it's timing out strictly
because it's long-running, increase the timeout to see if that resolves
the failures.

Release note: None
Epic: None
Fixes: cockroachdb#98986
@ajstorm
Collaborator Author

ajstorm commented Mar 20, 2023

bors r=knz

@craig
Contributor

craig bot commented Mar 20, 2023

Build failed:

@knz
Contributor

knz commented Mar 20, 2023

flake is #98020

bors r+

@craig
Contributor

craig bot commented Mar 20, 2023

Build succeeded:

@craig craig bot merged commit 5839fe0 into cockroachdb:master Mar 20, 2023
ajstorm added a commit to ajstorm/cockroach that referenced this pull request Mar 22, 2023
TestTenantUpgradeInterlock was skipped with cockroachdb#99121 because it was timing out
regularly. In response to the timeouts, however, cockroachdb#98997 was merged to
increase the timeout duration for the test (since its run length is proportional
to the number of migrations in the release, which has been increasing). Unfortunately,
our wires got crossed and the skip was introduced before the test had a chance
to run with the new timeout. cockroachdb#98987 hasn't shown a failure with the new timeout
length, so it may be the case that the longer timeout has resolved the problem.

Reenabling the test to see if it has better success under the new timeout
length.

Fixes: cockroachdb#98987
Epic: None
Release note: None
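
As a hedged, standard-library sketch of the skip/re-enable mechanics this commit message describes (CockroachDB's own skip helpers under pkg/testutils/skip are not reproduced here; the test name is a placeholder):

```go
package kvtenantccl_test

import "testing"

func TestTenantUpgradeInterlockSkipSketch(t *testing.T) {
	// While the flake was open, the body was guarded by a skip pointing at
	// the tracking issue, roughly:
	//
	//     t.Skip("flaky; see https://github.com/cockroachdb/cockroach/issues/98987")
	//
	// Re-enabling the test simply removes that skip so the body runs again
	// under the new, longer timeout.
	t.Log("test body runs here once the skip is removed")
}
```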
craig bot pushed a commit that referenced this pull request Mar 22, 2023
99216: multitenant: re-enable TestTenantUpgradeInterlock r=knz a=ajstorm

TestTenantUpgradeInterlock was skipped with #99121 because it was timing out regularly. In response to the timeouts, however, #98997 was merged to increase the timeout duration for the test (since its run length is proportional to the number of migrations in the release, which has been increasing). Unfortunately, our wires got crossed and the skip was introduced before the test had a chance to run with the new timeout. #98987 hasn't shown a failure with the new timeout length, so it may be the case that the longer timeout has resolved the problem.

Reenabling the test to see if it has better success under the new timeout length.

Fixes: #98987
Epic: None
Release note: None

Co-authored-by: Adam Storm <storm@cockroachlabs.com>
blathers-crl bot pushed a commit that referenced this pull request Mar 23, 2023
TestTenantUpgradeInterlock was skipped with #99121 because it was timing out
regularly. In response to the timeouts, however, #98997 was merged to increase
the timeout duration for the test (since its run length is proportional to the
number of migrations in the release, which has been increasing). Unfortunately,
our wires got crossed and the skip was introduced before the test had a chance
to run with the new timeout. #98987 hasn't shown a failure with the new timeout
length, so it may be the case that the longer timeout has resolved the problem.

Reenabling the test to see if it has better success under the new timeout
length.

Fixes: #98987
Epic: None
Release note: None