Receive: Dead Lock in multitsdb #7856
Comments
Hey @jnyi! I'm here to help you with any bugs, questions, or contributions. Let's work together to solve this issue. Given the high number of goroutines and the potential lock race conditions between the functions you referenced, it might be beneficial to review the usage patterns of those functions to make sure they are not being called excessively or in a way that could lead to contention. Additionally, consider profiling the application to identify specific areas where locks might be causing bottlenecks or where goroutines are not being managed efficiently.
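The profiling suggestion above is generic Go tooling rather than anything Thanos-specific. As a hedged sketch, a Go process can expose lock-contention and goroutine profiles like this (the listen address and sampling rates are arbitrary choices for illustration):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
	"runtime"
)

func main() {
	// Sample roughly 1 out of every 5 mutex contention events.
	runtime.SetMutexProfileFraction(5)
	// Record goroutine blocking events lasting about 1ms or more (rate in ns).
	runtime.SetBlockProfileRate(1_000_000)

	// Serve the pprof endpoints. Useful ones for this kind of issue:
	//   /debug/pprof/goroutine?debug=2  full stack dump of every goroutine
	//   /debug/pprof/mutex              lock contention profile
	//   /debug/pprof/block              blocking profile
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```

With this in place, `go tool pprof http://localhost:6060/debug/pprof/mutex` shows which lock sites contend the most, and the goroutine dump shows where leaked goroutines are parked.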
We plan to revert this PR internally, which uses more memory but has less chance of causing the pod goroutine leaks that hurt the system's reliability: #7782
Fixing in #7857, sorry for the issues.
np, thanks for helping out with this so quickly. I wonder if we could capture this kind of leak in unit tests, like
We have
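For the unit-test idea above (the specific helper the comment refers to is cut off in this thread), one common approach in Go projects is go.uber.org/goleak, which fails a test if unexpected goroutines are still alive when it finishes. A minimal sketch, with the package name and the ignored function chosen only for illustration:

```go
package receive_test

import (
	"testing"

	"go.uber.org/goleak"
)

// TestMain verifies that no unexpected goroutines are still running once
// all tests in this package have finished, turning a goroutine leak into
// a test failure instead of a production incident.
func TestMain(m *testing.M) {
	goleak.VerifyTestMain(m,
		// Long-lived background goroutines that are known and harmless
		// can be excluded explicitly.
		goleak.IgnoreTopFunction("go.opencensus.io/stats/view.(*worker).start"),
	)
}
```

Individual tests can also call `goleak.VerifyNone(t)` in a deferred statement for finer-grained checks.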
We are testing the latest Thanos main branch code to adopt cuckoo filters and found that the receiver sometimes leaks goroutines in very high numbers. We did pprof and found there are potential lock race conditions between the following functions (see the sketch after the details below):
Thanos, Prometheus and Golang version used:
thanos: v0.37.0-dev
golang: 1.23.0
Object Storage Provider:
What happened:
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Full logs to relevant components:
Anything else we need to know:
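As a rough way to observe the behaviour described above, the hypothetical helper below polls the process's goroutine count and dumps all stacks once it crosses a threshold; the interval and threshold are arbitrary, and in practice the same information is typically available from a /debug/pprof endpoint if one is exposed.

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"time"
)

// watchGoroutines prints the goroutine count at a fixed interval and writes
// a full stack dump to stderr once the count crosses the given threshold,
// so the goroutines stuck waiting on locks can be inspected offline.
func watchGoroutines(interval time.Duration, threshold int) {
	for range time.Tick(interval) {
		n := runtime.NumGoroutine()
		fmt.Printf("goroutines: %d\n", n)
		if n > threshold {
			// debug=2 prints full stack traces for every goroutine.
			pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
			return
		}
	}
}

func main() {
	watchGoroutines(10*time.Second, 10000)
}
```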