The proxy on a random pod enters fail-fast mode preventing it from working correctly. #8934
Comments
After much research and testing, I found that increasing the maximum number of concurrent streams helps. Since I did that on the affected microservice, I haven't seen the problem in about 24 hours. That was the only method I found that kept the system working for that long; previously, an Ingester would enter fail-fast mode well before that. The way to increase the maximum number of streams on an Ingester is by adding the following argument to its configuration:
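A sketch of that argument, assuming the standard Cortex `-server.grpc-max-concurrent-streams` server flag; the exact snippet from the original comment isn't reproduced here:

```sh
# Illustrative Ingester invocation; only the last flag is the relevant change.
# -server.grpc-max-concurrent-streams defaults to 100 in Cortex.
cortex -target=ingester \
  -server.grpc-max-concurrent-streams=100000
```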
The default is 100, and for testing purposes I decided to use 100000, based on what some people suggested on Cortex's Slack channel. I consider this a workaround rather than a permanent solution, as without Linkerd I never needed to increase that limit when handling high traffic in previous tests. However, I'm trying to understand why it helped. I'll continue monitoring the system to ensure it keeps working as it does now.
We see the same issue in our gRPC service. It can only be solved by restarting. No meaningful information can be found in the logs.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
What is the issue?
The Linkerd proxy on a random Cortex Ingester enters fail-fast mode, blocking communication with the distributors but not with other Cortex components such as the Queriers.
That effectively breaks replication, as the distributors cannot see the Ingester in a healthy state, even though the communication via `memberlist` is unaffected and the Ingester appears active on the ring. I tried restarting the Ingester, but that only solves the problem temporarily. The strange part is that, sometimes, another Ingester enters the fail-fast state after restarting the affected one, which is why I used the term random to describe the problem.
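For reference, restarting an affected Ingester and checking its proxy for the fail-fast state would typically look something like the following (namespace and resource names are assumptions, not taken from this issue):

```sh
# Assumed namespace "cortex" and StatefulSet "ingester"; adjust to your deployment.
kubectl -n cortex logs ingester-0 -c linkerd-proxy | grep -i fail   # look for fail-fast messages
kubectl -n cortex delete pod ingester-0                             # restart the affected Ingester
```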
How can it be reproduced?
The problem appears when handling a considerable amount of traffic. Currently, the distributors are receiving a constant rate of 50K samples per second in batches of 500, meaning they effectively receive 100 requests per second (with 500 samples each), and according to the `linkerd viz dashboard`, the Ingesters are receiving a similar number of RPS. In my initial tests with orders of magnitude less traffic, the problem doesn't appear.
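A way to double-check those per-workload request rates from the CLI rather than the dashboard (workload names and namespace are assumptions) is `linkerd viz stat`:

```sh
# Illustrative; reports success rate, RPS, and latency per meshed workload.
linkerd viz stat deploy/distributor -n cortex
linkerd viz stat sts/ingester -n cortex
```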
Logs, error output, etc
The logs are very verbose due to the ingestion rate, so here are the last 5 seconds from the affected Ingester (i.e., `ingester-0`) and the two distributors:
https://gist.github.com/agalue/5ecbbfcf37ecf8b5798bf18bbe0473b1
Here is how I got the logs:
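(The original command listing isn't reproduced above.) Collecting a few seconds of proxy logs usually looks something like this; pod names and namespace are assumptions:

```sh
# Illustrative only; the reporter's actual commands were not captured.
kubectl -n cortex logs ingester-0 -c linkerd-proxy --since=5s > ingester-0-proxy.log
kubectl -n cortex logs distributor-0 -c linkerd-proxy --since=5s > distributor-0-proxy.log
```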
Output of `linkerd check -o short`:
➜ ~ linkerd check -o short
Status check results are √
Environment
Note: the problem appears with and without Calico (tested on different clusters).
Possible solution
No response
Additional context
In Cortex, all components talk to each other via Pod IP, meaning all communication happens pod-to-pod over gRPC.
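As a quick illustration (command and namespace are assumptions), the Pod IPs that the components dial directly, and which the Ingesters register in the ring, can be listed with:

```sh
# Each Cortex component dials its peers by Pod IP over gRPC, not through a Service VIP.
kubectl -n cortex get pods -o wide
```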
To give more context about what you would see on the proxy logs:
The only error code I found on the distributors' proxy is:
In terms of the applications, the affected Ingester reports nothing in its log, as the distributor traffic is not reaching the application.
The distributors, on the other hand, are flooded with the following message, as I presume the proxy on the affected Ingester is rejecting the traffic:
Would you like to work on fixing this bug?
No response