OS: Windows
Version: 3.6.5

I have a 3-node cluster and always run out of memory on the stats node when stats collection is enabled. I've tried reducing the retention policies as far as possible, but stats still uses all the memory on the node. During testing, the non-stats nodes use about 1 GB of memory; the stats node uses more than 6 GB of memory before I kill the test.
Here are the most recent settings I've tried. If the only retention policy is 60 seconds long, should I expect the stats node's memory usage to level off after 60 seconds?
{collect_statistics, coarse},            %% even though I set this to coarse, it always changes to fine
{collect_statistics_interval, 5000},
{rates_mode, basic},
{sample_retention_policies,
  [{global,   [{60, 5}]},
   {basic,    []},
   {detailed, []}]}
Here is the status of the node 15 minutes after starting the test. During the test, I create a fixed number of queues and consumers. Memory usage continues to increase long after all queues and consumers have been created. Producers are using direct reply-to queues; does that impact stats?

Status of node rabbit@RMQTest01 ...
Please post questions to rabbitmq-users or Stack Overflow. RabbitMQ uses GitHub issues for specific actionable items engineers can work on, not questions. Thank you.
This is the most commonly asked question/complaint this year. You will find dozens of threads about it in the rabbitmq-users archives. A few things worth pointing out:
collect_statistics has not been effective for several feature releases (which is why you see it change to fine); the plugin will log a warning if it is used. If you want to disable rates, set rates_mode to none. This is documented.
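For example, a minimal sketch (not taken from this issue) of disabling rates in rabbitmq.config, with rates_mode set under the rabbitmq_management application:

[
  {rabbitmq_management, [
    %% disable rate calculations in the stats DB entirely
    {rates_mode, none}
  ]}
].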
As discussed many times on rabbitmq-users and now mentioned in the docs, the things that affect stats DB load most are the number of stats-emitting entities (see the docs) and the collection interval. You have the interval set to 5 seconds, which is useful in development environments but rarely needed when no human is constantly watching the UI. Set it to 30 or 60 seconds; compared to 5 seconds, that reduces the load on the management plugin by up to 6 or 12 times.
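As a sketch of what that change looks like, assuming the interval is still set under the rabbit application as in the snippet above (the value is in milliseconds; 30000 is just one of the suggested values):

[
  {rabbit, [
    %% emit stats every 30 seconds instead of every 5
    {collect_statistics_interval, 30000}
  ]}
].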
It is safe to restart the stats DB since all of its contents are entirely transient. How to do this is documented and has been mentioned dozens, if not hundreds, of times on rabbitmq-users.
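As an illustration only (the exact procedure is version-specific, so check the docs for your release), one approach that has been suggested on rabbitmq-users is to restart the management plugin's application on the stats node with rabbitmqctl eval, which discards the transient stats data:

rabbitmqctl eval 'application:stop(rabbitmq_management), application:start(rabbitmq_management).'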
Direct reply-to has no effect whatsoever on the stats DB. The fact that apps no longer declare any queues or exchanges or bindings doesn't mean that there are no more stats-emitting entities being created: connections and channels are the primary contributors and those can be leaked, for example. That's why rabbit.channel_max and features such as rabbitmq/rabbitmq-server#500 were introduced.
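For reference, a hedged example of capping channels per connection via channel_max under the rabbit application (the value 64 is purely illustrative):

[
  {rabbit, [
    %% reject attempts to open more than 64 channels on a single connection
    {channel_max, 64}
  ]}
].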