[🐛 BUG]: Memory Leak in KV 'in-memory' driver #2051
Comments
Hey @segrax 👋
And, just to clarify some things: I used a hashmap (sync.Map, actually, but under the hood that is two hashmaps, a dirty one and a clean one) to store KV items. In Go, maps, and more specifically the buckets inside a map, are never shrunk when you delete keys. So if you add 100 million items to a map and then delete them all, you will see roughly the same memory usage (minus your data, but including the buckets), and that memory won't be reclaimed. The only way to reclaim it is to re-create the map itself.
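As a standalone illustration of that behavior (this is not driver code, and the exact numbers will vary by Go version and platform), a small program like the following shows that deleting every key leaves the heap roughly where it was, while re-creating the map lets the GC hand the memory back:

```go
package main

import (
	"fmt"
	"runtime"
)

// heapInUse forces a GC and reports how much heap is still in use.
func heapInUse() uint64 {
	var ms runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&ms)
	return ms.HeapInuse
}

func main() {
	m := make(map[int][64]byte)
	for i := 0; i < 1_000_000; i++ {
		m[i] = [64]byte{}
	}
	fmt.Printf("after insert:    %d MiB\n", heapInUse()>>20)

	for i := 0; i < 1_000_000; i++ {
		delete(m, i)
	}
	// Deleting every key does not shrink the map: the buckets stay allocated,
	// so usage remains close to the peak.
	fmt.Printf("after delete:    %d MiB\n", heapInUse()>>20)

	// Re-creating the map drops the old buckets and lets the GC reclaim them.
	m = make(map[int][64]byte)
	fmt.Printf("after re-create: %d MiB\n", heapInUse()>>20)

	runtime.KeepAlive(m)
}
```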
Maybe it would be better to use a binary-search + sorted-slice combination, so that at large sizes the Go runtime could actually reclaim the freed memory. A plain search tree is another option, but after intensive use it might degenerate into a linked list with O(n) search time, so an AVL tree or a similar self-balancing structure could be applied to prevent that.
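A minimal sketch of that sorted-slice idea, with hypothetical names rather than the driver's actual types: keys are kept in order, sort.Search gives O(log n) lookups, and the backing array can be compacted after mass deletions, which is exactly what a map's bucket array never allows.

```go
package kv

import "sort"

// item is a hypothetical entry type for this sketch, not the driver's real one.
type item struct {
	key   string
	value []byte
}

// sliceStore keeps items ordered by key, so lookups are binary searches.
type sliceStore struct {
	items []item
}

func (s *sliceStore) get(key string) ([]byte, bool) {
	i := sort.Search(len(s.items), func(i int) bool { return s.items[i].key >= key })
	if i < len(s.items) && s.items[i].key == key {
		return s.items[i].value, true
	}
	return nil, false
}

func (s *sliceStore) set(key string, value []byte) {
	i := sort.Search(len(s.items), func(i int) bool { return s.items[i].key >= key })
	if i < len(s.items) && s.items[i].key == key {
		s.items[i].value = value
		return
	}
	s.items = append(s.items, item{})
	copy(s.items[i+1:], s.items[i:]) // copy handles the overlap, like memmove
	s.items[i] = item{key: key, value: value}
}

func (s *sliceStore) del(key string) {
	i := sort.Search(len(s.items), func(i int) bool { return s.items[i].key >= key })
	if i >= len(s.items) || s.items[i].key != key {
		return
	}
	s.items = append(s.items[:i], s.items[i+1:]...)
	// Unlike map buckets, the backing array can be swapped for a smaller one
	// after mass deletions, letting the GC reclaim the old allocation.
	if cap(s.items) > 1024 && len(s.items) < cap(s.items)/4 {
		shrunk := make([]item, len(s.items))
		copy(shrunk, s.items)
		s.items = shrunk
	}
}
```

The trade-off is that inserts and deletes become O(n) because of the element shifting, which is why the comment above also mentions an AVL tree or another self-balancing structure.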
Hey @rustatian,

```php
$storage->set('key', 'value', 2000);
```

If an existing key is set again (before it's expired), is the memory reclaimed? My Go experience is pretty limited, but if this runs again (with the same key) before the callback has executed, is it cleaned up?

```go
clbk, stopCh, updateCh := d.ttlcallback(items[i].Key(), tm)

go func() {
	clbk(*d.broadcastStopCh.Load())
}()

// store the callback since we have TTL
d.callbacks.Store(items[i].Key(), &cb{
	updateCh: updateCh,
	stopCh:   stopCh,
})
```

I did come across this issue,
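For context, here is a rough, self-contained model of the pattern being asked about; the types and method names below are illustrative, not RoadRunner's actual code. Each call that sets a key with a TTL spawns one goroutine that waits for that TTL, so if nothing stops the previous goroutine when the same key is set again, the old ones keep waiting until their own timers fire.

```go
package kv

import (
	"sync"
	"time"
)

// cbEntry mirrors the cb struct in the snippet above: channels that control a
// single key's TTL callback goroutine. All names here are illustrative.
type cbEntry struct {
	stopCh   chan struct{}
	updateCh chan time.Time
}

type driver struct {
	items     sync.Map // key -> value
	callbacks sync.Map // key -> *cbEntry
}

// setWithTTL models the pattern in question: every call spawns a goroutine
// that waits for the TTL and then removes the key. If a previous goroutine
// for the same key is never told to stop, each re-set of the key leaves one
// more goroutine (and its channels) alive until its own timer fires.
func (d *driver) setWithTTL(key string, value any, ttl time.Duration) {
	d.items.Store(key, value)

	entry := &cbEntry{
		stopCh:   make(chan struct{}),
		updateCh: make(chan time.Time, 1),
	}
	d.callbacks.Store(key, entry)

	go func() {
		timer := time.NewTimer(ttl)
		defer timer.Stop()
		for {
			select {
			case <-timer.C:
				d.items.Delete(key)
				d.callbacks.Delete(key)
				return
			case deadline := <-entry.updateCh:
				timer.Reset(time.Until(deadline))
			case <-entry.stopCh:
				return
			}
		}
	}()
}
```

In this model, re-setting the same key many times before the first timer fires leaves that many goroutines parked in the select, which is the kind of growth the question is probing.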
And this case is interesting: the callback won't be cleaned up, but it should still fire after the TTL expires and remove that key; the other goroutines would just be left waiting for their TTLs. But yeah, good point I think, I'll double-check that case and also fix the panic.
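Just to make the idea concrete, one way such a pile-up could be avoided, continuing the toy model above (this is a sketch, not the patch that was actually shipped):

```go
// setWithTTLReplace signals the previous callback goroutine for this key (if
// any) to exit before installing a new one, so re-setting a key does not leave
// an extra goroutine waiting out the old TTL.
func (d *driver) setWithTTLReplace(key string, value any, ttl time.Duration) {
	if old, ok := d.callbacks.LoadAndDelete(key); ok {
		// Tell the old goroutine to return instead of waiting for its timer.
		// A real implementation still has to handle the race with a callback
		// that is firing at this exact moment.
		close(old.(*cbEntry).stopCh)
	}
	d.setWithTTL(key, value, ttl)
}
```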
I have tried setting the same key ~50k times with a TTL of 2000... then waiting 5 minutes, but the memory use never goes down.

This is my full worker.php, although you should be able to replicate it pretty easily just by calling the RPC in a loop with a fixed payload:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use Spiral\RoadRunner;
use Spiral\RoadRunner\Environment;
use Spiral\Goridge\RPC\RPC;
use Spiral\RoadRunner\KeyValue\Factory;

$worker = RoadRunner\Worker::create();
$http = new RoadRunner\Http\HttpWorker($worker);

$rpc = RPC::create('tcp://127.0.0.1:6001');
$factory = new Factory($rpc);
$storage = $factory->select('cache');

try {
    while ($req = $http->waitRequest()) {
        $storage->set('key', 'value', 2000);
        $http->respond(200, 'asd');
    }
} catch (\Throwable $e) {
    $worker->error($e->getMessage());
}
```
@segrax I've located the issue, will release the patch soon. |
Ok, what was the reason:
No duplicates 🥲.
What happened?
When setting an item in the KV with a TTL (e.g. 2000; using a small value, like 5, seems to stop the issue), memory usage continually increases over time.
I suspect it's related to the callback here, which is created each time a key is set.
I have provided some pprof outputs in the log.
Unrelated, but I just discovered that if you set your TTL to 1, you can cause a panic.
Version (rr --version)
2024.2.1
How to reproduce the issue?
Just using rakyll/hey to hit the http endpoint
RR Config:
Relevant log output