* 1st argument: the table identifier does not refer to an existing ETS table #181
Hi 👋 !! Real quick, are you using the latest Nebulex version? Or what version are you using?
Everything is updated.
Maybe when the old ETS table is removed there are still accesses to it?
Right, let me check. It is weird, because when a new generation is created the older one is just marked for deletion; it is not deleted immediately. The deletion happens the next time a new generation is created, precisely to give processes still hitting the old table time to catch up, and that should be a very short period, because the references to the new and old generations have already changed. Anyway, let me review this and try to reproduce it somehow.
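To picture the rotation described above, here is a minimal, hypothetical sketch of a generational pattern (not Nebulex's actual internals): the newest table receives writes, the previous generation is kept around for readers, and only generations older than that are deleted on rotation.

```elixir
defmodule GenSketch do
  # Illustrative generation rotation: keep the two newest ETS tables and
  # delete anything older only when the *next* generation is created, so
  # readers still holding a reference to the previous table keep working
  # for a short window.
  def new_generation(generations) do
    new_gen = :ets.new(:generation, [:set, :public, read_concurrency: true])

    case [new_gen | generations] do
      [newer, older | stale] ->
        # Tables beyond the two newest are deleted now; any process that
        # still reads them after this point gets the ArgumentError above.
        Enum.each(stale, &:ets.delete/1)
        [newer, older]

      gens ->
        gens
    end
  end
end
```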
I was investigating, and the case where this may happen is if, for example, two (or more) generations are created one right after the other, almost at the same time, not giving existing processes time to pick up the new generation list. It is very weird, though. My first suspicion is that the GC timer and the cleanup timer are firing one right after the other, causing a situation like this. It is very unusual, but it may happen, especially since you mention you have a high volume of data and load on the cache. So I made a fix to avoid it and pushed the changes to the master branch already; please try it out and let me know if the issue persists, so we can rule this out first.
Ok, I'll give it a try in production... on Monday morning.
I'm still investigating, because it is not yet clear to me why this is happening. I've pushed some fixes/improvements in the meantime; they may not fix this issue, but let me know once you try them out.
I haven't pushed your new code to production yet (I will soon), but I changed the config 3 days ago to:
And everything has been running without issues; in this time the cache has cleaned itself up 4 times with no problems. I believe gc_interval (with its default value) is related to the bug.
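For context, a local-adapter configuration of the shape being discussed looks roughly like the following; the cache module name and all values are illustrative assumptions, not the reporter's actual settings.

```elixir
# config/config.exs -- illustrative values only
config :my_app, MyApp.Cache,
  # Create a new generation on a fixed schedule instead of relying on defaults
  gc_interval: :timer.hours(12),
  # Size/memory checks that can also trigger a generation rotation
  max_size: 1_000_000,
  allocated_memory: 2_000_000_000,
  gc_cleanup_min_timeout: :timer.seconds(10),
  gc_cleanup_max_timeout: :timer.minutes(10)
```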
Ok, there is a bug... I will make a video, something is very wrong. |
My mistake... I need more time to investigate it; what I thought was a bug might not be one.
Interesting, this gives me a better idea of where to look. I will check and keep you posted, thanks!!
How exactly is the allocated_memory parameter, along with a high gc_interval value, supposed to work when memory reaches the maximum allowed? A) Everything is deleted and new ETS tables are created? I have the impression that option "A" is what happens in my app, because used memory drops from 16 GB (allocated_memory) to near zero. I'm adding some telemetry data to help see how the cache behaves over time, and I'll be putting it all into production soon.
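A hedged sketch of the kind of telemetry observation meant above: attach a handler and log the measurements so cache behaviour can be graphed over time. The event name and the application/cache names below are assumptions for illustration; the actual event names should be taken from the Nebulex/telemetry documentation.

```elixir
defmodule MyApp.CacheTelemetry do
  require Logger

  # Illustrative only: subscribe to a telemetry event and log what arrives.
  # The event name is an assumption, not necessarily what Nebulex emits.
  def attach do
    :telemetry.attach(
      "cache-observer",
      [:my_app, :cache, :command, :stop],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event(event, measurements, metadata, _config) do
    Logger.info(
      "cache event=#{inspect(event)} measurements=#{inspect(measurements)} " <>
        "metadata_keys=#{inspect(Map.keys(metadata))}"
    )
  end
end
```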
Hey 👋 !! There was a fix very much related to this one (#183), perhaps you want to try it out, it is on the master branch. Let me know how it goes, stay tuned! |
Probably yes, because I tested it with several configurations of gc_interval and gc_cleanup_min_timeout, and the bug only happens when gc_interval is not set. I'll test the master branch.
Closing this issue, but feel free to reopen it if the issue persists even with fix #183.
@cabol we see the same issue with nebulex 2.5.1. Will try it soon with the latest 2.5.2. The stack trace reports :ets.lookup/2:
Sometimes the stack trace says :ets.take/2:
Our Nebulex usage is relatively simple:
The only functions we call are the cache put and get:
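For illustration, the usage is roughly the following; the cache module, keys, and values are placeholders, not the reporter's actual code, and the cache is assumed to be started in the application's supervision tree.

```elixir
# Placeholder cache module backed by the local adapter (illustrative).
defmodule MyApp.Cache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Local
end

# A plain put/get round trip; key, value, and TTL are made up.
:ok = MyApp.Cache.put("user:42", %{name: "Ada"}, ttl: :timer.minutes(30))
value = MyApp.Cache.get("user:42")
```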
We do however run two Elixir nodes on the same machine so that we can restart Elixir without downtime. rel/env.sh.eex:
config/releases.exs:
Hey 👋 ! Could you share the cache config? Also, could you elaborate a bit more on when this happens, e.g. whether there is a particular situation, or it just happens randomly? Thanks!
Hi,
I have an app with intensive use of Nebulex, and roughly 100 times per day I receive this error in the Elixir logs.
Can anyone help me?
The flow is: it happens many times at once, then it runs for many hours without issue, then it happens again... (repeat).
My Nebulex config:
Error on Sentry: