Benchmark's memory requirement #9
Comments
Yeah, I think the larger datasets simply require more than 16GB to generate. You have two options.
Hope this helps :)
Thanks for your suggestions, @alexandervanrenen. Here is the sample output when I comment out the 400M, 600M, and 800M datasets and only try to run the 200M-record ones (GCC 10):
In the previous version, the key-value pairs for 200M records took 3.2 GB (64-bit key & payload), and another 3.2 GB when building any index (which copies the key-value pairs).
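As a sanity check on those numbers (my own back-of-the-envelope sketch, not code taken from the benchmark): 200M records at 16 bytes each come out to 3.2 GB, and a build-time copy of the data doubles that.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // Rough estimate of the raw data footprint, assuming 64-bit keys and
  // 64-bit payloads (16 bytes per record) as stated above. The real data
  // structures may add some overhead on top of this.
  const std::uint64_t num_records = 200'000'000;  // the 200M-record datasets
  const std::uint64_t bytes_per_record = 8 + 8;   // key + payload
  const double data_gb = num_records * bytes_per_record / 1e9;
  // If an index implementation copies the key-value pairs while building,
  // the footprint roughly doubles until the build finishes.
  std::printf("data: %.1f GB, data + build-time copy: %.1f GB\n",
              data_gb, 2.0 * data_gb);
  return 0;
}
```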
Ok, for this one I am not sure what is happening (I have not seen it before). It might be that RMI is not freeing some memory or that RS is allocating too much memory... have any of you seen this before, @RyanMarcus @andreaskipf?
Not sure what changed w.r.t. the memory requirements (I have no issue running the benchmark on a machine with 32GiB of RAM). However, we will soon replace the current RS implementation with the new one, which operates directly on the input array without creating a copy. I'll ping this thread once this is done.
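For readers following along, the difference can be sketched roughly like this (the types and function names below are placeholders made up for illustration, not the actual RS API):

```cpp
#include <cstdint>
#include <vector>

// Illustrative placeholder; not the real RadixSpline (RS) index type.
struct SomeIndex {};

// Old behavior as described above: the builder keeps its own copy of the
// input data, so the memory footprint roughly doubles during the build.
SomeIndex BuildWithCopy(const std::vector<std::uint64_t>& keys) {
  std::vector<std::uint64_t> copy(keys);  // extra ~8 bytes per key
  SomeIndex index;
  // ... build `index` from `copy` ...
  (void)copy;
  return index;
}

// New behavior: the builder reads the (already sorted) input array in place
// and only allocates the comparatively small index structure itself.
SomeIndex BuildInPlace(const std::vector<std::uint64_t>& keys) {
  SomeIndex index;
  // ... build `index` directly from `keys`, no copy ...
  (void)keys;
  return index;
}
```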
Ok, I will investigate ... should be easy enough to figure out :)
We have just replaced RS with the new version. @alihadian, can you please verify whether you can now build it on your 16GiB machine? Thanks!
[Wrong output was posted in my previous comment.] Thanks. If you want to make the benchmark hands-off on 16GB machines, then prepare.sh must take into account the datasets selected in datasets_under_test.txt. The script currently tries to load all datasets and generate queries for all of them (including the 400M- and 600M-record ones), and hence crashes. I manually commented out the 400M+ datasets in prepare.sh and datasets_under_test.txt, but some algorithms still crash:
On a system with 16GB of RAM, the benchmark crashes during the build phase:
Before upgrading the memory, I wonder what the peak memory usage of the benchmark is. Do you have a rough estimate?
Apparently a bit more than 16GB is enough for the build of osm_cellids_600M_uint64, but then the benchmark execution itself could take even more memory, right?
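For a rough estimate (my own back-of-the-envelope numbers based on the 3.2 GB figure quoted above, not measured): 600M records at 16 bytes each (64-bit key + payload) are about 600,000,000 × 16 B ≈ 9.6 GB, and roughly 2 × 9.6 GB ≈ 19.2 GB while an index implementation still copies the key-value pairs during the build, which is consistent with "a bit more than 16GB".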