GMP consuming a lot of memory #3612
Comments
I suspect this is the same issue as #2915.
I have tried to mitigate the issue by adding an explicit gc() call inside the loop whenever i % 1000 == 0, which effectively keeps the used RAM down to a few hundred MB when calling LLtest( BigInt( 216091 ) ). The whole thing, however, then becomes very ( very ( very ) ) slow. And explicitly calling gc() every 1000th iteration is, IMHO, a dreadful hack.
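A minimal sketch of that workaround (the function name LLtest_with_gc is mine, not the original code from this issue; GC.gc() is the current spelling of the gc() call mentioned above):

```julia
# Lucas-Lehmer loop with an explicit collection every 1000th iteration,
# as described above: keeps the resident set small, at a large cost in speed.
function LLtest_with_gc(p::Integer)
    mp = big(2)^p - 1
    L  = big(4)
    for i in 1:(p - 2)
        L = mod(L * L - 2, mp)
        i % 1000 == 0 && GC.gc()   # spelled gc() in 2013-era Julia
    end
    return L == 0                  # true => 2^p - 1 is prime
end
```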
Doesn't fix the issue, but GMP has a special operation for a^b mod c; see the powermod function.
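For reference, a sketch of one Lucas-Lehmer step written that way, using the L and mp names that appear later in this thread (for BigInt arguments, Julia's powermod calls into GMP's modular exponentiation):

```julia
# One step of the recurrence L <- L^2 - 2 (mod mp), using modular
# exponentiation instead of a full square followed by %.
function ll_step(L::BigInt, mp::BigInt)
    s = powermod(L, 2, mp) - 2
    return s < 0 ? s + mp : s      # fold a negative result back into [0, mp)
end
```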
I think we should merge this code fragment into #2915 and close this one.
Got it. Don't care, as long as you guys fix this. Please. I want to press ahead with using Julia to search for Mersenne primes - as a matter of fact, I am looking to build something faster than GIMPS. I know, I know, quite the challenge, their code is blindingly fast - but then again, without challenges, life is boring :-)
Closing as dup. Fix on the way.
@exercitussolus Could you try now? Looking forward to what you do with Mersenne prime search.
Given that GIMPS seems to use AVX, do you think you will need intrinsics (#2299)?
@ViralBShah Thanks for asking :-) Yes, I would need intrinsics, particularly for a fast modulo operation. Multiplication can be parallelized ( Karatsuba, Toom-Cook, Schönhage-Strassen, Montgomery... ), and I am going to implement these myself in the next few days - but a modulo operation is almost impossible to parallelize. I know, in the other thread, kk49 scared the bejeezus out of @StefanKarpinski by asking for multithreading with SIMD :-) Yet, when we are talking about high-performance computing, such things are exactly what we need.
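As an aside on the modulo step: for the moduli relevant here, M = 2^p - 1, the reduction needs no division at all, since 2^p ≡ 1 (mod M) means the bits above position p can simply be folded back in. A minimal BigInt sketch (the function name is mine), assuming x >= 0:

```julia
# Reduce x modulo the Mersenne number M = 2^p - 1 using only shifts,
# masks and adds; repeat folding until the value fits below M.
function mersenne_mod(x::BigInt, p::Integer, M::BigInt = big(2)^p - 1)
    while x > M
        x = (x & M) + (x >> p)     # fold the bits above position p back down
    end
    return x == M ? big(0) : x
end
```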
@ViralBShah The issue is fixed now, after your patch, insofar as LLtest( BigInt( 216091 ) ) now only consumes about 200 MB of RAM. @mlubin using powermod( L, 2, mp ) instead of ( ( L * L ) % mp - 2 ) results in execution times about 3.25x worse... Probably a nice gimmick for Int64 / Int128, but unusable for BigInt :-(
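A rough way to reproduce that comparison in current Julia (the 3.25x figure above is from the comment and will vary with machine and Julia/GMP versions; the value of L below is an arbitrary full-width stand-in):

```julia
p  = 216091
mp = big(2)^p - 1
L  = mp - big(12345)                 # arbitrary p-bit value to square

(L * L) % mp; powermod(L, 2, mp)     # warm up both code paths once
println("square + %  : ", @elapsed ((L * L) % mp - 2), " s")
println("powermod    : ", @elapsed (powermod(L, 2, mp) - 2), " s")
```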
Credit for the fix goes to @JeffBezanson.
I guess using
Although I am new to Julia ( just 3 days ago I built Julia from source and started playing around with it ), I am already very enthusiastic.
Yet, when running a "classical" Lucas-Lehmer test on Mersenne prime candidates, I run into a ginormous memory leak. When running the code below with an argument of, let's say, 216091, Julia on my system ( Ubuntu 12 server, 1x 8-Core Xeon E5620, 8 GB RAM ) uses up several tens of gigabytes of RAM! With an argument > 500,000, even running this function with 32 GB of swap space ( !!! ) will lead Linux to terminate the Julia process. Am I doing something wrong here? The goal is, of course, to test prime candidates with an exponent > 57,000,000 ;-)
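The code block referred to above did not survive in this copy of the issue. For context, here is a minimal sketch of a classical Lucas-Lehmer test along the lines discussed in the comments (it reuses the LLtest, mp, and L names from this thread, but is not the poster's original code):

```julia
# Lucas-Lehmer test: 2^p - 1 is prime iff L == 0 after p - 2 iterations
# of L <- L^2 - 2 (mod 2^p - 1), starting from L = 4.  Every iteration
# allocates fresh BigInts, which is what exposed the memory growth
# discussed in this issue.
function LLtest(p::Integer)
    mp = big(2)^p - 1
    L  = big(4)
    for i in 1:(p - 2)
        L = mod(L * L - 2, mp)
    end
    return L == 0
end

LLtest(BigInt(216091))   # true: 2^216091 - 1 is a known Mersenne prime (slow!)
```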
[pao: formatting]
[Viral: Changed title in line with #2915]