How to handle the precision of BigFloats #10040
Don't most computer-algebra systems with arbitrary-precision arithmetic use a global-like concept to control default precision? Maple has |
I think you are confusing precision with accuracy here. Just because a number has |
cc: @jiahao |
I've considered this before for a somewhat different reason: we could potentially arrange for BigFloats to be immediate objects if their size is part of their type. |
My guess is that the cost of recompiling everything for every different precision is not worth it. |
I vaguely recall from my first exposure to Julia that having the precision set globally was a conscious decision to discourage mixing
I have not used MPFR directly myself, but the sample program suggests otherwise. We see explicitly that each call to |
If we're going to do this, we would want the precision as a type parameter. Otherwise any sort of matrix operation on |
I think MPFR does try to guarantee that the output of a floating-point computation is correct (roughly) to the working precision of the least precise variable. Their technical paper says: (doi:10.1145/1236463.1236468, Sec. 2.3)
|
You'd only have to recompile anything if you for some reason don't like the default number of bits. In the cases you do change something, we'll only recompile the code you actually use with the different precision. |
I don't have a problem with the first option: operations with different precisions already work correctly; this would just be equivalent to

```julia
x = with_bigfloat_precision(200) do
    BigFloat(3.1)
end
```

I think it's only worth including the precision in the type parameter if there is some sort of performance advantage, which at the moment there doesn't seem to be. As @stevengj says, it is important not to conflate precision with accuracy (as Mathematica does): see, for example, slide 32 onwards of this presentation by Kahan. |
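As a cross-language point of comparison (not Julia or MPFR itself), Python's standard `decimal` module uses exactly this dynamically scoped style: `localcontext` temporarily overrides the thread's default precision for a block, much like `with_bigfloat_precision` does. A minimal sketch:

```python
from decimal import Decimal, localcontext, getcontext

getcontext().prec = 28  # default working precision, in decimal digits

# Analogue of `with_bigfloat_precision(...) do ... end`:
# the higher precision applies only inside the `with` block.
with localcontext() as ctx:
    ctx.prec = 60
    x = Decimal(1) / Decimal(3)  # computed with 60 digits

y = Decimal(1) / Decimal(3)      # back to 28 digits

print(len(x.as_tuple().digits))  # 60
print(len(y.as_tuple().digits))  # 28
```

The dynamically scoped setting restores itself automatically when the block exits, which avoids the "forgot to reset the global" failure mode discussed above.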
|
Given how slow BigFloats are likely to be anyway, I doubt there's much to gain by specializing their representation per precision. The underlying C code is already designed for run-time flexibility in precision, which we should be thankful for --- it could have been a C++ template library. |
Yes, I see that I have been confusing precision and accuracy, thanks. But both @jiahao's MPFR example and @stevengj's Mathematica example seem to show that there is not in fact always a global precision defined; the precision is specified for each variable independently. This could be dealt with via an optional precision argument for the |
For the type of application that @stevengj discussed, I had envisaged explicitly changing the precision of variables, something like
so that the mathematical functions would then automatically use the input precision as the output precision. I see that for now there is no advantage, and apparently some significant disadvantage, in having the precision as a type parameter. @StefanKarpinski I don't know what you mean by "immediate object"? I presume something to do with allocation. @ivarne You are right about |
Hmm, getting back to the beginning and the above from @dpsanders, I still think that for |
Also, in my experience there are considerable speed differences between, say, using 128-bit or 1024-bit precision, and both are useful/important to have available. |
👍 💯 to @dpsanders' approach with the precision as part of the type. |
@mmaechler, I agree that it would be good to support a precision argument in the |
@stevengj : I think Julia should follow what MPFR does itself. MPFR is a very mature project with many years of growth and consolidation, and all their arithmetic (and special) functions basically use the principle that the precision of the operands determines the precision of the result. AFAIU, Julia strives to be a functional language (wherever sensible). Global settings such as the bigfloat precision are actually quite contrary to functional programming in the above sense. |
Indeed, in general global options appear to be frowned upon by core Julia developers (and rightfully so). |
The precision of the result in MPFR is determined by the precision of the variable created to store the result: the precision of the operands is irrelevant. |
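Python's `decimal` follows the same contract, which makes it a convenient place to see the behavior without MPFR installed: the digits carried by the operands are irrelevant, and the result is rounded to the precision of the active context (the "destination" precision), just as MPFR rounds into the precision of the result variable:

```python
from decimal import Decimal, getcontext

getcontext().prec = 10  # the "destination" precision

# The operands carry far more digits than the context precision...
a = Decimal("3.14159265358979323846264338327950288")
b = Decimal("2.71828182845904523536028747135266250")

# ...but the product is correctly rounded to 10 significant digits,
# regardless of how precise the inputs were.
c = a * b
print(c)  # 8.539734223
```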
I think that @simonbyrne is correct: MPFR never decides how much precision a result should have – that's always determined by the precision of the first argument, which is the return value. |
I do like the idea of having the precision as part of the type, a la |
@simonbyrne: You are right about MPFR, I was wrong; but I was led by the very typical use, notably in cases like ours: you translate a BigFloat to an |
From http://www.mpfr.org/mpfr-current/mpfr.html:
This is what I had in mind in one of my previous comments; indeed I was getting confused between accuracy and precision. Maybe Julia could be the layer in which the accuracy (number of correct bits) is tracked: for example, multiplying a number in which 100 bits are guaranteed correct (known from previous calculations) by one with 200 correct bits should (presumably) result in a number with 100 bits correct. However, this sounds like it could be difficult. |
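The min-rule sketched in that comment could look roughly like the following toy Python class. All names here are hypothetical, and real significance arithmetic is considerably more subtle (catastrophic cancellation in subtraction, error growth across long chains, etc.); this only illustrates the bookkeeping idea for multiplication:

```python
from dataclasses import dataclass

@dataclass
class Tracked:
    value: float
    correct_bits: int  # how many leading bits we believe are correct

    def __mul__(self, other):
        # Rough rule of thumb: a product is no more accurate than its
        # least accurate factor (ignoring small additional rounding error).
        return Tracked(self.value * other.value,
                       min(self.correct_bits, other.correct_bits))

a = Tracked(1.234, 100)
b = Tracked(5.678, 200)
p = a * b
print(p.correct_bits)  # 100
```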
Sub-issues:
Initially I loved the idea of 1. but it has some major drawbacks.
The benefit of 1. would of course be that Julia could allocate storage for |
I feel that the burden on the type system and compiler is unlikely to be too much for most use cases, where one may have at best, a few different precisions. |
I once tried to parametrize the precision and avoid using a global variable for this, but I never managed to solve some issues in a satisfactory manner, like the precision of the returned value. A small example is trying to do arithmetic when you want the result with a higher precision than either of the arguments, say:

```julia
set_bigfloat_precision(2)
x = big(1.)
y = big(3.)
with_bigfloat_precision(() -> x / y, 1024)
```

The pre-merge version of BigFloats had the precision as a type parameter, and one of the first commits tried to allow each function to set the precision of the results, while using either the input or a generic value as the default in other cases, but it made it very hard to reason about any code. |
I really feel that we should experiment with having the precision of a BigFloat be part of its type. In practice, programs don't use many different precisions. This could make reasoning about how big the result of an operation should be easy. It would also potentially allow stack allocation of BigFloats. |
The areas I'm most familiar with, computational number theory and interval arithmetic, relied heavily on changing the precision dynamically to obtain sufficient bounds (this paper, for example, dynamically adjusts the precision to isolate polynomial roots efficiently). That said, I like the idea! |
Making the precision of the result depend on the precision of the operands seems wrong to me. Reasoning from the operands about how precise the result of an operation should be is not easy — arithmetic precision is a global (or dynamically scoped, a la |
Each variable can (and already does) have its own precision, but that precision should not determine the precision of the result of arithmetic with that variable. See also my comments above — we should take the hint here from essentially every other language that supports arbitrary precision well. It's not an accident that they all use dynamically scoped and/or global precision. |
(And if you're not using the precision of the operands to determine the precision of the result, what is to be gained by parameterizing the |
(Note that number theory is not a good guide here. When you do |
In contradistinction to the overwhelming tide of shared opinion on housebreaking BigFloat precision, I have made and remade some precision-imbued number types. The initial versions kept a current global precision setting and maintained the precisions of type realizations either directly as a field or indirectly as associated dict entries. As the rewrites made better type expression, the extra clutter of managing the precision as an internal |
Now type definitions include a precision parameter. This requires |
Parameterizing the precision has led to shorter, crisper and clearer source code, even where there is cause to set the precision explicitly. Another benefit is the ease of doubling the working precision:
|
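Doubling the working precision until a result stabilizes is a common pattern in adaptive-precision code. As a hedged sketch (using Python's `decimal` as a stand-in for MPFR, with all names hypothetical), it might look like:

```python
from decimal import Decimal, localcontext

def stable_sqrt2(start_prec=8, target_digits=50):
    """Recompute sqrt(2) at doubling precision until the first
    `target_digits` characters stop changing between iterations."""
    prev = None
    prec = start_prec
    while True:
        with localcontext() as ctx:
            ctx.prec = prec + 10  # a few guard digits
            cur = str(Decimal(2).sqrt())[:target_digits]
        if cur == prev:
            return cur
        prev, prec = cur, prec * 2  # double the working precision

print(stable_sqrt2())
```

The loop terminates once two successive doublings agree on the requested leading digits, which is the usual heuristic for "enough precision" in this style of computation.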
As someone not routinely working with Julia (but with quite some experience in the subject), |
What is the reasoning behind the decision that arithmetic with BigFloats of different precision results in a BigFloat with the smaller of the two precisions? While I see a reasoned argument that one might prefer addition and subtraction to resolve to the lesser precision, the more compelling argument (imo) is that multiplication and division (and sqrt) resolve to the greater precision. And it seems reasonable that all arithmetic resolve mixed precisions the same way. |
Yes, I probably won't be involved except as a commentator. Too much to do for 1.0... |
point of clarification:
|
To try BigFloat{Precision}, visit https://github.com/JuliaNumberTypes/MPFR3.jl. If you run into an issue, please note it (best with the solution). |
Another issue with a global precision parameter is that numerical functions like
That is actually quite a bummer. I would like to write formulae like in a textbook and trust on the compiler to eliminate common subexpressions for me. |
Came up again recently: https://discourse.julialang.org/t/parametric-bits/26117/10. Would changing operations on BigFloats to produce results with the max precision of their arguments be too breaking? It seems quite bad that the size of results is controlled by a global setting. It's not even dynamically scoped or per-thread, just straight up global. |
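The max-precision rule proposed here is easy to prototype outside Julia. Below is a hypothetical sketch (not Julia's `BigFloat`, and all names invented) in which each wrapped value carries its own precision and addition computes at the larger of the two:

```python
from decimal import Decimal, localcontext

class PFloat:
    """Hypothetical float whose precision travels with the value;
    arithmetic uses the max of the operands' precisions."""
    def __init__(self, value, prec):
        self.prec = prec
        with localcontext() as ctx:
            ctx.prec = prec
            self.value = +Decimal(value)  # unary + rounds to ctx.prec

    def __add__(self, other):
        prec = max(self.prec, other.prec)  # the proposed rule
        with localcontext() as ctx:
            ctx.prec = prec
            return PFloat(self.value + other.value, prec)

a = PFloat("1", 10)
b = PFloat("3", 40)
print((a + b).prec)  # 40
```

Note that, as discussed above, carrying the larger precision does not make the extra bits *accurate*; it only removes the dependence on a global setting for the result's size.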
^I agree, and do not find it breaking .. rather fixing. |
If the result is the max of the precision of the arguments and the |
Something that might otherwise break is re-running a calculation with a higher |
Here it would be very confusing if |
Including the global precision in the decision of the result precision feels like a half-measure 😐 |
At 256 bits, the global precision will dominate the precision of the arg[s] too often to provide a more meaningful computational strategy. From the abacus' perspective, there is at least one (unary ops) and often at least two (binary ops) contributing precisions .. and another precision may have an important role. Of these, none is the global precision setting. There are the precision[s] of the arg[s] and, sometimes another precision relative to those for increasing/decreasing the operational precision at any given, opportune place in a calculation's flow. |
The user could opt in to only using the precision of the args by doing |
One could also make it an error to |
It should still be possible to use |
Bump. This has now become pressing since |
We definitely need a version of |
Whether the type is parameterized by precision is orthogonal to this issue. Even for an unparameterized |
To make it less breaking, we would need any new precision behavior to be opt-in, e.g. by defining a new |
Presumably we can just change |
Note that it looks like MPFR is not compiled to be thread-safe:
See https://www.mpfr.org/mpfr-current/mpfr.html#Memory-Handling for how their threading works. |
from the MPFR doc "Note: when MPFR is built with the ‘--enable-thread-safe’ configure option, the default precision is local to each thread." So it could be (and that change would be considered nonbreaking, I assume). |
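Thread-local default precision is, for comparison, exactly how Python's `decimal` contexts already work: each thread gets its own context, so changing the precision in one thread cannot race with another. A small demonstration of that design:

```python
import threading
from decimal import Decimal, getcontext

results = {}

def worker(name, prec):
    # getcontext() returns this thread's own context, so setting
    # prec here does not affect any other thread.
    getcontext().prec = prec
    results[name] = Decimal(1) / Decimal(3)

t1 = threading.Thread(target=worker, args=("low", 5))
t2 = threading.Thread(target=worker, args=("high", 30))
t1.start(); t2.start()
t1.join(); t2.join()

print(results["low"])   # 0.33333
print(results["high"])  # 0.333333333333333333333333333333
```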
Should probably open a new issue for this. Will have to be done in Yggdrasil and in julia deps. Perhaps we should also consider enabling |
Decimal float support in MPFR is only |
Yes. DecFP.jl is more robust, offering 32-, 64- and 128-bit decimal floats. |
I don't think we use the MPFR default precision or rounding modes, as we use the custom interface (Line 113 in ddf7ce9), so I don't think this will be breaking. We store the default precisions and rounding modes on the Julia side. Note that using atomics + the above fix won't fix JuliaIntervals/IntervalArithmetic.jl#376: that would require much larger breaking changes (e.g. using task-local storage, or some other sort of change). |
Is "Compile MPFR to be thread safe" relevant to JuliaIntervals/IntervalArithmetic.jl#376?
|
Certainly sounds like it. |
I think this was basically fixed by #51362. Even if it's not fixed, perhaps it'd be better to start a new discussion than continue this one. |
Currently, the precision of `BigFloat`s is determined by a global variable stored in an array `DEFAULT_PRECISION`, which is manipulated by `set_bigfloat_precision`. This is not very Julian. I have been thinking about two possibilities:

1. The precision is given explicitly, e.g. as a second argument to the various `BigFloat` constructors and to the `big` function:

```julia
a = BigFloat(3.1, 100)
b = big(3.1, 200)
```

Here, though, the precision is still hidden inside the object.

2. An arguably more Julian approach, which I would favour, is for the precision to be a parameter of the `BigFloat{prec}` type, so that we could write

```julia
a = BigFloat{100}(3.1)
b = BigFloat{200}(3.1)
```

This version would have the advantage that operations on `BigFloat`s could explicitly track the precision of the objects being operated on (which MPFR explicitly states that it does not do). E.g., `a + b` in this example should have only 100 bits. (If I understand correctly, with MPFR there is the possibility to specify any precision for the result, but bits beyond 100 will be incorrect.)

Is there a consensus about whether either of these would be useful, or am I missing a reason why this would not be possible?

c.c. @simonbyrne, @andrioni, @lbenet