
support fp128 and maybe quad-double #757

Closed
cgbaker opened this issue Apr 24, 2012 · 42 comments

@cgbaker

cgbaker commented Apr 24, 2012

Extended precision types are occasionally useful in prototyping codes and developing numerical intuition of an algorithm. I am thinking here of double-double/quad-double from the QD package [1]. These are statically sized with reasonable performance. The downside is supporting them via LAPACK/BLAS and other libraries; this would definitely be a value-added component of the Julia environment.

[1] http://crd-legacy.lbl.gov/~dhbailey/mpdist/

@JeffBezanson
Member

I agree this would be good to have, especially since most of our library routines "just work" when given new data types.
Do you think it's worth exposing 80-bit floats on x86, or does software double-double/quad-double provide a better tradeoff? I'm also a little concerned that those packages seem to require disabling extended precision on x86.

@cgbaker
Author

cgbaker commented Apr 24, 2012

80-bit is better than nothing, but I'm a big fan of dd/qd. There's something to be said for 60 decimal digits of accuracy when you're doing a convergence study of some iterative method. The disabling of extended precision on x86 is a concern, and a hassle. You could, in theory, toggle it off and on around operations on arrays of dd/qd objects.

@JeffBezanson
Member

Hmm, that is pretty disappointing for composability, and writing generic code. It might be easiest to disable it globally once those libraries are included. I wonder if it is possible to develop x86-friendly versions of these libraries; I know I probably couldn't do it though.

@JeffreySarnoff
Contributor

Extended precision that "just works" at the same precision on different systems offers more value.

When run on x86, double-double and quad-double disable extended precision before each call to a sensitive function and re-enable it after each such call. Using dd/qd does not involve additional management of that state. Using Float80 is OK; considering it an extended-precision solution for x86 -- less so.

@JeffBezanson
Member

Ok, that is useful feedback. True that Float80 offers relatively little in exchange for using 2x the memory. Sounds like we can just write up an interface to this library any time then, taking care to reproduce the license notice for commercial users. I like that their BSD license is provided as a Word document. Somebody clearly has a sense of humor.

@StefanKarpinski
Member

The big problem I can see here after downloading the library and playing around with it for a while is that it's written in C++ and doesn't seem to expose basic things like addition via a C-callable interface. It has the ability to be called from Fortran, however, so that gives hope, although I can't figure out where signatures for e.g. addition are in the header files.

I'm also slightly worried about printing these things. We have an "honest printing" policy, in that we use the double-conversion library to print the minimal number of digits necessary to exactly reconstruct each floating-point value. I wonder what guarantees the printing of dd and qd values makes? I'm not super keen on re-implementing the grisu algorithm in Julia, although that may eventually be necessary.

@cgbaker
Author

cgbaker commented May 1, 2012

I think you are looking for these:
http://qd.sourcearchive.com/documentation/2.3.4/c__dd_8h-source.html
http://qd.sourcearchive.com/documentation/2.3.4/c__qd_8h-source.html

And note c_dd_swrite() in c_dd.cc:

// Writes the dd value a (stored as a double[2]) into s with the given precision.
void c_dd_swrite(const double *a, int precision, char *s, int len) {
  dd_real(a).write(s, len, precision);
}

@StefanKarpinski
Member

@cgbaker: I saw those, but they seem to work on doubles, rather than dd_real and qd_real structs. I see that you can print these things within the libraries, but the grisu algorithm is quite new (2010), and is, as far as I know, the only printing algorithm that guarantees accurate minimal printing of floats as decimals and doesn't have to use slow arbitrary precision numbers to do it.

@cgbaker
Author

cgbaker commented May 1, 2012

@StefanKarpinski A dd_real in the C interface is a double[2], and a qd_real is a double[4]. The double* arguments in those methods are, in fact, dd/qd objects. See here, for example:
http://qd.sourcearchive.com/documentation/2.3.4/c__test_8c-source.html

(learning this as I go ;)
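
A minimal sketch of what calling this C interface from Julia might look like; the library name "libqd" and the use of c_dd_div are assumptions for illustration, not a finished binding:

# Hypothetical ccall binding, assuming libqd was built with the C
# wrappers from c_dd.h and is loadable as "libqd".
a = [1.0, 0.0]            # the dd number 1.0, stored as double[2]
b = [3.0, 0.0]            # the dd number 3.0
c = zeros(2)              # result buffer

# c = a / b in double-double precision
ccall((:c_dd_div, "libqd"), Cvoid,
      (Ptr{Cdouble}, Ptr{Cdouble}, Ptr{Cdouble}), a, b, c)

# Print ~32 significant digits of 1/3 via c_dd_swrite
buf = zeros(UInt8, 64)
ccall((:c_dd_swrite, "libqd"), Cvoid,
      (Ptr{Cdouble}, Cint, Ptr{UInt8}, Cint), c, 32, buf, length(buf))
GC.@preserve buf println(unsafe_string(pointer(buf)))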

@JeffBezanson
Member

We don't need minimal printing (at least at first) for dd and qd to be useful. In fact in many cases computing with dd/qd in memory and ultimately producing a double for output probably makes sense. Certainly useful for sending minimal data to be plotted in javascript, for example.

@StefanKarpinski
Member

@cgbaker: Ah, I see! I did not at all figure that out from looking at the code :-)

@JeffBezanson: That's absolutely true.

So, all-in-all, this looks very usable from Julia.

@nolta
Member

nolta commented May 1, 2012

Another option is libquadmath, which is part of gcc >= 4.6. It includes math and string functions.

@StefanKarpinski
Member

@cgbaker: does libquadmath also provide all the functionality you'd need? Is there some reason to prefer one library over the other? All other things being equal, I'd be inclined to go with the one that's part of gcc. Even better, of course, would be if there were dd and qd support in LLVM...

@cgbaker
Author

cgbaker commented May 2, 2012

I wasn't aware of libquadmath. It looks interesting. It seems only to provide 128-bit, comparable to double-double, so QD would seem to have some advantage there. However, I don't have a good sense of how widely used either of these extended-precision types would be. I only use them for exploration, never yet in any production problems.

@StefanKarpinski
Member

Doubling the number of bits already seems like a lot of extra precision. Are there situations where quad doubles allow detecting problems that weren't revealed by double doubles? Although, if we're going to support this, we may as well support both. I wonder if libquadmath is faster or just has a more GPL-ish license or what the deal is.

@JeffBezanson
Member

LLVM has an fp128 type that it might already support using libquadmath. We should try it and see what happens.

@cgbaker
Author

cgbaker commented May 2, 2012

When studying the behavior of a method that converges quadratically or cubically, the extra bits can be consumed pretty quickly. I don't have a particular business need for extended precision, but I find it useful sometimes in prototyping, which I find congruent with Julia's purpose.

IEEE quadruple precision provides more precision than double-double, and more range. Double-double is probably faster (6x the effort of double, I seem to recall), whereas I assume GCC's __float128 is a bona fide software implementation of IEEE quadruple precision.
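
(A minimal sketch, for intuition, of the error-free "two-sum" transformation that double-double arithmetic is built on; hi + lo equals a + b exactly, with lo capturing the roundoff that the Float64 addition discards:)

function two_sum(a::Float64, b::Float64)
    hi = a + b
    v  = hi - a
    lo = (a - (hi - v)) + (b - v)   # exact roundoff of the double sum
    return hi, lo
end

two_sum(1.0, 1e-17)   # (1.0, 1e-17): the tail keeps bits a plain sum drops

This identity is also exactly what breaks if intermediates are silently kept in 80-bit x87 registers, which is why these libraries disable extended precision.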

GCC __float128 is better than nothing. In addition to __float128, there are FOSS projects QPFloat:
http://sourceforge.net/p/qpfloat/home/Home/
and HPALib:
http://www.nongnu.org/hpalib/
I don't know anything about these; I found them linked from the Wikipedia page on quad precision :)

On an unrelated note, I'm suddenly thirsty for a strong Trappist ale...

@StefanKarpinski
Member

Doing things like this is right up Julia's alley. That's why we spent so much time and effort making sure that new user-defined numeric types can integrate right into the "built-in" type system. Seems like the qd library is the most complete. Perhaps we should just wrap that.

@JeffreySarnoff
Contributor

additional info

Extended precision is helpful/requisite, e.g., when working with ill-conditioned, non-singular matrices (sometimes arising in finite element analysis), developing near-best rational approximations (compact ephemerides), and generally wherever a solution involves high-precision iterative refinement or correctly rounded values (q.v. crlibm at http://lipforge.ens-lyon.fr/www/crlibm/).

The latest qd is available from http://crd-legacy.lbl.gov/~dhbailey/mpdist/; if it is used, please do not set the almost-as-accurate-but-faster compile-time options. The dd and qd implementations are generally faster than alternative libraries offering the same floating-point precisions because dd and qd develop results with double-precision floating-point ops rather than a new extended-precision format.

XBLAS (the reference implementation of extended precision BLAS at netlib) uses extended precision internally but does not report extended precision results. Nakata Maho has written (C++) extended precision BLAS (noted as 'well tested') and LAPACK that work with dd and qd values (http://mplapack.sourceforge.net/). I have not used it.

@StefanKarpinski
Member

Sounds like using qd is the way to go. Let's plan on doing that.

@stevengj
Member

@StefanKarpinski and @JeffBezanson, supporting gcc's __float128 quad-precision format (== LLVM fp128 = IEEE 754 binary128 format) has the advantages of being standardized by IEEE, pre-installed (via libquadmath) on many systems, and well supported in the compiler. Furthermore, it is useful for binary compatibility with other libraries.

My vote would be for fp128 as first priority, qd as an optional add-on, and send anyone who wants more precision to bignums.

@simonbyrne
Contributor

Is there any plan to provide a native Float80 (or higher) type? This could be useful for implementing special functions.

@jiahao
Member

jiahao commented Mar 22, 2014

Looks like OpenBLAS provides some experimental quad-precision kernels - or so the nomenclature of routines like qgemm would suggest.

@quinnj quinnj mentioned this issue Jun 24, 2014
@andrioni
Member

Rust just dropped their experimental Float128 support today, unfortunately. rust-lang/rust#15160

@ViralBShah
Member

Seems like they want to have complete support before they bring it back. If so, that seems reasonable.

@pwl
Contributor

pwl commented Dec 11, 2014

Looks like the support for quads in OpenBLAS is not working and they are going to remove it [1].

[1] https://groups.google.com/forum/#!msg/openblas-users/0kZGFD9L5JI/u635atziwMAJ

@jiahao
Member

jiahao commented Dec 11, 2014

Regarding double-double precision, @simonbyrne's DoubleDouble.jl works quite well and is already being used in some linear algebraic applications (see, e.g. Arrowhead.jl).

On the question of extended precision, the underlying algorithms for double-double don't care about whether extended precision is on or off, although to be fully correct some of the machine constants may need to be tweaked.

I think this only leaves the question of full Float128 support; quad-double would be fairly straightforward to incorporate into DoubleDouble afterward if there is still demand.
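
(As a companion to the two-sum sketch above: the fma-based error-free product that double-double multiplication can be built on; this illustrates the general technique, not DoubleDouble.jl's actual code.)

function two_prod(a::Float64, b::Float64)
    hi = a * b
    lo = fma(a, b, -hi)   # fma rounds once, so this is the exact roundoff
    return hi, lo
end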

@stevengj
Member

@jiahao, whether or not the FPU is set to extended-precision mode (i.e., whether register arithmetic is done in extended precision even when the operands started out as doubles), it would be nice to have a Float80 (or LongDouble?) type so that you can actually store results in extended precision, explicitly request the extra bits when needed, and call external C routines that use long double (with compilers where that is extended precision).

@jiahao
Member

jiahao commented Dec 11, 2014

Perhaps this issue can be renamed, since the only outstanding question is support for Float80 and possibly also Float128. In #3467, Float128 support was discussed, and the consensus at the time (18 months ago) was that lack of hardware support was a problem.

@simonbyrne
Contributor

Now we have llvmcall, it might be possible to create a Float80 package. I did try playing around with it, but I couldn't figure out how to tell LLVM that

bitstype 80 Float80

should be an LLVM x86_fp80 and not an i80.
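
(For reference, the equivalent declaration in current Julia syntax, where bitstype has since become primitive type; this sketch still has the same limitation described above, i.e. LLVM sees an i80 rather than an x86_fp80:)

primitive type Float80 80 end   # 80 bits of storage, but LLVM treats it as i80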

@PallHaraldsson
Contributor

FYI [off-topic license discussion]: @cgbaker "GCC __float128 is better than nothing. In addition to __float128, there are FOSS projects QPFloat:
http://sourceforge.net/p/qpfloat/home/Home/ "

It says "QPFloat (GPL 3.0)"

GPLv3 is incompatible with GPLv2 (which is already used in Julia). More precisely, code under "GPLv2 or later" may be upgraded to v3, but that upgrade would then have to happen.

I'm only aware of GPLv2 in Julia so far, and there is work to make it optional. I'm not sure whether Julia is against GPLv3, but the "software as a whole" would then be under GPLv3, right?

This assumes "GPL or any later version" is used (plain "GPL" is possibly allowed, since the license permits upgrading); at least "GPLv2 only" would not be compatible.

[I think there is a GPLv2-only component (git?) already in use; that is not a problem because of an extra linking exception.]

[A plus of GPLv3 is that Apache 2.0 is definitely compatible with it.]

@vchuravy
Member

vchuravy commented Oct 3, 2016

I was working on FP128 support and I was wondering how big the desire for FP80 is. My hesitation about adding it is that it is only really supported on x86, and I am not sure there is any support for it on Power. Just adding the alias to Julia is quite straightforward, but anything else might be non-trivial and should maybe live outside Base.

cc: @musm

@simonbyrne
Contributor

My opinion is that Float128 and Float80 should probably be in packages, with the absolute minimum necessary (basically defining the intrinsics) in core Julia.

@vchuravy
Member

vchuravy commented Oct 3, 2016

Yeah, that was my thinking: basically everything in float.jl. For a preview see https://github.com/JuliaLang/julia/tree/vc/float128, but you will also need my work on compiler-rt to make this run everywhere.

@JeffBezanson JeffBezanson modified the milestones: 2.0+, 1.0, 1.x May 2, 2017
@simonbyrne
Contributor

I've managed to create bindings to the libquadmath that we bundle with gfortran, and it works fairly well on Linux and Mac, but it requires quite a cumbersome hack to work around calling-convention issues. I also still haven't had any luck figuring out the calling convention on Windows.

Is there any chance we could define the type and calling convention in Base?

@jdh8

jdh8 commented Nov 3, 2017

How about a platform-dependent Clongdouble, just like Int, which would finally enable standard C functions like expl?

@stevengj
Member

stevengj commented Nov 3, 2017

@jdh8, I think the primary goal in this issue is to have a Float128 type that is always IEEE quad precision, implemented in software. We could have a separate Float80 for the hardware 80-bit extended precision on x86 etc, and then typedef Clongdouble as appropriate.

(But honestly, I find long double of limited utility in portable numerical work, precisely because its precision is unpredictable. It seems better to have a Float80 type that defaults to a software implementation on CPUs without 80-bit hardware.)

@simonbyrne
Contributor

Honestly, everything about the long double type is a mess, starting with its name.

@stevengj
Member

stevengj commented Nov 3, 2017

It's not nearly as bad as selected_real_kind in Fortran. ("Hey, let's encourage the programmer to specify exactly how many digits are needed! Because programmers usually know that kind of thing, right?")

@oscardssmith
Member

Closing this, as packages such as MultiFloats mean this doesn't need to be in Base.

@jessymilare
Contributor

I beg to differ. Float128 is defined by a standard with well-defined operations on NaNs and Infs, and binary exponents ranging from −16382 to 16383 (for normal numbers). MultiFloats is nice, but it is not a drop-in replacement for Float128:

julia> big(prevfloat(Inf)) * 2
3.595386269724631416290548474634087135961411350516899931978349536063145215600571e+308

julia> Float64x8(big(prevfloat(Inf)) * 2)
Inf

@simonbyrne
Contributor

Quadmath.jl provides Float128 support on most platforms.
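
A quick check, assuming the Quadmath.jl package is installed; unlike the MultiFloats example above, binary128's exponent range (up to about 1.19e4932) represents twice the largest Float64 without overflowing:

using Quadmath

x = Float128(prevfloat(Inf)) * 2
isinf(x)   # false: the result is finite, ≈ 3.5954e308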
