Division of uint64_t by constant on ARM32 should multiply by inverse #37280
While a 32-bit integer division a/b by a constant b is computed as a * (1/b) using the multiply-high technique, a 64-bit division by a constant is not optimized at all on 32-bit platforms. Yet, https://gmplib.org/~tege/divcnst-pldi94.pdf describes a technique for implementing such divisions more cheaply.
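For reference, the 32-bit case mentioned above can be sketched as follows. This is a hand-written illustration of the multiply-high technique for division by 3, not LLVM's actual lowering; the magic constant is ceil(2^33 / 3) = 0xAAAAAAAB with a shift of 33:

```cpp
#include <cstdint>

// Multiply-high technique: for all 32-bit unsigned n,
//   n / 3 == (n * ceil(2^33 / 3)) >> 33
// The 64-bit product always fits, so a single widening multiply plus a
// shift replaces the division when the divisor is a known constant.
uint32_t divu3_32(uint32_t n) {
    return (uint32_t)(((uint64_t)n * 0xAAAAAAABu) >> 33);
}
```

This is exactly the shape compilers already emit for 32-bit operands; the issue is that no analogous sequence is emitted for 64-bit operands on 32-bit targets.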
You can emit a 64x64->128 multiply on a 32-bit platform by expanding it to four 32x32->64 multiplies. IIRC, someone on my team did an experiment with that on ARM, but we never found a practical case where it was actually profitable, so we didn't upstream the patch. I can dig it up if you're interested. Alternatively, I guess you could use the "Dividing udword by uword" technique described in that paper... but in general you'd have to split the division into two divides, and I don't think that comes out cheaper.
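The four-partial-product expansion described above can be sketched in portable C++ like this (an illustration of the technique, not the patch in question):

```cpp
#include <cstdint>

// High 64 bits of the full 64x64->128 product, built from four
// 32x32->64 multiplies -- the expansion a 32-bit target would need.
uint64_t umulhi64(uint64_t a, uint64_t b) {
    uint64_t alo = (uint32_t)a, ahi = a >> 32;
    uint64_t blo = (uint32_t)b, bhi = b >> 32;
    uint64_t ll = alo * blo;
    uint64_t lh = alo * bhi;
    uint64_t hl = ahi * blo;
    uint64_t hh = ahi * bhi;
    // Carry out of the middle 64-bit column of the partial-product sum.
    uint64_t carry = ((ll >> 32) + (uint32_t)lh + (uint32_t)hl) >> 32;
    return hh + (lh >> 32) + (hl >> 32) + carry;
}

// With that primitive, u64 / 3 is a multiply-high plus one shift:
// the magic is ceil(2^65 / 3) truncated to 64 bits.
uint64_t divu3_64(uint64_t n) {
    return umulhi64(n, 0xAAAAAAAAAAAAAAABull) >> 1;
}
```

On ARM32 the four partial products map onto umull/umlal, which is where the profitability question above comes from: the expansion is correct but not obviously cheaper than the runtime division helper on every core.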
I have to admit that my performance numbers were obtained using gcc, but the godbolt link shows that LLVM also calls out to the same division function, which is presumably not much faster.

Regarding an improvement over the generic division function: I followed the process described in Hacker's Delight, where one multiplies by the inverse and then does a multiplication to compute the error relative to the exact quotient. The book says the process is manual, since it involves finding patterns in the bit sequence of the inverse so that few shifts and adds are needed to do the multiplication. I've implemented /3 and /300. We get these numbers on a 300 MHz Cortex-R5F CPU (suffixes: 32 = int32_t, 32u = uint32_t, 64 = int64_t, u64 = uint64_t; a, b, c are fixed values that are not small):

a32 = b32 / c32: 0.0233 us/call

These numbers are to be taken with a grain of salt due to instruction alignment, but otherwise we found our benchmarks representative. So the specialization for specific constants is 10x slower than 32-bit division but nearly 5x faster than generic 64-bit division. The implementation of /3 is paraphrased from Hacker's Delight.
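The comment's code listing did not survive in this report; the shift-and-add /3 routine it paraphrases (Hacker's Delight, divu3) looks like the following when extended to 64 bits with one extra shift-add step. This is a reconstruction of the technique, not the reporter's exact template specialization:

```cpp
#include <cstdint>

// Unsigned 64-bit division by 3 in the Hacker's Delight style:
// approximate n * (1/3) with shifts and adds, then compute the small
// remainder and fold in the correction.
uint64_t divu3(uint64_t n) {
    uint64_t q = (n >> 2) + (n >> 4);   // q ~= n * 0.0101 (binary)
    q += q >> 4;                        // q ~= n * 0.01010101
    q += q >> 8;
    q += q >> 16;
    q += q >> 32;                       // extra step for 64-bit width
    // q underestimates n/3 by a small bounded amount, so the remainder
    // estimate stays well below 32 and fits in 32 bits.
    uint32_t r = (uint32_t)(n - q * 3);
    return q + (11 * r >> 5);           // floor(11*r/32) == r/3 for r < 32
}
```

The final line is the "multiply to compute an error" step the comment describes: r is the residual left by the shift-add approximation, and 11*r >> 5 is a cheap exact r/3 for the small range r can take.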
Tracked down the patch I was thinking of; it was actually posted for review, but never got merged for some reason. See https://reviews.llvm.org/D24822 .
This dramatically affects the novel Ryu algorithm for converting 64-bit doubles to strings (see ulfjack/ryu#73 ). See the attached div.cpp and div.asm for a minimal repro on x86 (compile with "clang-cl -m32 /EHsc /nologo /W4 /MT /O2 /c /FA div.cpp"). |
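The attached div.cpp and div.asm aren't reproduced in this report, but the shape of the problem is easy to demonstrate: any Ryu-style 64-bit division by a constant compiles, on a 32-bit target, to a call into the runtime division helper (__aeabi_uldivmod on ARM32, __udivdi3 in libgcc terms) rather than a multiply sequence. A hypothetical minimal repro in that spirit:

```cpp
#include <cstdint>

// On 64-bit targets this lowers to a multiply-high by the magic
// constant for 10 plus a shift; on 32-bit targets it currently
// becomes a libcall to the generic 64-bit division routine.
uint64_t div10(uint64_t v) {
    return v / 10;
}
```

Ryu performs many such divisions by constant powers of ten per conversion, which is why the libcall shows up so prominently in its 32-bit profiles.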