faster _log_ext
#44717
Conversation
The subnormal case should now be fixed. I've also moved around a few of the checks, so @Keno you should take a look to ensure the constprop is still behaving as desired. I think this is mostly ready to merge, although it probably needs some good tests for subnormals.
Benchmarking against Apple suggests there is another ~2 ns (almost 2x) to be gained in
base/special/log.jl (Outdated)
    ans_lo = ((c1hi - ans_hi) + res_hi) + (res_lo + 3.80554962542412056336616e-17)
    return ans_hi, ans_lo
end
const t_log_ext_Float64 = (
LUTs are tricky to benchmark. They do well on microbenchmarks, but the additional cache pressure can cause problems. They also don't vectorize well. https://stackoverflow.com/questions/41529921/using-simd-on-amd64-when-is-it-better-to-use-more-instructions-vs-loading-from has some discussion.
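To illustrate why microbenchmarks flatter LUTs, here is a rough Julia sketch (names and sizes are made up for illustration): hammering one table entry keeps it hot in L1, while spreading accesses across the whole table is closer to what a real workload sees once other data competes for cache.

```julia
# Hypothetical sketch of the cache-pressure concern above.
# A fixed index keeps a single LUT entry resident; random indices
# touch the whole table, which is closer to end-to-end behavior.
using BenchmarkTools  # assumed available in the benchmark environment

const LUT = rand(256)                      # stand-in lookup table

lookup(i) = @inbounds LUT[(i & 255) + 1]

idxs_fixed  = fill(7, 10_000)              # always hits the same entry
idxs_random = rand(0:255, 10_000)          # walks the whole table

@btime sum(lookup, $idxs_fixed)            # optimistic: entry stays cached
@btime sum(lookup, $idxs_random)           # more realistic cache traffic
```

Even this understates the effect, since in a real application the table also competes with the application's own working set.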
pow has way too many branches to vectorize well anyway. I agree that using a 64-element table is probably worthwhile since it would shrink the size of the LUT from 3KB to 1.5KB. That said, we already use a LUT for exp, and @chriselrod has shown that table based implementations can vectorize very well. Getting a good set of benchmarks on this is important, but I do pretty firmly believe that this is a better implementation than the current one.
Sure, but my suggestion would be to keep the LUT-less version around somewhere, so it's easy to benchmark in end-to-end applications to see the effect of the cache pressure.
This approach is also fast enough that I'm going to try using the same implementation for regular log (with some changes to lower the precision), which would save a table with a similar size.
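A minimal sketch of the idea above: if the extended-precision kernel returns a hi/lo pair, a plain log can reuse the same table and reduction and simply fold the two pieces together, possibly with a shorter polynomial. The `_log_ext` name and its `(hi, lo)` return convention are assumed from this PR; the wrapper itself is illustrative only.

```julia
# Illustrative only: derive a plain log from the extended-precision
# kernel by collapsing the double-double result to a single Float64.
# `Base.Math._log_ext` and its return shape are assumptions here.
function log_from_ext(x::Float64)
    hi, lo = Base.Math._log_ext(x)   # assumed to return a hi/lo pair
    return hi + lo                    # fold the correction term back in
end
```

In practice the lower-precision variant would drop polynomial terms rather than just summing, but the table and argument reduction are shared either way.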
I've done some initial testing that confirms that this method should be competitive for regular log functions.
> table based implementations can vectorize very well.

That of course requires gather instructions, which requires at least AVX2 on x86 and SVE on AArch64.
So basically any recent x86 CPU with FMA can also gather, and while SVE2 is standard in ARMv9, this means LUTs won't vectorize on ARMv8 (except for the A64FX).
That's all moot here if your pow implementation isn't vectorizing anyway.
But it would be great to have a vectorizable implementation, e.g. maybe the @fastmath variant?
Also, I just checked, and @inline exp(x) no longer vectorizes on Julia master. It did briefly after callsite inlining was introduced.
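For anyone wanting to reproduce that check, a rough sketch: inspect the LLVM IR of a reduction loop over exp and look for vector types such as `<4 x double>`. The loop body below is an assumption about how the check was done, not a quote of it.

```julia
# Sketch: check whether a loop over exp vectorizes by scanning the
# LLVM IR for vector operations (e.g. `<4 x double>`).
using InteractiveUtils  # provides code_llvm

function sumexp(xs::Vector{Float64})
    s = 0.0
    @inbounds @simd for x in xs
        s += @inline exp(x)   # callsite @inline requires a recent Julia
    end
    return s
end

code_llvm(sumexp, (Vector{Float64},))  # look for `<N x double>` ops
```

If the IR only contains scalar `double` operations in the loop body, the call did not vectorize.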
That's unfortunate.
Most tests were passing on your first commit; does this mean we aren't testing subnormal bases? If so, maybe good to add tests?
I've now pushed a version with a compressed table. It benchmarks about 0.5 ns slower, but uses two-thirds the cache, so it's probably a good compromise.
Tests added. Assuming everything works, this is good to merge from my perspective.
I'm planning on merging this Monday sans objections.
Based on https://github.com/ARM-software/optimized-routines/blob/master/math/pow.c. According to initial benchmarks, this appears to bring the time to compute pow from 18 ns to 10 ns, and accuracy is slightly improved. This currently doesn't handle subnormal bases, so this is slightly WIP, but it's very close to mergeable.