faster _log_ext #44717

Merged
merged 5 commits into master from faster-_log_ext on Apr 4, 2022
Conversation

oscardssmith
Member

Based on https://github.com/ARM-software/optimized-routines/blob/master/math/pow.c. According to initial benchmarks, this appears to bring the time to compute pow from 18 ns to 10 ns, and accuracy is slightly improved. This currently doesn't handle subnormal bases, so it is slightly WIP, but it's very close to mergeable.
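For context, the extended-precision trick behind this kind of `_log_ext` kernel is double-double arithmetic built on `fma`. A minimal sketch of the core primitive (names illustrative, not the PR's actual code):

```julia
# Split a * b into an exact hi/lo pair using fma.
# hi is the rounded product; lo is the exact rounding error,
# so hi + lo equals a * b exactly (barring overflow).
function two_prod(a::Float64, b::Float64)
    hi = a * b
    lo = fma(a, b, -hi)
    return hi, lo
end

hi, lo = two_prod(1.0 + 2.0^-30, 1.0 + 2.0^-30)
# lo captures the 2^-60 term that hi alone cannot represent
```

Accumulating the lo terms separately is what lets a log kernel return a result accurate beyond one Float64, which pow then needs when multiplying by y before exp.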

@oscardssmith oscardssmith added the labels performance (Must go faster) and maths (Mathematical functions) Mar 23, 2022
@oscardssmith oscardssmith changed the title faster _log_ext faster _log_ext Mar 23, 2022
@oscardssmith
Member Author

oscardssmith commented Mar 24, 2022

The subnormal case should now be fixed. I've also moved around a few of the checks, so @Keno, you should take a look to ensure the constprop is still behaving as desired. I think this is mostly ready to merge, although it probably needs some good tests for subnormal x now that we are doing some aggressively weird stuff for them.
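The usual way a log kernel handles a subnormal input is to scale it into the normal range and fold the scale factor into the exponent. A hedged sketch of that technique (not necessarily the exact code in this PR):

```julia
# Scale a subnormal x into the normal range by multiplying by 2^52
# (exact, since it only shifts significand bits into the exponent),
# and record the exponent offset to be added back later.
# Illustrative; the PR's actual handling may differ.
function normalize_subnormal(x::Float64)
    if 0 < abs(x) < floatmin(Float64)   # subnormal
        return x * 0x1p52, -52
    end
    return x, 0
end
```

After this step the rest of the kernel only ever sees normal numbers, so the table lookup and polynomial can stay branch-free.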

@oscardssmith
Member Author

Benchmarking against Apple's implementation suggests there is another ~2 ns (almost 2x) to be gained in exp, which is really surprising, but that will go in a separate PR.

ans_lo = ((c1hi - ans_hi) + res_hi) + (res_lo + 3.80554962542412056336616e-17)
return ans_hi, ans_lo
end
const t_log_ext_Float64 = (
Member
LUTs are tricky to benchmark. They do well on microbenchmarks, but the additional cache pressure can cause problems. They also don't vectorize well. https://stackoverflow.com/questions/41529921/using-simd-on-amd64-when-is-it-better-to-use-more-instructions-vs-loading-from has some discussion.

Member Author

pow has way too many branches to vectorize well anyway. I agree that using a 64-element table is probably worthwhile, since it would shrink the size of the LUT from 3 KB to 1.5 KB. That said, we already use a LUT for exp, and @chriselrod has shown that table-based implementations can vectorize very well. Getting a good set of benchmarks on this is important, but I do pretty firmly believe that this is a better implementation than the current one.
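For reference, a table-driven log typically picks its LUT entry from the top bits of the significand, so the table size is just 2^N for N index bits. A rough sketch with illustrative constants (the PR's table size and bit layout may differ):

```julia
const TABLE_BITS = 6   # 2^6 = 64 entries, the size discussed above

# Index the LUT by the top TABLE_BITS bits of the 52-bit significand.
# The mask strips the exponent and sign bits that remain above them.
function table_index(x::Float64)
    bits = reinterpret(UInt64, x)
    return Int((bits >> (52 - TABLE_BITS)) & ((1 << TABLE_BITS) - 1))
end
```

Halving TABLE_BITS quarters the table, which is the cache-pressure trade-off being weighed here.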

Member

Sure, but my suggestion would be to keep the LUT-less version around somewhere, so it's easy to benchmark in end-to-end applications to see the effect of the cache pressure.

Member Author

@oscardssmith oscardssmith Mar 24, 2022

This approach is also fast enough that I'm going to try using the same implementation for regular log (with some changes to lower the precision) which would save a table with a similar size.

Member Author

I've done some initial testing that confirms that this method should be competitive for regular log functions.

Contributor

table-based implementations can vectorize very well.

That of course requires gather instructions, which requires at least AVX2 on x86 and SVE on AArch64.
So basically any recent x86 CPU with FMA can also gather, and while SVE2 is standard in ARMv9, this means LUTs won't vectorize on ARMv8 (except for the A64FX).

That's all moot here if your pow implementation isn't vectorizing anyway.
But it would be great to have a vectorizable implementation, e.g. maybe the @fastmath variant?

Also, I just checked, and @inline exp(x) no longer vectorizes on Julia master. It did briefly after callsite inlining was introduced.

Member Author

That's unfortunate.

base/special/log.jl (review thread, resolved)
@giordano
Contributor

This currently doesn't handle subnormal bases

Most tests were passing on your first commit; does this mean we aren't testing subnormal bases? If so, it would be good to add tests.

base/special/log.jl (review threads, resolved)
@oscardssmith
Member Author

I've now pushed a version with a compressed table. It benchmarks about 0.5 ns slower, but uses two-thirds the cache, so it's probably a good compromise.
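One generic way to compress a hi/lo table entry, shown purely for illustration (the PR's actual packing may differ): keep the leading part at full Float64 precision and store the small correction in a narrower type, since it only contributes low-order bits.

```julia
# Hypothetical compressed entry: Float64 hi + Float32 lo instead of two
# Float64s.  The lo term is tiny relative to hi, so storing it in Float32
# still leaves the recombined pair far more accurate than hi alone.
struct CompressedEntry
    hi::Float64
    lo::Float32
end

compress(hi::Float64, lo::Float64) = CompressedEntry(hi, Float32(lo))
expand(e::CompressedEntry) = (e.hi, Float64(e.lo))
```

The trade-off is exactly the one measured above: a narrower load plus a widening conversion costs a little latency but shrinks the table's cache footprint.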

@oscardssmith
Member Author

Tests added. Assuming everything works, this is good to merge from my perspective.

@oscardssmith
Member Author

I'm planning on merging this Monday, barring objections.

@oscardssmith oscardssmith merged commit 2f9e3a5 into JuliaLang:master Apr 4, 2022
@oscardssmith oscardssmith deleted the faster-_log_ext branch April 4, 2022 15:46
Labels
maths (Mathematical functions), performance (Must go faster)
6 participants