Enable the use of [SU]Int32Size and EnumSize templates for AArch64 #11102
Conversation
The benchmarks are now located in https://github.com/google/fleetbench
Rebased this as I thought that might be the cause of the 'Mergeable 1/1 Fail(s): LABEL' failure, but it still seems to be there. Any idea what that's trying to tell me?

And @ckennelly, thanks. I used proto_benchmark from that repo to benchmark protobuf, and it's how I found out we weren't using this path for Arm. The reason I asked about benchmarks was that I remembered the protobuf-repo ones were slightly different (I think they had smaller messages).
The mergeable failure is a safeguard to prevent Google engineers from accidentally merging this without doing internal testing first.
For fleetbench we see a 1-1.5% improvement on G2A instances.
For the int32 size microbenchmarks we see something like a 2x speedup.
Godbolt looks great. We managed to get a combination of cmhi and sub, with usra when needed. LGTM from the Arm code-generation side. Thanks a lot!
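For context, here is a minimal sketch of the branch-free style of varint size computation being discussed. This is an illustrative stand-in, not the actual protobuf implementation: each unsigned compare lowers naturally to an AArch64 cmhi, whose all-ones mask is subtracted (sub, or folded into usra) to accumulate the byte count, which is what lets clang vectorize the loop.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Illustrative sketch (not protobuf's actual code): a uint32 varint needs one
// byte per started group of 7 significant bits, so each threshold compare
// contributes 0 or 1 extra byte.  Written branch-free, the loop below
// auto-vectorizes on AArch64 into cmhi/sub (and usra) sequences.
static inline uint32_t VarintSize32(uint32_t v) {
  return 1 + (v >= (1u << 7)) + (v >= (1u << 14)) +
         (v >= (1u << 21)) + (v >= (1u << 28));
}

// Sum of varint sizes over a repeated field's backing array.
size_t RepeatedUInt32Size(const uint32_t* data, size_t n) {
  size_t total = 0;
  for (size_t i = 0; i < n; ++i) total += VarintSize32(data[i]);
  return total;
}
```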
How do you run the BM_RepeatedFieldSize benchmarks?
They are internal to us, as we did not want to expose gbench at the time, I guess. Not sure about the current state; we probably should expose them. The benchmark is simple: it just runs Int32Size on a random repeated field. We located it in protobuf/wire_format_unittest.cc
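Since the internal benchmark is not public, here is a rough standalone sketch of what "run Int32Size on a random repeated field" might look like. All names (VarintSize32, MakeRandomField, RunSizeBenchmark) are hypothetical, and the size function is a simplified stand-in for protobuf's Int32Size on non-negative values:

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Simplified stand-in for protobuf's varint size computation (assumption:
// non-negative 32-bit values only).
static inline uint32_t VarintSize32(uint32_t v) {
  return 1 + (v >= (1u << 7)) + (v >= (1u << 14)) +
         (v >= (1u << 21)) + (v >= (1u << 28));
}

// Build a "repeated field" of random values.
std::vector<uint32_t> MakeRandomField(size_t n) {
  std::mt19937 rng(42);  // fixed seed for reproducibility
  std::vector<uint32_t> field(n);
  for (auto& v : field) v = rng();
  return field;
}

// Time `reps` passes of the size computation; the accumulated total is
// returned so the compiler cannot optimize the work away.
uint64_t RunSizeBenchmark(const std::vector<uint32_t>& field, int reps) {
  auto start = std::chrono::steady_clock::now();
  uint64_t total = 0;
  for (int r = 0; r < reps; ++r)
    for (uint32_t v : field) total += VarintSize32(v);
  auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                std::chrono::steady_clock::now() - start)
                .count();
  std::printf("%d x %zu elements: %lld ns\n", reps, field.size(),
              (long long)ns);
  return total;
}
```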
@avieira-arm could you please rebase on main? Sorry for the trouble, you've caught us in the middle of a migration to GitHub Actions.
I've rebased it, but I've not been able to rebuild proto_benchmark due to some bazel issues, so I'm hoping the CI can check whether it builds fine here.

While I have your attention, maybe you can help me resolve the issue I have building the proto benchmark in fleetbench. I build it overriding the com_google_protobuf, com_google_absl and com_google_tcmalloc repositories with local ones (unchanged, other than this protobuf patch). I get an error complaining the C++ version is too old, and when I check the build commands with --verbose_failures I see '-std=c++0x' is being passed. I can't find this option being added in any of the local repositories, so I have to assume it is being added by some bazel rule that is being downloaded. I'm not very experienced with bazel and I've only looked into this minimally so far; in the past I hacked up the config files in abseil to get past the errors, but the projects now actually use C++14 features, so making those changes is no longer viable.
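For reference, a sketch of the kind of invocation being described. The target label and paths are placeholders, and forcing the standard via --cxxopt is only an assumption about how to work around the -std=c++0x default coming from a downloaded rule:

```shell
# Placeholder paths and target; adjust to your checkout.
bazel build //fleetbench/proto:proto_benchmark \
  --override_repository=com_google_protobuf=/path/to/protobuf \
  --override_repository=com_google_absl=/path/to/abseil-cpp \
  --override_repository=com_google_tcmalloc=/path/to/tcmalloc \
  --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 \
  --verbose_failures
```

Using --override_repository avoids editing any WORKSPACE files, and --cxxopt applies to every compile action regardless of which repository's rules added the original flag.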
Looks like none of the tests are running because of some missing 'secrets'. I suspect this is an internal thing too? |
I keep getting emails about workflows failing to run. I don't think there's much I can do about that, though; can someone please confirm? It would be nice to get this merged, or dropped if you don't want it, but the benchmarks seem to indicate that it would be a desirable change :)
Rebased again. |
Sorry to be a pain, but you are synced to a bad point and we need another rebase.
Rebased again. Let me know if you still want this. |
We definitely want this; it should be a very safe change. LGTM from me.
Sorry for all the delays on this one! |
Hi,
When benchmarking proto_benchmark from fleetbench on an AArch64 target we found that clang is able to vectorize these functions and they offer better performance than the scalar alternative.
I ran //src/google/protobuf:arena_unittest on aarch64-none-linux-gnu. Should I run any other tests? Also protobuf used to have its own set of benchmarks, but I can't find these when I query all targets with bazel. Let me know if you'd like me to run anything else, I couldn't find instructions on what the full test run is.
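In case it helps, these are the sorts of bazel invocations I'd expect for a fuller run (assumed target patterns; the exact set may differ per checkout):

```shell
bazel test //src/google/protobuf:arena_unittest   # the single test run above
bazel test //src/google/protobuf/...              # broader sweep over the tree
bazel query 'tests(//...)'                        # list every test target
```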