performance regression in clang-19 when using computed goto #106846

Open
mikulas-patocka opened this issue Aug 31, 2024 · 11 comments · May be fixed by #114990

@mikulas-patocka

Hi

I noticed that the Ajla programming language ( https://www.ajla-lang.cz/ ) runs slower when compiled with clang-19 than with clang-18 or gcc-14. I looked at the assembler output, and it turns out that the bytecode interpreter (the file "ipret.c") is not compiled as efficiently as it could be. In particular, clang-19 joins all the "goto *next_label" statements in a function into just one "jmp *" instruction. That reduces code size, but it also makes branch prediction inefficient, because the CPU cannot learn that a single instruction jumps to multiple targets.
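The dispatch pattern in question looks roughly like this (a minimal sketch of the style used in ipret.c, not the linked testcase):

/* Minimal sketch of computed-goto dispatch (GNU C labels-as-values).
   Each handler ends with its own "goto *handlers[...]", so the CPU can
   build a separate indirect-branch history per handler; merging them
   into a single "jmp *" defeats that. */
#include <stdio.h>

static int run(const unsigned char *ip)
{
        static void *handlers[] = { &&op_add, &&op_sub, &&op_halt };
        int acc = 0;

        goto *handlers[*ip++];
op_add:
        acc += 1;
        goto *handlers[*ip++];
op_sub:
        acc -= 1;
        goto *handlers[*ip++];
op_halt:
        return acc;
}

int main(void)
{
        unsigned char prog[] = { 0, 0, 1, 2 };  /* add, add, sub, halt */
        printf("%d\n", run(prog));
        return 0;
}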

I created this example that shows the issue:
http://www.jikos.cz/~mikulas/testcases/clang/computed-goto.c
(use: clang-19 -O2 computed-goto.c && time ./a.out)

The results (in seconds) are here:
http://www.jikos.cz/~mikulas/testcases/clang/computed-goto.txt

We can see that the worst slowdown happens on Sandy Bridge. Zen 4 shows no slowdown, so it seems that it has a smart indirect branch predictor that can predict multiple jump targets for a single instruction.

@nikic
Contributor

nikic commented Aug 31, 2024

This is caused by #78582; we get separate indirect gotos with -mllvm -tail-dup-pred-size=30. cc @DianQK. Possibly we should not apply the limitation to INDIRECTBR, which is more profitable to tail-duplicate than other instructions.

@mikulas-patocka
Author

mikulas-patocka commented Aug 31, 2024

-mllvm -tail-dup-pred-size=30 helps the testcase, but not Ajla. If you compile Ajla with clang-19 -O2 -mllvm -tail-dup-pred-size=30 and do
$ objdump -d ipret.o | grep 'jmp \*'
it shows that there is just one indirect jump in the whole file.

@nikic
Contributor

nikic commented Aug 31, 2024

What about -mllvm -tail-dup-pred-size=1000? The value needs to be about as large as the number of instructions / indirect jumps.

@mikulas-patocka
Author

Yes, there are 2284 instructions in the Ajla interpreter. So I set tail-dup-pred-size=3000 and I get the same output as with clang-18: 2466 indirect jumps in ipret.o.
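(The count can be obtained with a variant of the objdump command above, e.g.:)

objdump -d ipret.o | grep -c 'jmp \*'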

@pinskia

pinskia commented Aug 31, 2024

Note that GCC has a specific RTL pass which duplicates computed gotos (as GCC merges all computed gotos into one BB for performance reasons); it was added in GCC 4.0 and improved in GCC 7.1.0 (to also unduplicate some related instructions). The pass runs late in the pipeline, after register allocation. This is just explaining GCC's solution to the problem, nothing more.

@DianQK
Member

DianQK commented Sep 1, 2024

This is caused by #78582; we get separate indirect gotos with -mllvm -tail-dup-pred-size=30. cc @DianQK. Possibly we should not apply the limitation to INDIRECTBR, which is more profitable to tail-duplicate than other instructions.

That sounds like it makes sense. Thanks!
I will check it later.

@DianQK DianQK self-assigned this Sep 1, 2024
@DianQK
Member

DianQK commented Sep 3, 2024

The switch instruction also becomes an INDIRECTBR machine instruction, and I think we should apply the limitation to PHI.
If I remember correctly, both #78578 and #79993 (comment) contain a massive PHI. Duplicating the BB that contains the massive PHI causes the CFG to become exceptionally complex, leading to issues with compile time and runtime performance.

@DianQK
Member

DianQK commented Sep 12, 2024

@mikulas-patocka Can you provide an Ajla example? I may need it to investigate this.

BTW, whether we use INDIRECTBR directly or arrive at it from a SWITCH, we introduce a lot of PHIs.
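For illustration only (a sketch, not the actual ipret.c): a switch-based dispatch loop like the one below is, with enough cases, lowered to a jump table, i.e. the same kind of indirect branch, and the interpreter state that is live across the dispatch (acc, ip) shows up as PHI operands at the shared dispatch block.

/* Sketch of switch-based dispatch; with enough cases the switch is
   lowered to a jump table (an indirect branch), and the live state
   (acc, ip) feeds PHIs at the shared dispatch point. */
#include <stdio.h>

enum { OP_ADD, OP_SUB, OP_HALT };

static int run_switch(const unsigned char *ip)
{
        int acc = 0;
        for (;;) {
                switch (*ip++) {
                case OP_ADD:  acc += 1; break;
                case OP_SUB:  acc -= 1; break;
                case OP_HALT: return acc;
                default:      return acc;
                }
        }
}

int main(void)
{
        unsigned char prog[] = { OP_ADD, OP_ADD, OP_SUB, OP_HALT };
        printf("%d\n", run_switch(prog));
        return 0;
}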

@mikulas-patocka
Author

The Ajla compiler is written in Ajla; you can have a look at the files in newlib/compiler/. I usually use self-compilation as a benchmark.

Run './configure && make' with CFLAGS='-O2 -DDEBUG_ENV'. The DEBUG_ENV macro enables environment variables that are useful for debugging.

Set the environment variable 'export CG=none' - this will disable the machine code generator and use only the interpreter.

Run the script ./scripts/update.sh - this will compile the Ajla compiler itself and it will re-generate the file builtin.pcd.
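Putting the steps together, roughly (the CC variable is just one way to select the compiler; the exact configure invocation may differ):

CC=clang-19 CFLAGS='-O2 -DDEBUG_ENV' ./configure   # CC is illustrative; use gcc-14 / clang-18 / clang-19
make
export CG=none               # disable the machine code generator, use only the interpreter
time ./scripts/update.sh     # recompiles the Ajla compiler and regenerates builtin.pcd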
The results (on i7-2640M, Sandy Bridge, 2 cores, 4 threads):
gcc-14: 22.8s
clang-18: 22.2s
clang-19: 26.8s
The results (on 7840U, Zen 4, 8 cores, 16 threads):
gcc-14: 4.1s
clang-18: 4.1s
clang-19: 4.4s

If you want a simple benchmark, copy this piece of code to a file loop.ajla

fn main
[
        var sum := 0;
        for i := 0 to ston(args[0]) do
                sum += 1;
        write(h[1], ntos(sum) + nl);
]

and run it with 'CG=none ./ajla --nosave loop.ajla 1000000000'
The results (on i7-2640M):
gcc-14: 17.3s
clang-18: 15.3s
clang-19: 39.0s
On 7840U there is too much jitter in this test.

@DianQK
Member

DianQK commented Oct 27, 2024

By comparing the results from gcc, I can observe the following:

  • In the initial OOM example, which creates multiple consecutive switch statements, gcc did not apply tail duplication.
  • For computed-goto.c and ipret.c, it appears that clang applied more tail duplication (perhaps unnecessarily?).
  • The compile time for ipret.c with clang -O3 is unacceptably long, mainly due to Branch Probability Basic Block Placement (which is definitely a separate issue).

Next, I'll look into some gcc code to see if I can find anything useful. Perhaps gcc is simply differentiating between goto and switch at the source level? :) Though I think they should be interchangeable in this scenario.

Detailed commands

$ time -f '%M' gcc -O3 -c oom_manual2.c -S
129608
$ cat oom_manual2.s | grep "jmp.*%" | wc -l
26
$ wc -l oom_manual2.s
18494 oom_manual2.s

$ time -f '%M' clang -O3 -c oom_manual2.c -mllvm -tail-dup-pred-size=3000 -mllvm -tail-dup-succ-size=3000 -S
836248
$ cat oom_manual2.s | grep "jmp.*%" | wc -l
3081
$ wc -l oom_manual2.s
50527 oom_manual2.s


$ time -f '%M' gcc -O3 -S -c computed-goto.c
47556
$ cat computed-goto.s | grep "jmp.*%" | wc -l
32
$ wc -l computed-goto.s
1184 computed-goto.s

$ time -f '%M' clang -O3 -mllvm -tail-dup-pred-size=3000 -mllvm -tail-dup-succ-size=3000 -S -c computed-goto.c
119084
$ cat computed-goto.s | grep "jmp.*%" | wc -l
51
$ wc -l computed-goto.s
1717 computed-goto.s

@DianQK
Member

DianQK commented Nov 3, 2024

Possibly we should not apply the limitation to INDIRECTBR, which is more profitable to tail-duplicate than other instructions.

A quick note:
Based on the comment quoted below, this is undoubtedly what GCC does: https://github.com/gcc-mirror/gcc/blob/7f67acf60c5429895d7c9e5df81796753e2913e0/gcc/rtlanal.cc#L3615-L3621.

/* Return true if INSN is an indirect jump (aka computed jump).

   Tablejumps and casesi insns are not considered indirect jumps;
   we can recognize them by a (use (label_ref)).  */

I'll go with this approach, but looking at the machine instructions (https://llvm.godbolt.org/z/jcje1xPaW), I haven't found any fundamental difference between computed goto and table jumps. I plan to investigate further and then file a PR.
