Rewrite pass management with LLVM #8729

Closed

Conversation

thestinger
Contributor

Previously, it was unclear whether Rust was running the "recommended set" of
optimizations that LLVM provides for generated code. This commit changes the way
we run passes to closely mirror what clang does, which in theory gets it right.
The notable changes include:

* Passes are no longer explicitly added one by one. This would be difficult to
  keep up with as LLVM changes, and we aren't guaranteed to always know the best
  order in which to run passes.
* Passes are now managed by LLVM's PassManagerBuilder object, which is then used
  to populate the various pass managers that are run (see the sketch after this
  list).
* We now run both a FunctionPassManager and a module-wide PassManager. This is
  what clang does, and I presume that we *may* see a speed boost from the
  module-wide passes having less work to do. I have not measured this.
* The codegen pass manager has been extracted into its own separate pass manager
  so it doesn't get mixed up with the optimization passes.
* All pass managers now include the target-specific data layout and analysis
  passes.
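
As a rough illustration of the new structure, here is a minimal sketch in C
against LLVM's C API for the legacy pass managers (`LLVMPassManagerBuilder` and
friends). It is not the rustc code itself, which reaches the same machinery
through its own FFI bindings; the module handle and optimization level are
placeholders.

```c
#include <llvm-c/Core.h>
#include <llvm-c/Transforms/PassManagerBuilder.h>

/* Run the "recommended" pipeline the way clang does: a per-function pass
 * manager over every function first, then a module-wide pass manager, both
 * populated from a single PassManagerBuilder. */
static void optimize_module(LLVMModuleRef module, unsigned opt_level) {
    LLVMPassManagerBuilderRef pmb = LLVMPassManagerBuilderCreate();
    LLVMPassManagerBuilderSetOptLevel(pmb, opt_level); /* e.g. 2 for -O2 */
    LLVMPassManagerBuilderSetSizeLevel(pmb, 0);

    /* Let LLVM pick the passes and their order instead of adding them
     * one by one ourselves. */
    LLVMPassManagerRef fpm = LLVMCreateFunctionPassManagerForModule(module);
    LLVMPassManagerRef mpm = LLVMCreatePassManager();
    LLVMPassManagerBuilderPopulateFunctionPassManager(pmb, fpm);
    LLVMPassManagerBuilderPopulateModulePassManager(pmb, mpm);

    /* Function passes first... */
    LLVMInitializeFunctionPassManager(fpm);
    for (LLVMValueRef fn = LLVMGetFirstFunction(module); fn;
         fn = LLVMGetNextFunction(fn)) {
        LLVMRunFunctionPassManager(fpm, fn);
    }
    LLVMFinalizeFunctionPassManager(fpm);

    /* ...then the module-wide passes. Codegen passes live in a third,
     * separate pass manager (e.g. driven by LLVMTargetMachineEmitToFile)
     * and are not mixed in here. */
    LLVMRunPassManager(mpm, module);

    LLVMDisposePassManager(fpm);
    LLVMDisposePassManager(mpm);
    LLVMPassManagerBuilderDispose(pmb);
}
```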

Some new features include:

* You can now print all passes being run with `-Z print-llvm-passes`
* When specifying passes via `--passes`, the passes are now appended to the
  default list of passes instead of replacing it.
* The output of `--passes list` is now generated by LLVM instead of coming from
  a hand-maintained list of passes on our side.
* Loop vectorization is turned on by default as an optimization pass and can be
  disabled with `-Z no-vectorize-loops` (see the sketch below).
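
And a sketch of how user-specified and default-on passes can be appended on top
of the builder's defaults rather than replacing them. This is only one
plausible arrangement, not necessarily what the commit does verbatim;
`LLVMAddLoopVectorizePass` is the C binding for the loop vectorizer, and
`no_vectorize_loops` is just an illustrative flag standing in for
`-Z no-vectorize-loops`.

```c
#include <llvm-c/Core.h>
#include <llvm-c/Transforms/PassManagerBuilder.h>
#include <llvm-c/Transforms/Vectorize.h>

/* Populate the module pass manager with LLVM's default -O2 pipeline, then
 * append extra passes on top of it instead of overwriting the list. */
static void build_module_passes(LLVMPassManagerRef mpm, int no_vectorize_loops) {
    LLVMPassManagerBuilderRef pmb = LLVMPassManagerBuilderCreate();
    LLVMPassManagerBuilderSetOptLevel(pmb, 2);
    LLVMPassManagerBuilderPopulateModulePassManager(pmb, mpm);
    LLVMPassManagerBuilderDispose(pmb);

    /* Loop vectorization on by default, with an opt-out flag
     * (illustrating -Z no-vectorize-loops). */
    if (!no_vectorize_loops)
        LLVMAddLoopVectorizePass(mpm);

    /* Any passes named on --passes would likewise be appended here,
     * after the defaults. */
}
```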
@thestinger thestinger closed this Aug 24, 2013
@thestinger thestinger deleted the better-llvm branch August 24, 2013 03:55
flip1995 pushed a commit to flip1995/rust that referenced this pull request May 5, 2022
Fix missing whitespace in `collapsible_else_if` suggestion

changelog: Fix missing whitespace in [`collapsible_else_if`] suggestion
closes rust-lang#7318