
Switch the default global allocator to System, remove alloc_jemalloc, use jemallocator in rustc #36963

Closed
brson opened this issue Oct 4, 2016 · 35 comments · Fixed by #55238
Labels
  • A-allocators: Custom and system allocators
  • C-enhancement: An issue proposing an enhancement or a PR with one.
  • relnotes: Marks issues that should be documented in the release notes of the next release.
  • T-libs-api: Relevant to the library API team, which will review and decide on the PR/issue.

Comments

@brson
Contributor

brson commented Oct 4, 2016

Updated description

A long time coming: this issue proposes that we implement these changes simultaneously:

  • Remove the alloc_jemalloc crate
  • Default the global allocator for all crate types to std::alloc::System. While this is already the default for cdylib/staticlib, it's not the default for executables
  • Add the jemallocator crate to rustc, but only rustc
  • Long-term, deprecate and remove the alloc_system crate

For the longest time we have defaulted to jemalloc as the allocator for Rust programs. This has been in place since pre-1.0, and the vision was that we'd give programs a by-default faster allocator than what's on the system. Over time, this has not fared well:

  • Jemalloc has been disabled on a wide variety of architectures for various reasons; the system allocator seems more reliable.
  • Jemalloc, for whatever reason, is incompatible with valgrind as we ship it.
  • Jemalloc bloats the size of executables by default.
  • Not all Rust programs are bottlenecked on allocation, and those that are can use #[global_allocator] to opt in to a jemalloc-based global allocator (through the jemallocator crate or any other allocator crate); see the sketch after this list.
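
For illustration, a minimal sketch of that opt-in (assuming the `jemallocator` crate as a dependency; the `Jemalloc` type name is what that crate publishes):

```rust
use jemallocator::Jemalloc;

// Route every heap allocation in this binary through jemalloc instead
// of the default global allocator.
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    let v: Vec<u64> = (0..1_000).collect();
    println!("allocated {} elements via jemalloc", v.len());
}
```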

The compiler, however, still receives a good deal of benefit from using jemalloc (measured in #55202 (comment)). If that link is broken: switching rustc off jemalloc is basically an across-the-board 8-10% regression in compile time for many benchmarks (max RSS also regressed on many benchmarks!). For this reason, we don't want to remove jemalloc from rustc itself.

The rest of this issue covers the technical details of how we can probably get rid of alloc_jemalloc while preserving jemalloc in rustc itself. The tier 1 platforms that use alloc_jemalloc, and which this issue will focus on, are:

  • x86_64-unknown-linux-gnu
  • i686-unknown-linux-gnu
  • x86_64-apple-darwin
  • i686-apple-darwin

Jemalloc is notably disabled on all Windows platforms (I believe due to our inability to ever get it building over there). Furthermore, jemalloc is nominally enabled on some other Linux targets but I think it ended up effectively disabled on all but the above. This, I believe, narrows the targets we need to design for: we basically need to keep the above working.

Note that we also have two modes of using jemalloc. In one mode we actually use jemalloc-specific API functions, as alloc_jemalloc does today. In the other we use the standard malloc API plus jemalloc's support for hooking into the default allocator on these two platforms. The tradeoff between these two strategies hasn't been measured (AFAIK) at this time. Note that in any case we want to route LLVM's allocations to jemalloc, so we want to be sure to hook into the default allocator somehow.
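
To make the difference concrete, here is a hypothetical sketch of the first mode: a global allocator that calls jemalloc's non-standard `mallocx`/`sdallocx` API through the `jemalloc-sys` bindings directly, instead of relying on jemalloc interposing the libc `malloc` symbol. This is roughly what the `jemallocator` crate does internally; the exact names and signatures here are assumptions against a recent `jemalloc-sys` release and may differ between versions.

```rust
use std::alloc::{GlobalAlloc, Layout};

// Hypothetical allocator that talks to jemalloc's extended API directly.
struct JemallocDirect;

unsafe impl GlobalAlloc for JemallocDirect {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // MALLOCX_ALIGN encodes the required alignment into mallocx flags.
        let flags = jemalloc_sys::MALLOCX_ALIGN(layout.align());
        jemalloc_sys::mallocx(layout.size(), flags) as *mut u8
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // sdallocx passes the size back so jemalloc can skip a lookup.
        let flags = jemalloc_sys::MALLOCX_ALIGN(layout.align());
        jemalloc_sys::sdallocx(ptr as *mut _, layout.size(), flags)
    }
}

#[global_allocator]
static GLOBAL: JemallocDirect = JemallocDirect;

fn main() {
    let data = vec![1u8; 1 << 20];
    println!("allocated {} bytes through mallocx", data.len());
}
```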

I believe this default-allocator hooking works on Linux by jemalloc providing its own malloc symbol, which overrides the one in libc and routes all memory allocation to jemalloc. I'm personally quite fuzzy on the details for OSX, but I think it has something to do with "zone allocators" and not much to do with symbol names. I think this means we can build jemalloc without symbol prefixes on Linux and with symbol prefixes on OSX, and using that build we should be able to override the default allocator in both situations.

I would propose, first, a "hopefully easy" route to solve this:

  • Let's link the compiler to the "system allocator". Then, on the four platforms above, let's also link to jemalloc_sys, pulling in all of jemalloc itself. This should, with the right build configuration, mean that we're now using jemalloc everywhere in the compiler (just as we're rerouting LLVM, we're rerouting the compiler); a sketch of the idea is shown below.

I'm testing out the performance of this in #55217 and will report back with results. Results are in: this is almost universally positive! @alexcrichton will make a PR.
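
As a rough illustration of that "easy route", the sketch below assumes a `jemalloc-sys` dependency built with unprefixed symbols on Linux: the binary keeps `std::alloc::System` semantics on the Rust side, but holds on to a couple of `jemalloc-sys` function pointers so the linker pulls the jemalloc archive in, letting jemalloc's `malloc`/`free` interpose the libc ones for the whole process (Rust and LLVM alike). This is an illustration of the approach, not the exact rustc code.

```rust
use std::os::raw::c_void;

// Referencing these symbols keeps jemalloc alive through the linker;
// with unprefixed symbols its malloc/calloc/free then override the
// libc versions process-wide.
#[used]
static _FORCE_LINK_CALLOC: unsafe extern "C" fn(usize, usize) -> *mut c_void =
    jemalloc_sys::calloc;
#[used]
static _FORCE_LINK_FREE: unsafe extern "C" fn(*mut c_void) = jemalloc_sys::free;

fn main() {
    // The default `System` allocator forwards to `malloc`, which
    // jemalloc now provides.
    let buf = vec![0u8; 4096];
    println!("allocated {} bytes via the interposed allocator", buf.len());
}
```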

Failing this, @alexcrichton has ideas for a more invasive solution that uses jemalloc-specific API calls in rustc itself, but hopefully that won't be necessary...

Original Description

@alexcrichton and I have increasingly come to think that Rust should not maintain jemalloc bindings in tree and link it by default. The primary reasons being:

  • Being opinionated about the default allocator is against Rust's general philosophy of getting as close to the underlying system as possible. We've removed almost all runtime baggage from Rust except jemalloc.
  • Due to breakage we've had to disable jemalloc support on some Windows configurations, changing our default allocation characteristics there and offering different implicit "service levels" on different tier 1 platforms.
  • Keeping jemalloc working imposes increased maintenance burden. We support a lot of platforms and jemalloc upgrades sometimes do not work across all of them.
  • The build system is complicated by supporting jemalloc on some platforms but not all.

For the sake of consistency and maintenance we'd prefer to just always use the system allocator, and make jemalloc an easy option to enable via the global allocator and a jemalloc crate on crates.io.

@brson
Contributor Author

brson commented Oct 4, 2016

Depends on having stable global allocators.

Since this will result in immediate performance regressions on platforms using jemalloc today, we'll need to be sensitive about how the transition is done and make sure it's clear how to regain that allocator performance. It might be a good idea to simultaneously publish other allocator crates to demonstrate the value of choice and for benchmark comparisons.

@brson brson added C-enhancement Category: An issue proposing an enhancement or a PR with one. A-allocators Area: Custom and system allocators T-libs-api Relevant to the library API team, which will review and decide on the PR/issue. labels Oct 4, 2016
@alexcrichton
Member

An alternative I've heard @sfackler advocate from time to time is:

  • We make no guarantees about the default allocator, allowing us to choose a fast one like jemalloc
  • We expose in stable Rust the ability to request the system allocator as the global allocator

That would allow us to optionally include jemalloc, but if you want the system allocator for heap profiling, valgrind, or other use cases, you can choose it.
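
For reference, a minimal sketch of what explicitly requesting the system allocator looks like with the (now stable) `#[global_allocator]` attribute and `std::alloc::System`:

```rust
use std::alloc::System;

// Pin this binary to the platform allocator, e.g. to keep valgrind
// and heap profilers happy.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    let s = String::from("allocated with the system allocator");
    println!("{}", s);
}
```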

@sfackler
Member

sfackler commented Oct 5, 2016

I would specifically like to jettison jemalloc entirely and use the system allocator. It breaks way too often, it dropped valgrind support, it adds a couple hundred kb to binaries, etc.

@alexcrichton
Member

alexcrichton commented Oct 6, 2016

Some historical speed bumps we've had with jemalloc:

I'll try to keep this updated as we run into more issues.

@kornelski
Contributor

kornelski commented Jan 2, 2017

jemalloc also makes Rust look bad to newcomers, because it makes "Hello World" executables much larger (I know it's not a fair way to judge a language, but people do, and I can't stop myself from caring about size of redistributable executables, too)

@raphlinus
Contributor

Another observation: jemalloc seems to add a significant amount of overhead to thread creation, on both Linux and macOS. This hasn't been a major issue for me since we plan to use the system allocator on Fuchsia, but it's probably something worth looking into.

@fweimer

fweimer commented Jan 3, 2017

On the glibc side, we would be interested in workloads where jemalloc shows significant benefits. (@djdelorie is working on improving glibc malloc performance.)

japaric pushed a commit to japaric/rust that referenced this issue Jan 4, 2017
@japaric
Member

japaric commented Jan 4, 2017

PR implementing this: #38820

@bstrie
Contributor

bstrie commented Mar 16, 2017

I'd like to see some light benchmarks to get an idea of the magnitude of the default performance regression we can expect.

@frankmcsherry
Contributor

frankmcsherry commented Apr 26, 2017

I'm not sure if this is active, but wanted to voice a recent pain-point:

I am using stable Rust. I wrote an executable. I wrote a dylib. I called one from the other. It explodes because they have different default allocators and I cannot change either on stable.

Independent of which allocator is fastest, or hardest to maintain, etc., the fact that there is a difference between the default allocators makes the shared-library FFI story on stable Rust pretty bad.

Edit: Also, this issue was opened on my birthday so you should just make it happen. <3

@alexcrichton
Member

I believe we've also encountered a deadlock on OSX with recent versions of jemalloc - jemalloc/jemalloc#895

@alexcrichton
Member

I'm going to close this in favor of #27389. It's highly likely that all programs will stop linking to jemalloc by default once we stabilize that feature, but there's not really much we can do until that issue lands.

@SimonSapin changed the title from "Switch to liballoc_system by default, move liballoc_jemalloc to crates.io" to "Switch the default global allocator to System" on May 31, 2018
@SimonSapin
Contributor

SimonSapin commented May 31, 2018

Reopening because #27389 is about to be closed with the stabilization of the #[global_allocator] attribute (#51241) without changing the default.

This may be blocked on #51038, assuming we want rustc to keep using jemalloc.

@SimonSapin SimonSapin reopened this May 31, 2018
alexcrichton added a commit to alexcrichton/rust that referenced this issue Oct 21, 2018
This commit adds opt-in support to the compiler to link to `jemalloc` in
the compiler. When activated the compiler will depend on `jemalloc-sys`,
instruct jemalloc to unprefix its symbols, and then link to it. The
feature is activated by default on Linux/OSX compilers for x86_64/i686
platforms, and it's not enabled anywhere else for now. We may be able to
opt-in other platforms in the future! Also note that the opt-in only
happens on CI, it's otherwise unconditionally turned off by default.

Closes rust-lang#36963
pietroalbini added a commit to pietroalbini/rust that referenced this issue Oct 25, 2018
Remove the `alloc_jemalloc` crate

This commit removes the `alloc_jemalloc` crate from the standard library and all related configuration. We will no longer be shipping this unstable crate. Rationale for this is provided on rust-lang#36963 and the many linked issues, but I can inline rationale here if desired!

We currently rely on jemalloc for increased perf in the Rust compiler, however. [This perf run shows](https://perf.rust-lang.org/compare.html?start=74ff7dcb1388e60a613cd6050bcd372a3cc4998b&end=7e7928dc0340d79b404e93f0c79eb4b946c1d669&stat=wall-time) that if we switch to glibc 2.23's allocator, it's slower than jemalloc across many benchmarks. [This perf run, however](https://perf.rust-lang.org/compare.html?start=22cc2ae8057d14e980b7c784e1eb2eee26b59e7d&end=10c95ccfa7a7adc12f4e608621ca29f9b98eed29), shows that if we use `jemalloc-sys` from crates.io then rustc actually gets faster across all benchmarks! (presumably because it has a more recent version of jemalloc than our submodule).

As a result, it's expected that this doesn't regress any code (as it's just removing an unstable crate) and it should actually improve rustc performance because it updates jemalloc.

Closes rust-lang#36963
alexcrichton added a commit to alexcrichton/rust that referenced this issue Nov 2, 2018
bors added a commit that referenced this issue Nov 3, 2018
@johnthagen
Contributor

@alexcrichton Is there a tracking issue to track when defaulting to the system allocator lands on stable? rustc 1.31.0 (abe02cefd 2018-12-04) on macOS still links in jemalloc. Thanks!

@cuviper
Member

cuviper commented Dec 9, 2018

@johnthagen It just has to ride the normal release train. The PR that closed this issue is currently on the beta branch, on track for 1.32.

@SimonSapin
Contributor

@johnthagen We generally close tracking issues when something is done/implemented in the master branch. We don’t track features individually after that, since the release schedule is predictable.

In this case, you can see that this issue was closed by #55238 on 2018-11-03, so it likely reached the Nightly channel the next day. Every 6 weeks, Beta becomes Stable and Nightly is forked as the new Beta. So it takes 6 to 12 weeks for a PR merge to reach the Stable channel. https://github.com/rust-lang/rust/blob/master/RELEASES.md shows the dates of past releases and https://forge.rust-lang.org/ the expected date of the next release.

@jonhoo
Contributor

jonhoo commented Dec 10, 2018

Should this be tagged with relnotes?

@SimonSapin SimonSapin added the relnotes Marks issues that should be documented in the release notes of the next release. label Dec 10, 2018
@SimonSapin
Contributor

Good point! Done.

@spacejam

spacejam commented Jan 18, 2019

I am quite saddened by this. PL-scale memory throughput regressions like this will use a lot more energy, cost most users (who are unlikely to learn about GlobalAlloc) more on their server bills, and blunt the surprising bliss experienced by so many newcomers whose uncertain first steps blow their previous implementations out of the water.

Binary size is a vanity metric for computing at scale, and those who require smaller binaries have the flexibility to change allocators.

This has real ethical implications, as our DCs are set to consume 20% of the world's electricity by 2025, and the decisions made by those shaping the foundational layers have massive implications.

Overriding GlobalAlloc is not a realistic option for authors of allocation-intensive libraries, as it prevents users from using tools like the LLVM sanitizers, etc.

As engineers building foundational infrastructure, we have an ethical obligation to the planet to minimize the costs we impose on it. This decision was made in direct contradiction of this responsibility to our shared home. Amazing efficiency by default on the platform that is the main driver of world-wide datacenter power consumption is a precious metric for a language with as bright a future for massive scale adoption as rust.

@jonhoo
Contributor

jonhoo commented Jan 18, 2019

@spacejam I don't think it's quite fair to characterize this as that grand of a problem. It's not as though jemalloc exclusively makes things faster, and thus not as though this is universally a regression. Quite to the contrary: there are some workloads that are made much better by this. This change also means that, as system allocators improve, so will Rust programs; that would not be the case with a compiled-in memory allocator. If you want to go down the life-cycle analysis path, I think it could also be argued that we are saving countless person-hours by allowing the use of standardized tools by people who previously had to waste time figuring out why valgrind or whatever didn't just work. Along those same lines, one could argue that every change to the standard library has wide-reaching implications on global energy use, but a) that impact is minute; b) that impact is basically impossible to predict; and c) it is infeasible to perform that kind of analysis on any kind of representative scale for every (if any) change.
