Make rustc and cargo produce optimized binaries by default #967
> While this alone would be a large safety & usability boost over the current state, it would not help Rust newcomers or those playing around with/evaluating the language. Such users are less likely to know about `cargo`, and even if they…
FWIW, our current docs use cargo from the get-go (after "hello world"): http://doc.rust-lang.org/nightly/book/hello-cargo.html is the third chapter.
The rest of that section explains:
> and even if [the newbie users do know about cargo], they are likely to shun it while experimenting with simple programs. In such cases, they could easily conclude that "Rust is too slow" based on their initial experiments and never reach `cargo build`.
If I were coming to Rust today and wanted to play around with it before deciding whether it's worth my time, I wouldn't bother with cargo at all. `rustc foo.rs` produces a binary I can run "just fine" (or so it appears).
Why would anyone experimenting with Rust go through the trouble to create a `Cargo.toml` file? For what, a simple test program? This isn't a realistic expectation. They haven't yet been sold on the language enough to bother with something like that.
The docs handle that too: http://doc.rust-lang.org/nightly/book/hello-cargo.html#a-new-project
Oh, I believe the docs point out the easy way to create a `Cargo.toml` file; what I don't buy is that all people playing around with/evaluating Rust will create one at all. Again, why would they? If the only thing you're doing is trying to get a grasp of the language syntax, some initial libs, and maybe a feel for the perf overhead over C & C++ by writing a few simple algorithms, why go through the trouble of learning this new "cargo" tool and a "TOML" file and yadda, yadda; I just want to see how the language runs.

That's how I'd do it as a Rust newbie. Why spend any effort (no matter how small) to learn cargo when I can accomplish my task of (lightly) evaluating Rust with just `rustc`? Why spend any amount of effort above the strictly necessary?
Never underestimate programmer laziness.
Hm, I'm not sure that avoiding `cargo` is the laziest path... it's certainly not as clear-cut as it would be with, say, C++ (where getting more than just a compiler requires installing/compiling/wrangling things in possibly non-trivial ways).

The things that incline me to think otherwise are the facts that cargo is distributed alongside the compiler by default, the docs use `cargo` from the start, and the 'ease' of `cargo new` and `cargo run`.
That said, I personally do often write tiny test-cases etc. and compile them with just `rustc`. And the behaviour from many other languages is definitely to compile with `fooc` and then run the created binary (or just run with the `foo` interpreter), so the fact that cargo deviates from that probably means it is more effort.
In any case, I suspect that there will be "surprisingly" many newbies who have never run `rustc` manually, using play.rust-lang.org and `cargo` for everything.
> In any case, I suspect that there will be "surprisingly" many newbies who have never run rustc manually, using play.rust-lang.org and cargo for everything.
Oh I don't doubt that, and good for them! :)
I also don't doubt that there will be an unsurprisingly large number of newbies who will spend a non-trivial amount of time with just `rustc`, at least at first (and long enough to form incorrect opinions about Rust perf).
A serious problem with optimizing by default is that it removes integer overflow checks, so the assumption that people will generally test code with overflow checks enabled (because they'll use debug builds when developing) may fall down. One could optimize but keep the checks in, but rustc isn't really currently designed to minimize the performance impact of those checks... Perhaps the least-bad option is to (1) have cargo's echo actually use the word "unoptimized" rather than "debug", and (2) add an echo to rustc (:/), which could be disabled if the user specified any optimization level manually.
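To make the overflow-checking concern concrete, here is a small hedged sketch (modern Rust; the explicit `checked_add`/`wrapping_add` family behaves identically in every profile, which makes the debug-vs-release difference easy to see):

```rust
fn main() {
    let x: u8 = 255;

    // In a build with debug assertions on, `x + 1` panics with
    // "attempt to add with overflow"; with them off it silently wraps.
    // The explicit methods below expose both behaviours regardless of
    // the chosen build profile:
    assert_eq!(x.checked_add(1), None);          // overflow detected -> None
    assert_eq!(x.wrapping_add(1), 0);            // two's-complement wrap
    assert_eq!(x.overflowing_add(1), (0, true)); // wrapped value plus flag

    println!("overflow demo ok");
}
```

The worry above is precisely that an opt-by-default build would make the silent `wrapping_add`-style behaviour the one newcomers see first.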
In principle, there doesn't need to be a default. Cargo and/or rustc could make the user explicitly specify an optimization level, either at the command line or in the Cargo.toml file. If an optimization level is not specified, that's the perfect time to spit out some documentation listing the choices and how they differ. I have no idea if this approach would actually be wise, but it could at least be worth mentioning in the Alternatives section.
The majority of all builds will continue to be debug builds because people will want debug symbols and a faster iteration cycle. Nothing about that changes with this RFC. The only thing that changes is that if you forget to specify what kind of build you want, you'll get the optimized build. Newbies won't know that the debug/opt distinction exists so they'll start with opt-only builds, but as they learn a bit more about Rust they'll start using debug builds more often (because of the above-listed reasons). The point of this RFC is not to change the culture to using opt builds in development, it's to prevent users from shooting themselves in the foot.
I'd be amenable to that, but it might be unnecessary complexity for newbies (and others). There's something to be said for "if I give a compiler a program file, I should get back a binary." Avoiding a reasonable, safe default and forcing a choice every time seems like a cop-out. We know what the safe default is, and it certainly isn't debug-by-default.
I agree; I'll add it to the alternatives section.
@Valloric Good point, though I'm not sure the default won't have some effect on that culture in practice. Also, I think it may be confusing for newbies if integer overflow is supposed to panic yet they don't see this happen in practice in their first programs.
Since opt builds are substantially slower to produce and thus iterate on, I think people will quickly gravitate towards the …

I find it unlikely that newbies will encounter integer overflow in their initial programs. I've been writing C++ for 15 years and I still haven't seen it bite me. :)
Optimizing and having debug assertions are completely separate axes of configuration. You can have both, and if we optimize by default, we should have both (outside of …).
This is an excellent point. In the C++ world, lots of people run optimized builds with assertions turned on. In fact, all Google (C++) production code runs like this, and assertion statements (as macros) are incredibly widely used.
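A minimal sketch of that split in Rust terms: `assert!` is always compiled in, even in optimized builds, while `debug_assert!` is compiled out unless debug assertions are enabled (they can, assuming a reasonably modern rustc, be re-enabled in an optimized build with `-C debug-assertions=on`):

```rust
fn checked_mean(values: &[i32]) -> i32 {
    // Always-on invariant check, kept even at -O (analogous to the
    // always-enabled production assertions described above):
    assert!(!values.is_empty(), "mean of an empty slice is undefined");

    // Debug-only sanity check; compiled to nothing unless debug
    // assertions are enabled for this particular build:
    debug_assert!(values.iter().all(|&v| v.abs() < 1_000_000));

    values.iter().sum::<i32>() / values.len() as i32
}

fn main() {
    println!("mean = {}", checked_mean(&[2, 4, 6])); // prints "mean = 4"
}
```

The function name and bounds here are illustrative; the point is only that the two assertion macros sit on independent axes from the optimization level.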
Personally, I'd like to see …
I'm against making it the default. In my opinion, …
This was discussed in last week's meeting: https://github.com/rust-lang/meeting-minutes/blob/master/weekly-meetings/2015-03-03.md#cargo-and-optimizations I don't like the RFC as it is right now, but I could get behind it if you add the following: add a configuration option …
And if we make the use of different optimization flags more common, then they should be called …
I would vote that … The optimizations in …
Thanks for pointing that out, I was unaware of it. I'd like to address some of the comments from the meeting minutes:
As is explained in the RFC and as I elaborated in the comments, this RFC does not intend to change the culture of which build type you use during development. That build type will remain debug; it's just that now you'll have to pass …
The RFC lists lots and lots of examples (although I could always come up with more; tons are around) of people being surprised by debug-by-default. If someone were surprised by opt-by-default, it would only be because of poor design decisions made in older compilers that have created this unfortunate convention. Even so, let's look at the consequences of the "surprising default" in both cases:

- debug-by-default: if you wanted an optimized build instead, your production binary is now incredibly slow. As someone who has actually pushed unoptimized binaries to production by accident, I can confidently say this costs a lot of money.
- opt-by-default: if you actually wanted a debug build, the failure state is "you waited longer for your build to finish than you should have (and no debug symbols)," which, while annoying, has far less potential for damage.
Having a "half-optimized" build for default that differs from release mode would only further confusion, because now there are three build modes: debug, release and a weird default mode. The whole point of this RFC is to minimize the chances of people running non-release builds by accident. With a weird third state, that new state (while better than debug) would also show up in production environments.
Doing something only because "that's how C compilers do it" is a very weak argument. Why perpetuate harmful design decisions made 40 years ago? Merely for the sake of consistency? Taking that argument to the absurd, Rust shouldn't have a borrow checker since "C compilers don't have it either."
Same as above; this isn't much of an argument.
It's correct that it would be trading one problem (accidental slow executables) for another (accidental slow builds), and that's the whole point, since accidental slow builds are a vastly safer and preferable choice. The second is merely annoying while the first leads to expensive mistakes. In other words, this problem-trading is a feature, not a bug. To draw a parallel with other Rust design decisions, the borrow-checker trades considerable up-front programmer effort to save even more effort in debugging memory errors down the line. One could say "but now everyone has to wrestle with this new problem of convincing the compiler they've written memory-safe code; we've merely traded one problem (possibly memory-unsafe code) for another (harder to write the code in the first place)." Safe design rarely comes for free, but it pays off massively by eliminating lots of future mistakes.
Thanks for this! Super-happy to see it, although I can't say I'm surprised by the result.
Extra typing is always an issue.
I don't want the discussion to end up relying on too many rhetorical devices; please don't do that. I think it's evident that Rust strives for C-like behavior in (at least some) cases where it's not a safety issue.
Telling people that what they say is not an argument does not help the discussion. As for this specific point: consistency with other languages is a plus. This is an argument, and I consider trying to defeat it with "this isn't much of an argument" unacceptable in a useful discussion.
> # Alternatives
>
> ## Only have `cargo` echo a message stating `debug` or `release` build mode
I really like this alternative. If someone doesn't know what optimized/unoptimized means, it should be relatively easy to google "rust optimized build". I don't find the argument that "A Rust newcomer who's only used Python, Ruby, JavaScript, Java, C#, or a similar language is not going to understand the consequences of a 'building in debug mode' message" very convincing. Rust targets cases where one would reach for C/C++/maybe Java, not any of the listed languages. In all those languages, debug is the default.
In practice people have come to Rust from a large variety of languages, so we can't assume that they have C/C++ habits already.
They don't need any uniquely C/C++ habits. All they need to do is read cargo output to realize the build is unoptimized.
> Rust targets cases where one would reach for C/C++/maybe Java, not any of the listed languages.

That doesn't mean that only people using those languages will be looking at Rust. @steveklabnik was a Rubyist (if I'm not mistaken) before coming to Rust. Plenty of other examples abound.
> In all those languages, debug is the default.

And in all those languages, it's a very unsafe default (as this RFC explains). The idea is to change that in Rust and break away from the harmful legacy behavior, much like the borrow-checker lets us break away from the errors of manual memory management in C & C++, while still leaving that feature available.
It's not true that this is "harmful legacy" behavior; it's behavior actually expected by some members of the community (faster builds). You present it like there's only one choice, namely defaulting to release builds, whereas a warning might suffice.

I think this is absolutely not on the scale of memory-unsafety, because people rightfully want something else most of the time. Comparing it to borrow-checking is not useful.
This is not something that I would consider "unsafe". "Unexpected by some" would be my preferred phrasing. Also, just because people come from other languages doesn't mean that Rust needs to cater to their expectations.
I, for one, definitely expect to get a debug build with the "minimal-typing way to build my stuff" command. If it does otherwise, I'll be a little bit surprised, but as long as the build system properly communicates what it's doing, that's fine. Right now, we don't communicate the build type at all. If that's fixed, all should be right with the world.
Points that I do agree with:

- Put debug binaries in `target/debug/` instead of `target/`. Clear the "other" directory on build unless both debug and release flags are passed.
- Embed the word "unoptimized" or "optimized" in the build output.
- Make rustc optimize by default (as opposed to cargo).
@Valloric I see you've missed my suggestion to make this configurable via configuration files and environment variables. You might want to add this to the RFC or explain why you disagree with it.
@alexcrichton Thank you for your detailed comments! Sorry for the delay in responding to them, I've been super-busy.
While I greatly appreciate these changes (they're awesome, thank you!), I'm not convinced they entirely address the problem. The main reason is that "debug" does not imply "slow" to people coming from dynamic languages or Java/C#. For those of us used to systems languages, the implication is common. For something like Python, it's an utterly foreign term that implies better debuggability but not slower performance. Case in point: for C#, Visual Studio provides a Debug and a Release mode, but the perf difference between the two is utterly minimal. The difference mostly comes down to the availability of debug symbols. From the linked source: …
For Java, the concepts "Release mode" and "Debug mode" might be familiar to the user, depending on which IDE they use. The compiler options that …
For Python, passing …

So of the Big Five languages that do have some concept of Debug and Release mode, the Debug concept is very much linked to "increased debuggability with almost no impact on performance." For the others, the concept doesn't exist and thus has no linkage to lower performance. So I remain unconvinced that seeing the string "debug" in cargo output or even the binary location will deter people unfamiliar with systems languages.
With the "easy to gloss over the output when included with lots of other text" argument, I'm mostly referring to those already familiar with Rust. I've linked to examples of Rust veterans shooting themselves in the foot with debug-by-default; other experienced users of Rust have mentioned similar experiences in this thread, on r/rust, and on IRC.
I think you misunderstood @huonw's point in the comment you linked to; he was saying that bigger companies are in fact probably more likely to absorb their correspondingly-higher losses from mistakes than smaller companies. In other words, while the size of the mistake is a matter of proportion, a "merely" $10,000 mistake can sink smaller players. I agree that …
I won't pretend that there isn't a drawback in opt-by-default, since we do expect experienced devs to use debug builds far more often than release builds. But I believe this extra cost of passing …

In a sense, I see this as similar to the borrow checker. As someone who has been writing C++ code for years, I very rarely actually have a memory leak in my code since I track memory ownership well enough. So the borrow checker would be a nuisance 99% of the time. And yet I welcome having it turned on 100% of the time in Rust (along with the costs that come with that) just so that I can avoid that very painful 1%. That's what it boils down to, IMO. Yes, some cost has to be paid in the 99% case to ensure that the catastrophic 1% case is far less likely to happen. I see that in the borrow checker and in opt-by-default.
That's not true for all of my examples (please look again). You can also see @gankro echoing similar sentiments of getting bitten by debug-by-default, and no one would call him a Rust newbie. :)
I don't think this is a fair analogy; the borrow checker decreases the likelihood of mistakes, and so would opt-by-default.
Other than the absolutely massive general-safety issue of the performance footgun, no, I don't see any other issues. In the same vein, I don't see any point in the borrow checker beyond it "merely" ensuring I don't have memory safety issues. In other words, they both guard against a huge enough problem. (Note that I'm not saying they are equally-sized problems, just huge enough to be worth guarding against by default.)
I agree 100%. I am sure that the decision to include a paradigm-shifting borrow checker in Rust was not made lightly given that C and C++ have no such component and have been memory-unsafe for decades with billions of lines of code written in them.
The performance footgun of getting a debug build when one wanted a release build. It's far too easy to encounter this; the RFC goes into detail. As is listed in the RFC, I do not believe that the creators of C compilers made a mistake; they were bound by backwards compatibility. Initial compilers didn't have optimizations at all, so adding options that change the generated code had to be done without disrupting how people were already relying on the compilers to behave.
I hugely appreciate the tons of work that have gone into rustc and cargo to make them as ergonomic as possible; I thank you personally for this since I know you've put lots of effort into cargo (and, of course, rustc). But I also think that assuming that the tools will be used in a vacuum is a mistake, since they won't be.
Truer words have probably not been spoken in this thread. :) Do you feel that we should go with a substantially more error-prone design (which we'd have to live with forever) because today rustc is pretty slow at producing optimized code? That would strike me as a very short-sighted decision.

Finally, thank you for putting in the time to review this RFC, read my responses, and provide a lengthy list of comments! While I may disagree with some of them, I still value them all immensely.
After reading this question on Stack Overflow, I felt the need to finally share my feelings on this matter. It's not my intent to pick up every argument given here so far, but to write about my intuition, about what I would expect when using cargo. Maybe it adds some value. In the following, I will list some common cargo operations, along with the default settings I would expect. All statements marked with (OK) are already present in the cargo we know today.
As usual, all default settings should be overridable per target in the Cargo.toml file. My thinking is that defaults are the right thing™ for the majority of cases, which implies one must allow overrides for all other cases.

**About explicitly stating the compilation mode of dependencies**

To me it seems that the majority of the time, one will pull in third-party libraries. Therefore one is not interested in debugging them, but in their optimal performance. For example, if I am writing a command-line tool to process data using a third-party library, the latter should always perform well even though I am debugging my own application.

**Conclusion**

Cargo already behaves in accordance with my intuition in many cases. The main issue I see is … A major issue that seems to remain, but might deserve (a possibly separate) discussion, is how to handle the compilation mode of dependencies. Release by default is what I would expect in this case.

**Off-Topic, but related**
Please do not make builds optimized by default; it makes no sense. I would really like to see a vote on this topic by people who use a systems programming language on a day-to-day basis. If some people out there really do manually push debug builds to production, then it would be a real pleasure for me to teach them about build automation. This is a really simple process to put in place: it is easy to do and can be done in roughly an hour. And if you need a more complex system, you can use your Google skills to find the build bot of your dreams. It'll save you a lot more money than discussing defaults for rustc and cargo.
Why does it make no sense? Most languages like Java and C# optimize by default. There is an audience overlap between Java/C# and Rust because they are all languages that you could write a high-performance server in. In fact, Java servers are so optimized they sometimes beat pure C implementations in benchmarks. If I were a Java programmer evaluating Rust, I would never find out about compiling with optimizations on because it's just not prominently mentioned. This happens all the time. Don't just say it makes no sense without substantiating it.
Do you realize that you're comparing JIT languages against Rust, an LLVM-based language? That definitely makes no sense. As a reminder, the rust-lang.org page clearly says that Rust is a systems programming language. Thus, the main class of languages to compare Rust with is one such as C or C++. Besides, if you want to talk about Java, all the optimization is done by the JIT compiler at runtime; there's no such thing as a Java optimized-bytecode build with the javac compiler.
So far, newbies would just search for "why is my Rust application slow" and find the first answer on Stack Overflow solving their problem. That's probably how you find out about it, and this is just a natural step when learning a systems programming language. I do agree, though, that a hint from cargo explicitly saying that the build is in debug mode would be really helpful for that case. Finally, coming back to debugging, you can use … without trouble.

"Is rust-gdb broken? It looks like my code randomly jumps from one place to another. Some statements are completely ignored. Is this normal?"

It's just trading one problem for another. In both cases, a newbie will have to learn what kind of language Rust is. IMO, by default, you want the best option for the day-to-day developer.
Java does do optimizations at compile time, not just at runtime.
This has tripped me up as well. From my experience, it doesn't matter which is the default: one default is just as confusing as the other, whatever I'm expecting. It just matters that I know what's happening. With the current setup, the easiest way to find out the default compilation settings is via IRC or Google, and I think we can agree that's not ideal. This is what I'd appreciate:
```toml
# The development profile, used for 'cargo build'
[profile.dev]
opt-level = 0            # Controls the --opt-level the compiler builds with
debug = true             # Controls whether the compiler passes -g or '--cfg ndebug'
debug-assertions = true  # Controls whether debug assertions are enabled
verbose-profile = true   # Controls whether these profile settings are output while building
```
This way, there is absolutely no confusion as to what the output binary is going to be, and the defaults will be unchanged. The crusty old-timers will be comfortable, while the young-and-restless know what they're getting themselves into. :)
I'm against this. It slows down compilation and produces less debuggable programs by default. The best option I can think of is adding to …
@alexcrichton Ping? Rust is approaching 1.0, and if we're willing to implement this RFC, doing it before 1.0 would be a good idea. It might also be acceptable to do it post-1.0 if we're willing to break …
@mkpankov I want an optimized build 99% of the time because when I test my game-playing AI, it makes no sense for me to test it at 1/10 speed; I would have to give it 10x the time for testing, which would actually slow me down. So it definitely depends on how useful unoptimized builds are for you. I just …
Your desires are... unconventional
Very well; that's just one use case out of a myriad. Testing and debugging are different goals. I'm with @Kingsquee on this: profiles are a generic enough solution, and everybody can just use the profiles they need.
Gah, I'm so sorry about taking so long to get around to this; it's been sitting in my inbox for ages!

So after re-reading your last comment (thanks for taking the time to write it!), I think the main difference in our opinions is the degree to which debug-by-default is harmful. It sounds like you think it's quite a serious issue for a number of reasons, but I personally remain much farther on the other side of the spectrum, in the sense that I see this primarily as a nuisance rather than a critical problem. At this point I think I'm a little less persuaded to keep the status quo because of compile times. @Kingsquee had a good point that it's likely for both defaults to be just as confusing as one another. Put another way, I have to remember to make my binary fast right now, but if we switched the defaults I'd have to remember to make my build fast.

I do think, however, that it's still a pretty important point that we've had unoptimized-by-default for so long that we've started designing with that assumption, by turning off debug assertions, overflow checks, etc. when optimizing. Additionally, we have small features like the …

Overall, I feel that there are compelling arguments on both sides of the fence here. I am not as persuaded as you are that one outweighs the other, which makes me lean more towards maintaining the status quo. I do agree that this sort of change would need to be considered before 1.0, and unfortunately that deadline is rapidly approaching. If consensus is reached in the next week or two to switch the defaults here, then we can probably still make the change, but it does not seem likely that consensus will be reached.
@alexcrichton Thank you for your thoughtful-as-usual comments!
Fair point, reasonable people can disagree. :)
Sure, I can see how both defaults could be confusing, depending on the user. But IMO the question is which of the states (opt or debug by default) has the potential to lead to more damage. In other words, while both can be confusing, opt-by-default is the safer choice and should thus be preferred.
I don't see why we'd have to replace any of the flags; I see it more as the user getting a certain selection of flags passed by default if they don't pass any. I've also mentioned above that developers will probably want to use …

In other words, changing the default state is targeting newbies and forgetful experienced devs. If you've sat down and are actively iterating on the code, you're using …
If we really go with debug-by-default, it should be because reasonable people have weighed the arguments and have decided that is the more sensible outcome, not because of inertia. We shouldn't be taking the easy choice of inaction just because it's available.
Consensus is great when it's available, but when it isn't, a tough decision needs to be made. I know @pcwalton is fond of saying that the core team needs to be able to make decisions it finds to be in the best interest of the project even if consensus can't be reached (rightly so, IMO). "A consensus couldn't be reached" shouldn't result in "we've decided to make no decision." Decisions everyone agrees on aren't the hard ones; they require no leadership.

Lastly, thank you for your time here again; I know you guys are super-busy getting 1.0 out the door.
I completely disagree: debug by default is the safer case, because it is more likely to catch errors in the program. With debug by default, the worst case is that your program runs slightly slower than it could. With optimise by default, the worst case is that there is a whole host of overflow and other problems in the code that will cause the program to malfunction at some unknown point in the future. In addition, the vast majority of compilations happen when debugging and iterating on code, whereas producing a release build is a relatively rare event. There's also plenty of scope for e.g. IDEs and other tools to improve the situation by automatically passing the correct flags; in those situations it doesn't really matter what the default is.
The vast majority of compilations are either …

If your project doesn't really have any difference between optimized and unoptimized builds, that's great. But most serious Rust projects use Rust because of its performance characteristics, so trying to profile or debug an unoptimized program will be useless because it's so much slower than the optimized one. I don't feel strongly about which should be the default, but I think there should be some discussion of what kind of build people choose to do and what the expected outcome of that kind of build should be. For me it's something like: …
So maybe what I want is not just a certain set of defaults, but the ability to customize the build process with cargo to what I want it to be for that project.
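For what it's worth, Cargo's per-profile sections in `Cargo.toml` already allow roughly this kind of per-project customization. A sketch of what matching the expectations above might look like (key names follow Cargo's profile support; the specific values are illustrative choices, not recommendations):

```toml
# Fast iteration: no optimization, full checks.
[profile.dev]
opt-level = 0
debug = true
debug-assertions = true

# Production and profiling: optimized, but keeping symbols so a
# profiler can still map samples back to source.
[profile.release]
opt-level = 3
debug = true

# Benchmarks should measure the optimized code without check overhead.
[profile.bench]
opt-level = 3
debug-assertions = false
```

This covers the "customize per project" wish; the remaining question in the thread is only which profile an unadorned `cargo build` should pick.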
rustc already supports this with either …

As for me, I'm strongly in favour of debug by default. We turned off LLVM and Rust assertions in the beta build already, and now people have started closing rustc/LLVM ICEs because they don't assert anymore. That's Rust contributors, of all people! I strongly suspect a similar thing would happen periodically with Rust programs if we went with optimised-by-default: …
For comparison, here is how long one of the largest rustc sub-libraries – libsyntax – takes to compile at each optimisation level (with a stage0 compiler, which is itself compiled with optimisations):
Compiling libsyntax with stage1 (non-optimised): … more than 22 minutes (killed before it completed).

Even then, I'm strongly in favour of debug builds, with as much checking as can be done, by default. The only thing this proposal fixes is: "huh, my program is slower than expected, strange".
Ah yeah this is a good point. If we were to switch the defaults, we would ideally remove the
I completely agree with this. There have been many arguments on this thread in both directions, however, so I think it's quite safe to say that deciding on debug-by-default would not be only because of inertia.
I also agree with this, but this is seen as a fairly radical change by many, in which case broader consensus is generally expected before moving forward. For example when renaming |
I suppose another thing to consider is having something like gcc's

As for defaults: all of my "to production" deployment happens via packaging scripts, which already know what they need to pass.

I think it's important to consider here what we expect the "manual" usage of cargo to be. And personally, I don't expect production deployments to be the majority of manual cargo invocations. |
Another thought: for cargo, we could allow configuration of the defaults (opt-level, debug-assertions, etc.) via |
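A hedged illustration of what such cascading configuration might look like; the `default-profile` key shown here is entirely hypothetical and is not a real cargo option:

```toml
# Hypothetical sketch only: a per-user or per-project config file
# overriding the default build profile. `default-profile` is an
# invented key for illustration, not an existing cargo setting.
[build]
default-profile = "release"
```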
@jmesmon A great idea! Cascading configuration is very practical, after all, and might help end this debate with everyone happy. |
I could live with opt-by-default while having a

I honestly think this covers all the complaints about opt-by-default while providing the safety & usability win for newcomers. |
How about optimized builds with overflow checks and assertions on? For my use case I would be interested in something like that. |
@jmesmon the idea of a

@iopq any manner of configuration of the compiler is certainly possible (via flags, etc.); I think that this is largely just a question of the defaults. It is certainly true, however, that the default could be debug assertions + optimizations turned on (not sure if this would be desirable, though). |
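For what it's worth, the "debug assertions plus optimizations" combination mentioned above is expressible with Cargo profile keys. This is a sketch in present-day syntax; whether these keys existed at the time of this thread is uncertain:

```toml
# Sketch: an optimized build that keeps debug assertions enabled
# (present-day Cargo profile keys).
[profile.release]
opt-level = 3
debug-assertions = true
```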
Overflow checks are a very interesting case. My understanding is that in numerics-heavy code they can have a huge overhead (preventing vectorization?). So it seems contradictory to enable optimizations by default but then leave in overflow checks. Similarly, we've been defensive-debug-assert-happy in critical paths on the assumption that an optimized build would strip these. This could have the odd side-effect of people believing that the default build is totally optimized, when really it's leaving important optimizations on the table.

On the other hand, having overflow checks off by default could lead to people writing code that doesn't work at all in debug mode, because it instantly starts triggering tons of overflows since they never used wrapping ops and never had a reason to enter debug mode. Not good! |
I think this is a really bad generalization to make. In CPU-bound programs, sure, performance is generally very important (though not always to the degree that you're suggesting). But not all programs are CPU-bound programs. As an anecdote, my day job is writing iOS software, and the current project is a (relatively) large Swift-only project. Swift has followed Rust's model of relying on the optimizer to get orders of magnitude performance differences in CPU-bound code. And yet that rarely makes a difference in my program, because it's an application that is usually bound by networking or by user interaction, not by number-crunching. The compile-time difference between optimized and unoptimized builds is huge, but the observable runtime performance is pretty minimal. Which is to say, everybody's requirements are different, and some people are writing programs that are pretty useless without optimization but other people are writing programs that work just fine without optimization.
I think this is dangerous. There have been multiple claims that debug-by-default is "dangerous" because people will forget to pass the

That said, a

Ultimately, I'm still very much in favor of the status quo. Optimize-by-default is risky because it leads to not running debug assertions or overflow checks during development, it breaks convention with every other compiled language I'm aware of (and therefore defies user expectations), and it has just as much risk of leading to a public perception that "rust compiles very slowly" as debug-by-default does of "rust programs run slowly". |
This is a very bad idea, I don't understand why it's still discussed. |
Ok, at this point it looks like this RFC has basically run its course, and I think that one of the key points I've seen is that, regardless of which default is chosen, there will likely be pitfalls either way. There's been quite a bit of discussion about the severity of these pitfalls and how they should play into choosing a default, but there is no clear consensus on changing the defaults. As a result, I'm going to close this RFC, and we're likely going to be sticking with the status quo.

I'd like to emphasize, though, that closing this RFC is not sticking with the status quo "because it's what we did before"; rather, there are legitimate arguments for debug-by-default (spelled out in this thread) and there is not enough consensus for making such a change (and many are very much not in favor).

Regardless, thank you for the discussion everyone, and especially @Valloric for keeping up with this RFC and taking the time to write it out! |
great decision there @alexcrichton |
Make rustc and cargo produce optimized binaries by default.
Rendered.