Switch stdenv to GCC 4.8.0. #445
Conversation
Well, after reading some references I believe it's a very good idea. Broken builds should be only a tiny fraction compared with what 4.7 broke (except when one uses -Werror, but that's rare). http://gcc.gnu.org/gcc-4.7/porting_to.html
Check out the build errors in this branch at http://hydra.cryp.to:8080/jobset/nixpkgs/switch-stdenv-to-gcc-48/errors. GCC 4.8.0 does pretty well, IMHO. Packages using
Anyway, you can get binaries for
Hmpf, I didn't intend to close the request. Sorry, wrong button. Re-opening ...
The gcj job doesn't work on stdenv-updates either. I believe all pre-4.7 gcc versions are broken, due to the ppl update (I think). I tried simply copying the expression to use gcj-4.7 by default, which built fine, but then I ran into a problem in openjdk, where -lgcj couldn't find the library (I have no clue why). I think we should:
I also think we should use the profiled version of gcc. The only drawback is that their makefiles then force us to build gcc single-threaded, as a comment there explains -- apparently for some really silly reason the profiling output files could overwrite each other, or the like... but I nevertheless think it's worth it; gcc only gets rebuilt rarely.
@vcunat, I agree that we should use a profiled version of gcc. I don't want to make that change in this particular pull request, though, because then I'll have to rebuild the entire stdenv one more time. Personally, I'd prefer to do that once we've committed to gcc 4.8.x as our new stdenv compiler of choice.
@peti: Yes, I thought this might be the reason. I'm building a profiled stdenv with gcc48 right now, to at least try it out a bit.
I tried to build and run some packages: firefox, evince, vlc and lyx. No changes were needed and I encountered no errors (using just plain stdenv-updates with gcc48+PGO). Looks very good.
On Sat, Apr 06, 2013 at 05:15:12AM -0700, Vladimír Čunát wrote:
I'd prefer to restrict the profiled gcc to x86 only. Do you agree?
Well, I have no knowledge of PGO on other hardware, so I can't really decide that (and therefore I'm certainly not against it).
On Sun, Apr 07, 2013 at 12:40:58PM -0700, Vladimír Čunát wrote:
It just takes a long time to build on the slow hw I'm talking about. :) It requires more
The test t-lucnum_ui fails (on Linux/x86_64) when built with GCC 4.8. Newer versions of GMP don't have that issue anymore.
I see it's in stdenv-updates now. So, according to @viric, we should use something like this?
BTW, once we stabilize stdenv, we'll need to widen the set of compiled packages. I'm sure there are still many build errors (currently not shown) left over from the gcc-4.7 step. IMHO we'll want to fix most of those before merging to master (usually easy via an update or a patch from some other distro).
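For reference, the kind of change being discussed might look roughly like this in `all-packages.nix` (a sketch only; the exact attribute names, helper functions, and path are illustrative and assume the gcc 4.8 expression takes a `profiledCompiler` argument):

```nix
# Sketch: make gcc 4.8 the default stdenv compiler and build it with PGO.
gcc = gcc48;

gcc48 = lowPrio (wrapGCC (callPackage ../development/compilers/gcc/4.8 {
  inherit noSysDirs;
  # Build gcc itself with profile-guided optimization.
  profiledCompiler = true;
}));
```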
I would like to merge this change soon'ish if possible. Does anyone see a compelling reason not to try to switch stdenv to gcc 4.8.x? If you do, please let me know!
And what about the profiling? Shall I commit the above change (PGO iff on x86*)?
Yes, please do! I too would like to have PGO enabled.
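The "PGO iff on x86*" condition could be spelled out along these lines (a sketch; the exact predicate is illustrative):

```nix
# Enable the profiled (PGO) build of gcc only on 32-/64-bit x86 Linux,
# since PGO is untested on other hardware.
profiledCompiler = stdenv.system == "i686-linux"
                || stdenv.system == "x86_64-linux";
```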
We didn't catch it, so hydra will build both; fortunately the jobset is quite small now.
Does PGO work with "enableParallelBuilding = true"?
Probably not. I re-checked the gcc-4.8 documentation and the comment is still there (though it really seems like a silly reason; IMHO only the makefiles would need fixing).
I feel that's the biggest disadvantage of PGO at the moment, as it further slows down gcc's build. OTOH we should only rebuild it very rarely.
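Inside the gcc expression, this interaction could be expressed roughly as follows (hypothetical; assumes the expression takes a `profiledCompiler` argument):

```nix
# PGO and parallel make don't mix: the profiling output files can
# clobber each other, so fall back to a single-threaded build when
# the profiled compiler is requested.
enableParallelBuilding = !profiledCompiler;
```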
I want to turn off PGO. We cannot get a deterministic result for gcc itself with PGO because the timing information is slightly random. |
I don't think we should turn off PGO by default without good profiling information verifying that there's no significant loss in performance. Users who absolutely need deterministic builds can of course use a non-profiled version.
PGO achieves significant performance gains. What advantages would
Hmm, I thought that the generation of profile information was deterministic (assuming we run a deterministic program). I see nothing in the docs that would confirm either possibility. http://gcc.gnu.org/onlinedocs/gcc-4.8.2/gcc/Optimize-Options.html#Optimize-Options I found some data confirming that the speedup is likely significant. Note: while the gcc instances might differ, the results produced by each should certainly be binary-equal, so the situation doesn't seem too bad to me.
On Mon, Apr 14, 2014 at 4:15 PM, Vladimír Čunát notifications@github.com wrote:
I don't think it is worth sweating over a 10% performance issue. ccache itself can give an order of magnitude faster builds. Already, with
For deterministic builds, semi-trusted machines can also be added to
These two changes could in theory boost hydra performance by an order of
Now that does not preclude having PGO in gcc, but in light of those
The security win is that when gcc is deterministic, no backdoor can be
Without a deterministic gcc, no security advantage is gained from
Unfortunately, Nix/Hydra won't be using ccache any time soon, because it's impure (the cache is a global shared variable, i.e. stateful)... |
Being able to use ccache with hydra seems like something that should be
In my usage of NixOS, I must have deterministic builds because I am using
It is possible to get around this by creating a NixOS distribution that
Then, when gcc is actually needed because a derivation is compiled locally,
Removing PGO from stdenv makes this much simpler and cleaner.

On Mon, Apr 14, 2014 at 4:48 PM, Eelco Dolstra notifications@github.com wrote:
Theoretically, with a Make implementation that runs each command as a derivation, plus recursive nix, we might be able to get a more general and better-grounded version of ccache (and distcc). In the meantime, though, it's not going to happen. I'd rather not trade a demonstrated 10% increase for a theoretically possible 10x one. If you can show an actual implemented solution that has real performance gains but depends on bit-perfect determinism, fine. Until then, if we can't make PGO deterministic, then users who need determinism will have to have a separate stdenv IMO.
On Mon, Apr 14, 2014 at 07:48:15AM -0700, Eelco Dolstra wrote:
...and so is the nix store: hashes linking inputs and outputs, no? :) It would be some kind of controlled impurity.
I don't think that caching was meant to be an important point in this discussion. Anyway, my view is below. Ccache-like caching IMO doesn't really need binary repeatability; the current "semantic" repeatability of gcc is enough for it (i.e. the generated code does the same thing and has the same ABI). PGO by definition must not affect either of these properties. BTW, I think we can do similar "memoization" much better than ccache (certainly more efficiently), because we have much better control than usual over what the inputs of each compilation command are (most of the inputs will be on immutable paths, etc.).
On Mon, Apr 14, 2014 at 5:37 PM, Shea Levy notifications@github.com wrote:
This is too simplified a view of the trade-off. It is deterministic builds
This "sacrifice" is similar to the performance penalty for using
To me it is hard to understand how a 10% increase in build performance is
Asking people to use their own stdenv is not a solution in this case,
As I mentioned, it is possible to overcome this by requiring every end user
This means that what is distributed, the ISO that is downloaded, is
That will have to add another stage to the stdenv bootstrap process, but
If that's an acceptable road to follow, I'd like to know.
I would suggest that our next stdenv update goes to gcc 4.8.x directly, skipping 4.7.x.