This repository has been archived by the owner on Nov 21, 2018. It is now read-only.

linux-cross: use glibc-2.14/gcc-4.8 for the arm toolchain #69

Merged: 5 commits into rust-lang-deprecated:master from japaric:arm on Mar 13, 2016

Conversation

@japaric (Contributor) commented Mar 10, 2016

This should fix the arm half of rust-lang/rust#30966.

I've tested this by cross compiling std to the 3 arm targets and checking that the produced rlibs have no undefined reference to secure_getenv.
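
(For reference, the check amounts to something like the following; the path is illustrative. An rlib is just an ar archive of ELF objects, so the cross nm can inspect it.)

# no output from grep means the symbol is gone
arm-linux-gnueabi-nm --undefined-only build/arm-unknown-linux-gnueabi/lib/libstd-*.rlib \
  | grep secure_getenv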

I'm pretty sure this image can also be used to cross compile rustc to arm, but it requires patching compiler-rt first. I'll test that and discuss the compiler-rt patch with Alex.

r? @alexcrichton

@alexcrichton (Contributor)

Thanks @japaric! Do you know if it'd be possible to not use the crosstool-ng project? It seems like that's a massive dependency to pull in and we just need to change glibc versions, right?

@japaric (Contributor, Author) commented Mar 10, 2016

Do you know if it'd be possible to not use the crosstool-ng project?

We need glibc <2.17 to remove the dependency on the secure_getenv symbol. Trusty can't be used because it ships with an arm gcc-4.8/glibc-2.19 toolchain. Precise can't be used because it ships with an x86_64→arm gcc-4.6/glibc-2.16 toolchain and, IIRC, we need gcc >=4.8 to (cross) compile LLVM. And I haven't seen many packaged arm toolchains elsewhere. There is Linaro, but their oldest release is gcc-4.8/eglibc-2.17; also, they use crosstool-ng to generate their toolchains :-).

It seems like that's a massive dependency to pull in and we just need to change glibc versions, right?

IMO, it's as massive as depending on a PPA package. If we packaged the crosstool-ng-generated arm toolchain in a PPA and used that package in the Docker images, it would be no different from this mips-gnu toolchain dependency.

@alexcrichton (Contributor)

Right, yeah, we've got to build glibc somehow; was just hoping it would look like "download glibc and build it" rather than using crosstool-ng. I guess if crosstool-ng is the standard way to build cross compilers we may as well use it.

Two questions:

  1. How far back can we move glibc, and how far forward can we move gcc? I believe we should be able to use as new a gcc as possible so long as we're using an old glibc.
  2. Can we avoid checking in these massive configuration files? I'd love to automate generation of them in one way or another if possible. For example if we wanted to do this same thing on all the other architectures, I wouldn't be sure how to generate those config files for those compilers.

@japaric (Contributor, Author) commented Mar 11, 2016

was just hoping it would look like "download glibc and build it"

Oh, I hadn't really thought about that because in my mind gcc and glibc always come as a single package, i.e. you need to build one to build the other. It may be feasible, but I don't know how complicated it would be given that we also need to cross compile a libgcc_s and a libstdc++ that link against the other/older glibc for crossing rustc; that starts to sound like more work than using crosstool-ng.

How far back can we move glibc, and how far forward can we move gcc?

The newest gcc crosstool-ng can build is 5.2, and the oldest glibc it can build is 2.8. I haven't tried that combination yet, but IME not all combinations work, so the answer is: this needs testing. In practice, I don't think we need to go older than glibc-2.13, which is, AFAIK, the oldest glibc used in ARM Linux systems. Also we don't have to exactly match that glibc version as long as the generated crates/rustc don't depend on symbols introduced in glibc >=2.14.

Can we avoid checking in these massive configuration files? (...)

Sorry, I should have pointed out the documentation I wrote more clearly! crosstool-ng provides a menuconfig interface (ct-ng menuconfig) for configuring the toolchain. The two configurations checked in here are "just a few" modifications on top of the default settings, as shown here and here. Sadly, menuconfig is interactive and, AFAICT, crosstool-ng doesn't provide a non-interactive way (i.e. through shell commands) to generate these config files.
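
(For the curious, the interactive flow is roughly the following; the sample name is illustrative:)

ct-ng list-samples              # bundled example configurations
ct-ng arm-unknown-linux-gnueabi # load the closest sample as a starting point
ct-ng menuconfig                # adjust gcc/glibc versions in the curses UI
ct-ng build                     # build the toolchain into the configured prefix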

@alexcrichton (Contributor)

In the past I've actually followed these instructions for just downloading and building gcc plus glibc, but they unfortunately don't always work. For example I couldn't get glibc 2.14.1 (like in this PR) to build. The gcc build, however, seems pretty reliable! Maybe there's just a small delta of what needs to be done to get glibc to build?

Also we don't have to exactly match that glibc version as long as the generated crates/rustc don't depend on symbols introduced in glibc >=2.14.

I know at least in the past we've been burned by situations like this though. I think it looked something like:

  1. Canonical symbol foo was introduced in glibc version X.
  2. foo's behavior was tweaked in version Y, introducing a new symbol version.
  3. If you build a lib against Y, you depend on foo's Y version; even though the symbol exists in X, it won't resolve there because you require a version X doesn't have.

So there may be situations where, even though we need no symbols that are new in 2.14, something that changed between 2.13 and 2.14 means we end up relying on 2.14 anyway. Now that being said, it may not be the case at all here!
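
(For what it's worth, memcpy is the textbook case of this: it gained a new symbol version in glibc 2.14. A quick way to audit which glibc symbol versions a build actually requires, with an illustrative binary name:)

objdump -T libstd.so | grep -o 'GLIBC_[0-9.]*' | sort -Vu
# any version newer than the advertised minimum glibc is a portability problem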

Sorry, I should have pointed out the documentation I wrote more clearly!

Ah ok, thanks! That looks reasonable for reconstructing this at least. It seems these configs could be generated inside the docker container, right?

I'll put some comments inline about the selections.


So I guess all in all, it may be worth trying to find some instructions to build glibc manually, but I wouldn't try too hard. It looks like crosstool-ng works well, and if it extends to all the other platforms we may want to use it there as well.

RUN /bin/bash build_arm_toolchain.sh arm-linux-gnueabi
RUN /bin/bash build_arm_toolchain.sh arm-linux-gnueabihf
USER root
RUN rm -rf /build
@alexcrichton (Contributor) commented on the diff:

I've found that not having a bunch of intermediate artifacts can hugely reduce the size of docker containers; could this be done as part of each step? Some examples I've done in the past are:

RUN /bin/bash build_arm_toolchain.sh arm-linux-gnueabi && rm -rf /build

(or something like that)
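
(The reason this helps: each RUN creates its own image layer, so an rm -rf in a later RUN hides files but doesn't reclaim their space. A variant folding both builds and the cleanup into a single layer, reusing the script name from the diff above, might look like:)

RUN /bin/bash build_arm_toolchain.sh arm-linux-gnueabi && \
    /bin/bash build_arm_toolchain.sh arm-linux-gnueabihf && \
    rm -rf /build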

@japaric (Contributor, Author) replied:

I've found that not having a bunch of intermediate artifacts can hugely reduce the size of docker containers

Great tip! I'll try to condense these commands as much as possible.

@japaric (Contributor, Author) commented Mar 11, 2016

In the past I've actually followed these instructions for just downloading and building gcc plus glibc, but they unfortunately don't always work.

Hmm, those instructions seem a little off. AFAIK you can't compile gcc without having compiled glibc first unless you explicitly disable thread support, and that's why most gcc+glibc builds are performed in two stages/passes. See Linux From Scratch, which builds gcc twice.
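
(The classic two-pass sequence looks roughly like this; flags are abbreviated and version-dependent, so treat it as a sketch rather than a recipe:)

# 1. binutils for the target
../binutils/configure --target=arm-linux-gnueabi --prefix=$PREFIX && make && make install
# 2. stage-1 gcc: C only, no libc yet, so no threads or shared libgcc
../gcc/configure --target=arm-linux-gnueabi --prefix=$PREFIX --enable-languages=c \
    --without-headers --with-newlib --disable-threads --disable-shared
make all-gcc all-target-libgcc && make install-gcc install-target-libgcc
# 3. glibc, cross compiled with the stage-1 gcc
../glibc/configure --host=arm-linux-gnueabi --prefix=$PREFIX/arm-linux-gnueabi \
    CC=arm-linux-gnueabi-gcc
make && make install
# 4. full gcc (C++, threads, shared libs) against the freshly built glibc
../gcc/configure --target=arm-linux-gnueabi --prefix=$PREFIX --enable-languages=c,c++
make && make install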

For example I couldn't get glibc 2.14.1

Were you using gcc-5 to build glibc-2.14.1? It seems glibc's configure script doesn't allow that:

[CFG  ]    checking for arm-unknown-linux-gnueabihf-gcc... (cached) arm-unknown-linux-gnueabihf-gcc
[CFG  ]    checking version of arm-unknown-linux-gnueabihf-gcc    ... 5.2.0, bad

I don't know whether glibc-2.14 actually can't be compiled with gcc-5, or whether the configure script simply rejects that version because gcc-5 didn't exist when glibc-2.14 was released.

So there may be situations where, even though we need no symbols that are new in 2.14, something that changed between 2.13 and 2.14 means we end up relying on 2.14 anyway.

Perhaps we should state on the website that the binaries are guaranteed to run on systems with glibc >=2.14; they may work on systems with older glibcs, but those are not officially supported?

Ah ok, thanks! That looks reasonable for reconstructing this at least. It seems these configs could be generated inside the docker container, right?

If we generate them inside the Docker container, then the person who calls docker build will have to manually input the toolchain configuration via a curses interface (menuconfig). And that action would have to be repeated every time we build a new Docker image. But if we do it that way, we can remove the .config files checked in with this PR.

@alexcrichton (Contributor)

Yeah the instructions had a crazy sequencing of building half of gcc, some of glibc, some more of gcc, then the rest of glibc, or something like that. I was indeed trying out gcc-5 and was manually tweaking the version checks in the configure script, but there was other weirdness down the road that I wasn't able to get past and I ended up giving up pretty quickly.

Perhaps we should state on the website that the binaries are guaranteed to run on systems with glibc >=2.14; they may work on systems with older glibcs, but those are not officially supported?

Sounds good to me. Out of curiosity, what made you choose 2.14? Is that just what the system you're running on happens to use?

And that action would have to be repeated every time we build a new Docker image.

Ah yeah, sorry, I just meant that we can spin up a half-built image, run the manual configuration, copy out the configuration, and then run the full build again.
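
(Concretely, that could look something like this; image, container, and path names are made up:)

docker build -t arm-toolchain-base .    # image that stops short of building the toolchain
docker run -it --name cfg arm-toolchain-base ct-ng menuconfig
docker cp cfg:/build/.config arm-linux-gnueabi.config   # check this file in
docker rm cfg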

@japaric (Contributor, Author) commented Mar 12, 2016

Yeah the instructions had a crazy sequencing of building half of gcc, some of glibc, some more of gcc, then the rest of glibc

That actually sounds like the normal instructions to me :P.

I was indeed trying out gcc-5 and was manually tweaking the version checks in the configure script, but there was other weirdness down the road that I wasn't able to get past and I ended up giving up pretty quickly.

😢

Sounds good to me. Out of curiosity, what made you choose 2.14? Is that just what the system you're running on happens to use?

The test arm system I have is running precise, which has gcc-4.6/glibc-2.15. So I tried building that combination first with crosstool-ng, and it worked! But building LLVM requires gcc >=4.8, so I bumped gcc to 4.8. But crosstool-ng couldn't build gcc-4.8/glibc-2.15 :-/, so I tried downgrading glibc to 2.14 and the build succeeded! So I just settled on that combination :P.

Ah yeah, sorry, I just meant that we can spin up a half-built image, run the manual configuration, copy out the configuration, and then run the full build again.

That's certainly doable!

@japaric (Contributor, Author) commented Mar 12, 2016

Pushed some commits:

  • added a doc with instructions on how to generate the .config files in a docker container
  • bumped gcc to 4.9; I couldn't upgrade gcc further (glibc's configure rejects gcc-5) or downgrade glibc further (compile error)
  • merged some commands in the Dockerfile and added a comment about running crosstool-ng under rustbuild


- Path and misc options > Prefix directory = /x-tools/${CT_TARGET}
- Target options > Target Architecture = arm
- Target options > Architecture level = armv5t -- (*)
@alexcrichton (Contributor) commented on the diff:


Just confirming, but this is still armv5t instead of armv6?

@alexcrichton (Contributor) added:

(same with armv7-a below instead of armv6)

@japaric (Contributor, Author) replied:

Although now that I think about it I'm not sure it matters all that much. This largely only affects glibc, and we're not distributing glibc, so...

I thought you had concluded, from this comment, that this default architecture level doesn't affect the C libraries we build (apart from glibc), because we always pass the -march flag to gcc when compiling them. But perhaps my guess about your conclusion was wrong!

I'm OK with changing both to armv6 though.

@alexcrichton (Contributor) replied:

Ah yeah sorry I shall clarify.

So in general this probably doesn't matter if it's too old. This option basically only affects how the support libraries are compiled, like libgcc and glibc. We don't ship glibc, so if you use a different, appropriately compiled one at runtime, you'll just naturally pick up the optimizations. Now, we do ship some things compiled by gcc, like jemalloc/libbacktrace, but we compile them at rust-build time, which means we can change the codegen there (to armv6). I think we'll pull in some bits from libgcc statically, however, which here would be compiled for armv5 but run only on armv6 systems. So with that in mind, it's probably best to compile with armv6.

I realized, though, that this probably matters a lot for C++. We do ship all of libstdc++, as we link it statically into the compilers we ship. So if we start shipping ARM compilers we can perhaps benefit quite a bit from using armv6 here instead of armv5 (because libstdc++ will in theory be faster).

Most of this rationale is the same for armv7 below, although there it almost certainly matters: we link Rust against libgcc and most likely pull in some bits statically, and if those are armv7 bits running on an armv6 machine we're likely to run into problems.

So all in all it's probably best to make sure these are always the same; hopefully it won't bite us later!
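
(One way to double-check what actually got baked in: ARM ELF objects carry build attributes, so something like the following, with an illustrative path under the /x-tools prefix, shows the architecture level of the static libraries we'd pull in:)

arm-linux-gnueabi-readelf -A /x-tools/arm-unknown-linux-gnueabi/arm-unknown-linux-gnueabi/lib/libstdc++.a \
  | grep -m1 Tag_CPU_arch
# expect "Tag_CPU_arch: v6" once the config is changed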

@japaric (Contributor, Author) commented Mar 12, 2016

I realized, though, that this probably matters a lot for C++.

I def agree with this.

Pushed a commit changing the arch level to armv6.

alexcrichton added a commit that referenced this pull request Mar 13, 2016
linux-cross: use glibc-2.14/gcc-4.8 for the arm toolchain
@alexcrichton merged commit 706074e into rust-lang-deprecated:master on Mar 13, 2016
@alexcrichton (Contributor)

Exciting!

@japaric (Contributor, Author) commented Mar 13, 2016

Ahhh, I got here too late :P. The arm-linux-gnueabihf toolchain won't compile after downgrading the arch level! We need to switch from Thumb to ARM mode by default and downgrade the FPU level. I'll send a follow-up PR.
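
(From memory, the knobs involved are the crosstool-ng arch options in the .config; the exact values below are assumptions to verify:)

CT_ARCH_ARM_MODE="arm"   # default instruction set mode: ARM instead of Thumb
CT_ARCH_FPU="vfp"        # assumed value: an FPU level that armv6 hardware actually has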

@alexcrichton (Contributor)

Ah ok, the build is still running locally and I haven't hit that one just yet, so that may be why.

@japaric deleted the arm branch on March 13, 2016 at 05:51