
RFC: Deprecate the build= command #610

Closed
tomaka opened this issue Sep 21, 2014 · 30 comments
Labels
A-linkage Area: linker issues, dylib, cdylib, shared libraries, so E-hard Experience: Hard P-high Priority: High

Comments

@tomaka
Contributor

tomaka commented Sep 21, 2014

The custom build command is a very wonky mechanism. It is not cross-platform, it assumes that the user has certain software installed and available in their PATH (usually make/gcc, sometimes curl), it does not support cross-compilation, etc. Basically it is like having a makefile without a configure script.

This is especially problematic on Windows. Because of this command, a lot of Rust libraries depend on the user having MinGW installed (even though rustc no longer requires it), and some libraries simply can't be compiled on Windows, like glfw-rs. I have the feeling that all the effort spent on improving support for Windows is worthless because of this.

I propose to deprecate the build command: at first, print a warning when compiling a crate that uses it, and later remove support for it entirely.

Replacements

This command should be replaced by two complementary mechanisms.

Binary dependencies

Add a pre-build string entry in Cargo.toml. This string must be in the format $package/$bin, where $package is the name of a dependency in the dependencies section and $bin is the name of a binary found in this dependency. The binary will be compiled and then run by Cargo, similarly to the current build command.

The binary would have access to the environment variables that the build currently has access to, plus an additional one named COMMAND_CARGO_MANIFEST_DIR which contains the directory of the Cargo manifest of the dependency (instead of the manifest of the package where the command is being run).

Example:
/Cargo.toml:

[package]
name = "whatever"
authors = []
pre-build = "my-compiler/compile"

[dependencies.my-compiler]
path = "my-compiler"

/my-compiler/Cargo.toml:

[package]
name = "my-compiler"
authors = []

[[bin]]
name = "compile"

Why is this better than a regular build command?

  • Instead of having a makefile run curl, for example, you can write a binary that uses a Rust HTTP library, which is more cross-platform because it does not require the user to have curl in their PATH.
  • In situations where you would need a very complex pre-build script (for example invoking cmake, which requires determining the list of generators and invoking the right one), instead of writing a difficult-to-maintain makefile that you copy-paste in multiple projects, you could simply use a Rust library.

Add native

Add a native array in Cargo.toml that specifies a list of non-rust libraries that must be present for this project to compile.

[[native]]
name = "GL"

[[native]]
name = "X11"

[[native]]
name = "Xxf86vm"

Cargo will process each entry in this order:

  • Try to find a pre-compiled version of the library (if X is the library name, try to find libX.* and X.*) in $package_root/native/$target. If it finds a match, copy the file in $(OUT_DIR).
    For example if you are compiling for arm-linux-androideabi and require a library named hello, Cargo will look for native/arm-linux-androideabi/libhello.* and native/arm-linux-androideabi/hello.*.
  • If we are not cross-compiling (ie. no --target option has been passed) and if we didn't find anything in the previous step, try to invoke pkg-config. If a result is found, pass the path of the library to rustc with the -L flag.
  • If we didn't find anything in the previous steps, try to find a pre-compiled version of the library in $(HOME)/.rust. There is no additional step because rustc already looks into $(HOME)/.rust. Cargo must still check if the library is present.
  • If nothing is found, print an error and stop. An additional flag --ignore-missing-native-libs could be added to Cargo in order to bypass this error and print a warning instead.

This system could be improved in the future by adding more entries to native, for example allowing the user to choose whether to link statically or dynamically.
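To make the proposed lookup order concrete, here is a minimal sketch of it as a decision function. Everything here is illustrative: `Resolution` and `resolve` are not Cargo APIs, and the actual filesystem and pkg-config probes are abstracted into booleans so only the ordering is shown.

```rust
// Illustrative sketch of the proposed resolution order for one [[native]] entry.
#[derive(Debug, PartialEq)]
enum Resolution {
    Bundled,   // pre-compiled copy in $package_root/native/$target
    PkgConfig, // found via pkg-config (only tried when not cross-compiling)
    HomeRust,  // pre-compiled copy in $(HOME)/.rust
    Missing,   // error (or a warning with --ignore-missing-native-libs)
}

fn resolve(bundled: bool, cross_compiling: bool, pkg_config_hit: bool, in_home_rust: bool) -> Resolution {
    if bundled {
        Resolution::Bundled
    } else if !cross_compiling && pkg_config_hit {
        Resolution::PkgConfig
    } else if in_home_rust {
        Resolution::HomeRust
    } else {
        Resolution::Missing
    }
}

fn main() {
    // When cross-compiling, the pkg-config step is skipped entirely.
    println!("{:?}", resolve(false, true, true, false));
}
```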

@huonw
Member

huonw commented Sep 21, 2014

Would an accurate summary of the first section essentially be: use Rust as a scripting language for Cargo?

(As a complete bystander this seems like a sensible approach, especially if we build up libraries that make the common tasks required there nice.)

@tomaka
Contributor Author

tomaka commented Sep 21, 2014

Would an accurate summary of the first section essentially be: use Rust as a scripting language for Cargo?

Yes :)

@mahkoh

mahkoh commented Sep 21, 2014

This sounds like a less powerful version of what I wanted for #427:

  1. (Instead of/Additionally to) Cargo.toml, people can add a Configure.rs to their project root.
  2. When you run cargo configure, and if Configure.rs has been modified since the last time, Configure.rs is compiled and run.
  3. The binary can then generate a Cargo.toml file or drive the compilation itself, possibly depending on the flags passed to it.
  4. This is made possible by automatically linking Configure.rs against a library provided by Cargo that allows you to do certain things with easy one-liners:
    • Download things
    • Call external programs
    • Detect native libraries, e.g., cargo::find("curl") will run pkg-config on Linux and do whatever is appropriate on other systems
    • Use rustc

Add a native array in Cargo.toml that specifies a list of non-rust libraries that must be present for this project to compile.

Make them optional and you have #427.

@alexcrichton
Member

Thanks for taking the time to write this up, it looks quite nice!

Here's some thoughts I have on this:

  • Only having one build command was never the long-term plan for cargo, this is just a stepping stone while the full feature set is implemented. Namely Add support for platform-specific build commands #552 is planned which will resolve some of the cross-platform issues you mentioned.
  • It may also be worth mentioning some downsides to a "rust build system" (in addition to the upsides)
    • This does not yet exist, and it is likely very difficult to do well.
    • Writing a Makefile is quite familiar to anyone on unix, whereas this would be "one more build system"
    • Betting heavily on a nonexistent system can often be risky.
  • Relying on common dependencies for build steps sounds amazing

The native section I think is much more nuanced and very tricky to get right. It's probably worth separating it out from the build command proposal. For example, it needs to address things like:

  • Should the library be static or dynamic?
  • What to do when pkg-config isn't available?
  • What to do for different required libraries per-platform?

I believe @wycats and @carllerche have been working on a proposal for a modification to how cargo treats native libraries. I don't know the details of it yet but it is likely highly inter-related to this.

@tomaka
Contributor Author

tomaka commented Sep 21, 2014

The native section I think is much more nuanced and very tricky to get right.

I agree, but I really think that Cargo should have an explicit mechanism for this. Custom build commands should not be common, and shouldn't be used to build static libraries "just in case" the user doesn't have them.

I'd be happy to hear and work on another proposal for this.

Should the library be static or dynamic?

If the crate ships with a precompiled library, use it. The user chooses to put libraries in there, so they are not supposed to add multiple ones with the same name. An error should probably be printed if there are multiple matches.
The same applies to the $(HOME)/.rust step.

For pkg-config, since we are not cross-compiling, we can safely assume that the binary is going to be run on the same or a similar system. Linking dynamically shouldn't be a problem.

What to do when pkg-config isn't available?

Simply ignore that step.

We only have three situations:

  • Your local unix system where pkg-config is available and you can use whatever library you have installed.
  • Windows, where there is no mechanism to detect libraries and the common mechanism is to build everything, so we ignore that step.
  • Cross-compiling, where any library available in the host system would probably be incompatible, so we ignore that step.

@vhbit
Contributor

vhbit commented Oct 9, 2014

@mahkoh

(Instead of/Additionally to) Cargo.toml, people can add a Configure.rs to their project root.
When you run cargo configure, and if Configure.rs has been modified since the last time, Configure.rs is compiled and run.

It actually doesn't work well, because:

  1. It forces the developer of a library to think about building and packaging on all possible platforms, which in most cases isn't their intention.
  2. It forces the user of a library to use what the developer "provided", even though it might not be what they need, it might lag behind, and a lot of other things.

Pre-built libraries are definitely a much more flexible solution to the problem - when you need a lib on Linux you will most probably use {apt,yum,pacman,whatever} to install it rather than downloading and building it from source.

@shadowmint

This proposal would be a significant improvement on the adhoc solution for 'requires mingw' in #552

However, I would suggest slightly different semantics that make this more consistent and plausible.

  1. Make build rules simple and consistent through use of [[bin]]

To add a 'compile' step to a project, add a compiler binary to the project:

[[bin]]
name = "mylib-builder"
path = "build/bin.rs"
  2. The 'build' target of a project may now optionally depend on a local [[bin]] target

Rather than removing 'build' (a bad idea, as it breaks compatibility), we add the /target/ folder to the search PATH for invoking the build command.

If people want to keep using make to build their libraries, they can. It is bad to forbid people from doing this just because some people have strong opinions about platform compatibility (myself included :P); the developer should always be in charge of making their own decisions.

  3. Start building some strong third-party cargo libraries to support invoking various build tools.

Having a .rs file that 'has to do everything' (ie. configure.rs) is probably a terrible idea.

However, I can easily imagine having a series of build libraries:

cargo-build-win32
cargo-build-cmake
cargo-build-autoconf
cargo-build-msbuild
etc.

As simple stand-alone dependencies, these give us a rich build ecosystem that uses 3rd-party tools correctly without having to fold all of that functionality into cargo itself, which would bloat cargo and never work properly.

We can now use per-platform configuration as normal in rust in the build binary without having to fold all of that stuff into cargo as well (which is why #552 is a bad idea).

By using external packages to manage the build wrappers (that essentially invoke cmake, msbuild, etc) the complexity that cargo has to look after is significantly reduced.

Resulting toml file

Might look something like:

[package]
name = "watch"
version = "0.1.1"
authors = [ "foo" ]
build = "watch-build"

[dependencies.glob]
git = "https://github.com/shadowmint/rust-glob"
tag = "0.1.6"

[dev-dependencies.cargo-cmake]
git = "https://github.com/rust-lang/cargo-cmake"
tag = "0.0.1"

[[bin]]
name = "watch-build"
path = "build/bin.rs"

[lib]
name = "watch"
path = "src/lib.rs"

With a build/bin.rs that does whatever it has to do, depending on the libraries:

#[cfg(windows)]
mod build {
  extern crate msbuild;
  use self::msbuild::Builder;
  pub fn build() {
    let mut b = Builder::new();
    b.seek_visual_studio();
    b.set_build_root("...");
    b.build();
  }
}

#[cfg(unix)]
mod build {
  extern crate cmake;
  use self::cmake::CMake;
  pub fn build() {
    let mut b = CMake::new("root/");
    if b.configure() {
      b.build();
      if cfg!(test) {
        b.test();
      }
      b.install();
    }
  }
}

fn main() {
  build::build();
}

There's so much richness you gain from this; small stand alone build tools can be maintained for build systems such as scons, cmake, etc. and the developer of a package can choose what to use. All of this stuff already exists.

It also allows for support of cfg options when invoking cargo.

Tangibly actionable items

It's also easy to implement:

  1. Look in target/ for a binary first before calling the build command

  2. Pass additional env variables to the build command

  3. Make sure that certain 'bin' targets are compiled first (see below)

Unresolved

  1. For this to work, (3) must be implemented somehow

So the build order is:

  • build dependencies
  • build 'build binaries'
  • run build command
  • build libraries
  • build binaries

ie. Something is needed to mark certain [[bin]] targets as having build priority.

Perhaps [[dev-bin]] is appropriate for this? I'm not sure exactly what the right thing for it is.

Anyway, I'm 100% for this solution one way or another. The child-project that gets independently built has various other good properties too (does not clutter cargo.toml with build dependencies) but I prefer the above approach.

  2. C dependency version conflicts / duplicate builds.

I've already encountered situations where Rust libraries that build their own C dependencies result in multiple copies of a C dependency being built, causing issues.

One solution to this problem would be to define a 'staging area' in target/ for dependent libraries to be put into, so that the build libraries could first look in that folder for a matching library before building their own copy.

This is effectively zero-cost, as all it requires is documentation and an extra env variable of some kind that the build libraries can use, but it is worth talking about.

(fwiw, the current system doesn't take this into account at all, and basically just assumes you already use the system library)

This would allow someone to write (for example) rust-libuv, a Rust-free C wrapper package that provides a local cross-platform libuv build without any 'specific' Rust binding to it.

Other packages could then depend on this and implement their own custom or partial wrappers (eg. the file watch api) without requiring multiple builds of the same C library in different packages.
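A minimal sketch of that staging-area lookup, assuming a hypothetical CARGO_NATIVE_STAGING environment variable (not something Cargo provides) pointing at the shared directory:

```rust
use std::env;
use std::path::{Path, PathBuf};

// Sketch of the staging-area idea: before building its own copy of a C
// dependency, a build library first checks a shared staging directory.
// CARGO_NATIVE_STAGING and the naming scheme here are purely illustrative.
fn find_staged(staging: &Path, lib: &str) -> Option<PathBuf> {
    let candidate = staging.join(format!("lib{}.a", lib));
    if candidate.exists() { Some(candidate) } else { None }
}

fn main() {
    let staging = env::var("CARGO_NATIVE_STAGING").unwrap_or_else(|_| "target/native".to_string());
    match find_staged(Path::new(&staging), "uv") {
        Some(path) => println!("reusing staged copy at {}", path.display()),
        None => println!("no staged copy found; building from source"),
    }
}
```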

@shadowmint

...and just to be clear, I'm not suggesting that cargo start supporting cargo-foo build libraries; absolutely not. This is the sort of thing that should fall to the 3rd party developer ecosystem as required.

I really think going down the path of forcing cargo to be responsible for searching for native binaries on every possible combination of system and library is a bad idea; it'd be massively complex to implement.

Rather, someone can write a rust-check-dynamic-dep library that the build tool (if it assumes a dynamic library must be present) can use to query the OS and find a target.

...or, you can rely on the native dependency resolution from the build tool if it supports it (eg. cmake)

@mahkoh

mahkoh commented Oct 10, 2014

@vhbit: I don't know where you got the idea it would build native libraries.

@vhbit
Contributor

vhbit commented Oct 10, 2014

@mahkoh from the second part of the proposal (the native section) + my own cross-compiling experience. In most cases build is used just for preparing native (asm/C) libraries/wrappers before building the Rust library which depends on them.

@shadowmint

fwiw, prebuilt binaries are significantly more convenient in many cases, but also technically much more complex to manage correctly and securely (for example, do you invoke yum/apt to install a library? or download your own prebuilt copy from somewhere? does the prebuilt binary need to be signed? do you require root access to build a crate now?). Have a look at the complexity of wheel & pip.

I would strongly recommend that any plan for prebuilt binaries be deferred until after a solution for built sources is finalized.

I imagine the best solution would be for a package to search for a local copy, then attempt to download a binary... and finally build its own if no local one is found; but I feel the complexity of that workflow is something that should be split out of cargo into dependency libraries with their own custom binary repos and build tools as required.

If we want to standardize the resolution behavior and host binaries centrally at some point, that's something that can happen in a supported cargo-build-tools crate at a later date.

@vhbit
Contributor

vhbit commented Oct 10, 2014

I imagine the best solution would be for a package to search for a local copy

I'd say that the only thing I'd like to have is exactly this - flexibility in describing where to find native deps locally. Everything else (an extra configure.rs, or autodownloading and autobuilding) adds too much complexity. Maybe "later... someday..." :-)

@shadowmint

@vhbit > I'd say that the only thing I'd like to have is exactly this ...

I know, but that's a naive approach to the problem.

On windows, it'll never be installed.

Additionally, on other relatively unix-like systems (OS X, say) without a package manager it will also never be installed... or it may be installed in multiple places with different versions and different ABIs (brew / port / fink / xcode).

If you're trying for a cross compile (say for android), it'll also never be installed.

Linking against system libraries only works on linux when you're building for your own local machine. It's a relatively useful solution for such a tiny subset of people that I'm not actually sure it's worth explicitly supporting in cargo.

@vhbit
Contributor

vhbit commented Oct 11, 2014

It's a relatively useful solution for such a tiny subset of people I'm not actually sure it's worth explicitly supporting in cargo.

  1. cargo is a Rust package manager/build tool. I'd be happy if it did more, but I can't see why it should solve the problem of delivering native libraries (and/or resolving versioning issues). That's the responsibility of platform tools, and cargo should just build on top of them.
  2. Cross-platform native libraries are mostly required at terminating nodes (for example when you're terminating a tree of libraries into an application, or a chain of commits into a release). And those tasks are usually followed by other platform-specific tasks like packaging, which again usually involve platform tools.
  3. When native libraries are required at internal nodes (for example a static library required during the development process), a local one can be used no matter how it was delivered - built from source, installed by a package manager, and so on.

I see cargo as a tool in a build chain which should just build everything related to Rust. It should care only about this aspect of the development process. Everything else should be done elsewhere, especially considering that except for OS X/iOS[1], every other OS can be freely[2] and automatically[3] set up in a VM and the whole "terminating" process can be run there. Or not. It depends on your build setup and allows flexibility by composing tools.

Delivering from one machine to all kinds of platforms by just invoking cargo sounds extremely tempting, and I hope someday it will be possible as the number and maturity of available Rust libraries increases (read: after 1.0).

[1]: actually OS X works pretty fine in VM too, but legally you can run it only on Apple hardware.
[2]: there is a free Windows VM package for web developers. It is active 30 days, but that should be enough for building and packaging any kind of project.
[3]: at least VirtualBox provides pretty good scripting for creating and controlling VM, so just get a script for installing all prerequisites, starting build process and have fun.

@shadowmint

but I can't see why it should solve problems of delivering native libraries

...because practically speaking, rust doesn't work otherwise.

There are no rust libraries that work without this.

Cargo doesn't even compile without c dependencies.

What I'm proposing here is that the resolution of library dependencies be delegated to 3rd party libraries that can either seek, build or download the required binary to link against.

If you don't want to do that, you can ignore the build step (as most people currently do) and assume the library will be present on the system; that's a poor solution that results in non-robust code, but hey, you're welcome to do that if you like.

Let's not have an argument here about whether libraries and binaries should statically or dynamically link to their C dependencies (which is where we're heading). Instead, if you have any tangible objections to delegating the library path resolution to a 3rd-party-built binary as part of the build step, let's hear those instead.

I'm not interested in grandstanding. I just want cargo to work better.

(case in point: today I can't build cargo, because of something to do with the way libgit builds. What do I do? Fork alex's git library and update cargo to point at my version? It's failing because it's calling pkg-config and that's returning some strange values on my mac here. Who knows why? I guess I'll start uninstalling and reinstalling brew packages in a second. If the libgit builder was a binary, I could fork it, fix what's wrong with a #[cfg(apple)], and push a fix safe in the knowledge I hadn't broken anything else. Instead, if I want to do that I have to run the build on at least 3 different machines to make sure that it still works on them.)

@wycats
Contributor

wycats commented Oct 11, 2014

@shadowmint fwiw I don't disagree with your broader points. I've been chatting with people about improving the linkage story in Rust and Cargo, and reading all of your (and others') comments here carefully.

I think you'll be happy with what we're thinking about.

@vhbit
Contributor

vhbit commented Oct 11, 2014

@shadowmint

If you don't want to do that, you can ignore the build step (as most people currently do) and assume the library will be present on the system; that's a poor solution that results in non-robust code, but hey, you're welcome to do that if you like.

There are different kinds of delivery. Delivering to the end user is the terminating node, and (see above) in this case I want cargo to build my Rust stuff; everything else will be handled by other tools, i.e. packaging and ensuring all required libraries are installed on the user's machine by either specifying package deps or bundling them.

Delivering to a developer is definitely another problem. And here are the reasons why I'm against solving it in cargo:

  • if it relies on build scripts provided by library developers, it means the library developer should spend their time developing and maintaining them, which might be out of their scope (I'm speaking as a library developer). There is a recipe for building OpenSSL for iOS; as a rust-openssl contributor I wouldn't like to re-write it in Rust.
  • if it relies on cargo internals - I think there are a lot of other issues that time should be spent on now. Configuring library dependencies on a developer machine is a skill I believe every developer already has, plus there are already a lot of well-established tools which help with that task. Moreover, most of the building/configuring recipes available are written for those tools.
  • it is also about culture and openness - while recipes are not related to Rust they can be shared; once they're Rust-only, there is no way to interact with the "outer" world. For example, the script above is easily googled and easily modified by almost anyone. If that script were in Rust, I guess that as an Objective-C developer I would never find it. Or... considering your pkg-config problem - maybe a better solution would be to create/update some brew recipe, which would also benefit many more people than solving it in my_configure.rs.

@alexcrichton alexcrichton added the A-linkage Area: linker issues, dylib, cdylib, shared libraries, so label Oct 13, 2014
@alexcrichton
Member

Alright, I've spent some time sitting down to think about this as well, and @tomaka I'd love to take your ideas and run with them! I'd also like to incorporate some of what others have been saying as well. I've written up a more formal version of this "RFC" with some of the rough bits ironed out around the edges, and I'm curious to get everyone's feedback on it before officially posting it to the rust-lang/rfcs repo. It requires some actual compiler changes, hence the official RFC repo.

Do note however that although some bits here have been ironed out, this is still a work in progress and may have some gaps here and there (please let me know!). I've tagged a bunch of issues with linkage in the cargo repo, and I'd love to basically fix them all in one fell swoop with this RFC; they were my main guidance in writing this up.

gist: https://gist.github.com/alexcrichton/8bb6aba160e12717186a
cargo example: https://github.com/alexcrichton/complicated-linkage-example

cc @mahkoh @vhbit and @shadowmint

@tomaka
Contributor Author

tomaka commented Oct 15, 2014

The modifications to the build command are breaking changes to Cargo. To ease the transition, the build command will be join'd to the root path of a crate, and if the file exists it will be compiled as described above. Otherwise a warning will be printed and the fallback behavior will be executed.

What if the build command is just ./build.sh? Won't this erroneously trigger the new behavior?

None of the third-party crates with "convenient build logic" currently exist, and it will take time to build these solutions.

Is this really a problem? Creating a crate that calls two or three commands shouldn't be that complicated. The biggest problem is probably the lack of a good HTTP client in Rust yet.

Also, this syntax may be confusing when showing this RFC to people:

[build-dependencies]
make = "*"

At first I thought that this was some sort of hardcoded make option in Cargo, and had to re-read the proposal to understand that it was a dependency from the central repo.

I think that it would be better to just write [build-dependencies.make] git = "..." for now, because that's what Cargo users are used to for the moment.

(but maybe it's just because I'm half asleep right now, or because I don't know toml very well)

Other than that, it's much more polished than my proposal. I'm very happy with this :)

@shadowmint

A couple of thoughts:

Prebuilt binaries

For 'pre-built libraries' it would be extremely convenient on windows machines to have some facility to share your local pre-built binary configuration as a package somehow, without directly modifying the .cargo/config

Consider if you had two projects with slightly different local versions of SDL (1.2 and 2.0) that needed to be resolved.

It would be nice to allow these link flags to be delegated to an external configuration file in exactly the same format, on a global basis. eg.

 [target.i686-unknown-linux-gnu.sdl] <--- Resolve SDL using these details
 rustc-flags = "-l static:sdl -L /home/build/sdl/lib"

 [target.i686-unknown-linux-gnu.*]  <--- Delegate all other resolutions for this target
 path = "/home/doug/deps_foo/config"  <-- To be loaded from this file

/home/doug/deps_foo/config with identical formatting:

 [target.i686-unknown-linux-gnu.ssl]
 rustc-flags = "-l static:ssl -L /home/build/root32/lib"
 root = "./dlls/" <--- Notice this is a relative path 

Perhaps this is getting a bit too complicated for what is basically quite a simple and direct system, but it would be tangibly useful to be able to host a 'build dependencies' zip file along with a project.

"If you're building on windows and do not have C build tools installed, download dependencies.zip and set your .cargo/config to contain this extra line of config" <-- This would be extraordinarily useful for working on windows machines.

Some extension of this idea could also be used to implement binary downloads via the package manager in the future.

Tools

One aspect of this that doesn't seem 100% covered is the dependency on the existence of tools and a sane build environment on the system.

If the build script exits with a nonzero exit code, then Cargo will consider it to have failed and will abort compilation.

It would be useful to be able to direct the user of cargo about how the failure happened and how to fix it.

Did the compile fail? Or did the script fail to find a tool that was needed? Or did the script fail to invoke the tool correctly (eg. permissions error)?

It might be nice to have the tool able to exit with a meaningful tip if possible. eg.

cargo:failure-reason=Unable to locate cmake.exe binary on the system PATH
cargo:failure-tip=Check cmake is installed and runnable from the command prompt

This is purely window dressing, but it's probably more useful than cargo spitting out an error about broken streams being unable to write data.
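A quick sketch of how a build script could emit those keys on failure. The cargo:failure-reason/cargo:failure-tip keys are the hypothetical ones suggested above, not part of any existing Cargo protocol, and the PATH lookup is stubbed out:

```rust
// Sketch: on failure, emit a short structured hint instead of relying
// on hundreds of lines of raw tool stderr. The key names are hypothetical.
fn format_failure(reason: &str, tip: &str) -> String {
    format!("cargo:failure-reason={}\ncargo:failure-tip={}", reason, tip)
}

fn main() {
    let cmake_found = false; // stand-in for a real PATH lookup
    if !cmake_found {
        println!("{}", format_failure(
            "Unable to locate cmake binary on the system PATH",
            "Check cmake is installed and runnable from the command prompt",
        ));
    }
}
```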

Async task resolution

It may be naive to assume that all build tasks will resolve in a meaningful length of time.

For example, I can easily imagine a well-intentioned (:P) cmake task spawning a copy of the cmake-gui and waiting for the user to set configuration options and exit before continuing with the build.

This would effectively hang the build process while we wait for user input.

It may indeed be quite convenient in some cases, but I think this should be explicitly disable-able via a cargo input, eg. NO_WAIT, to inform the build script that the build is being run on jenkins or whatever, and that indefinite async task resolution is not acceptable in this situation.

Overall, great stuff! I really feel like this will make a big difference in working with C dependencies.

@tomaka
Contributor Author

tomaka commented Oct 15, 2014

One thing that would not be possible with this proposal is to use the build command to generate Rust code. I only know three libraries that were doing this, and all three have now transitioned to a plugin.

On the one hand, it's a small drawback because it's much more complicated to write a plugin than it is to write some text to a file. But for the sake of hygiene, in my opinion it's better to simply force people to use plugins when they want to generate Rust code.

Did the compile fail? Or did the script fail to find a tool that was needed? Or did the script fail to invoke the tool correctly (eg. permissions error)?

Why not simply let the script print to stderr?

@shadowmint

Why not simply let the script print to stderr?

Only because compile stderr can be rather messy. C++ compile failures might be hundreds of lines long simply because, say, pkg-config wasn't there.

I'm not strongly moved either way, but rustc does a good job of suggesting resolutions when things don't compile, and I've generally found that a lot more useful than simply printing out what's wrong. An explicit interface for doing so would make cargo able to print a summary of the build failure rather than just dumping an entire log of stderr.

@alexcrichton
Member

@tomaka,

What if the build command is just ./build.sh? Won't this erroneously trigger the new behavior?

Oh dear, good point! I'll clarify that the new behavior will only be triggered for existing files that end in .rs

Is this really a problem? Creating a crate that calls two or three commands shouldn't be that complicated. The biggest problem is probably the lack of a good HTTP client in Rust yet.

It's not necessarily a problem per se, but I just wanted to make sure to highlight that I wasn't intending on writing a set of crates for when the overhaul initially landed. I do expect that solutions in this space will develop quite rapidly!

Also, this syntax may be confusing when showing this RFC to people:

Ah indeed, sounds good to me!

Other than that, it's much more polished than my proposal. I'm very happy with this :)

Thanks so much for taking the time to look it over!


@shadowmint,

For 'pre-built libraries' it would be extremely convenient on windows machines to have some facility to share your local pre-built binary configuration as a package somehow

This is interesting! Is modifying .cargo/config so bad though? The project could ship with a skeleton one that isn't activated by default and then you could copy it into place if you wanted to enable the pre-downloaded pre-built copies of the libraries?

I do think that relative paths are a good idea here.

Do you think that the system as proposed would be good enough for now while leaving the door open to extending it in the future with pointers to other files?

One aspect of this that doesn't seem 100% covered is the dependency on the existence of tools and a sane build environment on the system.

Yeah, my plan was to do basically what we do today, which is to ship stdout/stderr to the user if something fails and rely on the stdout/stderr to say what's missing or what went wrong. Metadata like failure-foo seems totally plausible though, and Cargo could certainly grow support for that over time!

Async task resolution

Could you elaborate a little more on what you're thinking to solve this here? All tasks which can be run in parallel (not dependent on the build script) will continue to be run in parallel, but the compilation of the crate at hand can't progress until the build script finishes, so we'll need some form of blocking waiting for the artifacts to become available.

It may be the case that waiting for the cmake process isn't enough, but wouldn't it then be the responsibility of the 3rd-party cmake crate to wait for the jenkins build or wait for the gui to exit?

Overall, great stuff! I really feel like this will make a big difference in working with C dependencies.

And of course, thank you for taking a look at this!


One thing that would not be possible with this proposal is to use the build command to generate Rust code. I only know three libraries that were doing this, and all three have now transitioned to a plugin.

Ah yes this is a point which I would very much wish to address as part of this proposal. It's looking more and more likely that syntax extensions will not be available in stable Rust at 1.0, so we'll need to find alternate solutions for these situations.

Do you know if these are more complicated than just generating a wad of code on average? If that's all they do, then I've been thinking of proposing:

// Like `include!`, but appends the second argument to the value of the environment 
// variable specified by the first argument.
include_env!("OUT_DIR", "generated.rs");

That would allow cargo crates to generate a file into OUT_DIR and then include it in their code normally via include! basically.

@alexcrichton
Copy link
Member

Ah, thinking more on it I think we can leverage the already-existent include! macro like so:

include!(concat!(env!("OUT_DIR"), "/generated.rs"));

@shadowmint
Copy link

@alexcrichton I think the proposal is good enough to go with as it stands. Using ~/.cargo/config certainly works for now; I'm happy enough to just write a few little scripts to swap versions of the config file in and out.

Re: tasks, I was initially proposing that cargo might be invoked with a flag like:

cargo build --build_timeout=3600

which would pass an additional flag, NO_WAIT=true, to the build script via environment variables.

That would hint to the build script that if user interaction is required for some reason, assume default values or fail the build.

Specifically when interacting with command line tools, it's not uncommon for a tool (bower, say) to suddenly decide that when you run it, it needs to wait for user input before it decides what it's going to do (e.g. for package resolution, or, in bower's case, because they suddenly forced everyone to opt in or out of anonymous usage analytics).

Practically speaking this means that if cargo invokes a build script and that script takes longer than the build_timeout value in seconds, cargo would terminate the build process and fail the build.

Most CI servers are smart enough to terminate the build processes they run (ie. this is configurable at a higher level), so perhaps folding this into cargo is overkill unless it actually turns out to be a problem.

We do, however, need some way of passing arbitrary flags to the build script from the command line, I suspect.

cargo build -DFOO=BAR

Seems clunky. ...but so does using a feature for a build detail. For example, say I need to pass a username and password to a build script to access a remote repository to clone C source code from.

Do I update the compile.rs with the username and password (always a bad idea), or somehow pass this via the command line?

@vhbit
Copy link
Contributor

vhbit commented Oct 16, 2014

@alexcrichton looks pretty good for me.

I have a question regarding features though. For example, right now OpenSSL provides tlsv1_1 and tlsv1_2 features because the version bundled with OS X is pretty old and doesn't have the required symbols; if you want them, you have to install a newer version yourself. Now, with openssl split into openssl and openssl-sys, the symbols for tlsv1_1 and tlsv1_2 must be in openssl-sys, right? The question is whether it is possible to "transfer" features from openssl to openssl-sys without making the user directly aware of openssl-sys.

A workaround is to create an openssl-modern-tls-sys crate which contains only those symbols; this dependency would be triggered by tlsv1_1/tlsv1_2 in openssl.

I'm not sure how often native deps might import different symbols (feature- or platform-dependent), so this could be considered a corner case that doesn't require an immediate solution and should just be kept in mind.

@alexcrichton
Copy link
Member

Yeah we want to be able to reexport features from one package to another: #633 (comment)

I've also now posted this as an RFC: rust-lang/rfcs#403

@alexcrichton alexcrichton added the E-hard Experience: Hard label Oct 20, 2014
@alexcrichton
Copy link
Member

Done! #792

@kornelski
Copy link
Contributor

I appreciate you care about good Windows support.

However, this change doesn't help me support Windows. I don't support Windows because of the overwhelming complexity of integrating with an unfamiliar system and an outdated C compiler. I don't know how to properly integrate with Visual Studio projects, in Rust or any other language.

When you tell me to use Rust instead of bash, I don't even know what I'm supposed to do. The docs show a horrible example: a bash 2-liner rewritten as 20 lines of bad Rust code plus a hypothetical Rust build system, which I'd rather not use even if it existed.

I have to compile C libraries that—for better or worse—require ./configure && make. There is no way I'm going to write and maintain my own portable Rust version of somebody else's autohell. If a library requires a bash script and gcc to be built, that's what I'm going to use.

Instead of making it harder to use non-Rust build systems, you should be embracing them! Almost by definition every 3rd party code already comes with build scripts for all platforms it supports.

For example libjpeg-turbo uses autotools on Unix and cmake on Windows. I wish I could do something like this:

build="./configure && make"
build.windows="cmake"

Some of my own tools can use Cocoa on OS X (and on Windows I require MinGW anyway because MSVC's C99 support is awful). I wish I could set:

build="make"
build.osx="make USE_COCOA=1"

libpng uses Makefiles on Unix and has a Visual Studio project for Windows. Could you support that?

build.unix="./configure && make"
build.windows="projects/visualc71/libpng.vcproj"

Instead of having a makefile run curl for example, you can instead write a binary that uses a Rust http library

OMG, NO!!! I don't want to deal with downloads either way! I've got better things to do than invent yet another half-assed downloader. Why can't Cargo handle downloading and unarchiving for me!?

[[dependency]]
name = "lib_not_cargo"
tarball = "http://example.com/release.tgz"

or

[[dependency]]
name = "tool_unaware_of_rust's_existence"
git = "git://example.com#v1.0.0"

and don't require Cargo.toml to be present! I can't ask every C library to add Cargo.toml to their distribution just so I can use Cargo to download the package, but I don't mind adding extra metadata in my Cargo.toml.

I want to run an existing build script from a downloaded tarball. It seems to me that it's a really basic use-case for a build system interfacing with C libraries. It can be done in 2 lines of bash. I'd like Cargo to make it even easier and more reliable for me, rather than stand in the way and require hundreds of lines of needlessly reinvented code for the same thing.

@shadowmint
Copy link

@pornel please try to stay constructive. Raging doesn't help anything.

This issue is closed; if you have a specific request, open a new issue for it.

Broadly speaking, to address your concerns:

Cargo is not cmake. It does not require a CMakeLists.txt in the target C library. In fact, this is exactly the opposite of what has been implemented.

Rather, 3rd parties (eg. you or me) can create sys-foo packages. These packages invoke 3rd party build tools (msbuild, cmake, gcc, etc) as required on a per platform basis.

This provides a consistent rust dependency chain for c libraries.

build="make"
build.osx="make USE_COCOA=1"

is conceptually simpler than using Process to spawn your choice of 3rd-party tool, but it's also extremely platform-specific. What about the choice of architecture? What platforms do you support? Are build.linux, build.unix, and build.bsd all required?

 I want to run an existing build script from a downloaded tarball.

So write a script that does that. It's not that hard, and once it's been done once, everyone in the rust ecosystem benefits from it.

Once more: build.rs is not designed to invoke a c compiler directly.

It is a way to have a cross platform wrapper that uses the appropriate rust tools (eg. #[cfg(win)]) to invoke 3rd party build tool as appropriate for the platform.

If you have a specific request for changed functionality to make downloading packages / untaring things / some other obscure usecase here / ??? please file a specific issue relating to the use case and how you would like to see it resolved.
