RFC: Deprecate the build= command #610
Comments
Would an accurate summary of the first section essentially be: use Rust as a scripting language for Cargo? (As a complete bystander this seems like a sensible approach, especially if we build up libraries that make the common tasks required there nice.)
Yes :)
This sounds like a less powerful version of what I wanted for #427:
Make them optional and you have #427.
Thanks for taking the time to write this up, it looks quite nice! Here are some thoughts I have on this:
I believe @wycats and @carllerche have been working on a proposal for a modification to how cargo treats native libraries. I don't know the details of it yet, but it is likely highly inter-related to this.
I agree, but I really think that Cargo should have an explicit mechanism for this. Custom build commands should not be common, and shouldn't be used to build static libraries "just in case" the user doesn't have them. I'd be happy to hear and work on another proposal for this.
If the crate ships with a precompiled library, use it. The user chooses to put libraries in there, so they are not supposed to add multiple ones with the same name; an error should probably be printed if there are multiple matches. For pkg-config, since we are not cross-compiling, we can safely assume that the binary is going to be run on the same or a similar system, so linking dynamically shouldn't be a problem.
Simply ignore that step. We only have three situations:
It actually doesn't work that well, because:
Pre-built libraries are definitely a much more flexible solution to the problem: when you need a lib on Linux, you will most probably use {apt,yum,pacman,whatever} to install it rather than downloading and building it from source.
This proposal would be a significant improvement on the ad-hoc solution for 'requires mingw' in #552. However, I would suggest slightly different semantics that make this slightly more consistent and plausible.
To add a 'compile' step to a project, add a compiler binary to the project:
Rather than removing 'build' (bad idea, breaks compatibility), we add the /target/ folder to the search PATH when invoking the build command. If people want to keep using make to build their libraries, they can. It is bad to forbid people from doing this just because some people have strong opinions about platform compatibility (myself included :P); the developer should always be in charge of making their own decisions.
Having a .rs file that 'has to do everything' (ie. configure.rs) is probably a terrible idea. However, I can easily imagine having a series of build libraries:
As simple stand-alone dependencies, we gain a rich build ecosystem that uses 3rd party tools correctly without having to fold all of that functionality into cargo itself, which would bloat cargo and never work properly. We can use per-platform configuration as normal in Rust in the build binary, without having to fold all of that stuff into cargo as well (which is why #552 is a bad idea). By using external packages to manage the build wrappers (that essentially invoke cmake, msbuild, etc.), the complexity that cargo has to look after is significantly reduced.

Resulting toml file

Might look something like:
With a build/bin.rs that does whatever it has to do, depending on the libraries:
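A sketch of what such a manifest might look like. The build-helper crate name (`cargo-cmake`) and the layout are invented for illustration, not real published crates:

```toml
[package]
name = "mylib"
version = "0.0.1"

[dependencies]
# Hypothetical 3rd-party build-wrapper crate; the name is illustrative.
cargo-cmake = "*"

# The build binary, compiled and run before the library itself.
[[bin]]
name = "build"
path = "build/bin.rs"
```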
There's so much richness you gain from this; small stand-alone build tools can be maintained for build systems such as scons, cmake, etc., and the developer of a package can choose what to use. All of this stuff already exists. It also allows for support of cfg options when invoking cargo.

Tangibly actionable items

It's also easy to implement:
Unresolved
So the build order is:
ie. Something is needed to mark certain [[bin]] targets as having build priority. Perhaps [[dev-bin]] is appropriate for this? I'm not sure exactly what the right thing for it is. Anyway, I'm 100% for this solution one way or another. The child-project that gets independently built has various other good properties too (it does not clutter Cargo.toml with build dependencies), but I prefer the above approach.
I've already encountered situations where Rust libraries that build their own C dependencies result in multiple copies of a C dependency being built, causing issues. One solution to this problem would be to define a 'staging area' in target/ for dependent libraries to be put into, so that the build libraries could first look in that folder for a matching library before building their own copy. This is effectively zero-cost, as all it requires is documentation and an extra env variable of some kind that the build libraries can use, but it is worth talking about. (fwiw, the current system doesn't take this into account at all, and basically just assumes you already use the system library.) This would allow someone to write (for example) rust-libuv, a rust-free C wrapper package that provides a local cross-platform libuv build without any 'specific' rust binding to it. Other packages could then depend on this and implement their own custom or partial wrappers (eg. the file watch api) without requiring multiple builds of the same C library in different packages.
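The staging-area lookup described above can be sketched as follows. The environment variable name (`CARGO_NATIVE_STAGING`), the `.a` naming convention, and the helper itself are all invented for illustration; this was never an implemented Cargo feature:

```rust
// Sketch: before building its own copy of a C dependency, a build library
// consults a shared staging directory (exposed through a hypothetical env
// variable such as CARGO_NATIVE_STAGING) for an already-built artifact.

use std::path::{Path, PathBuf};

/// Given the staging dir and the names of artifacts already staged there,
/// return the path to reuse, or None (meaning: build your own copy).
fn find_staged(staging: &Path, lib: &str, staged: &[&str]) -> Option<PathBuf> {
    let wanted = format!("lib{}.a", lib);
    if staged.iter().any(|name| *name == wanted) {
        Some(staging.join(wanted))
    } else {
        None
    }
}

fn main() {
    // In a real build library, `staged` would come from reading the dir.
    let hit = find_staged(Path::new("target/native-staging"), "uv", &["libuv.a"]);
    println!("{:?}", hit);
}
```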
...and just to be clear, I'm not suggesting that cargo start supporting cargo-foo build libraries; absolutely not. This is the sort of thing that should fall to the 3rd party developer ecosystem as required. I really think going down the path of forcing cargo to be responsible for searching for native binaries on every possible combination of system and library is a bad idea; it'd be massively complex to implement. Rather, someone can write a rust-check-dynamic-dep library that the build tool (if it assumes a dynamic library must be present) can use to query the OS and find a target for. ...or, you can rely on the native dependency resolution from the build tool if it supports it (eg. cmake)
@vhbit: I don't know where you get the idea it would build native libraries.
@mahkoh from the second part of the proposal (…)
fwiw, prebuilt binaries are significantly more convenient in many cases, but also technically much more complex to manage correctly and securely (for example, do you invoke yum/apt to install a library? or download your own prebuilt copy from somewhere? does the prebuilt binary need to be signed? do you require root access to build a crate now?). Have a look at the complexity of wheel & pip. I would strongly recommend that any plan for prebuilt binaries be deferred until after a solution for built sources is finalized. I imagine the best solution would be for a package to search for a local copy, then attempt to download a binary... and finally build its own if no local one is found; but I feel the complexity of that workflow is something that should be split out of cargo into dependency libraries with their own custom binary repos and build tools as required. If we want to standardize the resolution behavior and host binaries centrally at some point, that's something that can happen in a supported cargo-build-tools crate at a later date.
I'd say that the only thing I'd like to have is exactly this: flexibility in describing where to find native deps locally. Everything else (extra …)
@vhbit

> I'd say that the only thing I'd like to have is exactly this ...

I know, but that's a naive approach to the problem. On Windows, it'll never be installed. Additionally, on other relatively unix-like systems without a package manager (OS X, say), it will also never be installed ...or it may be installed in multiple places with different versions and different ABIs (brew / port / fink / xcode). If you're trying for a cross-compile (say for Android), it'll also never be installed. Linking against system libraries only works on Linux when you're building for your own local machine. It's a relatively useful solution for such a tiny subset of people that I'm not actually sure it's worth explicitly supporting in cargo.
I see. Delivering from one machine to all kinds of platforms by just invoking … [1]

[1]: actually OS X works fine in a VM too, but legally you can only run it on Apple hardware.
....because practically speaking, rust doesn't work otherwise. There are no rust libraries that work without this. Cargo doesn't even compile without C dependencies. What I'm proposing here is that the resolution of library dependencies be delegated to 3rd party libraries that can either seek, build, or download the required binary to link against. If you don't want to do that, you can ignore the build step (as most people currently do) and assume the library will be present on the system; that's a poor solution that results in non-robust code, but hey, you're welcome to do that if you like. Let's not have an argument here about whether libraries and binaries should statically link or dynamically link to their C dependencies (which is where we're heading). Instead, if you have any tangible objections to delegating the library path resolution to a 3rd party built binary as part of the build step, let's hear that instead. I'm not interested in grandstanding. I just want cargo to work better. (Case in point: today I can't build cargo, because of something to do with the way libgit builds. What do I do? Fork alex's git library and update cargo to point at my version? It's failing because it's calling pkg-config, and that's returning some strange values on my mac here. Who knows why? I guess I'll start uninstalling and reinstalling brew packages in a second. If the libgit builder was a binary, I could fork it, fix what's wrong with a #[cfg(apple)] and push a fix safe in the knowledge I hadn't broken anything else. Instead, if I want to do that I have to run the build on at least 3 different machines to make sure that the build still works on them.)
@shadowmint fwiw I don't disagree with your broader points. I've been chatting with people about improving the linkage story in Rust and Cargo, and reading all of your (and others') comments here carefully. I think you'll be happy with what we're thinking about.
There are different kinds of delivery. Delivering to the end user is the terminating node, and see above: in this case I want … Delivering to a developer is definitely a different problem. And here are the reasons why I'm against solving it in cargo:
Alright, I've spent some time sitting down to think about this as well, and @tomaka I'd love to take your ideas and run with them! I'd also like to incorporate some of what others have been saying as well. I've written up a more formal version of this "RFC" with some of the rough bits ironed out around the edges, and I'm curious to get everyone's feedback on it before officially posting it to the rust-lang/rfcs repo. It requires some actual compiler changes, hence the official RFC repo. Do note however that although some bits here have been ironed out this is still a work in progress and may have some gaps here and there (please let me know!). I've tagged a bunch of issues with gist: https://gist.github.com/alexcrichton/8bb6aba160e12717186a cc @mahkoh @vhbit and @shadowmint |
What if the
Is this really a problem? Creating a crate that calls two or three commands shouldn't be that complicated. The biggest problem is probably the lack of a good HTTP client in Rust yet. Also, this syntax may be confusing when showing this RFC to people:

```toml
[build-dependencies]
make = "*"
```

At first I thought that this was some sort of hardcoded … I think that it would be better to just write … (but maybe it's just because I'm half asleep right now, or because I don't know toml very well).

Other than that, it's much more polished than my proposal. I'm very happy with this :)
A couple of thoughts:

Prebuilt binaries

For 'pre-built libraries' it would be extremely convenient on windows machines to have some facility to share your local pre-built binary configuration as a package somehow, without directly modifying the .cargo/config. Consider if you had two projects with slightly different local versions of SDL (1.2 and 2.0) that needed to be resolved. It would be nice to allow these link flags to be delegated to an external configuration file in exactly the same format, on a global basis, eg. /home/doug/deps_foo/config with identical formatting:
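A sketch of what such a delegated config file could contain, using the links-override format Cargo later standardized for `.cargo/config`. The target triple, the `sdl2` links key, and the paths are illustrative:

```toml
# /home/doug/deps_foo/config : same format as .cargo/config (illustrative).
# Overrides for a native library that a crate declares with `links = "sdl2"`.
[target.x86_64-pc-windows-gnu.sdl2]
rustc-link-search = ["C:/local/SDL2-2.0.3/lib"]
rustc-link-lib = ["SDL2"]
```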
Perhaps this is getting a bit too complicated for what is basically quite a simple and direct system, but it would be tangibly useful to be able to host a 'build dependencies' zip file along with a project. "If you're building on windows and do not have C build tools installed, download dependencies.zip and set your .cargo/config to contain this extra line of config" <-- This would be extraordinarily useful for working on windows machines. Some extension of this idea could also be used to implement binary downloads via the package manager in the future.

Tools

One aspect of this that doesn't seem 100% covered is the dependency on the existence of tools and a sane build environment on the system.
It would be useful to be able to direct the user of cargo about how the failure happened and how to fix it. Did the compile fail? Or did the script fail to find a tool that was needed? Or did the script fail to invoke the tool correctly (eg. a permissions error)? It might be nice to have the tool able to exit with a meaningful tip if possible.

This is purely window dressing, but it's probably more useful than cargo spitting out an error about broken streams being unable to write data.

Async task resolution

It may be naive to assume that all build tasks will resolve in a meaningful length of time. For example, I can easily imagine a well intentioned (:P) cmake task spawning a copy of the cmake-gui and waiting for the user to set configuration options and exit before continuing with the build. This would effectively hang the build process while we wait for user input. It may indeed be quite convenient in some cases, but I think this should be explicitly disable-able via a cargo input.

Overall, great stuff! I really feel like this will make a big difference in working with C dependencies.
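The 'meaningful tip' idea above could look something like this in a build script. The pkg-config probe, the message wording, and the helper are illustrative, not an existing cargo interface:

```rust
// Sketch: check for a required external tool up front and emit an actionable
// message, instead of letting a C++ build dump hundreds of lines of stderr.

use std::process::Command;

fn missing_tool_tip(tool: &str, hint: &str) -> String {
    format!("error: `{}` was not found on your PATH\ntip: {}", tool, hint)
}

fn main() {
    if Command::new("pkg-config").arg("--version").output().is_err() {
        // A real build script would print this and exit non-zero so that
        // cargo could surface the summary instead of raw compiler output.
        eprintln!("{}", missing_tool_tip(
            "pkg-config",
            "install it with your system package manager",
        ));
    }
}
```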
One thing that would not be possible with this proposal is to use the On the one hand, it's a small drawback because it's much more complicated to write a plugin than it is to write some text to a file. But for the sake of hygiene, in my opinion it's better to simply force people to use plugins when they want to generate Rust code.
Why not simply let the script print to stderr?
Only because compiler stderr can be rather messy. C++ compile failures might be 100s of lines long, simply because, say, pkg-config wasn't there. I'm not strongly moved either way, but rustc does a good job of suggesting resolutions when things don't compile, and I've generally found that a lot more useful than simply printing out what's wrong. An explicit interface for doing so would make cargo able to print a summary of the build failure rather than just dumping an entire log of stderr.
Oh dear, good point! I'll clarify that the new behavior will only be triggered for existing files that end in
It's not necessarily a problem per se, but I just wanted to make sure to highlight that I wasn't intending on writing a set of crates for when the overhaul initially landed. I do expect that solutions in this space will develop quite rapidly!
Ah indeed, sounds good to me!
Thanks so much for taking the time to look it over!
This is interesting! Is modifying I do think that relative paths are a good idea here. Do you think that the system as proposed would be good enough for now while leaving the door open to extending it in the future with pointers to other files?
Yeah my plan was do to basically what we do today which is to ship stdout/stderr to the user if something fails, and rely on the stdout/stderr to say what's missing or what went wrong. Metadata like
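Presumably the metadata in question is the set of `cargo:key=value` lines a build script prints on stdout for Cargo to parse. A sketch, with keys following the convention Cargo eventually adopted (treat them as illustrative here):

```rust
// Sketch: a build script talks to Cargo by printing specially prefixed
// `cargo:` lines on stdout; everything else is treated as plain output.

fn link_directives(lib: &str, search_dir: &str) -> Vec<String> {
    vec![
        format!("cargo:rustc-link-lib=static={}", lib),
        format!("cargo:rustc-link-search=native={}", search_dir),
    ]
}

fn main() {
    for line in link_directives("hello", "/path/to/native/libs") {
        // Cargo would parse these lines from the script's stdout.
        println!("{}", line);
    }
}
```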
Could you elaborate a little more on what you're thinking to solve this here? All tasks which can be run in parallel (not dependent on the build script) will continue to be run in parallel, but the compilation of the crate at hand can't progress until the build script finishes, so we'll need some form of blocking waiting for the artifacts to become available. It may be the case that waiting for the
And of course, thank you for taking a look at this!
Ah yes this is a point which I would very much wish to address as part of this proposal. It's looking more and more likely that syntax extensions will not be available in stable Rust at 1.0, so we'll need to find alternate solutions for these situations. Do you know if these are more complicated than just generating a wad of code on average? If that's all they do, then I've been thinking of proposing:

```rust
// Like `include!`, but appends the second argument to the value of the environment
// variable specified by the first argument.
include_env!("OUT_DIR", "generated.rs");
```

That would allow cargo crates to generate a file into `OUT_DIR` and include it.
Ah, thinking more on it I think we can leverage the already-existent macros:

```rust
include!(concat!(env!("OUT_DIR"), "/generated.rs"));
```
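The generation side of this pattern, sketched as a build script. The constant name, the generated contents, and the temp-dir fallback are illustrative; only the `OUT_DIR` convention comes from the discussion above:

```rust
// Sketch: a build script writes a .rs file into OUT_DIR, which the crate
// then pulls in with include!(concat!(env!("OUT_DIR"), "/generated.rs")).

use std::env;
use std::fs;

fn generated_source(value: u32) -> String {
    format!("pub const GENERATED: u32 = {};\n", value)
}

fn main() {
    // Cargo sets OUT_DIR when it runs a build script; fall back to a temp
    // dir so this sketch can run standalone.
    let out_dir = env::var("OUT_DIR")
        .map(std::path::PathBuf::from)
        .unwrap_or_else(|_| env::temp_dir());
    fs::write(out_dir.join("generated.rs"), generated_source(42))
        .expect("could not write generated.rs");
}
```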
@alexcrichton I certainly think that the proposal as-is is good enough to go with as it stands. Using ~/.cargo/config certainly works for now; I'm happy enough to just write a few little scripts to swap versions of the config file in and out. Re: tasks, I was initially proposing that cargo might be invoked with a flag like:
Which would pass an additional flag to the build script. That would hint to the build script that if user interaction is required for some reason, it should assume default values or fail the build. Specifically when interacting with command line tools, it's not uncommon for a tool (bower, say...) to suddenly decide that when you run it, it needs to wait for user input before it decides what it's going to do (eg. package resolution, or, as in bower's case, because they suddenly force everyone to opt in/out of anonymous usage analytics). Practically speaking this means that if cargo invokes a build script and that script takes longer than the 'build_timeout' value in seconds, it terminates the build process and fails the build. Most CI servers are smart enough to terminate the build processes they run (ie. this is configurable at a higher level), so perhaps folding this into cargo is overkill unless it actually turns out to be a problem. We do, however, need some way of passing arbitrary flags to the build script from the command line, I suspect.
Seems clunky. ...but so does using a feature for a build detail. For example, say I need to pass a username and password to a build script to access a remote repository to clone C source code from. Do I update the compile.rs with the username and password (always a bad idea), or somehow pass this via the command line?
@alexcrichton looks pretty good to me. I have a question regarding features though. For example, right now OpenSSL is providing … A workaround is to create … I'm not sure how often native deps might import different symbols (feature or platform dependent), so this could be considered a corner case which doesn't require an immediate solution and should just be kept in mind.
Yeah, we want to be able to reexport features from one package to another: #633 (comment)

I've also now posted this as an RFC: rust-lang/rfcs#403
Done! #792
I appreciate you care about good Windows support. However, this change doesn't help me support Windows. I don't support Windows because of the overwhelming complexity of integrating with an unfamiliar system and an outdated C compiler. I don't know how to properly integrate with Visual Studio projects, in Rust or any other language. When you tell me to use Rust instead of bash, I don't even know what I am supposed to do. The docs show a horrible example of a bash 2-liner as 20 lines of bad Rust code and a hypothetical Rust build system, which I'd rather not use even if it existed. I have to compile C libraries that, for better or worse, require … Instead of making it harder to use non-Rust build systems, you should be embracing them! Almost by definition, every piece of 3rd party code already comes with build scripts for all platforms it supports. For example, libjpeg-turbo uses autotools on Unix and cmake on Windows. I wish I could do something like this:

```toml
build = "./configure && make"
build.windows = "cmake"
```

Some of my own tools can use Cocoa on OS X (and on Windows I require MinGW anyway, because MSVC's C99 support is awful). I wish I could set:
libpng uses Makefiles on Unix and has a Visual Studio project for Windows. Could you support that?
OMG, NO!!! I don't want to deal with downloads either way! I've got better things to invent than yet another half-assed downloader. Why can't Cargo handle download and unarchiving for me!?
or
and don't require Cargo.toml to be present! I can't ask every C library to add a Cargo.toml to their distribution just so I can use Cargo to download the package, but I don't mind adding extra metadata in my Cargo.toml. I want to run an existing build script from a downloaded tarball. It seems to me that it's a really basic use-case for a build system interfacing with C libraries. It can be done in 2 lines of bash. I'd like Cargo to make it even easier and more reliable for me, rather than stand in the way and require hundreds of lines of needlessly reinvented code for the same thing.
@pornel please try to stay constructive; raging doesn't help anything. This issue is closed; if you have a specific request, open a new issue for it. Broadly speaking, to address your concerns: Cargo is not cmake. It does not require a CMakeLists.txt in the target C library. In fact, this is exactly the opposite of what has been implemented. Rather, 3rd parties (eg. you or me) can create sys-foo packages. These packages invoke 3rd party build tools (msbuild, cmake, gcc, etc.) as required, on a per-platform basis. This provides a consistent rust dependency chain for C libraries.
A `build = "..."` one-liner is conceptually simpler than using Process to spawn your choice of 3rd party tool, but it's also extremely platform specific. What about the choice of architecture? What platforms do you support? Are 'build.linux' and 'build.unix' and 'build.bsd' all required?
So write a script that does that. It's not that hard, and once it's been done once, everyone in the rust ecosystem benefits from it. Once more: build.rs is not designed to invoke a C compiler directly. It is a way to have a cross-platform wrapper that uses the appropriate rust tools (eg. #[cfg(windows)]) to invoke a 3rd party build tool as appropriate for the platform. If you have a specific request for changed functionality to make downloading packages / untarring things / some other obscure use case easier, please file a specific issue describing the use case and how you would like to see it resolved.
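The 2-line bash case above, expressed as the kind of cross-platform wrapper this thread proposes. The exact tool arguments are illustrative; a real script would also check exit statuses and fail the build:

```rust
// Sketch: dispatch to "./configure && make" on Unix and to cmake on Windows.

use std::process::Command;

/// Pick (program, args) for the native build (illustrative mapping).
fn native_build_command(windows: bool) -> (&'static str, Vec<&'static str>) {
    if windows {
        ("cmake", vec!["--build", "."])
    } else {
        ("sh", vec!["-c", "./configure && make"])
    }
}

fn main() {
    let (prog, args) = native_build_command(cfg!(windows));
    // A real build script would panic!() on a non-zero status so that cargo
    // fails the build; here we just report the outcome.
    match Command::new(prog).args(&args).status() {
        Ok(status) => println!("{} exited with {}", prog, status),
        Err(e) => println!("could not launch {}: {}", prog, e),
    }
}
```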
The custom `build` command is a very wonky mechanism. It is not cross-platform, it assumes that the user has certain software installed on their system and available in their PATH (usually make/gcc, sometimes curl), it does not support cross-compilation, etc. Basically it is like having a makefile without a configure script.

This is especially problematic on Windows. Because of this command, a lot of Rust libraries depend on the user having MinGW installed (even though rustc no longer requires this), and some libraries, like `glfw-rs`, simply can't be compiled on Windows. I have the sensation that all the efforts spent on improving support for Windows are worthless because of this.

I propose to deprecate the `build` command: at first, print a warning when compiling a crate that uses it, and in a second step remove support for it entirely.

Replacements
This command should be replaced by two complementary mechanisms.

Binary dependencies

Add a `pre-build` string entry in `Cargo.toml`. This string must be in the format `$package/$bin`, where `$package` is the name of a dependency in the `dependencies` array, and `$bin` is the name of a binary found in this dependency. The binary will be compiled and then run by Cargo, similarly to the current `build` command.

The binary would have access to the environment variables that the `build` command currently has access to, plus an additional one named `COMMAND_CARGO_MANIFEST_DIR`, which contains the directory of the Cargo manifest of the dependency (instead of the manifest of the package where the command is being run).

Example:

`/Cargo.toml`:

`/my-compiler/Cargo.toml`:
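A sketch of what the two example manifests might look like. The package name (`my-compiler`), binary name, and layout are invented for illustration:

```toml
# /Cargo.toml : the package that needs a build step
[package]
name = "my-library"
version = "0.0.1"
pre-build = "my-compiler/gen"

[dependencies.my-compiler]
path = "my-compiler"
```

```toml
# /my-compiler/Cargo.toml : the dependency providing the build binary
[package]
name = "my-compiler"
version = "0.0.1"

[[bin]]
name = "gen"
```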
Why is this better than a regular `build` command? Instead of invoking `curl`, for example, you can instead write a binary that uses a Rust http library, which is more cross-platform because it does not require the user to have `curl` in his path.

Add `native`
Add a `native` array in `Cargo.toml` that specifies a list of non-Rust libraries that must be present for this project to compile.

Cargo will process each entry in this order:

- If `X` is the library name, try to find `libX.*` and `X.*` in `$package_root/native/$target`. If it finds a match, copy the file into `$(OUT_DIR)`. For example, if you are compiling for `arm-linux-androideabi` and require a library named `hello`, Cargo will look for `native/arm-linux-androideabi/libhello.*` and `native/arm-linux-androideabi/hello.*`.
- If we are not cross-compiling (ie. no `--target` option has been passed) and we didn't find anything in the previous step, try to invoke `pkg-config`. If a result is found, pass the path of the library to rustc with the `-L` flag.
- Otherwise, fall back to `$(HOME)/.rust`. There is no additional step because rustc already looks into `$(HOME)/.rust`, but Cargo must still check that the library is present.
- If nothing is found, the build fails; a flag like `--ignore-missing-native-libs` could be added to Cargo in order to bypass this error and print a warning instead.

This system could be improved in the future by adding more entries to `native`, for example allowing the user to choose whether to link statically or dynamically.
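The first step of the lookup order described above, sketched as code. The helper function is illustrative, not part of any implemented Cargo API:

```rust
// Sketch: the filename patterns Cargo would search for under
// $package_root/native/$target for a library named `lib`.

fn candidate_patterns(target: &str, lib: &str) -> Vec<String> {
    vec![
        format!("native/{}/lib{}.*", target, lib),
        format!("native/{}/{}.*", target, lib),
    ]
}

fn main() {
    for pat in candidate_patterns("arm-linux-androideabi", "hello") {
        println!("{}", pat);
    }
}
```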