
How to keep only last build's artifacts without having to rebuild everything? #9

Open
Boscop opened this issue Nov 22, 2018 · 16 comments
Labels
question Further information is requested

Comments

@Boscop

Boscop commented Nov 22, 2018

How to keep only the last build's artifacts without having to rebuild everything?

@holmgr
Owner

holmgr commented Nov 22, 2018

Currently you will have to use the stamp and file loading features outlined in the README.
I.e. run `cargo sweep -s`, then `cargo build`, `cargo test`, etc., and finally `cargo sweep -f`.
This is due to limitations in the information provided by Cargo and the way Travis etc handles caching.
Does this not work for you?
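For reference, the stamp/file workflow described above would look something like this (the exact build and test invocations are just examples; adapt them to your project):

```shell
# Record a timestamp; nothing is deleted at this point.
cargo sweep -s

# Build and test as usual so the access times of every
# still-needed artifact in target/ are refreshed.
cargo build
cargo test

# Remove everything in target/ not accessed since the stamp.
cargo sweep -f
```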

@holmgr holmgr added the question Further information is requested label Nov 22, 2018
@Boscop
Author

Boscop commented Nov 23, 2018

A full rebuild takes hours for my project, so I want to avoid that.

@holmgr
Owner

holmgr commented Nov 23, 2018

Well, `cargo sweep -s` will not clean anything; it just generates a timestamp. If you have not already cleaned your project (with `cargo clean`), `cargo build` will go very quickly, but it is needed so that the access times are updated. Since all used artifacts will now have a newer access time than the one in the timestamp file, `cargo sweep -f` will clean only the unused ones.

Unfortunately cargo does not yet output the information needed to allow a single cargo-sweep pass without the intermediate steps, but this will be added as soon as the build-plan flag is stabilized; see #2.

@Boscop
Author

Boscop commented Nov 23, 2018

Thanks, I did this now, and it worked. Unfortunately it also deleted all my frontend artifacts, requiring me to rebuild the whole yew/wasm frontend, including its deps (with `cargo web deploy` into `target/deploy`).
Is there a way to keep the artifacts of the latest build of BOTH backend and frontend? (Both are crates in the same workspace.)

@holmgr
Owner

holmgr commented Nov 23, 2018

Ah, that is not good, and might be a bug with multiple crates in the same workspace.
Just to make sure though: when you built the system (between `cargo sweep -s` and `cargo sweep -f`), did you build both the frontend and the backend, or just the backend?
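To spell that out: every crate whose artifacts should survive needs to be built between the two sweep invocations. Assuming hypothetical workspace members named `backend` and `frontend`, and using the `cargo web deploy` step mentioned above for the wasm side, the sequence might look like:

```shell
cargo sweep -s

# Build BOTH workspace members so the access times of the
# artifacts of each are refreshed before the sweep.
cargo build -p backend
cargo web deploy            # builds the wasm frontend into target/deploy

cargo sweep -f
```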

@Boscop
Author

Boscop commented Nov 23, 2018

At first, I only built the backend between `-s` and `-f`, but now I wanted to try it again, building both in between. BUT: now it builds all the deps of my backend from scratch, as if they were deleted or dirty. That shouldn't happen! It should reuse the artifacts from the last build, since nothing changed and `cargo sweep -f` should have preserved them.

@holmgr
Owner

holmgr commented Nov 23, 2018

Okay, that is very strange. Is your project open source by any chance? In that case I could try to take a look at your project specifically and find the problem.
And might I ask which platform you are on/building for?

Will hopefully have time to look at this over the weekend and integrate a fix as part of 0.3.0 if it is indeed a bug.

@Boscop
Author

Boscop commented Nov 23, 2018

@holmgr Unfortunately it's closed source. I'm building the backend on Win 8.1 x64 with rustc-msvc and the frontend for wasm32-unknown-unknown.

Btw, now this build of the backend failed with hundreds of errors like this:

warning: file-system error deleting outdated file `\\?\D:\projects\myproject\target\debug\incremental\foo-iunoksi4y22j\s-f6xn1j2202-v70o0s-working\3i6cw8dnqbbpva2r.bc.z`: The system cannot find the file specified. (os error 2)

for different files, and then ends with:

thread 'main' panicked at 'failed to open bitcode file `\\?\D:\projects\myproject\target\debug\incremental\foobar-gin5a1xi91ne\s-f6xn1j0ddz-12k5ctb-working\4gav9wcduaqo3mho.pre-thin-lto.bc`: The system cannot find the file specified. (os error 2)', librustc_codegen_ssa\back\write.rs:1821:9
stack backtrace:
   0: std::sys_common::alloc::realloc_fallback
   1: std::panicking::take_hook
   2: std::panicking::take_hook
   3: rustc::ty::structural_impls::<impl rustc::ty::context::Lift<'tcx> for rustc::ty::instance::InstanceDef<'a>>::lift_to_tcx
   4: std::panicking::rust_panic_with_hook
   5: std::panicking::begin_panic_fmt
   6: std::panicking::begin_panic_fmt
   7: <rustc_codegen_llvm::metadata::LlvmMetadataLoader as rustc::middle::cstore::MetadataLoader>::get_dylib_metadata
   8: <unknown>
   9: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_utils::codegen_backend::CodegenBackend>::codegen_crate
  10: rustc_driver::driver::build_output_filenames
  11: rustc_driver::driver::phase_4_codegen
  12: <humantime::wrapper::Duration as core::convert::Into<core::time::Duration>>::into
  13: <humantime::wrapper::Duration as core::convert::Into<core::time::Duration>>::into
  14: <rustc_driver::pretty::NoAnn<'hir> as rustc_driver::pretty::HirPrinterSupport<'hir>>::sess
  15: <humantime::wrapper::Duration as core::convert::Into<core::time::Duration>>::into
  16: rustc_driver::driver::compile_input
  17: rustc_driver::run_compiler
  18: rustc_driver::driver::build_output_filenames
  19: rustc_driver::run_compiler
  20: <rustc_driver::derive_registrar::Finder as rustc::hir::itemlikevisit::ItemLikeVisitor<'v>>::visit_item
  21: _rust_maybe_catch_panic
  22: rustc_driver::target_features::add_configuration
  23: rustc_driver::main
  24: <unknown>
  25: std::panicking::update_panic_count
  26: _rust_maybe_catch_panic
  27: std::rt::lang_start_internal
  28: <unknown>
  29: <unknown>
  30: BaseThreadInitThunk
  31: RtlUserThreadStart
query stack during panic:
end of query stack

error: internal compiler error: unexpected panic

note: the compiler unexpectedly panicked. this is a bug.

note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports

note: rustc 1.32.0-nightly (5aff30734 2018-11-19) running on x86_64-pc-windows-msvc

note: compiler flags: -C opt-level=1 -C panic=abort -C debuginfo=2 -C debug-assertions=on -C incremental --crate-type lib

note: some of the compiler flags provided by cargo are hidden

error: Could not compile `foobar`.
warning: build failed, waiting for other jobs to finish...
error: build failed

:(

What should I do now?

@holmgr
Owner

holmgr commented Nov 23, 2018

Ah, very bad then... :(
Sorry about that; I'm not really sure why it would happen. That said, I will look into Windows/wasm a bit more, since I have not tested those platforms nearly enough.
For now I think `cargo clean` is your only solution.

@holmgr
Owner

holmgr commented Nov 23, 2018

Might be related to rust-lang/rust#48700; tagging it so I can personally keep track of it in this issue.

@Boscop
Author

Boscop commented Nov 23, 2018

Is it really a rustc bug? I thought it was because cargo-sweep deleted some files that were still needed for incremental builds. (I never had "occasional panics" from rustc before, especially not about missing files.)

@holmgr
Owner

holmgr commented Nov 23, 2018

Probably not, but the issue might be that cleaning the incremental folder is not really a safe thing to do. Currently the tool is agnostic about the types of files/folders in target, which might not be safe, even though I have tested it with about a dozen projects.

Again, sorry for any problems I may have caused :(

@Boscop
Author

Boscop commented Nov 23, 2018

Maybe it's because the artifacts from incremental builds aren't touched between `-s` and `-f`, even when they are still needed?

@holmgr
Owner

holmgr commented Nov 23, 2018

Yes, exactly. So for now I might just try to ignore the incremental subfolder.

@holmgr
Owner

holmgr commented Nov 23, 2018

But that would mean those files were not read during the build, even though they are needed by it. Perhaps cargo just checks that they are available without actually reading them, thus not updating the access times. This might be related to #3, but since it does not "crash" it seems unlikely.
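As a rough illustration of the access-time rule being discussed (this is not cargo-sweep's actual code), an artifact counts as "used" when its access time is newer than the stamp; `find`'s `-anewer` test expresses the same comparison against a stamp file:

```shell
set -e
cd "$(mktemp -d)"

touch stamp.timestamp           # the recorded stamp
sleep 1
echo data > artifact.bin        # "build" an artifact after the stamp
cat artifact.bin > /dev/null    # "use" it, which refreshes its access time

# -anewer matches files accessed more recently than stamp.timestamp's
# mtime; negating it lists sweep candidates. Here it prints nothing,
# because the artifact was used after the stamp was taken.
find . -name '*.bin' ! -anewer stamp.timestamp -print
```

If a file is only stat-ed but never read between the stamp and the sweep, its access time stays old and it would show up in that listing, matching the behavior described above.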

@Boscop
Author

Boscop commented Nov 23, 2018

@holmgr Cargo tried to delete a lot of them, at least (hence the many warnings), and it tried to read at least one (hence the error/panic).
