A fun experimental paper from the recent C++ committee mailing - "Are modules fast?" (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1441r1.pdf)
The proposed C++ modules are very similar to our packages and they similarly don't scale to massively parallel builds.
I wonder if Cargo eventually ends up compiling leaf crates multiple times, given enough parallelism, to compensate for deep dependency trees; that would be kind of a reasonable return to the header inclusion model.
Or maybe that's nonsense, unless builds span multiple machines.
FWIW I've often thought that Cargo's model of compilation is basically unsuitable for the "we have infinite parallelism" compilation model (aka distributed compilation)
My gut is that we would have to rethink quite a lot to really take advantage of infinite parallelism; all of the cargo/rustc parallelism we have today is heavily optimized for "reasonable parallelism on one machine"
although that is also starting to show wear and tear with lots of CPUs and big crate graphs...
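The intuition above can be sketched as a toy scheduling model (not real Cargo scheduling, just an illustration): with unlimited workers, a build's wall-clock time is bounded below by the longest dependency chain in the crate graph, so once the graph's width is exhausted, extra cores stop helping. The crate names and unit costs here are made up.

```python
# Toy model: earliest possible finish time of a build with infinite workers
# is the critical path (longest cost-weighted chain) of the dependency DAG.

def critical_path(deps, cost):
    """Earliest finish time over all crates, given dependency edges and costs."""
    memo = {}
    def finish(crate):
        if crate not in memo:
            memo[crate] = cost[crate] + max(
                (finish(d) for d in deps[crate]), default=0)
        return memo[crate]
    return max(finish(c) for c in deps)

# A deep chain of 5 crates vs. 5 independent crates, each costing 1 time unit.
chain = {f"c{i}": ([f"c{i-1}"] if i > 0 else []) for i in range(5)}
wide = {f"w{i}": [] for i in range(5)}
unit = lambda graph: {c: 1 for c in graph}

print(critical_path(chain, unit(chain)))  # 5: infinite parallelism can't help
print(critical_path(wide, unit(wide)))    # 1: fully parallel
```

The point being that deep dependency trees keep the critical path long no matter how many machines you throw at the build.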
I have a machine with 14 physical cores and the difference between C++ compilation and Rust compilation is huge (e.g. in Firefox). Compilation of Rust code doesn't come close to saturating the cores.
That's my primary motivation for working on pipelining.
IMO fixing https://github.com/rust-lang/rust/issues/54712#issuecomment-485656192 and https://github.com/rust-lang/rust/issues/58485#issuecomment-485658943 would provide a better improvement than pipelining
Rust also saturates all the cores during codegen, and ofc during running tests
but this is with incremental. for batch compilation, pipelining should be a clear win
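For anyone following along, here's a hedged sketch of why pipelining helps batch builds: rustc can emit a crate's metadata (.rmeta) before finishing codegen, so a dependent crate can start compiling as soon as the metadata is ready rather than waiting for the whole compile. The phase durations and the 3-crate linear chain below are made-up numbers for illustration only.

```python
# Hypothetical per-crate phase durations (not measured from rustc).
META, CODEGEN = 1.0, 3.0
CHAIN = 3  # a linear chain: a -> b -> c

def batch_time(n, meta, codegen):
    # Without pipelining, each crate waits for its dependency's full compile,
    # so the chain serializes metadata + codegen for every crate.
    return n * (meta + codegen)

def pipelined_time(n, meta, codegen):
    # With pipelining, only the metadata phase sits on the critical path
    # between crates; all the codegen phases overlap, and the wall clock
    # ends when the last crate's codegen finishes.
    return n * meta + codegen

print(batch_time(CHAIN, META, CODEGEN))      # 12.0
print(pipelined_time(CHAIN, META, CODEGEN))  # 6.0
```

Under these toy numbers the chain's wall-clock time halves, which matches the intuition that pipelining is a clear win for batch compilation of deep graphs.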
(I assumed you meant "compiling rustc" but that's not all the Rust code out there, heh)
@eddyb your improvements are rustc-specific though I think, right? They're not generally applicable to crate graphs in the large?
that last message from me should've started with "oops"
(I had misread what @nnethercote had said. and now I can't edit my messages to make it even clearer)