hi! I'm curious what your plans are, so far
you can definitely do codegen "on the second run" right now, by relying on incremental caches
and we want to optimize those, going forward
@eddyb so far my plan was not to use incremental at all because I didn't want the incremental inputs to dirty the cache entries
once parsing is incremental, unchanged sources would probably mean it goes straight to codegen the second time, so you only have the overhead of loading the caches into memory (which is one of the things we want to optimize)
I'm not sure what you're referring to
what are "incremental inputs" and what "cache entries" would be dirtied?
cc @mw @Zoxc @nikomatsakis
so the short version of this is that you tell the tool which command to run and what files it needs in order to run that command, on either the local machine or another machine. the naive behavior is to rerun whenever any of those inputs changes.
if you sync incremental state you could be doing
cargo check locally and codegen remotely
potentially very efficiently
@eddyb I want to avoid potentially doing anything at all locally
my kneejerk reaction is "that's an antiquated design, you need more integration", but it might still work
okay, sure, what's the problem then?
naively applying incremental just means some of those commands take less time to run
clever use of incremental would result in distributed deduplication of work
kind of like how "parallel rustc" does queries, except cross-machine instead of cross-thread
I still don't know what you mean by "incremental inputs" or "cache entries"
@eddyb when incremental reruns, it is taking advantage of the results of a previous build
sort of, but less "results" and more "intermediary computations"
it's really just persisted memoization
almost the same thing as rmeta, but with a newer design
no, the key difference is that the data is different depending on the last thing that you compiled
the result of the previous build cannot be an input to the current build process
wait, what is this even for?
it seems like you have a different use case of "distributed compilation" in mind
is this 100% batch builds?
can you clarify what you mean by "batch builds"?
like, with a strong requirement of starting from scratch every time?
Yes, previous compilation outputs shouldn't affect the results of the current compilation.
okay then that's not at all what I was thinking of, sorry
so why would you even need split compiles? to run codegen on a different machine?
you can still do what I said, you'd just be starting incremental from scratch and using it only once
rustc --emit=metadata or w/e, in incremental mode, and copy the incremental cache to another machine
you wouldn't be using it across builds, but within one build
(rmeta doesn't have enough information to do codegen, but the incremental caches do)