Stream: t-compiler/wg-parallel-rustc

Topic: 2020-01-09 meeting


Alex Crichton (Jan 09 2020 at 17:04, on Zulip):

Action items from today:

Santiago Pastorino (Jan 09 2020 at 17:05, on Zulip):

have you recorded the meeting?

Alex Crichton (Jan 09 2020 at 17:07, on Zulip):

@Santiago Pastorino oh rats we forgot to record :(

Santiago Pastorino (Jan 09 2020 at 17:24, on Zulip):

no worries, should have asked for that before

cuviper (Jan 09 2020 at 23:30, on Zulip):

RE the effect of a custom lazy spawn handler, I found only a few ways that rayon would actually notice that threads are missing:

  1. if you don't actually spawn _any_ threads, then calls into the threadpool will block forever
  2. Registry::wait_until_primed will block until all threads have checked in from their main loop. This is only called from ThreadPoolBuilder::build_global, which doesn't affect rustc, and frankly I don't think we really need it anyway. The loose justification is just to help warm up for benchmarking purposes.
  3. Registry::wait_until_stopped will block until all threads have indicated that they've exited their main loop. This is #[cfg(test)] only on vanilla rayon, but rustc-rayon calls this at the end of ThreadPoolBuilder::build_scoped to make sure they all call the release handler before we exit.
    - if rustc uses a fully custom spawn handler with ThreadPoolBuilder::build instead, it could handle that differently too (see the sketch after this list)

  4. rustc-rayon's deadlock detection will count all threads as "active" by default, so those lazy threads will keep the deadlock_check from ever triggering.
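
For reference, a minimal sketch of the spawn-handler hook being discussed, assuming rayon's ThreadPoolBuilder::spawn_handler API; the thread count and names are illustrative only. This version still spawns every worker eagerly; a lazy scheme would stash the ThreadBuilder and only call run() once work actually arrives, which is exactly where point 1 bites (if run() is never called, calls into the pool block forever).

```rust
use std::thread;

fn main() {
    // Build a pool whose worker threads are created by our own handler.
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(4)
        .spawn_handler(|worker| {
            // `worker.run()` enters the rayon worker's main loop; the
            // surrounding `std::thread::Builder` controls the OS thread.
            thread::Builder::new()
                .name(format!("worker-{}", worker.index()))
                .spawn(|| worker.run())?;
            Ok(())
        })
        .build()
        .expect("failed to build the thread pool");

    // Work submitted to the pool runs on the threads spawned above.
    let sum: u64 = pool.install(|| (0..1_000u64).sum());
    println!("sum = {}", sum);
}
```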

Alex Crichton (Jan 14 2020 at 16:58, on Zulip):

@cuviper to clarify, is the conclusion that if we unconditionally spawn one rayon thread then we should be able to slowly spawn the others?

cuviper (Jan 14 2020 at 17:03, on Zulip):

@Alex Crichton point 3 means you'd have to spawn all the threads eventually. You might be able to fake that by running them directly during threadpool shutdown, if you can find the right time to do it.
For point 4, I don't know whether anything actually depends on that being effective. If so, I think that's a problem, and if not, maybe it should be removed.

Alex Crichton (Jan 14 2020 at 17:04, on Zulip):

hm ok, if we're forced to do it anyway then that may be a bit of a bummer, but this may just be part of rustc-rayon reviews/etc

cuviper (Jan 14 2020 at 17:18, on Zulip):

both of those are specific to rustc-rayon extensions. maybe they can be tweaked to help in this regard

Alex Crichton (Jan 14 2020 at 21:10, on Zulip):

Ok, to follow up on some of the investigation from the internals thread as well, I filled in some more results here -- https://hackmd.io/rZZtwhJMTHm_Ivszf6uWmg?both

Alex Crichton (Jan 14 2020 at 21:10, on Zulip):

in general I wasn't really able to reproduce anything, sort of as expected, but I did develop a few theories for why the regressions were so drastic

Alex Crichton (Jan 14 2020 at 21:11, on Zulip):

the primary one is that the jobserver management bug in rustc, which allows many rustcs to get spawned, could easily blow memory limits, triggering swap and causing a big slowdown
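
A minimal, self-contained sketch of the GNU-make jobserver token model referred to above, using the `jobserver` crate; the token count and the local `Client::new` are illustrative, since in the real setup cargo creates the client and rustc inherits it from the environment.

```rust
use jobserver::Client;

fn main() {
    // 3 tokens ~= "at most 3 extra workers running at any one time".
    let client = Client::new(3).expect("failed to create a jobserver");

    let mut held = Vec::new();
    for i in 0..3 {
        // Each extra thread/process is supposed to hold a token while it
        // works; mismanaging acquire/release lets far more rustc processes
        // run concurrently than intended, which is how memory limits blow up.
        held.push(client.acquire().expect("failed to acquire a token"));
        println!("holding {} token(s)", i + 1);
    }

    // Tokens go back to the pool when the `Acquired` guards are dropped.
    drop(held);
}
```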

Alex Crichton (Jan 14 2020 at 21:11, on Zulip):

another one, though, is that when the parallel compiler runs single threaded it's slower than today's compiler (as we already know), so if your build is already extremely parallel (as some of them were) you're getting very little benefit; this is the "dynamically avoiding atomics" topic on Zulip right now
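
A sketch of the compile-time switch the parallel compiler relies on today, similar in spirit to rustc's Lrc alias; "dynamically avoiding atomics" would make this choice at runtime instead. The cfg name mirrors rustc's but the example is otherwise illustrative.

```rust
// One build gets plain Rc, the other gets Arc and pays for atomics even when
// only a single thread is used.
#[cfg(parallel_compiler)]
use std::sync::Arc as Lrc;
#[cfg(not(parallel_compiler))]
use std::rc::Rc as Lrc;

fn main() {
    // In a non-parallel build this clone is a plain non-atomic increment; in
    // a parallel build it is an atomic increment, which is the per-clone cost
    // a single-threaded run pays for nothing.
    let shared: Lrc<Vec<u32>> = Lrc::new(vec![1, 2, 3]);
    let alias = Lrc::clone(&shared);
    println!("{} items shared twice", alias.len());
}
```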

Alex Crichton (Jan 14 2020 at 21:12, on Zulip):

Overall I don't see anything glaring or too worrisome; I think we need to keep fixing bugs, though, of course

simulacrum (Jan 14 2020 at 21:17, on Zulip):

I plan to get my work item done tonight

Last update: Jul 02 2020 at 19:55 UTC