Stream: t-compiler

Topic: design meeting 2019.05.17


nikomatsakis (May 17 2019 at 13:56, on Zulip):

Hi @T-compiler/meeting ! We're going to be having our first ever design meeting here in 5 minutes. The topic of discussion is moving to a parallel rustc. Big thanks :heart: to @simulacrum and @Zoxc for working hard on gathering data over the last week or so.

nikomatsakis (May 17 2019 at 13:56, on Zulip):

You can view the "measurements hackmd file" here

nikomatsakis (May 17 2019 at 13:58, on Zulip):

( cc @Alex Crichton and @lwshang )

nikomatsakis (May 17 2019 at 14:01, on Zulip):

Hmm I realize it'd be nice to have the "absolute perf differences" too; @simulacrum there isn't some "quick-n-dirty" way to dump the data into a spreadsheet, is there?

nikomatsakis (May 17 2019 at 14:02, on Zulip):

(Is the data available in json form?)

nikomatsakis (May 17 2019 at 14:04, on Zulip):

Shall we start?

nikomatsakis (May 17 2019 at 14:04, on Zulip):

(Is @Zoxc here?)

nikomatsakis (May 17 2019 at 14:04, on Zulip):

I didn't see them "wave" in the meeting announcement :)

pnkfelix (May 17 2019 at 14:05, on Zulip):

Did they confirm availability for this slot?

nikomatsakis (May 17 2019 at 14:05, on Zulip):

They were here when we settled on the time

nikomatsakis (May 17 2019 at 14:05, on Zulip):

But in any case we might as well get started

Zoxc (May 17 2019 at 14:05, on Zulip):

I'm here =P

simulacrum (May 17 2019 at 14:06, on Zulip):

ah for some reason Zulip wasn't updating for me, I'm not sure what you mean by absolute perf differences

simulacrum (May 17 2019 at 14:06, on Zulip):

each "row" on perf is expandable if you click the black triangle on the left

nikomatsakis (May 17 2019 at 14:06, on Zulip):

What I meant is seeing not just "20% slower" but "slower by 0.2s" or whatever

nikomatsakis (May 17 2019 at 14:06, on Zulip):

yeah, the data is there, I just have to do the math myself :)

simulacrum (May 17 2019 at 14:06, on Zulip):

I will see if I can hack something together

nikomatsakis (May 17 2019 at 14:07, on Zulip):

Anyway, the context is that some time back @mw, @Alex Crichton and I drew up this rough plan to talk about trying to deploy parallel rustc.

nikomatsakis (May 17 2019 at 14:07, on Zulip):

The idea was to deploy in 3 phases, but basically the most important is the first one: enabling parallel compilation (meaning, the use of locks etc) but making it opt-in.

nikomatsakis (May 17 2019 at 14:08, on Zulip):

This is a "rough patch" because, if you're not using parallelism for whatever reason, you don't get to recoup the perf benefits.

nikomatsakis (May 17 2019 at 14:08, on Zulip):

At the time, I think we were shooting for a performance bar of no "major regressions", which we defined as:

A major regression is a regression in compilation time that is both >5% and greater than 1 second.
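
For concreteness, that criterion is just a two-part threshold. A minimal sketch of it as a predicate (the 5% / 1 second numbers come straight from the definition above; the function name is made up for illustration):

```rust
/// Sketch of the quoted criterion: a regression is "major" only if it is
/// both more than 5% relative and more than 1 second absolute.
fn is_major_regression(baseline_secs: f64, new_secs: f64) -> bool {
    let delta = new_secs - baseline_secs;
    delta > 1.0 && delta / baseline_secs > 0.05
}

fn main() {
    // 0.6s -> 0.7s is well over 5% relative but only 0.1s absolute: not major.
    assert!(!is_major_regression(0.6, 0.7));
    // 120s -> 130s is both >5% and >1s: major.
    assert!(is_major_regression(120.0, 130.0));
}
```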

nikomatsakis (May 17 2019 at 14:09, on Zulip):

That may or may not be achievable :)

nikomatsakis (May 17 2019 at 14:09, on Zulip):

So @simulacrum (in the parallel measurements doc) has a bunch of useful links to various measurements

mw (May 17 2019 at 14:11, on Zulip):

(are there more whole-crate-graph numbers on the way?)

nikomatsakis (May 17 2019 at 14:11, on Zulip):

Anyway, the point of this meeting I would say is to:

simulacrum (May 17 2019 at 14:11, on Zulip):

it's somewhat slow to collect them (same as with regular perf) -- if we have specific requests I can kick those off but we probably won't have them by meeting

nikomatsakis (May 17 2019 at 14:12, on Zulip):

Should we start by reviewing the measurements?

mw (May 17 2019 at 14:12, on Zulip):

@simulacrum cargo with -t4 -j4 would be interesting

nikomatsakis (May 17 2019 at 14:12, on Zulip):

The perf.r-l.o section has:

Wall time, single crate:

nikomatsakis (May 17 2019 at 14:12, on Zulip):

I'm curious about the final line, this is why I was asking about absolute measurements

nikomatsakis (May 17 2019 at 14:13, on Zulip):

e.g., helloworld-check is 36% slower, but that's like 0.6s to 0.7s :)

mw (May 17 2019 at 14:13, on Zulip):

@simulacrum -tX-j1 means that rustc will never be allowed to use more than one thread (which is useful for measuring overhead but doesn't allow measuring speedups)
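
(A rough standalone illustration of the token-gating idea mw describes, using the `jobserver` crate that cargo and rustc build on. This is a sketch under assumptions only -- it is not rustc's actual jobserver integration, it ignores the implicit token each process holds, and it assumes a `jobserver = "0.1"` dependency.)

```rust
use jobserver::Client;
use std::thread;

fn main() {
    // Pretend the build system granted us 2 tokens of parallelism.
    let client = Client::new(2).expect("failed to create jobserver");

    let handles: Vec<_> = (0..8)
        .map(|id| {
            let client = client.clone();
            thread::spawn(move || {
                // Each worker must hold a token while it works, so at most
                // two of these eight threads do "real" work at any time;
                // fewer tokens means less parallelism.
                let _token = client.acquire().expect("failed to acquire token");
                println!("worker {} is doing work", id);
                // the token is released when `_token` is dropped
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}
```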

nikomatsakis (May 17 2019 at 14:13, on Zulip):

(Oh, I am also remembering now... back when we were talking about NLL performance, someone -- who was it? -- was citing bits of research about how much slowdown there is before things become noticeable)

nikomatsakis (May 17 2019 at 14:14, on Zulip):

I want to say scottmcm

simulacrum (May 17 2019 at 14:15, on Zulip):

okay spreadsheet with absolute data is up but no formulas run yet

simulacrum (May 17 2019 at 14:15, on Zulip):

see link in document

mw (May 17 2019 at 14:15, on Zulip):

@nikomatsakis https://github.com/rust-lang-nursery/rustc-perf/issues/192#issuecomment-394611567

Zoxc (May 17 2019 at 14:15, on Zulip):

packed-simd takes a 20% perf hit and it isn't small.

nikomatsakis (May 17 2019 at 14:15, on Zulip):

oh, nice

Alex Crichton (May 17 2019 at 14:17, on Zulip):

(sorry, woke up recently, but I'm now following here!)

nikomatsakis (May 17 2019 at 14:19, on Zulip):

packed-simd takes a 20% perf hit and it isn't small.

yeah, that's one of the few I've found so far

pnkfelix (May 17 2019 at 14:19, on Zulip):

packed-simd doesn't take as big a hit in the single vs. -Zthreads=1 data, FWIW

nikomatsakis (May 17 2019 at 14:19, on Zulip):

the hit depends also on opt, debug, check

Wesley Wiser (May 17 2019 at 14:19, on Zulip):

(Oh, I am also remembering now... back when we were talking about NLL performance, someone -- who was it? -- was citing bits of research about how much slowdown there is before things become noticeable)

It's probably from the book "Usability Engineering" which has:

nikomatsakis (May 17 2019 at 14:19, on Zulip):

which is another way that might be useful to slice the data

nikomatsakis (May 17 2019 at 14:21, on Zulip):

(I guess we have no idea why packed-simd in particular takes a hit?)

pnkfelix (May 17 2019 at 14:22, on Zulip):

a lot of the big percentages in the single vs. -Zthreads=1 data seem to originate from regressions solely in "patched incremental" ... does that sound plausible?

pnkfelix (May 17 2019 at 14:22, on Zulip):

(any ideas as to why "patched incremental" in particular would take a significant hit?)

Zoxc (May 17 2019 at 14:23, on Zulip):

@pnkfelix Yes. Incremental is lock heavy. https://github.com/rust-lang/rust/pull/60035 helps with that

mw (May 17 2019 at 14:23, on Zulip):

it might involve more locking than the other workloads

pnkfelix (May 17 2019 at 14:23, on Zulip):

(perhaps also because those are already starting off as small numbers overall, and so the percentages get exaggerated)

pnkfelix (May 17 2019 at 14:24, on Zulip):

(ah my hypothesis is wrong anyway; there are tests where the "clean" build was the one with the big perf hit, and not their "patched incremental" variants)

simulacrum (May 17 2019 at 14:25, on Zulip):

that may partially depend on whether the incremental case is hitting the happy path -- i.e., being actually incremental

simulacrum (May 17 2019 at 14:25, on Zulip):

(which may cause more locks to be hit)

mw (May 17 2019 at 14:25, on Zulip):

I think the incremental cases can allow for more of a hit because they don't aggregate up so much

simulacrum (May 17 2019 at 14:26, on Zulip):

but I'm not sure if that's accurate from how incremental is implemented

mw (May 17 2019 at 14:26, on Zulip):

i.e. you are usually only re-compiling the leaf crates incrementally

mw (May 17 2019 at 14:27, on Zulip):

I think the perf.rlo numbers look OK, with a few outliers (most of them synthetic tests)

simulacrum (May 17 2019 at 14:27, on Zulip):

true -- it is worth noting that all of the perf data is "leaf" compilations (i.e., single-crate)

mw (May 17 2019 at 14:27, on Zulip):

I'd be more interested in the whole-crate-graph scenarios

nikomatsakis (May 17 2019 at 14:28, on Zulip):

the things that are "major regressions" from what I can tell:

simulacrum (May 17 2019 at 14:28, on Zulip):

whereas at least in the rustc whole-crate-graph (i.e., x.py build --stage 0) we see significantly worse performance with multi-threaded as total runtime

nikomatsakis (May 17 2019 at 14:28, on Zulip):

I think the perf.rlo numbers look OK, with a few outliers (most of them synthetic tests)

I tend to agree with this

Alex Crichton (May 17 2019 at 14:28, on Zulip):

This is the sort of testing though that we wanted to make a call on internals for right? (whole crate graph testing)

nikomatsakis (May 17 2019 at 14:29, on Zulip):

(notably, script-servo-debug starts to have perf improvements of 17% with even 2 cores)

simulacrum (May 17 2019 at 14:29, on Zulip):

I'm personally suspecting that the rustc case is fairly representative of many crate graphs -- and that we gain more from coarse parallelism than from fine grained parallelism, at least in the --release case

mw (May 17 2019 at 14:29, on Zulip):

@Alex Crichton we wanted to have some numbers for this meeting already

Alex Crichton (May 17 2019 at 14:30, on Zulip):

I may also be misunderstanding the purpose of the mtg here, but is this trying to provide data showing that we shouldn't turn on parallel -j1 nightlies because the overhead in single-threaded mode is too high?

nikomatsakis (May 17 2019 at 14:30, on Zulip):

we're trying to decide whether to kick off the plan, basically, yes

nikomatsakis (May 17 2019 at 14:31, on Zulip):

I have to say that the rustc numbers are a bit scary

pnkfelix (May 17 2019 at 14:31, on Zulip):

well it also sounds like @simulacrum is seeing evidence that there is negative return in a very important use case

mw (May 17 2019 at 14:31, on Zulip):

the cargo-tX-j1 results (that seem to be gone again) were rather interesting actually

nikomatsakis (May 17 2019 at 14:31, on Zulip):

in what way?

mw (May 17 2019 at 14:32, on Zulip):

they showed no wall-time regression

mw (May 17 2019 at 14:32, on Zulip):

although the single-threaded compiler was not tested, I think

mw (May 17 2019 at 14:32, on Zulip):

so, nevermind :/

simulacrum (May 17 2019 at 14:32, on Zulip):

yeah, I didn't get a chance to run that

simulacrum (May 17 2019 at 14:32, on Zulip):

it's really just showing -t1-j1

mw (May 17 2019 at 14:32, on Zulip):

at least the jobserver seems to work :)

pnkfelix (May 17 2019 at 14:32, on Zulip):

@simulacrum : what was the invocation used to gather the rustc-stage0 numbers?

simulacrum (May 17 2019 at 14:33, on Zulip):

x.py build --stage 0 with rustc = set to the parallel compiler in bootstrap

nikomatsakis (May 17 2019 at 14:34, on Zulip):

at least the jobserver seems to work :)

i.e., doesn't crash? :)

mw (May 17 2019 at 14:34, on Zulip):

huh, does the rustc stage 0 compilation look really bad?

nikomatsakis (May 17 2019 at 14:34, on Zulip):

or what did you mean by that?

simulacrum (May 17 2019 at 14:34, on Zulip):

it's worth noting that the parallel compiler there is a beta-versioned parallel compiler (i.e., not the most recent possible parallel compiler)

simulacrum (May 17 2019 at 14:34, on Zulip):

which is probably not great in hindsight :/

nikomatsakis (May 17 2019 at 14:34, on Zulip):

i.e., doing the stage0 build?

pnkfelix (May 17 2019 at 14:34, on Zulip):

@mw : https://hackmd.io/s/B1NAo-m3V#Whole-crate-graph-data-%E2%80%93-from-Mark%E2%80%99s-computer-8-core-16-thread

pnkfelix (May 17 2019 at 14:35, on Zulip):

(if i understand correctly)

Zoxc (May 17 2019 at 14:35, on Zulip):

Here's some numbers I got from rustc compilation https://github.com/rust-lang/rust/pull/59530#issuecomment-481557551

simulacrum (May 17 2019 at 14:35, on Zulip):

hm, is that after x.py clean?

mw (May 17 2019 at 14:35, on Zulip):

at least the jobserver seems to work :)

i.e., doesn't crash? :)

it keeps the time constant, which suggests that no more threads are used than the jobserver allows

nikomatsakis (May 17 2019 at 14:36, on Zulip):

I wonder why @Zoxc's numbers seem so different. I guess it makes sense for us to do a bit more testing here.

Wesley Wiser (May 17 2019 at 14:36, on Zulip):

Especially considering those are stage1 numbers

pnkfelix (May 17 2019 at 14:37, on Zulip):

yeah I am sitting here wondering if there's something goofy with --stage 0 ?

nikomatsakis (May 17 2019 at 14:37, on Zulip):

So, it seems safe to say:

simulacrum (May 17 2019 at 14:37, on Zulip):

Well, so it's worth noting that this is a) a beta compiler essentially (just compiled with --enable-parallel-compiler)

simulacrum (May 17 2019 at 14:38, on Zulip):

I'm also not sure how @Zoxc collected data

nikomatsakis (May 17 2019 at 14:38, on Zulip):

Didn't we like just branch a beta -- or aren't we about to?

pnkfelix (May 17 2019 at 14:38, on Zulip):

maybe we need to start sharing benchmarking scripts? just to keep everyone in sync?

simulacrum (May 17 2019 at 14:38, on Zulip):

RUSTFLAGS="-Zthreads=4" perf stat -o multi-threaded-t4-j16 -ddd --repeat=10 --pre "./x.py clean" -- ./x.py build -j16 --stage 0

nikomatsakis (May 17 2019 at 14:38, on Zulip):

I'm wondering if we should spend a bit of time looking beyond the perf measurements -- it seems like we want to do more measurements on rustc at minimum.

mw (May 17 2019 at 14:39, on Zulip):

I think we should not focus on rustc too much

mw (May 17 2019 at 14:39, on Zulip):

it's always been a weird special case

nikomatsakis (May 17 2019 at 14:39, on Zulip):

I feel like it's kind of one that's important to us in a number of ways =)

pnkfelix (May 17 2019 at 14:39, on Zulip):

okay, but we should select another large crate graph then

mw (May 17 2019 at 14:40, on Zulip):

from the document

nikomatsakis (May 17 2019 at 14:40, on Zulip):

But definitely :+1: to gathering more crate graph numbers

simulacrum (May 17 2019 at 14:40, on Zulip):

I can try to get us numbers on all three of those this weekend; unfortunately it takes a lot of time :/

nikomatsakis (May 17 2019 at 14:40, on Zulip):

Yeah :/ I'm game to run some scripts too

simulacrum (May 17 2019 at 14:40, on Zulip):

but regardless - I think we should maybe discuss the next few items on the agenda

pnkfelix (May 17 2019 at 14:40, on Zulip):

@simulacrum I infer it takes a lot of time in part because you are exploring multiple dimensions, varying both thread-count and job-count, yes?

simulacrum (May 17 2019 at 14:41, on Zulip):

@pnkfelix yes, and running each build at least 3-4 times to get the +/- down lower

simulacrum (May 17 2019 at 14:41, on Zulip):

as you can see in the current data multi-threaded builds even with 10 runs are still +/- 8% or so

pnkfelix (May 17 2019 at 14:41, on Zulip):

(I'm not sure if I have much in the way of suggestions as to how to address that combinatoric blowup, but it seems worth trying to reduce in some way...)

nikomatsakis (May 17 2019 at 14:42, on Zulip):

( I feel like we are mostly interested in the 1 thread and maybe like 4 thread case )

nikomatsakis (May 17 2019 at 14:42, on Zulip):

or 2 thread, something like that

simulacrum (May 17 2019 at 14:42, on Zulip):

It would certainly reduce the blowup if we do 1, 2, 4 threads / -j{4,8,16} perhaps?

simulacrum (May 17 2019 at 14:42, on Zulip):

not sure on the -j bit

pnkfelix (May 17 2019 at 14:43, on Zulip):

yes, but I do think it's interesting to have comparisons against increasing the job-count while keeping thread count fixed

pnkfelix (May 17 2019 at 14:43, on Zulip):

which seems to be what @simulacrum was doing

simulacrum (May 17 2019 at 14:43, on Zulip):

# free up the hardware counter used by the NMI watchdog so perf can use it
echo 0 | sudo tee /proc/sys/kernel/nmi_watchdog
# sweep cargo job counts (-j) against rustc thread counts (-Zthreads)
for cargo_threads in 2 3 4 6 8 16; do
    for rust_threads in 1 2 3 4 6 8 16; do
        RUSTC=/home/mark/.rustup/toolchains/6f087ac1c17723a84fd45f445c9887dbff61f8c0/bin/rustc RUSTFLAGS="-Zthreads=$rust_threads" perf stat -o "/home/mark/perf-results/cargo-multi-threaded-t$rust_threads-j$cargo_threads" -ddd --repeat=4 --pre "cargo clean" cargo build --release -j$cargo_threads
    done
done
echo 1 | sudo tee /proc/sys/kernel/nmi_watchdog

Zoxc (May 17 2019 at 14:44, on Zulip):

If you're varying the job count, use 1 CGU to avoid LLVM threads affecting the result

pnkfelix (May 17 2019 at 14:44, on Zulip):

okay. my most immediate suggestion: don't do N x M

pnkfelix (May 17 2019 at 14:44, on Zulip):

at least, not for now

pnkfelix (May 17 2019 at 14:44, on Zulip):

do instead 2 x N + 2 x M

pnkfelix (May 17 2019 at 14:44, on Zulip):

(does that make sense?)

nikomatsakis (May 17 2019 at 14:44, on Zulip):

anyway, the other item was documentation and maintenance

nikomatsakis (May 17 2019 at 14:44, on Zulip):

let's break the measurement stuff out into a separate topic

nikomatsakis (May 17 2019 at 14:45, on Zulip):

I guess the real question, ultimately, is: when bug reports come, do we feel like we're gonna be able to fix them? :)

pnkfelix (May 17 2019 at 14:45, on Zulip):

only if we can replicate them

nikomatsakis (May 17 2019 at 14:45, on Zulip):

I feel like there are a fair number of moving parts in the system -- I don't understand it that well, but off the top of my head:

nikomatsakis (May 17 2019 at 14:46, on Zulip):

should we try to document them? for sure some of these things (cough interning) are only barely documented today, though the @WG-learning group is sort of working on trying to improve that :)

Zoxc (May 17 2019 at 14:47, on Zulip):

Interning is pretty much unchanged

nikomatsakis (May 17 2019 at 14:47, on Zulip):

did we basically just put locks around the central hashmaps?

Zoxc (May 17 2019 at 14:47, on Zulip):

Yes

mw (May 17 2019 at 14:47, on Zulip):

that's an interesting piece of information in itself :)

Zoxc (May 17 2019 at 14:48, on Zulip):

Basically RefCell -> Mutex
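
(For illustration, a minimal sketch of what that "RefCell -> Mutex" change looks like for an interning table -- hypothetical names, not rustc's actual interner, which hands out arena-allocated `&'tcx` references and has quite a bit more machinery:)

```rust
use std::collections::HashSet;
use std::sync::Mutex;

#[derive(Default)]
struct StringInterner {
    // The single-threaded compiler kept this behind a `RefCell`; the parallel
    // compiler swaps that for a `Mutex` (same data structure, now locked).
    set: Mutex<HashSet<&'static str>>,
}

impl StringInterner {
    fn intern(&self, s: &str) -> &'static str {
        let mut set = self.set.lock().unwrap();
        if let Some(&interned) = set.get(s) {
            return interned;
        }
        // Leak to get a 'static reference; rustc instead allocates in
        // arenas tied to the `'tcx` lifetime.
        let interned: &'static str = Box::leak(s.to_owned().into_boxed_str());
        set.insert(interned);
        interned
    }
}

fn main() {
    let interner = StringInterner::default();
    let a = interner.intern("usize");
    let b = interner.intern("usize");
    // Both calls return the same interned allocation.
    assert!(std::ptr::eq(a, b));
}
```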

nikomatsakis (May 17 2019 at 14:48, on Zulip):

I remember that when we decided to merge this stuff there was a fairly "long tail" of random bits of mutexes and the like

nikomatsakis (May 17 2019 at 14:49, on Zulip):

I think my feeling is that it'd be great to try and document the design, perhaps together with the impl of queries -- @Zoxc what are the last few bugs you fixed :)

Vadim Petrochenkov (May 17 2019 at 14:49, on Zulip):

Anything here is interesting information; an ELI5-style overview especially would be nice.
E.g., all I know about parallel rustc is that some Rc/Cell become Arc/Mutex and something becomes parallel.

nikomatsakis (May 17 2019 at 14:49, on Zulip):

ELI5?

nikomatsakis (May 17 2019 at 14:50, on Zulip):

(I ask about bugs because i'm trying to pull out what are some of the trickier things that may not jump immediately to mind...)

Vadim Petrochenkov (May 17 2019 at 14:50, on Zulip):

*explain like I'm 5

mw (May 17 2019 at 14:50, on Zulip):

there's this: https://rust-lang.github.io/rustc-guide/queries/query-evaluation-model-in-detail.html#parallel-query-execution

mw (May 17 2019 at 14:50, on Zulip):

but it doesn't go into any detail at all

nikomatsakis (May 17 2019 at 14:50, on Zulip):

Ah, yeah. I definitely feel like a "high-level strategy overview with details about interesting add'l cases" would be nice; I'm not sure it's reasonable to put that on @Zoxc to write, but it'd be nice if they could support whoever does

nikomatsakis (May 17 2019 at 14:51, on Zulip):

I guess other topics that jump to mind might be jobserver integration

Zoxc (May 17 2019 at 14:51, on Zulip):

Query execution, waiting and cycle handling are the tricky bits
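
(To make the "waiting" part concrete: a deliberately simplified sketch, with made-up names, of the idea that a thread requesting an in-flight query blocks until whichever thread is executing it finishes. rustc's real rayon-based implementation, including the cycle and deadlock handling Zoxc mentions, is considerably more involved.)

```rust
use std::collections::HashMap;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

#[derive(Clone)]
enum QueryState {
    InProgress,
    Done(u64),
}

struct Queries {
    map: Mutex<HashMap<&'static str, QueryState>>,
    finished: Condvar,
}

impl Queries {
    fn get_or_compute(&self, key: &'static str, compute: impl FnOnce() -> u64) -> u64 {
        let mut map = self.map.lock().unwrap();
        loop {
            match map.get(key).cloned() {
                Some(QueryState::Done(v)) => return v,
                Some(QueryState::InProgress) => {
                    // Another thread is already executing this query: wait.
                    map = self.finished.wait(map).unwrap();
                }
                None => {
                    map.insert(key, QueryState::InProgress);
                    break;
                }
            }
        }
        drop(map); // release the lock while the query runs
        let value = compute();
        self.map.lock().unwrap().insert(key, QueryState::Done(value));
        self.finished.notify_all();
        value
    }
}

fn main() {
    let queries = Arc::new(Queries {
        map: Mutex::new(HashMap::new()),
        finished: Condvar::new(),
    });
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let q = Arc::clone(&queries);
            thread::spawn(move || q.get_or_compute("type_of(foo)", || 42))
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), 42);
    }
}
```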

nikomatsakis (May 17 2019 at 14:52, on Zulip):

@Zoxc do you think you'd be game to produce an "outline form" of docs? i.e., what are the interesting bits, and a few brief notes? I think something like that would be very interesting -- and actually already good docs in and of itself

nikomatsakis (May 17 2019 at 14:53, on Zulip):

but more importantly I think it could be "handed off" to others to work on elaborating and coming back to you with questions

nikomatsakis (May 17 2019 at 14:53, on Zulip):

alternatively, I could try to sketch out what i'm thinking, and you can correct the mistakes I make :)

Vadim Petrochenkov (May 17 2019 at 14:54, on Zulip):

might be jobserver integration

That too, please. E.g., from the discussion above it wasn't clear why the -t and -j options are separate; I thought the total number of "tasks" (both threads and processes) was controlled by the jobserver.

Zoxc (May 17 2019 at 14:54, on Zulip):

https://internals.rust-lang.org/t/parallelizing-rustc-using-rayon/6606 is still pretty good, but there are some changes from it

mw (May 17 2019 at 14:55, on Zulip):

Can we collect @Vadim Petrochenkov's questions somewhere and try to answer them?

mw (May 17 2019 at 14:55, on Zulip):

and invite others to ask questions like that

nikomatsakis (May 17 2019 at 14:55, on Zulip):

I'm going to make a hackmd :)

nikomatsakis (May 17 2019 at 14:56, on Zulip):

please add your questions into here everybody

nikomatsakis (May 17 2019 at 14:56, on Zulip):

I think this is actually a good idea :)

nikomatsakis (May 17 2019 at 15:01, on Zulip):

OK, well, we're at the hour point -- this was informative. I think the conclusion was:

@simulacrum seems to be coordinating the second bullet -- correct?

I am happy to take on the job of coordinating documentation etc though I'll probably pester @Zoxc. I will hopefully try to delegate this to @Santiago Pastorino or somebody else from @WG-learning at some point. =)

simulacrum (May 17 2019 at 15:01, on Zulip):

I will try to coordinate first point (see https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/measuring.20parallel.20rustc)

simulacrum (May 17 2019 at 15:02, on Zulip):

(and I guess bullet two?)

nikomatsakis (May 17 2019 at 15:02, on Zulip):

sorry, I inserted a "0th" bullet :)

nikomatsakis (May 17 2019 at 15:02, on Zulip):

I'd be curious to know if we should dig a bit more into the perf results -- i.e., can we figure out the problems with those major regressions? @Zoxc, you mentioned a pending PR?

nikomatsakis (May 17 2019 at 15:03, on Zulip):

Any other pending improvements, @Zoxc?

nikomatsakis (May 17 2019 at 15:03, on Zulip):

Here's a final question -- who wants to write up and post a summary comment somewhere... =)

Zoxc (May 17 2019 at 15:03, on Zulip):

https://github.com/rust-lang/rust/pull/60035 is pending, but quite large and invasive

Iñaki Garay (May 17 2019 at 15:05, on Zulip):

I think the current WG-learning todo list is this paper doc https://paper.dropbox.com/doc/Learning-Working-Group-Ideas--AdBCtWsfHAZYwARoovEhE2PzAg-2R49k1GWZFojHIQ5x1fuB

simulacrum (May 17 2019 at 15:05, on Zulip):

@Zoxc To what extent does it make sense to benchmark with a try build from that PR?

Zoxc (May 17 2019 at 15:06, on Zulip):

We might be able to get rid of some locks by using a large virtual memory allocation for Vecs like the list of symbols

Zoxc (May 17 2019 at 15:07, on Zulip):

@simulacrum That would probably make a lot of sense, especially to compare it with parallel compiler @ master, which perf can't do

simulacrum (May 17 2019 at 15:07, on Zulip):

well, given just-collected data, we can (i.e., PR vs PR is fine with perf)

simulacrum (May 17 2019 at 15:07, on Zulip):

but I meant the whole crate graph stuff more so

Zoxc (May 17 2019 at 15:12, on Zulip):

Whole crate graphs have fewer incremental crates, so they're probably less important for that PR

simulacrum (May 17 2019 at 15:13, on Zulip):

okay, I will hold off on anything then for now -- you should be able to compare vs. master if you use parallel-rustc-N where N is the -Zthreads argument on perf

nikomatsakis (May 17 2019 at 16:00, on Zulip):

As a final note, @scottmcm wrote to me this:

Yeah, 20% faster or slower is what the book I have said, as the "Just Noticeable Difference"

scottmcm (May 17 2019 at 17:11, on Zulip):

Yeah, 20% faster or slower is what the book I have said, as the "Just Noticeable Difference"

Ref: https://learning.oreilly.com/library/view/designing-and-engineering/9780321562944/ch05.html#ch05lev2sec2

scottmcm (May 17 2019 at 17:20, on Zulip):

That book also includes some latency categories:
- Instantaneous (0.1 - 0.2s) -- like a light switch
- Immediate (0.5 - 1s) -- prompt acknowledgment
- Continuous (2 - 5s) -- flow of conversation
- Captive (7 - 10s) -- "goldfish time" after which people do something else

(Essentially the same as the order of magnitude ones that @Wesley Wiser mentioned earlier)

Last update: Nov 22 2019 at 04:35UTC