Stream: t-compiler/rust-analyzer

Topic: Lazy impl bodies

matklad (Mar 05 2020 at 13:28, on Zulip):

@Florian Diebold are you around? I have a question about our handling of impls. Currently, we load both impl headers and items at the same time; I want to defer loading of items

matklad (Mar 05 2020 at 13:28, on Zulip):

But I am recalling that, during type inference, we do things like "given a method name, find all methods with this name, and look up their traits"

matklad (Mar 05 2020 at 13:29, on Zulip):

Do you know what exactly I am remembering? :-) I think I've seen this "method name -> impl" code somewhere in rustc...

matklad (Mar 05 2020 at 13:29, on Zulip):

The reason I am asking is that, if we need this operation, then lazy-loading of impl bodies won't work :-(

Florian Diebold (Mar 05 2020 at 13:31, on Zulip):

well, for traits, we just go through all traits in scope and check whether they have a given method name. But for traits we don't need to look at impl bodies at all
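[Editorial sketch: the trait-method lookup Florian describes, in heavily simplified form. All type and function names here are hypothetical stand-ins, not rust-analyzer's actual definitions; the point is that only trait *declarations* are consulted, never impl bodies.]

```rust
// Hypothetical, simplified stand-in for rust-analyzer's trait data.
struct TraitData {
    name: String,
    method_names: Vec<String>, // names declared in the trait itself
}

/// Find all traits in scope that declare a method with `name`.
/// Note: impl bodies are never touched here.
fn candidate_traits<'a>(traits_in_scope: &'a [TraitData], name: &str) -> Vec<&'a TraitData> {
    traits_in_scope
        .iter()
        .filter(|t| t.method_names.iter().any(|m| m.as_str() == name))
        .collect()
}

fn main() {
    let traits = vec![
        TraitData { name: "Clone".into(), method_names: vec!["clone".into()] },
        TraitData { name: "Iterator".into(), method_names: vec!["next".into(), "map".into()] },
    ];
    let hits = candidate_traits(&traits, "next");
    assert_eq!(hits.len(), 1);
    assert_eq!(hits[0].name, "Iterator");
}
```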

Florian Diebold (Mar 05 2020 at 13:32, on Zulip):

for inherent methods, we get all inherent impls that might match our receiver type and go through them to find a method with the given name
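[Editorial sketch: the inherent-method path, again with purely illustrative types. The first step filters impls by receiver type using header information only; the second step scans impl items for the method name, and it is this step that forces impl items to be loaded.]

```rust
// Toy model of a self type; real rust-analyzer types are far richer.
#[derive(PartialEq, Clone)]
enum Ty {
    Struct(&'static str),
}

struct InherentImpl {
    self_ty: Ty,                 // available from the impl header
    methods: Vec<&'static str>,  // requires loading the impl items
}

/// Narrow to impls matching the receiver, then search their items.
fn lookup_inherent_method<'a>(
    impls: &'a [InherentImpl],
    receiver: &Ty,
    name: &str,
) -> Option<&'a InherentImpl> {
    impls
        .iter()
        .filter(|i| &i.self_ty == receiver)              // header-only check
        .find(|i| i.methods.iter().any(|m| *m == name))  // needs impl items
}

fn main() {
    let impls = vec![
        InherentImpl { self_ty: Ty::Struct("Foo"), methods: vec!["new", "len"] },
        InherentImpl { self_ty: Ty::Struct("Bar"), methods: vec!["new"] },
    ];
    assert!(lookup_inherent_method(&impls, &Ty::Struct("Foo"), "len").is_some());
    assert!(lookup_inherent_method(&impls, &Ty::Struct("Bar"), "len").is_none());
}
```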

matklad (Mar 05 2020 at 13:32, on Zulip):

Ahhh, right

matklad (Mar 05 2020 at 13:33, on Zulip):

Right, up front we need to know methods of traits and inherent impls.... This is sad

Florian Diebold (Mar 05 2020 at 13:34, on Zulip):

well, we don't need to know the inherent impl methods upfront, only when we have a method call on a type that matches the impl ;)

matklad (Mar 05 2020 at 13:37, on Zulip):

Yeah, there still might be significant wins here... I'll try to split ImplData into ImplHeader and ImplItems = Vec<AssocItemId> and see how it goes
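[Editorial sketch of the split matklad proposes. Field names and the `AssocItemId` stand-in are illustrative, not rust-analyzer's real definitions; the idea is that the header alone can answer "could this impl apply?", while the items are computed by a separate, lazier query.]

```rust
// Stand-in for the real interned associated-item id.
type AssocItemId = u32;

// Cheap to compute: enough to filter impls by target type/trait.
struct ImplHeader {
    target_ty: String,
    target_trait: Option<String>, // Some(_) for trait impls, None for inherent
}

// Computed separately, only when a candidate impl's items are needed.
struct ImplItems {
    items: Vec<AssocItemId>,
}

fn main() {
    let header = ImplHeader { target_ty: "Foo".into(), target_trait: None };
    // The header alone answers "is this an inherent impl for Foo?"
    assert!(header.target_trait.is_none());
    assert_eq!(header.target_ty, "Foo");
    // Items would come from a second query, run lazily.
    let items = ImplItems { items: vec![0, 1] };
    assert_eq!(items.items.len(), 2);
}
```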

matklad (Mar 05 2020 at 13:54, on Zulip):

Hm, switching to ImplHeader is actually trivial, but, for some reason, it doesn't bring the perf benefits I hoped for...

matklad (Mar 05 2020 at 14:14, on Zulip):

:sad: so further investigation shows that this is not "we parse bodies", but actually "we parse at all". Just increasing LRU capacity makes stuff much faster
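[Editorial sketch of the effect matklad observed: if the cache holding parse trees is smaller than the working set, files get evicted and re-parsed on every access. This toy cache uses FIFO eviction for brevity (a real LRU also refreshes entries on hit), and is a simplification of salsa's per-query LRU.]

```rust
use std::collections::{HashMap, VecDeque};

// Toy bounded cache: file name -> "parse tree", with eviction.
struct ParseCache {
    capacity: usize,
    order: VecDeque<String>,
    map: HashMap<String, String>,
    misses: usize, // each miss simulates an expensive re-parse
}

impl ParseCache {
    fn new(capacity: usize) -> Self {
        ParseCache { capacity, order: VecDeque::new(), map: HashMap::new(), misses: 0 }
    }

    fn get_or_parse(&mut self, file: &str) -> String {
        if let Some(tree) = self.map.get(file) {
            return tree.clone(); // cache hit: no parsing
        }
        self.misses += 1; // cache miss: re-parse the file
        if self.map.len() == self.capacity {
            if let Some(evicted) = self.order.pop_front() {
                self.map.remove(&evicted);
            }
        }
        let tree = format!("ast({})", file);
        self.order.push_back(file.to_string());
        self.map.insert(file.to_string(), tree.clone());
        tree
    }
}

fn main() {
    // Cycle through a working set of 3 files, twice.
    let files = ["a.rs", "b.rs", "c.rs", "a.rs", "b.rs", "c.rs"];
    let mut small = ParseCache::new(2); // capacity below the working set
    let mut big = ParseCache::new(3);   // capacity fits the working set
    for f in files {
        small.get_or_parse(f);
        big.get_or_parse(f);
    }
    assert_eq!(small.misses, 6); // every access re-parses
    assert_eq!(big.misses, 3);   // only the first access of each file parses
}
```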

Florian Diebold (Mar 05 2020 at 14:34, on Zulip):

so... the problem is that we parse multiple times because we query the different things that lower from AST at different times? so it might be better to just put raw items and impl data into the same query?

matklad (Mar 05 2020 at 14:42, on Zulip):

yeah, this is exactly what I am thinking about...

Basically, we merge all the various x_data and raw_items queries into a single query which takes a file and produces an "ast" of items.

For the crate_def_map query, we additionally project this ast to include only names, visibilities, and children for modules and enums.
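[Editorial sketch of the "one query per file" idea: a single lowering pass produces the full item list, and crate_def_map consumes only a projection of it, so the file is parsed once. All names here are illustrative, not rust-analyzer's.]

```rust
// A lowered item, as produced by the single per-file query.
struct Item {
    name: String,
    is_public: bool,
    detail: String, // fields, signatures, etc. — ignored by the def map
}

/// The single per-file query: parse once, lower every item.
/// (Here just a canned result; the real query would run the parser.)
fn file_items(_file: &str) -> Vec<Item> {
    vec![
        Item { name: "Foo".into(), is_public: true, detail: "struct { x: u32 }".into() },
        Item { name: "helper".into(), is_public: false, detail: "fn()".into() },
    ]
}

/// crate_def_map's projection: only names and visibilities survive.
fn def_map_view(items: &[Item]) -> Vec<(&str, bool)> {
    items.iter().map(|i| (i.name.as_str(), i.is_public)).collect()
}

fn main() {
    let items = file_items("lib.rs");
    let view = def_map_view(&items);
    assert_eq!(view, vec![("Foo", true), ("helper", false)]);
}
```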

matklad (Mar 05 2020 at 14:43, on Zulip):

We also call this thing "FileStub", because at this stage it becomes exactly the StubIndex from IntelliJ

matklad (Mar 05 2020 at 14:43, on Zulip):

It is also unclear how exactly it squares with expressions in types...

matklad (Mar 05 2020 at 14:53, on Zulip):

And it's also unclear if this should be a win?

Like, right now we don't construct StructDatas for various internal items at all. Why is it so hard to not compute things twice, but at the same time promptly forget and drop most of the freshly computed things?

Florian Diebold (Mar 05 2020 at 15:21, on Zulip):

... is that a rhetorical question? :sweat_smile:

matklad (Mar 05 2020 at 15:23, on Zulip):


Florian Diebold (Mar 05 2020 at 15:27, on Zulip):

all in all I'd guess if we go through the AST anyway, collecting StructDatas etc. shouldn't be much overhead, and they don't take much memory either, I hope

matklad (Mar 05 2020 at 16:13, on Zulip):

Did a small experiment only for impls:

matklad (Mar 05 2020 at 16:14, on Zulip):

It is measurably faster, although it still does a ton of parsing: when we construct impls_in_trait, we look at the generics of the target of the impl, and that still does parsing

matklad (Mar 05 2020 at 16:24, on Zulip):

measurable = 30%

I guess we need to refactor out all our Data's :weary:

Laurențiu (Mar 05 2020 at 16:32, on Zulip):

Re parsing, I recall we parse a lot and spend a surprising amount of time hashing syntax nodes. There's a fixme in rowan about it.

matklad (Mar 06 2020 at 10:43, on Zulip):

Opened Would be interested in feedback

std::Veetaha (Mar 06 2020 at 10:55, on Zulip):

What do you mean by stubs?

matklad (Mar 06 2020 at 12:05, on Zulip):

matklad (Mar 25 2020 at 15:26, on Zulip):

Started drafting the stubs thing at

std::Veetaha (Mar 25 2020 at 15:51, on Zulip):

Okay, that's one more level of IR...

std::Veetaha (Mar 25 2020 at 15:59, on Zulip):

I wonder what the implications for memory consumption will be.

Laurențiu (Mar 25 2020 at 17:08, on Zulip):

std::Veetaha said:

I wonder what the implications for memory consumption will be.

You say tomato IR, I say let's dump this into SQLite/LSIF :-)

Last update: Jul 27 2021 at 22:00UTC