Stream: t-compiler/rust-analyzer

Topic: Completion benchmarks

Kirill Bulatov (Mar 10 2021 at 21:43, on Zulip):

I'm considering options on how to write the actual completion benchmarks.

I'm most interested in the flyimport functionality, but in order to test it with something real, I have to add a few external crates into the project.
Should I use something like //- / crate:main deps:dep in one big file with a large number of items and parse that into TestDb?

And then, how am I supposed to check the actual completion?
It looks like there's no good way to pass continuous by-char input into those tests.
What is the alternative: to refactor, expose, and directly query the main completion method from ide_completion?

matklad (Mar 11 2021 at 11:04, on Zulip):

@Kirill Bulatov have you seen the recent macro/parsing benchmarks?

matklad (Mar 11 2021 at 11:05, on Zulip):

matklad (Mar 11 2021 at 11:06, on Zulip):

this I think is the right structure for the benchmark:

matklad (Mar 11 2021 at 11:08, on Zulip):

The hard thing here is coming up with a test fixture. My feeling is that we should avoid pulling existing crates from crates.io, and instead write a generator for pseudo crates. That is, a bit of code that generates text for a tree of nested modules with items, maybe even split across several crates.

matklad (Mar 11 2021 at 11:09, on Zulip):

See also for an example of synthetic generation.

Kirill Bulatov (Mar 11 2021 at 11:40, on Zulip):

Yes, I've seen the previous benchmarks and apparently grasped the general concepts, thanks for confirming :)

I'm wondering if completions, with their "preload almost everything and then query rapidly" approach, have any caveats worth thinking about, but I'll check that along the way now, I guess.

matklad (Mar 11 2021 at 11:42, on Zulip):

I think you want to compute the crate def map outside of the main timing block

matklad (Mar 11 2021 at 11:42, on Zulip):

or maybe just use two timing blocks - one for def map, one for completion per se

matklad (Mar 11 2021 at 11:42, on Zulip):

or maybe even better: you do the timing, then you "type" one character, then you do the timing again

matklad (Mar 11 2021 at 11:43, on Zulip):

yeah, the last one is I think what we want

Kirill Bulatov (Mar 11 2021 at 11:45, on Zulip):

I don't see any other way to measure the lookup speed, but good point about separating out the preload: that is mostly data-generator-related work and might make sense to check along the way.

Kirill Bulatov (Apr 21 2021 at 19:44, on Zulip):

A sanity check:

am I getting it right that, to emulate continuous user input, I have to replace $0 with ${next_symbol}$0 in the whole fixture and reparse it with ChangeFixture::parse to get the Change and FilePosition? (The former goes into RootDatabase as an update; then the db and the latter go into the ide_completion::completions function.)

For a benchmark, I'd like to generate big fixtures, and parsing the whole fixture multiple times feels a bit weird.

matklad (Apr 22 2021 at 08:00, on Zulip):

@Kirill Bulatov Hm, I don't think we need to emulate continuous user input

matklad (Apr 22 2021 at 08:01, on Zulip):

I think it's enough to just do the following:

matklad (Apr 22 2021 at 08:01, on Zulip):

That should be enough to expose analysis-level characteristics.

matklad (Apr 22 2021 at 08:02, on Zulip):

This won't cover things like "are we prioritizing edits over completions over highlighting in the LSP layer", but I don't think we are quite ready for that sort of benchmark

Last update: Jul 27 2021 at 21:45UTC