Hey @Florian Diebold -- are you around for me to ask questions of?
To start, you posted these numbers in another topic:
Well, it seems like it might. Here are numbers for analyzing the Chalk repo:
Everything enumerable, using fuel: 20.685380613s
Everything enumerable, without fuel: 20.752197389s
Everything enumerable, removing our hardcoded blacklist of 'bad' traits (Send, Sync, Sized, the Fn traits): seems to hang indefinitely
Marking those traits as non-enumerable instead: 21.311616989s
where did they come from and how did you get them?
I just ran ra_cli's analysis-stats command by hand, nothing scientific ;)
(we've run into new cases where trait solving takes a long time since then though: https://github.com/rust-analyzer/rust-analyzer/issues/1684)
Do you have some open issues or other things with steps to reproduce really painful slowdowns?
I guess that's an example :)
Though I was hoping for something a bit more .. "reduced"
Yeah, I haven't looked into that case in detail either so far
I could do that later today and try to create a small example if you think it'll help though ;)
I think this hurts a lot lately: https://github.com/rust-analyzer/rust-analyzer/issues/1684#issuecomment-526589665
I think that's mainly because we're using Chalk in more cases though
OK, so, I've been reading into the code. I've been collecting some notes in this paper document, though they're more notes to myself.
I think that the integration of normalization into rust-analyzer doesn't quite look like the "lazy normalization" design we are shooting for in rustc.
I also think I see a good solution to the fuel problem
hmm, that may not be entirely true
in any case I was thinking that, with some relatively minimal tweaks, we can modify how chalk's solver works so that it doesn't have as many "loops" -- in particular, handling of cyclic impls today potentially requires iterating over all impls
I think my ideal would be that fuel is more-or-less moved out from chalk and I think we could move closer to that -- though I realize that it's possible that even processing one strand might require a lot of recursion
that said, the other thing I remember now that I want to investigate is the "other side" of how the integration with rust-analyzer works -- i.e., the data that chalk pulls from r-a about what traits/impls exist. I sort of remember thinking that we needed to improve that setup, and I've kind of forgotten where it stands now.
I was meaning to ask about lazy normalization, since I couldn't find any explanation of it -- my impression was that it means normalization happens during unification (resulting in additional obligations), but I'm not very clear on what practical difference that makes
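To make the idea concrete, here's a toy sketch of the difference (this is not Chalk's or rust-analyzer's actual API; the names `Ty`, `Obligation`, and `unify_lazy` are all made up for illustration): instead of normalizing an associated-type projection eagerly at the point of unification, the unifier tentatively succeeds and records an obligation for the trait solver to discharge later.

```rust
// Toy model of "lazy normalization": unification does not compute
// <T as Trait>::Assoc on the spot; it emits an obligation instead.

#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Int,
    Projection { trait_name: &'static str, assoc: &'static str },
}

#[derive(Debug, PartialEq)]
struct Obligation {
    projection: Ty, // the projection to normalize later
    equals: Ty,     // the type it must end up equal to
}

// Lazy unification: a projection unifies with anything for now, but the
// real work is deferred into the returned obligation list.
fn unify_lazy(a: &Ty, b: &Ty, obligations: &mut Vec<Obligation>) -> bool {
    match (a, b) {
        (p @ Ty::Projection { .. }, other) | (other, p @ Ty::Projection { .. }) => {
            obligations.push(Obligation {
                projection: p.clone(),
                equals: other.clone(),
            });
            true // tentatively succeeds; the solver decides later
        }
        _ => a == b,
    }
}

fn main() {
    let mut obligations = Vec::new();
    let proj = Ty::Projection { trait_name: "Iterator", assoc: "Item" };
    // Unifying `<_ as Iterator>::Item` with `Int` succeeds immediately...
    assert!(unify_lazy(&proj, &Ty::Int, &mut obligations));
    // ...but leaves one obligation behind for the trait solver.
    assert_eq!(obligations.len(), 1);
    println!("deferred {} obligation(s)", obligations.len());
}
```

The practical difference, as I understand it, is that the unifier never has to pick a normalization result up front, so cases where normalization would require trait solving (and possibly recursion) get routed through the ordinary obligation machinery instead of being forced eagerly.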
Concerning fuel, I did try a bit to reorganize the chalk solver loop so we could handle it from the outside, but it wasn't so easy because after getting the first solution, the loop has to run a bit further to determine whether the answer is ambiguous
yes. that will still be needed. the main thing we can do is move those invocations a bit out so they can be interrupted if you no longer care
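As a rough sketch of what "moving fuel out" could look like (purely hypothetical, not Chalk's real solver interface): the solver exposes individual steps, the caller owns the budget, and the loop can still run past the first answer to check for ambiguity -- but the caller can interrupt it at any step boundary.

```rust
// Hypothetical sketch: an externally-interruptible solver loop.
// The caller holds the fuel; the solver just exposes one step at a time.

enum Step {
    Answer(String), // an answer; more steps may still refine/contradict it
    MoreWork,       // keep going (e.g. to determine ambiguity)
    Done,           // search space exhausted
}

struct Solver {
    steps_left: u32, // stand-in for the remaining strands to process
}

impl Solver {
    fn step(&mut self) -> Step {
        match self.steps_left {
            0 => Step::Done,
            1 => {
                self.steps_left -= 1;
                Step::Answer("Unique".into())
            }
            _ => {
                self.steps_left -= 1;
                Step::MoreWork
            }
        }
    }
}

// The fuel lives here, outside the solver: run out of budget and you
// simply return the best answer seen so far (possibly none).
fn solve_with_budget(solver: &mut Solver, mut budget: u32) -> Option<String> {
    let mut answer = None;
    loop {
        if budget == 0 {
            return answer; // interrupted by the caller
        }
        budget -= 1;
        match solver.step() {
            Step::Answer(a) => answer = Some(a), // keep stepping: ambiguity check
            Step::MoreWork => {}
            Step::Done => return answer,
        }
    }
}

fn main() {
    // Enough budget: the loop runs to completion, including the
    // post-first-answer work.
    let mut s = Solver { steps_left: 3 };
    assert_eq!(solve_with_budget(&mut s, 100), Some("Unique".to_string()));

    // Tiny budget: interrupted before any answer was found.
    let mut s = Solver { steps_left: 3 };
    assert_eq!(solve_with_budget(&mut s, 1), None);
    println!("budget-based interruption works");
}
```

The point is just the shape: the "run a bit further after the first solution" part stays inside the loop body, but the decision to stop caring moves to the caller.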
Btw @nikomatsakis did you see the issue I opened in the chalk repo? It outlines two other problems I ran into with associated types that I could use feedback on. Maybe they're related to the lazy normalization thing
@Florian Diebold nope, I'll check it out now
to a 1st approx you can assume I see no github notifications :P