I'm getting this error occasionally: "Progress handler for token rustAnalyzer/indexing already registered"
I'm fairly sure this did not happen when I worked on that; were there any recent changes that could have caused this?
My guess is that a reindex is happening? Reading the spec, I think we're doing it wrong
The code should handle this correctly, even in the presence of cancellation
Servers can also initiate progress reporting using the window/workDoneProgress/create request. This is useful if the server needs to report progress outside of a request (for example the server needs to re-index a database). The returned token can then be used to report progress using the same notifications used as for client initiated progress. A token obtained using the create request should only be used once (e.g. only one begin, many report and one end notification should be sent to it).
Reading that leads me to believe that the token has to be unique for each indexing operation
It does, but we should only run one indexing operation at once
This is why
cargo check indicators are numbered when you have a workspace open, to ensure uniqueness
Okay. I thought it also ran indexing again
It is rerun on every keypress, but should get canceled first
Actually... should we only send a begin notification if the create request is successful (and maybe log an error if the request fails)?
I'm not saying that's the issue but it might help us track it down. Things could be out of order in some weird way too
huh, looking at
GlobalState::report_progress, couldn't that end up sending the notification before the request has been processed or something?
I think so... I think we need to wait for the reply to the create (and that it's happy) to make sure it's good to go. It's actually a little weird to me that the client doesn't reply with a token but...
After reading the code I believe the request will be sent on the wire, but you're right in that it doesn't mean the client has finished processing it. Still, that wouldn't explain the error message (assuming that it's correct)
yeah, the sending order is correct
but we don't check the response before sending notifications
That's a bug. We also ignore any errors when registering capabilities
Do we have an issue for this on GitHub? (I didn't see one.)
I'm seeing this (and some of our devs are seeing it constantly), as well as this, which I assume is different?
thread '<unnamed>' panicked at ' Failed to lookup FN@1951..2703 in this Semantics. Make sure to use only query nodes, derived from this instance of Semantics. root node: SOURCE_FILE@0..9482 known nodes: SOURCE_FILE@0..9482 ', crates/hir/src/semantics.rs:556:13
@Paul Faria (fyi)
Is that on the last release or nightly/master?
I'm pretty sure that's different. I haven't run across it. Can you get a better trace?
I can reliably reproduce the duplicate tokens if I use vscode's Search functionality and click on a new file to open while indexing is happening
In fact... every time I switch to a new editor reindexing starts and we try to create new tokens
Sorry, that's every time I open a new document, and sometimes an existing one
Ah, so it was caused by https://github.com/rust-analyzer/rust-analyzer/pull/6637
Maybe it's just missing the
self.status == Status::Ready check
Maybe... if indexing is cancelled do we automatically send a WorkDoneProgressEnd to the client?
though maybe that code path should just reuse
(nevermind, that doesn't make sense)
It only reliably reproduces when out_dirs/proc macro support is enabled
https://github.com/rust-analyzer/rust-analyzer/pull/6701 fixes the problem for me, though I unfortunately still don't fully understand what was going on
I can't reproduce it after your change
I'm not seeing either of the above with yesterday's (today's) nightly (although that doesn't include pull/6701...)