Hi @nikomatsakis , I started reviewing the code for the MIR predecessors cache. The notes suggested refactoring over trying to fix the locks. Are there any documented ideas on how to move forward, or would that be best figured out here?
I also wasn't sure if the issues were created yet. I thought I remembered during the meeting (I'll have to rewatch it) that there would be a list of issues created so we could track who was doing what. I tried searching, but all I could find was https://github.com/rust-lang/rust/issues/63643.
I'm not sure what issues were created, @Paul Faria, good question. Anyway, re: the predecessors cache, I'm not sure what I would do. The most minimal change would be to use an Arc so that we avoid returning a lock guard, though to my mind that is not ideal.
I'm going to cc @oli and @eddyb, as they might have some ideas. Y'all, the context here is that we were looking at the MIR
predecessors function, which currently stores the preds for a block in a kind of ref-cell (or, in parallel mode, a mutex) and returns it. The cache is cleared upon modification and rebuilt lazily. It all seems kind of complicated and not great -- for example, it is unfortunate to need a lock when MIR is only ever mutated by one thread at a time.
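For anyone skimming, here's a minimal sketch (not the actual rustc code; the names and the `Vec`-of-`Vec` representation are simplifications) of the current scheme: the predecessor map lives behind a `RefCell` (a mutex in parallel mode), is rebuilt lazily on access, and is cleared whenever the CFG is mutated. The real function returns a lock guard rather than cloning, which is exactly the part that's awkward:

```rust
use std::cell::RefCell;

// Simplified stand-in for a MIR body: `successors[b]` lists the blocks
// directly reachable from block `b`.
struct Body {
    successors: Vec<Vec<usize>>,
    // Interior mutability so `predecessors` can fill the cache through `&self`.
    predecessors_cache: RefCell<Option<Vec<Vec<usize>>>>,
}

impl Body {
    fn new(successors: Vec<Vec<usize>>) -> Self {
        Body { successors, predecessors_cache: RefCell::new(None) }
    }

    // Lazily (re)build the predecessor map on first access after invalidation.
    // The real code hands back a guard into the cell; we clone for simplicity.
    fn predecessors(&self) -> Vec<Vec<usize>> {
        let mut cache = self.predecessors_cache.borrow_mut();
        if cache.is_none() {
            let mut preds = vec![Vec::new(); self.successors.len()];
            for (block, succs) in self.successors.iter().enumerate() {
                for &s in succs {
                    preds[s].push(block);
                }
            }
            *cache = Some(preds);
        }
        cache.as_ref().unwrap().clone()
    }

    // Any mutation of the basic-block structure must clear the cache.
    fn set_successors(&mut self, block: usize, succs: Vec<usize>) {
        self.successors[block] = succs;
        *self.predecessors_cache.borrow_mut() = None;
    }
}

fn main() {
    // CFG: 0 -> {1, 2}, 1 -> {2}.
    let mut body = Body::new(vec![vec![1, 2], vec![2], vec![]]);
    assert_eq!(body.predecessors()[2], vec![0, 1]);
    // Mutating the CFG invalidates the cache; the next call rebuilds it.
    body.set_successors(0, vec![2]);
    assert_eq!(body.predecessors()[1], Vec::<usize>::new());
    println!("ok");
}
```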
This discussion probably belongs in #t-compiler/wg-mir-opt as the only reason this requires locking at all is because optimizations may change the basic block structure.
We could get rid of the interior mutability entirely by adding a MirPass that computes the cache. That pass would have to be added before any passes that need the cache, and after the last pass
everyone with immutable access will only ever see a filled cache (because the filling happens after the last pass), and the passes that need access to the cache have mutable access anyway
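To make the idea concrete, here's a rough sketch of that design (hypothetical names; `ComputePredecessors` and this `MirPass` trait are just illustrations, not the real rustc types). Passes take `&mut Body`, so any pass that needs the cache can recompute it itself; a cache-filling pass scheduled at the end of the pipeline guarantees that later immutable readers always see a filled cache, with no lock or ref-cell anywhere:

```rust
// Simplified MIR body: the predecessor map is a plain Option field,
// no interior mutability required.
struct Body {
    successors: Vec<Vec<usize>>,
    predecessors: Option<Vec<Vec<usize>>>,
}

impl Body {
    // Filling the cache requires `&mut self`, like any other mutation.
    fn compute_predecessors(&mut self) {
        let mut preds = vec![Vec::new(); self.successors.len()];
        for (block, succs) in self.successors.iter().enumerate() {
            for &s in succs {
                preds[s].push(block);
            }
        }
        self.predecessors = Some(preds);
    }

    // Immutable access: only valid once the cache-filling pass has run.
    fn predecessors(&self) -> &[Vec<usize>] {
        self.predecessors
            .as_ref()
            .expect("predecessor cache not computed yet")
    }
}

// Illustrative pass interface; the real MirPass trait differs.
trait MirPass {
    fn run(&self, body: &mut Body);
}

// The pass that fills the cache; scheduled after the last mutating pass.
struct ComputePredecessors;
impl MirPass for ComputePredecessors {
    fn run(&self, body: &mut Body) {
        body.compute_predecessors();
    }
}

fn main() {
    // CFG: 0 -> {1}, 1 -> {2}.
    let mut body = Body { successors: vec![vec![1], vec![2], vec![]], predecessors: None };
    // Optimization passes with `&mut Body` would run here; the cache pass goes last.
    let passes: Vec<Box<dyn MirPass>> = vec![Box::new(ComputePredecessors)];
    for pass in &passes {
        pass.run(&mut body);
    }
    // After the pipeline, every immutable reader sees a filled cache.
    assert_eq!(body.predecessors()[2], vec![1]);
    println!("ok");
}
```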
OK, I'll move the conversation there and link important details back here for cross-reference.
Just a small update for those not following the thread in #t-compiler/wg-mir-opt : it's very likely we can completely remove the interior mutability in this case and not need to worry about any parallelism issues :)