So I'm starting this topic to log some intermediate thoughts.
I've been thinking a lot about lazy normalization in chalk since @Florian Diebold opened chalk#234. I just posted a new comment here about how lazy norm interacts with floundering under the "on demand" API changes. In particular, I wanted to ensure that we never request "all impls" of a trait (or at least not outside coherence), but chalk's existing approach to lazy norm complicates that.
@scalexm I'm interested to hear what you think about the lazy norm desugaring I proposed here. I'm planning to experiment with implementing it a bit. It seems increasingly important to be able to know the "self type" we are looking for.
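For anyone following along, here's a rough sketch of the kind of desugaring involved. This is not chalk's actual API or the exact proposal from the linked comment, just an illustrative model (all names are made up): a projection equality like `<T as Iterator>::Item == U` splits into two alternatives, either normalize via some impl, or fall back to equating with the "placeholder" associated type.

```rust
// Simplified, hypothetical model of projection-equality desugaring.
// `Ty` and `Goal` are illustrative stand-ins, not chalk types.
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Param(String), // e.g. `T`
    // `<self_ty as trait_name>::assoc`
    Projection { trait_name: String, assoc: String, self_ty: Box<Ty> },
    // the placeholder ("skolemized") associated type
    Placeholder { trait_name: String, assoc: String, self_ty: Box<Ty> },
}

#[derive(Debug)]
enum Goal {
    // `Normalize(<..>::assoc -> target)` via some impl
    Normalize { projection: Ty, target: Ty },
    // plain type equality
    Eq(Ty, Ty),
}

// Desugar `ProjectionEq(projection = target)` into its alternatives.
fn desugar_projection_eq(projection: &Ty, target: &Ty) -> Vec<Goal> {
    match projection {
        Ty::Projection { trait_name, assoc, self_ty } => vec![
            // Alternative 1: normalize through an impl.
            Goal::Normalize {
                projection: projection.clone(),
                target: target.clone(),
            },
            // Alternative 2: equate with the placeholder type,
            // leaving the projection unnormalized.
            Goal::Eq(
                Ty::Placeholder {
                    trait_name: trait_name.clone(),
                    assoc: assoc.clone(),
                    self_ty: self_ty.clone(),
                },
                target.clone(),
            ),
        ],
        // Not a projection: just equate directly.
        _ => vec![Goal::Eq(projection.clone(), target.clone())],
    }
}

fn main() {
    // `<T as Iterator>::Item == U` yields two alternatives.
    let proj = Ty::Projection {
        trait_name: "Iterator".into(),
        assoc: "Item".into(),
        self_ty: Box::new(Ty::Param("T".into())),
    };
    let alts = desugar_projection_eq(&proj, &Ty::Param("U".into()));
    println!("{}", alts.len());
}
```

The point of sketching it this way is that both alternatives keep the self type (`T` above) explicit, which is what matters for avoiding "all impls" queries.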