Hey @davidtwco, out of curiosity, did we not create new issues when warranted on #52663 ?
/me goes to do a quick skim
We did for a handful of them.
I think my main problem is that I wanted some way to keep track of partial progress through the list of deviations
I started working through the list again last week and there were a handful that were already fixed, or whose issues had been closed but the table hadn't been updated.
which was sort of handled by the "Need Further Review" bucket on #52663
yeah I see; keeping the table up to date is indeed a problem
It might make sense to do that in a comment on the “do another review” issue.
okay that sounds good to me: to try to keep the gross summary of progress and work items on #54528 itself
but to try to avoid keeping too much detail about the specific deviations that need addressing
so what's the deal with this project thing...
Having issues for the buckets means that they can be closed by PRs that resolve them.
where do the categories come from... oh I see, each project gets to make its own set of "Cards"?
Here’s an example of one: https://github.com/rust-lang/rust/projects/8
I just think it would be a nice way to view it.
yeah I'm skimming over https://github.com/rust-lang/rust/projects/9
Though it’s not that important, GitHub issues are more than fine.
(since I'm more immediately familiar with a few of the issues there)
I’m not sure @nikomatsakis is a fan of projects.
the other question is
What PRs are currently up that I'd like to land before I start this round of review?
I know you have some
which maybe you can just answer the Q here instead of me putting it up a third time: Is there a reason you've been restricting your filters to only work on Projections whose .base is a
If you look for PRs that I’m the author of then there’s a handful.
Not particularly, that’s just how I’ve been doing it. Normally I’ll need a local to find a type or something.
(for example, are the resulting diagnostics misleading or just worse if you try to generalize to a .base that is an arbitrary
You can ask a Place for its type.
it's not the most obvious thing, but the .place_ty method takes a tcx and gives back a PlaceTy enum that can then be turned into a
If you’ve mentioned that before and I’ve not been making sure I don’t do that in other PRs then my bad.
(the reason we have the PlaceTy enum is to be able to keep track of more specific info when the Place is a projection into an Enum variant.)
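To illustrate the shape of that idea, here's a toy model (this is not rustc's actual PlaceTy definition, just a self-contained sketch of why a dedicated enum helps: the extra variant can remember which enum variant a projection went through):

```rust
// Toy model of the idea behind PlaceTy: usually a place's type is just
// a type, but a projection into an enum variant also remembers which
// variant it went through, so later code can report things precisely.
#[derive(Debug, PartialEq)]
enum ToyPlaceTy {
    // The common case: just a type (modelled here as a name).
    Ty(&'static str),
    // A projection into an enum: the enum's type plus the variant index.
    Downcast { enum_ty: &'static str, variant: usize },
}

impl ToyPlaceTy {
    // Collapse back to a plain type, forgetting the variant information,
    // analogous to turning a PlaceTy back into an ordinary Ty.
    fn to_ty(&self) -> &'static str {
        match self {
            ToyPlaceTy::Ty(t) => t,
            ToyPlaceTy::Downcast { enum_ty, .. } => enum_ty,
        }
    }
}
```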
@pnkfelix fixed both of those PRs.
@davidtwco that's great, thanks for following up
I'm barely scratching the surface of this, and yet I'm already considering whether it would save time overall for me to first pause and look into resolving some of the instances of "NLL migration mode sometimes prints more diagnostics than either AST-borrowck or NLL" #53004 that I am encountering...
Hey is there anyone looking for something to do?
I ask because I am building up a queue of "investigate further" issues during my sweep of the "compare .stderr to .nll.stderr output files"
I'll have some time this evening.
(there's also one potentially quite serious issue that's in a different column but I think I might look at that one myself later today...)
It looks like there aren't many issues coming out of the review though, which is good - a handful of potential improvements and some redundancies.
(I guess it's still early days, didn't see the "yet to be looked at" column)
yes. I'm not surprised by this outcome, but it nonetheless makes me happy
I am very glad we switched to testing the migrate mode
I probably should try to keep track of how fast I am churning through the todo list
to figure out if it's worth trying to delegate the load
(I'm also very close to suggesting that we hack the compare-mode=nll to do some sort of "normalization"... ignore trivial span differences, ignore distinctions like "use of ..." vs "borrow of ..." ...)
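A normalization pass like the one floated here could be sketched as follows (the specific rewrite rules are illustrative guesses at what "trivial differences" means, not an actual compiletest feature):

```rust
// Toy normalizer for comparing .stderr files across borrowck modes:
// collapse known wording differences and drop exact span coordinates,
// so that only substantive diagnostic differences survive a diff.
fn normalize_diagnostics(stderr: &str) -> String {
    stderr
        .lines()
        .map(|line| {
            // Treat "use of moved value" and "borrow of moved value"
            // style messages as equivalent (illustrative rule).
            let line = line.replace("use of moved value", "borrow of moved value");
            // Strip line:col coordinates from "--> file.rs:12:5" locators,
            // keeping only the file name.
            if let Some(idx) = line.find("--> ") {
                let (head, tail) = line.split_at(idx + 4);
                let file = tail.split(':').next().unwrap_or(tail);
                format!("{}{}", head, file)
            } else {
                line
            }
        })
        .collect::<Vec<_>>()
        .join("\n")
}
```

Two normalized .stderr files could then be compared with an ordinary string equality check.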
@pnkfelix I'm thinking that it might be worth archiving the cards on the diagnostic review that are addressed - so straight away archiving the "equivalent" or "improvement" columns and then any of the "migrate downgrade" cards that your PR for that fixed. Would be helpful for only looking over things that need work.
Where does archived stuff go?
I’m not sure. Presumably there’s some option to show them again.
Let's figure out the answer to that question first. Then we can consider adopting the protocol you suggest.
So, I looked into this a little. There's an option to show archived cards and restore them. Apparently a card was archived six days ago.
hmm okay. I have now figured out how to access that list of Archived Cards
So yes, I think it sounds good to adopt the protocol you suggest. But first I want to make sure that the single currently archived card actually belongs there
I'm not sure how you found the process for using a project while doing the review. But being able to pick a card, click "convert to issue", get an easy issue number to refer to it and close via a PR, and then just archive that card is a nicer workflow from the "working through the review items" point of view compared to the issue w/ comments we had.
the project system is okay i suppose. I prefer text-based interactions to mouse-based ones when it comes to really repetitive stuff, so that was the main drawback I was able to identify for my own personal workflow compared to updating comments on an issue. (However, I did note that updating comments is not really workable once a comment gets too long, and that invalidates the claim that a text-based workflow is actually usable in practice...)
I will say that it's been good to actually go through the cards and archive them (and link to the relevant PR or Issue where they are now addressed), since there was at least one test that had a card that I failed to transcribe into #55533 when I made the test list there.
Oops, sorry: @davidtwco I added another commit to PR #55700
I'll stop doing that now, I didn't realize you were going to get to the review so fast.
Feel free to keep going, I can take a look again later.
Was going to wait for a Travis pass before pinging bors anyway.
I suppose my biggest question about project workflow is that I currently take the approach of treating cards like lightweight entities
and so I often end up with several cards that are all related; i.e. they want to be handled by a single Github issue
I don't know whether this is a case where I am simply wrong to treat cards as lightweight, or if I should hope/expect Github to make further enhancements that would allow me to more easily group sets of cards and then assign all of them to a single issue
In those cases, I'd edit them to reference the same issue; in other cases where a card corresponds to a single unit of work (like the unions one I looked at over the weekend), I'd convert it to an issue.
It would be nice to have more grouping/bulk selection. Or even a way of adding some sort of checklist items to a single card.
Yes some sort of checkmark based selection of sets of cards could be useful for mass modifications
so far I've been getting by with using search for filters
I'm going to move the empty columns to the end.
I think in most cases, GitHub probably expected that projects would have issues as cards rather than lots of notes.
I'll go a step further and try to put the higher priority columns at the beginning
That's not really viable in our case with all the tests, but I can see how it would probably work a little nicer.
Should the "expected enhancement" cards be archived?
There's no work to be done for those?
I think it would be good to change those to use revisions (if they don't already) and add
basically, I'd like the expected changes injected by NLL to be flagged explicitly in the test code, via // revisions: ast nll and then associated //[nll]~ ERROR ... annotations.
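A schematic example of that shape (everything here is a placeholder, not copied from a real test; the chat points at PR #55700 for actual instances):

```rust
// revisions: ast nll

// Schematic compiletest file: `needs_region_fix` stands in for code
// that is expected to error only under the `nll` revision, so the
// expected NLL-specific change is flagged right in the test source.
fn main() {
    needs_region_fix(); //[nll]~ ERROR placeholder message
}
```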
you can see some instances of this pattern in PR #55700
We could probably hide the "motivation" card on the leftmost column too since the project description has that information (in the "Projects" listing; or by pressing "menu" when in the project).
ah I was just musing about where that text could go
I think I prefer having compare modes over revisions, it'd be nice to be able to do the error annotations for the compare mode rather than need to switch everything to revisions.
I guess there's not that much of a difference though.
(out of scope, but it'd be nice if compare modes were just "global revisions" that applied implicitly to every test, then you'd get that for free)
hmm. And an error that is meant to apply to the base mode alone would be annotated with, what, //~ ERROR ... ?
//~ ERROR ... which I would like to apply to all of the modes under comparison)
Ah, yeah, that's a good idea.
//~ ERROR or //[default]~ ERROR would work fine.
I mean, if you don't do that, then you hit a huge annotation burden...
Yeah, I hadn't thought about that. Makes sense.
I prefer the x.py output for compare modes, where it runs one compare mode and then the next, over that of revisions, where each revision is run one after another before the next test. In my ideal situation it'd run each revision one at a time, with only the N tests that have that revision (where N = all tests if it's a "global revision").
hmm, why do you prefer that?
I think I just like knowing when I watch it run that the ast compare mode should all pass, but some of nll might not, and then seeing that confirmed as it goes.
There's no functional difference.
I don't know if I have a preference. When I'm working on fine-tuning the tests, after the first test run, future runs tend to have a huge set of ignored tests, so I guess I haven't noticed too much of a difference.
Well you do have to get the whole AST suite passing before you get any feedback about the NLL compare mode ones
Yeah, I'm not really sure why I prefer it.
I guess I just like seeing the separation.
(which I then take to be an advantage of revisions over compare mode; but that's minor, and I suppose easy to bypass by just feeding in an explicit subset of the tests)
anyway I'm basically falling back on "I think I don't have a preference..." :smile:
//~ ERROR ...which I would like to apply to all of the modes under comparison)
this would be great, @pnkfelix, I've wanted this before — also it's surprising today because it's basically just ignored
yeah I think I filed an issue for that too
this one: "compiletest: tests using revisions treat files with just //~ ERROR as having no errors" #55695