Stream: t-compiler

Topic: simd in const fn #64738


gnzlbg (Sep 25 2019 at 08:30, on Zulip):

I'm not sure when I should prefer MPlaceTy vs PlaceTy, or Immediate vs ImmTy, or whether the way in which I read an Abi::Vector into a Vec<ImmTy> is the right way to do that

gnzlbg (Sep 25 2019 at 08:31, on Zulip):

I'd rather understand those well before sending PRs for the remaining intrinsics

oli (Sep 25 2019 at 08:41, on Zulip):

yes, it looks generally alright from the way you use operand, place and mplace

oli (Sep 25 2019 at 08:41, on Zulip):

I left a comment for a refactoring

gnzlbg (Sep 25 2019 at 08:59, on Zulip):

@oli by assert you mean to use bug! right?

oli (Sep 25 2019 at 09:14, on Zulip):

there's an if, so you can just assert the condition instead of bug!-ing in the else
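
For illustration, a minimal sketch of the refactoring oli means here; the condition and message are made up, and bug! is rustc's internal ICE macro:

// instead of branching and bug!-ing in the else:
if args.len() == 2 {
    // ... handle the intrinsic ...
} else {
    bug!("expected exactly two arguments");
}

// just assert the condition and drop the else branch:
assert!(args.len() == 2, "expected exactly two arguments");
// ... handle the intrinsic ...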

gnzlbg (Sep 25 2019 at 09:49, on Zulip):

@oli looking at the other simd intrinsics

gnzlbg (Sep 25 2019 at 09:50, on Zulip):

I wonder why there are so many binary_op_ methods

gnzlbg (Sep 25 2019 at 09:50, on Zulip):

and why the "general" one does not handle all cases

gnzlbg (Sep 25 2019 at 09:50, on Zulip):

e.g. we have BinOp::Add, so we need many binary_op_ methods to handle wrapping add, saturating add, overflowing add, etc.

gnzlbg (Sep 25 2019 at 09:51, on Zulip):

why not have BinOp::WrappingAdd, BinOp::SaturatingAdd, BinOp::OverflowingAdd, etc. instead?

oli (Sep 25 2019 at 09:52, on Zulip):

the reason is that the general one is equivalent to checked_* and the others are just essentially wrappers around the checked_* op

oli (Sep 25 2019 at 09:53, on Zulip):

basically for code deduplication
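
As an analogy in plain Rust (this is not the interpreter's actual API, just the shape of the deduplication oli describes): the general, checked-style operation carries all the information, and the other flavours are thin wrappers over it.

// a "general" op in the checked_* style: wrapped result plus an overflow flag
fn general_add(a: i32, b: i32) -> (i32, bool) {
    a.overflowing_add(b)
}

// the other flavours just reinterpret that result
fn wrapping_add(a: i32, b: i32) -> i32 {
    general_add(a, b).0
}

fn checked_add(a: i32, b: i32) -> Option<i32> {
    match general_add(a, b) {
        (v, false) => Some(v),
        (_, true) => None,
    }
}

fn main() {
    assert_eq!(wrapping_add(i32::MAX, 1), i32::MIN);
    assert_eq!(checked_add(i32::MAX, 1), None);
}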

gnzlbg (Sep 25 2019 at 09:55, on Zulip):

But can't that happen inside a generic binary_op function?

gnzlbg (Sep 25 2019 at 09:56, on Zulip):

For SIMD types, I'd like to just loop over the elements, call binary_op, and then do some result handling before writing to the destination vector

oli (Sep 25 2019 at 09:56, on Zulip):

it essentially doesn't matter where you move the complexity to

gnzlbg (Sep 25 2019 at 09:56, on Zulip):

well, the complexity now gets duplicated across all simd ops, because they have to handle it again at that level

gnzlbg (Sep 25 2019 at 09:57, on Zulip):

in all other interpreters I know, you just have one variant per operation, and a binary_op function that handles all binary operations

oli (Sep 25 2019 at 09:57, on Zulip):

simd does overflowing ops?

gnzlbg (Sep 25 2019 at 09:57, on Zulip):

it does saturating and wrapping

gnzlbg (Sep 25 2019 at 09:57, on Zulip):

it also does comparisons, etc.

oli (Sep 25 2019 at 09:58, on Zulip):

oh... I've never seen those

gnzlbg (Sep 25 2019 at 09:58, on Zulip):

simd_saturating_add I think

oli (Sep 25 2019 at 09:58, on Zulip):

https://github.com/rust-lang-nursery/packed_simd/blob/master/src/codegen/llvm.rs#L51 has only nonwrapping ones

gnzlbg (Sep 25 2019 at 09:58, on Zulip):

https://github.com/rust-lang/rust/search?q=simd_saturating_add&unscoped_q=simd_saturating_add

gnzlbg (Sep 25 2019 at 09:59, on Zulip):

either way, even if it's just for Add vs Eq

gnzlbg (Sep 25 2019 at 09:59, on Zulip):

I need two different branches for the two different kinds of Ops :/

oli (Sep 25 2019 at 10:00, on Zulip):

looking at codegen, you have the same problem there

oli (Sep 25 2019 at 10:00, on Zulip):

I'm not sure how to avoid it

oli (Sep 25 2019 at 10:01, on Zulip):

you can always add more wrapper functions ;)

oli (Sep 25 2019 at 10:01, on Zulip):

if you manage to share the wrappers with regular wrapping/checked/... math, all the better

gnzlbg (Sep 25 2019 at 10:02, on Zulip):

the codegen for the SIMD instructions is completely different from that for the scalar instructions

gnzlbg (Sep 25 2019 at 10:02, on Zulip):

for an interpreter, the execution of a SIMD instruction is just "loop over the elements and execute the scalar operation"

oli (Sep 25 2019 at 10:03, on Zulip):

right, and you can just create a function that takes a closure, does the iteration, and calls the closure on each element
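
A simplified sketch of that helper, with i64 standing in for the interpreter's scalar values and made-up names; the real version would work on OpTy/PlaceTy and return InterpResult:

// The helper oli describes: it owns the element-wise iteration and calls a
// closure for the per-element scalar operation.
fn for_each_lane(
    lhs: &[i64],
    rhs: &[i64],
    dest: &mut [i64],
    mut op: impl FnMut(i64, i64) -> i64,
) {
    for ((d, &a), &b) in dest.iter_mut().zip(lhs).zip(rhs) {
        *d = op(a, b);
    }
}

fn main() {
    let (a, b) = ([1, 2, 3, 4], [10, 20, 30, 40]);
    let mut out = [0; 4];
    // each intrinsic only supplies the scalar operation
    for_each_lane(&a, &b, &mut out, |x, y| x.wrapping_add(y));
    assert_eq!(out, [11, 22, 33, 44]);
}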

gnzlbg (Sep 25 2019 at 10:03, on Zulip):

no, I can't

oli (Sep 25 2019 at 10:03, on Zulip):

oh?

gnzlbg (Sep 25 2019 at 10:03, on Zulip):

I want to write this code:

gnzlbg (Sep 25 2019 at 10:06, on Zulip):
// `is_simd_binop`, `simd_to_binop` and `simd_len` are the hypothetical helpers
// being wished for here; they map a SIMD intrinsic to the scalar `mir::BinOp`
// and the vector length.
v if is_simd_binop(v) => {
    let binop: mir::BinOp = simd_to_binop(v);
    let len = simd_len(args[0]);
    for i in 0..len {
        // apply the scalar binary op element-wise...
        let result = self.binary_op(binop, self.operand_field(args[0], i)?, self.operand_field(args[1], i)?)?;
        // ...and write each result into the corresponding field of the destination
        // (the destination is a place, so use `place_field` rather than `operand_field`)
        self.write_immediate(*result, self.place_field(dest, i)?)?;
    }
}
gnzlbg (Sep 25 2019 at 10:08, on Zulip):

but I can't, because I need to duplicate the logic of the saturating, wrapping, checked, etc. intrinsics in here

gnzlbg (Sep 25 2019 at 10:08, on Zulip):

because the binop for all of them is just Add

gnzlbg (Sep 25 2019 at 10:09, on Zulip):

These are, however, different binary operations

gnzlbg (Sep 25 2019 at 10:09, on Zulip):

if we were to have mir::BinOp::CheckedAdd, WrappingAdd, SaturatingAdd, UncheckedAdd, etc. instead

gnzlbg (Sep 25 2019 at 10:09, on Zulip):

then at least at this level it would be super simple to just fall back to scalar code
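
To make the proposal concrete, the variant set being suggested would look roughly like this (purely hypothetical; mir::BinOp does not have these variants today):

// Hypothetical variant set for the proposal above, not the real mir::BinOp.
#[allow(dead_code)]
enum BinOp {
    CheckedAdd,
    WrappingAdd,
    SaturatingAdd,
    UncheckedAdd,
    // ... and similarly for Sub, Mul, the comparisons, etc.
}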

gnzlbg (Sep 25 2019 at 10:10, on Zulip):

that would mean that binary_op would need to match on these, and dispatch on them, and that wherever SaturatingAdd is implemented, that code might want to execute a BinOp::CheckedAdd internally

gnzlbg (Sep 25 2019 at 10:11, on Zulip):

I mean, having to match and convert Add to (Add, true, false) or whatever to indicate wrapping, checked, etc. semantics is weird

gnzlbg (Sep 25 2019 at 10:35, on Zulip):

so doing this would require fairly fundamental changes to MIR, cc @eddyb

gnzlbg (Sep 25 2019 at 11:18, on Zulip):

so having looked into that a bit more, I think the mir::BinOp variants are only intended to match the builtin binary operators like +

gnzlbg (Sep 25 2019 at 11:18, on Zulip):

maybe we could have a different BinOp enum in the interpreter for the binary operations

oli (Sep 25 2019 at 11:24, on Zulip):

I still don't see why we would need that. You can create your own binary_op function that also takes an enum OpKind { Checked, Wrapping, Saturating, Unchecked } and dispatches the logic depending on that

oli (Sep 25 2019 at 11:24, on Zulip):

then simd_to_binop can return two enums

oli (Sep 25 2019 at 11:25, on Zulip):

as I said before, I don't think this needs deeper changes, you can just add more wrappers that give you the API you want
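
A simplified sketch of such a wrapper, with made-up names and i64 standing in for the interpreter's scalar values; the real version would sit next to the existing binary_op helpers and return an InterpResult:

// Hypothetical flavour selector, as oli suggests.
enum OpKind {
    Checked,
    Wrapping,
    Saturating,
    Unchecked,
}

// One entry point the SIMD fallback can call per element: the scalar op is
// always plain addition, and `OpKind` selects the flavour.
fn binary_add_with_kind(kind: OpKind, a: i64, b: i64) -> i64 {
    match kind {
        // the real interpreter would report a const-eval error rather than panic
        OpKind::Checked => a.checked_add(b).expect("overflow"),
        OpKind::Wrapping => a.wrapping_add(b),
        OpKind::Saturating => a.saturating_add(b),
        // UB on overflow in the real semantics; plain `+` here for illustration
        OpKind::Unchecked => a + b,
    }
}

fn main() {
    assert_eq!(binary_add_with_kind(OpKind::Saturating, i64::MAX, 1), i64::MAX);
    assert_eq!(binary_add_with_kind(OpKind::Wrapping, i64::MAX, 1), i64::MIN);
}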

gnzlbg (Sep 25 2019 at 11:34, on Zulip):

so now that I'm doing the asserts

gnzlbg (Sep 25 2019 at 11:34, on Zulip):

how do I check that the compiler ICEs properly?

gnzlbg (Sep 25 2019 at 11:35, on Zulip):

the expected error code is now 101

gnzlbg (Sep 25 2019 at 11:35, on Zulip):

and not just 1

gnzlbg (Sep 25 2019 at 11:35, on Zulip):

I also need one test per failure

gnzlbg (Sep 25 2019 at 11:35, on Zulip):

:/

gnzlbg (Sep 25 2019 at 11:43, on Zulip):

> as I said before, I don't think this needs deeper changes, you can just add more wrappers that give you the API you want

So... I should add a method to operator.rs that's called from intrinsics.rs and dispatches back to intrinsics.rs when necessary?

gnzlbg (Sep 25 2019 at 11:45, on Zulip):

Like, for example, the binary saturating add simd operation would call the simd intrinsic for that, which would call this generic method, which would dispatch back to the scalar saturating intrinsics, which then call the normal add operations in operator.rs

gnzlbg (Sep 25 2019 at 11:45, on Zulip):

I suppose I could do that

oli (Sep 25 2019 at 12:04, on Zulip):

> how do I check that the compiler ICEs properly?

you don't need to do that

gnzlbg (Sep 25 2019 at 12:10, on Zulip):

the PR currently does that, should I remove it?

oli (Sep 25 2019 at 12:29, on Zulip):

yea, there are so many ways to ICE the compiler with unstable things and especially intrinsics; there's no point to it

oli (Sep 25 2019 at 12:29, on Zulip):

it won't actually help us catch bugs if we test for it

oli (Sep 25 2019 at 12:31, on Zulip):

otherwise the PR lgtm now

gnzlbg (Sep 25 2019 at 12:32, on Zulip):

I'm removing the fail tests, will update it soon

gnzlbg (Sep 25 2019 at 12:33, on Zulip):

Done

Last update: Nov 16 2019 at 02:35UTC