(Since the meeting is over...)
I did try to flip through them and see if one was "the obvious fit", but I wasn't entirely sure. The abom stuff (as well as a few related things, like Rust-on-Rust FFI) doesn't exactly care what the layout is, just that it stays put. So, it may be weaker than needing a defined binary repr.
But, if there is an obvious place to watch (e.g. data-invariants) or just for the next meeting, I'll do that and get back to my book. :)
@Frank McSherry yeah I think what you are asking about is mostly happening several layers higher... like, safety invariants for references permitting transmute of `&&mut T` to `&&T` or so
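A minimal sketch of that transmute, for the curious (this is illustrative, not from any crate): both `&&mut T` and `&&T` are a single pointer at runtime, so the transmute compiles; whether it is *sound* is exactly the safety-invariant question under discussion.

```rust
// Sketch: reinterpret a `&&mut i32` as a `&&i32`.
// Both types are one pointer wide, so `transmute` accepts this;
// soundness hinges on the (assumed) safety invariant that a
// shared view of a `&mut T` may be weakened to a shared `&T`.
fn main() {
    let mut x = 42i32;
    let r: &mut i32 = &mut x;
    let rr: &&mut i32 = &r;
    // SAFETY (hedged): relies on `&&mut T` and `&&T` having the
    // same representation -- the very assumption being debated.
    let shared: &&i32 = unsafe { std::mem::transmute(rr) };
    assert_eq!(**shared, 42);
}
```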
I was thinking the same thing; this proposed topic of validity invariants might be relevant, but we're not there yet
but if that gives you any hope, the formal model we built last year to prove Rust's safety was specifically taking into account http://www.frankmcsherry.org/serialization/2015/05/04/unsafe-at-any-speed.html so that we could prove a transmute from
it seems like the biggest area of concern would be exactly those places where the "definition of memory" deviates from "just bytes" — e.g., references and (maybe but probably not?) uninit memory
Yeah, I recall you observing that the "hard part" of abomonation looked plausible. I'm mostly just planning on showing up so that no one forgets that something like it is useful too. I.e., in the glorious battle between "zomg random data layout opts" and "zomg please stop we are trying to make computers fast", I represent the latter.
I think both teams are on the "make go faster" side ;-)
@RalfJ I could totally update that post, if it would help (or at least make it less harmful); there are several things that are wrong in it (you've probably noticed, but lots of other people noticed too; I opted not to edit it once it was live).
@Jake Goulding yes, sorry that's totally fair. Maybe the real point is that optimizing/randomizing layouts come with a cost, and I am here to represent that (vs just hearing about the great advantages of layout re-optimization).
@Frank McSherry whatever you prefer, I think I got out of that post what I can.^^ that single transmute is, to me, the essence of what is going on. (the post does it for `Vec`, of course, but that makes no fundamental difference.)
I think everyone knew what you meant, so it wasn't a slam.
It's most likely all about control and visibility...
I'd guess that your POV is that you know more than the compiler does/can, so you want the ability to control the layout to increase speed.
I'd guess the compiler's POV is that it knows more than the user does/can, so it wants the ability to control layout to increase speed.
@RalfJ I think I'll get to re-write it soon. Timely just got a zero-copy dataplane, and part of that is "not copying from bytes to typed data" (courtesy abom) and I suspect explaining that will require re-explaining abom. Btw, have you seen / have opinions on https://github.com/frankmcsherry/blog/blob/master/posts/2017-07-27.md, in which I pretend that abomonation is a lot like region allocation?
@Jake Goulding > I'd guess that your POV is that you know more beyond what the compiler does/can so you want the ability to control the layout to increase speed.
It's a bit weaker than this, personally. I'm up for the compiler determining the layout, but I need some structure to know when I can re-interpret bytes. If the compiler says "they must be like this", I'm good with that as long as it stays true for a while. At the moment, I'm weirded out by the fact that the compiler can change this with every recompile, and as I understand it, it doesn't directly promise this (but is boxed in by needing to be able to link dynamically with other libraries).
Ah, I assumed that you were constrained by a pre-existing external layout (yon blobs of bytes on the disk)
Most of my uses are "I have a program, it wants to turn types to bytes and back to &types". It would also be cool to have a stable layout for longer term storage, but I think at that point you want the grown-up Protobuf, CapnProto, FlatBuffers stuff (or shudder JSON).
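A hedged sketch of that "types to bytes and back to `&type`" round trip (the `Point` struct here is a made-up example, not abomonation's actual API): `#[repr(C)]` pins the layout, whereas the default `repr(Rust)` lets the compiler reorder fields on any recompile, which is the stability concern above.

```rust
use std::mem::{align_of, size_of};

// A plain-old-data struct with a pinned layout. With repr(Rust),
// the field order would be unspecified and could change.
#[repr(C)]
#[derive(Debug, PartialEq)]
struct Point { x: u32, y: u32 }

fn main() {
    let p = Point { x: 3, y: 4 };
    // types -> bytes: view the struct's memory as a byte slice.
    let bytes: &[u8] = unsafe {
        std::slice::from_raw_parts(&p as *const Point as *const u8, size_of::<Point>())
    };
    // bytes -> &type: only sound because layout, alignment, and
    // bit-validity all hold for this all-bits-valid struct; fields
    // with references or padding would need much more care.
    assert_eq!(bytes.as_ptr() as usize % align_of::<Point>(), 0);
    let q: &Point = unsafe { &*(bytes.as_ptr() as *const Point) };
    assert_eq!(*q, Point { x: 3, y: 4 });
}
```

Note the round trip never copies: `q` borrows the same bytes, which is the "no deserialization scan" property being argued for.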
As one example, that I think is pretty hard with lots of approaches: it would be great to be able to memmap a blob of data and have it behave as a &type, allowing random access without needing a full scan of the data. The systems-level difference between "process starts; data read immediately, go" vs "process starts, rescans 100GB of data, go" is huge in the space I'm working in (big data, fault-tolerance, blah blah).
I was saddened to realize that `mmap` will always be `unsafe` because you cannot guarantee that you are the sole owner of a piece of memory (a.k.a. a big point of
Also I feel I have to make a declaration: While @Frank McSherry and @Jake Goulding are in different camps of the "make computers be fast" team (I know you're not really but bear with me here ;), I am squarely in the "make computers be sane" team. I know I am fighting windmills, but I will be a bastion of sanity against everyone who intends to sacrifice everything and their grandma on the altar of performance. Too long have the evangelists of speed reigned in the space of compilers; it is time we break their rule and make programming languages beautiful again!
@RalfJ I claim no camp.
@Jake Goulding I know but the story worked better this way :P
Do I get to be the Jedi or the Sith
I leave that up to you. as far as I am concerned, performance people are Sith and of course I am on the light side. ;)
wow, I had no idea you had such poetry in you, @RalfJ
Still new to this, particularly in English. ;)
I personally think computers are way too fast; we should switch back to before all this caching BS :3
@RalfJ I think that, a year ago, people might have melted down at the suggestion that speed was not the one true god. But with the spectre of 2018 looming close still, I think that people may be willing to, at the very least, speculate alternatives, and so I'll support you in this execution.