Canonical encoding spec issue #581
(This isn't a full response, I'm only starting to get into go-capnp development)
Perhaps it should suffice to also follow the pointers in order to build the canonicalized data section?
Not everyone may be willing to pay the performance price of a cryptographic operation, so switching to this (hypothetical) new implementation could introduce relevant performance regressions. Though, to be fair, this doesn't invalidate your earlier points about the correctness of the canonicalization process.
Yes, this is an issue that is easy to trip on (IMO) due to the API design of the Go library. I have some tentative code in my refactor PR which allows overwriting data (when the new data is smaller than the old), but it's only a prototype, and in any case it does not invalidate your points about the possibility of wasted space and extraneous data.
Is that in a public repo that you could share?
Huh? The data section of a struct has a static layout, with each field located at a fixed offset from the start of the section, and having a fixed size. It cannot be re-ordered at all. It therefore is canonical as-is. I don't know what "pre-order" would even mean for the data section, as the data section is not a tree, it's just a fixed-length byte blob.
This golang implementation uses a slice Arena. Code can 'set' fields in arbitrary order, thus appending them to the slice expansion potentially randomly. Data fields are not serialized out in ordinal (or any) order; rather, the entire sub-slice is opaquely copied and may include orphaned data. Sorry if I confused 'pre-order' here, you're right that the data section is not a tree. Two different implementations can thus disagree about the byte ordering in the data section. Not typically important for, say, RPCs in general, but it has implications in cryptographic scenarios.
This direction of thinking is good, as I think it walks around and solves the orphaned data problem. However, it only delays the ordering problem. For example, a pointer may reference a struct with only data fields; their serialization order is not guaranteed. What I've done in the past is build my own registry cache on top of:

```go
// CodeOrderFields returns the fields in canonical order
// (the order in which they appeared in the schema).
func (n Node) CodeOrderFields() (fields []schema.Field, err error) {
	var list schema.Field_List
	if list, err = n.StructNode().Fields(); err != nil {
		return
	}
	count := list.Len()
	fields = make([]schema.Field, count)
	for i := 0; i < count; i++ {
		f := list.At(i)
		fields[f.CodeOrder()] = f
	}
	return
} // ledger.Node.CodeOrderFields()
```
However, that thought does lead me to think this might be partly solvable with a new kind of Arena based on `iovec` that maximizes memory efficiency and reuse (at the expense of having the …)
It's not public (yet), but happy to share relevant bits. Would it be useful to PR an enhanced …?
Setting a data field does not "append" anything. It overwrites bytes at a specific offset within the data section of the struct. The data section is allocated with a fixed size when the struct itself is first created. Calling a setter for a data field does not allocate any new space, it just overwrites part of that existing space.
No, they cannot.
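The fixed-layout point above can be illustrated with a toy sketch. This is not the real go-capnp API; the field names, offsets, and 16-byte section size are invented for illustration, loosely modeled on the `BenchmarkA` struct mentioned later in the thread:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// dataSection sketches a Cap'n Proto struct data section: a blob of
// fixed size, allocated once when the struct is created, where every
// primitive field lives at a fixed offset.
type dataSection [16]byte

// Each setter overwrites bytes in place at its field's fixed offset;
// it never allocates or appends.
func (d *dataSection) setSiblings(v uint16) { binary.LittleEndian.PutUint16(d[0:2], v) }
func (d *dataSection) setMoney(v uint64)    { binary.LittleEndian.PutUint64(d[8:16], v) }

func main() {
	var a, b dataSection
	// Call the setters in opposite orders on two sections.
	a.setSiblings(0xabcd)
	a.setMoney(42)
	b.setMoney(42)
	b.setSiblings(0xabcd)
	// The resulting bytes are identical: call order cannot affect
	// the layout, which is why the data section is canonical as-is.
	fmt.Println(a == b)
}
```

This is the crux of the disagreement being resolved here: setting primitive fields overwrites fixed slots, so only pointer targets (text, lists, nested structs) are affected by allocation order.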
You're right; it appears I forgot about that, since the data section only contains primitive types, and things like … Should I repurpose this issue into a request for an …?
I am not sure I'd say there's an issue that can be fixed here. I'm not deeply familiar with the Go implementation. In the C++ implementation, if you unlink some part of the message from the tree and drop it, the unlinked data is zero'd out -- but there is still a hole left over which cannot be reused, and that wastes bytes on the wire. To fix that, you have to create a new message and copy the struct over to it -- this essentially garbage-collects the holes. It sounds like you're saying the Go implementation doesn't zero anything out? I would personally argue that the zeroing is desirable behavior and the Go implementation should probably add it if it doesn't do so already. But there's really no way to get rid of the "holes" left behind, except by copying, and the whole design goal is to be zero-copy. The fact of the matter is that Cap'n Proto message builders are not great at mutation. You really need to think of them as write-once. This is just a consequence of the zero-copy goal.
I'm not familiar with `iovec`.
I was more interested in the context around where you were using it, no worries.
It doesn't zero out the (now) orphaned data, no. This can be easily proven with the following code:

```go
func TestLeftover(t *testing.T) {
	_, seg, _ := capnp.NewMessage(capnp.SingleSegment(make([]byte, 0, 1024)))
	root, _ := air.NewRootBenchmarkA(seg)
	root.SetSiblings(0xabcd)
	root.SetName(">This is my name<")
	root.SetName(">My Second Name<")
	root.SetName("")
	arenaData := seg.Data()
	t.Log(spew.Sdump(arenaData))
}
```

Which yields the following result:
However, after canonicalizing the above (with …)
It's an old libc/syscall idiom for higher throughput in scatter/gather operations; the main idea is to avoid copying by passing arrays of small descriptors (typically located temporarily on the stack during the syscall) pointing to buffers at arbitrary, possibly non-contiguous locations. My thinking is that you set a first data pointer (such as text), then set a second data pointer, then go back and change the first (much like your example). An …
It seems like we should be zeroing this out? Two major benefits: …
On my refactor PR, I have an experiment where I go in, read the pointer, and update the memory location with the new value (instead of simply allocating new space in the segment and updating the pointer). This achieves space reuse, but only for cases where the new text has length <= the original one (and it adds padding when the length is < the original, so it doesn't result in a canonical structure). A natural alternative would be to allocate a new segment for each new object (struct or list), which would resemble the … I expect a true …
Yes, this seems like a legitimate bug that should be addressed.
There are a few (unofficial) examples around; e.g. https://pkg.go.dev/github.com/google/vectorio. Regardless, the syscalls are generally there on POSIX systems (and on Windows). It's probably more of an enterprise-style use case, is my guess, i.e. when you care about things like hardware offload (DMA) to the NIC; or maybe people are just sticking to C/Rust for those things. But I was also suggesting that the more common golang idiom for this would be … I guess I was really hoping to discuss getting to parity with the C++ version of capnp. Anyways, I'll leave this alone now. Fixing the …
Yep, I got that. The current implementation is just nowhere near where it would be possible to do (hence why I mentioned using one segment for each object; this would be "easier" to do in the current implementation, but nowhere near easy in an absolute sense).
I don't see how generics would help. The main issue today is that it's actually impossible to create an Arena implementation externally, because it requires accessing private fields in …
Well, I went looking again, and it turns out ... is it that far off? ;)

```go
func (e *Encoder) write(bufs [][]byte) error {
	_, err := (*net.Buffers)(&bufs).WriteTo(e.w)
	return err
}
```

https://pkg.go.dev/net#Buffers
So it was implemented for …
Ah, nice find! Looks like it's implemented for both Windows and Unix. I'll have to reassess some of my plans based on this, though I should probably at the very least give it a rough benchmark.
The main benefit is everything could be zero-copy until the point of serialization. Right now just doing … My issue is I've been building template objects for table-driven tests, altering fields in each scenario, and my payloads were getting unexpectedly large and filled with "cruft".
Sure, but some workflows may want to copy (because they're building an object of well-known size, not rewriting fields, and want to reuse the object reference on another slice that will be aliased). So this would have to be an optional feature (possibly selected based on the Arena).
Correct. I believe (though now I can't find a reference) this has always been a suggestion for Cap'n Proto usage: not to use it as an intermediate object representation.
I've always just used:

```go
// Create second feature.
err = capnp.Struct(c.feature[1]).CopyFrom(capnp.Struct(c.feature[0]))
require.NoError(c.T(), err, "feature[1].CopyFrom(feature[0])")

id, err = c.feature[1].Id()
require.NoError(c.T(), err, "feature[1].Id")
// change some values in "id" ...
```
Sure, but that pays the price of doing an additional copy when you wouldn't have to. We probably don't want to downgrade the performance of the existing workflow in favor of a new one - we should try to improve the performance of both :P
I agree, but I specifically ran into this issue in testing, where I don't care about performance. I'm more concerned with reuse without having to write a bunch of extra methods to build messages, or hundreds more lines of code; I had thought … There are use cases where performance would be better. Consider a multi-casting scenario where I need to inform N "hooks" of an event, with each client registering its hook by passing a capability to my RPC for me to call back. The messages are all mostly identical, except that for each client I need to modify a pointer field. For simplicity, let's say I pass a capability back to the client for the event object for further calls. Instead of using a template object which I use …:

```go
// Read the updated Node from the Message.
var node Node
if node, err = ReadRootNode(
	&capnp.Message{
		Arena:         capnp.SingleSegment(data),
		TraverseLimit: math.MaxUint64,
		DepthLimit:    2,
	},
); err != nil { ... }
```
Off topic: is there a better place to discuss other issues? I ran into another problem with futures in pipelining. I can do …
Not sure about "better", but the only other place where there are discussions on go-capnp is the matrix channel: https://matrix.to/#/!pLcnVUHHRZrUPscloW:matrix.org?via=matrix.org |
This hits home. I spent an inordinate amount of time debugging a similar incident in production. The code was re-using an arena, to the point that it grew large enough to trip the RPC flow-controller and deadlock everything. I believe Ian's comment at the time was that Zeroing/GC had been left out as a performance optimization, under the assumption that arenas weren't going to be reused.
+1
The canonicalization spec has an issue. It requires encoding pointers in pre-order, but doesn't require the same for data fields, in an effort to avoid requiring a schema and reflection.
Because segment arenas are append-only allocators in this implementation, "cruft" data may be present in the slice from temporary operations while otherwise unattached to the root struct (i.e. orphaned); notably, this library does not implement orphans or adoption. Due to this line of code, the segment is opaquely copied along with the "cruft" data, which should not be part of a canonical security message. Furthermore, the data elements may not be in pre-order layout.
For a canonical security message to be truly useful, all fields should be in a defined order, including non-pointer data fields. Two individuals should be able to produce the exact same semantic message independently, using different libraries, languages, and implementations, and have their canonical forms (and the ensuing cryptographic hashes) match. Verifying a raw payload (when the spec has already chosen to rearrange the pointers) is pointless if the same isn't applied to the data fields. Furthermore, it's more powerful to cryptographically verify a network object than a raw message, which is what Cap'n Proto is all about. A solid example is serializing and signing RPC persistent capabilities encoded as a struct.
Proposed changes:
- The `func Canonicalize(s Struct) ([]byte, error)` method should be improved to `func Canonicalize(s Struct) (Struct, error)`. It's a more useful API if users don't have to re-read a message from `[]byte`; e.g. sending a struct via an RPC interface in canonical form, alongside a cryptographic signature that verifies it. It could also add an optional `typeId ...uint64` parameter to enable reflection (the alternative encoding mode suggested previously).
- `capnp.Struct` should add an exported `Canonical() bool` accessor, indicating that the struct uses a single segment in the expected form. This makes it useful to embed in larger multi-segment messages (e.g. an RPC call). Perhaps via a `capnp.Struct.Canonicalize()` method.
- The implementation should not rely on an opaque `copy`. Copying the fields over one-by-one in pre-order also leaves any orphaned data in the source segment behind. The extra overhead from not using `copy` is insignificant in the context of performing cryptographic operations anyways.

As a side note, I've noticed an undesirable bug(?) in the RPC implementation: messages with "cruft" data in their segment are sent on the network despite the data's orphaned status. This requires great care on the library user's part to never 'build' messages directly in the target segment if they ever have temporary operations; users are thus likely to end up implementing their own 'canonical form' by performing operations in a very specific, idiomatic style to avoid this. I personally ran into this writing unit test tables, where it took a hundred lines of code to build a message, and then I used `capnp.Struct.CopyFrom()` for subsequent message test cases where I altered a single field; the RPC data sent included (potentially large) payloads of the previous data, copied and orphaned from the original message. The implication is that perhaps `capnp.Struct.CopyFrom` should optionally copy in canonical fashion, as proposed above. We could add a `canonical ...bool` parameter to the method to implement this without breaking backwards compatibility (which this library, in its 'alpha' state, does reserve the right to break anyway).

I'm essentially suggesting that canonical form should be a merkle tree; i.e. blockchain. I've implemented a blockchain using Cap'n Proto as the `mmap` ledger, and had to implement my own `Canonicalize` using reflection. ;)