Technically, Protocol Buffers is a serialization framework, not an RPC framework. gRPC and Stubby are RPC frameworks built on Protobuf, but I never worked on either of them.
> #@n marks order of values in the message.
Technically it marks the order in which fields were added to the message definition, not literally the order in which they appear in the serialization. Tracking the order of addition matters because it ensures that existing fields don't change position as new fields are added. However, a newer field may end up placed before older fields, if there happened to be space available from padding.
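To illustrate, here's a hypothetical Cap'n Proto schema sketch (the struct, field names, and file ID are made up for the example). The `@n` ordinals record the order fields were added; declaration order in the file is independent of that:

```capnp
@0xdbb9ad1f14bf0b36;  # placeholder file ID for this example

struct Person {
  name @0 :Text;
  age @2 :UInt8;    # added in a later schema revision; keeping ordinal 2
                    # preserves wire compatibility even though it is
                    # declared between two older fields
  email @1 :Text;
}
```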
> as apposed text encoding used
Typo in the original: presumably this should read "as opposed to the text encoding used".
> Cap'n Proto instead tries to represent values in wire-protocol as they would be when written to disk or even in-memory as much as possible
This sentence implies that on-disk formats and in-memory formats are typically similar, with network formats being different. In practice, though, on-disk formats are typically closer to network protocols. Like protocols, storage formats often need to be position-independent, forwards- and backwards-compatible, language-independent, architecture-independent, secure, etc.
I think what this sentence was trying to get at is that Cap'n Proto tries to make the storage/network format appropriate as an in-memory format, so that no translation step is needed when reading data into memory or writing it back out.
> Unlike other binary serialization protocols such as XMI or Protocol Buffers, Cap'n Proto considers fine-grained data validation at the RPC level an anti-feature that limits a protocols ability to evolve. This was informed by experiences at Google where simply changing a field from mandatory to optional would cause complex operational failures
For the most part, Protobuf followed the same philosophy from the start. The only exception was required vs. optional, which is a fairly minor bit of validation. As you note, "required" was removed from the language in proto3, making Protobuf and Cap'n Proto basically equivalent in this regard. But even before "required" was removed, I think they were pretty similar.
I don't know anything about XMI.
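For reference, the required/optional distinction being discussed looks like this (message and field names here are made up). In proto2, "required" baked validation into the wire contract, which is exactly what caused the operational problems when a field later needed to be relaxed:

```protobuf
// proto2
syntax = "proto2";
message User {
  required string name = 1;  // relaxing this later risks breaking deployed peers
  optional string email = 2;
}
```

In proto3, the keyword is gone and all singular fields behave as optional:

```protobuf
// proto3
syntax = "proto3";
message User {
  string name = 1;
  string email = 2;
}
```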
> As of October 2020, the reference implementation only supports level 2
The level numbers were defined by Cap'n Proto itself, so it's a little weird to reference "level 2" without defining what that means. I might say: "As of March 2021, three-party handoff and capability equality -- two advanced-but-important features of CapTP -- are not yet supported by Cap'n Proto."
> After Sandstorm.io transitioned from a commercial to community focus
You can just say: "After Sandstorm failed commercially"
> Cloudflare, which subsequently adopted Cap'n Proto as their primary internal RPC system
This isn't quite accurate.
Cloudflare used Cap'n Proto serialization long before I joined, but it also uses many other technologies, including Protobuf/gRPC, JSON, etc., in other places.
I joined and started the Cloudflare Workers project. Workers specifically makes use of Cap'n Proto in its implementation. However, the rest of Cloudflare still uses a variety of technologies.
> The primary speedup over Protocol Buffers comes from using arena-style memory allocation.
I would argue the primary speedup comes from zero-copy parsing/serialization, not from arena-style allocation. Protobuf actually supports arena-style allocation, at least in the C++ implementation.
That said, in reality this all depends on the use case. Sometimes arena allocation might be what makes the difference. Sometimes Protobuf is actually faster. But often Cap'n Proto is faster because it doesn't need a translation step.
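To make the "no translation step" point concrete, here is a toy Python sketch. This is not the actual Cap'n Proto or Protobuf API; it just contrasts parse-into-an-object (a translation step) with reading fields in place from a fixed-layout buffer (zero-copy style):

```python
import struct

# A toy fixed-layout binary record: id (uint32) at offset 0,
# score (float64) at offset 4 -- a stand-in for a pointer-based
# wire format with fields at known offsets.
buf = bytearray(12)
struct.pack_into("<I", buf, 0, 42)
struct.pack_into("<d", buf, 4, 3.5)

# "Parse" style (Protobuf-like): translate the bytes into a
# separate in-memory object up front.
class Record:
    def __init__(self, data):
        self.id, = struct.unpack_from("<I", data, 0)
        self.score, = struct.unpack_from("<d", data, 4)

parsed = Record(buf)

# Zero-copy style (Cap'n Proto-like): keep the buffer as-is and
# read each field directly at its fixed offset, only when accessed.
view = memoryview(buf)

def record_id(v):
    return struct.unpack_from("<I", v, 0)[0]

def record_score(v):
    return struct.unpack_from("<d", v, 4)[0]

assert record_id(view) == parsed.id == 42
assert record_score(view) == parsed.score == 3.5
```

The zero-copy path never builds an intermediate object, which is where the savings come from when messages are large or only partially accessed; the real libraries add compatibility and bounds-checking machinery on top of this basic idea.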