RPC forward/backward compat

Matt Mastracci

Jul 24, 2009, 11:48:41 AM
to Google-Web-Tool...@googlegroups.com
Hey all,

We've been struggling with the issue of RPC forward/backward
incompatibility for a little while and I thought I'd bring it to the
list for discussion.

As some of you know, one of our use-cases for GWT is embedding the
compiled JS in a Firefox extension. Unfortunately, the lifetime of
the code in the extension cannot be easily tied to the lifetime of the
code on the server. The consequence of this is that when our server-
side code is updated, the client-side code starts throwing
IncompatibleRemoteServiceExceptions until the user updates. We also
see this issue briefly during our website code updates - our load-
balanced servers are updated incrementally in the background, which
means that for a brief period, some active users get RPC exceptions.

There are a number of ways we can work around it for the extension
(forcing the user to update is the easiest), but this is somewhat
inconvenient for our users.

The ideal solution would be offering a forward/backward compatible RPC
protocol that would allow some flexibility in versioning between
client and server. I'd like to be able to push out a server that is
backward-compatibility aware and have it serve downlevel requests for
some period of time. For our website, the period of time would be
a few minutes while all the backends update and the static code is
pushed out. For our extension, this period could be on the order of
days or even a week.

One of the skunkworks projects I've been working on for a while is a
GWT port of Thrift (a versioned protocol similar to Protobuf) -
basically a direct-eval RPC library that would let us use the
versioned protocol to replace our RPC code. One of the big downsides
is that we'd lose a lot of the niceties of GWT RPC, mostly smart
objects and polymorphism. It's getting close to completion, but it's
been a while since I last had a chance to work on it, and in the
meantime the new deRPC branch has shown up.

Is the concept of versioning something that belongs in the core GWT
RPC code, or is this something better suited for an external library?

Thanks,
Matt.


BobV

Jul 24, 2009, 8:39:56 PM
to Google-Web-Tool...@googlegroups.com
> Is the concept of versioning something that belongs in the core GWT
> RPC code, or is this something better suited for an external library?

I have a design wave going on about how to add this to the new RPC
implementation. Here's a cruddy copy-and-paste of the current state
of the document.


The crux of the schema-skew problem is that Java objects don't have
support for optional fields in the way that protocol buffers do. An
Object either declares or inherits a field, or it doesn't. Moreover,
there is a difference between not having a value for a field and not
knowing about the field (the difference between undefined and null).

In this document, I'll refer to client and server CLs. What I'm
really referring to is the version of the RPC object schema. It is
likely the case that Perforce CL X and X+epsilon have the same object
schema, but CL is easier to write.

Assumptions:

- The server code rolling forward is more common (in terms of potential
  user impact) than the server rolling back. That is, a client is more
  likely to see a server going from CL X to X+1 than it is to see a
  server going from Y to Y-1.
- Version skew support is an opt-in feature, because the developer has
  to agree to semantics that are different from regular Java
  serialization. Types that support version skew must still be usable
  with legacy RPC.
- Both client and server must have identical "Required" fields. This
  allows the developer to guarantee a minimum level of schema
  compatibility.
- "Optional" fields need not be present on both the client and the
  server. It is acceptable for a field to change from Required to
  Optional or vice versa.
- The server has access to gwt.rpc files for all permutations that
  might still be running. Retention policies are specific to the app
  (e.g. from two pushes ago for online apps or two months ago for apps
  with offline support).
- Because of the static nature of the Java code, it is possible to
  analyze the server schema and compare it with the saved ClientOracle
  data to determine which permutations a given RPC service is
  incompatible with, either as a presubmit check or as a
  garbage-collection measure.
- The client must be able to preserve newly-added, optional fields so
  that any object sent by the server and merely echoed by the client
  will be returned to the server with the same data. The server is
  under no obligation to preserve unknown, optional fields sent by the
  client.

Implementation details:

- For any object that is not trivially serializable, the client
  presently consults the RpcProxy's TypeOverride object for a list of
  field names to serialize. Trivially serializable types are
  serialized with a for-in loop to save overall JS code size, but these
  could be changed to use a list of fields.
- When the server sends a type with new, optional fields, the object
  will be created with the extra fields assigned to an expando hung off
  of the Object::expando field, which is accessible through the
  WeakMapping client API. Essentially:
  Object::expando['rpcExtraFields'] = {'field1' : value,
  'field2' : value, '_fieldNames' : ['field1', 'field2']};
  (This is an extension of Dan's JDO magic-field support.)
- If the server does not receive an optional field, its value will be
  null if it is a nullable field, or the default value if the field is
  a primitive type. The default primitive values can be overridden on a
  per-field basis via the use of an annotation.
- If the server's schema does not have an optional field that is known
  to the client (as indicated by the ClientOracle), it will synthesize
  an appropriate value in the return payload.
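
To make the field-level semantics concrete, here is a rough sketch
(illustrative only - @DefaultValue in particular is just a placeholder
name, since the per-field default annotation hasn't been named yet):

// Illustrative only -- not the final API.
public class Settings implements java.io.Serializable {
  // Required field: must exist in both the client and server schemas.
  String userId;

  // Added in a newer schema. A downlevel peer that doesn't know about
  // this field simply sees null on its side.
  @Optional
  String displayName;

  // A missing primitive would normally take its Java default (0); the
  // placeholder @DefaultValue overrides that on a per-field basis.
  @Optional @DefaultValue("30")
  int refreshSeconds;
}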

Visible API changes:
A new, RPC-specific annotation will be created to designate optional
fields, called @Optional. This annotation may be specified on fields,
types, and packages to designate that the field, the fields in the
object, or the fields in the objects in the package have the optional
semantic. An @Required annotation will be provided to override
wider-scoped @Optional annotations.
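
For example (again, illustrative names only), a type could be marked
@Optional as a whole, with @Required pulling individual fields back to
the strict semantic:

// Illustrative only. The type-level @Optional makes every field
// optional by default; the field-level @Required overrides it.
@Optional
public class UserProfile implements java.io.Serializable {
  @Required
  String id;         // must be identical in the client and server schemas

  String nickname;   // optional: may be absent from an older schema
  String avatarUrl;  // optional
}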

--
Bob Vawter
Google Web Toolkit Team

Matt Mastracci

Jul 24, 2009, 9:52:58 PM
to Google-Web-Tool...@googlegroups.com
On 24-Jul-09, at 6:39 PM, BobV wrote:

> I have a design wave going on about how to add this to the new RPC
> implementation. Here's a cruddy copy-and-paste of the current state
> of the document.

Bob, this is awesome!

Is the plan to land this as part of deRPC, or is this a future feature
that will land beyond deRPC?

Also, will this be supported on methods themselves? For instance, can
I mark a new method parameter as @Optional so that older clients don't
need to provide it? Conversely, could we remove a parameter from a
method and still support clients sending data with the old signature?

One more question... Is it possible to incorporate the idea of
"numbered" fields into this design? This would make it much easier to
interop with Thrift (and possibly protobuf), both of which use numeric
keys for versioning. We'd probably write some code to output GWTRPC-
compatible Thrift objects from our IDL, which means that we'd have
numeric keys already set up and we wouldn't have to worry about
versioning issues when renaming fields.

Thanks,
Matt.

BobV

Jul 24, 2009, 11:11:28 PM
to Google-Web-Tool...@googlegroups.com
> Also, will this be supported on methods themselves?  For instance, can
> I mark a new method parameter as @Optional so that older clients don't
> need to provide it?  Conversely, could we remove a parameter from a
> method and still support clients sending data with the old signature?

I could go either way on that. My thinking here was to just use the
existing Java method overload semantics. As long as the servlet has a
method that will accept the parameters actually sent by the client,
the request would proceed. The older methods wouldn't need to be in
the RPC interfaces themselves, just (deprecated) methods on the
servlet.
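
Roughly (a sketch only - the service and types here are made up):

import java.util.Locale;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;

// Sketch only: AccountService and Account are made-up names.
public class AccountServiceImpl extends RemoteServiceServlet
    implements AccountService {

  // Current signature, the one declared in the current RPC interface.
  public Account createAccount(String name, Locale locale) {
    return new Account(name, locale);
  }

  // Overload kept around for clients compiled against the old
  // interface; it wouldn't need to appear in the RPC interface itself.
  @Deprecated
  public Account createAccount(String name) {
    return createAccount(name, Locale.getDefault());
  }
}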

> One more question...   Is it possible to incorporate the idea of
> "numbered" fields into this design?  This would make it much easier to
> interop with Thrift (and possibly protobuf), both of which use numeric
> keys for versioning.  We'd probably write some code to output GWTRPC-
> compatible Thrift objects from our IDL, which means that we'd have
> numeric keys already set up and we wouldn't have to worry about
> versioning issues when renaming fields.

I'm not really sure why you'd want to do this. The numbering / tags
in protocol buffers are more of an implementation detail to minimize
the number of bytes in the payload and to provide a meaningful way to
support the use of the protocol message across different languages.
If you have a concrete use case, please give me an example.

Matt Mastracci

Jul 25, 2009, 12:03:52 AM
to Google-Web-Tool...@googlegroups.com
On 24-Jul-09, at 9:11 PM, BobV wrote:

>> Also, will this be supported on methods themselves? For instance,
>> can I mark a new method parameter as @Optional so that older clients
>> don't need to provide it? Conversely, could we remove a parameter
>> from a method and still support clients sending data with the old
>> signature?
>
> I could go either way on that. My thinking here was to just use the
> existing Java method overload semantics. As long as the servlet has a
> method that will accept the parameters actually sent by the client,
> the request would proceed. The older methods wouldn't need to be in
> the RPC interfaces themselves, just (deprecated) methods on the
> servlet.

From a dev perspective, it might be easiest if we could mark
parameters optional directly on the method. It more closely matches
our current modus operandi w.r.t. Thrift - the new parameters are
added directly to the method, while the old ones are simply removed.

Regardless of the method compatibility story, it would be very helpful
to have an RPC validation utility: given this set of client RPC
manifests, can we successfully parse the requests?
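
With today's legacy RPC plumbing I imagine the rough shape would be
something like the following - the SerializationPolicy classes are the
existing ones, but the manifest directory and the list of service
types are placeholders:

import java.io.File;
import java.io.FileInputStream;
import java.util.ArrayList;
import com.google.gwt.user.client.rpc.SerializationException;
import com.google.gwt.user.server.rpc.SerializationPolicy;
import com.google.gwt.user.server.rpc.SerializationPolicyLoader;

// Rough shape only: walk the archived *.gwt.rpc policies for every
// permutation still in the wild and check the types the service uses.
public class RpcCompatibilityCheck {
  public static void main(String[] args) throws Exception {
    Class<?>[] serviceTypes = {};  // placeholder: DTOs used by the service

    for (File manifest : new File("archived-policies").listFiles()) {
      FileInputStream in = new FileInputStream(manifest);
      SerializationPolicy policy = SerializationPolicyLoader.loadFromStream(
          in, new ArrayList<ClassNotFoundException>());
      in.close();

      for (Class<?> type : serviceTypes) {
        try {
          // Can a payload using this type still be read and written?
          policy.validateDeserialize(type);
          policy.validateSerialize(type);
        } catch (SerializationException e) {
          System.err.println(manifest + " would break on " + type + ": " + e);
        }
      }
    }
  }
}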

>> One more question... Is it possible to incorporate the idea of
>> "numbered" fields into this design? This would make it much easier
>> to interop with Thrift (and possibly protobuf), both of which use
>> numeric keys for versioning. We'd probably write some code to output
>> GWTRPC-compatible Thrift objects from our IDL, which means that we'd
>> have numeric keys already set up and we wouldn't have to worry about
>> versioning issues when renaming fields.
>
> I'm not really sure why you'd want to do this. The numbering / tags
> in protocol buffers are more of an implementation detail to minimize
> the number of bytes in the payload and to provide a meaningful way to
> support the use of the protocol message across different languages.
> If you have a concrete use case, please give me an example.

Good point - I'm not sure I can really offer a good example that would
justify this effort. The only reason I can see to do it would be to
prevent field renames from breaking serialization compatibility
(something we take for granted today). That can be mitigated through
developer education, and as it stands, there's nothing in this model
that would prevent us from generating GWT-compatible stubs from our
Thrift IDL.

Thanks for answering my questions,
Matt.
