The spec I posted extends the request object so that params can be an
object (not just an array), it defines some role-based terminology such
as client and server, and it removes transport-specific suggestions.
This may seem like I placed a glass upside down on a table covered in
flour, then blew away all the useful flour just to gain a clean
circle. In essence this is what I did. But my reasoning is that with
that done, we can start to focus on sectioning off and defining
where the rest of the flour should sit and where overlap makes sense,
all around json-rpc.
My principle behind this is that json-rpc should be simple (and
therefore powerful), not encumbered and polluted.
Items such as transport-specific details, service descriptions,
referencing (circular or otherwise), etc. should be handled in
separate specs.
What this really gives us is the ability to pass an object in as a
param. It then forces us to have conversations about the other topics
on their own merit, and not as something that must always be
included in the json-rpc spec.
Hopefully I can get some feedback on my view/suggestion. Contact me
off list if you like, but I prefer to have the discussions around
json-rpc here in the open.
The current spec text reads:
- id - A Scalar value. It is used to match the response object with the request object.
- result - The Object that was returned by the invoked method. This must be null in case there was an error invoking the method.
I suggest changing the result line to:
- result - The Object, Array, or Scalar that was returned by the invoked method. This must be null or omitted in case there was an error invoking the method.
or, more simply, to:
- result - The value that was returned by the invoked method. This MUST be null or omitted in case there was an error invoking the method.
I also understand the point about omitted result, error, and id
(notifications). I like the more minimal style and it is much clearer
compared to a null result and a null error in a single object. Would
you also advocate omitting the param in the request object if nothing
is passed?
In one of your examples you pass a string as a param, not an object/
array. I think we may be pushing to cover too much ground if the below
are expected to have the same effect:
{ "method": "link", "params": "Client Present" }
{ "method": "link", "params": ["Client Present"] }
{ "method": "link", "params": {"name":"Client Present"} } //
assumes name is the correct param implied in the others
As a caller, it would be incorrect to assume that the above would all
work all the time or would mean the same thing. The server should
dictate what it accepts on a per-method basis, and accepting an object
would depend on its ability to map it out. A 1.0 server could
accept an object as the first param and then map it out (otherwise it
would fail).
I guess that brings up the reason why I suggested we allow an object as
the param in addition to the array. It is mainly to allow those of us
who tend to pass objects anyway to map them as we choose instead of
pulling them out of an array. It may not be worth adding except that
others have expressed an interest for their own reasons. Besides, my
primary goal is to break the transport-binding habits of the other
spec above all else.
I think it would be wrong to assume that the order of the properties
equates to positional param order. Extra properties should be dropped
prior to the call, though from what I understand, in Python extra
properties throw an exception (IIRC). I was not expecting the server
to use whatever the client passed unless it wants to, but rather that
it could require one form or the other.
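To make this concrete, here is a minimal Python sketch of the kind of
server-side mapping I mean (the subtract method and dispatch helper are
made up for illustration):

def subtract(minuend, subtrahend):
    return minuend - subtrahend

def dispatch(func, params):
    # Apply a JSON-RPC "params" value to a Python callable.
    if isinstance(params, list):
        return func(*params)   # positional, 1.0 style
    if isinstance(params, dict):
        # Python raises TypeError on unexpected keyword arguments, so a
        # server that wants to tolerate extra properties must drop them
        # before this point.
        return func(**params)
    raise TypeError("params must be an array or an object")

dispatch(subtract, [42, 23])                            # 19
dispatch(subtract, {"minuend": 42, "subtrahend": 23})   # 19
# dispatch(subtract, {"minuend": 42, "extra": 1})       # TypeError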
Presently in my 1.0 systems I often pass an object as the only item in
an array. This is because my calls are mostly wrappers into a more
complex API that is never directly exposed. My thought is that systems
that can take objects and apply them to matching named arguments can
do so, but it should not be a requirement. Systems that want to take
both could, but a method could reject a call if it didn't know how to,
or didn't want to, accept a param object. Existing 1.0 systems should
reject the request anyway if the param is not an Array.
I'll dwell on this tonight while I sleep, and hopefully those
that wanted an object to be passed directly can state their interest.
I'm mostly happy with passing a single object in an array, but it is
not as apparent that it is being used as the params of the call on the
server, compared to an object being passed directly.
An ID that is an object or an array is a little strange as a mechanism
for matching a response with the request. Perhaps someone here could
offer a good reason or a good use for an object/array id?
In practice, in Zope 3, named parameters would be thrown into the
machinery that handles HTML forms. The object publishing model then
applies the form parameters to the callable.
What I liked best about the separated named and positional parameters
was that they would be simpler to deal with on the client side. With a
unified "params" object-or-array, it's (I think) a lot of work for a
javascript client to inspect the call and choose whether to send the
request as an array or an object. As the specification currently
stands, the more general formulation for 1.1 (1.0 is still supported
as the Array case) is to always pass an Object and (informally) use
Atif's suggestion of numeric keys to put positional arguments in
order, which would also be a lot of code.
So, I think that not separating named and positional arguments means
that the major javascript libraries will remain at 1.0. After all, it
works fine, and 1.1 does not bring much to the table but complication,
additional code, and breakage if the server supports only 1.0.
Just my opinion.
Before this is all final, I would like to hear a good story about an
upgrade path to 1.1 for javascript client libraries.
-Jim Washington
Even in Java the params object could be manually linked, and that
linking could be a defined step. This does not change even if we send
an object as the first item of an array and stick with the 1.0 spec;
the mapping still takes place somewhere. I felt that with an object as
the param the intention was clear, whereas an object as the first item
might just be that. The server would not be able to tell, and would be
left to guess as best it could.
Having just positional parameters is sometimes insufficient to
express the intent of the data. I don't think most of us work in
languages where we access the arguments within the function
body through an array; they are mapped into local variables by
the language. To me it almost feels backwards presently, as in
practice we are mapping positional values to argument names, which
happen to follow the same order.
As an example:
function a(b=1,c=3,d,e,f) { /*do something in here*/ }
Sending over {"method":"a", "params":{"b":20,"d":30,"e":40,"f":50}}
clearly shows the data's intention not to provide a value for c. But
how would one do that with positional parameters without knowing the
defaults ahead of time? This starts to wander into the service
description issues, but I do not think that should be something the
client always needs to be aware of.
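In Python, for instance, the same intent might look like the sketch
below (Python requires defaulted parameters to come last, so the
signature is reordered):

def a(d, e, f, b=1, c=3):
    return (b, c, d, e, f)

# Named arguments make "use the default for c" explicit without the
# caller ever knowing what that default is:
a(d=30, e=40, f=50, b=20)   # (20, 3, 30, 40, 50)

# Called positionally, the caller would have to know c's default just
# to be able to supply values past it.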
For this to work correctly with 1.0 servers, the order of dividend and
divisor would have to be known and honored by the client. If the order
did not match, an unintended result would be returned by 1.0 servers
despite the data looking fine. Something like this (kwparam) was
discussed in earlier threads, but I get the feeling that trying to
shoehorn named data into positional data is going to break one way or
another; too many assumptions have to be made to make it work.
The server really needs to know what names it wants out of the object,
and place them into the call it makes. My systems don't expose `real`
APIs, so json-rpc is a thin wrapper over the calls I do expose.
Perhaps it is my perspective, since calling an external method might
hit a wrapper function before it calls the real function, which could
change based on the object properties.
Could this be done w/ a name to position map in Java (or other
languages) on the fly?
> If the order did not match, despite looking fine in the data, an
> unintended result would be returned by 1.0 servers.
http://www.php.net/manual/en/function.func-get-args.php
function foo()
{
    $numargs = func_num_args();
    echo "Number of arguments: $numargs<br />\n";
    if ($numargs >= 2) {
        echo "Second argument is: " . func_get_arg(1) . "<br />\n";
    }
    $arg_list = func_get_args();
    for ($i = 0; $i < $numargs; $i++) {
        echo "Argument $i is: " . $arg_list[$i] . "<br />\n";
    }
}
foo(1, 2, 3);
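In Python, at least, such a name-to-position map can also be built on
the fly through introspection (a sketch; the divide method is made up):

import inspect

def divide(dividend, divisor):
    return dividend / divisor

def to_positional(func, named_params):
    # Reorder a params object into the positional order func expects.
    order = list(inspect.signature(func).parameters)   # ['dividend', 'divisor']
    return [named_params[name] for name in order]

to_positional(divide, {"divisor": 2, "dividend": 1})   # [1, 2]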
Does anyone thus far take issue with omitting extra (sometimes
confusing) null/empty fields?
Do we anticipate that that alone would break existing 1.0 servers?
This is currently what seems to be clearest:
{id, method}
{id, method, params}
{method}
{method, params}
{id, result}
{id, error}
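Concretely (with a hypothetical method name), a notification in the
reduced style versus the 1.0 style, where the unused fields must be
null-filled:

{"method": "heartbeat"}
{"method": "heartbeat", "params": [], "id": null}

and a response with error omitted rather than null:

{"id": 1, "result": 19}
{"id": 1, "result": 19, "error": null}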
Perhaps this should go to its own thread, and this thread should stay
for the scalar/object-as-params discussion.
We use Java reflection in jabsorb to match the method name itself when
a JSON-RPC call is made (and this works out really well).
I've never actually used reflection to match method parameter names,
but I assume it's possible.
A quick look at the Javadoc suggests it's possible in JDK 1.5 (via the
GenericDeclaration interface).
Or the request object param specification for the method/server simply
would not accept named params at all, which I think makes more sense
for people who may lose the ability to do that mapping depending on
their environment/language/compile options. Much like the
reduced-fields suggestion, I am not sure the goal is to make
everything conceptually map onto the 1.0 spec and work fine in the
existing implementations.
> If you were using pure name-value pairs (no positional info), one would have
> to alter client communication based on compiler options. Yikes...
Something about this still just bothers me. There is a fundamental
difference between saying `these are the named values to use in your
call of the method` and `the order I sent matches what you expect`. If
the names are not in the same order the server expects, this could
easily lead to unintended results: passing the same object to two
different servers and getting two different results, because one
didn't support named params and applied a different, but still valid,
order. Unwrapping an object during request object construction would
require client-side knowledge of the server's expected
mapping/capabilities to prevent this. Is it wise to try to do this
type of mapping?
A legitimate question for those who would like to keep it as a
separate list: what advantage does knowing the param names
offer if you have to match the order client side anyway (besides 1.0
compat)?
What advantage(s) do two lists (a list of values and a list of names)
have over a single object within a 1.1 server?
I agree that annotations for this would be anti-DRY and not good.
What about making use of javadoc information with some kind of custom
doclet? (I know this is not optimal either, but maybe a little more DRY
than annotations, because we all should be javadoc'ing our public APIs
anyway, right?) It could make use of the javadoc info stored in
some config files somewhere and just throw an exception if the info is
not there.
This would require an additional build step to generate the info
properly... or maybe you could also have the ability to use
annotations as well for those that want it (supporting both ways, or
even byte code introspection as a third method for when debug info is
available).
(This is starting to sound like hibernate xdoclet/annotations, isn't it?)
Just some more ideas...
On Nov 5, 2007 12:41 PM, Kris Zyp <kri...@gmail.com> wrote:
I'm in favour of going back to a pure jsonrpc 1.0 style parameter array,
and instead using the "service procedure description" information to
allow for robust client/server support for positional and named
arguments (as well as potentially interface versioning).
Let me explain my reasoning.
What I think we should be trying to achieve with the development of
this standard is a distillation of various people's requirements into
the simplest common denominator, along with compatibility with prior
versions of the standard unless there is a good reason to break it
(i.e. no nice way to meet a new requirement as an evolution on top of
the present 1.0 draft).
I believe we can meet the requirements for positional and named
arguments through an augmentative approach rather than a complete
change in interface - and at that, an interface change that is
fragmented into three different approaches (implied-position, named,
and explicitly positioned arguments), which only complicates new
implementations, backward compatibility, and interoperation between
different implementations.
The common denominator for most/all computer languages'
procedure/function declarations is an "arity" comprised of an ordered
set of arguments, each with a specific type (and perhaps other
attributes, such as C# in/out and various other type qualifiers beyond
our scope). This is IMHO where we should start.
From my understanding the need for named and/or explicitly positioned
arguments is to support easier versioning of interfaces.
So for the following example procedure interface (in whatever language
you choose):
void updateUser [ string name, string address ]
you would get this type of request:
{
"method": "updateUser",
"params": { "name": "Michael", "address": "Waikikamukau" }
}
and if I add phone as the second argument to the procedure on the server:
void updateUser [ string name, string phone, string address ]
the client can safely send it in a different order (or not send the
new parameter at all, whatever the case may be).
{
"method": "updateUser",
"params": { "name": "Michael", "address": "Waikikamukau", "phone":
"555-1234" }
}
This looks simple, but in practice, as others are noting, it is not
really supported by all languages (due to lack of information about
parameter names at runtime), so it is perhaps not the best candidate for
a basic format for procedure parameters in jsonrpc.
But by making procedure parameter descriptors mandatory, the required
information for named and/or positional arguments could be known a
priori (making this additional request data redundant), just by applying
the simple optimisation rule of factoring out repeated information and
moving it to the initiation step (fetching procedure
descriptors a single time in the beginning):
e.g. procedure information known in advance:
{
  "name": "foo",
  "return": "void",
  "param": [
    /* arity information used to support named/positional arguments */
    { "type": "string", "name": "name" },
    { "type": "string", "name": "phone" },
    { "type": "string", "name": "address" }
  ]
}
Then a call in a JS client with just "name" and "address" named
arguments could be converted on the wire to:
{
"method": "foo",
"params": [ "Michael", null, "Waikikamukau" ]
}
The client could then provide a call interface that accepts an object
with named arguments (for those trying to achieve this) instead of an
array, and use the reflection information to guarantee type and
interface version safety (auto-filling arguments not provided with
null, or whatever policy the client decides on). We have just
optimised this repeated information out of the call format and into
the initiation information - but we still have all of the information
needed for argument names and explicit positions in the client.
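A sketch of that client-side conversion in Python, using the "foo"
descriptor above (the helper name is made up):

def to_wire_params(descriptor, named_args, fill=None):
    # Build the positional array from the parameter order given in the
    # procedure descriptor, auto-filling omitted arguments.
    return [named_args.get(p["name"], fill) for p in descriptor["param"]]

desc = {"name": "foo", "return": "void",
        "param": [{"type": "string", "name": "name"},
                  {"type": "string", "name": "phone"},
                  {"type": "string", "name": "address"}]}

to_wire_params(desc, {"name": "Michael", "address": "Waikikamukau"})
# ['Michael', None, 'Waikikamukau']  (null on the wire)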
<< IMHO the named parameter is not such an attractive option in the long
run from a client coding perspective, as it precludes a native binding
that can infer these argument names from a traditional-style JS call or
a call from most other languages (this is, after all, an rpc, so aside
from async callbacks it would be nice to have clients support a regular
function call syntax). In any case, by making a method with a single
parameter of object you can get the named argument approach for free
anyway (so in fact array parameters are a natural superset). >>
For languages like Java, where we can't easily get argument names, we
could just name them "arg0", "arg1", etc ... in the parameter
descriptor. The position is implied by the index in the parameter
descriptor array.
This is where I would much rather see the spec going than the currently
quite complex/fragmented approach of having to support three different
styles of argument passing, especially when all the requirements could
be achieved by a simpler and more concise approach (mandating procedure
descriptors containing method arity). This current parameter nonsense is
one of the main reasons why I personally would be hesitant to adopt the
1.1 spec (cost/benefit).
The advantages of the 1.0 array approach augmented with arity
information from the procedure parameter descriptors:
Pros
* supports named and positional arguments
* fully backwardly compatible with 1.0
* smaller on the wire
* simplicity in line with original json-rpc objectives (i.e. we don't
want another SOAP)
Cons
* Can't distinguish between null or no argument provided (how important
is this?)
* Client needs to assemble array if named or explicit position type
requests are done
This approach could even be taken a step further: the procedure
descriptors could also have a version number for each method, making
rock-solid support for multiple versions of interfaces (assuming the
method version is included in the request). Then allowing multiple
procedure descriptors with the same name (but differing arity and/or
version) would go a long way towards solving these problems in a much
more solid way (and create a more official way to support overloading,
which exists in the jabsorb java jsonrpc implementation).
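For instance, a versioned descriptor might look something like this
(the "version" member is only a suggestion, not part of any current
draft):

{
  "name": "updateUser",
  "version": 2,
  "return": "void",
  "param": [
    { "type": "string", "name": "name" },
    { "type": "string", "name": "phone" },
    { "type": "string", "name": "address" }
  ]
}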
Also, WRT the service procedure description: could we possibly change the
type naming to the standard ECMAScript naming? This service descriptor
is only sent once, so saving 5 or 6 bytes should be less important than
clarity. I.e. could we use instead:
boolean, number, string, array, object and void
Any consensus on taking this kind of approach?
My 2c.
Michael.
If we should start cherry-picking features, I want the Lisp model, which
combines named parameters, optional parameters (with an is-supplied
feature to discriminate between null and missing values), and a &rest
parameter for what is left.
http://www.lisp.org/HyperSpec/Body/sec_3-4-1.html
Don't want to implement that? Then let's settle for a simple model, as
Michael suggested.
I suggest that in the procedure parameter descriptors you should be
allowed extra information, like the default and documentation I've
added below.
/Henrik Hjelte
{
  "name": "foo",
  "return": "void",
  "param": [
    /* arity information used to support named/positional arguments */
    { "type": "string", "name": "name", "default": "No name yet" },
    { "type": "string", "name": "phone", "documentation": "Excluding area code!" },
    { "type": "string", "name": "address" }
  ],
  "documentation": "The foo method does all sorts of things"
}
My goal was to be able to pass an object as a param, because under the
1.0 spec I end up passing the object that contains the params as a
single-item array. It just seemed a little strange and counter-intuitive
compared to just sending an object. In essence, for such calls I have
a public-facing wrapper taking one param that maps to the real call's
arguments.
I think I support Kris's direction (and others') more than this. At
heart I'm a 1.0 purist... but extending the params to an object seemed
to feel right, because 1.0 servers would not be able to process it.
This avoids the confusion of trying to create a meta-system plus code
to make it work in 1.0 clients/servers. Even at that, in my view not
all servers/calls have to support both methods. Perhaps I am not
looking at this in strict enough rpc terms in my head...
Going to dwell on this for a bit; perhaps it is best left as an
experiment/extension spec until I have better arguments/examples. I'm
still in support of making sure json-rpc is not bound to a particular
protocol, however. I also find the reduced-fields concept very
interesting; maybe these are the things that belong in a 2.0 that
breaks backwards compatibility with 1.0.
--
Matt (MPCM)
----- Original Message -----
From: "Michael Clark" <mic...@metaparadigm.com>
To: <json...@googlegroups.com>
Sent: Monday, November 05, 2007 11:41 PM
Subject: [json-rpc] Re: JSON-RPC (object specification)
But can we take a step back...
What problem are we trying to solve? What does 1.0 not do that we
need from 1.1?
It feels like we've gotten so into the weeds on this issue that we've
lost track of the goal.. (What is the goal?)
For example, I think there is value in making a service description a
standard requirement; then generic tools could easily be created for
testing and for generating stub libraries against anybody's
json-rpc X.X service.
Right, but my point is: why do we want these features? What does it
buy us? If we can't answer these questions then maybe we don't need
them..
From my POV, the service description is what I think has the most value
in the 1.1 spec.
> My goal was to be able to pass an object as a param, because under the
> 1.0 spec I end up passing the object that contains the params as a
> single-item array. It just seemed a little strange and counter-intuitive
> compared to just sending an object. In essence, for such calls I have
> a public-facing wrapper taking one param that maps to the real call's
> arguments.
>
As you say, the named parameter method is presently universally supported
in 1.0 - it just requires a [ ] wrapped around the object. Passing an
object in a one-arg array is perhaps a relatively minor trade-off when
you get what you want, plus compatibility to boot.
As an aside, the whole request and response should be wrapped in a [ ],
as this is safer for clients that use eval (not 100% sure on this, but
this security issue has been mentioned many times).
The cometd/bayeux protocol does this also (I believe it is best common
practice):
http://svn.xantus.org/shortbus/trunk/bayeux/protocol.txt
This of course would break 1.0 compat but is logically much simpler to
implement (than mapping object fields to procedure arguments when you
don't have their names).
> I think I support Kris's direction (and others') more than this. At
> heart I'm a 1.0 purist... but extending the params to an object seemed
> to feel right, because 1.0 servers would not be able to process it.
> This avoids the confusion of trying to create a meta-system plus code
> to make it work in 1.0 clients/servers. Even at that, in my view not
> all servers/calls have to support both methods. Perhaps I am not
> looking at this in strict enough rpc terms in my head...
>
> Going to dwell on this for a bit; perhaps it is best left as an
> experiment/extension spec until I have better arguments/examples. I'm
> still in support of making sure json-rpc is not bound to a particular
> protocol, however. I also find the reduced-fields concept very
> interesting; maybe these are the things that belong in a 2.0 that
> breaks backwards compatibility with 1.0.
>
I really don't see an easy way to make clients and servers interoperate
if it is optional. E.g. if an interface is being called with
named-parameter arguments from client code but the server only supports
the array-arguments approach, the client will then need to translate
these fields into an array - but it doesn't have any information on the
order (without a parameter descriptor). It would also need some sort of
capabilities descriptor exported from the server to know it can do this.
I think it would have to be mandatory in the server if you wanted
interoperation (as well as the array method to access these methods, if
that was also to be supported) - or alternatively, provide the service
descriptor information as proposed, so the client is able to convert to
an array (it needs the order information) if that is the only method
supported by the server.
I'm resigned to the fact that it is in the spec the way it is now, as I
know my feedback is pretty late. FWIW I was just pointing out what I
would like to see (mandatory procedure descriptors), and making the
marshalling format simpler (i.e. removing the need to implement three
types of argument handling to make an interoperable implementation).
If interoperability is a burden, then some folk will just not bother to
adopt the spec (or at least not fully conform to it).
In any case, I have given my feedback.
Michael.
From my perspective I see the major benefits in the procedure
descriptors (which can be used to accomplish interface version safety
and define argument schemas, as well as support automatic proxy
generation in smart clients and servers).
The rest of the spec is a good clarification of what was unwritten in
1.0, which is all good! - but the parameter marshalling just appears to
me as a burden with no immediate benefit in my use case (although a
procedure descriptor to allow selection of a method with a matching
arity would be great).
> It feels like we've gotten so into the weeds on this issue that we've
> lost track of the goal.. (What is the goal?)
>
* Interoperability vs Flexibility?
* Grab bag vs common denominator?
* Support for smart introspection vs Manually annotating exported
methods (i.e. add_methods)?
Skylos wrote:
> On 11/6/07, Jeffrey Damick <jeffre...@gmail.com> wrote:
>
>> Right, but my point is: why do we want these features? What does it
>> buy us? If we can't answer these questions then maybe we don't need
>> them..
>
> Because named parameter passing is the future. It makes more sense,
> it's more compatible; it's the whole reason behind tagged formats in
> general - like XML or JSON. Positional parameters are a relic of
> procedurally based, inflexible programming. If your application only
> supports positional - then send positional! You know your client and
> server implementations are of necessity going to be pretty closely
> step-locked for any number of reasons - by using named parameters
> there is LESS locking. Documentation and human readability are
> increased; seeing a debug message like
>
> error in call to method store, params: 555, 123, 4567, 33805,
> 'jollie', null, 'pahrump', null, 89048, null
The 1.0 format allows you to do this with 4 bytes of overhead - just pass
an object as a single argument to your method. No need to try to map
this methodology onto existing multiple-argument procedures (which is a
can of worms if you look at it closely).

> is just not as obvious as
>
> error in call to method store, params: area:555, exchange:123,
> phone:4567, street_num:33805, street_name:'jollie',
> zip_plus_4:'pahrump', zip:89048

I agree with this totally, although it shouldn't be forced on everyone.
Consider a common denominator which supports both methods, rather than
adding complexity to the spec.

> and can never be. My example is a little silly, but you shouldn't
> have to memorize or refer to documentation on the particular order of
> parameters all the time. Regardless of whether you have a formal service
> description or not, you SHOULD be able to send data without it being
> positional, expressing it as what it is - an object - rather than as a
> single-element list. Additionally, there is no reason to force fields
> to appear when it is functionally identical to not have them there at
> all. You're just finding excuses to throw exceptions, instead of
> doing What Is Logical And Expected.
> I think this is answer enough. We do want these features, and because
> it's our prerogative to improve something even if the old thing works,
> we're going to make it better. Less arbitrary. Simple, Clear,
> Logical. After all, when I read "YOU MUST HAVE PARAMS BE AN ARRAY",
> my first thought was "Peh, that's stupid! I want to send an object!"
Who said that? I was only suggesting the superset approach, which
intrinsically supports both methods with 1.0 compatibility and only 4
bytes more on the wire (including whitespace).
> If you're using legacy code that only supports positional parameters,
> you'll either have to A. send positional parameters or B. construct a
> layer to map them, or it's just not going to work. Both are outside
> the scope of the JSON-RPC main specification. This lightweight,
> simple protocol is for making calls and sending and receiving simple
> through complex data - not defining the niggly details of anybody's
> legacy code abstraction layers.
Somehow I don't see this as simplicity; us mere legacy people have to be
penalised.
Anyway, I'll go back to my legacy code ;) I have voiced my opinion, which
is why we are all here after all.
~mc
The reason I wanted to include an object as a valid param is because
that is how I end up using the system under 1.0. Over 85% of my
system's calls use only single-item arrays; around 97% of my calls by
volume are these single-item arrays.
This results in me having an additional wrapping layer of calls to get
to the real functions. Our systems do not expose direct calls; often a
single interface may map to different internal calls. Again, this is
fine (and desired) in my case.
The reason I felt objects need to be a valid param is that more and more
systems expect the name mapping to come with the data. This moves away
from the traditional positional rpc role. The power of a technology
like JSON is that the data is descriptive, not some complex
non-descriptive nest.
If I call do_something(data) on more than one system, the systems may
do different things. But with only positional arguments they may
understand my data to mean different things if the client is not
careful. This always requires more client-side knowledge of server
expectations. Self-describing data is a much more reasonable approach
in a world where a client may be talking with many servers about the
same data without wanting/having to know what each server expects, or
where it passes each server just what it needs in the format that
server needs. This is not a traditional approach, but it is a reality
of where some systems are and where many are heading.
It abstracts a layer above traditional rpc systems into the realm of
intent and data from the client's view. If the server can map onto a
1.0 setup, that is great. It shouldn't have to, however, and thus can
choose not to accept a param object (like 1.0, or for just certain
calls). Service descriptions come in here a lot, and I like mine very
loosely defined. If others would like to do named-to-positional magic
with that data, and there is a need, I support that. But mapping one
onto the other anywhere but in the server is not going to be clean.
Personally I want my client kept as simple as possible, and my server
as loose as possible.
It is likely I'll put the object question to a vote during the next
week or so, to see if there is enough interest to continue these
discussions, move it into the 1.1 spec, or perhaps push it into an
extension spec. Since this would make 1.0 a subset, perhaps these
should be json-rpc 2.0 topics...
> What if parameter names were provided through a separate property, like
> this:
> {"id": "call1",
>  "method": "divide",
>  "params": [1,2],
>  "names": ["dividend","divisor"]}
no, this is bad. see below.
> This is nice because it would still work with 1.0 RPC servers [...], but
> smarter RPC 1.1 servers (like PHP and JS servers) could still use parameter
> names to do more robust name matching
It definitely would *NOT* work with 1.0 RPC servers.
It hides the errors and returns wrong results, which is **much** worse than
returning an error and no result.
Look at this simple example:
with a 1.1 server:
-> {"id":1, "method":"divide", "params":[1,2], "names":["dividend","divisor"] }
<- {"id":1, "result": 0.5}
-> {"id":2, "method":"divide", "params":[2,1], "names":["divisor","dividend"] }
<- {"id":2, "result": 0.5}
that's ok.
now, with a 1.0 server:
-> {"id":1, "method":"divide", "params":[1,2], "names":["dividend","divisor"] }
<- {"id":1, "result": 0.5}
still ok.
-> {"id":2, "method":"divide", "params":[2,1], "names":["divisor","dividend"] }
<- {"id":2, "result": 2}
wrong!
so, that would be the wrong way.
(because it *silently* returns *wrong results*.)
regards,
Roland
> From my understanding the need for named and/or explicitly positioned
> arguments is to support easier versioning of interfaces.
partly, yes. other reasons are e.g. readability/verbosity,
self-explaining data and simpler usage.
> the client can safely send it in a different order (or not send the
> new parameter at all, whatever the case may be).
>
> {
> "method": "updateUser",
> "params": { "name": "Michael", "address": "Waikikamukau", "phone":
> "555-1234" }
> }
>
> This looks simple, but in practice, as others are noting, it is not
> really supported by all languages (due to lack of information about
> parameter names at runtime), so it is perhaps not the best candidate for
> a basic format for procedure parameters in jsonrpc.
>
> But by making procedure parameter descriptors mandatory, the required
> information for named and/or positional arguments could be known a
> priori
yes, that's right.
> e.g. procedure information known in advance:
>
> {
>   "name": "foo",
>   "return": "void",
>   "param": [
>     /* arity information used to support named/positional arguments */
>     { "type": "string", "name": "name" },
>     { "type": "string", "name": "phone" },
>     { "type": "string", "name": "address" }
>   ]
> }
>
> Then a call in a JS client with just "name" and "address" named
> arguments could be converted on the wire to:
>
> {
> "method": "foo",
> "params": [ "Michael", null, "Waikikamukau" ]
> }
but why should this be done in the *client*? With this information, the
server could equally make the conversion (in probably all languages).
And in my opinion, it's *much* better to do this conversion in the
server, so that:
- the client stays simple
- the server can transform the named parameters to positional parameters
  (if it needs to), or -- if its language supports named arguments --
  can simply use the named arguments directly.
this would have the same "Pros" you listed in your mail, but eliminate
the "Cons".
(except that it would be bigger on the wire, but that's ok for most because
you get self-described data. and you can still use pure positional parameters
where the size on the wire is a major issue.)
regards,
Roland
> I had proposed using an object in addition to an array as the param
> value in the last proposed spec.
yes, and I think this is good.
> However, this clearly breaks the way many languages work and without
> introspection/remapping this will not even be possible in many
> languages unless hand coded or through eval type calls.
I don't think so.
Yes, many languages don't support named arguments directly, so they
have to map the named arguments to positional ones. So if you want
to use named arguments with such languages, it should be possible
to write a (small) wrapper (or wrapper-generator) which does this,
as sketched below. It shouldn't be necessary to write the mapping
for every function yourself.
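In Python, for example, such a wrapper is only a few lines (a sketch,
not tied to any particular implementation):

import inspect

def accepts_json_params(func):
    # Wrap func so it can be invoked with either a JSON-RPC params array
    # or a params object; named arguments are mapped onto positions, and
    # declared defaults fill in whatever was omitted.
    sig = inspect.signature(func)
    def wrapper(params):
        if isinstance(params, dict):
            bound = sig.bind(**params)   # TypeError on unknown names
            bound.apply_defaults()
            return func(*bound.args)
        return func(*params)
    return wrapper

@accepts_json_params
def update_user(name, phone="unknown", address=""):
    return (name, phone, address)

update_user({"name": "Michael", "address": "Waikikamukau"})
# ('Michael', 'unknown', 'Waikikamukau')
update_user(["Michael", "555-1234", "Waikikamukau"])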
And yes, many languages don't support introspection. There, the
information about the function has to be made explicit. But you
already do have this explicit information, as soon as you want
to have service-descriptions (like system.describe).
And last: if you don't want to use named parameters, you don't have to.
SO: I would strongly suggest to keep named parameter (or "object"
as param), because many of us want it.
The only question is, *how* to implement them:
- use "param" with an object or array (the "cleanest"/most obvious way)
- use "param" with an array and e.g. "kwparam" with an object
- ...
This maybe affects how easy it could be implemented in different
languages.
> For these
> reasons and because I don't desire to break json-rpc or make it
> unusable I am going to withdraw my proposal as such. If others still
> believe strongly in adding object params to json-rpc, I'm happy to
> keep the discussions going as I am a big supporter of it conceptually.
I am, too. And there are many others who like/need/want named
parameters.
Without named parameters, I could as well use json-rpc 1.0...
regards,
Roland
> > But you
> > already do have this explicit information, as soon as you want
> > to have service-descriptions (like system.describe).
> Presumably, this would usually come from the introspection.
Ok, if you don't already have this explicit information for
service-descriptions, you at least need it to document your
functions. And if you document them in a machine-readable way,
you again have the needed explicit information.
> Consequently I wouldn't be upset if they were added. I wouldn't use them, I
> would stick to positional parameters for the sake of interoperability,
> compactness, and simplicity, but I would probably support object parameters.
ok :).
regards,
Roland
Our discussions regarding Java introspection have been based around the
java.lang.reflect API; however, one important point we have been making
is that parameter names are not stored in Java bytecode (unless debug
is on), because they are not necessary for making calls, and therefore
are not accessible from this API. java.lang.reflect can provide
information about the position and type of parameters, but not the
parameter names. So, yes, you are correct: java.lang.reflect definitely
can and generally should be used to create service descriptions,
complete with type information, but it can only do so on its own for
positional parameters. It requires extra information to do named
parameters (either through manual configuration, debug-enabled bytecode
inspection, or access to the source, which, unlike PHP, is not
necessarily available).
Kris