JSON-RPC (object specification)


Matt (MPCM)

Oct 30, 2007, 10:38:11 AM
to JSON-RPC
I have added a page called 'JSON-RPC (object specification)', which I
feel is the answer to some of the issues with 1.1 WD and 1.1 ALT. They
both include some great points, but I feel they seriously muddy the waters.

The spec I posted extends the request object so that params may be an
object (not just an array), defines some role-based terminology such as
client and server, and removes transport-specific suggestions.

This may seem like I placed a glass upside down on a table covered in
flour, then blew away all the useful flour just to gain a clean
circle. In essence this is what I did. But my reasoning is that with
that done, we can start to focus on sectioning off and defining
where the rest of the flour should sit, and where overlap makes sense,
all around json-rpc.

My principle behind this is that json-rpc should be simple (and
therefore powerful), not encumbered and polluted.

Items such as transport specific details, service descriptions,
referencing (circular or otherwise), etc... should be handled in
separate specs.

What this really gives us is the ability to pass an object in as a
param. It then forces us to have conversations around the other topics
on their own merits, and not as something that must always be
included in the json-rpc spec.

Hopefully I can get some feedback on my view/suggestion. Contact me
off list if you like, but I prefer to have the discussions around
json-rpc here in the open.

Skylos

Oct 30, 2007, 3:51:57 PM
to json...@googlegroups.com
My suggestions:

Because your wording is awkward,

Change:

params - An Object to pass as arguments to the method (can also be an array).

To:

params - An Object or Array containing the arguments to the method

Because comparing objects is a whole issue of semantics itself...

Change:

id - The request id can be of any type. It is used to match the response object with the request object.

To:

id - A Scalar value.  It is used to match the response object with the request object.

Because I think you should be able to return an array or scalar, because I think null and not-existing are functionally equivalent, and because I think this is the behavior users expect (which is just good design),

Change:


  • result - The Object that was returned by the invoked method. This must be null in case there was an error invoking the method.
  • error - An Error object if there was an error invoking the method. It must be null if there was no error.
  • id - This must be the same id as the request it is responding to.

to:
  • result - The Object, Array, or Scalar that was returned by the invoked method. This must be null or omitted in case there was an error invoking the method.
  • error - An Error object if there was an error invoking the method. It must be null or omitted if there was no error.
  • id - This must be the same id as the request it is responding to.

Once again, in notifications, I think id should be omittable.

Change:
  • id - Must be null.
To:
  • id - Must be null or omitted.

An additional example illustrating async communication, notifications, and various data types:

--> { "method": "link", "params": "Client Present" }
--> { "method": "list", "params": { "when":"yesterday", "what":"sales" }, "id": 1}
--> { "method": "echo", "params": ["HELO JSON-RPC"], "id": 2}
--> { "method": "login", "params": {"user":"skylos","password":"secret"}, "id": 3}
--> { "method": "echo", "params": "MAIL JSON-RPC", "id": 4}
<-- { "result": "Hello JSON-RPC", "id": 2} 
<-- { "result": "SUCCESS", "id": 3} 
<-- { "error": { "status":500, "message":"That is not your address"}, "id": 4}
<-- { "result": [ 50, 40, 60, 20, 70, 10, 80 ], "id": 1} 
<-- { "method": "link", "params": "CTS" }
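The exchange above relies on the id to pair each response with its request, since responses can arrive out of order. A minimal client-side sketch of that bookkeeping (illustrative only; the function and variable names below are my own, not from any spec):

```javascript
// Track outstanding requests by id so out-of-order responses can be matched.
const pending = new Map();
let nextId = 0;

function buildRequest(method, params, expectReply) {
  const req = { method };
  if (params !== undefined) req.params = params;
  if (expectReply) {
    req.id = ++nextId;
    pending.set(req.id, method); // a real client would store a callback here
  }
  // No id at all means this is a notification: no response expected.
  return req;
}

function handleResponse(resp) {
  // Match the response to its request by id, then forget it.
  const origin = pending.get(resp.id);
  pending.delete(resp.id);
  return { origin, result: resp.result, error: resp.error };
}
```

A notification like `{ "method": "link", "params": "Client Present" }` simply never enters the pending map.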


Naturally, the use of the term Scalar can be edited to whatever is appropriate.

Thoughts?

Skylos

Kris Zyp

Oct 30, 2007, 4:21:02 PM
to json...@googlegroups.com
Has there been any discussion about the relative merit of an object versus an array containing a single object (i.e. sticking to the 1.0 params syntax)? What is the advantage of:
{"a":1,"b":2}
vs 1.0 compliant:
[{"a":1,"b":2}]
I think the answer is that {"a":1,"b":2} can take the place of [1,2] for 1.1-aware receivers, to be more robust. In situations where people are writing custom handlers for the RPCs, this doesn't seem to make any difference; they can just take the first param and have fun with it. However, in situations where we are building frameworks that use introspection to automatically match RPCs to methods (which I am doing, and I think Arthur is as well), this is relevant. I am wondering, though, if parameter name matching is even possible in most environments. I am not aware of how to do this in Java (it may be possible with live bytecode analysis, but that may be brittle and dependent on the compilation process).
In JavaScript this is possible, albeit a little tricky (I think one has to run a regex on the toString of the target function). If there are other ways to do this, I would be curious to know. Is parameter name matching possible in other languages?
Are there other advantages to using an object as the params value that I am not aware of?
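For reference, the toString trick mentioned here can be sketched as below. This is illustrative only, and brittle: comments, default values, or unusual formatting in the parameter list will confuse the regex.

```javascript
// Extract parameter names by running a regex over the function's source.
function paramNames(fn) {
  const src = fn.toString();
  const match = /\(([^)]*)\)/.exec(src); // grab the text between the first parens
  if (!match || match[1].trim() === "") return [];
  return match[1].split(",").map(function (s) { return s.trim(); });
}
```

A framework could use this to map an incoming params object onto a method's argument list by name.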
Kris

Kris Zyp

Oct 30, 2007, 4:21:34 PM
to json...@googlegroups.com

To:

id - A Scalar value.  It is used to match the response object with the request object.
 
I agree.


  • result - The Object that was returned by the invoked method. This must be null in case there was an error invoking the method.
to:
  • result - The Object, Array, or Scalar that was returned by the invoked method. This must be null or omitted in case there was an error invoking the method.
I agree. This seems to be a wording mistake in 1.0; the 1.0 spec actually has examples of non-Object values. To be more specific, though, I believe it should be allowable for result to be null even when error is null (most languages support returning a null value, and a null can be very meaningful). Another wording could simply be "value".

Skylos

Oct 30, 2007, 6:25:26 PM
to json...@googlegroups.com
I agree with Kris and revise suggestion to change result definition thusly:
  • result - The Object that was returned by the invoked method. This must be null in case there was an error invoking the method.
to:
  • result - The value that was returned by the invoked method. This MUST be null or omitted in case there was an error invoking the method.


Matt (MPCM)

Oct 30, 2007, 11:09:02 PM
to JSON-RPC
An ID that is an object or an array is a little strange as a mechanism
for matching a response with the request. Perhaps someone here could
offer a good reason or use case for an object/array id?

I also understand the point about omitting result, error, and id
(notifications). I like the more minimal style, and it is much clearer
than a null result and a null error in a single object. Would
you also advocate omitting params in the request object if nothing
is passed?

In one of your examples you pass a string as a param, not an
object/array. I think we may be pushing to cover too much ground if
the below are expected to have the same effect:

{ "method": "link", "params": "Client Present" }
{ "method": "link", "params": ["Client Present"] }
{ "method": "link", "params": {"name":"Client Present"} }  // assumes name is the param implied in the others

As a caller, it would be incorrect to assume that the above would all
work all the time or would mean the same thing. The server should
dictate what it accepts on a per-method basis, and accepting an object
would be up to its ability to map it out. A 1.0 server could
accept an object as the first param and then map it out (otherwise it
would fail).

I guess that calls up the reason why I suggested we allow an object as
the param in addition to the array. It is mainly to allow those of us
who tend to pass objects anyway to map them as we choose instead of
pulling them out of an array. It may not be worth adding, except that
others had expressed an interest for their own reasons. Besides, my
primary goal is to break the transport-binding habits of the other
spec above all else.

Matt (MPCM)

Oct 30, 2007, 11:20:25 PM
to JSON-RPC
http://groups.google.com/group/json-rpc/browse_thread/thread/30b54b0336ad1c53

I think it would be wrong to assume that the order of the properties
equates to positional param order. Extra properties should be dropped
prior to the call, but from what I understand, in Python extra
properties throw an exception (IIRC). I was not envisioning the server
trying to use whatever the client passed unless it wants to, but rather
that it could require one or the other.

Presently, in my 1.0 systems, I often pass an object as the only item in
an array. This is because my calls are mostly wrappers into a more
complex API that is never directly exposed. My thought is that systems
that can take objects and apply them to matching named arguments can,
but it should not be a requirement. Systems that want to take both
could, but a method could reject a call if it didn't know how to, or
didn't want to, accept a param object. Existing 1.0 systems should
reject the request anyway, if the param is not an Array.

I'll dwell on this tonight while I sleep, and hopefully those
that wanted an object to be passed directly can state their interest.
I'm mostly happy with passing a single object in an array, but it is
not as apparent that it is being used as the params of the call on the
server, compared to an object being passed directly.

Skylos

Oct 30, 2007, 11:35:33 PM
to json...@googlegroups.com
On 10/30/07, Matt (MPCM) <Wicke...@gmail.com> wrote:

An ID that is an object or an array is a little strange as a mechanism
of matching a response with the request. Perhaps someone here could
offer a good reason or a good use if it was an object/array?

I think you'd have to canonize the mechanism by which they are compared if you did.

I also understand the point about omitted result, error, and id
(notifications). I like the more minimal style and it is much clearer
compared to a null result and a null error in a single object. Would
you also advocate omitting the param in the request object if nothing
is passed?

Yes.  Sometimes the method is all of it.  Like the notification method on my JSON-enabled bomb, which requires no parameters and to which I expect no response.  Security is part of the transport layer, of course.  Can't accept these from just ANYBODY!

{ "method": "HCF" }

In one of your examples you pass a string as a param, not an object/
array. I think we may be pushing to cover too much ground if the below
are expected to have the same effect:
{ "method": "link", "params": "Client Present" }
{ "method": "link", "params": ["Client Present"] }
{ "method": "link", "params": {"name":"Client Present"} }        //
assumes name is the correct param implied in the others

Agreed.  Those are not the same thing, inasmuch as something tagged name is not something merely in 'first position'.  A single value is in first and only position, so the first two could be considered the same.  When it comes down to it, I think HOW that is handled is up to the people managing the communication, though.  This specifies the RPC format, not what is or is not equivalent among the different possible call forms within the contents of the params value.  I do think it should be allowed to send a single scalar value, though.

Call key 'params' - required when there is a parameter value to send.  May contain any single value (in the sense that an array or object is a compound single value).

As a caller, it would be incorrect to assume that the above would all
work all the time or would mean the same thing.

Yes.

The server should
dictate what it accepts on a per method basis and accepting an object
would be up to their ability to map it out.

Yes.

For 1.0 servers, it could
accept an object as the first param and then map it out (otherwise it
would fail).

If it was trying to be equivalent or something. It seems a bit far to assume that if I call with different text I'll get similar semantics. Application layer, not our problem.

I guess that calls up the reason why I suggested we allow an object as
the param in addition to the array. It is mainly to allow those of us
who tend to pass objects anyway to map them as we choose instead of
pulling them out of an array.

I think this is a good thing.  I dislike putting something as the first value of a required array.  If we're specifying that it is an array, we're guaranteeing that it is a series of values of some sort.  And it shouldn't be the spec's problem WHAT a params is.

It may not be worth adding except that
others had expressed an interest for their own reasons. Besides, my
primary goal is to break the transport binding habits of the other
spec above all else.

I'm just advocating my suggestions and such.  :)  TIMTOWTDI.  I'm with you, and will happily listen to other input.

Skylos

--
   A: Because it disrupts the normal flow of information.
   Q: Why is top-posting annoying?
   A: Putting your remark(s) at the very top of your reply to whatever message you're replying to, regardless of where, in that message, the specific bit you're replying to is actually found.
   Q: What is "top-posting"?

Jim Washington

Oct 31, 2007, 8:19:56 AM
to json...@googlegroups.com
Matt (MPCM) wrote:
> http://groups.google.com/group/json-rpc/browse_thread/thread/30b54b0336ad1c53
>
> I think it would be wrong to assume that the order of the properties
> equates to positional param order. Extra properties should be dropped
> prior to the call, but from what I understand in python having extra
> properties throws an exception (IIRC). I was not viewing the server
> trying to use whatever the client passed unless it wants to, but more
> that it could require one or the other.
>
>
Yes. Python balks when the wrong number of positional params is
passed, or if it gets confused about keyword params.

In practice, in Zope 3, named parameters would be thrown into the
machinery that handles HTML forms. The object publishing model then
applies the form parameters to the callable.

What I liked best about the separated named and positional parameters
was that they would be simpler to deal with on the client side. With a
unified "params" object-or-array, it is (I think) a lot of work for a
JavaScript client to inspect the call and choose whether to send the
request as an array or an object. As the specification currently
stands, the more general formulation under 1.1 (1.0 is still supported
as the Array case) is to always pass an Object and (informally) use
Atif's suggestion of numeric keys to put positional arguments in
order, which would also be a lot of code.

So, I think that not separating named and positional arguments means
that the major javascript libraries will remain at 1.0. After all, it
works fine, and 1.1 does not bring much to the table but complication,
additional code, and breakage if the server supports only 1.0.

Just my opinion.

Before this is all final, I would like to hear a good story about an
upgrade path to 1.1 for javascript client libraries.

-Jim Washington

Weston Ruter

Oct 31, 2007, 11:36:59 AM
to json...@googlegroups.com
Hey Kris,
I'm doing parameter name matching in a PHP server implementation <http://code.google.com/p/json-xml-rpc/>. It works by re-parsing the service source files (using the native PHP tokenizer) to obtain the named parameter lists of the public methods.

Weston

Kris Zyp

Oct 31, 2007, 12:41:58 PM
to json...@googlegroups.com
That is very cool, but I still don't think it is possible in Java (and perhaps other bytecoded languages). Are named parameters really a step towards greater interoperability if some languages simply can't support them as a mechanism for method matching? We would still need to include positional parameters for these bytecoded languages, wouldn't we? I would love to know if I am wrong about this.
Kris

Weston Ruter

Oct 31, 2007, 1:02:23 PM
to json...@googlegroups.com
For languages which do not support introspection of method parameters, implementations could simply provide a means of explicitly specifying parameters. For example, using a JavaScript analogy:

var server = new RPCServer("http://example.com/service/");
function findBusiness(name, location){
    //do logic
}
server.addMethod(findBusiness, ['name', 'location']);

For compiled languages, macros or scripts could be written to automatically do this as a pre-processing step.

Kris Zyp

Oct 31, 2007, 1:50:47 PM
to json...@googlegroups.com
This just seems like a big burden. Either some type of preprocessor needs to be run on source code, or users have to manually add parameter names. What if parameter names were provided through a separate property, like this:
{"id":"call1",
 "method":"divide",
 "params":[1,2],
 "names":["dividend","divisor"]}
This is nice because it would still work with 1.0 RPC servers, as well as uninformed Java servers that are not aware of method parameter names, but smarter 1.1 servers (like PHP and JS servers) could still use parameter names to do more robust matching (if dividend and divisor were swapped, for example).
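To illustrate how a name-aware server might consume such a request: values arrive positionally (so a 1.0 server can use "params" as-is and ignore "names"), while a 1.1 server can reorder them against its own declared parameter order. A hedged sketch; `reorderParams` and `expectedOrder` are hypothetical names of mine, not part of the proposal:

```javascript
// Reorder positional params against the server's own declared order,
// using the optional "names" array when present.
function reorderParams(request, expectedOrder) {
  if (!request.names) return request.params; // plain 1.0 positional call
  const byName = {};
  request.names.forEach(function (name, i) {
    byName[name] = request.params[i];
  });
  return expectedOrder.map(function (name) { return byName[name]; });
}
```

With this, a client that sends dividend and divisor swapped (but named) still gets the intended call on a 1.1 server.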

Matt (MPCM)

Oct 31, 2007, 1:52:40 PM
to JSON-RPC
If a language cannot automatically use a params object, it is up to
its implementations to require an array as part of their
description/API and throw an error otherwise. I think the goal should
not be an RPC format constrained by just which languages can automate
the process. It is not the client who dictates that a bit of data be
accepted anywhere it sends it.

Even in Java the params object could be manually linked, and that
linking could be a defined step. This does not change even if we send
an object as the first item of an array and stick with the 1.0 spec;
the mapping still takes place somewhere. I felt that with an object as
the param the intention was clear, whereas an object as the first item
might just be that. The server would not be able to tell, and would be
left to guess as best it can.

Having just positional parameters is sometimes insufficient to
express the intent of the data. I don't think most of us work in
languages where we access the arguments within the function
body through an array; they are mapped into local variables by
the language. To me it almost feels backwards presently, as in
practice we are mapping positional values to argument names, which
happen to follow the same order.

As an example:

function a(b=1,c=3,d,e,f) { /*do something in here*/ }

Sending over {"method":"a", "params":{"b":20,"d":30,"e":40,"f":50}}
clearly shows the data's intention not to provide a value for c. But
how would one do that with positional parameters without knowing the
defaults ahead of time? This starts to wander into the
service-description issues, but I do not think that should be
something the client always needs to be aware of.
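A sketch of how a server might apply such a params object against declared defaults, so that an omitted name (here c) keeps its default. The `applyNamedParams` helper and the declaration table are hypothetical server-side metadata, purely for illustration:

```javascript
// Fill each declared parameter from the sent object, falling back to
// the declared default when the client omitted it.
function applyNamedParams(declared, sent) {
  const args = {};
  for (const name in declared) {
    args[name] = (name in sent) ? sent[name] : declared[name].default;
  }
  return args;
}
```

The client never needs to know what c's default is; only the server's declaration table does.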

Matt (MPCM)

Oct 31, 2007, 3:40:13 PM
to JSON-RPC
On Oct 31, 1:50 pm, "Kris Zyp" <kris...@gmail.com> wrote:
> {"id":"call1",
> "method":"divide",
> "params":[1,2],
> "names":["dividend","divisor"]}

For this to work correctly against 1.0 servers, the order of dividend
and divisor would have to be known and honored by the client. If the
order did not match, despite looking fine in the data, an unintended
result would be returned by 1.0 servers. Something like this (kwparam)
was discussed in earlier threads, but I get the feeling that trying to
shoehorn named data into positional data is going to break one way or
another; too many assumptions have to be made to make it work.

The server really needs to know what names it wants out of the object,
and place them into the call it makes. My systems don't expose `real`
APIs, so json-rpc is a thin wrapper over the calls I do expose.
Perhaps it is my perspective, since calling an external method might
hit a wrapper function before it calls the real function, which could
change based on the object properties.

Could this be done with a name-to-position map in Java (or other
languages) on the fly?

Kris Zyp

Oct 31, 2007, 4:05:58 PM
to json...@googlegroups.com
Don't get me wrong, I definitely see the benefit of named parameters. My application, Persevere, is in JavaScript, server and client, so it can certainly do name matching. However, interoperability seems like it should be a key goal of the spec. If we can deliver requests with parameter name information without necessarily breaking 1.0 servers, that seems advantageous. Clients (meaning RPC request senders) can give parameter values and names without needing to know what version the server is on. The server can be upgraded from 1.0 to 1.1 and automatically become parameter-name aware, without any modifications on the client.
 

If the order
did not match, despite looking fine in the data, an unintended result
would be returned by 1.0 servers.
 
Of course, the 1.0 client has always had to know the order. If there are 1.0 agents involved, the order must be known. However, that doesn't mean we can't smoothly and compatibly transition into something that will support naming and order changes for when 1.0 agents are gone.
If you are writing your own client and server application that never has to interoperate with a third party, then all of this is irrelevant; you don't need our permission to deviate from the spec. Where the spec provides value is when we are working with other agents that implement the spec. If we can do so with the maximum opportunity for compatibility, isn't that the best for the goals of the spec?
Kris

Kris Zyp

Oct 31, 2007, 4:37:08 PM
to json...@googlegroups.com
Another thing that came to mind: using params & names arrays would also work more coherently with variable-parameter methods, which I know are supported in at least JS and Java (1.5+):

JS:
function showName(firstName, otherNames) {
  for (var i = 1; i < arguments.length; i++)
    arguments[i]; // do something with it
}
Java:  showName(String firstName, String... otherNames)
 
RPC:
{"id":"call1","method":"showName",
 "params":["Kristopher","William","Zyp"],
 "names":["firstName","otherNames"]}
 
I don't know how you would do that with just an object.
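Kris's point can be made concrete: with two lists, values beyond the last name can be collected for the trailing variable-arguments parameter. A sketch (the helper name is mine, not from any spec):

```javascript
// Split positional params against a names list whose last entry is a
// varargs slot: the first names bind one value each, the last name
// collects everything left over.
function splitVarargs(params, names) {
  const fixed = params.slice(0, names.length - 1);
  const rest = params.slice(names.length - 1); // everything for the last name
  return { fixed: fixed, rest: rest };
}
```

An object keyed by name cannot express this, because two of the values would need the same key.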
Kris

 

Matt (MPCM)

Nov 1, 2007, 10:06:01 AM
to JSON-RPC
PHP has something like the JS example:

http://www.php.net/manual/en/function.func-get-args.php
function foo()
{
    $numargs = func_num_args();
    echo "Number of arguments: $numargs<br />\n";
    if ($numargs >= 2) {
        echo "Second argument is: " . func_get_arg(1) . "<br />\n";
    }
    $arg_list = func_get_args();
    for ($i = 0; $i < $numargs; $i++) {
        echo "Argument $i is: " . $arg_list[$i] . "<br />\n";
    }
}
foo(1, 2, 3);

Does anyone thus far take issue with the omitting of extra (sometimes
confusing) null/empty fields?
Do we anticipate that that alone would break existing 1.0 servers?

Matt (MPCM)

Nov 1, 2007, 11:16:09 AM
to JSON-RPC
http://groups.google.com/group/json-rpc/web/proposed-format-visual

This is currently what seems to be clearest:

{id, method}
{id, method, params}
{method}
{method, params}
{id, result}
{id, error}

Perhaps this should go to its own thread, and this thread should stay
for the scalar/object-as-params discussion.
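With null fields omitted, those six shapes are distinguishable purely by which keys are present, so a receiver's dispatch can be very simple. An illustrative sketch (the function name is mine):

```javascript
// Classify a message by its present keys, per the shapes listed above.
function classify(msg) {
  if ("method" in msg) return "id" in msg ? "request" : "notification";
  if ("result" in msg) return "response";
  if ("error" in msg) return "error-response";
  return "invalid";
}
```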

Kris Zyp

Nov 1, 2007, 11:44:55 AM
to json...@googlegroups.com
I like it.
Kris

Jeffrey Damick

Nov 5, 2007, 9:00:24 AM
to json...@googlegroups.com
Sure it is possible in Java; just use annotations. We should support
named parameters.

Kris Zyp

Nov 5, 2007, 9:59:01 AM
to json...@googlegroups.com
> Sure it is possible in Java, just use annotations..
There are an infinite number of ways parameter names can be matched with
manually added information, annotations being just one of them. However,
necessitating additional work for existing working code, such as adding
annotations, does not seem like a feature, and the resultant duplication of
information does not improve robustness, but rather increases the chance of
mistakes. DRY!

> We should support named parameters.
I agree!
Kris

Arthur Blake

Nov 5, 2007, 10:02:19 AM
to json...@googlegroups.com
Am I missing something? Couldn't we just match them by parameter name (using Java reflection)?

Kris Zyp

Nov 5, 2007, 10:16:16 AM
to json...@googlegroups.com
Maybe I am missing something. How do you do that?

Arthur Blake

Nov 5, 2007, 10:28:03 AM
to json...@googlegroups.com
Within the context of a Java based JSON-RPC framework, it should be
quite easy to do it with Java reflection.

We use Java reflection in jabsorb to match the method name itself when
a JSON-RPC call is made (and this works out really well.)

I've never actually used reflection to match method parameter names,
but I assume it's possible.

A quick look at the Javadoc suggests it's possible in JDK 1.5 (via the
GenericDeclaration interface.)

Kris Zyp

Nov 5, 2007, 11:37:41 AM
to json...@googlegroups.com

We use Java reflection in jabsorb to match the method name itself when
a JSON-RPC call is made (and this works out really well.)
This use of reflection is what I am assuming as well.
 
A quick look at the Javadoc appears it's possible in JDK 1.5 (via the
GenericDeclaration interface.)
I tested this, and it doesn't seem to return parameter names. As a matter of fact, I just tried compiling a test class: if you don't use the "Generate debug info" option, the parameter names do not exist in the bytecode at all. It is simply impossible for that information to be retrieved through reflection unless you have that option enabled (at which point it would at least be accessible through bytecode inspection). I assume the same issue would exist with C#.
There may still be value in accessing parameter names through bytecode inspection with debug info enabled. If the format of positional parameters plus positional names is used, then when debug info is enabled a framework can determine if the positional parameter names are correct and throw an exception if they are not. This would still reap the benefit of using parameter names to assure correctness. And when debug info is not generated (for example, for deployment to production), the positional parameters can simply be used and the names ignored.
If you were using pure name-value pairs (no positional info), one would have to alter client communication based on compiler options. Yikes...
Kris

Stephen McKamey

Nov 5, 2007, 11:54:06 AM
to json...@googlegroups.com
C# / .NET Reflection does allow method and parameter name inspection.
It is quite effective for this purpose. There are also other
mechanisms which can be attached at build-time for code generation, so
that the performance hit of Reflection can be avoided at runtime.
This is how I've been doing JSON-RPC named params in C# for about a
year now.

Matt (MPCM)

Nov 5, 2007, 12:07:02 PM
to JSON-RPC
> example for deployment to production), the positional parameters can simply
> be used and the names will be ignored.

Or the request object param specification for the method/server simply
would not accept named params at all, which I think makes more sense
for people who may lose the ability to do that mapping depending on
their environment/language/compile options. Much like the reduced
field suggestion, I am not sure the goal is to make sure everything
conceptually maps into the 1.0 spec and works fine in the existing
implementations.

> If you were using pure name-value pairs (no positional info), one would have
> to alter client communication based on compiler options. Yikes...

Something about this still just bothers me. There is a fundamental
difference between saying `these are the named values to use in your
call of the method` and `the order I sent matches what you expect`. If
the names are not in the same order the server expects, this could
easily lead to unintended results: passing the same object to 2
different servers and getting two different results because one didn't
support named params and applied a different, but still valid, order.

Unwrapping an object during request-object construction would require
client-side knowledge of the server's expected mapping/capabilities
to prevent this. Is it wise to try to do this type of mapping?

A legitimate question I have for those who would like to keep it as
separate lists: what advantage does knowing the param names offer if
you have to match the order client-side anyway (besides 1.0 compat)?

Kris Zyp

Nov 5, 2007, 12:16:41 PM
to json...@googlegroups.com
A legitimate question I have for those who would like to keep it as
separate lists: what advantage does knowing the param names offer if
you have to match the order client-side anyway (besides 1.0 compat)?
 
If you are talking with 1.1 server, you don't need to know the order. Knowing the names is enough.

Matt (MPCM)

Nov 5, 2007, 12:36:30 PM
to JSON-RPC
The same would be true if you passed an object param directly
(right?).

What advantage(s) do 2 lists (a list of values and a list of names)
have over a single object within a 1.1 server?

Kris Zyp

Nov 5, 2007, 12:41:48 PM
to json...@googlegroups.com
What advantage(s) does 2 lists (list of values and a list of names)
have over a single object within a 1.1 server?
It would support variable argument methods (where the number of arguments exceeds the number of parameter names).
Kris
 
 



Arthur Blake

Nov 5, 2007, 1:48:49 PM
to json...@googlegroups.com
Sorry Kris, didn't mean to put you on a wild goose chase for that one :)

I agree that annotations for this would be anti-DRY and not good.
What about making use of javadoc information with some kind of custom
doclet (I know this is not optimal either, but maybe a little more DRY
than annotations, because we all should be javadoc'ing our public APIs
anyway, right?) -- it could make use of the javadoc info stored in
some config files somewhere and just throw an exception if the info is
not there.

This would require an additional build step to generate the info
properly... or maybe you could also have that ability to use
annotations as well for those that want it (supporting both ways, or
even including byte code introspection as well as a 3rd method for
when debug info is available)

(this is starting to sound like hibernate xdoclet/annotations isn't it?)

Just some more ideas...

Kris Zyp

Nov 5, 2007, 2:02:18 PM
to json...@googlegroups.com
BTW, I will not be terribly disappointed if you all think that passing an object for the params is better. While I still think that technically two lists is a better solution for flexibility and interoperability, the object as a parameter has one very significant advantage: It is the prettiest, most obvious solution.
Kris

Jeffrey Damick

Nov 5, 2007, 5:05:09 PM
to json...@googlegroups.com
You couldn't use a map to do that?


Kris Zyp

Nov 5, 2007, 7:40:31 PM
to json...@googlegroups.com
How would you do it with a map? There are more values than names.
If I am the only one in favor of two lists (a names list and a values list), you certainly don't need to let my dissent keep you from proceeding with 1.1 progress. If everyone else wants an object/map as a valid params value, then go for it.
Kris

Michael Clark

Nov 6, 2007, 2:41:22 AM
to json...@googlegroups.com
Hi All,

I'm in favour of going back to a pure jsonrpc 1.0 style parameter array,
and instead using the "service procedure description" information to
allow for robust client/server support for positional and named
arguments (as well as, potentially, interface versioning).

Let me explain my reasoning.

What I think we should be trying to achieve with this standard is a
distillation of everyone's requirements into the simplest common
denominator, while preserving compatibility with prior versions of the
standard unless there is a good reason to break it (i.e. no nice way to
meet a new requirement as an evolution on top of the present 1.0 draft).

I believe we can meet the requirements for positional and named
arguments through an augmentative approach rather than a complete change
in interface (and at that, an interface change fragmented into three
different approaches - implied-position, named, and explicitly
positioned arguments - which only complicates new implementations,
backward compatibility, and interoperation between implementations).

The common denominator for most (if not all) computer language
procedure/function declarations is an arity comprising an ordered
set of arguments, each with a specific type (and perhaps other
attributes, such as C# in/out and various other type qualifiers beyond
our scope). This is IMHO where we should start.

From my understanding the need for named and/or explicitly positioned
arguments is to support easier versioning of interfaces.

So for the following example procedure interface (in whatever language
you choose):

void updateUser [ string name, string address ]

you would get this type of request:

{
"method": "updateUser",
"params": { "name": "Michael", "address": "Waikikamukau" }
}

and if I add phone as the second argument to the procedure on the server:

void updateUser [ string name, string phone, string address ]

I can safely send it in a different order to the client (or not send the
new parameter at all, whatever the case may be).

{
"method": "updateUser",
"params": { "name": "Michael", "address": "Waikikamukau", "phone":
"555-1234" }
}

This looks simple, but in practice, as others are noting, it is not
really supported by all languages (due to lack of information about
parameter names at runtime), so it is perhaps not the best candidate for
a basic format for procedure parameters in jsonrpc.

But by making procedure parameter descriptors mandatory, the required
information for named and/or positional arguments could be known a
priori (making this additional request data redundant), just by applying
the simple optimisation rule of factoring out repeated information and
moving it to the initiation step (fetching procedure descriptors a
single time in the beginning):

e.g. procedure information known in advance:

{
"name": "foo",
"return": "void",
"param": [
/* arity information used to support named/positional arguments */
{ "type": "string", "name": "name" },
{ "type": "string", "name": "phone" },
{ "type": "string", "name": "address" }
]
}

Then a call in a JS client with just "name" and "address" named
arguments could be converted on the wire to:

{
"method": "foo",
"params": [ "Michael", null, "Waikikamukau" ]
}


The client could then provide a call interface that accepted an object
with named arguments (for those trying to achieve this) instead of an
array and use the reflection information to guarantee type and interface
version safety (auto-filling arguments not provided with null or
whatever policy the client decides on). We have just optimized this
repeated information out of the call format into the initiation
information - but we still have all of the information needed for
argument names and explicit positions in the client.
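The conversion step described above can be sketched roughly like this; the descriptor shape follows the example earlier in this message, while the helper name and null-fill policy are illustrative assumptions:

```javascript
// Descriptor known in advance (fetched once at initiation), as above:
const fooDescriptor = {
  name: "foo",
  return: "void",
  param: [
    { type: "string", name: "name" },
    { type: "string", name: "phone" },
    { type: "string", name: "address" }
  ]
};

// Build the positional wire array from named arguments, using the
// descriptor's ordered "param" list. Arguments the caller did not
// supply are filled with null (or whatever policy the client decides).
function namedToPositional(descriptor, namedArgs) {
  return descriptor.param.map(p =>
    p.name in namedArgs ? namedArgs[p.name] : null
  );
}

// A call with just "name" and "address" becomes the wire array:
const wireParams = namedToPositional(fooDescriptor, {
  name: "Michael",
  address: "Waikikamukau"
});
// wireParams is [ "Michael", null, "Waikikamukau" ]
```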

<< IMHO the named parameter is not such an attractive option in the long
run from a client coding perspective, as it precludes a native binding
that can infer these argument names from a traditional-style JS call or
a call from most other languages (this is, after all, an RPC, so aside
from async callbacks it would be nice to have clients support a regular
function call syntax). In any case, by making a method take a single
object parameter you get the named-argument approach for free anyway
(so in fact array parameters are a natural superset). >>

For languages like Java, where we can't easily get argument names, we
could just name them "arg0", "arg1", etc ... in the parameter
descriptor. The position is implied by the index in the parameter
descriptor array.

This is much rather where I would see the spec going than the currently
quite complex/fragmented approach having to support 3 different styles
of argument passing especially when all the requirements could be
achieved by a simpler and more concise approach (mandating procedure
descriptors containing method arity). This current parameter nonsense is
one of the main reasons why I personally would be hesitant to adopt the
1.1 spec (cost/benefit).

The advantages of the 1.0 array approach augmented with arity
information from the procedure parameter descriptors:

Pros
* supports named and positional arguments
* fully backwardly compatible with 1.0
* smaller on the wire
* simplicity in line with original json-rpc objectives (i.e. we don't
want another SOAP)

Cons
* Can't distinguish between null or no argument provided (how important
is this?)
* Client needs to assemble array if named or explicit position type
requests are done

This approach could even be taken a step further: the procedure
descriptors could also carry a version number for each method, making
rock-solid support for multiple versions of interfaces possible
(assuming the method version is included in the request). Allowing
multiple procedure descriptors with the same name (but differing arity
and/or version) would then go a long way towards solving these problems
in a much more solid way (and create a more official way to support
overloading, which exists in the jabsorb java jsonrpc implementation).

Also, WRT the service procedure description: could we possibly change
the type naming to the standard ECMAScript naming? The service
descriptor is only sent once, so saving 5 or 6 bytes should be less
important than clarity. i.e. could we use instead:

boolean, number, string, array, object and void

Any consensus on taking this kind of approach?

My 2c.

Michael.

Henrik Hjelte

unread,
Nov 6, 2007, 3:58:41 AM11/6/07
to json...@googlegroups.com
I think that the idea by Michael Clark is the best one.
The goal should be to pick something that is the common denominator
for all computer languages.

If we start cherry-picking features, I want the Lisp model, which
combines named parameters, optional parameters (with an is-supplied
feature to discriminate between null and missing values), and a &rest
parameter for what is left.
http://www.lisp.org/HyperSpec/Body/sec_3-4-1.html

Don't want to implement that? Then let's settle for a simple model, as
Michael suggested.

I suggest that the procedure parameter descriptors should be allowed
to carry extra information, like the default and documentation fields
I've added below.

/Henrik Hjelte

{
"name": "foo",
"return": "void",
"param": [
/* arity information used to support named/positional arguments */
{ "type": "string", "name": "name", "default": "No name yet" },
{ "type": "string", "name": "phone", "documentation": "Excluding area code!" },
{ "type": "string", "name": "address" }
],
"documentation": "The foo method does all sorts of things"
}
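A client could apply such defaults when assembling the positional wire array; a minimal sketch (the helper name and null fallback are assumptions, not part of the proposal):

```javascript
// Fill in missing named arguments from the descriptor's "default"
// field, falling back to null when no default is declared.
function applyDefaults(paramDescriptors, namedArgs) {
  return paramDescriptors.map(p => {
    if (p.name in namedArgs) return namedArgs[p.name];
    return "default" in p ? p.default : null;
  });
}

// The parameter descriptors from the example above:
const params = [
  { type: "string", name: "name", default: "No name yet" },
  { type: "string", name: "phone", documentation: "Excluding area code!" },
  { type: "string", name: "address" }
];

// Calling with only an address picks up the declared default for "name":
const args = applyDefaults(params, { address: "Waikikamukau" });
// args is [ "No name yet", null, "Waikikamukau" ]
```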

Matt (MPCM)

unread,
Nov 6, 2007, 7:44:09 AM11/6/07
to JSON-RPC
You made some very good points. But I would not support a required
service description. I see that as a much more heavy-handed approach,
and would not want to see types dragged into the system by default.

My goal was to be able to pass an object as a param, because under the
1.0 spec I end up passing the object that contains the params as a
single-item array. It just seemed a little strange and counter-intuitive
compared to just sending an object. In essence, for such calls I have
a public-facing wrapper taking one param that maps to the real call's
arguments.

I think I support Kris's direction (and others) more than this. At
heart I'm 1.0 purist... but extending the params to an object seemed
to feel right, because 1.0 servers would not be able to process it.
This avoids the confusion of trying to create a meta-system + code of
making it work in 1.0 clients/servers. Even at that, in my view not
all servers/calls have to support both methods. Perhaps I am not
looking at this in strict enough rpc terms in my head...

Going to dwell on this for a bit, perhaps it is best left as an
experiment/extension spec until I have better arguments/examples. I'm
still in support of making sure json-rpc is not bound to a particular
protocol however. I also find the reduced fields concept very
interesting, maybe these are the things that belong in a 2.0 which
breaks backwards compatibility with 1.0.

--
Matt (MPCM)

Kris Zyp

unread,
Nov 6, 2007, 10:15:20 AM11/6/07
to json...@googlegroups.com
On the service descriptor side, I would like to make a proposal for using
the JSON Schema (http://www.json.com/json-schema-proposal/) definition as
the basis for service description/type definitions. The JSON Schema type
definitions are very similar, but by using the JSON Schema, RPC can utilize
an existing spec and share the same typing definition as other JSON data.
The JSON Schema type definitions are also more robust and flexible. Does
that seem reasonable? I will create a proposal for this and get it out to
this list shortly. I am hoping, by leaning on an existing spec, the JSON-RPC
could really remain simple and clean.
Thanks,
Kris

----- Original Message -----
From: "Michael Clark" <mic...@metaparadigm.com>
To: <json...@googlegroups.com>
Sent: Monday, November 05, 2007 11:41 PM
Subject: [json-rpc] Re: JSON-RPC (object specification)



Matt (MPCM)

unread,
Nov 6, 2007, 10:21:42 AM11/6/07
to JSON-RPC
I am in support of JSON Schema being used with JSON-RPC for service
descriptions. Looking forward to the post. :)

Jeffrey Damick

unread,
Nov 6, 2007, 1:00:54 PM11/6/07
to json...@googlegroups.com
Great points.

But can we take a step back...

What problem are we trying to solve? What does 1.0 not do that we
need from 1.1?

It feels like we've gotten so into the weeds on this issue that we've
lost track of the goal.. (What is the goal?)

Weston Ruter

unread,
Nov 6, 2007, 2:20:34 PM11/6/07
to json...@googlegroups.com
The long-standing proposal in 1.1WD for the ability to pass an object (named parameters) as a method's params, in addition to a positional array, has resulted in implementations that accept either. Something to consider in the debate is the waves that have been created by the published working draft. If we decided to scrap named parameters, then these existing 1.1WD implementations would become invalid.

Regarding going with the lowest common denominator for all computer languages: limitations in languages should be made up for by implementations; language limitations shouldn't dictate or limit what JSON-RPC is, I believe. Client and server implementations in any language can be built to handle named parameters; if a language cannot introspect a method's parameter names at runtime (as in Java), then implementations simply have to provide an API for redundantly declaring a method's named parameters and their types. This kind of redundancy is already standard in Java land with Doc Comments in Javadoc. For example, as I illustrated previously with a JavaScript example:

var server = new RPCServer("http://example.com/service/");
function findBusiness(name, location){
    //do logic
}
server.addMethod(findBusiness, ['name', 'location']);
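The idea behind the `addMethod(fn, names)` pattern can be sketched as follows; `RPCServer` in Weston's example is not a real published library, so the dispatcher here is a hypothetical stand-in showing how a declared name list lets a server accept both positional and named params:

```javascript
// Minimal dispatcher sketch: the explicitly declared parameter names
// let the server map a named-params request onto an ordinary
// positional function, even when the host language cannot introspect
// parameter names at runtime.
function makeDispatcher() {
  const methods = {};
  return {
    addMethod(fn, paramNames) {
      methods[fn.name] = { fn, paramNames };
    },
    dispatch(request) {
      const { fn, paramNames } = methods[request.method];
      const args = Array.isArray(request.params)
        ? request.params                                  // positional
        : paramNames.map(n => request.params[n] ?? null); // named
      return fn(...args);
    }
  };
}

const server = makeDispatcher();
function findBusiness(name, location) {
  return name + " near " + location;
}
server.addMethod(findBusiness, ["name", "location"]);

// A named-params request is reordered via the declared name list:
const result = server.dispatch({
  method: "findBusiness",
  params: { location: "Seattle", name: "Coffee" }
});
// result is "Coffee near Seattle"
```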

Jeffrey Damick

unread,
Nov 6, 2007, 2:48:07 PM11/6/07
to json...@googlegroups.com
Right, but my point is: why do we want these features? What does it
buy us? If we can't answer these questions then maybe we don't need
them..

For example, i think there is value in a standard service description
being a standard requirement, then generic tools could easily be
created for testing and generating stub libraries against anybody's
json-rpc X.X service.

Skylos

unread,
Nov 6, 2007, 10:54:06 PM11/6/07
to json...@googlegroups.com
On 11/6/07, Jeffrey Damick <jeffre...@gmail.com> wrote:

Right, but my point is: why do we want these features?  What does it
buy us?  If we can't answer these questions then maybe we don't need
them..

Because named parameter passing is the future.  It makes more sense, it's more compatible, it's the whole reason behind tagged formats in general - like XML or JSON.  Positional parameters are a relic of procedurally based, inflexible programming.  If your application only supports positional - then send positional!  You know your client and server implementations are of necessity going to be pretty closely step-locked for any number of reasons - by using named parameters there is LESS locking.  Documentation and human readability are increased; seeing a debug message like

error in call to method store, params: 555, 123, 4567, 33805, 'jollie', null, 'pahrump', null, 89048, null

is just not as obvious as

error in call to method store, params: area:555, exchange:123, phone:4567, street_num:33805, street_name:'jollie', zip_plus_4:'pahrump', zip:89048

and can never be.  My example is a little silly, but you shouldn't have to memorize or refer to documentation on the particular order of parameters all the time.  Regardless of whether you have a formal service description or not, you SHOULD be able to send data without it being positional, expressing it as what it is - an object - rather than as a single-element list.  Additionally, there is no reason to force fields to appear when it is functionally identical to not have them there at all.  You're just finding excuses to throw exceptions, instead of doing What Is Logical And Expected.

I think this is answer enough.  We do want these features, and because it's our prerogative to improve something even if the old thing works - we're going to make it better.  Less arbitrary.  Simple, Clear, Logical.  After all, when I read, "YOU MUST HAVE PARAMS BE AN ARRAY", my first thought was "Peh, that's stupid! I want to send an object!"

If you're using legacy code that only supports positional parameters, you'll either have to A. send positional parameters or B. construct a layer to map them, or its just not going to work.  Both are outside the scope of the JSON-RPC main specification.  This lightweight, simple protocol is for making calls and sending and receiving simple through complex data - not defining the niggly details of anybody's legacy code abstraction layers.

Skylos


PS: I don't think named parameter passing as it stands now is the ultimate future.  I have a suspicion that something like a registry of information descriptors may be in order sometime - kind of like a global xml schema - the only way I see to have a mass quantity of code blocks that can work with each other cleanly is to have a not-insignificant definition layer for data passing.  Massive, somewhat inefficient, yes - on an overall streamlining level - but as computers get more powerful, streamlining is LESS important, and interoperability is MORE important.  We should step right along with the interoperability - not necessarily to keep alive a dying protocol or format to which we have silly emotional ties - but to encourage this path.  Hierarchical arbitrary data structures for passing are the first step in learning to work in a system like this.  Why else would I have to spend so much time gluing things together with scripts?  Out from the web page, into the script, off to a database, out of the database, process in this portion of a script, play with it in that portion, go through a set of rules, make changes, put more out to a screen, ad infinitum.  If this global interoperability schema were in effect, I wouldn't be needed on this level.  I could merely glom code together in a diagram, and it would work because the data passing format is already defined and implicit in the very system itself, regardless of who makes the code.  But I wax verbose and this is tangential at best.

Michael Clark

unread,
Nov 7, 2007, 3:41:34 AM11/7/07
to json...@googlegroups.com
Matt (MPCM) wrote:
> You made some very good points. But I would support not a required
> service description. I see that as much more heavy handed approach,
> and would not want to see types dragged into the system by default.
>

From my POV, the service description is what I think has the most value
in the 1.1 spec.

> My goal was to be able to pass an object as a param, because under the
> 1.0 spec I end up pass the object that contains the params as a single
> item array. It just seemed a little strange and counter-intuitive
> compared to just sending an object. In essence, for such calls I have
> a public facing wrapper taking 1 param that maps to the real call's
> arguments.
>

As you say, the named parameter method is presently universally supported
in 1.0 - it just requires a [ ] wrapped around the object. Passing an
object in a one-arg array is perhaps a relatively minor trade-off when
you get what you want, plus compatibility to boot.

As an aside, the whole request and response should be wrapped in a [] as
this is safer for clients that use eval (not 100% sure on this but this
security issue has been mentioned many times).

The cometd/bayeux protocol does this also (I believe it is best common
practice):

http://svn.xantus.org/shortbus/trunk/bayeux/protocol.txt

This of course would break 1.0 compat but is logically much simpler to
implement (than mapping object fields to procedure arguments when you
don't have their names).

> I think I support Kris's direction (and others) more than this. At
> heart I'm 1.0 purist... but extending the params to an object seemed
> to feel right, because 1.0 servers would not be able to process it.
> This avoids the confusion of trying to create a meta-system + code of
> making it work in 1.0 clients/servers. Even at that, in my view not
> all servers/calls have to support both methods. Perhaps I am not
> looking at this in strict enough rpc terms in my head...
>
> Going to dwell on this for a bit, perhaps it is best left as an
> experiment/extension spec until I have better arguments/examples. I'm
> still in support of making sure json-rpc is not bound to a particular
> protocol however. I also find the reduced fields concept very
> interesting, maybe these are the things that belong in a 2.0 which
> breaks backwards compatibility with 1.0.
>

I really don't see an easy way to make clients and servers interoperate
if it is optional. E.g. if an interface is being called with named
parameter arguments from client code but the server only supports the
array-arguments approach, the client will then need to translate these
fields into an array - but it doesn't have any information on the order
(without a parameter descriptor). It would also need some sort of
capabilities descriptor exported from the server to know it can do this.

I think it would have to be mandatory in the server if you wanted
interoperation (as well as the array method to access these methods if
this was also to be supported) - or alternatively to provide the service
descriptor information as proposed so the client is able to convert to
an array (needs the order information) if that is the only method
supported by the server.

I'm resigned to the fact that it is in the spec the way it is now, as I
know my feedback is pretty late. FWIW I was just pointing out what I
would like to see (mandatory procedure descriptors) and making the
marshalling format simpler (i.e. removing the need to implement 3 types
of argument handling to make an interoperable implementation).

If interoperability is a burden, then some folk will just not bother to
adopt the spec (or at least not fully conform to it).

In any case, I have given my feedback.

Michael.


Michael Clark

unread,
Nov 7, 2007, 3:48:43 AM11/7/07
to json...@googlegroups.com
Jeffrey Damick wrote:
> Great points.
>
> But can we take a step back...
>
> What problem are we trying to solve? What does 1.0 not do that we
> need from 1.1?
>

From my perspective I see the major benefits in the procedure
descriptors (which can be used to accomplish interface version safety
and define arguments schema as well as support automatic proxy
generation in smart clients and servers).

The rest of the spec provides good clarification of what was unwritten
in 1.0, which is all good! But the parameter marshalling just appears to
me as a burden with no immediate benefit in my use case (although a
procedure descriptor allowing selection of a method with a matching
arity would be great).

> It feels like we've gotten so into the weeds on this issue that we've
> lost track of the goal.. (What is the goal?)
>

* Interoperability vs Flexibility?
* Grab bag vs common denominator?
* Support for smart introspection vs Manually annotating exported
methods (i.e. add_methods)?

Michael Clark

unread,
Nov 7, 2007, 4:02:54 AM11/7/07
to json...@googlegroups.com
Skylos wrote:
> On 11/6/07, *Jeffrey Damick* <jeffre...@gmail.com
> <mailto:jeffre...@gmail.com>> wrote:
>
>
> Right, but my point is: why do we want these features? What does it
> buy us? If we can't answer these questions then maybe we don't need
> them..
>
>
> Because named parameter passing is the future. it makes more sense,
> its more compatible, its the whole reason behind tagged formats in
> general - like XML or JSON. Positional parameters are a relic of
> procedurally based, inflexible programming. If your application only
> supports positional - then send positional! You know your client and
> server implementations are of necessity going to pretty closely
> step-locked for any number of reasons - By using named parameters
> there is LESS locking. Documentation and human readability is
> increased, seeing a debug message like
>
> error in call to method store, params: 555, 123, 4567, 33805,
> 'jollie', null, 'pahrump', null, 89048, null

The 1.0 format allows you to do this with 4 bytes overhead - just pass
an object as a single argument to your method. No need to try to map
this methodology onto existing multiple argument procedures (which is a
can of worms if you look at it closely).
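The point above can be made concrete; the 4 bytes of overhead are the "[ " and " ]" wrapping the object (the method name and fields here are illustrative, borrowed from Skylos's example):

```javascript
// Named data sent the 1.0-compatible way: an object wrapped as the
// single element of the params array.
const compatForm = {
  method: "store",
  params: [ { area: 555, exchange: 123, phone: 4567 } ],
  id: 1
};

// The same data as a bare object param (the 1.1 WD named style):
const namedForm = {
  method: "store",
  params: { area: 555, exchange: 123, phone: 4567 },
  id: 1
};

// A 1.0 server simply unwraps the single-element array to get the
// exact same object a named-params server would receive:
const unwrapped = compatForm.params[0];
```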

> is just not as obvious as
>
> error in call to method store, params: area:555, exchange:123,
> phone:4567, street_num:33805, street_name:'jollie',
> zip_plus_4:'pahrump', zip:89048

I agree with this totally. Although it shouldn't be forced on everyone.
Consider a common denominator which supports both methods - rather than
adding complexity to the spec.

> and can never be. My example is a little silly, but you shouldn't
> have to memorize or refer to documentation on the particular order of
> parameters all the time. Regardless if you have a formal service
> description or not, you SHOULD be able to send data without it being
> positional, expressing it as what it is - an object - rather than as a
> single-element list. Additionally, there is no reason to force fields
> to appear when it is functionally identical to not have them there at
> all. You're just finding excuses to throw exceptions, instead of
> doing What Is Logical And Expected.
>
> I think this is answer enough. We do want these features, and because
> its our perogative to improve something even if the old thing works -
> we're going to make it better. Less arbitrary. Simple, Clear,
> Logical. After all, when I read, "YOU MUST HAVE PARAMS BE AN ARRAY",
> my first thought was "Peh, thats stupid! I want to send an object!"

Who said that? I was only suggesting the superset approach, which
intrinsically supports both methods with 1.0 compatibility and only 4
bytes more on the wire (including whitespace).

> If you're using legacy code that only supports positional parameters,
> you'll either have to A. send positional parameters or B. construct a
> layer to map them, or its just not going to work. Both are outside
> the scope of the JSON-RPC main specification. This lightweight,
> simple protocol is for making calls and sending and receiving simple
> through complex data - not defining the niggly details of anybody's
> legacy code abstraction layers.

Somehow I don't see this as simplicity. Us mere legacy people have to be
penalised.

Anyway i'll go back to my legacy code ;) I have voiced my opinion which
is why we are all here after all.

~mc

Skylos

unread,
Nov 7, 2007, 9:58:00 AM11/7/07
to json...@googlegroups.com
On 11/7/07, Michael Clark <mic...@metaparadigm.com> wrote:

Skylos wrote:
> On 11/6/07, *Jeffrey Damick* <jeffre...@gmail.com
> <mailto:jeffre...@gmail.com>> wrote:
>
>
>     Right, but my point is: why do we want these features?  What does it
>     buy us?  If we can't answer these questions then maybe we don't need
>     them..
>
>
> Because named parameter passing is the future.  it makes more sense,
> its more compatible, its the whole reason behind tagged formats in
> general - like XML or JSON.  Positional parameters are a relic of
> procedurally based, inflexible programming.  If your application only
> supports positional - then send positional!  You know your client and
> server implementations are of necessity going to pretty closely
> step-locked for any number of reasons - By using named parameters
> there is LESS locking.  Documentation and human readability is
> increased, seeing a debug message like
>
> error in call to method store, params: 555, 123, 4567, 33805,
> 'jollie', null, 'pahrump', null, 89048, null

> The 1.0 format allows you to do this with 4 bytes overhead - just pass
> an object as a single argument to your method. No need to try to map
> this methodology onto existing multiple argument procedures (which is a
> can of worms if you look at it closely).

I disagree.  There is a semantic difference.

The 1.0 format says, "call this method with one parameter which is an object".

the 1.1 object option format says, "call this method with this series of named parameters"

I think that the method's input should not necessarily be 'one parameter that is an object'.  I think it should be 'each of the name tagged values sent'.  By forcing the array syntax, you're either introducing magical behavior - a single element array whose element is an object magically turns into "call method with these named parameters" (thus masking the option of ACTUALLY doing so) - or you're making it impossible to actually call a method with named parameters, instead forcing all methods that will be used with the JSON-RPC with named parameters to be called with a single object that has to be internally dereferenced into the parameter set.  (as opposed to implicitly in the call)

Whether the intention is 'call with a single parameter which is an object' or 'call with a series of named parameters' should be distinct from the client's side!  Surely you can map it the other way in your server to make it consistent for your own local purposes as desired - while still allowing the protocol to be distinct remotely.  The protocol SHOULD express a distinct difference between these concepts.

> is just not as obvious as
>
> error in call to method store, params: area:555, exchange:123,
> phone:4567, street_num:33805, street_name:'jollie',
> zip_plus_4:'pahrump', zip:89048

> I agree with this totally. Although it shouldn't be forced on everyone.
> Consider a common denominator which supports both methods - rather than
> adding complexity to the spec.

I think the magical behavior is a bad idea.  But then I see it as a conceptual call semantic, thus not common.

> I think this is answer enough.  We do want these features, and because
> its our perogative to improve something even if the old thing works -
> we're going to make it better.  Less arbitrary.  Simple, Clear,
> Logical.  After all, when I read, "YOU MUST HAVE PARAMS BE AN ARRAY",
> my first thought was "Peh, thats stupid! I want to send an object!"

> Who said that? I was only suggesting the superset approach which
> intrinsically supports both methods with 1.0 compatibility and only 4
> bytes more on the wire (including whitespace).

That is semantically different without defining magical behavior, which is not desired.  Allowing objects does not prohibit arrays; I don't see how it's not compatible.  We're not making your code stop working.

> If you're using legacy code that only supports positional parameters,
> you'll either have to A. send positional parameters or B. construct a
> layer to map them, or its just not going to work.  Both are outside
> the scope of the JSON-RPC main specification.  This lightweight,
> simple protocol is for making calls and sending and receiving simple
> through complex data - not defining the niggly details of anybody's
> legacy code abstraction layers.

> Somehow I don't see this as simplicity. Us mere legacy people have to be
> penalised.

You're ALREADY penalized, by having to fight with positionally parametered functions.  The fact that you aren't allowed to use object notation to call your positionally called functions should be beneath relevancy - you wouldn't be doing it anyway, so the fact that the 1.1 spec allows people to make such a call is dramatically irrelevant.  There is no map, therefore an object input is meaningless.  Your consumer code, as I mentioned before, is going to have to lockstep with your vendor code on some level anyway.  You're stuck with arrays, great.  That way at least you have an earlier point at which to say, "oh, we can't handle this call, it's not compatible with this library" - or say, "for the purposes of this server, we translate an object into a single-element array with an object in it, because that's how we defined our api"

> Anyway i'll go back to my legacy code ;) I have voiced my opinion which
> is why we are all here after all.

Yes.  :)

Skylos
 

Matt (MPCM)

unread,
Nov 7, 2007, 11:15:21 AM11/7/07
to JSON-RPC
>Right, but my point is: why do we want these features? What does it buy us? If we can't answer these questions then maybe we don't need them..

The reason I wanted to include an object as a valid param is because
that is how I end up using the system under 1.0. Over 85% of my
system's calls use only single-item arrays. Around 97% of my calls by
volume are these single-item arrays.

This results in me having an additional wrapping layer of calls to get
to the real functions. Our systems do not expose direct calls, often a
single interface may map to different internal calls. Again this is
fine (and desired) in my case.

The reason I felt objects need to be a valid param is because more and
more systems expect the name mapping to come with the data. This moves
away from the traditional positional rpc role. The power of a
technology like JSON is that the data is descriptive, not some
complex non-descriptive nest.

If I call do_something(data) on more than one system, they may do
different things. But with only positional arguments they may
understand my data to mean different things if the client is not
careful. This always requires more client-side knowledge of server
expectations. Self-describing data is a much more reasonable approach
in a world where a client may be talking with many servers about the
same data without wanting/having to know what each server expects, or
passing each just what it needs in the format that server needs. This
is not a traditional approach, but it is a reality of where some
systems are and where many are heading.

It abstracts a layer above traditional rpc systems into the realm of
intent and data from the client's view. If the server can map onto a 1.0
setup, that is great. It shouldn't have to, however, and thus can
choose not to accept a param object (like 1.0, for just certain calls).

Service descriptions come into this a lot, and I like mine very
loosely defined. If others would like to do named-to-positional magic
with that data, and there is a need, I support that. But mapping one
onto the other anywhere but in the server is not going to be clean.
Personally I want my client kept as simple as possible, and my server
as loose as possible.

It is likely I'll put the object question to a vote during the next
week or so, to see if there is enough interest to continue these
discussions or move it into the 1.1 spec, or perhaps I should push it
into an extension spec. Since this would make 1.0 a subset, perhaps
these should be json-rpc 2.0 topics...

Matt (MPCM)

unread,
Nov 8, 2007, 2:17:15 PM11/8/07
to JSON-RPC
My post seems to have kept everyone quite for over 24 hours, either I
said something smart or something really dumb... ;-)

Matt (MPCM)

unread,
Nov 8, 2007, 2:17:58 PM11/8/07
to JSON-RPC
Quiet even.... needs coffee.

Kris Zyp

unread,
Nov 8, 2007, 2:27:36 PM11/8/07
to json...@googlegroups.com
I think you may have convinced me :).
Michael's comments about using service descriptions to maintain call robustness made me think that the parameter-robustness issue can be adequately addressed for positional parameters through service descriptions. I think positional parameters will, and should, be the form of choice for introspective/reflective frameworks:
  • Conceptually closest to how calls work in almost all languages (I don't see any evidence that futuristic/modern languages are moving away from positional arguments)
  • Most compact format
  • Greatest flexibility for variable-argument calls
  • Simplest translation from callers and to callees
  • Protection against parameter changes can still be mitigated with service descriptors
However, not everyone is using introspective frameworks (like Matt), and in those cases it is perhaps conceptually more relevant to use objects as parameters. And still, if the object parameter is passed to an introspective framework, at least there is hope that it might be able to handle it.
I don't know, though; to tell you the truth, I am still undecided. Are we going to vote?
BTW, does anybody have any feedback on my service descriptor proposal?
Kris

Roland Koebler

unread,
Nov 24, 2007, 7:06:48 AM11/24/07
to json...@googlegroups.com
hi,

> What if we parameter names were provided through a separate property like
> this:
> {"id":"call1",
> "method":"divide",
> "params":[1,2],
> "names":["dividend","divisor"]}
no, this is bad. see below.

> This is nice because it would still work with 1.0 RPC servers [...], but
> smarter RPC 1.1 servers (like PHP and JS servers) could still use parameter
> names to do more robust name matching
It definitely would *NOT* work with 1.0 RPC servers.
It hides the errors and returns wrong results, which is **much** worse than
returning an error and no result.


Look at this simple example:

with a 1.1 server:
-> {"id":1, "method":"divide", "params":[1,2], "names":["dividend","divisor"] }
<- {"id":1, "result": 0.5}
-> {"id":2, "method":"divide", "params":[2,1], "names":["divisor","dividend"] }
<- {"id":2, "result": 0.5}
that's ok.

now, with a 1.0 server:
-> {"id":1, "method":"divide", "params":[1,2], "names":["dividend","divisor"] }
<- {"id":1, "result": 0.5}
still ok.

-> {"id":2, "method":"divide", "params":[2,1], "names":["divisor","dividend"] }
<- {"id":2, "result": 2}
wrong!

so, that would be the wrong way.
(because it *silently* returns *wrong results*.)
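Roland's failure mode can be reproduced in a few lines of Python (a hypothetical naive 1.0 dispatcher; the function and helper names are made up, not from any spec):

```python
def divide(dividend, divisor):
    return dividend / divisor

def naive_1_0_dispatch(request):
    # A 1.0-style server: applies params positionally and silently
    # ignores the unknown "names" member.
    return {"id": request["id"], "result": divide(*request["params"])}

# The client reordered the values and relies on "names" to fix them up:
request = {"id": 2, "method": "divide",
           "params": [2, 1], "names": ["divisor", "dividend"]}

response = naive_1_0_dispatch(request)
print(response)  # {'id': 2, 'result': 2.0} -- wrong; the client meant 1/2
```

No error is raised, which is exactly the silent-wrong-result problem described above.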

regards,
Roland

Roland Koebler

unread,
Nov 24, 2007, 8:19:13 AM11/24/07
to json...@googlegroups.com
hi Michael,

> From my understanding the need for named and/or explicitly positioned
> arguments is to support easier versioning of interfaces.

partly, yes. other reasons are e.g. readability/verbosity,
self-explaining data and simpler usage.


> I can safely send it in a different order to the client (or not send the
> new parameter at all, whatever the case may be).
>
> {
> "method": "updateUser",
> "params": { "name": "Michael", "address": "Waikikamukau", "phone":
> "555-1234" }
> }
>
> This looks simple but in practice as others are noting - it is not
> really supported by all languages (due to lack of information of
> parameter names at runtime) so is perhaps is not the best candidate for
> a basic format for procedure parameters in jsonrpc.
>
> But by making procedure parameter descriptors mandatory, the required
> information for named and/or positional arguments could be known a
> priori

yes, that's right.

> e.g. procedure information known in advance:
>
> {
> "name": "foo"
> "return": "void"
> "param": [
> /* arity information used to supporting named/positional arguments */
> { "type": "string", "name": "name" }
> { "type": "string", "name": "phone" }
> { "type": "string", "name": "address" }
> ]
> }
>
> Then a call in a JS client with just "name" and "address" named
> arguments could be converted on the wire to:
>
> {
> "method": "foo",
> "params": [ "Michael", null, "Waikikamukau" ]
> }

but why should this be done in the *client*? With this information, the
server could equally make the conversion (in probably all languages).
And in my opinion, it's *much* better to do this conversion in the
server. so that:

- the client stays simple
- the server can transform the named parameters to positional parameters.
(if it needs to.)
or -- if its language supports named arguments -- can simply use the
named arguments directly.

this would have the same "Pros" you listed in your mail, but eliminate
the "Cons".
(except that it would be bigger on the wire, but that's ok for most because
you get self-described data. and you can still use pure positional parameters
where the size on the wire is a major issue.)
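Roland's server-side conversion can be sketched as follows (a minimal Python sketch; the method table, parameter-order map, and dispatch helper are hypothetical names, not from any spec):

```python
# Explicit parameter descriptions, as would come from a service description.
PARAM_ORDER = {"divide": ["dividend", "divisor"]}

def divide(dividend, divisor):
    return dividend / divisor

METHODS = {"divide": divide}

def dispatch(request):
    method = METHODS[request["method"]]
    params = request["params"]
    if isinstance(params, dict):
        # Reorder named params into the positions the function expects.
        params = [params[name] for name in PARAM_ORDER[request["method"]]]
    return {"id": request["id"], "result": method(*params)}

# Both forms now mean the same call and return a result of 0.5:
print(dispatch({"id": 1, "method": "divide", "params": [1, 2]}))
print(dispatch({"id": 2, "method": "divide",
                "params": {"divisor": 2, "dividend": 1}}))
```

The client stays simple either way; only the server needs the mapping table.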


regards,
Roland

Matt (MPCM)

unread,
Nov 25, 2007, 11:21:40 AM11/25/07
to JSON-RPC
I had proposed using an object in addition to an array as the param
value in the last proposed spec.

However, this clearly breaks the way many languages work, and without
introspection/remapping it will not even be possible in many
languages unless hand-coded or done through eval-type calls. For these
reasons and because I don't desire to break json-rpc or make it
unusable I am going to withdraw my proposal as such. If others still
believe strongly in adding object params to json-rpc, I'm happy to
keep the discussions going as I am a big supporter of it conceptually.

I like sending objects, and because of that I started a similar, less
RPC-bound spec. It is close to json-rpc, but is incompatible because
of a reduction in required fields, loose typing, multicalls at the
base level, and a redefinition of the error object structure to be
more clear. I'm calling it json-oms (object messaging specification),
and there is a google group for it.

I'm not advocating a fork or a split, as the goals are very different.
json-oms is very much a playground to explore these object-mapping
ideas and perhaps bring conclusions back to json-rpc and see if any of
them remain relevant in that context at that time. We can take a vote
on discussing objects further in json-rpc, but I won't be the one
calling for that vote at this point.

Roland Koebler

unread,
Nov 25, 2007, 6:37:07 PM11/25/07
to json...@googlegroups.com
hi Matt, hi all,

> I had proposed using an object in addition to an array as the param
> value in the last proposed spec.

yes, and I think this is good.

> However, this clearly breaks the way many languages work and without
> introspection/remapping this will not even be possible in many
> languages unless hand coded or through eval type calls.

I don't think so.

Yes, many languages don't support named arguments directly, so they
have to map the named arguments to positional ones. So if you want
to use named arguments with such languages, it should be possible
to write a (small) wrapper (or wrapper-generator) which does this.
It shouldn't be necessary to write the mapping for every function
yourself.

And yes, many languages don't support introspection. There, the
information about the function has to be made explicit. But you
already do have this explicit information, as soon as you want
to have service-descriptions (like system.describe).

And last: if you don't want to use named parameters, you don't have to.


SO: I would strongly suggest keeping named parameters (or an "object"
as param), because many of us want it.

The only question is, *how* to implement them:
- use "param" with an object or array (the "cleanest"/most obvious way)
- use "param" with an array and e.g. "kwparam" with an object
- ...
This may affect how easily it could be implemented in different
languages.

> For these
> reasons and because I don't desire to break json-rpc or make it
> unusable I am going to withdraw my proposal as such. If others still
> believe strongly in adding object params to json-rpc, I'm happy to
> keep the discussions going as I am a big supporter of it conceptually.

I am, too. And there are many others who like/need/want named
parameters.

Without named parameters, I could as well use json-rpc 1.0...

regards,
Roland

Kris Zyp

unread,
Nov 25, 2007, 11:30:14 PM11/25/07
to json...@googlegroups.com

And yes, many languages don't support introspection. There, the
information about the function has to be made explicit.
Introspection is supported in most languages used for the web, but introspection does not necessarily provide information about the names of parameters. Source inspection is the only guaranteed way to determine the parameter names in most languages. Introspection often just provides the number of parameters, and their types by position, because calls are almost exclusively made by position internally. Parameter names are not necessary for calls, and are therefore often omitted in introspection schemes. And source inspection can be counted on only in languages that do not compile to an intermediate representation (bytecode).
 
But you
already do have this explicit information, as soon as you want
to have service-descriptions (like system.describe).
 
Presumably, this would usually come from introspection. When introspection can't determine names, they may be omitted (to allow only positional parameters).


And last: if you don't want to use named parameters, you don't have to.
 
 
Consequently I wouldn't be upset if they were added. I wouldn't use them, I would stick to positional parameters for the sake of interoperability, compactness, and simplicity, but I would probably support object parameters.
 

- use "param" with an object or array (the "cleanest"/most obvious way)
This seems like the one to use if we do it.

Weston Ruter

unread,
Nov 26, 2007, 12:50:44 AM11/26/07
to json...@googlegroups.com
I too strongly feel that we should allow passing of named parameters as a params object. The key benefits to allowing for this have been reiterated many times, but two that stand out the most to me are:
  1. Passing parameters in an arbitrary order.
  2. Passing an arbitrary subset of parameters, that is, instructing the server to use default values for any omitted arguments.
These are simply not possible in JSON-RPC 1.0, nor can they be implemented in a backwards-compatible fashion using positional parameters. But such capabilities should (in my opinion) be provided for in JSON-RPC 1.1, as they greatly improve the usability of the protocol. We should evolve JSON-RPC now, before widespread adoption, so that such an improvement will not be difficult in the future (besides, 1.1WD implementations already support passing a params object).

If an author wants to maintain backwards compatibility for 1.0, they simply can write their methods in a way that will accommodate 1.0 libraries.

Here's a vote for adoption of 1.1WD-style params object!

Roland Koebler

unread,
Nov 26, 2007, 5:01:03 AM11/26/07
to json...@googlegroups.com
On Sun, Nov 25, 2007 at 08:30:14PM -0800, Kris Zyp wrote:
> Source inspection is the only guaranteed way to determine the
> parameter names in most languages. [...]

> And source inspection availability can be counted on only in
> languages that do not provide intermediate representation (bytecode).
for those languages, you then have to write the information in an
explicit way (so that you don't need introspection), or let e.g. a
wrapper-generator look into the source code to generate the
"named-to-positional-wrapper".
(maybe I'll soon write such a named-to-positional-wrapper for
demonstration purposes.)
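A minimal sketch of such a wrapper in Python, whose introspection does expose parameter names (the helper names here are hypothetical, for demonstration only):

```python
import inspect

def named_to_positional(func):
    # Read the parameter order from the function's signature.
    order = list(inspect.signature(func).parameters)
    def wrapper(params):
        if isinstance(params, dict):
            # Map named params onto the positions the function expects.
            return func(*[params[name] for name in order])
        return func(*params)
    return wrapper

def divide(dividend, divisor):
    return dividend / divisor

call = named_to_positional(divide)
print(call({"divisor": 2, "dividend": 1}))  # 0.5
print(call([1, 2]))                         # 0.5
```

In languages without name introspection, `order` would have to come from an explicit description instead, as described above.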

> > But you
> > already do have this explicit information, as soon as you want
> > to have service-descriptions (like system.describe).
> Presumably, this would usually come from the introspection.

Ok, if you don't already have this explicit information for
service descriptions, you at least need it to document your
functions. And if you document them in a machine-readable way,
you again have the needed explicit information.

> Consequently I wouldn't be upset if they were added. I wouldn't use them, I
> would stick to positional parameters for the sake of interoperability,
> compactness, and simplicity, but I would probably support object parameters.

ok :).

regards,
Roland

Weston Ruter

unread,
Dec 9, 2007, 1:51:46 AM12/9/07
to JSON-RPC
I've been working on improving my JSON-RPC server implementation [1]
in PHP to take advantage of the PHP 5 Reflection API which enables
source introspection (I had been re-parsing the source files with the
PHP tokenizer). It turns out that Java also has introspection
capabilities in java.lang.reflect.

With these libraries, server libraries can easily provide the details
needed for system.describe.

[1] http://code.google.com/p/json-xml-rpc/
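In a language whose reflection exposes parameter names, such a description can be generated along these lines (a Python sketch; the field names are illustrative, not taken from the 1.1 WD system.describe format):

```python
import inspect

def updateUser(name, address, phone="none"):
    pass

def describe_procedure(func):
    # Build a system.describe-style entry from the function's signature.
    sig = inspect.signature(func)
    return {
        "name": func.__name__,
        "params": [{"name": p.name} for p in sig.parameters.values()],
    }

print(describe_procedure(updateUser))
# {'name': 'updateUser', 'params': [{'name': 'name'}, {'name': 'address'}, {'name': 'phone'}]}
```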

On Nov 26, 2:01 am, Roland Koebler <r.koeb...@yahoo.de> wrote:
> On Sun, Nov 25, 2007 at 08:30:14PM -0800, Kris Zyp wrote:
> > Source inspection is the only guaranteed way to determine the
> > parameter names in most languages. [...]
> > And source inspection availability can be counted on only in
> > languages that do not provide intermediate representation (bytecode).
>
> for those languages, you then have to write the information in an
> explicit way (so that you don't need introspection), or let e.g. a

Kris Zyp

unread,
Dec 9, 2007, 11:35:12 AM12/9/07
to json...@googlegroups.com

> in PHP to take advantage of the PHP 5 Reflection API which enables
> source introspection (I had been re-parsing the source files with the
> PHP tokenizer). It also turns out that Java also has introspection
> capabilities in java.lang.reflect.

Our discussions regarding Java introspection have been based around the
java.lang.reflect API; however, one important point we have been making is
that parameter names are not stored in Java bytecode (unless debug is on),
because they are not necessary for making calls, and they are therefore not
accessible from this API. java.lang.reflect can provide information about the
position and type of parameters, but not the parameter names. So, yes, you
are correct: java.lang.reflect definitely can, and generally should, be used
to create service descriptions, complete with type information, but it can
only do so on its own for positional parameters. It requires extra
information to do named parameters (either through manual configuration,
debug-enabled bytecode inspection, or access to the source, which, unlike in
PHP, is not necessarily available).
Kris
