Single request, multiple responses - legal?


John

Dec 23, 2011, 2:27:53 PM
to JSON-RPC
The JSON-RPC 2.0 Specification does not, IMO, forbid multiple
responses (each with the same id) being returned in response to a
single request, and I believe this could be useful for some remote
APIs. For example, when the client invokes some process on the server
and desires that progress state transitions be returned to it, each
related to the original request.
Of course, this could be handled differently: a request could include an
application-level id in the params, and the server could then send a series
of notifications back to the requester, each carrying that same
application-level id in its params. But this seems overly complex when there
is already a JSON-RPC id, itching to be used...
I believe this has been brought up a couple of times, but not so
directly. Thoughts?
Regards,
John

Matt (MPCM)

Jan 1, 2012, 9:07:37 PM
to JSON-RPC
Itching to be misused... IMO. ;)

The spec does not forbid a lot of things, but that does not make them
a good idea. Most existing clients will treat responses to requests
that have already been answered as bad noise, or possibly worse.
Rather than bending the spec, I would make it part of your rpc setup
and calling convention. You are probably going to have to control
both the client and the server anyway...

What you are describing would work best when you have both sides
functioning as client and server, in duplex. This also assumes a
transport that allows either bidirectional communication or, at a
minimum, some means of contacting the originating client in reverse
(a simple one-way HTTP transport is not going to do this for you).

A as client places a request to B
B as server takes it, understands from context that it should return data beyond the initial request
B as server returns the initial response
*Now*
B as client sends a notification to A
A as server takes it and does whatever you need to do.
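
As a rough sketch of what that exchange might look like on the wire (the method and param names here are made up for illustration, not defined by any spec; the progress messages are notifications, so A never has to answer them):

--> (A to B) {"jsonrpc": "2.0", "method": "startJob", "params": {"count": 100}, "id": 1}
<-- (B to A) {"jsonrpc": "2.0", "result": {"job": 7}, "id": 1}
<-- (B to A, now acting as client) {"jsonrpc": "2.0", "method": "progress", "params": {"job": 7, "done": 50}}
<-- (B to A) {"jsonrpc": "2.0", "method": "progress", "params": {"job": 7, "done": 100}}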

Another consideration: if I were doing this all from the client, which
is how I'd approach it, I'd return a job id with the result of the
initial call. Expose a call through the server that can check on the
status, or possibly link it into something more like pub/sub.
It really depends on what you are trying to accomplish.
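
And a similarly rough sketch of the job-id/polling flavor (again, the method names are only illustrative):

--> {"jsonrpc": "2.0", "method": "startJob", "params": {"count": 100}, "id": 1}
<-- {"jsonrpc": "2.0", "result": {"job": 7}, "id": 1}
--> {"jsonrpc": "2.0", "method": "checkJob", "params": {"job": 7}, "id": 2}
<-- {"jsonrpc": "2.0", "result": {"done": false, "progress": 40}, "id": 2}
--> {"jsonrpc": "2.0", "method": "checkJob", "params": {"job": 7}, "id": 3}
<-- {"jsonrpc": "2.0", "result": {"done": true}, "id": 3}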

What is your planned use case?

--
Matt (MPCM)

Andrew Barnert

Aug 14, 2012, 6:40:05 PM
to json...@googlegroups.com
I didn't find this in my search before posting my proposal on "pipe methods" to explicitly allow this.

Anyway, I assumed it was not legal in JSON-RPC as-is—and it's certainly not handled by any of the libraries I looked at—so I wrote a proposal to add it to the standard, and wrote and modified some libraries to handle it.

It's actually very useful, for many of the same cases where iterators/generators/lazy lists/etc. are useful in programming, or where SAX-style parsing is useful instead of DOM-style, and so on.

The toy example I used in my proposal was generating primes. If I tell the service to generate a large number of primes, I want to start getting results back (and displaying them to the user, and allowing the user to cancel) right away, not hours later. For a more realistic example, I've got a service that looks up cover art for an album, using a variety of different services, and starts feeding back URLs as they come in, so you don't have to wait for all of the services to return/fail/timeout before you start downloading the images and displaying them to the user.

There are ways around this without multiple responses—as suggested in your email, and in fact in the original question you were answering. But they're more complicated, more verbose, and more stateful. As the OP says, "this seems overly complex when there is already a JSON-RPC id, itching to be used..."

The explicit "get next result" method (your second suggestion) requires the server to manage jobs for each client. Which means it's not just stateful, it pretty much requires a session management infrastructure. Also, the "job id" you suggest is duplicating the exact same functionality that the "id" already serves, but at the application level instead of the protocol level.

The duplex method is less problematic, although it does still require some kind of application-level duplication of the "id" scheme (unless each client is only ever going to make a single call, which seems implausible). But it requires bidirectional communication. I'm not sure if that's even allowed in JSON-RPC 2.0, but it's definitely not required, and it's hard to implement, and most libraries don't handle it, and if you're using HTTP it may not even be possible.

In fact, a big part of the reason I implemented "pipe methods" is that it was much easier to add to existing libraries than bidirectionality.

Andrew Barnert

Aug 14, 2012, 6:43:10 PM
to json...@googlegroups.com
I forgot to answer the "Itching to be misused" bit.

In my proposal, I have a few suggestions for extensions to the extension that I think take care of this. It's not hard to make an explicit distinction between single-response and multi-response methods, or to add a special error code to signal "done sending multiple responses", etc. See the proposal for details.
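
For example, the end-of-pipe signal might look something like this on the wire (the exact error code and message are just an illustration, not necessarily what the proposal specifies; -32099 is in the range the spec reserves for implementation-defined server errors):

<-- {"jsonrpc": "2.0", "result": 5, "id": 1}
<-- {"jsonrpc": "2.0", "result": 7, "id": 1}
<-- {"jsonrpc": "2.0", "error": {"code": -32099, "message": "end of pipe"}, "id": 1}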

But I don't think any of these are actually necessary. I'm using multi-response "pipe" methods today, and my team hasn't run into any temptation to misuse them. As simple as it is to make a "pipe" call, it's always simpler to make a single-response method call, so if you don't want to get responses as they come in and deal with them asynchronously, you don't bother doing it.



Darren

Aug 15, 2012, 3:50:51 AM
to json...@googlegroups.com
I actually use something extremely similar in my applications to allow in-band data streams.
To follow on from your primes example:

--> {"jsonrpc": "2.0", "method": "primes", "params": {"upto": 7}, "id": 1}
<-- {"jsonrpc": "2.0", "result": 2, "stream": true, "id": 1}
<-- {"jsonrpc": "2.0", "result": 3, "id": 1}
<-- {"jsonrpc": "2.0", "result": 5, "id": 1}
<-- {"jsonrpc": "2.0", "result": 7, "stream": false, "id": 1}

The protocol itself distinguishes the start and end of a 'stream', but otherwise works in exactly the same way. Perhaps there is also a use for a way to kill a 'stream' by its id within the protocol itself.
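
For example, a client might kill a stream with something like a cancel notification (the 'cancel' method name is purely illustrative, nothing like this is standardized):

--> {"jsonrpc": "2.0", "method": "primes", "params": {"upto": 1000000}, "id": 2}
<-- {"jsonrpc": "2.0", "result": 2, "stream": true, "id": 2}
<-- {"jsonrpc": "2.0", "result": 3, "id": 2}
--> {"jsonrpc": "2.0", "method": "cancel", "params": {"id": 2}}
<-- {"jsonrpc": "2.0", "result": null, "stream": false, "id": 2}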

I can only imagine this being fully available over full-duplex connections, but it could easily be embedded within protocols such as HTTP where state is already implemented.

As more applications make use of pushing data rather than simply pulling it (which is all the current protocol defines), it may be necessary to actually define a method for this before people start shipping many different hacked-on implementations.

Darren



Andrew Barnert

Aug 15, 2012, 2:47:48 PM
to json...@googlegroups.com
You should take a look at my more detailed proposal (currently the thread right below this one) instead of this quick summary. I suggested a variety of different things that could be added to the basic idea, including explicit signaling for the end of a pipe/stream, but not explicit signaling for the start of one. (I'll get to that in a second.)

Anyway, there are clearly multiple projects that are extending JSON-RPC to make use of this, and we've all thought of slightly different ways of doing it, which seems to back up your point that it's worth coming up with a standard way of doing it. Even just trivially changing the standard to explicitly allow multiple responses/errors in the protocol, leaving signaling up to the application level, opens the door for people to start using it, and proposing different extensions to add protocol-level signaling.

I'm not sure why you think this could only be fully available to full duplex connections. Over a request-response protocol, the stream of JSON responses is just a single HTTP response. As far as HTTP is concerned, sending 10 JSON objects back isn't much different from sending back a single JSON object that happens to be an array of 10 other objects, but as far as JSON-RPC is concerned you can parse the former as they come in while the latter isn't valid until you've got all 10. (In fact, a big part of the reason I built this was to deal with half-duplex HTTP, because the first alternative people always think of is "send the results back as notifications", which works fine in early prototyping over full-duplex TCP…)
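
To make that concrete, here's a minimal Python sketch of a client reading one of these streams over plain HTTP, assuming the server writes one JSON response object per line of the body (the line-delimited framing, the URL, and the "stream" flag borrowed from your example are assumptions for illustration, not part of JSON-RPC itself):

import json
import http.client

conn = http.client.HTTPConnection("example.com")
conn.request("POST", "/rpc",
             body=json.dumps({"jsonrpc": "2.0", "method": "primes",
                              "params": {"upto": 7}, "id": 1}),
             headers={"Content-Type": "application/json"})
resp = conn.getresponse()

# Handle each response object as soon as its line arrives, instead of waiting
# for the whole body the way a single array-valued result would require.
for line in resp:
    if not line.strip():
        continue
    msg = json.loads(line)
    print(msg.get("result"))
    if msg.get("stream") is False:   # end-of-stream marker from your example
        break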

Anyway, back to your idea for signaling the start. I didn't even consider this, because my assumption was that the client side (both library and application) has to know in advance that it's expecting multiple responses, but on further thought, I was wrong. First, if your client model is, say, callbacks being triggered out of an event loop, it's easy to handle multiple callbacks. But even for a model where, say, RPCs look like synchronous functions that block your greenlet, it's not hard to deal with this: the client can guess, but it doesn't have to be right. If you call "pipe" on a non-pipe method, it's a generator function that only generates one value; if you call "method" on a pipe method, it's a regular function that waits until the stream is done and returns multiple values. (In fact, without me even thinking about it, my bjsonrpc fork already does the first half of this.) So, this does seem like a perfectly reasonable extension.
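
And a Python sketch of the generator-style client model I'm describing (the framing here, an iterable of JSON-encoded response lines, is an assumption for the sketch; a real library would be dispatching responses by id off its transport):

import json

def pipe_results(response_lines, request_id):
    # Yield each result for one pipe call as it arrives.
    # response_lines can be any iterable of JSON-encoded response lines,
    # e.g. an HTTP response body or a socket wrapped with makefile().
    for line in response_lines:
        if not line.strip():
            continue
        msg = json.loads(line)
        if msg.get("id") != request_id:
            continue                    # not ours; a real client would dispatch it
        yield msg["result"]
        if msg.get("stream") is False:  # last response of the stream
            return

Calling this on an ordinary single-response method just yields one value and stops, which is exactly the "pipe on a non-pipe method" degenerate case described above.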