> The projects are Python-based, although I'd
> like something that is easy to integrate with other languages,
> particularly for web-based apps - so either JavaScript or
> ActionScript (Flash).
json-rpc 2.0 is relatively new, so many languages probably don't have
an implementation yet. but it's very simple to write a json-rpc 2.0
implementation yourself.
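for example, a rough (untested) sketch of the client-side part in python,
using simplejson, could look like this:
----
import simplejson

def make_request(method, params, id):
    """Serialize a JSON-RPC 2.0 request."""
    return simplejson.dumps({"jsonrpc": "2.0", "method": method,
                             "params": params, "id": id})

def parse_response(response_str):
    """Deserialize a JSON-RPC 2.0 response and return its result."""
    response = simplejson.loads(response_str)
    if "error" in response:
        raise RuntimeError(response["error"])
    return response["result"]

#example: make_request("sum", {"a": 1, "b": 2}, 1)
----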
> So I was disappointed to see that json-rpc 2.0 is proposed to be
> client-server and not peer-to-peer. Is that because the actual
> implementations that have been done are all (or at least
> predominately) client-server based and not peer-to-peer, and are thus
> not fully compliant with the 1.0 spec?
no, that's not the reason.
the reason is that "client-server" is simpler and a "subset" of
peer-to-peer:
- "peer-to-peer" always requires all peers to be both server and client.
often, that's not necessary -- so it's better to *allow* server-only and
client-only peers.
- but "peer-to-peer" of course still is possible: simply run a
json-rpc-server *and* a json-rpc-client on the peers...
regards,
Roland
> - but "peer-to-peer" of course still is possible: simply run a
> json-rpc-server *and* a json-rpc-client on the peers...
I don't see how this works in practice, particularly when the peer
initiating the connection may be behind a NAT firewall.
For example, I might want to do something like this:
--> {"jsonrpc": "2.0", "method": "get_primes", "params": { "calc_to":
100000, "callback": "add_prime"}, "id": 1}
<-- {"jsonrpc": "2.0", "method": "add_prime", "params": [2], "id":
1001 }
--> {"jsonrpc": "2.0", "result": "okay", "id": 1001}
<-- {"jsonrpc": "2.0", "method": "add_prime", "params": [3], "id":
1002 }
--> {"jsonrpc": "2.0", "result": "okay", "id": 1002}
...
<-- {"jsonrpc": "2.0", "method": "add_prime", "params": [99991], "id":
10592 }
--> {"jsonrpc": "2.0", "result": "okay", "id": 10592}
<-- {"jsonrpc": "2.0", "result": "complete", "id": 1}
I don't see how this can be done by running both a client and server
at both ends - at least not in a way that works across NAT firewalls
and where the callbacks are clearly 'within' the initial method call
and could not easily be spoofed by a third party. So either I am
missing something obvious, or I don't really understand what you mean
by 'simply run a json-rpc-server *and* a json-rpc-client on the
peers'.
On Thu, Sep 18, 2008 at 05:40:40PM -0700, Rasjid wrote:
> > - but "peer-to-peer" of course still is possible: simply run a
> > json-rpc-server *and* a json-rpc-client on the peers...
>
> I don't see how this works in practice, particularly when the peer
> initiating the connection may be behind a NAT firewall.
json-rpc only defines how the messages should look -- it of course
does not tell you how to open a connection (since it's transport-
independent). it also doesn't forbid sending several requests over
one connection.
so, if you have a NAT firewall, the connection must be established from
the appropriate peer, and this connection is then used by both peers to
send requests and responses.
but:
> For example, I might want to do something like this:
>
> --> {"jsonrpc": "2.0", "method": "get_primes", "params": { "calc_to": 100000, "callback": "add_prime"}, "id": 1}
>
> <-- {"jsonrpc": "2.0", "method": "add_prime", "params": [2], "id": 1001 }
that shouldn't be a problem. from the transport's view (which is
relevant for e.g. the NAT firewall), this looks *exactly* the same
as a "normal" json-rpc-request with a response:
--> {"jsonrpc": "2.0", "method": "sum", "params": { "a": 1, "b": "2"}, "id": 1}
<-- {"jsonrpc": "2.0", "result": "3" }
> I don't see how this can be done by running both a client and server
> at both ends - at least not in a way that works across NAT firewalls
> and where the callbacks are clearly 'within' the initial method call
> and could not easily be spoofed by a third party. So either I am
> missing something obvious, or I don't really understand what you mean
> by 'simply run a json-rpc-server *and* a json-rpc-client on the
> peers'.
the only thing you need is a json-rpc-implementation (or: a
json-rpc-transport) which supports running both a server and a client
*on the same "endpoint"*, so that it receives the rpc-message and then
--depending on the message-- gives it to the server-part or to the
client-part.
(and the transport-implementation (e.g. sockets, http(s), ...) must of
course also allow sending more than one request per established
connection -- but that's always the case if you want to communicate
bi-directionally with a peer behind a NAT firewall. and it's completely
independent of the RPC-protocol used.)
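a rough (untested) sketch of such a dispatching endpoint in python: a
received message with a "method"-member goes to the server-part,
everything else (a "result" or "error") goes to the client-part:
----
import simplejson

def dispatch(message_str, server_part, client_part):
    """Give one received JSON-RPC message to the server- or client-part.

    server_part(msg) -- handles incoming requests/notifications
    client_part(msg) -- handles responses to requests we sent earlier
    """
    msg = simplejson.loads(message_str)
    if "method" in msg:     #request or notification --> server-part
        return server_part(msg)
    else:                   #response (result or error) --> client-part
        return client_part(msg)
----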
regards,
Roland
The thing that bugs me is that the json parsers on both ends are not
stream-based, and they would have a hard time understanding when, on an
open socket, a message has been fully received so that it can be
processed. Any hint about how to implement that?
Maybe using the long-polling technique the server (oops, php peer) can
close the connection as soon as it has sent every set of
requests+responses it has ready, and the client (oops, js peer) can use
this event as signal that the received payload is complete for the
moment and start processing it? It sounds very hackish as a base for a
working communication protocol...
Bye
Gaetano
I also wish there was better streaming support. You can do some workarounds:
if you are streaming an array of messages, and the server sends whole items
in between pauses, it is viable to look at the first and last characters to
see if the array is incomplete and manipulate the characters in order to
parse. Of course you can still get progress events for partial items, but
usually you can just ignore such unparseable events, as you can't usually do
much with partial items anyway.
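Something rough like this (untested Python sketch; it assumes the items are
objects and that the buffer usually ends right after a complete item):
----
import simplejson

def parse_partial_array(buf):
    """Best-effort parse of a possibly incomplete JSON array of messages."""
    buf = buf.strip()
    if not buf.startswith('['):
        raise ValueError("not a JSON array")
    if not buf.endswith(']'):
        #array still open: drop a trailing comma and close it artificially
        buf = buf.rstrip().rstrip(',') + ']'
    try:
        return simplejson.loads(buf)
    except ValueError:
        return None     #last item is still partial -- ignore, wait for more data
----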
> Maybe using the long-polling technique the server (oops, php peer) can
> close the connection as soon as it has sent every set of
> requests+responses it has ready, and the client (oops, js peer) can use
> this event as signal that the received payload is complete for the moment
> and start processing it? It sounds very hackish as a base for a working
> communication protocol...
Yeah, usually we just do long-polling anyway.
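The loop on the polling side can stay pretty simple. A rough (untested)
Python sketch, assuming the peer answers each poll with a JSON array of
messages and then closes the connection:
----
import urllib2
import simplejson

def poll_loop(poll_url, handle_message):
    """Long-polling loop: the closed connection marks the end of the payload."""
    while True:
        response = urllib2.urlopen(poll_url)    #the peer holds this request open
        payload = response.read()               #complete once the peer closes
        if payload.strip():
            for message in simplejson.loads(payload):
                handle_message(message)
----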
Kris
On Fri, Sep 19, 2008 at 12:12:17PM +0200, Gaetano Giunta wrote:
> The thing that bugs me is that the json parsers on both ends are not
> stream-based, and they would have a hard time understanding when, on an
> open socket, a message has been fully received so that it can be
> processed. Any hint about how to implement that?
I think there are 2 different ways to detect when a json-rpc-message
is "fully received":
a) send 1 message per connection
   here, detecting when a message ends is simple.
   and there are even some ways to use this with bidirectional
   communication between peers behind NAT (which was the original
   problem), e.g.:
   http://www.bford.info/pub/net/p2pnat/
   http://linide.sourceforge.net/nat-traverse/
b) send several messages in a single connection
   here, the end of a message has to be detected somehow. the following
   ideas came to my mind:
   - use a stream-based JSON parser
   - use a "json-splitter" or a "simplified streaming json parser" which
     splits a long string into several JSON objects.
     (a la: "count { and }, and detect { and } inside of strings")
   - use a transport which tells when a message is "fully received"
     (e.g. the http-approach: send the (content-)length of the json-rpc-
     message before the json-rpc-string)
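for the last idea, a rough (untested) sketch of such a framing over a plain
socket (send the length of the json-rpc-string first, then the string itself):
----
import simplejson

def send_message(sock, obj):
    """Send one JSON-RPC message, prefixed with its length and a newline."""
    data = simplejson.dumps(obj)
    sock.sendall("%d\n%s" % (len(data), data))

def recv_message(sock):
    """Receive exactly one length-prefixed JSON-RPC message."""
    length = ""
    while True:                         #read the length up to the first newline
        c = sock.recv(1)
        if not c:
            raise EOFError("connection closed")
        if c == "\n":
            break
        length += c
    length = int(length)
    data = ""
    while len(data) < length:           #read exactly `length` bytes of payload
        chunk = sock.recv(length - len(data))
        if not chunk:
            raise EOFError("connection closed")
        data += chunk
    return simplejson.loads(data)
----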
(an other "workaround" would be to add an optional "length"-parameter
to json-rpc and define that it must be the 1st member in the json-rpc-
string; but this has serveral disadvantages compared with the
solutions above: it still requires some kind of "simplified streaming
json-rpc-parser", in would only work for json-rpc (and not for other
json objects), and it would somehow break with json (and require
special json-serializers) since the order of the members in a
json-object normally is irrelevant.)
but this problem is independent of json-rpc, and probably is exactly the
same with every other rpc-protocol. does anyone know how they solve
it?
regards,
Roland
> here, the end of a message has to be detected somehow. the following
> ideas came to my mind:
>
> - use a stream-based JSON parser
> - use a "json-splitter" or a "simplified streaming json parser" which
> splits a long string into several JSON objects.
> (a la: "count { and }, and detect { and } inside of strings")
I liked my idea of a "json-splitter" ;), and so I tried to write one, which
turned out to be quite simple. (due to the really nice and clean JSON. :))
with this, it's easy to separate several json-objects/arrays, which
are e.g. received over a streaming connection, and so solve the
"several streaming json-objects"-problem. the resulting strings can
then be used with any (non-streaming) json-parser.
note that this works not only for json-rpc, but for json in general.
here's the simple python-code. and since python-code often looks like
executable pseudo code, it should be readable by all programmers, even
if you don't know python ;). and it should be easily translatable to
any other language.
but one warning: this is really experimental code. *I* think it should
work, but I'm not sure that this is really the case. if you find
anything (any bug, any JSON-string, etc.) which might break it: please
tell me.
----
def json_split(s):
    """Split a stream of JSON-objects/arrays into separate strings.

    Every resulting string then contains exactly 1 JSON-array/object.
    This is especially useful if you send JSON-objects/arrays over
    a streaming connection, and want to separate the JSON-objects/arrays
    before deserialization.

    :Parameters: s: (unicode-) JSON-string
    :Returns:    (LIST_OF_JSON_STRINGS, REMAINING_STRING)
    :Note:       no error-handling is included
    :Example:
        verbose example::

            >>> import simplejson
            >>> test = {u'a': u'b', u'1': 2, u'c': {u'1': [1, 2], u'3': [{u'd': [u'}']}], u'2': {u'3': 4}}, u'xy': u'x ] } " [ { y'}
            >>> json = simplejson.dumps(test)
            >>> stream = json*5
            >>> print stream[0:120]     #show first 120 characters of stream
            {"a": "b", "1": 2, "c": {"1": [1, 2], "3": [{"d": ["}"]}], "2": {"3": 4}}, "xy": "x ] } \\" [ { y"}{"a": "b", "1": 2, "c"
            >>> list = json_split(stream)
            >>> result = simplejson.loads(list[0][0])
            >>> result == test
            True

        the same in much shorter notation::

            >>> import simplejson
            >>> test = {u'a': u'b', u'1': 2, u'c': {u'1': [1, 2], u'3': [{u'd': [u'}']}], u'2': {u'3': 4}}, u'xy': u'x ] } " [ { y'}
            >>> result = simplejson.loads(json_split(simplejson.dumps(test)*5)[0][0])
            >>> result == test
            True

    :Author: Roland Koebler (r.ko...@yahoo.de)
    :Version: 2008-09-20-pre
    """
    json_strings = []
    state = 0   #0=search start,  1=array, 11=string inside array
                #                 2=object, 12=string inside object
    b = 0       #begin of string-part
    i = 0       #current position in string
    cnt = 0
    while True:
        if i >= len(s):
            break
        if 0 == state:          #find start of json-array or object
            #TODO: what to do with s[b:i]
            if s[i] == '[':
                state = 1
                b = i
                cnt = 1
            elif s[i] == '{':
                state = 2
                b = i
                cnt = 1
        elif state > 10:        #inside string
            if s[i] == '\\':        # skip char after \
                i += 1
            elif s[i] == '"':       # end of string
                state -= 10
        elif 1 == state:        #inside array
            if s[i] == '"':
                state += 10
            elif s[i] == '[':
                cnt += 1
            elif s[i] == ']':
                cnt -= 1
                if cnt == 0:
                    json_strings.append(s[b:i+1])
                    state = 0
                    b = i+1
        elif 2 == state:        #inside object
            if s[i] == '"':
                state += 10
            elif s[i] == '{':
                cnt += 1
            elif s[i] == '}':
                cnt -= 1
                if cnt == 0:
                    json_strings.append(s[b:i+1])
                    state = 0
                    b = i+1
        i += 1
    return (json_strings, s[b:])
----
regards,
Roland
On Monday, September 22, 2008, Stephen McKamey wrote:
> But I fear using this as a clear sign of message separation would make
> the parser too brittle - unless both ends of the communication are
> controlled, I'd rather have a parser that keeps track of the current
> nesting level and flags any pairs of }{ and ][ as errors...
that's almost exactly what my json_split() code (that I mailed on Saturday)
does: it detects whether the json-string contains an array or an object,
counts the nesting level, and skips strings. and if you replace the line
"#TODO: what to do with s[b:i]" in my code with something which checks
that s[b:i] is empty or contains only whitespace, and otherwise raises
an error, that should do what you need...
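i.e. (untested) replace the TODO-line with something like:
----
            if s[b:i].strip():      #only whitespace is allowed between objects
                raise ValueError("unexpected data between JSON objects: %r"
                                 % s[b:i])
----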
(although, if you use python, probably a "yield"-approach may be
better -- but this won't be a big change.)
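for example, a rough (untested) sketch of such a generator, built on top of
json_split() from my last mail; the chunks could come from any streaming
connection:
----
def iter_json(chunks):
    """Yield complete JSON strings from an iterable of received chunks.

    uses json_split() on a growing buffer and keeps the incomplete
    rest for the next chunk.
    """
    buf = u""
    for chunk in chunks:
        buf += chunk
        json_strings, buf = json_split(buf)
        for json_string in json_strings:
            yield json_string
----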
regards,
Roland