q-connection "splits" remote objects (functions on server as rpc, strings/ints sent to client by value)?


Darren Cruse

Jan 18, 2014, 10:44:55 PM
to q-con...@googlegroups.com
I'm liking q-connection a lot, but this one aspect is throwing me - I'm wondering is there some rule of thumb or design rationale that could help me understand why:

If a remoted object has e.g. a string or integer property as well as a function property, the function property ends up like RPC, i.e. the client gets a proxy function that will invoke the function on the server, but the integer/string property is sent to the client as an actual value that the client winds up with locally.

This is confusing to me, i.e. I can't just think of this as a "remote object" or a "local object"; it actually winds up split, with the functions "remote" and the values "local".

I'm not sure right now how to keep that straight in my mind or how to explain it to others. I'm leaning toward a rule of thumb that I will never mix function and simple value properties.

Instead I will only use functions on my remote objects, possibly with getter/setter style functions to manipulate the remote "values" when I have them, so that I can simply and clearly treat these as fully "remote objects".
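e.g. a fully "remote" counter along those lines might keep its state in a closure so that only functions are ever exposed. This is just an untested sketch of the idea (the method names are mine, and I haven't run this against q-connection):

```javascript
// Untested sketch: keep state in a closure so the remoted object
// exposes *only* functions. Nothing here is a plain data property,
// so the whole object can be treated as "remote".
function makeCounter() {
  var count = 0; // lives only on the server, inside this closure

  return {
    getCount: function () { return count; },
    setCount: function (n) { count = n; },
    add: function (x) { count += x; return count; }
  };
}

// Server side would do: Connection(port, makeCounter());
// Client side would use: counter.invoke('add', 1).then(...),
// counter.invoke('getCount').then(...), etc.
```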

Or else I will *only* use non-function properties on the objects so that I can think of those as "json style" objects that are being fully sent across the wire i.e. without any "remote proxying".

Do people agree I'm understanding this behavior correctly?

If so does anybody else have "rules of thumb" like above to keep things clear?

Otherwise, there's no way to control this, is there?

(I would really like to have objects with both functions and string/integer properties where both the functions and the properties "live" on the server so to speak where if I call the function it runs on the server and if I change a property value it changes on the server too).

Thanks,

Darren






Darren Cruse

Jan 19, 2014, 12:27:45 PM
to q-con...@googlegroups.com
I made a simplified equivalent of the code that led me to ask my question; I'll show it below.

It helped me realize that what I was asking about doesn't always happen, though I'm not clear on why the behavior differs.

I'll show one example that doesn't do what confused me, and another that does.

So the following example does change the "count" property server side:

server.js:

var Connection = require("q-connection");
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({port: 3000});

var counter = {
 count: 0,

 add: function (x) {
   counter.count += x;
   console.log('in add: count is now ', counter.count);
   return counter.count;
 }
}

wss.on('connection', function (ws) {

 var port = {
   postMessage: function (message) {
     ws.send(message);
   },
   onmessage: null // gets filled in by Q-Connection
 };

 ws.on("message", function (data) {
   port.onmessage({data: data});
 });

 var remote = Connection(port, counter);
});

client.js:

var Connection = require("q-connection");

var WebSocket = require('ws');
var ws = new WebSocket('ws://localhost:3000/path');

ws.on('open', function() {

 var counter = Connection(ws);

 console.log('bumping count by one');
 counter.invoke('add', 1).then(function (count) {
   console.log("count is now ", count);
   console.log('resetting count to 0');
   return counter.set('count', 0);
 }).then(function () {
   counter.get('count').then(function (count) {
     console.log('count is now ' + count);
     console.log('bumping count by one (again)');
     counter.invoke('add', 1).then(function (count) {
       console.log("count is now at", count);
     });
   });
 });
});

This outputs as I'd expect (note that after resetting the count to zero, the next 'bump' gives 1):

bumping count by one
count is now  1
resetting count to 0
count is now 0
bumping count by one (again)
count is now at 1

But if I change the code a little so that "counter" is a property of the remoted object instead of being the remoted object itself, then "count" seems to be sent by value to the client, such that "counter.set('count', 0)" only changes the count on the client side. This causes its value to deviate from the server-side "count" (i.e. as seen by the server-side "add" function).

Here's the version with the changes that demonstrate this:

server2.js:

var Connection = require("q-connection");
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({port: 3000});

var example = {
 counter: {
   count: 0,

   add: function (x) {
     example.counter.count += x;
     console.log('in add: count is now ', example.counter.count);
     return example.counter.count;
   }
 }
}

wss.on('connection', function (ws) {

 var port = {
   postMessage: function (message) {
     ws.send(message);
   },
   onmessage: null // gets filled in by Q-Connection
 };

 ws.on("message", function (data) {
   port.onmessage({data: data});
 });

 var remote = Connection(port, example);
});

client2.js:

var Connection = require("q-connection");

var WebSocket = require('ws');
var ws = new WebSocket('ws://localhost:3000/path');

ws.on('open', function() {

 var example = Connection(ws);
 var counter = example.get('counter');

 console.log('bumping count by one');
 counter.invoke('add', 1).then(function (count) {
   console.log("count is now ", count);
   console.log('resetting count to 0');
   return counter.set('count', 0);
 }).then(function () {
   counter.get('count').then(function (count) {
     console.log('count is now ' + count);
     console.log('bumping count by one (again)');
     counter.invoke('add', 1).then(function (count) {
       console.log("count is now at", count);
     });
   });
 });
});

And here's the output; note that the final count of 2 from the second call to "add" demonstrates that the server-side "count" seen by add is now different from the client-side "count" seen by counter.get/counter.set:

bumping count by one
count is now  1
resetting count to 0
count is now 0
bumping count by one (again)
count is now at 2

Would others expect this behavior?  

It seems that when I do example.get('counter') I get a promise for "counter" which behaves differently (sending "count" to the client by value) than when "counter" was directly remoted via Connection(port, counter).

Is that to be expected?  (I didn't expect it, but I'm new to q-connection, so maybe I'm misunderstanding something...)

Thanks (sorry for the length but hopefully it's clear),

Darren

Kris Kowal

Jan 20, 2014, 1:16:51 PM
to Q Continuum
The strategy for partitioning local and remote has not settled.

In the first iteration, there was Q.def that marked an object as local-only, and none of its properties would be copied. Without Q.def, any object was treated as copyable to the far side of the connection. I renamed this to Q.master.

This turns out to be a hindrance, since often you’re working with remote APIs that were designed for local use. So, we flipped the rules. In the current state of affairs, all objects have a duality: they can be used as either remote or local. If you use "then", you’ll get a copy of whatever portion of the object was serializable. This is not quite satisfying.
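A rough local analogy for the copy half of that duality (the actual wire format differs; this only illustrates which portion of an object survives copying):

```javascript
// The serializable portion is the data; function properties have no
// serialization, so a copy simply loses them, much as JSON.stringify does.
var counter = {
  count: 0,
  add: function (x) { return (counter.count += x); }
};

var copy = JSON.parse(JSON.stringify(counter));
console.log(copy);            // { count: 0 }
console.log(typeof copy.add); // 'undefined' -- the function didn't survive
```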

Mark Miller is proposing a Q.passByCopy to explicitly mark objects that can be serialized. Value types, like number and boolean, and also arrays, would be assumed to pass by copy.

I have an idle notion of using push() and pull() methods. push() would wrap a local object with a promise that would marshall the value to the other side, and pull() would ask a remote object to marshall itself back.

Kris Kowal

Mark S. Miller

Jan 20, 2014, 3:09:30 PM
to q-con...@googlegroups.com
On Mon, Jan 20, 2014 at 10:16 AM, Kris Kowal <kris....@cixar.com> wrote:
The strategy for partitioning local and remote has not settled.

In the first iteration, there was Q.def that marked an object as local-only, and none of its properties would be copied. Without Q.def, any object was treated as copyable to the far side of the connection. I renamed this to Q.master.

This turns out to be a hindrance, since often you’re working with remote APIs that were designed for local use. So, we flipped the rules. In the current state of affairs, all objects have a duality: they can be used as either remote or local. If you use "then", you’ll get a copy of whatever portion of the object was serializable. This is not quite satisfying.

Mark Miller is proposing a Q.passByCopy to explicitly mark objects that can be serialized. Value types, like number and boolean, and also arrays, would be assumed to pass by copy.

Yes. For .def I am instead having it mean "make a defensible object", which freezes the properties of the object, and of all objects transitively reachable from there by property traversal. (I am also not having Q provide .def.) Having arrays be default pass-by-copy while non-array objects default to pass-by-remote-reference is a compromise, but it works out well. See <http://research.google.com/pubs/pub40673.html> for an example. Only one object needed to be pass-by-copy: the contract-argument object made at the top of page 16.

The documentation in that paper fails to mention that Q.passByCopy also implies a shallow freeze of the object itself, as otherwise a shallow copy of it would make little sense.

IIRC, the only implicit pass-by-copying of an array in this code is line 32 of figure 3, which in retrospect would anyway have been clearer as:

    return Q.passByCopy(tokens);

so I could probably be argued out of this compromise on arrays.

 

I have an idle notion of using push() and pull() methods. push() would wrap a local object with a promise that would marshall the value to the other side, and pull() would ask a remote object to marshall itself back.

Kris Kowal

--
You received this message because you are subscribed to the Google Groups "Q Continuum (JavaScript)" group.
To unsubscribe from this group and stop receiving emails from it, send an email to q-continuum...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.



--
    Cheers,
    --MarkM

Kris Kowal

Jan 20, 2014, 4:26:56 PM
to Q Continuum
I might be compelled to remove the exception for arrays as well to avoid transferring large arrays over the wire. Consider:

remote.get("fs")
.invoke("list", "/")
.invoke("forEach", function (entry, index) {
    // ...
}).then(function () {
    // complete
})

This would be "isomorphic" across remote arrays, streams, and streams from iterators or generator iterators. The promise returned by the forEach callback may resolve later to provide a congestion-control signal, and the promise returned by forEach provides a completion signal.

In this world, everything except strings, numbers, booleans, null, and undefined would be presumed unserializable. Calling "then" on a promise for a remote object would request notification when the *remote* promise becomes fulfilled or rejected. The object would not be serialized, and the input promise for the remote object would be passed to fulfilled(remote) instead of a local copy of remote.

Currently, there is a special case for functions. A promise for a remote function gets deserialized as a function that will call the remote function asynchronously, returning a promise for the remote result. This is handy for callbacks as in the above example.
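A hedged sketch of that special case, with made-up names (this is not q-connection's internal code, just an illustration): the deserialized function is a proxy that forwards the call and always hands back a promise for the result:

```javascript
// Illustrative only: a stand-in for how a function property can cross the
// wire as a proxy. "sendToRemote" here is a hypothetical transport hook.
function proxyForRemoteFunction(sendToRemote) {
  return function (...args) {
    // The caller always gets a promise, never the direct return value.
    return Promise.resolve().then(() => sendToRemote(args));
  };
}

// Pretend the "remote" side holds this function:
const remoteAdd = (args) => args[0] + args[1];

const add = proxyForRemoteFunction(remoteAdd);
const result = add(2, 3); // a promise for 5, not 5 itself
result.then((v) => console.log(v));
```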

From here, I would like to conduct an experiment with push() and pull() in a future release. push() would turn a local promise for an unserializable object (default) into a local promise for a serializable object, such that any remote proxy for this promise would be fulfilled with the marshaled value. pull() would turn a remote promise for an unserializable object into a local promise for the deserialized, fulfilled value of the remote.

// to get a local copy of a file system stat record
remote.get("fs")
.invoke("stat", "/foo.txt")
.pull().then(function (stat) {
    console.log(stat); // {atime, mtime, &c}
})

// to give a record to a remote key value store
remote.invoke("set", "key", Q.push({a: 10, b: 20}))

Kris Kowal

Tom Van Cutsem

Jan 21, 2014, 6:53:49 AM
to q-con...@googlegroups.com, kris....@cixar.com
Hi,

MarkM asked me to join in, so I'll do that :)

First, I share the confusion of the OP regarding the current semantics of having "split-brain" objects whose data props are serialized but function props are proxied.

Second, I agree that arrays should probably also be pass-by-proxy rather than pass-by-copy by default. This just follows the principle of least surprise, as arrays are objects.
Of course, JSON sets a different precedent, but then JSON only deals with data, not with distributed objects.

I find the serialization semantics confusing enough that I tried to list them, including a comparison with two other well-known JavaScript serialization semantics: JSON and W3C structured clone (i.e. the serialization mechanism used when passing data between web workers):


I gave edit rights to all, so please feel free to update or extend this. I hope it can help inform the discussion.

Cheers,
Tom


Darren Cruse

Jan 21, 2014, 10:24:53 AM
to q-con...@googlegroups.com, kris....@cixar.com
Thanks a lot Kris and Mark I got busy yesterday and just now saw your replies.  

fwiw I was investigating using q-connection in the implementation of a node module I'd been toying with for a while (using other tools).

Where the module would try to build on CommonJS as the basis for a somewhat actor-like system of "remote modules" (well, more "active object"-like really, with the RPC feeling of the exported functions).

I had recently fallen in love with examples I'd done using this with the generator/yield style of remote calls.

Reading about E last week, it struck me that yield plays a role similar to E's "<-" syntax when using generators (yes? :).

As I learn more about q-connection, I'm asking myself whether the module I've been working on adds value, or whether I should drop what I'm doing completely and just use q-connection directly (I went through the same thoughts over dnode, btw :).

I guess my comment at the moment is just that, for my module, I was dreaming of something that would feel really simple and non-academic for your average application developer.

Maybe I can share more of what I've been doing at some point and get your feedback.

Mark Miller

Jan 21, 2014, 10:59:04 AM
to q-con...@googlegroups.com, Kris Kowal
Please do share. We'd love to hear more. FWIW, E's "<-" is more like the infix "!" proposed for ES7 at <http://wiki.ecmascript.org/doku.php?id=strawman:concurrency> and <http://research.google.com/pubs/pub40673.html>. The combination of generators and promises does have a similar role, as you noticed, but is altogether quite different. In a distributed context, perhaps the biggest difference is that generators encourage patterns that make promise pipelining impossible.
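For instance, here is a toy simulation of that point (this is not q-connection's actual wire protocol; the "remoteRef" shape is made up). With the .get/.invoke chaining style, both messages can be dispatched in the same turn, before any result comes back; with `yield remote.get('fs')`, the generator must wait a full round trip before the second message can even be composed:

```javascript
// Toy model of a remote reference that records every message it dispatches.
const sent = [];

function remoteRef(valueP) {
  return {
    get(name) {
      sent.push('get:' + name);
      return remoteRef(valueP.then((v) => v[name]));
    },
    invoke(name, ...args) {
      sent.push('invoke:' + name);
      return valueP.then((v) => v[name](...args));
    }
  };
}

const remote = remoteRef(Promise.resolve({
  fs: { stat: (path) => ({ path: path }) }
}));

// Pipelined style: both messages are queued synchronously, in one turn,
// before either promise has resolved.
remote.get('fs').invoke('stat', '/foo.txt');
console.log(sent); // [ 'get:fs', 'invoke:stat' ]
```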





--
Text by me above is hereby placed in the public domain

  Cheers,
  --MarkM

Mark Miller

Jan 21, 2014, 12:37:07 PM
to q-con...@googlegroups.com, Kris Kowal
I've added three columns whose labels begin with "Proposed", listing the features I expect to propose in this discussion.






Darren Cruse

Jan 22, 2014, 9:35:21 AM
to q-con...@googlegroups.com, Kris Kowal
Please do share. We'd love to hear more. 

So at Mark's encouragement: the following is a bit rambling, but it's a little more about me and what I've been experimenting with, in the context of our discussion here.  Still no code, just talk, for now...

I'm still going through the links you guys gave and getting my head fully around the things you said.

Your comments were really interesting to me, partly because I'm honestly still very new to ideas like promise pipelining and some of what's possible with the q-connection approach (it's funny, some of my learning about q-connection came not from the README but from single-stepping my code! if there's more docs/examples/related things I should read, please let me know!)

This stuff has also got me thinking about why I hadn't thought as much as you guys have about some of these more challenging questions, like serializing functions.

I think it's because I started from a "document oriented" RESTful services background/bias.

Esp. about a year ago I came off a somewhat large-ish JSON services heavy project done with server side javascript (using ringo on the JVM not node).  

The thought of assigning URIs to CommonJS modules, and exposing their exported functions RPC style, seemed similar but cleaner than the design we'd used on the above project.  I liked the thought of eliminating some of the explicit URL routing (this would be kind of a "Naked Objects" notion).

So my initial thoughts were around commonjs "remote modules" being analogous to (gasp the SOAP-y notion of) "web services" grouping and exporting functions as the "endpoints".

What strikes me looking at q-connection now is some of the limitations/constraints assumed by the above ideas.

There's not really a strong object-oriented focus in the web-service-focused systems I've been working on; they tend more toward statelessness and JSON messages (as "value objects").

And my initial thoughts were around *functions* as the unit the developer thinks of as running on machine A or machine B: functions that take simple string/integer parameters (the kind of things easily passed in a URI) and return JSON (or HTML).

I guess this background explains why I honestly hadn't thought much about e.g. higher-order functions, i.e. passing functions to or returning functions from the remote objects.

And since CommonJS modules are effectively singletons, even the idea of exposing remote factory functions returning *objects* wasn't much in my thinking.

Reading about E last week, I could see how such functions returning objects are a step in the direction of an actor-like system, i.e. dealing with *many* dynamically created remote objects.  I've read about Erlang and Akka and I'm very interested in all these things (heck, even mobile code/mobile agents), but it's RESTful services I've worked with most (and, ack cough gasp, CORBA before that :).

So in a nutshell, I'd mostly been thinking about JSON as the "messages"/"payloads" sent between these "remote modules" deployed to different machines. That seems both simpler and less powerful than q-connection, but it also avoided some of these more challenging design questions.

Going a little further, regarding performance: maybe in contrast to promise pipelining, for my pet project I'd thought a lot about the need for caching, and that easy, configurable client-side caching of RPC results would be critical (and would be the analog of REST's notion of transferring "representations").  I got heavily into NetKernel a few years ago, which pursues (broadly speaking) similar goals of bringing "REST inside your system" (and intelligent caching of both final and intermediate results).

Also vaguely related to promise pipelining: I've implemented an idea I think I first saw in the (ill-fated) Jaxer project, that a single file could hold functions that get deployed onto multiple machines (another project I'm aware of doing this is Opa).  So I'd allowed you to annotate whether an exposed "remote module" function should "runat" the client or the server, so a single remote module file could have functions that ran on both client and server.

It feels to me there could be cases where this ability to easily throw a chunk of code into a function that runs in your remote "peer" would address a similar problem to the ones helped by promise pipelining.  Having the whole function run on the other side seems less sophisticated than promise pipelining, but otoh the promise pipelining examples you guys gave above have to work with the promises using the .get/.invoke style API (maybe Harmony direct proxies could help?)

As I look at these ideas more, it's this question of power versus simplicity and ease of understanding that keeps running through my mind.

And whether there's a sweet spot, between what I'd been doing with my "remote modules" and what q-connection provides, that could be really simple for an average developer to explain and understand.

I guess that's in the spirit of what I said before: maybe I won't give up on what I've been doing, but will continue pursuing the idea of using q-connection in its implementation.

Darren Cruse

Jan 26, 2014, 10:44:42 AM
to q-con...@googlegroups.com, kris....@cixar.com
On Monday, January 20, 2014 12:16:51 PM UTC-6, Kris Kowal wrote:
The strategy for partitioning local and remote has not settled.

In the first iteration, there was Q.def that marked an object as local-only, and none of its properties would be copied. Without Q.def, any object was treated as copyable to the far side of the connection. I renamed this to Q.master.

fyi I did try Q.master and that did work for me.
If anybody looks at the code I posted above, the change was just that the remoted object in server2.js became:

var counter = {
  count: 0,

  add: function (x) {
    counter.count += x;
    console.log('in add: count is now ', counter.count);
    return counter.count;
  }
};

var example = {
  counter: Q.master(counter)
}

But note Kris said not to use this for real, since Q.master has been deprecated while he works out a new/final solution (I see the Q.master function is already removed in Q's master branch).

Kris Kowal

Jan 26, 2014, 11:11:39 AM
to q-con...@googlegroups.com, kris....@cixar.com
You can continue using Q.master in Q 1.x.x. It will continue being supported. The master branch of Q will not land until version 2, which you will have the option of migrating to, but I don't think it will be release-worthy for quite some time.