The strategy for partitioning local and remote has not settled. In the first iteration, there was Q.def, which marked an object as local-only so that none of its properties would be copied. Without Q.def, any object was treated as copyable to the far side of the connection. I renamed this to Q.master. This turns out to be a hindrance, since you are often working with remote APIs that were designed for local use.

So we flipped the rules. In the current state of affairs, all objects have a duality: they can be used as either remote or local. If you use "then", you'll get a copy of whatever portion of the object was serializable. This is not quite satisfying. Mark Miller is proposing a Q.passByCopy to explicitly mark objects that can be serialized. Value types, like numbers and booleans, and also arrays, would be assumed to pass by copy.
I have an idle notion of using push() and pull() methods. push() would wrap a local object with a promise that would marshal the value to the other side, and pull() would ask a remote object to marshal itself back.

Kris Kowal
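To make that idle notion concrete, here is a rough sketch of what such methods might look like. Neither push() nor pull() exists in Q; the JSON round-trip below is only a stand-in for whatever marshalling the connection layer would really do.

var Q = require("q");

// Hypothetical sketch only: push() and pull() are not part of Q's API.
function push(local) {
    // Wrap a local object in a promise carrying a by-copy snapshot
    // of its serializable portion to the far side of the connection.
    return Q(JSON.parse(JSON.stringify(local)));
}

function pull(remotePromise) {
    // Ask a remote object to marshal itself back as a local copy.
    return Q(remotePromise).then(function (value) {
        return JSON.parse(JSON.stringify(value));
    });
}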
Please do share. We'd love to hear more.
You guys' comments were really interesting to me, partly because I'm honestly still very new to ideas like promise pipelining and some of what's possible with the q-connection approach. (It's funny, some of my learning about q-connection came not from the README but from single-stepping my code, btw! If there's more docs/examples/related things I should read, please let me know!)
This stuff has also got me thinking about why I hadn't thought as much as you guys have about some of these more challenging questions, like serializing functions.
I think it's because I started from a "document oriented" RESTful services background/bias.
In particular, about a year ago I came off a somewhat largish, JSON-services-heavy project done with server-side JavaScript (using Ringo on the JVM, not Node).
The thought of assigning URIs to CommonJS modules, and exposing their exported functions RPC-style, seemed similar to, but cleaner than, the design we'd used on the above project. I liked the thought of eliminating some of the explicit URL routing (this would be kind of a "Naked Objects" notion).
So my initial thoughts were around CommonJS "remote modules" being analogous to (gasp, the SOAP-y notion of) "web services", grouping and exporting functions as the "endpoints".
What strikes me looking at q-connection now is some of the limitations/constraints assumed by the above ideas.
There's not really a strong object-oriented focus in these web-service-focused systems I've been working on; they tend more towards statelessness and JSON messages (as "value objects").
And my initial thoughts were around *functions* as the unit the developer thinks of as running on machine A or machine B: functions that take simple string/integer parameters (the kinds of things easily passed in a URI) and return JSON (or HTML).
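To illustrate that mental model, here's a hypothetical "remote module" of the kind I mean (the module name and its functions are made up): each export becomes an RPC-style endpoint taking URI-friendly parameters and returning a JSON-serializable value.

// greetings.js: a hypothetical CommonJS "remote module".
// Each exported function is an RPC-style endpoint: simple
// string/integer parameters in, a JSON-serializable value out.
exports.greet = function (name) {
    return {message: "Hello, " + name + "!"};
};

exports.add = function (a, b) {
    return {sum: Number(a) + Number(b)};
};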
I guess this background explains why I honestly hadn't thought much about, e.g., higher-order functions, i.e. passing functions to or returning functions from the remote objects.
And since CommonJS modules are effectively singletons, even the idea of exposing remote factory functions returning *objects* wasn't much in my thinking.
I could see, reading about E last week, how such functions returning objects are a step in the direction of an actor-like system, i.e. dealing with *many* dynamically created remote objects. I've read about Erlang and Akka, and I'm very interested in all these things (heck, even mobile code/mobile agents), but it's RESTful services I've worked with most (and, ack, cough, gasp, CORBA before that :).
So, in a nutshell, I'd mostly been thinking about JSON as the "messages"/"payloads" sent between these "remote modules" deployed to different machines. That seems both simpler and less powerful than q-connection, but it also sidestepped some of these more challenging design questions.
Going a little further, regarding performance: maybe in contrast to promise pipelining, for my pet project I'd thought a lot about the need for caching, and that easy, configurable client-side caching of RPC results would be critical (and would be the analog of REST's notion of transferring "representations"). I got heavily into NetKernel a few years ago, which pursues (broadly speaking) similar goals of bringing "REST inside your system" (and intelligent caching of both final and intermediate results).
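For what it's worth, the caching I have in mind could start as simply as memoizing the promise for a given call. This is just a hypothetical sketch: cachedInvoke is made up, and remote is assumed to be a Q/q-connection promise whose invoke method performs the RPC.

// Hypothetical sketch: cache promises for RPC results, keyed by
// method name plus arguments. Roughly the client-side analog of
// caching a REST "representation".
var cache = {};
function cachedInvoke(remote, name /*, ...args */) {
    var args = Array.prototype.slice.call(arguments, 2);
    var key = name + ":" + JSON.stringify(args);
    if (!cache[key]) {
        cache[key] = remote.invoke.apply(remote, [name].concat(args));
    }
    return cache[key];
}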
Also vaguely related to promise pipelining: I've implemented an idea I think I first saw in the (ill-fated) Jaxer project, that a single file could hold functions that get deployed onto multiple machines (another project I'm aware of doing this is Opa). So I'd allowed you to annotate whether an exposed "remote module" function should "runat" the client or the server. A single remote-module file could thus have functions that ran on both the client and the server.
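Concretely, I mean something like the following hypothetical module. The runat property is my own made-up annotation (not part of Jaxer, Opa, or q-connection); a deployment step would read it to decide where each function lives.

// stats.js: one hypothetical file, deployed to both machines.
exports.fetchValues = function (datasetId) {
    // Imagine a database lookup here; annotated to run on the server.
    return [3, 1, 4, 1, 5];
};
exports.fetchValues.runat = "server";

exports.normalize = function (values) {
    // Pure computation; annotated to run on the client.
    var max = Math.max.apply(Math, values);
    return values.map(function (v) { return v / max; });
};
exports.normalize.runat = "client";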
It feels to me there could be cases where this ability to easily throw a chunk of code into a function that runs on your remote "peer" would answer a similar problem to the ones helped by promise pipelining. Having the whole function run on the other side seems less sophisticated than promise pipelining, but on the other hand, the promise pipelining examples you guys gave above have to work with the promises using the .get/.invoke-style API (maybe Harmony direct proxies could help?).
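To make the contrast concrete, here is a hedged sketch. The first version pipelines through the .get/.invoke methods that Q exposes on remote promises; the second imagines a hypothetical runOnPeer helper (made up for this example) that ships the whole function to the other side, in the "remote modules" style.

// Pipelining style: each step is expressed against the promise, so
// intermediate results need not round-trip back to this machine.
var count = remote.get("counter").invoke("add", 5);

// Whole-function style: a hypothetical runOnPeer helper sends the
// closure to run next to the data; it is plain JavaScript once there.
var count2 = runOnPeer(function (objects) {
    return objects.counter.add(5);
});

Both keep the work near the data; the difference is whether the client composes steps through the promise API or hands over a closure.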
As I look at these ideas more, it's this question of power versus simplicity and ease of understanding that keeps running through my mind.
And whether there's a sweet spot between what I'd been doing with my "remote modules" and what q-connection provides, one that could be really simple for an average developer to explain and understand.
For example, marking an object as local-only with Q.master, so that its methods run where its state lives:

var counter = {
    count: 0,
    add: function (x) {
        counter.count += x;
        console.log('in add: count is now', counter.count);
        return counter.count;
    }
};
var example = {counter: Q.master(counter)};
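A sketch of how the far side might then use it, assuming remoteExample is the q-connection reference to example above: invoking pipelines the call to where counter lives, while using "then" would only yield whatever portion of the object is serializable.

remoteExample.get("counter").invoke("add", 3)
    .then(function (count) {
        console.log("count is now", count); // computed next to the master copy
    });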