Deferred = function(/*Function?*/ canceller){
// summary:
// Creates a new promise, encapsulating a sequence of callbacks in
// response to a value that may not yet be available. This is modeled
// after the Deferred class from Twisted <http://twistedmatrix.com>.
}
Deferred.prototype = {
cancel: function(){
// summary:
// Cancels a Deferred that has not yet received a value, or is
// waiting on another Deferred as its value.
},
callback: function(result){
// summary:
// Fulfills the Deferred promise object, beginning the callback
// sequence with a non-error value.
},
errback: function(/*Error*/error){
// summary:
// Fulfills the Deferred promise object with an error, beginning the
// callback sequence with an error result.
},
progress: function(progressValue){
// summary:
// Indicate that progress has been made on fulfilling this Deferred object.
},
addCallbacks: function(callback, errback, progressback){
// summary:
// Add separate callback, errback, and progressback to the end of the
// callback sequence. Non-function values are ignored.
},
wait: function(){
// summary:
// This will block the execution of the current function while waiting
// for the Deferred to be fulfilled, *if* the implementation and this
// Deferred support this operation. If supported, the function will
// return the result of the resolved Deferred when it is fulfilled. If
// the Deferred is resolved with an error, then this function will throw
// the provided error object. If the concurrency model of the
// implementation or the Deferred object does not support blocking, the
// "wait" property should be null or undefined. This API is not intended
// to imply that implementations that provide a "wait" function are
// superior.
}
}
Deferred.when = function(value, callback, errback){
// summary:
// When the value is not a Deferred object, the callback function is
// immediately called with the value as the first argument. If the value
// is an instance of Deferred, this is equivalent to calling
// value.addCallbacks(callback, errback);
}
Deferred.wait = function(value){
// summary:
// When the value is not a Deferred object, the value is immediately
// returned. If the value is an instance of Deferred, this returns the
// result of executing value.wait();
}
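To make the contract above concrete, here is a minimal, synchronous sketch of the callback/errback portion of this Deferred API. This is illustrative only, not the proposed implementation: progress, cancel, and wait are omitted, and a real implementation would also guard against resolving twice.

```javascript
// Minimal illustrative Deferred (callback/errback/addCallbacks/when only).
function Deferred() {
    this.listeners = [];   // pending { callback, errback } pairs
    this.fired = false;
}

Deferred.prototype.addCallbacks = function (callback, errback) {
    if (this.fired) {
        // already resolved: invoke the appropriate handler right away
        var fn = this.isError ? errback : callback;
        if (typeof fn === "function") fn(this.result);
    } else {
        this.listeners.push({ callback: callback, errback: errback });
    }
    return this;
};

// Begin the callback sequence with a non-error value.
Deferred.prototype.callback = function (result) {
    this.fired = true;
    this.isError = false;
    this.result = result;
    this.listeners.forEach(function (l) {
        if (typeof l.callback === "function") l.callback(result);
    });
};

// Begin the callback sequence with an error result.
Deferred.prototype.errback = function (error) {
    this.fired = true;
    this.isError = true;
    this.result = error;
    this.listeners.forEach(function (l) {
        if (typeof l.errback === "function") l.errback(error);
    });
};

// Call the callback immediately for plain values; defer for Deferreds.
Deferred.when = function (value, callback, errback) {
    if (value instanceof Deferred) {
        return value.addCallbacks(callback, errback);
    }
    return callback(value);
};
```

For example, `Deferred.when(42, fn)` calls `fn(42)` immediately, while `Deferred.when(d, fn)` waits until `d.callback(...)` fires.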
Tyler Close wrote:
> Ahh... I see from the wiki that you must be proposing a standard API
> for interaction between libraries. In that case, I'll just forge ahead
> with an explanation of why a promise API is better than a deferred
> API.
>
> First off, it's worth spending some time on this, since this kind of
> API tends to be very contagious. Once one module adopts it, it spreads
> to its public interface and so to the working's of other libraries.
> Having many different asynchronous messaging APIs would be about as
> annoying as having multiple different string APIs (for those who
> remember Win32 programming and the constant fiddling with string
> types). Since an asynchronous messaging API also introduces what are
> effectively new control flow statements to the language, many slightly
> different APIs could be very disorienting for a programmer.
>
I completely agree; that is why we want to standardize a promise API.
No, it doesn't. Why should a remote object API be combined with a
promise API? That seems like a poor separation of concerns. Why is the
remote object API more first class in the async world than other async
APIs? What protocol for remote object interaction would we use?
IIUC, you are suggesting that read-only promises are valuable for
security purposes. That makes sense, I agree. Is there a reason we can't
have a method or getter on the Deferred API that returns a "clone" that
doesn't affect the result in the original? It seems like unnecessary
complexity to force differentiation between the read-only and writable
promises for the 99% use case that doesn't need to send a promise to an
untrusted function.
> 3) Debugging
>
> In local-only computing, stack traces are often a crucial debugging
> tool, as they provide a snapshot of the call chain leading up to an
> error condition. In asynchronous messaging, this call chain is broken
> up, since the callee is invoked later. Having lost a crucial debugging
> tool, it's important to provide a similarly powerful one. For promises,
> Mark Miller, Terry Stanley and myself have been working on Causeway, a
> program that can stitch stack traces back together after they've been
> broken up by asynchronous messaging
> <http://waterken.sourceforge.net/debug/>. AFAIK, the Deferred API has
> not been designed with similar debugging support in mind.
>
>
Is there a reason why the ref_send API would be more debuggable than the
Deferred API for promises that I proposed?
Also, what does the Q stand for?
Thanks,
Kris
On Tue, Mar 24, 2009 at 4:24 PM, Kris Zyp <kri...@gmail.com> wrote:
> Tyler Close wrote:
>> In a distributed scenario, the good and account may each be in a
>> separate process from the local one. Using the ref_send API, this code
>> would look like:
>>
>> var good = ...
>> var account = ...
>> var payment = Q.post(b, 'withdraw', [ Q.get(good, 'price') ])
>>
>> In the above code, both the good and account variables hold a promise
>> for a remote object. The global 'Q' variable provides access to the
>> promise API. The call "Q.get(good, 'price')" accesses the price field
>> on the object that the 'good' promise will eventually refer to and
>> returns a promise for the value of the field at the time it was read.
>> We then pass this promise as an argument to the withdraw method, which
>> returns a promise for the produced payment object.
>>
>> AFAICT, the proposed Deferred API doesn't provide a way to express the
>> above interaction.
>>
> No, it doesn't. Why should a remote object API be combined with a
> promise API?
Not so much 'combined with' as 'compatible with'. A promise is useful
for setting up operations, like invocations and field accesses, on an
object that is yet-to-be-determined. The Q.post() and Q.get() methods
exist to do that setup on any promise, regardless of why the target
object has not yet been determined. Both of these methods also return
a promise, making it easy to setup another layer of pending operations
on the return values from the first layer of operations. For example,
in the above code, the return promise from the Q.get() call is
immediately passed as an argument to the Q.post() call. That one line
of code at the end of the example generates 3 promises and sets up 2
layers of future operations. The goal of the Q.post() and Q.get() API
is to enable that level of expressiveness for working with
yet-to-be-determined objects. It's about the syntax. Assuming all the
objects were local, the example could also be coded using only when()
calls, which are similar to the when() methods defined in the proposed
Deferred API. For example:
var payment = Q.when(good, function (goodValue) {
    return Q.when(account, function (accountValue) {
        return accountValue.withdraw(goodValue);
    });
});
For the local only case, the above code is semantically equivalent to
the previous code, but is more awkward to read and write and doesn't
work when the good and/or account may be remote. So the Q.post() and
Q.get() API is about providing a more productive syntax for promises
and a compatibility layer that we can plug a remote messaging
implementation into.
> That seems like a poor separation of concerns. Why is the
> remote object API more first class in the async world than other async
> APIs? What protocol for remote object interaction would we use?
So if you crack open the source code on the example page I've provided
for the ref_send library <http://waterken.sourceforge.net/bang/>,
you'll find that this separation of concerns between the promise API
and the remote messaging implementation is present. The ref_send.js
file only defines the promise API and provides an implementation for
local-only promises. This implementation has been done such that it is
easy to plug in other promise implementations, which is what the
web_send.js file does, providing remote promises built on top of XHR.
The goal is to support many different promise implementations, all
manipulated through the same API provided by the Q object. For
example, a planned next step is to provide a promise implementation
built on top of the new cross-frame scripting APIs. From the
programmer's perspective, all these different promises can be treated
the same.
>> 2) Safe coordination with local objects
>>
> IIUC, you are suggesting that read-only promises are valuable for
> security purposes. That makes sense, I agree. Is there a reason we can't
> have a method or getter on the Deferred API that returns a "clone" that
> doesn't affect the result in the original?
When you're making heavy use of promises, you end up passing them
around quite freely, like you do with normal references in synchronous
code. Once you start doing asynchronous operations, you quickly find
that most of your references become promises. Imagine what it would be
like to program in Javascript if you had to "clone" a reference before
passing it to another object. Your code would be littered with "clone"
operations in a haphazard and awkward way, making it less readable and
likely still not secure since you probably forgot to do the "clone"
operation in some places.
>> 3) Debugging
>>
>> In local-only computing, stack traces are often a crucial debugging
>> tool, as they provide a snapshot of the call chain leading up to an
>> error condition. In asynchronous messaging, this call chain is broken
>> up, since the callee is invoked later. Having lost a crucial debugging
>> tool, it's important to provide a similarly powerful one. For promises,
>> Mark Miller, Terry Stanley and myself have been working on Causeway, a
>> program that can stitch stack traces back together after they've been
>> broken up by asynchronous messaging
>> <http://waterken.sourceforge.net/debug/>. AFAIK, the Deferred API has
>> not been designed with similar debugging support in mind.
>>
>>
> Is there a reason why the ref_send API would be more debuggable than the
> Deferred API for promises that I proposed?
Since the proposed Deferred API doesn't provide a Q.post() or Q.get()
API, it can't be used to generate remote operations and so can't
generate the Causeway log events used to stitch together traces of
call chains that cross process boundaries.
> Also, what does the Q stand for?
Q stands for "queue", as in "queue this invocation for later" or
"queue this field access for later". The operations done via the Q
object are things that don't happen immediately, but rather are queued
to happen later.
> Thanks,
> Kris
My pleasure,
--Tyler
I certainly agree that we want promises and remote object interaction to
be compatible. And if we agree that the goal is compatibility and not
combining, is it safe to surmise that we can define these with separate
APIs/modules (for ServerJS's purposes)?
>
>> That seems like a poor separation of concerns. Why is the
>> remote object API more first class in the async world than other async
>> APIs? What protocol for remote object interaction would we use?
>>
>
> So if you crack open the source code on the example page I've provided
> for the ref_send library <http://waterken.sourceforge.net/bang/>,
> you'll find that this separation of concerns between the promise API
> and the remote messaging implementation is present. The ref_send.js
> file only defines the promise API and provides an implementation for
> local-only promises. This implementation has been done such that it is
> easy to plug in other promise implementations, which is what the
> web_send.js file does, providing remote promises built on top of XHR.
> The goal is to support many different promise implementations, all
> manipulated through the same API provided by the Q object. For
> example, a planned next step is to provide a promise implementation
> built on top of the new cross-frame scripting APIs. From the
> programmer's perspective, all these different promises can be treated
> the same.
>
I am curious how this works. If I write a SOAP handler and JSON-RPC
handler, how does Q.get or Q.post know which handler to call?
var vow1 = fetchObjectForSOAPInteraction();
var vow2 = fetchObjectForJSONRPCInteraction();
Q.post(vow1, "foo", []);
Q.post(vow2, "foo", []);
If I wrote both the fetch functions, how do I indicate to Q.post how to
execute the method on the different remote objects?
>
>>> 2) Safe coordination with local objects
>>>
>>>
>> IIUC, you are suggesting that read-only promises are valuable for
>> security purposes. That makes sense, I agree. Is there a reason we can't
>> have a method or getter on the Deferred API that returns a "clone" that
>> doesn't affect the result in the original?
>>
>
> When you're making heavy use of promises, you end up passing them
> around quite freely, like you do with normal references in synchronous
> code. Once you start doing asynchronous operations, you quickly find
> that most of your references become promises. Imagine what it would be
> like to program in Javascript if you had to "clone" a reference before
> passing it to another object. Your code would be littered with "clone"
> operations in a haphazard and awkward way, making it less readable and
> likely still not secure since you probably forgot to do the "clone"
> operation in some places.
>
>
The "clone" method was supposed to be analogous to your separation of
read-only and writable promises. How is it that programmers using your
API are so much less likely to improperly distinguish between them? If I
spelled it "promise" and made it a getter, would there be any difference
between them?
Designing the syntax around separate promises still feels like we are
designing convenience for the exception rather than the norm.
>>> 3) Debugging
>>>
>>> In local-only computing, stack traces are often a crucial debugging
>>> tool, as they provide a snapshot of the call chain leading up to an
>>> error condition. In asynchronous messaging, this call chain is broken
>>> up, since the callee is invoked later. Having lost a crucial debugging
>>> tool, it's important to provide a similarly powerful one. For promises,
>>> Mark Miller, Terry Stanley and myself have been working on Causeway, a
>>> program that can stitch stack traces back together after they've been
>>> broken up by asynchronous messaging
>>> <http://waterken.sourceforge.net/debug/>. AFAIK, the Deferred API has
>>> not been designed with similar debugging support in mind.
>>>
>>>
>>>
>> Is there a reason why the ref_send API would be more debuggable than the
>> Deferred API for promises that I proposed?
>>
>
> Since the proposed Deferred API doesn't provide a Q.post() or Q.get()
> API, it can't be used to generate remote operations and so can't
> generate the Causeway log events used to stitch together traces of
> call chains that cross process boundaries.
>
>
Obviously if you can't generate remote operations they can't be debugged
:P. But, if post and get exist in a separate API, why can't they be
debugged in the same manner from that API?
Thanks,
Kris
On Wed, Mar 25, 2009 at 12:33 PM, Kris Zyp <kri...@gmail.com> wrote:
> Tyler Close wrote:
>> For the local only case, the above code is semantically equivalent to
>> the previous code, but is more awkward to read and write and doesn't
>> work when the good and/or account may be remote. So the Q.post() and
>> Q.get() API is about providing a more productive syntax for promises
>> and a compatibility layer that we can plug a remote messaging
>> implementation into.
>>
> I certainly agree that we want promises and remote object interaction to
> be compatible. And if we agree that the goal is compatibility and not
> combining, is it safe to surmise that we can define these with separate
> APIs/modules (for ServerJS's purposes)?
Not providing a way to schedule future invocations greatly limits the
utility of promises. Really, this functionality is the thing that
makes a promise what it is. Most of the programming idioms in the E
language and the Waterken server rely on this functionality. I'm not
sure that there's much value in a promise API that doesn't support
scheduling future invocations.
It's also worth noting that the Q.get() and Q.post() functions add
very little complexity to the implementation. For example, see my
local-only promise implementation at:
This code follows the "Good Parts" conventions for Javascript, so the
public API is defined at the bottom of the file. Note that the Q.get()
and Q.post() implementations are short sugar methods that rely on
plumbing code that is already necessary for implementing Q.when().
There's very little cost to providing these methods, and a lot to be
gained.
The Q.post() implementation delegates the invocation part of the call
to the provided promise object, so your
fetchObjectForSOAPInteraction() function would return a promise
implementation that does SOAP messaging.
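The delegation Tyler describes might be sketched like this. The `handle` and `makePromise` names here are hypothetical, not ref_send's actual internals, and real ref_send queues these operations for a future turn and returns promises rather than dispatching synchronously:

```javascript
// Hedged sketch of dispatch-by-delegation: Q.post forwards the
// invocation to the promise itself, so each transport (local, SOAP,
// JSON-RPC, ...) supplies a promise whose handler knows how to
// deliver the message.
var Q = (function () {
    function makePromise(handler) {
        return { handle: handler };  // the promise carries its own dispatcher
    }
    return {
        ref: function (value) {
            // a resolved local promise: dispatch operations directly
            return makePromise(function (op, args) {
                if (op === "post") return value[args[0]].apply(value, args[1]);
                if (op === "get") return value[args[0]];
            });
        },
        post: function (promise, verb, argv) {
            // delegate to whatever transport the promise encapsulates
            return promise.handle("post", [verb, argv]);
        },
        get: function (promise, noun) {
            return promise.handle("get", [noun]);
        }
    };
})();
```

Under this sketch, a `fetchObjectForSOAPInteraction()` would return `makePromise(soapHandler)`, and `Q.post(vow1, "foo", [])` would route through that handler.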
Your use of the 'vow' terminology suggests that you've at least looked
at some of the E language material. Given that, I'm surprised that you
don't think the get() and post() methods are important to the utility
of promises. How did you come to this view?
>>>> 2) Safe coordination with local objects
>>>>
>>>>
>>> IIUC, you are suggesting that read-only promises are valuable for
>>> security purposes. That makes sense, I agree. Is there a reason we can't
>>> have a method or getter on the Deferred API that returns a "clone" that
>>> doesn't affect the result in the original?
>>>
>>
>> When you're making heavy use of promises, you end up passing them
>> around quite freely, like you do with normal references in synchronous
>> code. Once you start doing asynchronous operations, you quickly find
>> that most of your references become promises. Imagine what it would be
>> like to program in Javascript if you had to "clone" a reference before
>> passing it to another object. Your code would be littered with "clone"
>> operations in a haphazard and awkward way, making it less readable and
>> likely still not secure since you probably forgot to do the "clone"
>> operation in some places.
>>
>>
> The "clone" method was supposed to be analogous to your separation of
> read-only and writable promises. How is it that programmers using your
> API are so much less likely to improperly distinguish between them? If I
> spelled it "promise" and made it a getter, would there be any difference
> between them?
The crucial difference between the two APIs is that the object
returned by Q.defer() is *not* polymorphic with a promise, whereas a
Deferred that has not been clone()'d is polymorphic with one that has.
For example, in the ref_send API,
in callee:
function foo() {
var x = Q.defer();
// do some stuff.
return x; // forgot to return x.promise
}
in caller
Q.when(foo(), ...); // causes runtime error since foo() does not return a promise
Now, in the proposed Deferred API for the callee:
function foo() {
var x = new Deferred();
// do some stuff.
return x; // forgot to call x.clone()
}
in caller:
foo().when(...); // code works even though clone() not called
In the above code, it may be a security violation for the returned
Deferred to not be clone()'d; however, the error lies hidden since an
honest caller is only expecting a read-only Deferred and so doesn't
attempt any write operations. The security error may lie dormant in
the code until an attacker notices that write operations will also
succeed. The extra write permission leaks out of the caller and sits
waiting for an attacker to notice.
The proposed design for Deferred.clone() is a good example of a design
anti-pattern that is well-known in capability-based design: "Don't
create subtypes that provide greater authority". This anti-pattern is
an important one that we commonly search for when doing security
audits, since it provides a hiding place for security errors in code
that seemingly works and passes its unit tests.
> Designing the syntax around separate promises still feels like we are
> designing convenience for the exception rather than the norm.
That's strange, since my own intuition is exactly the opposite.
Between E, Waterken and some others, I've done a fair amount of
programming with promises and also participated in rigorous security
reviews of this code. To me, mutable promises seem like a disaster.
This disagreement is especially surprising since you agreed earlier in
this thread that promises are likely to be used in a library's public
API. With mutable promises, this is like client code getting write
access to a library's internal data structures. It's hard to create
reliable code that way.
>>>> 3) Debugging
>>>>
>>>> In local-only computing, stack traces are often a crucial debugging
>>>> tool, as they provide a snapshot of the call chain leading up to an
>>>> error condition. In asynchronous messaging, this call chain is broken
>>>> up, since the callee is invoked later. Having lost a crucial debugging
>>>> tool, it's important to provide a similarly powerful one. For promises,
>>>> Mark Miller, Terry Stanley and myself have been working on Causeway, a
>>>> program that can stitch stack traces back together after they've been
>>>> broken up by asynchronous messaging
>>>> <http://waterken.sourceforge.net/debug/>. AFAIK, the Deferred API has
>>>> not been designed with similar debugging support in mind.
>>>>
>>>>
>>>>
>>> Is there a reason why the ref_send API would be more debuggable than the
>>> Deferred API for promises that I proposed?
>>>
>>
>> Since the proposed Deferred API doesn't provide a Q.post() or Q.get()
>> API, it can't be used to generate remote operations and so can't
>> generate the Causeway log events used to stitch together traces of
>> call chains that cross process boundaries.
>>
>>
> Obviously if you can't generate remote operations they can't be debugged
> :P. But, if post and get exist in a separate API, why can't they be
> debugged in the same manner from that API?
In such a case, I suspect the get() / post() implementation would end
up creating its own promise implementation, since it's useful to be
able to stash information inside the promises for use by the get() and
post() methods. For example, for Causeway logging, these methods need
some way of assigning a stable string identifier to promises. It's
most convenient to store this identifier inside the promise.
--Tyler
var good = ...
var account = ...
var payment = Q.post(b, 'withdraw', [ Q.get(good, 'price') ])
1. spell 'post' as 'call' ... I suppose the first is more a matter of taste than
anything, so I'll just let it alone for now.
2. put the methods on the promise, instead of sending them via the Q object. ... has a
negative impact on the API though. When a caller does a 'post' (or
'call'), it wants assurance that the invocation will be done later,
not immediately.
I'll add to the bike's paintjob by saying that I hate 'call'. For
years the E community has been using 'call' vs 'send' to distinguish
synchronous call-return interactions vs asynchronous message sends. I
often misremember Tyler's 'post' as 'send' but never as 'call'.
Actually, since post/send is by far the most common operation, why not
make Q a function so we write
Q(observer, 'currentValue', [value])
?
>> 2. put the methods on the promise, instead of sending them via the Q
>> object. ... has a
>> negative impact on the API though. When a caller does a 'post' (or
>> 'call'), it wants assurance that the invocation will be done later,
>> not immediately.
Further, the current API enables the ref_send library to treat a
non-promise value as equivalent-for-most-purposes to a promise which
has already been resolved to that value. If these methods were on the
promise, then they could be confused for methods of the same name on
non-promise values.
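The equivalence Tyler mentions might look like this in sketch form; the `__isPromise__` marker and `addListener` method are hypothetical stand-ins for ref_send's actual representation, which also defers the callback to a future turn:

```javascript
// Hedged sketch: when() accepts a promise or a plain value; a plain
// value is treated as a promise that has already resolved to it.
function isPromise(v) {
    return !!(v && v.__isPromise__);             // hypothetical marker
}
function when(value, fulfilled, rejected) {
    if (isPromise(value)) {
        value.addListener(fulfilled, rejected);  // hypothetical method
        return;
    }
    return fulfilled(value);                     // non-promise: already resolved
}
```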
> Ah, I see. Makes sense. (Cool!)
> So, in E, using the eventual send operator, "aVow <- doSomething()", serves
> the same purpose.
> Ihab
Yes. For future JavaScript, "<-" would unfortunately be ambiguous.
Instead, I hope that
aVow <== doSomething()
could one day desugar to
Q(aVow, 'doSomething', [])
and
when (aVow) ==> (a) {
...a...
}
could desugar to
Q.when(aVow, function(a) { return ...a...; });
--
Cheers,
--MarkM
Actually, since post/send is by far the most common operation, why not make Q a function so we write
Q(observer, 'currentValue', [value])
For get, I would leave unchanged Tyler's current
Q.get(aVow, 'propName')
For set, delete, for/in, in, instanceof, ===, and all the other
elements of JS inter-object interaction not covered by get & send, we
come to an interesting fork in the design space. Tyler chose
operations that can be mapped to a language independent protocol and
rationalized in terms of HTTPS GET and POST semantics. What he's come
up with works rather well across E, Joe-E, JS, and Caja. Within
Tyler's design, these other operations can be mapped onto posts to an
object that provides these services for its local objects. In other
words, if vat A exports an
exports.JS = Object.freeze({
    set: Object.freeze(function (obj, propName, newValue) {
        obj[propName] = newValue;
    }),
    // ... and likewise for the other JS-specific interactions
});
then, when JS in vatB has a remote reference to vatA's JS, say, as
JSARef, if it wants to eventually set aVow's propName to 8, where
aVow is also expected to be a reference to an object in vatA, it can
do
Q(JSARef, 'set', [aVow, 'propName', 8])
or, with possible sugar,
JSARef <== set(aVow, 'propName', 8)
Yes, even with sugar this is ugly. But I'd rather live with it for the
uncommon case rather than make the intervat object model JS specific.
--
Cheers,
--MarkM
For set, delete, for/in, in, instanceof, ===, and all the other elements of JS inter-object interaction not covered by get & send, we
come to an interesting fork in the design space. Tyler chose
operations that can be mapped to a language independent protocol and
rationalized in terms of HTTPS GET and POST semantics.
What he's come up with works rather well across E, Joe-E, JS, and Caja.
Q(JSARef, 'set', [aVow, 'propName', 8])
Yes, even with sugar this is ugly. But I'd rather live with it for the
uncommon case rather than make the intervat object model JS specific.
Ihab & I just had a verbal conversation that clarified many things.
I'll try to recap.
If we're not computing inter-vat, why do we need a promise API at all?
If we are computing inter-vat, we're already paying all the semantic
costs of computing between address spaces, and thus most of the costs
of computing between machines. Once we shift to async messages over
unreliable references, we've already mostly shifted to a distributable
model, so let's just get it right. E, Waterken/Joe-E, and AmbientTalk
show that this right way is still bloody simple.
>> What he's come up with works rather well across E, Joe-E, JS, and Caja.
>
> Ok, so this is an extra constraint: "Represent promises in a remote-friendly
> way". Is this something we all really need?
If you're only computing within a vat, why do you need promises? Why
not just continue with conventional sequential programming?
> The lesson I learned from CORBA
> (fwiw) is: Pick a language and VM, and stick with it, and so embrace code
> mobility.
It's a tradeoff. E's distributed object semantics is tightly tied to
its local object semantics. The two were designed together to play
together well. Were E to switch to ref_send as its only remoting
protocol, something would be lost.
Neither JS nor Java were designed to play well with remote objects.
ref_send was initially designed with the main use-case being Java then
Joe-E objects. However, Tyler was careful not to design something Java
specific, and he hasn't. If we were designing something JS specific, I
don't know what I would do differently. When the misfit of computing
across languages is sufficiently small, we may as well pay a small
cost to buy the large benefits.
>> Q(JSARef, 'set', [aVow, 'propName', 8])
>> Yes, even with sugar this is ugly. But I'd rather live with it for the
>> uncommon case rather than make the intervat object model JS specific.
>
> But promises are not *always* used intervat, and the common case *is* JS
> specific.
When you know you're computing intra-vat, then you can painlessly do
all these other operations in a when block. For example, a deferred
'in' operation is simply:
var flagVow = Q.when(aVow, function(a) { return 'propName' in a; });
Since it is presently uncertain whether JavaScript itself will ever
have catchalls, we can't easily write access abstractions, such as
interposed membranes. Instead, we can divide into two solvable cases.
In the local synchronous case, we know what properties the proxied
object already has at the time we create the proxy, so we can create
proxies for each of these properties explicitly. For the asynchronous
case, we don't know, so we can't. Fortunately, by funnelling all async
ops through a generic API, we don't have to. The big advantage of
supporting only deferred get/send/eq directly is that interposed
access abstractions only have to intermediate these operations.
--
Cheers,
--MarkM
I'm interested in a generalized event API, an event loop/reactor/vat
module, and perhaps a worker thread module. I'm curious about whether
it would be appropriate to build one on top of the other, or whether
they should be coupled.
I would not preclude the possibility of inter-vat computing. Client
and server "vat" communication is very likely to become common.
Multiple vats might be composed on either or both the server and the
client, by way of worker thread communication. Even intra-vat
computing in JavaScript demands a strong basis for asynchrony.
I'm thinking that we need a "promise" module, an "event" module, and a
"worker" module, or some subset with some functionality accreted.
Tyler's Q object would presumably be the "promise" module:
var Q = require("promise");
Tom Robinson, Wes Garland, and I have recently talked about what we're
going to do about timers, threads, and event loops. Wes advocates not
imposing any particular strategy, but providing modules for each. Tom
Robinson proposes the addition of an "onunload" event to the standard,
so a module can hook an event loop or timer service to the end of the
current execution. Without that, we could still explicitly initiate
an event loop, like Twisted's "reactor.run()", or a "vat.run()"
equivalent. I see event loops, timers, vats, reactors, events,
observer pattern, promises, signals, and form binding ideas as closely
related and really ought to flush well with one another, perhaps in
terms of each other. My question for those with experience: what
would the natural architecture stack look like?
At any rate, a summary would be good for the rest of us. I'm still
plodding through Mark's thesis and don't consider myself an expert,
but I am certainly becoming a fan of the promise model.
Kris Kowal
Tom Robinson proposes the addition of an "onunload" event to the standard,
so a module can hook an event loop or timer service to the end of the
current execution. Without that, we could still explicitly initiate
an event loop ...
Yeah. Nominally the module system would check whether
environment.onunload exists and call it after the main module has
finished loading. Thus any module in the environment could register a
callback by having been required, thereby permitting events registered
by its API to execute in turn. A high level timer API and worker
thread communication events could be implemented in terms of sleep and
signals.
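One way the hook Kris describes could work, as an illustrative sketch only (the `environment`, `enqueue`, and `mainFinished` names are assumptions, not a specified API):

```javascript
// Hypothetical sketch of the onunload hook: after the main module
// finishes loading, the loader invokes environment.onunload so that
// required modules can start their event loops or flush queued events.
var environment = {};            // shared environment object (assumed)

// a module registers interest when it is first required:
var queued = [];
environment.onunload = function () {
    queued.forEach(function (task) { task(); });  // drain queued events
};

// module API used during the main turn:
function enqueue(task) { queued.push(task); }

// the loader, after the main module has finished loading:
function mainFinished() {
    if (typeof environment.onunload === "function") environment.onunload();
}
```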
Kris Kowal
On Tue, Mar 31, 2009 at 7:29 PM, Kris Kowal <cowber...@gmail.com> wrote:
>
> Could either Tyler Close or Kris Zyp summarize the deferred and
> promise API in a documentation form with the options for names and the
> noted preferences?
The up-to-date version of my JavaScript ref_send library is kept at:
Below is a copy of the interface documentation for the main Q object
exported by the library:
{
    /**
     * Enqueues a task to be run in a future turn.
     * @param task function to invoke later
     */
    run: ...,
    /**
     * Constructs a rejected promise.
     * @param reason Error object describing the failure
     * @param $ optional type info to add to <code>reason</code>
     */
    reject: ...,
    /**
     * Constructs a promise for an immediate reference.
     * @param value immediate reference
     */
    ref: ...,
    /**
     * Constructs a ( promise, resolver ) pair.
     */
    defer: ...,
    /**
     * Gets the current value of a promise.
     * @param value promise or immediate reference to evaluate
     */
    near: ...,
    /**
     * Registers an observer on a promise.
     * @param value promise or immediate reference to observe
     * @param fulfilled function to be called with the resolved value
     * @param rejected function to be called with the rejection reason
     * @return promise for the return value from the invoked callback
     */
    when: ...,
    /**
     * Gets the value of a property in a future turn.
     * @param value promise or immediate reference to get property from
     * @param noun name of property to get
     * @return promise for the property value
     */
    get: ...,
    /**
     * Invokes a method in a future turn.
     * @param value promise or immediate reference to invoke
     * @param verb name of method to invoke
     * @param argv array of invocation arguments
     * @return promise for the return value
     */
    post: ...
}
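To make the call shapes above concrete, here is a toy, fully synchronous stand-in for a few of the Q methods. The real ref_send library resolves callbacks in a later event-loop turn, and its promise representation is different, so this is illustrative only.

```javascript
// Toy stand-in for a subset of the Q interface, resolving synchronously
// rather than in a future turn, just to show the calling conventions.
var Q = {
    ref: function (value) {
        return { fulfilled: true, value: value };
    },
    reject: function (reason) {
        return { fulfilled: false, reason: reason };
    },
    when: function (p, fulfilled, rejected) {
        // In the real library this callback runs in a future turn.
        if (p.fulfilled) {
            return Q.ref(fulfilled(p.value));
        }
        return Q.ref((rejected || function (e) { throw e; })(p.reason));
    },
    get: function (p, noun) {
        return Q.when(p, function (v) { return v[noun]; });
    }
};

var answer = null;
Q.when(Q.get(Q.ref({ balance: 100 }), "balance"), function (v) {
    answer = v;
});
```

Note how `when` returns a promise for the callback's return value, which is what makes chains like `Q.when(Q.get(...), ...)` compose.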
People on this list have been kicking around alternate names for the
'post' method. I'm inclined to keep the interface as is.
> I'm interested in a generalized event API, an event loop/reactor/vat
> module, and perhaps a worker thread module. I'm curious about whether
> it would be appropriate to build one on top of the other, or whether
> they should be coupled.
The ref_send.js library reuses an existing event loop, rather than
creating its own. Promise libraries are typically coded this way,
layering the promise functionality on top of an existing event loop
API. Concurrency is created by having multiple event loops, with
asynchronous messaging between them. These inter-event-loop references
also conform to the promise API. An event can be modeled as a
(promise, resolver) pair. For example:
var event = Q.defer();
// Register an event handler
Q.when(event.promise, function (value) {
    // the event happened
});
// Fire an event
event.resolve('click');
If an event is recurring, you could do something like:
var recurringEvent = Q.defer();
// Register an event handler
var handler = function (next) {
    // the event happened!
    // register for the next event in the series
    Q.when(next, handler);
};
Q.when(recurringEvent.promise, handler);
// Fire an event
var next = Q.defer();
recurringEvent.resolve(next.promise);
There are many variations you could create on the above pattern,
depending on what kind of event stream you want to create.
> I would not preclude the possibility of inter-vat computing. Client
> and server "vat" communication is very likely to become common.
> Multiple vats might be composed on either or both the server and the
> client, by way of worker thread communication. Even intra-vat
> computing in JavaScript demands a strong basis for asynchrony.
>
> I'm thinking that we need a "promise" module, an "event" module, and a
> "worker" module, or some subset with some functionality accreted.
> Tyler's Q object, would presumably be the "promise" module:
If you take an approach like the one I outlined above, you'll probably
end up with many different flavours of event modules, all built out of
promises. I think that's a good thing. For a "worker" module,
something like the Google worker module API would do fine. This
provides an event loop and a messaging layer on which to implement
cross-worker promises.
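A hedged sketch of that layering, with plain objects standing in for the real worker ports: the `postMessage`/`onmessage` shape mirrors the worker messaging API, but everything here runs synchronously, and the message format (`op`, `target`, `noun`, `id`) is invented for illustration.

```javascript
// Two fake "ports" wired directly together, standing in for a worker
// messaging channel.
function makeChannel() {
    var a = {}, b = {};
    a.postMessage = function (m) { b.onmessage({ data: m }); };
    b.postMessage = function (m) { a.onmessage({ data: m }); };
    return [a, b];
}

var ports = makeChannel();
var local = ports[0], remote = ports[1];

// The "remote vat" answers GET requests against its local objects.
var remoteObjects = { account: { balance: 42 } };
remote.onmessage = function (event) {
    var m = event.data; // { op: "GET", target, noun, id }
    remote.postMessage({ id: m.id, value: remoteObjects[m.target][m.noun] });
};

// The local side exposes a promise-flavoured get(): callers hand over a
// callback and receive the answer via a later message.
var pending = {}, nextId = 0;
local.onmessage = function (event) {
    pending[event.data.id](event.data.value);
};
function remoteGet(target, noun, fulfilled) {
    var id = nextId++;
    pending[id] = fulfilled;
    local.postMessage({ op: "GET", target: target, noun: noun, id: id });
}

var got = null;
remoteGet("account", "balance", function (v) { got = v; });
```

The correlation-id table (`pending`) is the essential piece: it is what lets a request made in one turn be resolved by a message arriving in a later one.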
> var Q = require("promise");
>
> Tom Robinson, Wes Garland, and I have recently talked about what we're
> going to do about timers, threads, and event loops. Wes advocates not
> imposing any particular strategy, but providing modules for each. Tom
> Robinson proposes the addition of an "onunload" event to the standard,
> so a module can hook an event loop or timer service to the end of the
> current execution. Without that, we could still explicitly initiate
> an event loop, like Twisted's "reactor.run()", or a "vat.run()"
> equivalent. I see event loops, timers, vats, reactors, events, the
> observer pattern, promises, signals, and form-binding ideas as closely
> related; they really ought to mesh well with one another, perhaps be
> defined in terms of each other. My question for those with experience: what
> would the natural architecture stack look like?
A vat spawning API, built on top of something like the Google worker
API, plus a promise API like ref_send, is a good foundation to build
on. This gives you an easy-to-use concurrency model, asynchronous and
remote messaging, and a foundation for various event APIs.
> At any rate, a summary would be good for the rest of us. I'm still
> plodding through Mark's thesis and don't consider myself an expert,
> but I'm certainly becoming a fan of the promise model.
It's a surprisingly useful model. The API is rather simple, yet it
solves a wide array of problems. MarkM/Liskov really hit upon
something good.
--Tyler
Like in E, both a promise and far reference implement the same API,
but the underlying implementations differ. One queues up
operations to be forwarded to the resolved value of the promise; the
other sends the operations over the network to the remote object. As
discussed earlier with Kris Zyp, there can be many different far
reference implementations that use their own on-the-wire syntax. In
that light, think of a promise as a kind of far reference that queues
its messages in RAM instead of on the NIC.
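As a toy illustration of a promise "queueing its messages in RAM" until it resolves, here is a minimal buffering promise. This is not the E or ref_send implementation; the `send`/`resolve` names are invented for the sketch.

```javascript
// A promise that buffers eventual sends until it is resolved, then
// forwards the buffered operations to the resolved value.
function makeQueuingPromise() {
    var buffer = [];
    var target = null;
    return {
        send: function (verb, args, fulfilled) {
            if (target === null) {
                buffer.push([verb, args, fulfilled]); // queue in RAM
            } else {
                fulfilled(target[verb].apply(target, args));
            }
        },
        resolve: function (value) {
            target = value;
            // flush everything queued before resolution
            buffer.forEach(function (m) {
                m[2](target[m[0]].apply(target, m[1]));
            });
            buffer = [];
        }
    };
}

var p = makeQueuingPromise();
var result = null;
p.send("toUpperCase", [], function (v) { result = v; }); // buffered
p.resolve("hello");                                      // flushed
```

A far reference would keep the same `send` surface but serialize the `[verb, args]` pairs onto the wire instead of into `buffer`.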
> To an extent, this makes sense since the APIs for a promise and a far
> reference are similar (eventual send operations returning promises). In
> fact, the syntax in E for invocations on these looks the same. The question
> is, in our JS incarnation, should we make them similar or different?
What would it mean for them to be different?
--Tyler
Actually, that's not quite right. It would have to be like:
var recurringEvent = Q.defer();
// Register an event handler
var handler = function (info) {
    // the event happened!
    // register for the next event in the series
    Q.when(info.next, handler);
};
Q.when(recurringEvent.promise, handler);
// Fire an event
var next = Q.defer();
recurringEvent.resolve({ next: next.promise });
In the previous code, the event handler would never fire; instead it
would be automatically queued on the next promise.
--Tyler
> What would it mean for them to be different?
I'm porting this to ServerJS on Narwhal. I have a question. I'm
going to have to supplant the ADSAFE API with standard library module
equivalents, and standard JavaScript idioms. That means that the
module will no longer be able to depend on static verification and the
ADSAFE API to maintain security invariants. When it counts, globals
will be deeply frozen. Are there any objects I need to freeze?
Kris Kowal
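A common deep-freeze sketch along those lines (assuming ES5's `Object.freeze` and `Object.isFrozen`); whether freezing the standard globals is actually sufficient for ref_send's invariants is exactly the open question in the message above.

```javascript
// Recursively freeze an object and everything reachable from its own
// properties. The isFrozen check guards against cycles.
function deepFreeze(obj) {
    Object.freeze(obj);
    Object.getOwnPropertyNames(obj).forEach(function (name) {
        var value = obj[name];
        if (value !== null &&
                typeof value === "object" &&
                !Object.isFrozen(value)) {
            deepFreeze(value);
        }
    });
    return obj;
}

var config = deepFreeze({ nested: { port: 8080 } });
// Mutation is silently ignored in sloppy mode, throws in strict mode:
try { config.nested.port = 9090; } catch (e) { /* TypeError */ }
```

Note that freezing does not traverse prototypes or closure state, so it is a building block for, not a substitute for, the static guarantees ADSAFE provided.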
Tyler Close wrote:
> Responses inline below...
>
> On Wed, Mar 25, 2009 at 12:33 PM, Kris Zyp <kri...@gmail.com> wrote:
>
>> Tyler Close wrote:
>>
>>> For the local only case, the above code is semantically equivalent to
>>> the previous code, but is more awkward to read and write and doesn't
>>> work when the good and/or account may be remote. So the Q.post() and
>>> Q.get() API is about providing a more productive syntax for promises
>>> and a compatibility layer that we can plug a remote messaging
>>> implementation into.
>>>
>>>
>> I certainly agree that we want promises and remote object interaction to
>> be compatible. And if we agree that the goal is compatibility and not
>> combining, is it safe to surmise that we can define these with separate
>> APIs/modules (for ServerJS's purposes)?
>>
>
> Not providing a way to schedule future invocations greatly limits the
> utility of promises. Really, this functionality is the thing that
> makes a promise what it is. Most of the programming idioms in the E
> language and the Waterken server rely on this functionality. I'm not
> sure that there's much value in a promise API that doesn't support
> scheduling future invocations.
>
Dojo has benefited greatly from promises without any special concept
of a future invocation on the promised object. There is a huge range of
uses that don't require a future-invocation API (HTTP requests, file
I/O, event-completed actions, etc.).
So we do need an API for the promise object itself then, don't we? In
order to send a method invocation to the target of an unfulfilled
promise, there must be some contract for what to call on the promise, so
that the remote protocol implementor can deliver that invocation. In the
Waterken source code it looks like a promise is a function: the first
argument is the operation (WHEN, GET, or POST), and the remaining
arguments depend on the operation. I assume you chose to make a promise
a function so you could prevent tampering. Of course, on the server side
we can actually make tamper-proof objects that expose methods that
safely return values. It seems to me that this lower-level API (which
AFAICT is undocumented in ref_send) should be what ServerJS
standardizes in order to support different promise implementations (with
a separation between plain promises and remote invocation functionality,
perhaps as a "subclass"). Then environments can freely provide
convenience methods like those provided by the ref_send API.
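A sketch of that lower-level contract, with a promise represented as a function whose first argument names the operation. The operation names (WHEN, GET, POST) follow the Waterken description above; the factory name and argument layout are hypothetical.

```javascript
// A fulfilled promise as a closure dispatching on an operation name.
// Because it is a function over captured state, callers cannot replace
// its "methods" to tamper with dispatch.
function makeFulfilled(value) {
    return function (op, arg1, arg2) {
        switch (op) {
        case "WHEN":
            return arg1(value);               // arg1: fulfilled callback
        case "GET":
            return value[arg1];               // arg1: property name
        case "POST":
            return value[arg1].apply(value, arg2); // arg1: verb, arg2: argv
        default:
            throw new Error("unknown operation: " + op);
        }
    };
}

var account = makeFulfilled({
    balance: 10,
    deposit: function (n) { return this.balance + n; }
});
var b = account("GET", "balance");
var sum = account("POST", "deposit", [5]);
```

An unfulfilled or remote promise would implement the same three-operation contract with queueing or wire dispatch, which is the separation of "plain promise" and "remote invocation" suggested above.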
OK, that makes sense; the read-only promise should have different method
signatures than the mutable promise. I think that seems reasonable.
>> Designing the syntax around separate promises still feels we are
>> designing convenience for the exception rather than the norm.
>>
>
> That's strange, since my own intuition is exactly the opposite.
> Between E, Waterken and some others, I've done a fair amount of
> programming with promises and also participated in rigorous security
> reviews of this code. To me, mutable promises seem like a disaster.
>
> This disagreement is especially surprising since you agreed earlier in
> this thread that promises are likely to be used in a library's public
> API. With mutable promises, this is like client code getting write
> access to a library's internal data structures. It's hard to create
> reliable code that way.
>
Mutable promises have been far from a disaster for us; they have been
very useful. However, I do see the value in immutable promises, and that
does seem a more reasonable path for object-capability systems. Still,
we need an API that allows for different implementations, and I don't
see how ref_send provides that.
Agreed.
Thanks,
Kris