Futures in 2.10.1 - a proposal


Philipp Haller

Nov 27, 2012, 5:37:45 AM
to scala...@googlegroups.com
SIP-14 as implemented in Scala 2.10.0 is fairly minimal. This is good, for many reasons. However, the minimality of the design also means that certain functionalities are not readily available, or require a rather low-level programming style. Scala 2.10.1 and 2.10.2 give us the opportunity to add some of the missing pieces.

To kick off the discussion, here is a proposal of functionality to add in Scala 2.10.1.

* Scheduler. Scheduling actions is important, among other things, for timeouts, where you'd like to schedule the completion of a promise with a `TimeoutException`. So far, Akka users could just use `akka.actor.Scheduler`. Otherwise, it's possible to fall back to a `java.util.concurrent.ScheduledExecutorService`, of course. However, it would be best to have a scheduling service available in `scala.concurrent`. The Akka scheduler would work great; however, it uses a few Java classes (in `akka.util.internal`) licensed under Apache 2.0.

* A convenience method to avoid explicit use of (promises and) the scheduling service:

/** A future that fails if `this` future is not completed in time. */
def within(timeout: Duration): Future[T]

* Better integration of `scala.util.Try` and `scala.util.control.Exception`.

* Locals. These are used extensively in Twitter's Finagle [1], and can be used for tracing, debugging, and high-level exception handling abstractions (such as Twitter's monitors). You can think of Locals as more flexible thread-locals: instead of managing state per thread, a Local can be managed per chain of delayed computations when using futures.

* Twitter-style monitors. (If time constraints permit.) These build on top of Locals.
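To make the `within` bullet above concrete, here is a hedged sketch of how it could be built from a promise plus a `java.util.concurrent.ScheduledExecutorService` fallback. The standalone object and the helper's shape are illustrative assumptions, not the proposed API:

```scala
import java.util.concurrent.{Executors, TimeUnit}
import scala.concurrent.{ExecutionContext, Future, Promise, TimeoutException}
import scala.concurrent.duration.Duration

object WithinSketch {
  // Fallback timer; the proposal is to ship a proper scheduling service
  // in scala.concurrent instead of allocating a thread like this.
  private val timer = Executors.newSingleThreadScheduledExecutor()

  /** A future that fails with a TimeoutException if `f` is not completed in time. */
  def within[T](f: Future[T], timeout: Duration)(implicit ec: ExecutionContext): Future[T] = {
    val p = Promise[T]()
    val timeoutTask = timer.schedule(new Runnable {
      def run(): Unit = p.tryFailure(new TimeoutException(s"not completed within $timeout"))
    }, timeout.toMillis, TimeUnit.MILLISECONDS)
    f.onComplete { result =>
      timeoutTask.cancel(false) // whichever side settles first wins the race
      p.tryComplete(result)
    }
    p.future
  }
}
```

The `tryComplete`/`tryFailure` pair makes the race between completion and timeout benign: exactly one of the two settles the promise.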

Cheers,
Philipp

[1] https://github.com/twitter/util/blob/master/util-core/src/main/scala/com/twitter/util/Local.scala

√iktor Ҡlang

Nov 27, 2012, 9:42:43 AM
to <scala-sips@googlegroups.com>
Hi Philipp!

It would be very interesting to have a Scheduler implemented as a recursive runnable, so it can run on top of an ExecutionContext instead of having to allocate its own thread.
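The idea could look roughly like this (a sketch only: the class and method names are made up, and the eager re-submission would need a back-off or parking strategy before it was usable on a shared pool):

```scala
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.atomic.AtomicBoolean
import scala.concurrent.ExecutionContext

final class RecursiveScheduler(ec: ExecutionContext) {
  private final case class Timed(deadlineNanos: Long, task: Runnable)
  private val queue   = new ConcurrentLinkedQueue[Timed]()
  private val running = new AtomicBoolean(false)

  def scheduleOnce(delayMillis: Long)(task: => Unit): Unit = {
    queue.add(Timed(System.nanoTime() + delayMillis * 1000000L, new Runnable {
      def run(): Unit = task
    }))
    // Start the tick loop only if it isn't already running.
    if (running.compareAndSet(false, true)) ec.execute(tick)
  }

  // The "recursive runnable": it runs due tasks, then re-submits itself to
  // the ExecutionContext while work remains, so it never owns a thread.
  private val tick: Runnable = new Runnable {
    def run(): Unit = {
      val now = System.nanoTime()
      val it  = queue.iterator()
      while (it.hasNext) {
        val t = it.next()
        if (t.deadlineNanos <= now) { it.remove(); t.task.run() }
      }
      if (!queue.isEmpty) ec.execute(this)
      else {
        running.set(false)
        // Re-check to avoid a lost wakeup if a task arrived just now.
        if (!queue.isEmpty && running.compareAndSet(false, true)) ec.execute(this)
      }
    }
  }
}
```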

"* Better integration of `scala.util.Try` and `scala.util.control.Exception`." <-- What does that mean in practice?

I think we have some combinators that could be added to "object Future", like "retry" etc.
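A `retry` of that kind might be sketched like so (the name and the free-standing placement are assumptions; the suggestion is to host such combinators on `object Future`):

```scala
import scala.concurrent.{ExecutionContext, Future}

// Re-run an asynchronous operation until it succeeds or the attempts are
// exhausted; the most recent failure is propagated.
def retry[T](maxAttempts: Int)(op: () => Future[T])(implicit ec: ExecutionContext): Future[T] =
  op().recoverWith {
    case _ if maxAttempts > 1 => retry(maxAttempts - 1)(op)
  }
```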

Something Havoc and I experimented with ended up in Akka 2.1, and it has proven _very_ nice from a performance perspective: https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/dispatch/BatchingExecutor.scala
(It also integrates with BlockContext, so if managed blocking is done, it will schedule the remaining batch to run on another thread.)

I'd also want to upgrade to the latest JSR166 work by Doug with the commonPool, then we can potentially delegate the ExecutionContext.global to that one.

I think it would be nice to open a SIP-14B as a natural extension to SIP-14.

More suggestions?

Cheers,
--
Viktor Klang

Akka Tech Lead
Typesafe - The software stack for applications that scale

Twitter: @viktorklang

Chris Marshall

Nov 27, 2012, 9:57:31 AM
to scala...@googlegroups.com
My experience of writing scheduling DSLs has given me the insight that they are closely aligned with datetime representations. For example,

  schedule(work) onceAtNext MIDNIGHT

where MIDNIGHT is a time of day, is only useful if you have a TimeOfDay class. Doing the legwork of calculating the delay until the next midnight yourself is fine, but it obscures the intent of your code. This is of course a royal pain because there is no *standard* Java/Scala datetime representation (well, there's no good one, anyway).

I wrote a lightweight scheduling DSL called foil, which uses type classes to remove the dependence on the underlying datetime libraries (you can then provide Joda instances, JSR-310 instances, etc.). It's on GitHub and is described in more detail here: https://github.com/oxbowlakes/foil/wiki/5-minutes-of-your-attention.

Ultimately you get to write code like this:

  schedule(work) immediatelyThenEvery 2.hours untilNext TWO_AM

The same code then works irrespective of whether you're using Joda, JSR-310, or whatever else under the hood.
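The type-class trick can be sketched like this (every name here is an illustrative invention, not foil's actual API): the scheduler asks only for "millis until the next occurrence", and each datetime library supplies an instance that computes it.

```scala
// The only capability the scheduler needs from a wall-clock concept.
trait DelayUntil[T] {
  def millisUntilNext(t: T, nowMillis: Long): Long
}

final case class TimeOfDay(hour: Int, minute: Int)

object DelayUntil {
  private val DayMs = 24L * 60 * 60 * 1000

  // A library-free instance for the toy TimeOfDay above. It relies on the
  // Unix epoch being aligned to midnight UTC.
  implicit val timeOfDay: DelayUntil[TimeOfDay] = new DelayUntil[TimeOfDay] {
    def millisUntilNext(t: TimeOfDay, nowMillis: Long): Long = {
      val target        = (t.hour * 60L + t.minute) * 60 * 1000
      val sinceMidnight = nowMillis % DayMs
      val d             = target - sinceMidnight
      if (d > 0) d else d + DayMs // already passed today: wait for tomorrow
    }
  }
}

def delayUntilNext[T](t: T, nowMillis: Long)(implicit ev: DelayUntil[T]): Long =
  ev.millisUntilNext(t, nowMillis)
```

Instances for Joda or JSR-310 types would live alongside those libraries, keeping the scheduler itself free of any datetime dependency.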

Chris

√iktor Ҡlang

Nov 27, 2012, 10:13:11 AM
to <scala-sips@googlegroups.com>
Hi Chris,

I'd love to add a Scheduler infrastructure that can be used as the "backend" for external libraries that want to have the DSLs and the date handling – potentially targeting different Date/Time libraries.

The Scheduler would be wall-time agnostic and deal only in durations, so "foil" would still be applicable on top of it.

Makes sense?

Cheers,

Chris Marshall

Nov 27, 2012, 10:31:42 AM
to scala...@googlegroups.com
Yes - that's exactly right. You can't quite get away without durations (I call them Intervals), but all the rest can be provided by type classes.

C

Rich Oliver

Nov 27, 2012, 5:38:25 PM
to scala...@googlegroups.com
I'm pretty new to Scala, but if there is a 2.10.1, shouldn't it be reserved for fixing things? Personally, I'm not in favour of binary compatibility for second-order releases, though it would make sense for third-order ones. New functionality should be saved for 2.11.

Philipp Haller

Nov 29, 2012, 6:04:51 AM
to scala...@googlegroups.com
Hi Rich,

I think it's important to distinguish between the compiler and the library. One thing that can't be broken in a minor release like 2.10.1 is binary compatibility. That applies to both the compiler and the library, of course.

However, it's possible to add functionality to the library without breaking binary compatibility. (The Java Language Specification has a section devoted to changes that are binary compatible which is quite useful.)

On several occasions the SIP-14 team has discussed putting together a SIP-14B for major new features. So, I think the best route would be to come up with a SIP-14B which includes "big additions", but consider smaller additions for 2.10.1 and 2.10.2.

I don't think we should artificially restrict uncontroversial improvements. (Consider the fact that VisualVM was first bundled with JDK 6 update 7; without doubt it was good to ship it then rather than delaying until JDK 7.)

Cheers,
Philipp

Philipp Haller

Nov 29, 2012, 7:29:35 AM
to scala...@googlegroups.com
> I don't think we should artificially restrict uncontroversial improvements. (Consider the fact that VisualVM was first bundled with JDK 6 update 7; without doubt it was good to ship it then rather than delaying until JDK 7.)

I realize that VisualVM is not the best example, so let's not get distracted by it.

Ismael Juma

Nov 29, 2012, 9:12:06 AM
to scala...@googlegroups.com
Hi all,

For reference, new Java APIs are _not_ allowed in Java updates (i.e. Java 7 update 1, 2...). On the other hand, new tools (like VisualVM), performance improvements (like a brand new HotSpot) and new implementation-specific things (new HotSpot switches, or private APIs) are allowed.

The idea is that if you stick to Java APIs, you can compile with any JDK of the same major version without causing incompatibilities. If new APIs could be added to Java 7 update 2, for example, it would mean that someone who used them (perhaps inadvertently) would cause their library not to work with Java 7 update 1.

Maybe we want something else for Scala, but we need to think of the ramifications and the recommendations for library authors (maybe they always stick to .0 to give flexibility to library users).

Best,
Ismael

Paul Phillips

Dec 1, 2012, 2:27:56 AM
to scala...@googlegroups.com


On Thursday, November 29, 2012, Ismael Juma wrote:
> Maybe we want something else for Scala, but we need to think of the ramifications and the recommendations for library authors (maybe they always stick to .0 to give flexibility to library users).

If the only problematic aspect of adding API in a point release is people compiling against it and losing compatibility with earlier point releases, we could also annotate new elements and require a compiler option to use them from source.

Ismael Juma

Dec 3, 2012, 4:08:51 PM
to scala...@googlegroups.com
On Sat, Dec 1, 2012 at 7:27 AM, Paul Phillips <pa...@improving.org> wrote:
> If the only problematic aspect of adding API in a point release is people compiling against it and losing compatibility with earlier point releases, we could also annotate new elements and require a compiler option to use them from source.

True, that's an interesting approach.

Ismael 

Jason Zaugg

Dec 8, 2012, 11:32:53 AM
to scala...@googlegroups.com
Or, we could offer a MiMa-like tool that inspects your code and checks that it only calls methods defined in 2.10.0. This wouldn't rely on us remembering to mark new methods as experimental.

-jason

Paul Phillips

Dec 8, 2012, 10:19:58 PM
to scala...@googlegroups.com
On Sat, Dec 8, 2012 at 8:32 AM, Jason Zaugg <jza...@gmail.com> wrote:
> Or, we could offer a MiMa-like tool that inspects your code and checks that it only calls methods defined in 2.10.0. This wouldn't rely on us remembering to mark new methods as experimental.

Unless the new tool is integrated with the compiler and run by default, this means point releases are NOT binary compatible by default, which I think would be a big mistake. If binary compatibility is worth having, the default settings must lead to it. I would rather count on us remembering to mark new methods experimental (and we don't have to "remember" - WE can use the MIMA-like tool ourselves!) than burden every user of the compiler with extra steps needed to protect binary compat.
 

Anton Kolmakov

Jan 5, 2014, 12:43:34 AM
to scala...@googlegroups.com
I would really love to see all these features. Is anyone working on this or is it dead?

Ryan LeCompte

Jan 5, 2014, 12:46:55 AM
to scala...@googlegroups.com
+1
> --
> You received this message because you are subscribed to the Google Groups
> "scala-sips" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to scala-sips+...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.

Alex Boisvert

Jan 5, 2014, 8:57:04 AM
to scala...@googlegroups.com

+1 ... People who decide to use Scala futures today have to roll their own set of helpers to deal with timeouts and such. Most just use Akka's or Twitter's.

√iktor Ҡlang

Jan 5, 2014, 9:06:21 AM
to <scala-sips@googlegroups.com>

Create an extension to SIP-14 and invite people to participate in it. I'll chime in for sure.

Cheers,
V

charlie robert

Jan 5, 2014, 9:16:48 AM
to scala...@googlegroups.com
Those are good features.  It would be wonderful to gain a Scala implementation of Cap’n Proto, level 4, with 3-way promise pipelining and distributed equality.  Would Cap’n Proto fit with the current future/promise semantics, or could those semantics be changed or extended to offer these capabilities to the Scala community?


- charlie




charlie robert

Jan 5, 2014, 9:27:15 AM
to scala...@googlegroups.com, capnproto
Adding the capnproto list, to see if a small fire can be started to bring more light.

Anton Kolmakov

Jan 5, 2014, 11:00:28 AM
to scala...@googlegroups.com
Agreed. We could also start with a collection of suggestions for the extension, or just create the SIP from what Philipp suggested and then evolve it.

√iktor Ҡlang

Jan 5, 2014, 11:34:28 AM
to <scala-sips@googlegroups.com>
I think any and all specific vendor integrations should live in the community, hence not SIP material.

Cheers,

Viktor Klang

Director of Engineering

Twitter: @viktorklang

charlie robert

Jan 5, 2014, 11:36:25 AM
to scala...@googlegroups.com
I don’t follow what you are saying.

- charlie

√iktor Ҡlang

Jan 5, 2014, 11:51:07 AM
to <scala-sips@googlegroups.com>
" It would be wonderful to gain a scala implementation of Cap’n Proto," <-- sounds like a specific integration to me? I don't see why that should be in the Scala Standard Library.

Anton Kolmakov

Jan 5, 2014, 12:09:27 PM
to scala...@googlegroups.com
To me, it sounds like Rob wants to have something like this, not an exact integration. But either way, I do not see it as part of futures & promises, because Cap’n Proto is about RPC, isn't it?

charlie robert

Jan 5, 2014, 3:22:46 PM
to scala...@googlegroups.com
On Jan 5, 2014, at 9:51 AM, √iktor Ҡlang <viktor...@gmail.com> wrote:

" It would be wonderful to gain a scala implementation of Cap’n Proto," <-- sounds like a specific integration to me? I don't see why that should be in the Scala Standard Library.


That is a reasonable argument to make about a vendor integration, but this is not necessarily such.  I would point out that there are two layers to Cap’n Proto.

There is the encoding.  This is nice to have, for sure, and it would be an interesting exercise for a vendor integration.  It is just one more message structure specification, but quite interesting.  It is firmly in the session layer.

There is also the RPC protocol specification.  As long as there is support for specifying the encoding in the startup protocol, when rendezvous occurs, the implementation of the underlying encoding can be varied.  The same is true of many of the details of coordination in the session layer, like encryption and protocol version.

However, there is also the upper layer of the RPC protocol, which is the remote object refs, object tables, and promises.  It has the execution semantics of an event loop, but with the ability to wait and pause a continuation.  Placing that presentation layer on top of a negotiated session is not a vendor-specific thing.  It is supporting a common protocol standard.  It was derived from Elib.

The whole point of distributed event loops and promise pipelining is to spread the stack of execution between event loop queues, between processes.  This is interesting.  The only things really needed are the ability to send to a promise and to use them as typed arguments in calls.  We also need the ability to mutate, but that is achievable in Scala, I think, using a for-comprehension.

On Jan 5, 2014, at 10:09 AM, Anton Kolmakov <an...@kolmakov.me> wrote:

> To me, it sounds like Rob wants to have something like this, not an exact integration. But either way, I do not see it as part of futures & promises, because Cap’n Proto is about RPC, isn't it?

The capabilities this standard offers are quite powerful and not present in Scala.  What is the use of futures/promises with no RPC between processes?  Aren’t you wanting to build a promise library to help with concurrent processing?  Is it useful to have concurrent processing with no distribution?

- charlie (now preferred, as I buried the old me, dead and gone)

Anton Kolmakov

Jan 6, 2014, 2:22:03 AM
to scala...@googlegroups.com
> The capabilities this standard offers are quite powerful and not present in Scala.  What is the use of futures/promises with no RPC between processes?  Aren’t you wanting to build a promise library to help with concurrent processing?  Is it useful to have concurrent processing with no distribution?

In general, they are used for asynchronous computations; it does not matter where those happen, locally or remotely. For example, you can use Akka without any communication with the outside world. Another example is Finagle; there is no sense in using it within the scope of a single process. All these examples have one thing in common: they are all built on top of futures. They are not trying to extend them; they are trying to use them to build much more complex frameworks. That requires futures and promises to be quite simple, easy to use, and without a strict contract that forces others to do things in only one "right" way.

The current simplicity of the futures is fine, but as we can see, Twitter, Akka, and others have a lot of tasty stuff which could also become part of Scala's Futures and Promises.

> The only things really needed are the ability to send to a promise and to use them as typed arguments in calls.

Could you show an example of this? 

charlie robert

Jan 6, 2014, 9:56:07 AM
to scala...@googlegroups.com
On Jan 6, 2014, at 12:22 AM, Anton Kolmakov <an...@kolmakov.me> wrote:

>> The capabilities this standard offers are quite powerful and not present in Scala.  What is the use of futures/promises with no RPC between processes?  Aren’t you wanting to build a promise library to help with concurrent processing?  Is it useful to have concurrent processing with no distribution?

> In general, they are used for asynchronous computations; it does not matter where those happen, locally or remotely. For example, you can use Akka without any communication with the outside world. Another example is Finagle; there is no sense in using it within the scope of a single process. All these examples have one thing in common: they are all built on top of futures. They are not trying to extend them; they are trying to use them to build much more complex frameworks. That requires futures and promises to be quite simple, easy to use, and without a strict contract that forces others to do things in only one "right" way.

> The current simplicity of the futures is fine, but as we can see, Twitter, Akka, and others have a lot of tasty stuff which could also become part of Scala's Futures and Promises.

Thanks for explaining, that makes complete sense.  I am not so familiar with all of these features, so I did not know that Akka and other frameworks all use Scala futures and promises.  So the Scala Standard Library's support for futures and promises is a generalized asynchronous framework used by many implementations.

Please pardon the length of the description below; it's complex.  If you are intrigued by all of this, I would suggest spending time carefully reading the Cap’n Proto docs on RPC (http://kentonv.github.io/capnproto/rpc.html), reading the pages on Elib at erights.org (http://erights.org/elib/index.html), and of course looking at an implementation.

I do know that the semantics of promises in Elib are different from the semantics of Scala futures and promises, although I am not sure how well I can describe it.  Please correct my understanding if it is faulty.  A Scala future holds the result that is eventually set and is used on the calling side, so it is a read object.  A Scala promise allows the result to be set and is used on the called side, so it is a write object.  They are the two ends of the continuation of the result from the async computation.  Communication from the future to the promise is not allowed.
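That description of the Scala side can be shown in a few lines (standard scala.concurrent API, nothing assumed):

```scala
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

val p = Promise[Int]() // write side: held by whoever performs the computation
val f = p.future       // read side: handed out to callers
p.success(42)          // the single permitted write
assert(Await.result(f, 1.second) == 42)
// f exposes no way to complete or signal p: the channel is one-directional.
```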

Elib’s promises are different.  They are one type of eventual reference (actually two) that mutate when resolution occurs.  The other types of eventual references are NearRef, FarRef, and one of {DisconnectedRef, UnconnectedRef, BrokenRef} for partitions and exceptions.  The two types of promises are PromiseRef and RemotePromiseRef, for a local versus a remote promise.  Under the covers they encapsulate the machinery for the two ends of the continuation.  But they are bi-directional.

Resolution: the result of a message-send computation is sent to the resolver of the promise, which mutates the PromiseRef into a resolved Ref (Near, Far, Broken) or another promise.  The resolver is the equivalent of Scala's promise.  In the implementation I did, closely following Elib, the MessageSend in the called vat has a remote ref to the Resolver of the calling vat.  The immediate result of the computation is sent to this remote resolver.

Promise pipelining: message sends can be sent to the PromiseRef and forwarded to the vat of the computation.  When the immediate result is obtained from the initial message send, it resolves the remote resolver and then sends the queued eventual sends on the PromiseRef to the result ref.  That this happens in the called vat means, in the two-vat case, that 1) we save a network hop and so reduce latency in the computation, and 2) we prepare the semantics to support mobile code.  In the three-vat case, where the result of a previous eventual computation is used in a send to a third vat, the result is sent directly to the third vat from the second vat (where the first computation occurred), saving two network hops and further managing the latency of the computation.  So Elib does not really have the equivalent of a Scala future, as it is implicit in having a promise ref.

A couple of points here.  The PromiseRef is a reference to the eventual result.  When the resolver is resolved, the ref is mutated from a promise into the resolved value.  The ability to mutate refs is a difficulty on the JVM and its languages; the JVM does not support this, as it is considered a security issue.  The other point concerns statically typed languages, as the promise ref ought to be typed both as a promise and as the resulting type.  In this way the compiler would be happy with subsequent sends to the promise ref.  Java can't do these things.  The refs that mutate have to be wrapped in a proxy, and the proxy is not typed appropriately.  Groovy can handle the typing issues, as it is a dynamic language.

My gut tells me that with the use of a for-comprehension and type inference, Scala can address both of these issues.  The for-comprehension is a sort of computation boundary that allows the refs to mutate.

Issues aside, Scala futures and promises provide a different semantic than Elib's promises and resolvers.  I suppose I was thinking you all might be interested in providing alternate promises with the correct semantics for Elib-style eventual sending.  Whether this semantic is provided by your promise framework or as a separate effort, support for Elib-style semantics must be inside the language, for mutation and type inference.  This is where my evangelism collides with my knowledge and ability.


>> The only things really needed are the ability to send to a promise and to use them as typed arguments in calls.

> Could you show an example of this?

I would recommend downloading and exploring the reference implementation of these ideas at http://erights.org.  Cap’n Proto has a calculator sample (https://github.com/kentonv/capnproto/blob/master/c++/samples/calculator-client.c++), though I am unsure whether it yet supports pipelining.

In my Java implementation of these ideas, which fails on the above two points (mutation and typing), I can do 3-way sends, with pipelining:

      Ref answer = alice.redirectMessage("redirectForTheAnswer", bob).redirectMessage("hashCode");

This sends to alice with bob as an argument.  The method redirectForTheAnswer forwards a send to bob for the method getTheAnswer.  The hashCode send is a pipelined send and is forwarded to alice, to be sent when the result resolves over there.  Since the pipelined send is to the result of redirectForTheAnswer, it does not get forwarded to bob.  Note that due to the typing issue, I have to specify a Ref as the parameter type of the method redirectForTheAnswer.

Thank you,
charlie






Anton Kolmakov

Jan 6, 2014, 10:13:09 AM
to scala...@googlegroups.com
> ... So I did not know Akka and other frameworks all use Scala futures and promises.  So the Scala Standard Library support for futures and promises are a generalized asynchronous framework used by many implementations.

Just a small clarification: they do not use the futures from the Scala Standard Library. Akka is using its own version of them, and Twitter is doing the same. The version of futures and promises in Scala 2.10 is based on the Akka ones.
Such fragmentation happened because Scala only got futures in version 2.10 (as I recall). Before that, many developers had to do it on their own.

I would like to have in the standard library the best things from the different implementations, in the hope that it will make others switch to the standard version.

Ivan Topolnjak

Jan 6, 2014, 10:27:48 AM
to scala...@googlegroups.com
Anton, just a small addition: as of Akka 2.1, Akka uses the standard library futures and no longer packs its own implementation. The only custom stuff related to futures you will find there is a Java-friendly wrapper to make Java devs' lives a little less painful. Regards.

charlie robert

Jan 6, 2014, 10:31:09 AM
to scala...@googlegroups.com
The thing about Elib that really sticks with me is that it stretches the stack of execution between two or more event loops, using send queues.  What allows the stretching of the call stack is the support for remote continuations and the allowance of promise pipelining.  I feel this is a fundamental, transcendent change in the execution environment, with many benefits, and I would love to see Scala support for this on the JVM.  Unfortunately, I don't have the chops to pull it off.

√iktor Ҡlang

Jan 6, 2014, 11:39:35 AM
to <scala-sips@googlegroups.com>
Use Actors?

charlie robert

Jan 6, 2014, 11:45:24 AM
to scala...@googlegroups.com
I am not sure what you are referring to.  Do you have a link?

- charlie

√iktor Ҡlang

Jan 6, 2014, 11:46:59 AM
to <scala-sips@googlegroups.com>

charlie robert

Jan 6, 2014, 11:55:25 AM
to scala...@googlegroups.com
I see.  A general recommendation, where Akka is one implementation of an actor model in Scala.  However, Akka does not support promise pipelining, so it does not have Elib's optimization characteristics for dealing with latency.  It does not stretch the execution environment in the way Elib does.  Elib is itself an actor model, plus continuations and promise pipelining.  Is there another actor-model implementation, in Scala, which is closer to Elib's mark?

Thank you,
- charlie

√iktor Ҡlang

Jan 6, 2014, 12:13:38 PM
to <scala-sips@googlegroups.com>
On Mon, Jan 6, 2014 at 5:55 PM, charlie robert <charlie...@icloud.com> wrote:
> I see.  A general recommendation, where Akka is one implementation of an actor model in Scala.  However, Akka does not support promise pipelining,

Of course it does: you get "promise pipelining" if you use Actors instead of Futures.
 
> so it does not have the optimization characteristics of dealing with latency in the way Elib does.

Example?
 
> It does not stretch the execution environment in the way Elib does.  Elib is itself an actor model, plus continuations and promise pipelining.  Is there another actor model implementation, in scala, which is closer to Elib's mark?

Well that was my point, you have a great testbed to implement whatever you want on top of Akka's actors, you get remoting, clustering, distributed fault detection etc for "free".

Cheers,

charlie robert

Jan 6, 2014, 2:13:39 PM
to scala...@googlegroups.com
On Jan 6, 2014, at 10:13 AM, √iktor Ҡlang <viktor...@gmail.com> wrote:




On Mon, Jan 6, 2014 at 5:55 PM, charlie robert <charlie...@icloud.com> wrote:
>> I see.  A general recommendation, where Akka is one implementation of an actor model in Scala.  However, Akka does not support promise pipelining,

> Of course it does: you get "promise pipelining" if you use Actors instead of Futures.

I just read the Scala Actors tutorial and I do not see support for pipelining.  Can a message be sent to an unresolved future in Akka?  If not, then there is no pipelining and no stretched stack.  This misses the core feature.

 
>> so it does not have the optimization characteristics of dealing with latency in the way Elib does.

> Example?

Please look at the Cap'n Proto link on RPC and look at the topmost diagram.

 
>> It does not stretch the execution environment in the way Elib does.  Elib is itself an actor model, plus continuations and promise pipelining.  Is there another actor model implementation, in scala, which is closer to Elib's mark?

> Well that was my point, you have a great testbed to implement whatever you want on top of Akka's actors, you get remoting, clustering, distributed fault detection etc for "free".

The issue is the lack of pipelining semantics, so Akka will not work as a test bed, it seems.  Not to say Akka is not wonderful tech, but not in this regard.

Cheers,
charlie

√iktor Ҡlang

Jan 6, 2014, 2:31:17 PM
to <scala-sips@googlegroups.com>
On Mon, Jan 6, 2014 at 8:13 PM, charlie robert <charlie...@icloud.com> wrote:
On Jan 6, 2014, at 10:13 AM, √iktor Ҡlang <viktor...@gmail.com> wrote:




On Mon, Jan 6, 2014 at 5:55 PM, charlie robert <charlie...@icloud.com> wrote:
>>> I see.  A general recommendation, where Akka is one implementation of an actor model in Scala.  However, Akka does not support promise pipelining,

>> Of course it does: you get "promise pipelining" if you use Actors instead of Futures.

> I just read the Scala Actors tutorial and I do not see support for pipelining.  Can a message be sent to an unresolved future in Akka?  If not, then there is no pipelining and no stretched stack.  This misses the core feature.

Actors _instead of_ Futures. The "stretched stack" seems less like a feature and more like a gaping security hole (getting someone else to call logic on your behalf). You can get a limited version of that by pipelining, which is supported since ActorRefs (identities) are reified.


 
so it does not have the optimization characteristics of dealing with latency in the way Elib does.

Example?

Please look at the Cap'n Proto link on RPC and look at the topmost diagram.

Looks like too much of a closed-world-assumption to me, but I haven't done more than read the "time travel" page of the docs.
 

 
 It does not stretch the execution environment in the way Elib does.  Elib is itself an actor model, plus continuations and promise pipelining.  Is there another actor model implementation in Scala that is closer to Elib's mark?

Well that was my point, you have a great testbed to implement whatever you want on top of Akka's actors, you get remoting, clustering, distributed fault detection etc for "free".

The issue is the lack of pipelining semantics, so Akka will not work as a test bed, it seems.  Not to say Akka is not wonderful tech, but not in this regard.

What makes you say that? You can propagate whatever ActorRef as the sender/reply-to target, and you can encode whatever protocol you want (the actor model is a universal model of computation).

Cheers,

charlie robert

unread,
Jan 6, 2014, 3:10:59 PM1/6/14
to scala...@googlegroups.com
It comes down to the question of whether Akka allows a message to be sent remotely to the pending resolution of a previous send.  Can Akka's actors or futures support this mechanism of computation?  If the answer is no, then Akka does not support promise pipelining.

A stretched stack lives in the context of an object-capability system, so the security hole is addressed.  You can't do it without promise pipelining.

Regards,
- charlie

√iktor Ҡlang

unread,
Jan 6, 2014, 3:42:51 PM1/6/14
to <scala-sips@googlegroups.com>
On Mon, Jan 6, 2014 at 9:10 PM, charlie robert <charlie...@icloud.com> wrote:
It comes down to the question of whether Akka allows a message to be sent remotely to a pending resolution of a previous send.  

send is asynchronous. I think you are confusing syntax with semantics.
 
Can Akka's actors or futures support this mechanism of computation?  If the answer is no, then Akka does not support promise pipelining.

See above.
 

A stretched stack lives in the context of an object-capability system, so the security hole is addressed.  You can't do it without promise pipelining.

Ah, good!
However, I get an uncanny feeling that this is a local model being stretched into a distributed model, rather than vice versa.

Cheers,

Alex Boisvert

unread,
Jan 6, 2014, 3:58:07 PM1/6/14
to scala...@googlegroups.com
In the spirit of getting the conversation back to Philipp's original post and scheduler integration, I wanted to share a quick overview of how we've "extended" the standard Scala futures in our work environment. I figure these can be used as motivating use-cases for anyone wanting to push a SIP forward.

The library uses a pluggable Timer service (by default based on ScheduledExecutorService) and uses scala.concurrent.duration.Duration for time units. I could share the code (under an ASL v2 license) if there's interest.

Here's the quick README project description:

README

Scala Futures really are awesome, yet they still have a few shortcomings (as of Scala 2.10).

In particular, they are still missing a few combinators that 1) handle timeouts/deadlines in a non-blocking fashion without having to resort to the blocking Await.result and 2) easily allow you to either ignore or deal with upstream failures.

To address these, we created a new scala-future-utils project that hosts a few combinators we needed for [SYSTEM-X] and deemed general enough.

In the standard library, you'll find Future.firstCompletedOf(futures) as well as find, fold and reduce, which are useful if you want failure propagation.

But in [SYSTEM-X], we don't care so much about complete failure propagation. We often want to provide the best response possible within a reasonable amount of time — not fail if one or more of the classifiers fail.

Here are a few illustrative examples of the new combinators …

On individual futures:

// raise a TimeoutException after some duration
future timeoutAfter (100 millis)
// default to some value after some duration
future defaultAfter (100 millis, defaultValue)
// fail with a custom exception after some duration
future failAfter (100 millis, new MyException)
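For anyone curious how a combinator like timeoutAfter can stay non-blocking, here is a hedged sketch on top of a plain ScheduledExecutorService. The names and signatures below are my assumptions for illustration, not Alex's actual scala-future-utils code:

```scala
import java.util.concurrent.{Executors, TimeUnit, TimeoutException}
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object FutureTimeouts {
  // Single shared timer thread; a real library would make this pluggable.
  private val timer = Executors.newSingleThreadScheduledExecutor()

  implicit final class RichFuture[T](val self: Future[T]) {
    // Fail with TimeoutException if `self` has not completed within `timeout`,
    // without ever blocking a thread on Await.result.
    def timeoutAfter(timeout: FiniteDuration): Future[T] = {
      val p = Promise[T]()
      val task = timer.schedule(new Runnable {
        def run(): Unit =
          p.tryFailure(new TimeoutException(s"not completed within $timeout"))
      }, timeout.toMillis, TimeUnit.MILLISECONDS)
      // Whichever completes first wins; cancel the timer on normal completion.
      self.onComplete { result => task.cancel(false); p.tryComplete(result) }
      p.future
    }

    // Fall back to `default` instead of failing on timeout.
    def defaultAfter(timeout: FiniteDuration, default: T): Future[T] =
      timeoutAfter(timeout).recover { case _: TimeoutException => default }
  }
}
```

The key design point is racing the original future against a scheduled failure via Promise.tryComplete/tryFailure, so only the first completion counts.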

On groups of futures:

// ignores failures in futures -- only consider timeout/deadline
reducing(futures) { (x1, x2) => ... reduce ... } timeoutAfter (100 millis)
reducing(futures) { (x1, x2) => ... reduce ... } failAfter (100 millis, new MyException)
reducing(futures) { (x1, x2) => ... reduce ... } defaultAfter (100 millis, defaultValue)

// returns whatever we have computed after duration, or NoResultException if we didn't get
// at least two successful completions before that time.
reducing(futures) { (x1, x2) => ... reduce ... } forceAfter (100 millis)
And similarly for folding,

// ignores failures in futures -- only consider timeout/deadline
folding(futures, initialValue) { (acc, t) => ... fold ... } timeoutAfter (100 millis)
folding(futures, initialValue) { (acc, t) => ... fold ... } failAfter (100 millis, new MyException)

// there is no `defaultAfter` for folding since you have to provide an `initialValue` and you
// can use `forceAfter`

// returns whatever we have computed after duration
folding(futures, initialValue) { (acc, t) => ... fold ... } forceAfter (100 millis)

There is also a general tryFold method that allows incrementally handling both Success/Failure of underlying futures as they come,

tryFold(futures, initialValue) {
  case (acc, Success(t)) => ... fold ...
  case (acc, Failure(t)) => ... fold ...
} forceAfter (100 millis)   // also supports timeout/failAfter

tryFold is pretty flexible and allows expressing more complex patterns, such as success-after-X-successes or fail-after-X-failures … those kinds of things.
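A hedged reconstruction of what such a tryFold might look like, not the actual scala-future-utils code: for simplicity this version folds in declaration order rather than completion order, and omits the forceAfter/timeoutAfter machinery.

```scala
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.util.Try
import scala.concurrent.ExecutionContext.Implicits.global

object TryFoldSketch {
  // Fold over the Try outcome of each future, so failures feed the fold
  // instead of short-circuiting it (unlike Future.fold in the stdlib).
  def tryFold[T, R](futures: Seq[Future[T]], initial: R)
                   (op: (R, Try[T]) => R)(implicit ec: ExecutionContext): Future[R] =
    futures.foldLeft(Future.successful(initial)) { (accF, f) =>
      accF.flatMap { acc =>
        val p = Promise[R]()
        f.onComplete(t => p.complete(Try(op(acc, t))))  // Success and Failure alike
        p.future
      }
    }
}
```

This is enough to express patterns like "sum the successes, ignore the failures", which Future.fold cannot do because any failed input fails the whole fold.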


charlie robert

unread,
Jan 6, 2014, 9:01:22 PM1/6/14
to scala...@googlegroups.com
On Mon, Jan 6, 2014 at 9:10 PM, charlie robert <charlie...@icloud.com> wrote:
It comes down to the question of whether Akka allows a message to be sent remotely to a pending resolution of a previous send.  

send is asynchronous. I think you are confusing syntax with semantics.

How did syntax come into it?   Certainly you cannot do promise pipelining with synchronous sends.  In my previous example, I basically did:

val resultHashCode = (alice ! redirectForTheAnswer(bob)) ! hashCode

if ActorRefs returned ActorRefs wrapping a future, which resolves later.  Hey, that is actually a fair way to model it, except they would be EventualActorRefs and we would change the semantics of !.  Let args explicitly be EventualActorRefs.  This seems like a workable interface, doable in a testbed, don't you think?

The point is that the send for hashCode is sent to the remote location, where redirectForTheAnswer is computed, to the result EventualActorRef, before it resolves.  Both of these messages are sent in the same packet, because sends are asynchronous and sending is buffered (10 ms?).  When that future resolves, the hashCode send goes to the result, but it is done in alice's vat, as the send is already pending there.  This drops a 2-way network hop (in time interval, but all still resolves unless GCed), pipelines both message sends in the same packet to alice, and reduces latency.  And the happy path is that all eventual sends resolve, eventually.  Because you can forward a message send along to the result continuation before it is resolved, the stack is stretched.

This sort of computing capability can extend past a simple 2-way interaction.  You can imagine it as a graph of distributed eventual sends, owned by different organizations, to settled refs and unresolved futures, that will eventually complete, maybe.  :)   This is quite interesting to me and fundamentally new.
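The interface charlie describes can be mocked up locally with plain Futures. EventualRef, !*, and settled below are invented names for this sketch, and a local map obviously does not capture the real win of pipelining (batching both sends into one network packet to alice's vat); it only shows that sends against an unresolved target can be queued before resolution:

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// A reference that may not be resolved yet; a "send" (here modeled as a
// function call) is queued against the underlying future via map, so it
// can be issued before the target resolves.
final class EventualRef[A](val future: Future[A]) {
  def !*[B](msg: A => B): EventualRef[B] = new EventualRef(future.map(msg))
}

object EventualRef {
  // A settled ref wraps an already-resolved value.
  def settled[A](a: A): EventualRef[A] = new EventualRef(Future.successful(a))
}
```

With this shape, charlie's example reads as `val resultHashCode = (alice !* redirectForTheAnswer(bob)) !* (_.hashCode)`: the second send is chained onto the first before the first has resolved.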

Thanks for your interest,
- charlie robert




Anton Kolmakov

unread,
Jan 7, 2014, 12:24:20 AM1/7/14
to scala...@googlegroups.com
Thanks for sharing this; I'll see how it could be used in the SIP. If anyone else has ideas, it would be nice to see them here.

√iktor Ҡlang

unread,
Jan 7, 2014, 5:07:15 AM1/7/14
to <scala-sips@googlegroups.com>
On Tue, Jan 7, 2014 at 3:01 AM, charlie robert <charlie...@icloud.com> wrote:
On Mon, Jan 6, 2014 at 9:10 PM, charlie robert <charlie...@icloud.com> wrote:
It comes down to the question of whether Akka allows a message to be sent remotely to a pending resolution of a previous send.  

send is asynchronous. I think you are confusing syntax with semantics.

How did syntax come into it?   Certainly you cannot do promise pipelining with synchronous sends.  In my previous example, I basically did:

val resultHashCode = (alice ! redirectForTheAnswer(bob)) ! hashCode

if ActorRefs returned ActorRefs wrapping a future, which resolves later.  Hey, that is actually a fair way to model it, except they would be EventualActorRefs and we would change the semantics of !.  Let args explicitly be EventualActorRefs.  This seems like a workable interface, doable in a testbed, don't you think?

You should be able to make a protocol that does that, yes!
 

The point is that the send for hashCode is sent to the remote location, where redirectForTheAnswer is computed, to the result EventualActorRef, before it resolves.  Both of these messages are sent in the same packet, because sends are asynchronous and sending is buffered (10 ms?).  When that future resolves, the hashCode send goes to the result, but it is done in alice's vat, as the send is already pending there.  This drops a 2-way network hop (in time interval, but all still resolves unless GCed), pipelines both message sends in the same packet to alice, and reduces latency.  And the happy path is that all eventual sends resolve, eventually.  Because you can forward a message send along to the result continuation before it is resolved, the stack is stretched.

This sort of computing capability can extend past a simple 2-way interaction.  You can imagine it as a graph of distributed eventual sends, owned by different organizations, to settled refs and unresolved futures, that will eventually complete, maybe.  :)   This is quite interesting to me and fundamentally new.

Thanks for your interest,
- charlie robert




--
You received this message because you are subscribed to the Google Groups "scala-sips" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scala-sips+...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Anton Kolmakov

unread,
Jan 13, 2014, 8:42:10 AM1/13/14
to scala...@googlegroups.com

√iktor Ҡlang

unread,
Jan 13, 2014, 9:18:08 AM1/13/14
to <scala-sips@googlegroups.com>
Thanks for taking the time to write that up, commenting...

Maltsev Eduard

unread,
Jan 26, 2014, 12:07:45 AM1/26/14
to scala...@googlegroups.com
If anyone has further suggestions regarding this document or the SIP in general, please feel free to start a discussion.

On Monday, January 13, 2014 at 3:42:10 PM UTC+2, Anton Kolmakov wrote:

Benjamin Jackman

unread,
Feb 5, 2014, 10:28:32 PM2/5/14
to scala...@googlegroups.com
1.
I was having a hard time implementing a custom version of futures that would work in Scala.js. Sebastien already implemented Futures in a different way, I believe,
but I think it pointed out a flaw in the current implementation of the Future trait.

Methods in Future that need to create a promise currently call through to the Promise companion object to create it,
for example:

  def transform[S](s: T => S, f: Throwable => Throwable)(implicit executor: ExecutionContext): Future[S] = {
    val p = Promise[S]()

    onComplete {
      case result =>
        try {
          result match {
            case Failure(t) => p failure f(t)
            case Success(r) => p success s(r)
          }
        } catch {
          case NonFatal(t) => p failure t
        }
    }(executor)

    p.future
  }

val p = Promise[S]() or similar appears in several methods in Future (map, flatMap, etc.)

If someone wants to make a custom implementation of Future and Promise, they have to override
these methods or accept that they will get this Java-dependent implementation of a Promise in their Future:

def apply[T](): Promise[T] = new impl.Promise.DefaultPromise[T]()

I would propose simply adding a protected def makePromise[A]() with a default implementation that calls the current Promise().

Then, if someone wants to define their own Future / Promise pair, they won't have to override all the methods where makePromise is called;
instead they will just have to override makePromise.

  private implicit def internalExecutor: ExecutionContext = Future.InternalCallbackExecutor

has similar issues.
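A hedged sketch of how the proposed hook might look. makePromise is Ben's suggested name, and transformVia is a hypothetical stand-in written against the public API, not the standard library's actual transform:

```scala
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.util.{Failure, Success}
import scala.util.control.NonFatal
import scala.concurrent.ExecutionContext.Implicits.global

// Extension point: custom Future/Promise implementations override only this.
trait PromiseFactory {
  protected def makePromise[A](): Promise[A] = Promise[A]()
}

object Combinators extends PromiseFactory {
  // Same shape as the transform method quoted above, but the promise comes
  // from the overridable factory instead of a hard-coded Promise[S]().
  def transformVia[T, S](fut: Future[T])(s: T => S, f: Throwable => Throwable)
                        (implicit executor: ExecutionContext): Future[S] = {
    val p = makePromise[S]()
    fut.onComplete {
      case Success(r) => try p success s(r) catch { case NonFatal(t) => p failure t }
      case Failure(t) => try p failure f(t) catch { case NonFatal(e) => p failure e }
    }
    p.future
  }
}
```

A Scala.js backend would then extend PromiseFactory and return its own JavaScript-friendly Promise from makePromise, leaving every combinator untouched.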

2. 
I would also suggest lifting all the factory behavior from the Future companion object up into a FutureFactory trait
that those who want to make their own custom Futures can implement. It will need to have the overridable
makePromise method as well.

Also, things like   private[concurrent] object InternalCallbackExecutor extends ExecutionContext with java.util.concurrent.Executor
should be made overridable as well.

This factory also probably needs to be tied into the implementation of the Future trait as an overridable method, e.g.
protected def futureFactory: FutureFactory = Future
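There is no FutureFactory in the standard library, so every name in this sketch is an assumption about what Ben's suggestion could look like:

```scala
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.util.Try
import scala.concurrent.ExecutionContext.Implicits.global

// Factory methods route through the overridable makePromise, so a custom
// Future/Promise pair only has to override one member.
trait FutureFactory {
  protected def makePromise[A](): Promise[A] = Promise[A]()

  def successful[A](a: A): Future[A] = {
    val p = makePromise[A](); p.success(a); p.future
  }

  def apply[A](body: => A)(implicit ec: ExecutionContext): Future[A] = {
    val p = makePromise[A]()
    ec.execute(new Runnable { def run(): Unit = p.complete(Try(body)) })
    p.future
  }
}

// The default factory just uses the standard Promise implementation.
object DefaultFutureFactory extends FutureFactory
```

With this in place, `protected def futureFactory: FutureFactory` on the Future trait could default to the standard factory while letting subclasses swap it out.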

Ben

√iktor Ҡlang

unread,
Feb 8, 2014, 8:50:53 AM2/8/14
to <scala-sips@googlegroups.com>
Hi Ben,

I think an easier solution is to move the default implementations into DefaultPromise altogether. This is binary incompatible, though, so if that is the way we choose to go, it'll have to be for 2.12.
Cheers,

———————
Viktor Klang
Chief Architect - Typesafe

Twitter: @viktorklang