I don't think we should artificially restrict uncontroversial improvements. (Consider the fact that VisualVM was first bundled with JDK 6 update 7; without doubt it was good to ship it then rather than delaying until JDK 7.)
Maybe we want something else for Scala, but we need to think of the ramifications and the recommendations for library authors (maybe they always stick to .0 to give flexibility to library users).
If the only problematic aspect of adding API in a point release is people compiling against it and losing compatibility with earlier point releases, we could also annotate new elements and require a compiler option to use them from source.
Or, we could offer a MiMa-like tool that inspects your code and checks that it only calls methods defined in 2.10.0. This wouldn't rely on users remembering to mark new methods as experimental.
+1 ... People that decide to use Scala futures today have to roll their own set of helpers to deal with timeouts and such. Most just use Akka's or Twitter's.
Create an extension to SIP-14 and invite people to participate in it. I'll chime in for sure.
Cheers,
V
" It would be wonderful to gain a scala implementation of Cap’n Proto," <-- sounds like a specific integration to me? I don't see why that should be in the Scala Standard Library.
For me, it sounds like Rob wants to have something like this, not an exact integration. But in any case, I do not see it as a part of futures & promises, because Cap’n Proto is about RPC, is it not?
The capabilities this standard offers are quite powerful and not present in Scala. What is the use of futures/promises with no RPC between processes? Aren’t you wanting to build a promise library to help with concurrent processing? Is it useful to have concurrent processing with no distribution?
The only things really needed are the ability to send to a promise and to use them as typed arguments in calls.
> The capabilities this standard offers are quite powerful and not present in Scala. What is the use of futures/promises with no RPC between processes? Aren’t you wanting to build a promise library to help with concurrent processing? Is it useful to have concurrent processing with no distribution?

In general, they are used for asynchronous computations; it does not matter where they happen, locally or remotely. For example, you can use Akka without any communication with the outside world. Another example is Finagle; there is no sense in using it within the scope of a single process. All these examples have one thing in common: they are all built on top of futures. They are not trying to extend them; they are using them to build much more complex frameworks. That requires futures and promises to be quite simple, easy to use, and without a strict contract that forces others to do things in only one "right" way.

The current simplicity of futures is fine, but as we can see, Twitter, Akka, and others have a lot of tasty stuff which may also be adopted as part of Scala's Futures and Promises.
> The only things really needed are the ability to send to a promise and to use them as typed arguments in calls.

Could you show an example of this?
Ref answer = alice.redirectMessage("redirectForTheAnswer", bob).redirectMessage("hashCode");

This sends to alice with bob as an argument. The method redirectForTheAnswer forwards a send to bob for the method getTheAnswer. The hashCode send is a pipeline send and is forwarded to alice, to be sent when the result resolves over there. Since the pipeline send is to the result of redirectForTheAnswer, it does not get forwarded to bob. Note that due to the typing issue, I have to specify a Ref as the parameter to the method redirectForTheAnswer.

Thank you,
charlie
On Sunday, January 5, 2014 10:22:46 PM UTC+2, Rob Withers wrote:

> On Jan 5, 2014, at 9:51 AM, √iktor Ҡlang <viktor...@gmail.com> wrote:
>
>> "It would be wonderful to gain a scala implementation of Cap’n Proto," <-- sounds like a specific integration to me? I don't see why that should be in the Scala Standard Library.
>
> That is a reasonable argument to make about a vendor integration, but this is not necessarily such. I would point out that there are two layers to Cap’n Proto.
>
> There is the encoding. This is nice to have, for sure, and it would be an interesting exercise for a vendor integration. It is just one more message structure specification, but quite interesting. It is firmly in the session layer.
>
> There is also the RPC protocol specification. As long as there is support for specifying the encoding in the startup protocol, when rendezvous occurs, the implementation of the underlying encoding can be varied. The same is true of many of the details of coordination in the session layer, like encryption and protocol version.
>
> However, there is also the upper layer of the RPC protocol, which is the remote object refs, object tables, and promises. It has the execution semantics of an event loop, but with the ability to wait and pause a continuation. Placing that presentation layer on top of a negotiated session is not a vendor-specific set of things. It is supporting a common protocol standard. It was derived from Elib.
>
> The whole point of distributed event loops and promise pipelining is to spread the stack of execution between event loop queues, between processes. This is interesting. The only things really needed are the ability to send to a promise and to use them as typed arguments in calls. We also need the ability to mutate, but that is achievable in Scala, I think, using a for-comprehension.
>
> On Jan 5, 2014, at 10:09 AM, Anton Kolmakov <an...@kolmakov.me> wrote:
>
>> For me, it sounds like Rob wants to have something like this, not an exact integration. But in any case, I do not see it as a part of futures & promises, because Cap’n Proto is about RPC, is it not?
>
> The capabilities this standard offers are quite powerful and not present in Scala. What is the use of futures/promises with no RPC between processes? Aren’t you wanting to build a promise library to help with concurrent processing? Is it useful to have concurrent processing with no distribution?

- charlie (now preferred, as I buried the old me, dead and gone)
--
You received this message because you are subscribed to the Google Groups "scala-sips" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scala-sips+...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
... So I did not know that Akka and other frameworks all use Scala futures and promises. The Scala Standard Library's support for futures and promises is, then, a generalized asynchronous foundation used by many implementations.
I see. A general recommendation, where Akka is one implementation of an actor model in Scala. However, Akka does not support promise pipelining,
so it does not have the optimization characteristics of dealing with latency in the way Elib does.
It does not stretch the execution environment in the way Elib does. Elib is itself an actor model, plus continuations and promise pipelining. Is there another actor model implementation, in scala, which is closer to Elib's mark?
On Mon, Jan 6, 2014 at 5:55 PM, charlie robert <charlie...@icloud.com> wrote:
> I see. A general recommendation, where Akka is one implementation of an actor model in Scala. However, Akka does not support promise pipelining,

Of course it does: you get "promise pipelining" if you use Actors instead of Futures.

> so it does not have the optimization characteristics of dealing with latency in the way Elib does.

Example?

> It does not stretch the execution environment in the way Elib does. Elib is itself an actor model, plus continuations and promise pipelining. Is there another actor model implementation, in Scala, which is closer to Elib's mark?

Well, that was my point: you have a great testbed to implement whatever you want on top of Akka's actors, and you get remoting, clustering, distributed fault detection, etc. for "free".
I just read the Scala Actors tutorial and I do not see support for pipelining. Can a message be sent to an unresolved future in Akka? If not, then there is no pipelining and no stretched stack. This misses the core feature.

On Mon, Jan 6, 2014 at 5:55 PM, charlie robert <charlie...@icloud.com> wrote:

>> I see. A general recommendation, where Akka is one implementation of an actor model in Scala. However, Akka does not support promise pipelining,
>
> Of course it does: you get "promise pipelining" if you use Actors instead of Futures.

Please look at the Cap'n Proto link on RPC and look at the topmost diagram.

>> so it does not have the optimization characteristics of dealing with latency in the way Elib does.
>
> Example?

The issue is the lack of pipelining semantics, so Akka will not work as a testbed, it seems. Not to say Akka is not wonderful tech, but not in this regard.

>> It does not stretch the execution environment in the way Elib does. Elib is itself an actor model, plus continuations and promise pipelining. Is there another actor model implementation, in Scala, which is closer to Elib's mark?
>
> Well, that was my point: you have a great testbed to implement whatever you want on top of Akka's actors, and you get remoting, clustering, distributed fault detection, etc. for "free".
It comes down to the question of whether Akka allows a message to be sent remotely to a pending resolution of a previous send.
Can the Akka Actors or futures support this mechanism of computation? If the answer is no, then Akka does not support promise pipelining.
A stretched stack is in the context of an object-capability system, so the security hole is addressed. Can't do it without promise pipelining.
Scala Futures really are awesome, yet they still have a few shortcomings (as of Scala 2.10).
In particular, they are still missing a few combinators that 1) handle timeouts/deadlines in a non-blocking fashion without having to resort to the blocking Await.result and 2) easily allow you to either ignore or deal with upstream failures.
To address these, we created a new scala-future-utils project that hosts a few combinators we needed for [SYSTEM-X] and deemed general enough.
In the standard library, you'll find Future.firstCompletedOf(futures), as well as find, fold, and reduce, which are useful if you want failure propagation.
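For reference, a quick sketch of how those standard-library combinators behave (Scala 2.10-era API; note that `Future.find` yields a `Future[Option[T]]`):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object StdlibCombinators extends App {
  val futures = List(Future(1), Future(2), Future(3))

  // Completes with the sum once every input completes; fails if any input fails.
  val sum = Future.reduce(futures)(_ + _)
  println(Await.result(sum, 1.second)) // 6

  // First value (in completion order) satisfying the predicate, if any.
  val even = Future.find(futures)(_ % 2 == 0)
  println(Await.result(even, 1.second)) // Some(2)

  // Mirrors whichever input future completes first.
  val first = Future.firstCompletedOf(futures)
  Await.ready(first, 1.second)
}
```

The key point for the discussion below: all of these propagate upstream failures eagerly, which is exactly what we often do not want.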
But in [SYSTEM-X], we don't care so much about complete failure propagation. We often want to provide the best response possible within a reasonable amount of time — not fail if one or more of the classifiers fail.
Here are a few illustrative examples of the new combinators …
On individual futures:
// raise a TimeoutException after some duration
future timeoutAfter (100 millis)

// default to some value after some duration
future defaultAfter (100 millis, defaultValue)

// fail with a custom exception after some duration
future failAfter (100 millis, new MyException)
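These combinators live in the scala-future-utils project, not the standard library. For intuition only, here is one way such non-blocking timeout combinators could be built on plain 2.10-era Futures and Promises (a sketch under my own assumptions; the project's actual implementation may differ):

```scala
import java.util.concurrent.{Executors, TimeUnit, TimeoutException}
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.concurrent.duration.FiniteDuration

object Timeouts {
  // A single scheduler thread arms the timeouts; no caller ever blocks.
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  implicit class RichFuture[T](val f: Future[T]) extends AnyVal {
    // Fail with TimeoutException unless `f` completes within `d`.
    def timeoutAfter(d: FiniteDuration)(implicit ec: ExecutionContext): Future[T] = {
      val p = Promise[T]()
      scheduler.schedule(new Runnable {
        def run(): Unit = p.tryFailure(new TimeoutException(s"timed out after $d"))
      }, d.toMillis, TimeUnit.MILLISECONDS)
      f.onComplete(p.tryComplete) // whichever completes first wins
      p.future
    }

    // Fall back to `default` instead of failing on timeout.
    def defaultAfter(d: FiniteDuration, default: T)(implicit ec: ExecutionContext): Future[T] =
      timeoutAfter(d).recover { case _: TimeoutException => default }
  }
}
```

The design point is that the deadline is enforced by racing the original future against a scheduled failure, so no thread is parked in Await.result.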
On groups of futures:
// ignores failures in futures -- only consider timeout/deadline
reducing(futures) { (x1, x2) => ... reduce ... } timeoutAfter (100 millis)
reducing(futures) { (x1, x2) => ... reduce ... } failAfter (100 millis, new MyException)
reducing(futures) { (x1, x2) => ... reduce ... } defaultAfter (100 millis, defaultValue)

// returns whatever we have computed after duration, or NoResultException if we didn't get
// at least two successful completions before that time.
reducing(futures) { (x1, x2) => ... reduce ... } forceAfter (100 millis)

And similarly for folding:

// ignores failures in futures -- only consider timeout/deadline
folding(futures, initialValue) { (acc, t) => ... fold ... } timeoutAfter (100 millis)
folding(futures, initialValue) { (acc, t) => ... fold ... } failAfter (100 millis, new MyException)

// there is no `defaultAfter` for folding since you have to provide an `initialValue` and you
// can use `forceAfter`
// returns whatever we have computed after duration
folding(futures, initialValue) { (acc, t) => ... fold ... } forceAfter (100 millis)
There is also a general tryFold method that allows incrementally handling both Success/Failure of the underlying futures as they come:
tryFold(futures, initialValue) {
  case (acc, Success(t)) => ... fold ...
  case (acc, Failure(t)) => ... fold ...
} forceAfter (100 millis) // also supports timeoutAfter/failAfter
tryFold is pretty flexible and allows expressing more complex patterns, such as success-after-X-successes or fail-after-X-failures … those kinds of things.
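To show the shape of the idea, here is a deliberately simplified sketch of a tryFold over a collection of futures. It is not the scala-future-utils implementation: it folds in declaration order rather than completion order and omits the forceAfter/timeoutAfter deadline handling, but it demonstrates how a fold can consume each outcome as a Try instead of failing wholesale:

```scala
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success, Try}

object TryFold {
  // Fold over each future's outcome (Success or Failure) instead of
  // failing the whole fold on the first upstream error.
  def tryFold[T, R](futures: Seq[Future[T]], initial: R)(op: (R, Try[T]) => R)
                   (implicit ec: ExecutionContext): Future[R] =
    futures.foldLeft(Future.successful(initial)) { (accF, f) =>
      accF.flatMap { acc =>
        f.map(v => op(acc, Success(v)))
         .recover { case e => op(acc, Failure(e)) }
      }
    }
}
```

With this, counting only successful completions is a one-liner: tryFold(futures, 0) { case (n, Success(_)) => n + 1; case (n, Failure(_)) => n }.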
On Mon, Jan 6, 2014 at 9:10 PM, charlie robert <charlie...@icloud.com> wrote:

> It comes down to the question of whether Akka allows a message to be sent remotely to a pending resolution of a previous send.

send is asynchronous. I think you are confusing syntax with semantics.
> On Mon, Jan 6, 2014 at 9:10 PM, charlie robert <charlie...@icloud.com> wrote:
>
>> It comes down to the question of whether Akka allows a message to be sent remotely to a pending resolution of a previous send.
>
> send is asynchronous. I think you are confusing syntax with semantics.

How did syntax come into it? Certainly you cannot do promise pipelining with synchronous sends. In my previous example, I basically did:

val resultHashCode = (alice ! redirectForTheAnswer(bob)) ! hashCode

if ActorRefs returned ActorRefs wrapping a future, which resolve later. Hey, that is actually a fair way to model it, except they would be EventualActorRefs and we would change the semantics of !. Let args explicitly be EventualActorRefs. This seems like a workable interface, doable in a testbed, do you think?
The point is that the send for hashCode is sent to the remote location where redirectForTheAnswer is computed, addressed to the result EventualActorRef, before it resolves. Both of these messages are sent in the same packet, because sends are asynchronous and sending is buffered (10 ms?). When that future resolves, the hashCode send goes to the result, but it is done in alice’s vat, as the send is already pending there. This drops a 2-way network hop (in time interval, but everything still resolves unless GCed), pipelines both message sends in the same packet to alice, and reduces latency. The happy path is that all eventual sends resolve, eventually. Because you can forward a message send along to the result continuation before it is resolved, the stack is stretched.

This sort of computing capability can extend past a simple 2-way interaction. You can imagine what it could be: a graph of distributed eventual sends, owned by different organizations, to settled refs and unresolved futures, that will eventually complete, maybe. :) This is quite interesting to me and fundamentally new.
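To make the pipelining idea concrete in Scala terms, here is a purely local toy model (my own sketch; this is neither Elib nor any Akka API): an "eventual ref" that accepts sends before its target resolves, by chaining on the underlying Future. The real win described above, batching both sends into one network packet, needs remoting support that this local model deliberately does not have:

```scala
import scala.concurrent.{ExecutionContext, Future}

object Eventual {
  // A reference that may not be resolved yet; sends queue up on the future.
  final case class EventualRef[T](future: Future[T]) {
    // "Eventual send": deliver `msg` to the target once it resolves,
    // immediately returning a ref to the (still unresolved) result.
    def !![R](msg: T => R)(implicit ec: ExecutionContext): EventualRef[R] =
      EventualRef(future.map(msg))
  }

  def ref[T](value: T): EventualRef[T] = EventualRef(Future.successful(value))
}
```

With this, the earlier example reads as a chain of sends to unresolved results, e.g. alice !! (_.redirectForTheAnswer(bob)) !! (_.hashCode), where the second send is issued before the first has resolved (the names alice, bob, and redirectForTheAnswer are hypothetical, carried over from the example above).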
Thanks for your interest,- charlie robert