--
You received this message because you are subscribed to the Google Groups "reactive-demand" group.
To unsubscribe from this group and stop receiving emails from it, send an email to reactive-dema...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
In all forms of 'left' the (a -> a') is generally a static function. In the programming model I'm developing, every value has implicit spatial-temporal attributes; one might understand `static` as a special space-time location meaning "wherever and whenever this program fragment is".
In Haskell, the `left :: (a ~> a') -> ((a + b) ~> (a' + b))` uses two different arrow types: `->` for pure Haskell functions and `~>` for the arrow being modeled (which is generally not pure). In this case, it is clear that the argument `(a ~> a')` is not only static, but also cannot be distributed through the arrow itself - i.e. it is syntactically bound to the point of application (modulo staged metaprogramming).
The `left' :: ((a -> a') * (a + b)) -> (a' + b)` is different from `left` because it distributes the `(a -> a')` operation within the arrow. Consequently, I only need one arrow type (->) for all behaviors. (I could just as easily use `~>` for all behaviors; perhaps this would have been more consistent in the presence of the Haskell arrows.) The (a -> a') behavior is typically still static. It could potentially be dynamic, though.
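To make the distinction concrete, here is a minimal sketch in plain Haskell, with ordinary functions standing in for behaviors. `leftStatic` wraps the standard arrow-style `left`; `left'` is my rendering of the distributed form for illustration, not an actual implementation from the model:

```haskell
import Control.Arrow (left)

-- Arrow-style 'left': the (a -> a') argument is supplied statically,
-- outside the structure being built, via the ArrowChoice instance for (->).
leftStatic :: (a -> a') -> Either a b -> Either a' b
leftStatic f = left f

-- The distributed form: the function travels *inside* the arrow as part
-- of its input, so a single arrow type suffices for all behaviors.
left' :: (a -> a', Either a b) -> Either a' b
left' (f, Left a)  = Left (f a)
left' (_, Right b) = Right b
```

Note that in `left'` the function is merely carried alongside the sum; it is applied only when the `Left` branch is live, which is what makes the "consume it even if that branch isn't taken" question below meaningful.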
...
Though even 'static' code is time-varying; it just happens at a higher layer. Live programming, for example.
I'm not sure I understood what you meant by "a port satisfying either a or b". The (a + b) type is selected by the provider, not the receiver. So a port satisfying (a + b) must be capable of handling both option a and option b (though only one at a time). If we had an (a & b) type, additive conjunction, that would be selected by the receiver, in which case the port 'satisfying a' would also satisfy (a & b). Can you help me understand what you meant?
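For concreteness, here's how I'd sketch the provider/receiver distinction in plain Haskell. The encodings and names (`Sum`, `With`) are mine, purely for illustration: `Either` plays the role of (+), and a pair of offered alternatives plays the role of (&):

```haskell
-- (a + b): the *provider* has already chosen which branch is live,
-- so a receiver must be prepared to handle either one.
type Sum a b = Either a b

-- (a & b): both alternatives are offered, and the *receiver* chooses
-- which one to demand. Encoded here as a pair of (lazy) projections.
data With a b = With { pickLeft :: a, pickRight :: b }

-- A handler for (a + b) must cover both branches...
handleSum :: (a -> r) -> (b -> r) -> Sum a b -> r
handleSum = either

-- ...whereas a receiver of (a & b) simply projects the side it wants,
-- so anything that satisfies (a & b) in particular satisfies a.
useWith :: With a b -> a
useWith = pickLeft
```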
I think Ross was just saying that I should worry about no-drop (linear or relevant) types when they come up, as a special case, and everything will work out. At least, that's what I took away from it. He's probably right. :)
Could you explain a bit more about your process combinators?
Warm Regards,
Dave
You know, there are a number of similarities between your combinators here and my process combinators, which I use to connect ports in a linear 1-1 way with other processes/ports (a notable difference being that I use named ports rather than structural position). And I could add a way to sum protocols similar to your sum types here. If I did that, I can see how to make sense of (a + b) as a port satisfying either a or b, and I could implement something similar to your original left.

I'm not sure how to read your left'. Which of those types are time-varying? Does the type (a -> a') -> ... not refer to a time-varying time-varying function? Or are those real function arrows?

Anyway, I don't really know what's going on, but I wonder if Ross wasn't saying that lifting (a -> a') into a sum type should be considered consuming it, even if that branch of the sum type isn't taken. That feels like the correct behavior to me, even though, again, I don't know what's going on.
--
Hi all,
David got the point I was making. :) With explicit linear types, here's what I'm imagining for the left operator:
!((((!(a -o b)) * (a + c)) * s) -o ((b + c) * s))
The two occurrences of ! show which resources are usable any number of times. That is, the entire left operator itself is reusable, and the function we're applying must be reusable. (Technically the function only needs to be used zero times or once, and we could express that using additive conjunction as (1 & (a -o b)), but then we'd need operators to manipulate &.)
The types ((!a) -o b) or !((!a) -o b) would almost correspond to RDP's usual (a -> b) behaviors, but there's a caveat. A behavior of type ((!a) -o b) must be used at a specific time offset. RDP's usual behaviors would be more like some kind of (ForAllNonnegativeDurations d. addDelay d !((!a) -o b)), where "addDelay d <type>" does a deep substitution of static time offsets throughout the type. Not to mention, RDP behaviors can have side effects, and David's been talking about adding ripening and expiration times, so these type interpretations might get more complicated depending on what we choose to reason about.
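Setting linearity aside, the shape of that left operator can be written down in ordinary Haskell, reading -o as -> and * as pairing, with the pass-through resource s threaded untouched. The ! annotations have no counterpart here, since every Haskell value is already reusable, and `leftOp` is just my name for this sketch:

```haskell
-- A non-linear rendering of
--   !((((!(a -o b)) * (a + c)) * s) -o ((b + c) * s))
-- The function is used exactly once on the Left branch and zero
-- times on the Right branch, matching the (1 & (a -o b)) reading.
leftOp :: ((a -> b, Either a c), s) -> (Either b c, s)
leftOp ((f, Left a),  s) = (Left (f a), s)
leftOp ((_, Right c), s) = (Right c, s)
```

A linear type system would additionally have to account for discarding the function on the Right branch, which is exactly where the drop/no-drop discussion above comes in.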
On Oct 8, 2013 8:22 AM, "Matt McLelland" <mclella...@gmail.com> wrote:
Which of those types are time-varying? Does the type (a -> a') -> ... not refer to a time-varying time-varying function? Or are those real function arrows?
David's been using the notation (a ~> b) for behaviors while working in Haskell, but I keep wanting to say (a -> b) instead because I think we only need one arrow type for this. Pure function types are a special case of RDP behavior types, and RDP side effects can only occur when certain *input or output* types are involved (namely, signals located at the partition where side effects occur), so purity is still apparent in the behavior's type.
(Actually, I just realized that's not true.... If we support inlined closures like what I've been doing in Underreact, then invoking a closure may enable side effects fueled by those captured signals, even if the closure's own input and output are completely disconnected from that phenomenon. Interesting. I think David saw this coming. ^_^; )
Anyway, I don't really know what's going on, but I wonder if Ross wasn't saying that lifting (a -> a') into a sum type should be considered consuming it, even if that branch of the sum type isn't taken.
> Linear types are pretty useful for ensuring protocols are completed.
I wonder if they can express anything that my system can't. Any ideas for a counterexample?
Transition is just a specific combinator that captures a certain signal and uses it to abandon the current process state and move to another one.
another goal for these [combinators] is to use them as a compilation target, after eliminating the more abstract processes and parameter references, etc.
> If I have a process as a value, I can copy/suspend/resume/drop it as I please.
Wouldn't similar words also serve as justification for dynamic types, breaking encapsulation, etc.?
But types are for expressing and enforcing properties that we believe are important. I find value in enforcing completion of protocols within programs, even between pure subprograms.
Also, if I understand correctly, this argument wouldn't apply to processes connected indirectly through a combinator or named port. The processes involved don't really "have" those as values, only the ability to signal them.
> That 'main' process has no way of suspending the environment, so its only choice is to complete protocols.
I think this "only choice" you're assuming might be in error, unless you also introduce types asserting on the state of the environment (Hoare logic). If your environment is robust or multi-agent at all, it can unfortunately be polluted and littered upon - e.g. starting many 'new' sessions with a service but not completing any of them.
it's just a value, so its types should be describing what it can do and not what I'm allowed/required to do with it
the environment as named values is an abstraction I want to keep. If names in the environment come with types that place obligations/constraints on other parts of the environment, that abstraction breaks.
The requirement here is that protocols aren't starved
Linear types will not be able to prove that ports aren't starved, either
> When a value *represents* an obligation, contract, debt, or resource, it isn't "just a value"

In my world view it is. Representations are external to values. When I talk about named values, those values are in a specific mathematical domain of discourse that you can reason about in a certain logic. Anyway, I'm not criticizing your approach, but this setup of pure environments with structural types makes sense to me.
Since establishing time bounds is even more difficult than establishing termination
Formalization? Yes, I'm doing that in Agda. (Haskell wasn't expressive enough for rational latency and location types.) And I do have a formal type system.
Linear logic can be part of the formalization, too. Axioms can be introduced or dropped at will, but linear types can still be useful to enforce some resource constraints or obligations even on proofs.
Best,
Dave