I tend to think of 'staging' this way: there is a static stage, then there is a dynamic stage. But, there are some dynamic behaviors, and they can have their static stage at runtime.
Oh, "static stage at runtime." So you'd alternate between a layer of continuous reactive side effects and a layer of code generation side effects. This seems a bit arbitrary to me, but it's also a reasonable-looking choice to make.
An interpreter for untyped code will have to deal with at least as many issues as an interpreter for typed code. Namely, both kinds of interpretation need to avoid instantaneous feedback loops through external resources. We might be able to address this with some explicit delay and/or a system of "tainted" types.
What do you think of sending a self-interpretation request to an external resource and waiting for it to come back with a result?
I'm not sure if this is what you're pondering about, but... since Underreact doesn't have dynamic behaviors yet, it doesn't have any way to expand and retract the reactive node graph at run time.
It's interesting that you're encountering a desire to have closures at the static level, but it makes complete sense because you're trying to have open-system composition at that level too.
Well, that's part of it. But I think the tipping point was when I realized I can provide static signal inputs to a dynamic behavior, i.e. like configuration variables. I imagine this could make high-performance dynamic behaviors quite feasible.

Which of these type signatures is closest to what you mean by "provide static signal inputs to a dynamic behavior"?
- Use (Dyn (Code (Sta x ~~> y))) and (Dyn x) to get `y`.
- Use (Dyn (Code (Sta x ~~> y))) and (Sta x) to get `y`.
- Use (Sta (Code (Sta x ~~> y))) and (Sta x) to get `y`.
Awelon's sealer/unsealer pairs currently are only good for static computations. If I figure out how to 'serialize' a sealed value for partition crossing (without unsealing it first) perhaps I'll address cryptography then. :)

Sounds more like the converse to me. If we ever have a satisfactory way to serialize a sealed value, that's probably because we're using some kind of cryptography.
Ok. I spent the last few hours modeling part of this proposed design in Haskell.
One challenge I'm having difficulty with is figuring out 'where' the raw literals (text, numbers, quoted behaviors) should initially go.
It seems that, if I want to model this as a multi-agent program editing process, I'll need to be very explicit and precise about it - modeling the editing and distribution behaviors.
I'm also thinking I'll switch to Agda. I'm modeling Awelon as it might be, and I already miss text and numbers for metaprogramming and typing. Dealing with partition constraints and substructural types (like copy ops, where one copy of a relevant type is not relevant) is painful.
On the success side:
1) It turns out that keeping latency info in the 'static' model is a great fit. It really contributes to the intuition that, in live programming, it takes time for updates to distribute. The normal benefits with respect to looping and scheduling should also apply.
2) In many ways the model seems simpler. I don't need a distinct signal concept (though I can use one to implement). I think state will be a better fit too, more explicitly part of the environment.
On Sat, Sep 14, 2013 at 1:08 PM, Ross Angle <rok...@gmail.com> wrote:

I tend to think of 'staging' this way: there is a static stage, then there is a dynamic stage. But, there are some dynamic behaviors, and they can have their static stage at runtime.
Oh, "static stage at runtime." So you'd alternate between a layer of continuous reactive side effects and a layer of code generation side effects. This seems a bit arbitrary to me, but it's also a reasonable-looking choice to make.

Well, I must admit: a richer staging model is looking more and more tempting. If I ever figure out how to do it while keeping it conceptually simple it could become something very powerful. I think the sort of staging you've been describing could be a much better basis for PL/UI synthesis.
oh, in slightly less recent news, I had a very productive conversation with David Barbour about programming languages. we've encountered some pretty amazing new ideas, like a programming language that would work with a clear physical metaphor, carrying items around and using them with each other, and (relatedly) a programming language that means that the things being done now are really just a blueprint for doing things later
My intuition, at this point, is that Awelon's basic structure with controlled support for "program history editing" is almost powerful enough to be a basis for universal-scale reactive PL/UI unification. Unfortunately, the staging I've been contemplating seems to interfere! It hinders keeping snapshots, clearing unneeded history, and makes a problematic semantic distinction:
* when users want a reactive effect, they use a live-programming or history-editing concept
* when anything else (sensors, databases, etc.) has a reactive effect, it uses time-varying signals

How frustrating, my model has such harsh edges!
But I wonder: What if...
* What if sensor inputs are also modeled as editing a program history?
* What if all agents and behaviors are modeled this way?
Conceptually, this idea shifts everything (including the act-of-programming) into a conceptual RDP runtime. In practical terms, I think it simply means my static values must also bind spatial-temporal information, such that they can model the distribution and locations of program fragments. (<-- for some reason, discovering that I might want even more precision, uniformity, and symmetry surprises me every time.)
I feel that the underlying nature of types is bound within these questions. A type is a way of describing what can and cannot be changed. A subprogram can safely be altered so long as no downstream agency is depending on its structure. An agent can protect its agency to alter a program by representing type information such that downstream agencies must code conservatively against it - e.g. we can say: don't assume this value won't change, just because it hasn't changed for the last few hours.

A good UI/IDE should help users rewrite history in a way that protects dependents.
So getting back to staged programming: staging is modeled in terms of operations of constructing program fragments (behaviors), perhaps wrapping these up with acquired signals, and use of combinators (first, left). I should be able to drop `beval` entirely, and switch to dependently-typed 'static' behaviors as you seem to have been suggesting all along.

So, please, keep trying to make me understand this. I think it could be very useful.
eval :
-- For any type...
(t : Type) ->
-- ...for any serializable value of this type...
Code t ->
-- ...we can simply not serialize it.
-- (TODO: Come up with a type signature that guarantees the output
-- is equivalent to the input.)
t

-- Given a string, we have a possibly nonterminating way to extract
-- either a parse error or a serializable value of some type.
-- (NOTE: The string is just one way we might represent untyped code.
-- We could also use binary data, collections of independently
-- developed modules, or a system of lisp macros, for instance.)
imbueType :
-- For any expected type...
(t : Type) ->
-- ...for any piece of untyped code...
String ->
-- ...we have a possibly nonterminating, possibly failing way to
-- extract a serializable value of that type.
-- (TODO: Come up with a type signature that guarantees the output
-- is equivalent to the String input.)
-- (TODO: See if it would be useful to return a type-tagged value
-- rather than taking an expected type upfront.)
Partial (Maybe (Code t))

getSourceCode :
-- For any type...
(t : Type) ->
-- ...if we can prove this type and its values have no extra
-- dependencies...
-- (TODO: Figure out how to implement `isolated`.)
isolated t ->
-- ...then we can serialize any value of this type.
-- (TODO: Come up with a type signature that guarantees the output
-- is equivalent to the input.)
t -> Code t
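Those equivalence TODOs might eventually be discharged as a round-trip law relating getSourceCode and eval. A speculative sketch in the same dependently-typed notation (nothing here is settled; the name `evalGetSource` is invented):

```
-- Serializing an isolated value and then evaluating the resulting
-- code should yield the original value.
evalGetSource :
  (t : Type) -> (iso : isolated t) -> (v : t) ->
  eval t (getSourceCode t iso v) == v
```

A dual law for imbueType would be harder to state, since parsing can fail or diverge.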
An interpreter for untyped code will have to deal with at least as many issues as an interpreter for typed code. Namely, both kinds of interpretation need to avoid instantaneous feedback loops through external resources. We might be able to address this with some explicit delay and/or a system of "tainted" types.
Ah, I see. I hadn't been considering the feedback loop issue.
Any level of call nesting is possible, but there's no way for a (~~>) behavior to receive itself as a parameter.

I see. Even if you eliminate pure recursion or fixpoints, you still have demand monitors. Effects make it easy to dodge around such limitations. That's a cost of effects, and a benefit.

Yeah. Underreact currently only supports serializable data in demand monitors, but I know what you mean.
Is [the feedback loop issue] the main reason you're using explicit stages for your constructs?
I'm not sure if this is what you're pondering about, but... since Underreact doesn't have dynamic behaviors yet, it doesn't have any way to expand and retract the reactive node graph at run time.

Yes, that's also a big use-case for dynamic behaviors. RDP without some ability to programmatically manipulate the graph at runtime is only half the paradigm. Also, without such an explanation, it seems
Many expand/retract operations could be handled by something simpler, though. Like:

bforeach :: b x y ~> b [x] [y]
It's interesting that you're encountering a desire to have closures at the static level, but it makes complete sense because you're trying to have open-system composition at that level too.
Well, that's part of it. But I think the tipping point was when I realized I can provide static signal inputs to a dynamic behavior, i.e. like configuration variables. I imagine this could make high-performance dynamic behaviors quite feasible.

Which of these type signatures is closest to what you mean by "provide static signal inputs to a dynamic behavior"?
The middle one is closest. But none of them are right! In all cases you listed, `y` would also need to be a static constant, and the behavior wouldn't actually do anything. (I'm assuming no encapsulation of signals in type Code.)

Closer would be:

Dyn (Code (Sta x * Dyn y ~~> z)) * (Sta x * Dyn y) ~> z
Another update.

It seems statics don't really have a clear spatial location except for "wherever this program fragment is physically represented", which is currently implicit in the RDP model. I'm beginning to think I should formalize some of the 'small-step semantics' for how a program is rewritten into a lot of smaller program fragments for distribution.
getSourceCode :
-- For any input and output types...
(x : Type) -> (y : Type) ->
-- ...if we consume an isolated subprogram...
(Hose x -o Hose y) -o
-- ...then we get a serializable program.
-- (TODO: Come up with a type signature that guarantees the
-- (Code (x -> y)) is equivalent to the (Hose x -o Hose y).)
Code (x -> y)
With regards to PL/UI synthesis, the 'isolation' of statics is problematic.

We cannot treat extension and expression of the program as having clean staging, a barrier to changes from the outside world; instead, we must build on the world as we can observe it, accepting some risk that our programs will break when the world changes, and developing more adaptive/flexible programs to mitigate this risk. I think dependent types won't scale to a system where a million user/programmers are adjusting and extending their code in real-time.
When I initially considered how PL/UI synthesis might scale to multiple users, I was thinking in terms of DVCS, with a bunch of programmers sharing code transactionally. In retrospect, this is a terrible idea: it defeats most of the benefits of RDP.
Right now, Awelon applications take two arguments: (1) a powerbox, (2) an 'activity' signal starting in some application. I believe that I can model static interactions through capabilities in the powerbox, and I can remove the 'activity' signal. If developers want to "turn off" their application (or any subprogram) it is no longer modeled externally (by powering down a staged activity signal); instead, it is treated as an act of live programming.
One area where it might not work out so well is the issue of 'choice' and conditional expressions. The relevant issue is:
* If a capability that influences a resource is used in a choice... (e.g. in just one branch)
* What shall other software agents and resources observe?
* Can I effectively control the number of 'potential' signals an agent must deal with?
Getting back to the notion of partitions and modeling agents within agents, it seems that it should not be difficult to have distinct concepts for isolated effects - a different logical 'this' partition for each agent (perhaps leveraging an affine uniqueness source). Under these conditions, I could perhaps track the greatest scope any behavior or capability might directly affect. Purity is just a local partition without resources. This could potentially enable a staged compilation based on isolation. The whole notion of 'partitions' must be internalized within RDP (as opposed to treated as 'external' to the program, which is what I have been thinking). So every program is modeling logical internal partitions.

Then we simply stretch the idea so the entire Internet is conceptually internal to RDP. :)
To use capabilities to model communication between RDP agents, or to resources modeled within an application... I've been thinking a bit about how to accomplish that. Do I create some state and share it using? Do I model some kind of continuous channel leveraging linear types? I've a few ideas. The only critical issue is that I must model this sort of connectivity as a 'side-effect', enabling a dimension of composition for concurrent (independently developed and maintained) behavior.
But smooth failover, logical time, and latency control are insufficient.
When a program fails due to some *internal* issue (e.g. a type violation), an RDP program would just be continuously shoving its insanity into the world. What I need to do is somehow disable the problematic subprogram until conditions change. This presents challenges:
1) How do I identify the scope of "the problematic subprogram"?
2) How can I precisely shut down "the problematic subprogram"?
3) How do I determine when conditions change to try again?
For problem 1, I want some way of declaring: "this subprogram will be an independent unit-of-failure."

I've thought of a few ways to express this. I'll list them worst to best.
* Treat the declaration as a manipulation of the powerbox.
* Treat the declaration as a modifier for a block.
* Treat the declaration as a new primitive combinator.
* Model the declaration as an evaluation capability in the powerbox.
Problem 3 is challenging to do precisely, but easy to brute force.

If RDP were "pure" I could simply wait for inputs to change, then try again. But RDP is not pure; it has its own simple form of highly constrained effects and observations, some communications flowing orthogonally to the primary dataflow path. The sanity failure may partially belong to the external world, e.g. someone might be upgrading a dependency service. So unless we have strong reason to believe the failure is internal, we should recover automatically.

A viable approach is brute force: try again after some time has passed, leveraging some sort of heuristic or backoff algorithm to avoid burdening the system. If the problems happen again, back out again.
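The brute-force schedule could be as simple as capped exponential backoff. A sketch in the same notation; `baseDelay` and `maxDelay` are invented configuration parameters, not part of the proposal:

```
-- Delay before the nth retry attempt, doubling each time
-- until it saturates at maxDelay.
retryDelay : Nat -> Time
retryDelay n = min maxDelay (baseDelay * 2 ^ n)
```

Backing out again on repeated failure then just means advancing n instead of resetting it.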
A more interesting approach might be to record precisely the resources and queries that caused the failure before, and surgically prod just those using RDP's signal/cancel model - this technique could be very efficient, simple, and usable. The surgical approach could be made even lighter weight by heuristically and probabilistically testing a partially random subset of the external resources that might have been involved with the problem (if there was more than one).

This semi-precise polling approach seems promising, and the development of it should also result in more precise error descriptions.
Anyhow, I think the use of these evaluator 'membranes' could offer most of the benefits of static typing without hindering dynamism where developers don't really trust certain subprograms.
--
You received this message because you are subscribed to the Google Groups "reactive-demand" group.
* when users want a reactive effect, they use a live-programming or history-editing concept
* when anything else (sensors, databases, etc.) has a reactive effect, it uses time-varying signals

How frustrating, my model has such harsh edges!

This edge is just fine. Not everything can be done in real time. Some actions on the world can take preparation time before they're worthy of being deployed, and that preparation will take the form of manipulating an abstract representation of the project (doing "program history editing" etc.).
But I wonder: What if...
* What if sensor inputs are also modeled as editing a program history?
* What if all agents and behaviors are modeled this way?

If I understand correctly, you're talking about taking every higher-order API and littering it with a stateful protocol for program editing, just so that humans' stateful workspaces blend in with the rest of the system.
Careful. In my multi-stage model, for every stage-preceding-a-stage relationship, the later stage lives within a *single* partition of the earlier one.
Fortunately, I don't expect to be able to serialize code if its type captures stage or partition labels.
I recommend we "help users rewrite history" by making it so the thing they're exposing is a time-varying *immutable* behavior in the first place. They can still use a stateful workspace to help them change the behavior's implementation over time, but that statefulness is external to the network/API protocol.
The "static" behaviors in Underreact are expanded at the moment where untyped code is interpreted to become typed code. The multi-stage system, with the use of getSourceCode, would make it easy to construct typed code values in the first place.
Maybe I can clarify what these operations might look like, in broad strokes: [..]
Ah, I see. I hadn't been considering the feedback loop issue.

Well, for context, you and I started talking about feedback loops through external resources a long time ago. I think you brought it up again when I talked about Underreact's termination guarantee:
The way I imagine it, a later stage never executes unless an earlier stage decides it's time to eval that code, at which point it executes *during* the earlier stage.
Many expand/retract operations could be handled by something simpler, though. Like:

bforeach :: b x y ~> b [x] [y]

Yeah, that's interesting....
To really complete the multi-stage model, I expect we'd have to take several difficult steps:
- Start with a sound static type system.
- Add staging in this impure style we've been discussing. This will probably mean adding dependent typing, adding tweaked versions of observational equality to formalize index completions, and maybe even adding effect types to account for the code generation side effect. (Or maybe linear types would work for that side effect! Ooooh...)
- Add getSourceCode and eval operators so our stages can interact with each other.
- Add partitions. This will probably mean putting complex side conditions on several of our primitive operations so that they don't implicitly communicate between partitions.
- Implement a concrete infrastructure that uses partitions to represent individual devices (or simulated devices) on a network.
- Add linearly typed resources on the network. Obviously, we'll have to add linear types for this if we haven't done so already.
I just Googled [location type theory], and apparently we might benefit from doing some reading (and really dense reading, wow):
- This thesis proposal by Jonathan Moody.
- A few papers by Jonathan Moody.
- Primiero, Giuseppe, 2010. "A multi-modal dependent type theory for representing data accessibility in a network" (which cites one of Jonathan Moody's papers)
...As I indicated just above ("Ooooh..."), I just thought of using linear types to account for the code generation side effect. It's not that we can't use contraction and weakening for later-stage values, but when we *do so*, it's not quite a free operation.
We cannot treat extension and expression of the program as having clean staging, a barrier to changes from the outside world
I would agree only if you mean "we cannot treat ... as having ... a barrier" as a guideline, but not as a rule. There are many cases where a barrier seems natural to me, and therefore where clean staging could come in handy.
I'd say the meaning of externalizing state is that it's no longer under the program's exclusive access, so it's available for debugging, mashups, and so on. (As a great bonus, it also means the program is unlikely to be construed as an independent moral agent, so we avoid some ethical complexity.)
I recommend modeling the data-control flow for this, by using static types. If a subprogram could fail, it'll have to account for that possibility in its output type.
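For instance, using the notation from earlier in the thread, a subprogram's possible failure could surface as a sum in its output type (a hypothetical signature, purely for illustration):

```
-- The caller must branch on the (+) result; failure cannot be
-- silently ignored downstream.
query : (Dyn Request) ~~> ((Dyn Error) + (Dyn Response))
```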
I'm proposing imbueType, a pure operation, which just translates untyped code to typed code. I think this covers all the worst parts of eval, and at the same time it means you can avoid dealing with weird side effect semantics. This step does all the parsing and preprocessing, and afterward, we can reason about resource usage almost as though we're dealing with a single program again.
Whenever the input changes, execute it as a new thread. Whenever we're executing more than two threads, kill all but the oldest, the newest, and maybe a few random ones in between. Whenever a thread completes, update the output to be (Just (input * threadResult)), and kill any threads older than this one.
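A rough sketch of that thread-management policy as transition rules (Pool, Thread, and the helper names are all invented for illustration; Pool is assumed ordered by thread age):

```
-- When there are more than two threads, keep the oldest, the
-- newest, and a random subset in between.
prune : Pool -> Pool
prune pool =
  if length pool <= 2 then pool
  else oldest pool : (randomSubset (middle pool) ++ [newest pool])

-- When thread t completes with result r, publish the paired
-- output and keep only the threads younger than t.
complete : Thread -> Result -> Pool -> (Output, Pool)
complete t r pool =
  (Just (inputOf t * r), filter (youngerThan t) pool)
```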
I'm proposing imbueType, a pure operation, which just translates untyped code to typed code. I think this covers all the worst parts of eval, and at the same time it means you can avoid dealing with weird side effect semantics. This step does all the parsing and preprocessing, and afterward, we can reason about resource usage almost as though we're dealing with a single program again.
I think you're assuming you can provide static types for all your capabilities without at least partially invoking them. This might be difficult to achieve in practice, without overgeneralizing types (which would bring back the 'partial failure' problems).
Consider, for example: we might want to assume some shared state has a particular type structure (e.g. an XML file). This might not be something you can properly determine without loading the file - i.e. the capability might not have an 'XML file type'.
The main strength of the staged model was that it provided a very static structure. A consequence was that I needed a different kind of collection type, such as a vector or set, to account for observed state. Another consequence is that it becomes very difficult to program against a system's actual state; we must target the general possibility (a vector or set of any size with ad-hoc elements) instead of a specific possibility. In order to block direct dependency on unstable signals, users would also be unable to effectively treat dependable signals as part of their environment.

I'm still fond of the staged model. I just don't see a good way to make it fit with the PL/UI synthesis, at least for many use-cases.

I wonder if there's a good intermediate. Maybe capabilities and users can both mark types with known trusted-path and stability information, and typefully prevent any accidental dependencies. If we require capabilities to add and remove certain type information, we could perhaps also control what dependencies an ad-hoc subprogram can have upon structure. Most type information seems to be associated with the atoms and endpoints. But staging and such are probably something the `*` and `+` types should also support, indicating the stability and trust in that `*` structure.
Your "blocks" sound like exactly what I was calling typed code, so at least we're on a similar page. The ability to coerce a value to a block of type (1 -> value) is pretty interesting to think about. I wonder if it's only valid if blocks can't perform side effects.
Those operations on `Sealed` values look like they're incomplete. You have `sealedJoin`, but there's no `sealedSealedJoin` or `sealedSealedSealedJoin`. :)
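Assuming `sealedJoin` has the usual join shape (its exact signature is my assumption, not stated in the thread), the "missing" deeper variants would be derivable rather than primitive, which is presumably the point of the joke:

```
sealedJoin : Sealed (Sealed a) -> Sealed a

-- The deeper variants are just iterated applications:
sealedSealedJoin : Sealed (Sealed (Sealed a)) -> Sealed a
sealedSealedJoin = sealedJoin . sealedJoin
```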
In response to my roadmap for the multi-stage system, you said "I plan to start with partitions early on." I think staging is even harder to get right, and it has pretty much all the same semantic challenges as partitions.
If state is stable and trustworthy enough, you want developers to be able to build upon it the same way they would build upon a programming language's core utilities. The unchanging and trustworthy properties of a core utility are described by its static type, so you want programmers to be able to extract statically typed values from dynamically occurring state, even if it's not 100% stable or trustworthy.

Pretty nice idea. :) The basic intuition of this idea is unsound, but we can keep the contradictions under modalities like "Hopefully an X." :-p