[from: To Quota or not] On Fri, Aug 16, 2013 at 11:19 PM, Ross Angle <rok...@gmail.com> wrote:

I think all I want is for that "block" type to be well-understood.

This is unrelated to 'well-understood' termination and quotas, but I'd rather discuss another aspect of our block types.

First, assume a concept of choice - much like sums, except static:
The following approach would seem to expose dependent types and type-matching pretty well:

exposeProduct :: x ~> ((a * b) | x)
exposeSum :: x ~> ((a + b) | x)
exposeStaticChoice :: x ~> ((a | b) | x)
exposeGreaterZero :: (Static Int * x) ~> ((Static Int * x) | (Static Int * x))
There should be fewer ops (since I avoid the Static Boolean type entirely), but I do lose the ability to operate using ad-hoc polymorphism on multiple branches (like `orStatic` enables). I might regain that:

-- (x|y) means the type is x or y, and I know which
-- (x\y) means the type is x or y, but I've lost info about which. x\x is essentially just x.
mergeStaticChoice :: (x|y) ~> (x\y)
Inverted, we might want to leave a choice to the client of a value. In the earlier formulation, this essentially matches a rational type ((x|y) / Boolean), which is again lossy due to the booleans; it's also getting a tad ugly, since we now have booleans on both ends and it's difficult to grok how they should be combined.
In the latter formulation, I might want to model this in terms of a new type:

-- (x & y): I'm statically offering both x and y, and you must pick one
initOffering :: x ~> (x & x)
firstOffering :: (Static (x ~> x') * (x & y)) ~> (x' & y)
swapOffering :: (x & y) ~> (y & x)
assoclOffering :: (x & (y & z)) ~> ((x & y) & z)
acceptOffering :: (x & y) ~> x
exposeOffering :: x ~> ((a&b)|x)

To drop an offering, you must accept an offering then drop it (just in case there are linear types hidden within it). We can also develop induction-recursion over offerings.
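For intuition, here's how I'd sketch the offering type in plain Haskell, as a lazy pair where the client commits to exactly one side (Offering and these function names are just illustrative, and plain Haskell won't enforce the linearity):

data Offering x y = Offering x y  -- lazy fields: neither side is forced until accepted

initOffering :: x -> Offering x x
initOffering x = Offering x x

firstOffering :: (x -> x') -> Offering x y -> Offering x' y
firstOffering f (Offering x y) = Offering (f x) y

swapOffering :: Offering x y -> Offering y x
swapOffering (Offering x y) = Offering y x

assoclOffering :: Offering x (Offering y z) -> Offering (Offering x y) z
assoclOffering (Offering x (Offering y z)) = Offering (Offering x y) z

acceptOffering :: Offering x y -> x  -- commit to the left alternative
acceptOffering (Offering x _) = x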
I find fractional types pretty hard to grok, but I think I can say at least this much: They can do causal loops, so if you use them to carry any dup'able types, those types will need to be under beat indexes... or maybe some kind of linear reference type (Linear Boolean) that gets consumed upon dereference.
if (x + y) contains static data signals, then the idea that "you can only tell at run time" actually refers to the run time of the static signals, which is compile time.
Whew, this is just the linear logic dual of +. No cause for alarm. :)
maybe you'll also want duals of (x | y) and (x \ y). Running out of space on the keyboard yet? :-p
On Sat, Aug 17, 2013 at 6:15 PM, Ross Angle <rok...@gmail.com> wrote:

if (x + y) contains static data signals, then the idea that "you can only tell at run time" actually refers to the run time of the static signals, which is compile time.

We might be able to tell at compile-time if we *split* on a static signal (or used intro0). But the '+' type loses that tidbit of information. We can feasibly convert from (x|y) ~> (x+y), but never the inverse. An important reason for (x | y) is that we eventually want to 'forget' about most branches without ever properly merging. The outcome of a static choice is accessible to developers, and they don't want to keep considering every possible branch. Another good reason is that disjoin is trivial on (x | y).
So far, I don't believe a dual for a runtime choice is feasible in Awelon. A backwards flowing runtime choice is closer to: 1/(x+y).
If it helps, I consider there to be an isomorphism between (x | y) and ((Static () * x) + (Static () * y)). This means if (x + y) contains enough static signals, it's possible to reconstruct (x | y).
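One way to render the forgetful direction in Haskell is to push the static knowledge to the type level, with a type-level tag standing in for the Static () signals (Choice and forget are invented names):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- (x | y): the tag is part of the type, so it is statically known.
data Choice (tag :: Bool) x y where
  ChooseLeft  :: x -> Choice 'True  x y
  ChooseRight :: y -> Choice 'False x y

-- (x | y) ~> (x + y): forget the static tag, leaving a run-time sum.
forget :: Choice tag x y -> Either x y
forget (ChooseLeft  x) = Left x
forget (ChooseRight y) = Right y

-- The inverse is not definable: Either's tag is a run-time value, so
-- no total function Either x y -> Choice tag x y can recover the
-- type-level tag.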
a program can take ((1 / (1 + 1)) * (1 + 1)) to 1, using the multiplicative elimination primitive. If the input we provide is ((1 / left ()), right ()), then it won't unify properly, so the branch will fail
When we backbeat (-x + x), the continuous-time downstream parts must usually be enough to determine whether the left signal or the right signal is originally active, so that we don't have to nondeterministically branch and then fail.
Now let's consider the isomorphisms between (-a * b) and (- (a * b)) and (a * -b). To make sense of these, we don't necessarily want `-a` to reverse each signal type of `a`.
Instead, we probably want `-a` to be limited so that it only acts on signals `a` which are symmetrical enough that (-a + a) can plug together. As long as we're talking about continuous-time signals, some examples of `a` might include (Atom p t * Co (Atom p t)) and, um, ((1 + Atom p t) * Co (1 + Atom p t)), where (Co _) is a modality for upstream-downstream reversal. (I would use linear negation, but (1 / _) might work too.)
As far as I can tell right now, these design decisions might lead to a working system, but there are many compromises to be made on the way there. The way I see it, we don't get the same nondeterministic model, we have to limit (1 / _) to linear types, and we have to limit (- _) to some kind of self-pluggable types. Does this seem to be in line with what you're thinking?

Warm Regards,
Ross
turninDatBeatAround :: Static (x ~> y) ~> Static (1/y ~> 1/x)
negativeBehavior :: Static (x ~> y) ~> Static ( (-y) ~> (-x) )
As far as I can tell right now, these design decisions might lead to a working system, but there are many compromises to be made on the way there. The way I see it, we don't get the same nondeterministic model, we have to limit (1 / _) to linear types, and we have to limit (- _) to some kind of self-pluggable types. Does this seem to be in line with what you're thinking?
On Tue, Aug 20, 2013 at 12:32 AM, David Barbour <dmba...@gmail.com> wrote:
On Mon, Aug 19, 2013 at 8:33 PM, Ross Angle <rok...@gmail.com> wrote:
If it helps, I consider there to be an isomorphism between (x | y) and ((Static () * x) + (Static () * y)). This means if (x + y) contains enough static signals, it's possible to reconstruct (x | y).
Containing static signals does not mean anything is known about the runtime activity.
myBeh' :: S p () ~> ((Static () * S p ()) + (Static () * S p ()))
myBeh' = myBeh [intro1] left [intro1] right
One can intuit the Static types either being continuously active or never active or just imaginary/unrealized; it doesn't matter because it isn't visible at runtime.
a program can take ((1 / (1 + 1)) * (1 + 1)) to 1, using the multiplicative elimination primitive. If the input we provide is ((1 / left ()), right ()), then it won't unify properly, so the branch will fail

The primitives of shape (1/x * x) would, of course, need to simply output their own input.

beat :: 1 ~> (1/x * x)
backbeat :: (1/x * x) ~> 1

What you describe is the backbeat operation, just instantiated for x = (1+1).
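As an aside, in a lazy functional setting this beat/backbeat pattern looks a lot like ArrowLoop's `loop`, which feeds part of a function's output back around as its own input - an analogy I'm drawing here, not something from the paper:

import Control.Arrow (loop)

-- For functions: loop f b = let (c, d) = f (b, d) in c
-- The fed-back value d plays the role of the (1/x * x) pair: beat
-- introduces it out of nothing; backbeat plugs it back in downstream.
doubleViaLoop :: Int -> Int
doubleViaLoop = loop (\(b, d) -> (d, b * 2))

main :: IO ()
main = print (doubleViaLoop 21)  -- prints 42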
But there is nothing preventing developers from creating systems that input left and output right. It just takes a simple mirroring operation:

backbeatWithTwist = [mirror] second backbeat

At this point, we could input `right ()` and output `1 / left ()`, no problem.
I'm thinking the problem with the unification argument you presented is that it's shallow.

In any case, I really don't follow the "non-determinism" claim; they mention non-determinism in an analogy to logic systems, but I don't believe non-determinism is part of their semantics. Indeed, it seems very deterministic. Relevant to the case you presented: "In the case of sums, the shape of the value, i.e., whether it is tagged with left or right, determines whether path c1 or path c2 is taken."
When we backbeat (-x + x), the continuous-time downstream parts must usually be enough to determine whether the left signal or the right signal is originally active, so that we don't have to nondeterministically branch and then fail.

Yeah, that sounds right. Though, I might want another name for sums. Hmm.

Now let's consider the isomorphisms between (-a * b) and (- (a * b)) and (a * -b). To make sense of these, we don't necessarily want `-a` to reverse each signal type of `a`.

Why not? Perhaps we should think of `-a` as a backwards flowing a.
One can intuit the Static types either being continuously active or never active or just imaginary/unrealized; it doesn't matter because it isn't visible at runtime.

The way I see it, a Static signal is active instantaneously, carrying a single value at compilation time. If it's in a dead program branch, then it's inactive. At run time, its activity is indeterminate (the same as S1), because it doesn't exist at any run time coordinates.
But there is nothing preventing developers from creating systems that input left and output right. It just takes a simple mirroring operation:

backbeatWithTwist = [mirror] second backbeat

At this point, we could input `right ()` and output `1 / left ()`, no problem.
I realize you're talking about an RDP interpretation, but just to be clear about the paper's interpretation: If we try to invoke that program with an input value of ((1 / left ()), left ()), we'll get nondeterministic failure.
Now let's consider the isomorphisms between (-a * b) and (- (a * b)) and (a * -b). To make sense of these, we don't necessarily want `-a` to reverse each signal type of `a`.

Why not? Perhaps we should think of `-a` as a backwards flowing a.

So we can start with a backwards flowing `a` and a forwards flowing `b`, and then we can use the (-a * b) to (a * -b) isomorphism to derive a forwards flowing `a` and a backwards flowing `b`? I don't know how to make sense of that.
On Wed, Aug 21, 2013 at 10:01 AM, Ross Angle <rok...@gmail.com> wrote:

One can intuit the Static types either being continuously active or never active or just imaginary/unrealized; it doesn't matter because it isn't visible at runtime.

The way I see it, a Static signal is active instantaneously, carrying a single value at compilation time. If it's in a dead program branch, then it's inactive. At run time, its activity is indeterminate (the same as S1), because it doesn't exist at any run time coordinates.

Okay, I'll try to represent this based on the way you see things.

In the normal case (e.g. branching on dynamic information), static information - such as the type - must flow down BOTH branches of (x + y). Thus, you should think of `(x + y)` as having a corresponding static product carrying ("sum" * (typeRep(x) * typeRep(y))). By comparison, (x | y) is a true static decision. The instantaneous signal has a nature closer to ("left" * typeRep(x)) or ("right" * typeRep(y)), and we're free to just ignore the branches we aren't taking.

We can think of `intro1` as operating on the 'static signal' that carries type information. Since `x` always has a type signal (proof: we're looking at a type called 'x'), we can always operate on that signal to form `1*x`. In the case of (x + y), we always have the correct underlying typeRep for signal processing, such that applying `[intro1] left` will compute a new type ((1*x)+y). No problems there!
But there is nothing preventing developers from creating systems that input left and output right. It just takes a simple mirroring operation:

backbeatWithTwist = [mirror] second backbeat

At this point, we could input `right ()` and output `1 / left ()`, no problem.
I realize you're talking about an RDP interpretation, but just to be clear about the paper's interpretation: If we try to invoke that program with an input value of ((1 / left ()), left ()), we'll get nondeterministic failure.
What would be non-deterministic about that failure?
Fractionals I understand much better - they're concrete, tangible, just running against the stream.
Well, I imagine you're not actually thinking of using AttachableHoses. You're thinking the Static layer would perform an *effectful* transformation from Face to Face, where the effect is to generate an appropriately typed run time behavior. For instance, you can use (>>>) or (***) or (+++) to compose two Static-layer computations, and the resulting dynamic behavior will automatically represent the corresponding (>>>) or (***) or (+++) composition on the run time network. But (+++) composition is an odd case here: For the run time network it represents two mutually exclusive paths, but for the Static-layer computations it represents a parallel transformation of both sides of a FaceSum (just like (***) acts on both sides of a FaceProduct). In this way, you really do "flow down BOTH branches of (x + y)."

This is pretty interesting stuff. I keep wondering how to represent staged programming in a way that's more seamless with normal programming, and I think this is it. If we extrapolate from your two-stage model, I think we'll arrive at a wonderfully elegant stage-indexed family of sum operators. :)
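To pin down the (+++) point, here's a rough Haskell sketch of that two-stage reading - a Static-layer behavior maps a Face to a Face while emitting run-time code as its effect. Only Face/FaceSum come from the discussion above; Code, Branch, and the compose names are invented:

data Face = FaceAtom String | FaceSum Face Face

data Code = Prim String | Seq Code Code | Branch Code Code

-- A Static-layer computation: transform a Face, emitting run-time code.
type StaticBeh = Face -> (Face, Code)

composeSeq :: StaticBeh -> StaticBeh -> StaticBeh   -- like (>>>)
composeSeq f g a =
  let (b, c1) = f a
      (c, c2) = g b
  in (c, Seq c1 c2)

composeSum :: StaticBeh -> StaticBeh -> StaticBeh   -- like (+++)
composeSum f g (FaceSum a b) =
  let (a', c1) = f a                 -- statically transform the left side...
      (b', c2) = g b                 -- ...AND the right side,
  in (FaceSum a' b', Branch c1 c2)   -- though run time takes only one path
composeSum _ _ face = (face, Prim "ill-typed")  -- sketch: input wasn't a sum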
I'd like to make sure we can extrapolate this into a stage-indexed family of Sigma types. That'll be interesting to think about, since I haven't even figured out what Sigma types need to look like for RDP yet.
A Sigma type can be used for that purpose, and I've done so in these discussions (even going so far as to use "SuchThat" as the notation). A Sigma type is an indexed sum, and its values let you extract two pieces of information:
- Which branch of the sum you're seeing (the index).
- The content of that branch.
The second value's *type* can depend on the first *value*.
Sigma types are a close counterpart to Pi types, which are indexed products.
The question I've been pondering is, where does the index signal live? For an RDP (Atom hither t () + Atom yonder t ()) signal, the index information is sometimes hither and sometimes yonder, but it's never available in both places.
I've come up with an answer. We can have some signals which supposedly represent run time information, but which don't actually live in any partition at run time. Instead these are abstract global statements about the run time phenomena, which are useless at run time but useful for a dependent typechecker. We can represent (a + b) as (Sigma (x : OmniBool) ~ if x then a else b), and sometimes we can use normal signals in the index type too (or instead): (Sigma (x : Atom elsewhere t Boolean) ~ if x then a else b).
Note that I'm not using a tacit syntax here--the x is a variable that holds a signal--but I've done some sketching that makes me think it isn't hard to avoid this.
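For the single-stage reading, here's roughly how that representation looks in Haskell, with OmniBool approximated by a type-level Bool plus a singleton witness (all the names are illustrative):

{-# LANGUAGE DataKinds, GADTs, TypeFamilies #-}

-- Singleton witness standing in for OmniBool.
data SBool (x :: Bool) where
  STrue  :: SBool 'True
  SFalse :: SBool 'False

type family If (x :: Bool) a b where
  If 'True  a b = a
  If 'False a b = b

-- (Sigma (x : Bool) ~ if x then a else b)
data SumAsSigma a b where
  MkSigma :: SBool x -> If x a b -> SumAsSigma a b

fromEither :: Either a b -> SumAsSigma a b
fromEither (Left  a) = MkSigma STrue  a
fromEither (Right b) = MkSigma SFalse b

toEither :: SumAsSigma a b -> Either a b
toEither (MkSigma STrue  a) = Left  a
toEither (MkSigma SFalse b) = Right b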
For multi-stage programming, every Atom and OmniBool would be marked with a stage. An OmniBool is not a static signal; its stage is the same as its fellow Atoms. If you have a value of a Sigma type, there will be two ways to decompose it:
a) Given a stage, get the incomplete index. This will be a value that's partially uninformative because future stages haven't been decided yet.
b) Given a stage and an index completion, get the projection. An index completion represents whatever information could possibly fill in the gaps of the incomplete index.
In the one-stage case, this collapses back to the usual Sigma semantics: We always "give" the same stage, and there's never any incompleteness since there are no future stages to shroud in mystery.
In the two-stage case, for binary sums, this collapses to your static choice operator:
a1) Given stage 1, get a useless unit value.
a2) Given stage 2, get a boolean index representing which branch you're seeing.
b1) Given stage 1 and a boolean, get whichever branch will correspond to that.
b2) Given stage 2 and a useless unit value, get the branch you're on.
The full generality of this operator would be tough to implement. I would want to use a technique based on "Observational Equality, Now!" to formalize the incomplete indexes and index completions in terms of the things they could potentially be equal to. However, I'm afraid that will be very cumbersome to work with, much like my extensible-sums-based approach to the expression problem in Era.
Hmm... extensible staged sums. :) I suppose that'll be my next step into the abyss. :-p
-Ross
Note that I'm not using a tacit syntax here--the x is a variable that holds a signal--but I've done some sketching that makes me think it isn't hard to avoid this.
The question I've been pondering is, where does the index signal live? For an RDP (Atom hither t () + Atom yonder t ()) signal, the index information is sometimes hither and sometimes yonder, but it's never available in both places.
I've come up with an answer. We can have some signals which supposedly represent run time information, but which don't actually live in any partition at run time. Instead these are abstract global statements about the run time phenomena, which are useless at run time but useful for a dependent typechecker.
We can represent (a + b) as (Sigma (x : OmniBool) ~ if x then a else b), and sometimes we can use normal signals in the index type too (or instead): (Sigma (x : Atom elsewhere t Boolean) ~ if x then a else b).
Note that I'm not using a tacit syntax here--the x is a variable that holds a signal--but I've done some sketching that makes me think it isn't hard to avoid this.
In the two-stage case, for binary sums, this collapses to your static choice operator:
a1) Given stage 1, get a useless unit value.
a2) Given stage 2, get a boolean index representing which branch you're seeing.
b1) Given stage 1 and a boolean, get whichever branch will correspond to that.
b2) Given stage 2 and a useless unit value, get the branch you're on.
The full generality of this operator would be tough to implement.
Last night I put my sketch here:
https://github.com/rocketnia/underreact/blob/gh-pages/notes/reactive-staging.txt
maybe we don't need type expressions, just (1 -> #type) expressions. Then perhaps we could write this:
(#typeSigma (#bool s) (#if s x y))
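A possible reading of that, where a typerep is ordinary data and a dependent type is a function from an index value to a typerep (Ty and its constructors are made up, and the index is hardcoded to Bool to keep it small):

data Ty = TyBool | TyUnit | TySigma Ty (Bool -> Ty)

-- (#typeSigma (#bool s) (#if s x y)), for hypothetical typereps x and y:
typeSigmaExample :: Ty -> Ty -> Ty
typeSigmaExample x y = TySigma TyBool (\s -> if s then x else y)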
Oh my gosh....
On Mon, Aug 26, 2013 at 8:21 PM, Ross Angle <rok...@gmail.com> wrote:
We can represent (a + b) as (Sigma (x : OmniBool) ~ if x then a else b), and sometimes we can use normal signals in the index type too (or instead): (Sigma (x : Atom elsewhere t Boolean) ~ if x then a else b).
Note that I'm not using a tacit syntax here--the x is a variable that holds a signal--but I've done some sketching that makes me think it isn't hard to avoid this.
For tacit representation of behavior types, the best I've come up with is behaviors that transform typereps... which are modeled as static values in the type system. I haven't figured out why, but I don't really find this satisfying. Let me know if you have any other ideas. :)
In the two-stage case, for binary sums, this collapses to your static choice operator:
a1) Given stage 1, get a useless unit value.
a2) Given stage 2, get a boolean index representing which branch you're seeing.
b1) Given stage 1 and a boolean, get whichever branch will correspond to that.
b2) Given stage 2 and a useless unit value, get the branch you're on.

The full generality of this operator would be tough to implement.
I'm not sure it could be implemented in a concrete way. In which partition would the observer be? I suppose you're talking about the abstract implementation?
Index -- Get an (Omni s2 Bool).
Projection at s1 -- Given an (Omni s1 Bool), get the s1 parts of the branch. (This input is our prediction of (Omni s2 Bool).)
Projection at s2 -- Get the s2 parts of the branch.
Projection at s3 -- Get the s3 parts of the branch.
Index -- Get an (Atom s1 p1 Bool * Atom s2 p2 Bool * Atom s3 p3 Bool).
Projection at s1 -- Given an (Omni s1 Bool * Omni s1 Bool), get the s1 parts of the branch. (This input is our prediction of (Atom s2 p2 Bool * Atom s3 p3 Bool).)
Projection at s2 -- Given an (Omni s2 Bool), get the s2 parts of the branch. (This input is our prediction of (Atom s3 p3 Bool).)
Projection at s3 -- Get the s3 parts of the branch.
Index -- Get an (Omni s2 Bool).
Projection at s1 -- Given an (Omni s1 Bool), get the s1 parts of the branch.
Projection at s2..s3 -- Get the s2 and s3 parts of the branch.
Index -- Get an (Omni s2 Bool).
Projection at s1 -- Given an (Omni s1 Bool), get the entire branch. We cannot use this operation unless we promise our computation becomes dead code after stage s1.
Projection at s2 -- Get the entire branch. We cannot use this operation unless we promise our computation becomes dead code after stage s2.
Projection at s3 -- Get the entire branch. We cannot use this operation unless we promise our computation becomes dead code after stage s3 (which is trivial if we only have these three stages).
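Roughly, this last variant could be pictured as a record of eliminators, as in this Haskell sketch (Omni is a phantom-staged wrapper; the dead-code promises live only in comments, and the dependency of the branch type on the index is glossed over):

data S1
data S2
data S3

newtype Omni s a = Omni a   -- staged, partition-less value

data StagedSum a = StagedSum
  { index  :: Omni S2 Bool
  , projS1 :: Omni S1 Bool -> a   -- prediction in; promise: dead after s1
  , projS2 :: a                   -- promise: dead after s2
  , projS3 :: a                   -- promise: dead after s3 (trivial here)
  }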
getSourceCode :: (a -> {s2..s3} b) -> {s1..s1} (Atom s1 p1 (a -> {s2..s3} b))
Last night I put my sketch here:
https://github.com/rocketnia/underreact/blob/gh-pages/notes/reactive-staging.txt

I've been trying to get a handle on this, but I'm having some difficulty following the significance of each stage constraint. I think I got lost starting at #procureDeadAtom. Some extra inline documentation might help.
Sigma type expressions typically have local variables, so the use of typereps doesn't mean you're avoiding variables overall.
could you describe the primary reasons you're pursuing tacit programming rather than programming with variables?
You're asking about partitions. The way I'm thinking about this, whenever we have a pair of stages where one precedes the other, the later stage lives in a particular partition of the earlier one. Say partition p2 lives in partition p1 of stage s1. When we perform operations in stage s2, there's a side effect at stage s1, which is to generate program code that's observable by partition p1.
Projection at s1 -- Given an (Omni s1 Bool * Omni s1 Bool), get the s1 parts of the branch. (This input is our prediction of (Atom s2 p2 Bool * Atom s3 p3 Bool).)
my number one concern is that it might not make sense with closures.
How could we get the source code of the encapsulated signal?
Now that I've written this email, I think I might be ready to make some of those drastic changes. For posterity, the link above originally pointed to this version (and still does, for now): https://github.com/rocketnia/underreact/blob/810d0b5478cd3492412c0ed1dd0f20ef90b72e1d/notes/reactive-staging.txt.
On Sat, Aug 31, 2013 at 3:57 PM, Ross Angle <rok...@gmail.com> wrote:

Sigma type expressions typically have local variables, so the use of typereps doesn't mean you're avoiding variables overall.
True. I've been thinking a bit about how to model 'local variables' for tracking this sort of dependent type analysis (in an environment like Awelon's, which lacks scope, and RDP, which lacks identity). Best I can come up with is to maintain an affine typed uniqueness-source in the environment as a name generator, and to somehow bind proof elements to each signal.
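For a concrete toy: the uniqueness source can be a splittable name generator, where affinity (use it at most once) is left to discipline rather than enforced by these types:

data Unique = Unique { prefix :: [Int], counter :: Int }

newSource :: Unique
newSource = Unique [] 0

freshName :: Unique -> (String, Unique)
freshName (Unique p n) = (show (n : p), Unique p (n + 1))

-- Consuming a source yields two sources with disjoint name spaces,
-- so the two halves can never mint the same name.
splitSource :: Unique -> (Unique, Unique)
splitSource (Unique p n) = (Unique (0 : n : p) 0, Unique (1 : n : p) 0)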
could you describe the primary reasons you're pursuing tacit programming rather than programming with variables?

I put this in another e-mail.

You're asking about partitions. The way I'm thinking about this, whenever we have a pair of stages where one precedes the other, the later stage lives in a particular partition of the earlier one. Say partition p2 lives in partition p1 of stage s1. When we perform operations in stage s2, there's a side effect at stage s1, which is to generate program code that's observable by partition p1.

I'm interpreting this as: Omni is only observed by the compiler or typechecker, which happens somewhere.
Projection at s1 -- Given an (Omni s1 Bool * Omni s1 Bool), get the s1 parts of the branch. (This input is our prediction of (Atom s2 p2 Bool * Atom s3 p3 Bool).)
We're making a prediction in s1 about the future states in s2 and s3. What happens when we predict wrongly?
my number one concern is that it might not make sense with closures. How could we get the source code of the encapsulated signal?

What do you mean the 'source code' of a signal?
One way to model closures is a static behavior coupled with a record of signals - a "poor man's closure". If we're feeling insecure about this, we could make the coupling tight using either a sealer/unsealer pair or a dedicated primitive (we don't really need identity for a closure). The only source code associated with such signals would be the data plumbing.

Hmm. I wonder if I should provide primitive support for this pattern. Maybe something like:

capture :: ((x*y)~>z)*x ~> (y~>z)

But I kind of like how my poor-man's closure keeps everything visible in the type system; makes it a lot easier to understand the limits and security implications. I'll need to reconsider adding this primitive much later, after everything else is working.
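Read with (~>) flattened to plain functions, capture is just partial application over a pair, and the poor man's closure is the still-visible pair (a sketch; PoorClosure and applyClosure are invented names):

-- capture :: ((x*y)~>z) * x ~> (y~>z), with (~>) as plain functions:
capture :: ((x, y) -> z, x) -> (y -> z)
capture (f, x) = \y -> f (x, y)

-- The "poor man's closure": behavior plus captured record, both visible.
type PoorClosure x y z = ((x, y) -> z, x)

applyClosure :: PoorClosure x y z -> y -> z
applyClosure (f, x) y = f (x, y)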
Now that I've written this email, I think I might be ready to make some of those drastic changes. For posterity, the link above originally pointed to this version (and still does, for now): https://github.com/rocketnia/underreact/blob/810d0b5478cd3492412c0ed1dd0f20ef90b72e1d/notes/reactive-staging.txt.
I'll take another look later, then.
What do you mean the 'source code' of a signal?
For the two-stage case you originally described (static and dynamic), we don't really need getSourceCode. Our entire static stage generates a single program for the dynamic stage. We compile a program and then (at some point) execute it, with no programmable steps in between.On the other hand, you represent dynamic behaviors as typed atoms. I think source code representations would have pretty much exactly the same type, and I think some of the same operations and implementations would apply to both. Underreact lacks dynamic behaviors, and I think this might be a great way to approach them.
It's interesting that you're encountering a desire to have closures at the static level, but it makes complete sense because you're trying to have open-system composition at that level too.
A real solution would probably involve some kind of cryptography so as to distribute data and capabilities into the outside world while keeping them encapsulated. Are you implying this with the sealer/unsealer pairs you mentioned, or would those work by some other mechanism?
This topic makes me want to mention something slightly related. The multi-stage programming model would potentially support function types as a concept *derived* from staged Sigmas:
Still haven't gotten around to it. :) Last weekend was my dad's birthday, and then the "Mighty No. 9" Kickstarter sapped all my attention. (Can't wait to see where that goes....)
I can't say I understand all the dependent type stuff. Hopefully I'll have an a-ha moment at some point. :)

I'll move on to the stuff I do understand. Hope you don't mind.
On Sat, Sep 7, 2013 at 6:11 PM, Ross Angle <rok...@gmail.com> wrote:
What do you mean the 'source code' of a signal?

For the two-stage case you originally described (static and dynamic), we don't really need getSourceCode. Our entire static stage generates a single program for the dynamic stage. We compile a program and then (at some point) execute it, with no programmable steps in between.

On the other hand, you represent dynamic behaviors as typed atoms. I think source code representations would have pretty much exactly the same type, and I think some of the same operations and implementations would apply to both. Underreact lacks dynamic behaviors, and I think this might be a great way to approach them.

I tend to think of 'staging' this way: there is a static stage, then there is a dynamic stage. But, there are some dynamic behaviors, and they can have their static stage at runtime.
You seem to be thinking more in terms of a multi-stage static computation, perhaps with later stages getting close to a dynamic configuration that will somehow snap safely into place.
Composition and link time program logic
                  |
Run time program logic (the meat of the application)
                / | \
Dynamically generated code for storage or distribution
Personally, I'll need to take your word for it that such a thing is even plausible. Well, yours and Oleg's and Adam Chlipala's and Matt McClelland's. ;)

But I have this nagging fear.

My main motivations for dynamic behavior haven't changed since I described it a couple years ago: hot-pluggable extensibility, interacting with lazy or incremental systems, brokering connections or smart networks, optional control capabilities for mutable views, a form of linking to ad-hoc resources in a dynamic environment. The common factor in all these cases is that the dynamic behavior depends not only on runtime inputs, but also on runtime source code from sources that you might not even know about statically.
My fear is this: we will build a powerful, dependently typed framework around everything. And it will be good for those cases where we might want dynamic behaviors based on runtime inputs. But that pattern won't really fit the motivating use-cases for dynamic behavior, so the best we'll accomplish is modeling some type-safe interpreters and compilers for ad-hoc remote source code. And while that might be cool (providing a smaller trusted base) it won't really be useful.
OTOH, perhaps I'm underestimating the number of use-cases for dynamic behaviors based on observing runtime values and building the code locally.
It's interesting that you're encountering a desire to have closures at the static level, but it makes complete sense because you're trying to have open-system composition at that level too.
Well, that's part of it. But I think the tipping point was when I realized I can provide static signal inputs to a dynamic behavior, i.e. like configuration variables. I imagine this could make high-performance dynamic behaviors quite feasible.
A real solution would probably involve some kind of cryptography so as to distribute data and capabilities into the outside world while keeping them encapsulated. Are you implying this with the sealer/unsealer pairs you mentioned, or would those work by some other mechanism?
Awelon's sealer/unsealer pairs currently are only good for static computations. If I figure out how to 'serialize' a sealed value for partition crossing (without unsealing it first) perhaps I'll address cryptography then. :)
Capabilities could be modeled by encrypting code or by using a GUID to represent stored code. It might also be interesting if capabilities can encapsulate some signals. I'm still thinking on how to best model them in Awelon. What I have decided is that some partitions will provide ambient authority, and that ambient authority partitions will typically have an associated 'sandbox' partition that can only manipulate the ambient partition with an associated capability. I'm doing this mostly because ambient authority is a better fit for the AVM and compiler.
Still haven't gotten around to it. :) Last weekend was my dad's birthday, and then the "Mighty No. 9" Kickstarter sapped all my attention. (Can't wait to see where that goes....)
Mighty 9 looks nice. An obvious Mega Man clone. Though, I guess when you've got one of the original Mega Man lead developers... maybe it doesn't count as a clone.

I'll look into picking it up if it gets a PS3 release. :)