> Turing completeness in the *spatial* dimension.

I think this arises because you're using Haskell-like polymorphism.
In Era, I'm avoiding this scenario by using a system inspired by ML modules. All type parameters must be provided upon importing a definition, and each definition can only import a finite number of others. Dependency loops are prohibited; a definition can only be *installed* in a knowledgebase if all its imports are already there. (For this reason, it will usually be less frustrating if a library comes bundled with its dependencies.)
In Era, these type parameters are actually given as executable (terminating) expressions which can use previous imports in the list. This makes it possible to import a type and then use it in another import.
The whole business is very applicative in Era, but I suspect a tacit version is technically possible. Take what would usually be syntactic classes (e.g. "term", "polymorphic type annotation") and treat them as data types for some non-polymorphic base operators. This could be challenging because your basic structural operations are polymorphic (e.g. preserving the tail of a stack), so they won't be available.
> intro1 :: x ~> (Unit * x) -- PRIM
For instance, this type expression might become something like this:
"x" typeVar typeUnit "x" typeVar typeTimes typeAnytimeFn
where these primitives have the following non-polymorphic types:
typeVar :: (Static String :&: SyntaxStack) ~> SyntaxStack
typeUnit :: SyntaxStack ~> SyntaxStack
typeTimes :: SyntaxStack ~> SyntaxStack
typeAnytimeFn :: SyntaxStack ~> SyntaxStack
This uses names, so maybe it's not very exciting....
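To make the sketch concrete, here's roughly how I picture those words behaving, as plain Haskell. The Ty representation, the list-based SyntaxStack, and the curried typeVar (instead of the (Static String :&: SyntaxStack) form above) are all simplifications I'm inventing for illustration:

    -- Invented names; SyntaxStack modeled as a plain list of types.
    data Ty = TyVar String
            | TyUnit
            | TyTimes Ty Ty      -- (a * b)
            | TyAnytimeFn Ty Ty  -- (a ~> b)
      deriving Show

    type SyntaxStack = [Ty]

    typeVar :: String -> SyntaxStack -> SyntaxStack
    typeVar x s = TyVar x : s

    typeUnit :: SyntaxStack -> SyntaxStack
    typeUnit s = TyUnit : s

    typeTimes :: SyntaxStack -> SyntaxStack
    typeTimes (b : a : s) = TyTimes a b : s
    typeTimes _           = error "typeTimes: stack underflow"

    typeAnytimeFn :: SyntaxStack -> SyntaxStack
    typeAnytimeFn (b : a : s) = TyAnytimeFn a b : s
    typeAnytimeFn _           = error "typeAnytimeFn: stack underflow"

    -- "x" typeVar typeUnit "x" typeVar typeTimes typeAnytimeFn
    example :: SyntaxStack
    example =
      typeAnytimeFn (typeTimes (typeVar "x" (typeUnit (typeVar "x" []))))
      -- [TyAnytimeFn (TyVar "x") (TyTimes TyUnit (TyVar "x"))]
      -- i.e. x ~> (Unit * x)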
> A) Awelon also has a simple feature to support rich compile-time metaprogramming (based on static reflection and introspection) so this sort of 'infinite expansion' could potentially happen by accident, e.g. as a consequence of modeling aspect oriented programming.

What kind of static reflection and introspection do you have in mind?
> B) One of my goals for Awelon is to serve as an effective RDP distribution language - i.e. so clients can send RDP code to mutually distrustful services and vice versa. Part of being effective (IMO) is being robust and adaptable: I am not going to give up compile-time metaprogramming.

If the metaprogramming layer is resolved by the service rather than the client, then it's part of the service's interface, and personally I might want to account for that explicitly and treat it as a full feature.
> One of the more uniformly viable solutions is to enforce a resource quota, e.g. to introduce a monitor or sandbox (VM, process, etc.) that tracks the subprogram's use of CPU, network, and space resources, and to terminate the subprogram if it exceeds its quota.

> Unfortunately, I don't know any resource control model for distrusted computations that is *elegant* with respect to the original computation theory. (If you know such a theory, I beg, tell me!) Adding quotas seems ugly no matter the programming model.

> I assume that, even if I do not model quotas, they'll still exist. If that is the case, shouldn't I model quotas - with no regard for elegance?
You chose to model total failure for errors in distributed computation because it fit with the need to model disruption. Since disruption was necessary, you encouraged building systems that were tolerant of glitches and node failures. How is this any different? The distributed program can semantically be a "discovered" network node which is failing because it's under strain.
It may be ugly for the programming model, but it fits nicely into the physical model.
On Thu, Aug 15, 2013 at 5:24 PM, Ross Angle <rok...@gmail.com> wrote:
>> Turing completeness in the *spatial* dimension.
>
> I think this arises because you're using Haskell-like polymorphism.

I believe it happens even without polymorphism. With static operations (due to my more expressive 'first'):

    [copy apply] [copy apply] apply  =~=  (\x.(x x)) (\x.(x x))
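(For reference, the same self-application can be written in Haskell with no polymorphism at all, provided an explicit recursive type - a standard trick; the names below are invented:)

    -- Self-application via a recursive newtype; no polymorphism is
    -- involved once `a` is fixed to any concrete type.
    newtype Rec a = Rec { unRec :: Rec a -> a }

    selfApply :: Rec a -> a
    selfApply x = unRec x x            -- the [copy apply] step

    omega :: a
    omega = selfApply (Rec selfApply)  -- loops forever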
And, even before then I had the same issue with dynamic behaviors, I just wasn't focusing on it.
beval :: (BDynamic b b', SigInP p x, SigInP p y)
      => DT -> b (S p (b' x y) :&: x) (y :|: S p ())
beval = bevalx . bdelay
I've decided to drop the polymorphism I described earlier using `=|` extensible definitions.
I'm just giving up that form of extensibility. I was finding myself reluctant to use anything that depends on definitions or shared names, since such wouldn't be ideal for code distribution. My new philosophy for modules is: no powers beyond dumb substitution of definitions, basically as a form of compression. (No recursion, etc.)
At the moment I am contemplating how I should utilize static choice as a basis for modularity. A static `+` type seems in many ways like it shouldn't be special, yet in other ways... well, we don't actually need to handle all the types; instead, most paths are statically negated, and developers shouldn't even think about them.
(I wonder whether it's formally sane to feed info back from a bad search path to adjust the next one, e.g. using negative types across the static version of `+`.)
> In Era, I'm avoiding this scenario by using a system inspired by ML modules. All type parameters must be provided upon importing a definition, and each definition can only import a finite number of others. Dependency loops are prohibited; a definition can only be *installed* in a knowledgebase if all its imports are already there.
I believe Awelon's simplified module system is under similar constraints.
> For instance, this type expression might become something like this:
>
> "x" typeVar typeUnit "x" typeVar typeTimes typeAnytimeFn
Might be useful while building my AVM. :)
>> A) Awelon also has a simple feature to support rich compile-time metaprogramming (based on static reflection and introspection) so this sort of 'infinite expansion' could potentially happen by accident, e.g. as a consequence of modeling aspect oriented programming.
>
> What kind of static reflection and introspection do you have in mind?

You can ask: is this a `*` type? is this a `+` type? is it an integer? is this Static Int greater than zero? is this Static equal to this string? etc.. I suppose it's just introspection. (You can't change anything like you might with reflection.) Anyhow, based on answers, different behaviors might be computed.
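As a rough sketch of the flavor - in Haskell, with invented names; the actual AVM representation will differ - those questions read as predicates over a reified type description:

    -- Invented names; a reified description of static types.
    data TyRep = Prod TyRep TyRep        -- a (*) type
               | Sum TyRep TyRep         -- a (+) type
               | IntTy                   -- an integer
               | StaticInt Integer
               | StaticString String

    isProd, isSum :: TyRep -> Bool
    isProd (Prod _ _) = True
    isProd _          = False
    isSum (Sum _ _) = True
    isSum _         = False

    staticIntPositive :: TyRep -> Bool
    staticIntPositive (StaticInt n) = n > 0
    staticIntPositive _             = False

    staticEqualsString :: String -> TyRep -> Bool
    staticEqualsString s (StaticString t) = s == t
    staticEqualsString _ _                = False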
>> B) One of my goals for Awelon is to serve as an effective RDP distribution language - i.e. so clients can send RDP code to mutually distrustful services and vice versa. Part of being effective (IMO) is being robust and adaptable: I am not going to give up compile-time metaprogramming.
>
> If the metaprogramming layer is resolved by the service rather than the client, [...]

Awelon's metaprogramming is modeled within Awelon itself. It doesn't depend on any features unique to a client or service. It does use AVM primitives.
(second e-mail):
> You chose to model total failure for errors in distributed computation because it fit with the need to model disruption. Since disruption was necessary, you encouraged building systems that were tolerant of glitches and node failures. How is this any different? The distributed program can semantically be a "discovered" network node which is failing because it's under strain.

If I understand your point correctly, you are right that I could treat an unforeseen quota violation as a form of node loss or network disruption. RDP is tolerant to such things. Unfortunately, in general, humans aren't. When a process is smashed because resource consumption grows out of control, resources are wasted and nobody is satisfied. If this is a preventable concern, I would like to prevent it.
> It may be ugly for the programming model, but it fits nicely into the physical model.

Hmm. We do have logics suitable for physical models. I suppose we could model quotas in terms of dividing a static number among different operations. OTOH, we'd lose a lot of equational reasoning.
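As a minimal sketch of that "dividing a static number" idea (Haskell, all names invented):

    -- Each step spends fuel; a computation that overruns its share
    -- is cut off. Composition threads whatever budget remains.
    type Fuel = Int
    type Quota a = Fuel -> Maybe (a, Fuel)

    spend :: Fuel -> a -> Quota a
    spend n a f = if f >= n then Just (a, f - n) else Nothing

    andThen :: Quota a -> (a -> Quota b) -> Quota b
    andThen q g f = case q f of
      Nothing      -> Nothing          -- quota violated upstream
      Just (a, f') -> g a f'

    -- The ugliness: equational reasoning now has to track fuel, so
    -- two extensionally equal programs can differ observably when
    -- the budget is tight.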
> And, even before then I had the same issue with dynamic behaviors, I just wasn't focusing on it.

Sirea's dynamic behavior types can use Haskell's polymorphism directly.

    beval :: (BDynamic b b', SigInP p x, SigInP p y)
          => DT -> b (S p (b' x y) :&: x) (y :|: S p ())
    beval = bevalx . bdelay

Right here, the `x` and `y` types are determined by Haskell's polymorphism. The beval definition is parameterized over these types, but Haskell makes the "forall" implicit.
> I'm just giving up that form of extensibility. I was finding myself reluctant to use anything that depends on definitions or shared names, since such wouldn't be ideal for code distribution. My new philosophy for modules is: no powers beyond dumb substitution of definitions, basically as a form of compression. (No recursion, etc.)
I'm glad. XD; Hopefully dynamic behaviors and such can handle whatever use cases you had in mind.
> At the moment I am contemplating how I should utilize static choice as a basis for modularity. A static `+` type seems in many ways like it shouldn't be special, yet in other ways... well, we don't actually need to handle all the types; instead, most paths are statically negated, and developers shouldn't even think about them.
I could make random recommendations, but they'd just lead to Lathe's self-organizing precedence system... or Era. Usually Era. :-p
I think you might be able to get negative and fractional types to work for static signals. However, I think you might want to model * and + with different types than :&: and :|: just so you can avoid awkward differences between these two type systems.
> You can ask: is this a `*` type? is this a `+` type? is it an integer? is this Static Int greater than zero? is this Static equal to this string? etc.. I suppose it's just introspection. (You can't change anything like you might with reflection.) Anyhow, based on answers, different behaviors might be computed.

Yeah, that's reasonable.

Hewitt's odd paper "Mathematics self-proves its own consistency" (and the resulting LtU discussion) has got me thinking about the potential theoretic value of a language that can't pattern-match on its own types. After all, mathematical paradoxes are frequently discussed in terms of first-class natural-language sentences, and the very idea of formalizing the paradox makes the formal system suspect. When we're dealing with natural-language sentences and proofs, it doesn't make much sense to do structural induction on their symbols. :)
> Awelon's metaprogramming is modeled within Awelon itself. It doesn't depend on any features unique to a client or service. It does use AVM primitives.

So your point is, you're considering both cases? Awelon's compiler running on the client and Awelon's compiler running on the server? This doesn't impact what I was saying, I think.
> If I understand your point correctly, you are right that I could treat an unforeseen quota violation as a form of node loss or network disruption. RDP is tolerant to such things. Unfortunately, in general, humans aren't. When a process is smashed because resource consumption grows out of control, resources are wasted and nobody is satisfied. If this is a preventable concern, I would like to prevent it.

Ah, well you already model static latency. I wonder if there'd be similar ways to model memory footprint and energy use.
Maybe negative and fractional types could help with energy constraints, due to their reversibility.
Well, I think that's what the linear and rational types would amount to. I suppose these suggestions just parrot what you've already been thinking about. :)

(I hope I can use "rational types" as shorthand for "negative and fractional types.")
On Thu, Aug 15, 2013 at 9:55 PM, Ross Angle <rok...@gmail.com> wrote:
>> And, even before then I had the same issue with dynamic behaviors, I just wasn't focusing on it.
>
> Sirea's dynamic behavior types can use Haskell's polymorphism directly.
>
>     beval :: (BDynamic b b', SigInP p x, SigInP p y)
>           => DT -> b (S p (b' x y) :&: x) (y :|: S p ())
>     beval = bevalx . bdelay
>
> Right here, the `x` and `y` types are determined by Haskell's polymorphism. The beval definition is parameterized over these types, but Haskell makes the "forall" implicit.

It doesn't matter what `x` and `y` are. They could be concrete unit types, and a dynamic behavior could still look up another copy of itself somewhere, and beval it.
I hope so too. It seems they should. Yet I'm still trying to figure out a recursive `copy` without recursion. :)
> I think you might be able to get negative and fractional types to work for static signals. However, I think you might want to model * and + with different types than :&: and :|: just so you can avoid awkward differences between these two type systems.

Awelon is not built upon Sirea. Awelon will be a full, compiled language. I'll use Haskell to bootstrap it, but it's all fresh code. When I started talking about Awelon, I started using * and +, because those colons are unreasonably painful to enter. (I only used them before due to Haskell's type operator requirements.)
>> Awelon's metaprogramming is modeled within Awelon itself. It doesn't depend on any features unique to a client or service. It does use AVM primitives.
>
> So your point is, you're considering both cases? Awelon's compiler running on the client and Awelon's compiler running on the server? This doesn't impact what I was saying, I think.

Perhaps I missed your point. Compiling Awelon on either side should not impact the *outcome* of the metaprogramming (static operations shouldn't have that kind of side-effect). However, it does affect where costs are distributed. If you were alluding to the latter, yeah, it could be treated as a service. OTOH, most of the time you send code to a server it's because you want it to execute on the server... e.g. as a software agent to make some decisions or filter junk on your behalf.
> Ah, well you already model static latency. I wonder if there'd be similar ways to model memory footprint and energy use.

I've not found any that don't seem hackish, and they also tend to make equational reasoning mostly useless. (E.g. `first f >>> first g == first (f >>> g)` is an arrow law, but does it still hold if you include energy and memory footprints?)
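A toy model makes it easy to see where the law bends (Haskell, invented names): pair every behavior with a static cost, and compose costs additively.

    -- A function annotated with a static cost.
    data Costed a b = Costed Int (a -> b)

    compC :: Costed a b -> Costed b c -> Costed a c   -- plays >>>
    compC (Costed m f) (Costed n g) = Costed (m + n) (g . f)

    firstC :: Costed a b -> Costed (a, c) (b, c)      -- plays first
    firstC (Costed m f) = Costed m (\(a, c) -> (f a, c))

    -- Both sides of `first f >>> first g == first (f >>> g)` come out
    -- as Costed (m + n) ..., so the law survives here. Give firstC its
    -- own overhead k and the left side costs m + n + 2k while the
    -- right costs m + n + k: the law breaks.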
Rational type? You mean like 12.3 is 123/10? =)
Maybe we should call them 'salmon types' because they swim upstream?
Perhaps we can store certain behaviors in resources, but only if their type makes it clear they can diverge.
I'm doing something obliquely similar to this in Era, where I have a catch-all wrapper type "Sink" for untyped programming. It can only wrap certain value types, and (Sink -> Sink) isn't one of them since it would allow nontermination. However, (Sink -> Partial Sink) is just fine.
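One way to read that Partial type, sketched in Haskell with invented names, is the usual delay monad: divergence becomes a legitimate value of the result type rather than a hidden possibility.

    -- A result that may take any number of steps, possibly forever.
    data Partial a = Now a | Later (Partial a)

    -- General recursion fits in the result type without any claim
    -- of totality:
    forever :: Partial a
    forever = Later forever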
> I hope so too. It seems they should. Yet I'm still trying to figure out a recursive `copy` without recursion. :)

I think you could accomplish `copy` using primitive recursion and/or induction-recursion. (I don't really know the difference.... I think maybe induction-recursion refers to the definition syntax, and primitive recursion refers to the level of expressiveness.) If you define an inductive-recursive function that pattern-matches on (a :&: b), you get access to `a`, `b`, and also `recurOnA` and `recurOnB`. You don't get to recur on the full value.
...Actually, now that I've taken another look at that paper, it seems like they introduced negative and fractional types as an extension of a less exotic system where product types and sum types were traditional. So I suppose they might be the same product and sum operators after all. That's nice. :)
What I said was "If the metaprogramming layer is resolved by the service rather than the client, then it's part of the service's interface, and personally I might want to account for that explicitly and treat it as a full feature."Suppose a vending machine provides a service for cooking custom recipes, and it expects these recipes as Awelon programs and compiles them on the fly. Then perhaps I'm on the run from some outrageous adversary and I simply have to solve a convoluted logic puzzle in order to unravel the secrets of the place I'm trapped in. Fortunately, I remember Awelon's compile time semantics happen to feature a constraint solver. So I sneak over to the vending machine and ask it to bake me a cake whose icing describes the solution.Much to my dismay, the baking process takes a long time. Too long. By the time it's complete, I've already been captured and it doesn't matter anymore. Wouldn't it have saved me some agony if I knew I couldn't rely on the vending machine in the first place?
Hmm. Should that be a primitive, or defined in terms of a fixpoint combinator?
inductionRecursionOverTree :
  -- For each construction that's based on an unlabeled binary tree...
  (constructionType : Type) -> (p : Tree -> constructionType) ->
  -- ...if the construction works for a leaf...
  p treeLeaf ->
  -- ...and if we have an inductive step for a branch...
  ((a : Tree) -> p a -> (b : Tree) -> p b -> p (treeBranch a b)) ->
  -- ...then the construction works for any tree.
  (x : Tree) -> p x
inductionOverTree = inductionRecursionOverTree Prop
recursionOverTree = inductionRecursionOverTree Type
countLeaves : Tree -> Int
countLeaves =
  recursionOverTree (\_ -> Int)
    1
    (\a countOfA b countOfB -> countOfA + countOfB)
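The non-dependent shadow of this is just the familiar fold (Haskell sketch, my names), which hands you the subtrees and their recursive results but never a handle for recurring on the whole value:

    data Tree = TreeLeaf | TreeBranch Tree Tree

    foldTree :: r -> (Tree -> r -> Tree -> r -> r) -> Tree -> r
    foldTree leaf branch = go
      where
        go TreeLeaf         = leaf
        go (TreeBranch a b) = branch a (go a) b (go b)

    countLeaves :: Tree -> Int
    countLeaves = foldTree 1 (\_ countOfA _ countOfB -> countOfA + countOfB)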
I agree knowing the cake is a lie would save you some agony. And in a sense, any service that accepts code will have the appropriate 'block' type in its interfaces. But I'm still not sure what you mean by "account for that explicitly and treat it as a full feature".
> intro1 :: x ~> (Unit * x) -- PRIM
>
> For instance, this type expression might become something like this:
>
> "x" typeVar typeUnit "x" typeVar typeTimes typeAnytimeFn
>
> where these primitives have the following non-polymorphic types:
>
> typeVar :: (Static String :&: SyntaxStack) ~> SyntaxStack
> typeUnit :: SyntaxStack ~> SyntaxStack
> typeTimes :: SyntaxStack ~> SyntaxStack
> typeAnytimeFn :: SyntaxStack ~> SyntaxStack
>
> This uses names, so maybe it's not very exciting....
On Fri, Aug 16, 2013 at 12:34 PM, David Barbour <dmba...@gmail.com> wrote:
> Hmm. Should that be a primitive, or defined in terms of a fixpoint combinator?

(In short: It makes sense as a kind of combinator, but the type is exotic enough that I'd recommend providing it as a syntactic construct.)
I think all I want is for that "block" type to be well-understood.

If I'm thinking about using someone else's API and it seems to support a very wide range of use cases, I want to be reasonably confident my own use case was anticipated by the designer. (From the other side, as an API designer, I want to understand what use cases to anticipate.)
Whoops, I don't know what I was thinking here. The type of the "x" word would usually be (a ~> (Static String :&: a)), so it's polymorphic too.
It's possible to fix this by combining ("x" typeVar) into a dedicated syntax, perhaps #typeVar[x]. The #typeVar[...] words would have type (SyntaxStack ~> SyntaxStack).
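In the Haskell sketch from earlier, that fusion is just partial application, which brings every token back to a monomorphic stack type:

    -- Continuing the earlier sketch: #typeVar[x] corresponds to baking
    -- the literal into the word, so the token itself is monomorphic.
    typeVarX :: SyntaxStack -> SyntaxStack
    typeVarX = typeVar "x"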
On the topic of polymorphism in concatenative languages, I just did a search for [combinator dependent typing], which has now led me to the topic of "illative combinatory logic." Apparently this is about combinatory logic with some operators for managing inference.
On Fri, Aug 16, 2013 at 11:19 PM, Ross Angle <rok...@gmail.com> wrote:
> (In short: It makes sense as a kind of combinator, but the type is exotic enough that I'd recommend providing it as a syntactic construct.)

I don't plan to add syntax if I can avoid it.
But if I build on a call-by-value fixpoint combinator, I can probably simplify the common use cases to a couple of arguments.

I'll get back to you on the induction-recursion when I've had time to build some combinators. I plan to start writing up my parsers and bootstraps after some sleep.
> I think all I want is for that "block" type to be well-understood.
>
> If I'm thinking about using someone else's API and it seems to support a very wide range of use cases, I want to be reasonably confident my own use case was anticipated by the designer. (From the other side, as an API designer, I want to understand what use cases to anticipate.)

I really don't follow. The connection between "type well-understood" and all those other properties you mentioned? I'm not sure one exists.
First, from the API developer's POV, understanding use cases is best performed with a very *rough* understanding of the types involved:
If API designers focus too much on precise type details, they'll be outflanked by bigger problems. Precise types are for machines to prove the properties that machines care about (like type safety). At higher levels of human thinking, it has to be in terms of authority, responsibility, and similar high level concepts.
Second, clients who write outside the lines, push at the edges, beyond what is anticipated - THOSE are the clients I would most aim to support.
The question "was my program anticipated?" doesn't matter nearly as much as "was my program accepted? does it do what I expected?" The API designers are often most happy with these clients, too, because (in most cases) these unanticipated uses are not abusive.
Aside from 'use cases', a concern is resource control and risk of denial of service attacks. Those risks are mostly independent of a block type. Unless we can guarantee that time/space for processing code is a small polynomial of code size (at most cubic, by my estimates) then we're subject to denial-of-service attacks on our resources - unless we use another control mechanism, such as quotas ... or charging for resources, which is actually a very effective and usable approach.
(next e-mail)
> Whoops, I don't know what I was thinking here. The type of the "x" word would usually be (a ~> (Static String :&: a)), so it's polymorphic too.
> It's possible to fix this by combining ("x" typeVar) into a dedicated syntax, perhaps #typeVar[x]. The #typeVar[...] words would have type (SyntaxStack ~> SyntaxStack).

I'm not seeing the problem with parametric polymorphism in this case.
> On the topic of polymorphism in concatenative languages, I just did a search for [combinator dependent typing], which has now led me to the topic of "illative combinatory logic." Apparently this is about combinatory logic with some operators for managing inference.

I've found a few hits, but my sleepy head's not in a good condition to look at streams of Cs and Ks. I'll take a look at it later.
> I don't plan to add syntax if I can avoid it.

You were already hoping to have a =| syntax, and this would be a tamer version of that.
Hmm, if the main reason you opt for a termination guarantee is to prohibit *accidental* time explosion, then (Type : Type) might not be so bad. It doesn't take much code to use (Type : Type) to apply Girard's paradox, but that code is pretty complicated, so I don't think it would happen by accident.
> First, from the API developer's POV, understanding use cases is best performed with a very *rough* understanding of the types involved:

People will interact with each other based on rough understandings, but they can avoid lose-lose tragedies and court expenses by applying some deep expertise upfront. You've taught me that as language designers, we have a unique opportunity to lay solid groundwork so that people can develop *valid* intuitions and expertise.
> If API designers focus too much on precise type details, they'll be outflanked by bigger problems. Precise types are for machines to prove the properties that machines care about (like type safety). At higher levels of human thinking, it has to be in terms of authority, responsibility, and similar high level concepts.

I don't think those concepts are more "high-level" than precise types. They're just more fuzzy, more contradictory, and more honest.
I believe they can be modeled within a precise type system anyway, and that the precise type system would help us reason about these honest APIs without losing information.
In the struggles between API designer and client, you would generally favor the client?
I expect we have a whole world of people who could be both API designers and clients, but most people don't receive the opportunities or encouragement to deploy their own APIs.
Even if we don't understand all the creative ways our APIs can impact the outside world, we can still formally reason about them on the implementation side, and we can seek to formally reason about how they combine with other APIs in specific ways. When I said I wanted API designers to be able to anticipate use cases, I meant it in the latter sense, where they're anticipating their own maintenance needs rather than the actual impact on the outside world.
> Unless we can guarantee that time/space for processing code is a small polynomial of code size (at most cubic, by my estimates) then we're subject to denial-of-service attacks
If you don't mind, how did you come up with "cubic"? Personally, I keep trying to keep it at "linear," but some of the things I want to do bump this up to "quadratic" or occasionally can't be bounded by any polynomial.
>> First, from the API developer's POV, understanding use cases is best performed with a very *rough* understanding of the types involved:
>
> People will interact with each other based on rough understandings, but they can avoid lose-lose tragedies and court expenses by applying some deep expertise upfront. You've taught me that as language designers, we have a unique opportunity to lay solid groundwork so that people can develop *valid* intuitions and expertise.

Yeah. The important part is that this "rough" intuition-based understanding can also be "valid" with some effort. Capabilities and linear types are great tools for this, but developers still need to consider many aspects of an API carefully (e.g. things like stealing keyboard focus or transparent clickable areas can be UI security concerns).

>> If API designers focus too much on precise type details, they'll be outflanked by bigger problems. Precise types are for machines to prove the properties that machines care about (like type safety). At higher levels of human thinking, it has to be in terms of authority, responsibility, and similar high level concepts.
>
> I don't think those concepts are more "high-level" than precise types. They're just more fuzzy, more contradictory, and more honest.

They can be realized in more ways than precise types, therefore they're higher level. QED. =)
> In the struggles between API designer and client, you would generally favor the client?

I believe that supporting creative and unexpected applications truly benefits both the API designers and the clients. An API designer should protect against abuse, but should also find means to do so that support innovation.
> I expect we have a whole world of people who could be both API designers and clients, but most people don't receive the opportunities or encouragement to deploy their own APIs.

Yeah, this was one of the problems RDP was designed to address. By representing overlay networks, an RDP application can deploy its own APIs atop remote services. Any client of a service can essentially extend its API, either through an intermediate cloud server or by injecting code.
> If you don't mind, how did you come up with "cubic"? Personally, I keep trying to keep it at "linear," but some of the things I want to do bump this up to "quadratic" or occasionally can't be bounded by any polynomial.

I can't actually recall. I came up with it many years ago. What I do remember is that cubic is when it starts to become really problematic with regards to estimated memory/CPU vs. bandwidth and mirroring patterns. Also, IIRC, I was accounting for the difficulty of heuristically discriminating legit code vs. stupidly massive code during the attack itself, to support most requests. (Higher polynomials allow the DoS attack to look closer to legit service requests.)
You can think of it as a family of fixpoint combinators, one for every type you want to do induction-recursion over. The combinator will look like an induction rule.
Personally, I still haven't learned what to do if the type contains other types (e.g. a list containing numbers), let alone what to do if multiple user-defined types contain each other.
Interestingly, the explanation of induction-recursion reduces it to a single data type definition and a single function definition. I get the sense the expansion to multiple-definition scenarios is just too trivial and tedious to mention, but it's still a mystery to me. XD

Well, I hope I haven't misinformed you about too many other things. :-p