To Quota or not to Quota? RDP in the spatial dimension.


David Barbour

Aug 14, 2013, 5:17:04 PM
to reactiv...@googlegroups.com
Reactive Demand Programming (RDP) has an ugly little wart: Turing completeness in the *spatial* dimension. 

Originally, this was a problem just for dynamic behaviors (i.e. with arrow apply). But my development of Awelon - to support a tacit programming style - introduced the same concern at compile time:

    first :: (Static (x ~> x') * (x * y)) ~> (x' * y) -- PRIM
    intro1 :: x ~> (Unit * x) -- PRIM
    elim1 :: (Unit * x) ~> x -- PRIM
    swap :: (a * b) ~> (b * a) -- PRIM
    assocl :: (a * (b * c)) ~> ((a * b) * c) -- PRIM
    copyStatic :: Static a ~> (Static a * Static a) -- PRIM
    [ foo ] :: e ~> (Static (a ~> b) * e) -- literal / block of code

    -- second :: (Static (y ~> y') * (x * y)) ~> (x * y')
    second = swap [swap] first swap first swap

    -- assocr :: ((a * b) * c) ~> (a * (b * c))
    assocr = [swap] first swap assocl [swap] first swap

    -- apply :: (Static (x ~> y) * x) ~> y
    apply = [intro1 swap] second first swap elim1

    -- the y knot, expands infinitely at compile time
    y = [copyStatic apply] apply
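
For concreteness, here is a toy model (a sketch of my own, not Awelon's actual semantics) of just the block/copy/apply fragment, with the product-typed environment collapsed into a plain stack of static blocks. The `apply` rule splices a block's body back into the remaining program, so the omega-style knot below never reaches a normal form, no matter how many steps the compiler takes:

    data Op = Copy | Apply | Block [Op] deriving Show

    -- one step of static evaluation over (stack of static blocks, remaining program)
    step :: ([[Op]], [Op]) -> Maybe ([[Op]], [Op])
    step (stack,     Block b : ops) = Just (b : stack, ops)      -- a literal pushes its block
    step (b : stack, Copy    : ops) = Just (b : b : stack, ops)  -- copyStatic
    step (b : stack, Apply   : ops) = Just (stack, b ++ ops)     -- splice the block back in
    step _                          = Nothing                    -- finished (or stuck)

    -- [copy apply] [copy apply] apply, i.e. the (\x.(x x)) (\x.(x x)) shape
    omegaKnot :: [Op]
    omegaKnot = [Block [Copy, Apply], Block [Copy, Apply], Apply]

    -- take n steps; 'step' never returns Nothing here, so static evaluation never finishes
    evalSteps :: Int -> ([[Op]], [Op])
    evalSteps n = iterate next ([], omegaKnot) !! n
      where next s = maybe s id (step s)

In this toy setting the state stays small, but nothing prevents a variant from growing the program on every pass; either way the compile-time work is unbounded.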

(Aside: tacit programming with arrows is awesome - low noise, high abstraction, richly typed, excellent for staged programming.)

Being Turing complete is not itself the problem, but RDP is intended to be Turing complete only in the temporal dimension. My ideal is that I would somehow enforce that computation is *proportional* to time. But I'm willing to settle for a much weaker condition: 

  computation over a bounded period of time should be bounded

I was already concerned about this expansion for dynamic behaviors, but the Turing-complete compile time is the more immediate worry. A few points:

A) Awelon also has a simple feature to support rich compile-time metaprogramming (based on static reflection and introspection) so this sort of 'infinite expansion' could potentially happen by accident, e.g. as a consequence of modeling aspect oriented programming.

B) One of my goals for Awelon is to serve as an effective RDP distribution language - i.e. so clients can send RDP code to mutually distrustful services and vice versa. Part of being effective (IMO) is being robust and adaptable: I am not going to give up compile-time metaprogramming.

C) A distrustful client or server should not accept code that might exhaust resources, because that becomes a security and safety concern - denying service to other clients or denying resources to critical applications. 

Fortunately, there are many feasible solutions to this problem. 

One of the more uniformly viable solutions is to enforce a resource quota, e.g. to introduce a monitor or sandbox (VM, process, etc.) that tracks the subprogram's use of CPU, network, and space resources, and terminates the subprogram if it violates those limits.
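
As a rough sketch of what that monitor might look like for the compile-time expansion (my own illustration, not a committed design): thread a step budget through whatever small-step evaluator performs the expansion, and cut the subprogram off when the budget runs out.

    -- run a small-step evaluator under a step budget; Left means the quota
    -- was exhausted before a normal form was reached
    stepsWithin :: Int -> (s -> Maybe s) -> s -> Either s s
    stepsWithin budget step = go budget
      where
        go n s = case step s of
          Nothing -> Right s                    -- finished within budget
          Just s' | n <= 0    -> Left s'        -- budget exhausted: terminate it
                  | otherwise -> go (n - 1) s'

Space and network quotas need analogous counters at the points where those resources are acquired; the shape is the same.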

Unfortunately, I don't know of any resource control model for distrusted computations that is *elegant* with respect to the original computation theory. (If you know of such a theory, I beg you, tell me!) Adding quotas seems ugly no matter the programming model.

I assume that, even if I do not model quotas, they'll still exist. If that is the case, should I not model quotas - with no regard for elegance?

If I model quotas, at least I gain a few nice properties:

1) Quotas can be computed deterministically. E.g. a spatial quota might be computed by counting primitives in the expansion of the program (before optimization is applied). This enables both participants in the code exchange to understand precisely what the quota means. (A small counting sketch follows this list.)

2) In some cases, especially if we know the usage context, quotas can be validated statically, offering a great deal more confidence that our code won't break on us.

3) By acknowledging quotas within the code, we might also be capable of statically or dynamically 'adapting' to them - e.g. specifying, through compile-time metaprogramming, which subprograms should be cut if cut they must be. This can offer a much more robust system, with graceful degradation when our systems are tight or overbooked on resources. 
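
To make point 1 concrete, here is a minimal counting sketch (my own illustration): treat the expanded program as a tree of primitives and compositions, and define the spatial quota as a plain node count that both sides of the exchange can recompute and agree on.

    data Expr = Prim String        -- a primitive behavior
              | Seq Expr Expr      -- sequential composition
              | Par Expr Expr      -- parallel composition
      deriving Show

    -- deterministic spatial cost: one unit per primitive in the expansion,
    -- counted before any optimization is applied
    spatialCost :: Expr -> Int
    spatialCost (Prim _)  = 1
    spatialCost (Seq a b) = spatialCost a + spatialCost b
    spatialCost (Par a b) = spatialCost a + spatialCost b

    withinQuota :: Int -> Expr -> Bool
    withinQuota quota prog = spatialCost prog <= quota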

These points seem convincing to me, but I am interested in other opinions, and in whether there is something I am not seeing.

Ross Angle

Aug 15, 2013, 8:24:25 PM
to reactiv...@googlegroups.com
Turing completeness in the *spatial* dimension. 


I think this arises because you're using Haskell-like polymorphism. The way I see it, this implicitly populates type variables in a way that is compatible with Turing-completeness. Specifically, it doesn't populate them fully before the "Static" phase. That phase uses the operations of lambda calculus, so the types fall prey to that expressiveness.

In Era, I'm avoiding this scenario by using a system inspired by ML modules. All type parameters must be provided upon importing a definition, and each definition can only import a finite number of others. Dependency loops are prohibited; a definition can only be *installed* in a knowledgebase if all its imports are already there. (For this reason, it will usually be less frustrating if a library comes bundled with its dependencies.)

In Era, these type parameters are actually given as executable (terminating) expressions which can use previous imports in the list. This makes it possible to import a type and then use it in another import.
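
A minimal sketch of that install rule (my paraphrase, not Era's actual representation): a definition is accepted into the knowledgebase only if everything it imports is already installed, so dependency cycles can never form.

    import qualified Data.Map as Map

    type Name = String
    data Def = Def { defImports :: [Name], defBody :: String } deriving Show

    -- accept a definition only if all of its imports are already installed
    install :: Map.Map Name Def -> Name -> Def -> Either String (Map.Map Name Def)
    install kb name def
      | name `Map.member` kb                   = Left ("already installed: " ++ name)
      | all (`Map.member` kb) (defImports def) = Right (Map.insert name def kb)
      | otherwise                              = Left ("missing imports for: " ++ name)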

The whole business is very applicative in Era, but I suspect a tacit version is technically possible. Take what would usually be syntactic classes (e.g. "term", "polymorphic type annotation") and treat them as data types for some non-polymorphic base operators. This could be challenging because your basic structural operations are polymorphic (e.g. preserving the tail of a stack), so they won't be available.



    intro1 :: x ~> (Unit * x) -- PRIM

For instance, this type expression might become something like this:

"x" typeVar typeUnit "x" typeVar typeTimes typeAnytimeFn

where these primitives have the following non-polymorphic types:

typeVar :: (Static String :&: SyntaxStack) ~> SyntaxStack

typeUnit :: SyntaxStack ~> SyntaxStack

typeTimes :: SyntaxStack ~> SyntaxStack

typeAnytimeFn :: SyntaxStack ~> SyntaxStack

This uses names, so maybe it's not very exciting....




A) Awelon also has a simple feature to support rich compile-time metaprogramming (based on static reflection and introspection) so this sort of 'infinite expansion' could potentially happen by accident, e.g. as a consequence of modeling aspect oriented programming.


What kind of static reflection and introspection do you have in mind?


B) One of my goals for Awelon is to serve as an effective RDP distribution language - i.e. so clients can send RDP code to mutually distrustful services and vice versa. Part of being effective (IMO) is being robust and adaptable: I am not going to give up compile-time metaprogramming.


If the metaprogramming layer is resolved by the service rather than the client, then it's part of the service's interface, and personally I might want to account for that explicitly and treat it as a full feature. For instance, I might send some code to that service just so it can compile it to a trivial run time program, which then gives me code I can use for another service. I think calling this "compile-time metaprogramming" may make it easier to overlook semantic issues that nevertheless exist.

(I'll get to the quota part later.)

Warm Regards,
Ross

Ross Angle

Aug 15, 2013, 10:14:27 PM
to reactiv...@googlegroups.com
On Wed, Aug 14, 2013 at 2:17 PM, David Barbour <dmba...@gmail.com> wrote:

One of the more uniformly viable solutions is to enforce a resource quota, e.g. to introduce a monitor or sandbox (VM, process, etc.) that tracks the subprogram's use of CPU, network, and space resources, and terminates the subprogram if it violates those limits.

Unfortunately, I don't know of any resource control model for distrusted computations that is *elegant* with respect to the original computation theory. (If you know of such a theory, I beg you, tell me!) Adding quotas seems ugly no matter the programming model.


It may be ugly for the programming model, but it fits nicely into the physical model.


I assume that, even if I do not model quotas, they'll still exist. If that is the case, should I not model quotas - with no regard for elegance?


You chose to model total failure for errors in distributed computation because it fit with the need to model disruption. Since disruption was necessary, you encouraged building systems that were tolerant of glitches and node failures. How is this any different? The distributed program can semantically be a "discovered" network node which is failing because it's under strain.

-Ross

David Barbour

Aug 15, 2013, 10:46:47 PM
to reactiv...@googlegroups.com
On Thu, Aug 15, 2013 at 5:24 PM, Ross Angle <rok...@gmail.com> wrote:

Turing completeness in the *spatial* dimension. 


I think this arises because you're using Haskell-like polymorphism.

I believe it happens even without polymorphism. With static operations (due to my more expressive 'first'):

      [copy apply] [copy apply] apply   =~=   (\x.(x x) \x.(x x))

And, even before then I had the same issue with dynamic behaviors, I just wasn't focusing on it.
 
I've decided to drop the polymorphism I described earlier using `=|` extensible definitions. I was reminded that predictable compile-time performance is important. 

I'm just giving up that form of extensibility. I was finding myself reluctant to use anything that depends on definitions or shared names, since such wouldn't be ideal for code distribution. My new philosophy for modules is: no powers beyond dumb substitution of definitions, basically as a form of compression. (No recursion, etc.) 
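
A minimal sketch of what "dumb substitution" buys (my own reading, not Awelon's actual module mechanism): if a definition may only mention names defined earlier, expansion is just splicing, it always terminates, and the expanded size is a simple function of the source.

    import qualified Data.Map as Map

    type Name = String
    data Tok = Lit String | Use Name deriving Show

    -- expand each definition by splicing in the (already expanded) bodies of
    -- earlier definitions; no recursion is possible, so this always halts
    expandDefs :: [(Name, [Tok])] -> Map.Map Name [Tok]
    expandDefs = foldl add Map.empty
      where
        add env (name, body) = Map.insert name (concatMap subst body) env
          where
            subst w@(Lit _) = [w]
            subst (Use n)   = Map.findWithDefault [Use n] n env  -- unknown names left as-is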

At the moment I am contemplating how I should utilize static choice as a basis for modularity. A static `+` type seems in many ways like it shouldn't be special, yet in other ways... well, we don't actually need to handle all the types; instead, most paths are statically negated, and developers shouldn't even think about them. (I wonder whether it's formally sane to feed info back from a bad search path to adjust the next one, e.g. using negative types across the static version of `+`.)


In Era, I'm avoiding this scenario by using a system inspired by ML modules. All type parameters must be provided upon importing a definition, and each definition can only import a finite number of others. Dependency loops are prohibited; a definition can only be *installed* in a knowledgebase if all its imports are already there.

I believe Awelon's simplified module system is under similar constraints. 


    intro1 :: x ~> (Unit * x) -- PRIM

For instance, this type expression might become something like this:

"x" typeVar typeUnit "x" typeVar typeTimes typeAnytimeFn

where these primitives have the following non-polymorphic types:

typeVar :: (Static String :&: SyntaxStack) ~> SyntaxStack

typeUnit :: SyntaxStack ~> SyntaxStack

typeTimes :: SyntaxStack ~> SyntaxStack

typeAnytimeFn :: SyntaxStack ~> SyntaxStack

This uses names, so maybe it's not very exciting....






A) Awelon also has a simple feature to support rich compile-time metaprogramming (based on static reflection and introspection) so this sort of 'infinite expansion' could potentially happen by accident, e.g. as a consequence of modeling aspect oriented programming.


What kind of static reflection and introspection do you have in mind?

You can ask: is this a `*` type? is this a `+` type? is it an integer? is this Static Int greater than zero? is this Static equal to this string? etc. I suppose it's just introspection. (You can't change anything like you might with reflection.) Anyhow, based on answers, different behaviors might be computed.
 

B) One of my goals for Awelon is to serve as an effective RDP distribution language - i.e. so clients can send RDP code to mutually distrustful services and vice versa. Part of being effective (IMO) is being robust and adaptable: I am not going to give up compile-time metaprogramming.


If the metaprogramming layer is resolved by the service rather than the client,

Awelon's metaprogramming is modeled within Awelon itself. It doesn't depend on any features unique to a client or service. It does use AVM primitives. 

(second e-mail):

You chose to model total failure for errors in distributed computation because it fit with the need to model disruption. Since disruption was necessary, you encouraged building systems that were tolerant of glitches and node failures. How is this any different? The distributed program can semantically be a "discovered" network node which is failing because it's under strain.

If I understand your point correctly, you are right that I could treat an unforeseen quota violation as a form of node loss or network disruption. RDP is tolerant to such things. Unfortunately, in general, humans aren't. When a process is smashed because resource consumption grows out of control, resources are wasted and nobody is satisfied. If this is a preventable concern, I would like to prevent it.


It may be ugly for the programming model, but it fits nicely into the physical model.

Hmm. We do have logics suitable for physical models. I suppose we could model quotas in terms of dividing a static number among different operations. OTOH, we'd lose a lot of equational reasoning.
 

Ross Angle

Aug 16, 2013, 12:55:04 AM
to reactiv...@googlegroups.com
On Thu, Aug 15, 2013 at 7:46 PM, David Barbour <dmba...@gmail.com> wrote:

On Thu, Aug 15, 2013 at 5:24 PM, Ross Angle <rok...@gmail.com> wrote:

Turing completeness in the *spatial* dimension. 


I think this arises because you're using Haskell-like polymorphism.

I believe it happens even without polymorphism. With static operations (due to my more expressive 'first'):

      [copy apply] [copy apply] apply   =~=   (\x.(x x) \x.(x x))


There are at least two problems with this non-polymorphic example: `copy` and `apply`. How would you type these without polymorphism?


And, even before then I had the same issue with dynamic behaviors, I just wasn't focusing on it.


Sirea's dynamic behavior types can use Haskell's polymorphism directly.

beval :: (BDynamic b b', SigInP p x, SigInP p y) 
      => DT -> b (S p (b' x y) :&: x) (y :|: S p ())
beval = bevalx . bdelay

Right here, the `x` and `y` types are determined by Haskell's polymorphism. The beval definition is parameterized over these types, but Haskell makes the "forall" implicit.


I've decided to drop the polymorphism I described earlier using `=|` extensible definitions.

Oh, I was talking about parametric polymorphism, and here you're talking about ad hoc polymorphism. :)

 
I'm just giving up that form of extensibility. I was finding myself reluctant to use anything that depends on definitions or shared names, since such wouldn't be ideal for code distribution. My new philosophy for modules is: no powers beyond dumb substitution of definitions, basically as a form of compression. (No recursion, etc.) 


I'm glad. XD; Hopefully dynamic behaviors and such can handle whatever use cases you had in mind.


At the moment I am contemplating how I should utilize static choice as a basis for modularity. A static `+` type seems in many ways like it shouldn't be special, yet in other ways... well, we don't actually need to handle all the types; instead, most paths are statically negated, and developers shouldn't even think about them.

Took me a while to realize you're still talking about =|. =)

I could make random recommendations, but they'd just lead to Lathe's self-organizing precedence system... or Era. Usually Era. :-p


(I wonder whether it's formally sane to feed info back from a bad search path to adjust the next one, e.g. using negative types across the static version of `+`.)


I think you might be able to get negative and fractional types to work for static signals. However, I think you might want to model * and + with different types than :&: and :|: just so you can avoid awkward differences between these two type systems.

I wonder, if we had *, +, :&:, and :|: all at once, would we be able to support any distributive laws, or would each system just be stranded inside the atoms of the other?



In Era, I'm avoiding this scenario by using a system inspired by ML modules. All type parameters must be provided upon importing a definition, and each definition can only import a finite number of others. Dependency loops are prohibited; a definition can only be *installed* in a knowledgebase if all its imports are already there.

I believe Awelon's simplified module system is under similar constraints. 


Yeah, it sounds like the same sort of constraint to me.



For instance, this type expression might become something like this:

"x" typeVar typeUnit "x" typeVar typeTimes typeAnytimeFn



Good luck! Can't say I have much of a reply to that blog post, 'cause I think we've talked about all these ideas in some form already.





A) Awelon also has a simple feature to support rich compile-time metaprogramming (based on static reflection and introspection) so this sort of 'infinite expansion' could potentially happen by accident, e.g. as a consequence of modeling aspect oriented programming.


What kind of static reflection and introspection do you have in mind?

You can ask: is this a `*` type? is this a `+` type? is it an integer? is this Static Int greater than zero? is this Static equal to this string? etc. I suppose it's just introspection. (You can't change anything like you might with reflection.) Anyhow, based on answers, different behaviors might be computed.
 

Yeah, that's reasonable.

Hewitt's odd paper "Mathematics self-proves its own consistency" (and the resulting LtU discussion) has got me thinking about the potential theoretic value of a language that can't pattern-match on its own types. After all, mathematical paradoxes are frequently discussed in terms of first-class natural-language sentences, and the very idea of formalizing the paradox makes the formal system suspect. When we're dealing with natural-language sentences and proofs, it doesn't make much sense to do structural induction on their symbols. :)

But that doesn't necessarily mean your and my systems have to play by these rules. It's just interesting.



B) One of my goals for Awelon is to serve as an effective RDP distribution language - i.e. so clients can send RDP code to mutually distrustful services and vice versa. Part of being effective (IMO) is being robust and adaptable: I am not going to give up compile-time metaprogramming.


If the metaprogramming layer is resolved by the service rather than the client,

Awelon's metaprogramming is modeled within Awelon itself. It doesn't depend on any features unique to a client or service. It does use AVM primitives. 

So your point is, you're considering both cases? Awelon's compiler running on the client and Awelon's compiler running on the server? This doesn't impact what I was saying, I think.



(second e-mail):

You chose to model total failure for errors in distributed computation because it fit with the need to model disruption. Since disruption was necessary, you encouraged building systems that were tolerant of glitches and node failures. How is this any different? The distributed program can semantically be a "discovered" network node which is failing because it's under strain.

If I understand your point correctly, you are right that I could treat an unforeseen quota violation as a form of node loss or network disruption. RDP is tolerant to such things. Unfortunately, in general, humans aren't. When a process is smashed because resource consumption grows out of control, resources are wasted and nobody is satisfied. If this is a preventable concern, I would like to prevent it.


Ah, well you already model static latency. I wonder if there'd be similar ways to model memory footprint and energy use. There's a good chance linear types would help model computations under memory constraints, and maybe negative and fractional types could help with energy constraints due to their reversibility. Even if the hardware doesn't support energy-perfect reversible computations, negative and fractional types might be eliminated for free at compile time.



It may be ugly for the programming model, but it fits nicely into the physical model.

Hmm. We do have logics suitable for physical models. I suppose we could model quotas in terms of dividing a static number among different operations. OTOH, we'd lose a lot of equational reasoning.
 

Well, I think that's what the linear and rational types would amount to. I suppose these suggestions just parrot what you've already been thinking about. :)

(I hope I can use "rational types" as shorthand for "negative and fractional types.")

-Ross

David Barbour

Aug 16, 2013, 2:35:02 AM
to reactiv...@googlegroups.com
On Thu, Aug 15, 2013 at 9:55 PM, Ross Angle <rok...@gmail.com> wrote:

And, even before then I had the same issue with dynamic behaviors, I just wasn't focusing on it.


Sirea's dynamic behavior types can use Haskell's polymorphism directly.

beval :: (BDynamic b b', SigInP p x, SigInP p y) 
      => DT -> b (S p (b' x y) :&: x) (y :|: S p ())
beval = bevalx . bdelay

Right here, the `x` and `y` types are determined by Haskell's polymorphism. The beval definition is parameterized over these types, but Haskell makes the "forall" implicit.

It doesn't matter what `x` and `y` are. They could be concrete unit types, and a dynamic behavior could still look up another copy of itself somewhere, and beval it.
 
 
I'm just giving up that form of extensibility. I was finding myself reluctant to use anything that depends on definitions or shared names, since such wouldn't be ideal for code distribution. My new philosophy for modules is: no powers beyond dumb substitution of definitions, basically as a form of compression. (No recursion, etc.) 


I'm glad. XD; Hopefully dynamic behaviors and such can handle whatever use cases you had in mind.

I hope so too. It seems they should. Yet I'm still trying to figure out a recursive `copy` without recursion. :)
 

At the moment I am contemplating how I should utilize static choice as a basis for modularity. A static `+` type seems in many ways like it shouldn't be special, yet in other ways... well, we don't actually need to handle all the types; instead, most paths are statically negated, and developers shouldn't even think about them.

I could make random recommendations, but they'd just lead to Lathe's self-organizing precedence system... or Era. Usually Era. :-p

One of my mottos: Every requirements analysis of a sufficiently complex software project will lead to the same truths about what a general purpose programming language should be. - Convergent Language Design

I'll spend this weekend thinking about it. If I'm still stuck, I'll ask for random recommendations. =)
 

I think you might be able to get negative and fractional types to work for static signals. However, I think you might want to model * and + with different types than :&: and :|: just so you can avoid awkward differences between these two type systems.

Awelon is not built upon Sirea. Awelon will be a full, compiled language. I'll use Haskell to bootstrap it, but it's all fresh code. When I started talking about Awelon, I started using * and + because those colons are unreasonably painful to enter. (I only used the colons before due to Haskell's type operator requirements.)



You can ask: is this a `*` type? is this a `+` type? is it an integer? is this Static Int greater than zero? is this Static equal to this string? etc. I suppose it's just introspection. (You can't change anything like you might with reflection.) Anyhow, based on answers, different behaviors might be computed.
 

Yeah, that's reasonable.

Hewitt's odd paper "Mathematics self-proves its own consistency" (and the resulting LtU discussion) has got me thinking about the potential theoretic value of a language that can't pattern-match on its own types. After all, mathematical paradoxes are frequently discussed in terms of first-class natural-language sentences, and the very idea of formalizing the paradox makes the formal system suspect. When we're dealing with natural-language sentences and proofs, it doesn't make much sense to do structural induction on their symbols. :)

It seems to me that such a language would exhibit extreme parametricity. That's something I am intentionally abandoning for Awelon. My hypothesis is that ad-hoc polymorphism will be more effective for discovering valid 'proofs', especially those involving heterogeneous systems requiring different kinds of specialization. 
 

Awelon's metaprogramming is modeled within Awelon itself. It doesn't depend on any features unique to a client or service. It does use AVM primitives. 

So your point is, you're considering both cases? Awelon's compiler running on the client and Awelon's compiler running on the server? This doesn't impact what I was saying, I think.

Perhaps I missed your point. Compiling Awelon on either side should not impact the *outcome* of the metaprogramming (static operations shouldn't have that kind of side-effect). However, it does affect where costs are distributed. If you were alluding to the latter, yeah, it could be treated as a service. OTOH, most of the time you send code to a server it's because you want it to execute on the server... e.g. as a software agent to make some decisions or filter junk on your behalf.



If I understand your point correctly, you are right that I could treat an unforeseen quota violation as a form of node loss or network disruption. RDP is tolerant to such things. Unfortunately, in general, humans aren't. When a process is smashed because resource consumption grows out of control, resources are wasted and nobody is satisfied. If this is a preventable concern, I would like to prevent it.


Ah, well you already model static latency. I wonder if there'd be similar ways to model memory footprint and energy use.

I've not found any that don't seem hackish, and they also tend to make equational reasoning mostly useless (e.g. `first f >>> first g == first (f >>> g)` is an arrow law, but does it still hold if you include energy and memory footprints?)

 
maybe negative and fractional types could help with energy constraints due to their reversibility

I think most CPU efficiencies are many orders of magnitude away from reversibility making an observable difference. Rather, better ways to target energy constraints would be to focus on frequencies and the computations.

 


Well, I think that's what the linear and rational types would amount to. I suppose these suggestions just parrot what you've already been thinking about. :)

(I hope I can use "rational types" as shorthand for "negative and fractional types.")

Rational type? You mean like 12.3 is 123/10? =)

Maybe we should call them 'salmon types' because they swim upstream?

Best,

Dave


Ross Angle

Aug 16, 2013, 2:25:49 PM
to reactiv...@googlegroups.com
On Thu, Aug 15, 2013 at 11:35 PM, David Barbour <dmba...@gmail.com> wrote:

On Thu, Aug 15, 2013 at 9:55 PM, Ross Angle <rok...@gmail.com> wrote:

And, even before then I had the same issue with dynamic behaviors, I just wasn't focusing on it.


Sirea's dynamic behavior types can use Haskell's polymorphism directly.

beval :: (BDynamic b b', SigInP p x, SigInP p y) 
      => DT -> b (S p (b' x y) :&: x) (y :|: S p ())
beval = bevalx . bdelay

Right here, the `x` and `y` types are determined by Haskell's polymorphism. The beval definition is parameterized over these types, but Haskell makes the "forall" implicit.

It doesn't matter what `x` and `y` are. They could be concrete unit types, and a dynamic behavior could still look up another copy of itself somewhere, and beval it.


Oh, right, you're letting behaviors be stored in resources. That is a conundrum.

Perhaps we can store certain behaviors in resources, but only if their type makes it clear they can diverge.

I'm doing something obliquely similar to this in Era, where I have a catch-all wrapper type "Sink" for untyped programming. It can only wrap certain value types, and (Sink -> Sink) isn't one of them since it would allow nontermination. However, (Sink -> Partial Sink) is just fine.

(This Sink type isn't essential to the Era module system, but it's a way for Era modules to represent untyped lambda calculus programs.)
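
A minimal sketch of that shape in Haskell (my own illustration; Era's actual definitions differ): Partial is the usual delay monad, so nontermination shows up in the type, and a step-bounded driver plays the role of a quota.

    data Partial a = Now a | Later (Partial a)

    instance Functor Partial where
      fmap f (Now a)   = Now (f a)
      fmap f (Later p) = Later (fmap f p)

    instance Applicative Partial where
      pure = Now
      Now f   <*> p = fmap f p
      Later f <*> p = Later (f <*> p)

    instance Monad Partial where
      Now a   >>= k = k a
      Later p >>= k = Later (p >>= k)

    -- untyped values wrapped in Sink; only the partial function space is allowed
    data Sink = SinkFun (Sink -> Partial Sink) | SinkUnit

    applySink :: Sink -> Sink -> Partial Sink
    applySink (SinkFun f) x = Later (f x)   -- every application costs one 'Later' tick
    applySink SinkUnit    _ = Now SinkUnit

    -- (\x. x x) (\x. x x): a perfectly legal value that never reaches 'Now'
    omega :: Partial Sink
    omega = applySink selfApply selfApply
      where selfApply = SinkFun (\x -> applySink x x)

    -- run with a step budget, quota-style; runFor n omega is Nothing for every n
    runFor :: Int -> Partial a -> Maybe a
    runFor _ (Now a)   = Just a
    runFor 0 (Later _) = Nothing
    runFor n (Later p) = runFor (n - 1) p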



I hope so too. It seems they should. Yet I'm still trying to figure out a recursive `copy` without recursion. :)


I think you could accomplish `copy` using primitive recursion and/or induction-recursion. (I don't really know the difference.... I think maybe induction-recursion refers to the definition syntax, and primitive recursion refers to the level of expressiveness.) If you define an inductive-recursive function that pattern-matches on (a :&: b), you get access to `a`, `b`, and also `recurOnA` and `recurOnB`. You don't get to recur on the full value.




I think you might be able to get negative and fractional types to work for static signals. However, I think you might want to model * and + with different types than :&: and :|: just so you can avoid awkward differences between these two type systems.

Awelon is not built upon Sirea. Awelon will be a full, compiled language. I'll use Haskell to bootstrap it, but it's all fresh code. When I started talking about Awelon, I started using * and + because those colons are unreasonably painful to enter. (I only used the colons before due to Haskell's type operator requirements.)


That's not my point. Negative and fractional types don't necessarily treat products and sums the same way as linear types do. I think you'll want two separate product types and two separate sum types if you want to use all these concepts at once.

...Actually, now that I've taken another look at that paper, it seems like they introduced negative and fractional types as an extension of a less exotic system where product types and sum types were traditional. So I suppose they might be the same product and sum operators after all. That's nice. :)



Awelon's metaprogramming is modeled within Awelon itself. It doesn't depend on any features unique to a client or service. It does use AVM primitives. 

So your point is, you're considering both cases? Awelon's compiler running on the client and Awelon's compiler running on the server? This doesn't impact what I was saying, I think.

Perhaps I missed your point. Compiling Awelon on either side should not impact the *outcome* of the metaprogramming (static operations shouldn't have that kind of side-effect). However, it does affect where costs are distributed. If you were alluding to the latter, yeah, it could be treated as a service. OTOH, most of the time you send code to a server it's because you want it to execute on the server... e.g. as a software agent to make some decisions or filter junk on your behalf.


What I said was "If the metaprogramming layer is resolved by the service rather than the client, then it's part of the service's interface, and personally I might want to account for that explicitly and treat it as a full feature."

Suppose a vending machine provides a service for cooking custom recipes, and it expects these recipes as Awelon programs and compiles them on the fly. Then perhaps I'm on the run from some outrageous adversary and I simply have to solve a convoluted logic puzzle in order to unravel the secrets of the place I'm trapped in. Fortunately, I remember Awelon's compile time semantics happen to feature a constraint solver. So I sneak over to the vending machine and ask it to bake me a cake whose icing describes the solution.

Much to my dismay, the baking process takes a long time. Too long. By the time it's complete, I've already been captured and it doesn't matter anymore. Wouldn't it have saved me some agony if I knew I couldn't rely on the vending machine in the first place?



Ah, well you already model static latency. I wonder if there'd be similar ways to model memory footprint and energy use.

I've not found any that don't seem hackish, and they also tend to make equational reasoning mostly useless (e.g. `first f >>> first g == first (f >>> g)` is an arrow law, but does it still hold if you include energy and memory footprints?)


Doesn't it? :) In Underreact, the >>> operation always creates some intermediate histories for the atomic signals, but otherwise those would compile to the same network graph. I think it would be possible to remove those intermediate histories at some point.



Rational type? You mean like 12.3 is 123/10? =)


Yes. "We combine the above two extensions into a language, which we call [\Pi^{\eta\epsilon}] , whose type system allows any rational number to be used as a type. Moreover the types satisfy the same familiar and intuitive isomorphisms that are satisfied in the mathematical field of rational numbers." The rest of the paper is about giving this a computational metaphor and describing how this kind of computation is useful.

Personally, I don't find the computational metaphor very easy to understand or the usefulness very convincing yet, but the comparison to rational numbers makes it a memorable technique anyway.


Maybe we should call them 'salmon types' because they swim upstream?


Only if we color-code the others as 'amber types' because they once dripped down out of a tree. ^_-

-Ross

David Barbour

Aug 16, 2013, 3:34:22 PM
to reactiv...@googlegroups.com
On Fri, Aug 16, 2013 at 11:25 AM, Ross Angle <rok...@gmail.com> wrote:

Perhaps we can store certain behaviors in resources, but only if their type makes it clear they can diverge.

I'm doing something obliquely similar to this in Era, where I have a catch-all wrapper type "Sink" for untyped programming. It can only wrap certain value types, and (Sink -> Sink) isn't one of them since it would allow nontermination. However, (Sink -> Partial Sink) is just fine.

That's an interesting possibility. 




I hope so too. It seems they should. Yet I'm still trying to figure out a recursive `copy` without recursion. :)


I think you could accomplish `copy` using primitive recursion and/or induction-recursion. (I don't really know the difference.... I think maybe induction-recursion refers to the definition syntax, and primitive recursion refers to the level of expressiveness.) If you define an inductive-recursive function that pattern-matches on (a :&: b), you get access to `a`, `b`, and also `recurOnA` and `recurOnB`. You don't get to recur on the full value.

Yeah, that could work pretty well. It seems safer than using a fixpoint combinator, and more comprehensible. 

Hmm. Should that be a primitive, or defined in terms of a fixpoint combinator?
 

...Actually, now that I've taken another look at that paper, it seems like they introduced negative and fractional types as an extension of a less exotic system where product types and sum types were traditional. So I suppose they might be the same product and sum operators after all. That's nice. :)

Yep! Rational types work quite well with plain old sums and products. It's very nice. (I'd have been very reluctant to use them otherwise.)
 

What I said was "If the metaprogramming layer is resolved by the service rather than the client, then it's part of the service's interface, and personally I might want to account for that explicitly and treat it as a full feature."

Suppose a vending machine provides a service for cooking custom recipes, and it expects these recipes as Awelon programs and compiles them on the fly. Then perhaps I'm on the run from some outrageous adversary and I simply have to solve a convoluted logic puzzle in order to unravel the secrets of the place I'm trapped in. Fortunately, I remember Awelon's compile time semantics happen to feature a constraint solver. So I sneak over to the vending machine and ask it to bake me a cake whose icing describes the solution.

Much to my dismay, the baking process takes a long time. Too long. By the time it's complete, I've already been captured and it doesn't matter anymore. Wouldn't it have saved me some agony if I knew I couldn't rely on the vending machine in the first place?

This story has me laughing out loud. =)

I agree knowing the cake is a lie would save you some agony. And in a sense, any service that accepts code will have the appropriate 'block' type in its interfaces. But I'm still not sure what you mean by "account for that and explicitly treat it as a full feature". 

I imagine someone might, however, provide optimization services that act as the identity function on `Static (x~>y) * Context` pairs. :)


Ross Angle

Aug 17, 2013, 2:19:48 AM
to reactiv...@googlegroups.com
On Fri, Aug 16, 2013 at 12:34 PM, David Barbour <dmba...@gmail.com> wrote:

Hmm. Should that be a primitive, or defined in terms of a fixpoint combinator?
 

(In short: It makes sense as a kind of combinator, but the type is exotic enough that I'd recommend providing it as a syntactic construct.)

You can think of it as a family of fixpoint combinators, one for every type you want to do induction-recursion over. The combinator will look like an induction rule.

To make this demonstration easier, I'm going to assume (Type : Type). Here goes:

inductionRecursionOverTree :
  -- For each construction that's based on an unlabeled binary tree...
  (constructionType : Type) -> (p : Tree -> constructionType) ->
  
  -- ...if the construction works for a leaf...
  p treeLeaf ->
  
  -- ...and if we have an inductive step for a branch...
  ((a : Tree) -> p a -> (b : Tree) -> p b -> p (treeBranch a b)) ->
  
  -- ...then the construction works for any tree.
  (x : Tree) -> p x

For induction, we construct proofs (whose type is some proposition, whose type is `Prop` in some languages). For primitive recursion, we construct values (whose type is some type, whose type is `Type`).

inductionOverTree = inductionRecursionOverTree Prop
recursionOverTree = inductionRecursionOverTree Type

Then we can define a primitive recursive function like so:

countLeaves : Tree -> Int
countLeaves = recursionOverTree (\_ -> Int)
  1
  (\a countOfA b countOfB ->
    countOfA + countOfB)
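
In ordinary (non-dependent) Haskell the same recursor is just a fold, which may make the shape clearer (my own rendering, with everything monomorphic):

    data Tree = Leaf | Branch Tree Tree

    -- the recursor: a leaf case, a branch case that also receives the recursive
    -- results, and no way to recur on anything but the immediate subtrees
    recTree :: r -> (Tree -> r -> Tree -> r -> r) -> Tree -> r
    recTree leaf branch = go
      where
        go Leaf         = leaf
        go (Branch a b) = branch a (go a) b (go b)

    countLeaves :: Tree -> Int
    countLeaves = recTree 1 (\_ countOfA _ countOfB -> countOfA + countOfB)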

To avoid (Type : Type), we could avoid defining inductionRecursionOverTree as a normal function. Instead its first two arguments could be passed in using a dedicated syntax in the language, so that they don't have to be expressions which have their own types.

Since the type signature of these combinators is so closely related to an ADT definition, it's typical to allow users to define their own inductive types. (I think the Calculus of Inductive Constructions pioneered this.) Personally, I still haven't learned what to do if the type contains other types (e.g. a list containing numbers), let alone what to do if multiple user-defined types contain each other.

I haven't even given Era a single induction principle yet. I keep running across design hazards where I might accidentally support induction, but I'm not ready for that until the expression problem solution is safely in place. So even writing this description of induction was a nice learning opportunity for me, and when all is said and done, I hope I'm not misinforming you. :-p



I agree knowing the cake is a lie would save you some agony. And in a sense, any service that accepts code will have the appropriate 'block' type in its interfaces. But I'm still not sure what you mean by "account for that and explicitly treat it as a full feature". 

I think all I want is for that "block" type to be well-understood.

If I'm thinking about using someone else's API and it seems to support a very wide range of use cases, I want to be reasonably confident my own use case was anticipated by the designer. (From the other side, as an API designer, I want to understand what use cases to anticipate.)

-Ross

Ross Angle

Aug 17, 2013, 2:36:10 AM
to reactiv...@googlegroups.com
On Thu, Aug 15, 2013 at 5:24 PM, Ross Angle <rok...@gmail.com> wrote:

    intro1 :: x ~> (Unit * x) -- PRIM

For instance, this type expression might become something like this:

"x" typeVar typeUnit "x" typeVar typeTimes typeAnytimeFn

where these primitives have the following non-polymorphic types:

typeVar :: (Static String :&: SyntaxStack) ~> SyntaxStack

typeUnit :: SyntaxStack ~> SyntaxStack

typeTimes :: SyntaxStack ~> SyntaxStack

typeAnytimeFn :: SyntaxStack ~> SyntaxStack

This uses names, so maybe it's not very exciting....



Whoops, I don't know what I was thinking here. The type of the "x" word would usually be (a ~> (Static String :&: a)), so it's polymorphic too.

It's possible to fix this by combining ("x" typeVar) into a dedicated syntax, perhaps #typeVar[x]. The #typeVar[...] words would have type (SyntaxStack ~> SyntaxStack).

On the topic of polymorphism in concatenative languages, I just did a search for [combinator dependent typing], which has now led me to the topic of "illative combinatory logic." Apparently this is about combinatory logic with some operators for managing inference.

-Ross

David Barbour

Aug 17, 2013, 6:30:22 AM
to reactiv...@googlegroups.com
On Fri, Aug 16, 2013 at 11:19 PM, Ross Angle <rok...@gmail.com> wrote:


On Fri, Aug 16, 2013 at 12:34 PM, David Barbour <dmba...@gmail.com> wrote:

Hmm. Should that be a primitive, or defined in terms of a fixpoint combinator?
 

(In short: It makes sense as a kind of combinator, but the type is exotic enough that I'd recommend providing it as a syntactic construct.)

I don't plan to add syntax if I can avoid it. But if I build on a call-by-value fixpoint combinator, I can probably simplify the common use cases to a couple arguments.

I'll get back to you on the induction recursion when I've had time to build some combinators. I plan to start writing my parsers and bootstraps up after some sleep.
 

I think all I want is for that "block" type to be well-understood.

If I'm thinking about using someone else's API and it seems to support a very wide range of use cases, I want to be reasonably confident my own use case was anticipated by the designer. (From the other side, as an API designer, I want to understand what use cases to anticipate.)

I really don't follow. The connection between "type well-understood" and all those other properties you mentioned? I'm not sure one exists.

First, from the API developer's POV, understanding use cases is best performed with a very *rough* understanding of the types involved:

* linear arguments and the required output type for a block both express responsibilities
* capabilities (provided as input to a block) can express and grant authority
* sealers/unsealers can enforce blindness for certain operations (privacy, parametricity, ADTs); a small sketch follows this list
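
A minimal sketch of the sealer/unsealer idea (my own illustration in plain Haskell, using Data.Unique as the brand; it is not Awelon's actual API): only the unsealer created alongside a sealer can look inside values sealed with it.

    import Data.Dynamic (Dynamic, toDyn, fromDynamic)
    import Data.Typeable (Typeable)
    import Data.Unique (Unique, newUnique)

    data Sealed = Sealed Unique Dynamic

    -- create a fresh sealer/unsealer pair; values sealed with one pair
    -- cannot be opened by any other
    newSealerUnsealer :: Typeable a => IO (a -> Sealed, Sealed -> Maybe a)
    newSealerUnsealer = do
      brand <- newUnique
      let seal x = Sealed brand (toDyn x)
          unseal (Sealed b d)
            | b == brand = fromDynamic d
            | otherwise  = Nothing
      return (seal, unseal)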

If API designers focus too much on precise type details, they'll be outflanked by bigger problems. Precise types are for machines to prove the properties that machines care about (like type safety). At higher levels of human thinking, it has to be in terms of authority, responsibility, and similar high level concepts.

Second, clients who write outside the lines, push at the edges, beyond what is anticipated - THOSE are the clients I would most aim to support. The question "was my program anticipated?" doesn't matter nearly as much as "was my program accepted? does it do what I expected?"  The API designers are often most happy with these clients, too, because (in most cases) these unanticipated uses are not abusive. Clients do have some reason to be concerned about "does it do what I expected" with regards to ad-hoc polymorphism (and not knowing the complete context of use). Parametric polymorphism wouldn't be a problem, though, and ad-hoc adaptive code can actually work both ways (make a subprogram more or less sensitive to variations in context).

Aside from 'use cases', a concern is resource control and risk of denial of service attacks. Those risks are mostly independent of a block type. Unless we can guarantee that time/space for processing code is a small polynomial of code size (at most cubic, by my estimates), we're subject to denial-of-service attacks on our resources - unless we use another control mechanism, such as quotas ... or charging for resources, which is actually a very effective and usable approach.

(next e-mail)

Whoops, I don't know what I was thinking here. The type of the "x" word would usually be (a ~> (Static String :&: a)), so it's polymorphic too.

It's possible to fix this by combining ("x" typeVar) into a dedicated syntax, perhaps #typeVar[x]. The #typeVar[...] words would have type (SyntaxStack ~> SyntaxStack).

I'm not seeing the problem with parametric polymorphism in this case.  
 

On the topic of polymorphism in concatenative languages, I just did a search for [combinator dependent typing], which has now led me to the topic of "illative combinatory logic." Apparently this is about combinatory logic with some operators for managing inference.

I've found a few hits, but my sleepy head's not in a good condition to look at streams of Cs and Ks. I'll take a look at it later. 

 

Ross Angle

Aug 17, 2013, 6:55:23 PM
to reactiv...@googlegroups.com
On Sat, Aug 17, 2013 at 3:30 AM, David Barbour <dmba...@gmail.com> wrote:


On Fri, Aug 16, 2013 at 11:19 PM, Ross Angle <rok...@gmail.com> wrote:


(In short: It makes sense as a kind of combinator, but the type is exotic enough that I'd recommend providing it as a syntactic construct.)

I don't plan to add syntax if I can avoid it.

You were already hoping to have a =| syntax, and this would be a tamer version of that. Nevertheless, I think it would be more cumbersome to use and to implement, so I wouldn't really blame you if you chose to avoid it.

Hmm, if the main reason you opt for a termination guarantee is to prohibit *accidental* time explosion, then (Type : Type) might not be so bad. It doesn't take much code to use (Type : Type) to apply Girard's paradox, but that code is pretty complicated, so I don't think it would happen by accident.


But if I build on a call-by-value fixpoint combinator, I can probably simplify the common use cases to a couple arguments.

I'll get back to you on the induction recursion when I've had time to build some combinators. I plan to start writing my parsers and bootstraps up after some sleep.
 

I think all I want is for that "block" type to be well-understood.

If I'm thinking about using someone else's API and it seems to support a very wide range of use cases, I want to be reasonably confident my own use case was anticipated by the designer. (From the other side, as an API designer, I want to understand what use cases to anticipate.)

I really don't follow. The connection between "type well-understood" and all those other properties you mentioned? I'm not sure one exists.


I listed those other properties to explain what I meant by "well-understood" in this context. Maybe that's not enough explanation, but hopefully my responses below will help clarify what I mean.


First, from the API developer's POV, understanding use cases is best performed with a very *rough* understanding of the types involved:

People will interact with each other based on rough understandings, but they can avoid lose-lose tragedies and court expenses by applying some deep expertise upfront. You've taught me that as language designers, we have a unique opportunity to lay solid groundwork so that people can develop *valid* intuitions and expertise.

 
If API designers focus too much on precise type details, they'll be outflanked by bigger problems. Precise types are for machines to prove the properties that machines care about (like type safety). At higher levels of human thinking, it has to be in terms of authority, responsibility, and similar high level concepts.


I don't think those concepts are more "high-level" than precise types. They're just more fuzzy, more contradictory, and more honest. I believe they can be modeled within a precise type system anyway, and that the precise type system would help us reason about these honest APIs without losing information.


Second, clients who write outside the lines, push at the edges, beyond what is anticipated - THOSE are the clients I would most aim to support.

In the struggles between API designer and client, you would generally favor the client? Do you consider our current systems to be oppressive in favor of a few API designers exploiting many clients? Personally I do perceive this correlation, but I consider it to be misleading. I expect we have a whole world of people who could be both API designers and clients, but most people don't receive the opportunities or encouragement to deploy their own APIs.


The question "was my program anticipated?" doesn't matter nearly as much as "was my program accepted? does it do what I expected?"  The API designers are often most happy with these clients, too, because (in most cases) these unanticipated uses are not abusive.

Even if we don't understand all the creative ways our APIs can impact the outside world, we can still formally reason about them on the implementation side, and we can seek to formally reason about how they combine with other APIs in specific ways. When I said I wanted API designers to be able to anticipate use cases, I meant it in the latter sense, where they're anticipating their own maintenance needs rather than the actual impact on the outside world.

You could rebut this by saying there's always some feedback loop, so that external circumstances will leak into the way we implement an API. For instance, suppose I want to publish an iPhone app that uses the accelerometer to measure the force of hitting the ground from various heights. I should consider that the user won't always be able to read the screen afterwards. I prefer to claim that these strange loops are rare and special enough that we'll still find it useful to apply formal reasoning in between.


Aside from 'use cases', a concern is resource control and risk of denial of service attacks. Those risks are mostly independent of a block type. Unless we can guarantee that time/space for processing code is a small polynomial of code size (at most cubic, by my estimates), we're subject to denial-of-service attacks on our resources - unless we use another control mechanism, such as quotas ... or charging for resources, which is actually a very effective and usable approach.


If you don't mind, how did you come up with "cubic"? Personally, I keep trying to keep it at "linear," but some of the things I want to do bump this up to "quadratic" or occasionally can't be bounded by any polynomial.



(next e-mail)

Whoops, I don't know what I was thinking here. The type of the "x" word would usually be (a ~> (Static String :&: a)), so it's polymorphic too.

It's possible to fix this by combining ("x" typeVar) into a dedicated syntax, perhaps #typeVar[x]. The #typeVar[...] words would have type (SyntaxStack ~> SyntaxStack).

I'm not seeing the problem with parametric polymorphism in this case.  


Hmm... obviously this alone can't lead to Turing-completeness, so I think I just wanted to stamp it out to get a system that didn't use polymorphism at all. This way we don't need a kind of circular reasoning where we define polymorphism and then use it to justify the axioms we started with. There might be sound ways to do that reasoning, but I'd rather not bother to find them unless it's absolutely necessary. :)

Anyway, I don't know how promising this approach really is if it just has a single type, SyntaxStack, and no accounting for errors in the stack structure.



On the topic of polymorphism in concatenative languages, I just did a search for [combinator dependent typing], which has now led me to the topic of "illative combinatory logic." Apparently this is about combinatory logic with some operators for managing inference.

I've found a few hits, but my sleepy head's not in a good condition to look at streams of Cs and Ks. I'll take a look at it later. 


Yeah, I haven't really looked at it yet either.

Warm Regards,
Ross

David Barbour

Aug 17, 2013, 7:43:23 PM
to reactiv...@googlegroups.com
On Sat, Aug 17, 2013 at 3:55 PM, Ross Angle <rok...@gmail.com> wrote:

I don't plan to add syntax if I can avoid it.

You were already hoping to have a =| syntax, and this would be a tamer version of that.

I wasn't eager for it; it's just the first solution I saw. But at this point my goal is to pretty much eliminate the module system from the equation (except as a simple way to reuse/compress code). 

 
Hmm, if the main reason you opt for a termination guarantee is to prohibit *accidental* time explosion, then (Type : Type) might not be so bad. It doesn't take much code to use (Type : Type) to apply Girard's paradox, but that code is pretty complicated, so I don't think it would happen by accident.

See my other message on static choices and let me know what you think.
 


First, from the API developer's POV, understanding use cases is best performed with a very *rough* understanding of the types involved:

People will interact with each other based on rough understandings, but they can avoid lose-lose tragedies and court expenses by applying some deep expertise upfront. You've taught me that as language designers, we have a unique opportunity to lay solid groundwork so that people can develop *valid* intuitions and expertise.

Yeah. The important part is that this "rough" intuition-based understanding can also be "valid" with some effort. Capabilities and linear types are great tools for this, but developers still need to consider many aspects of an API carefully (e.g. things like stealing keyboard focus or transparent clickable areas can be UI security concerns).
 
 
If API designers focus too much on precise type details, they'll be outflanked by bigger problems. Precise types are for machines to prove the properties that machines care about (like type safety). At higher levels of human thinking, it has to be in terms of authority, responsibility, and similar high level concepts.


I don't think those concepts are more "high-level" than precise types. They're just more fuzzy, more contradictory, and more honest. 

They can be realized in more ways than precise types, therefore they're higher level. QED. =)

 
I believe they can be modeled within a precise type system anyway, and that the precise type system would help us reason about these honest APIs without losing information.

With this I agree.


In the struggles between API designer and client, you would generally favor the client?

I believe that supporting creative and unexpected applications truly benefits both the API designers and the clients. 

An API designer should protect against abuse, but should also find means to do so that support innovation. 

 
I expect we have a whole world of people who could be both API designers and clients, but most people don't receive the opportunities or encouragement to deploy their own APIs.

Yeah, this was one of the problems RDP was designed to address. By representing overlay networks, an RDP application can deploy its own APIs atop remote services. Any client of a service can essentially extend its API, either through an intermediate cloud server or injecting code. 
 

Even if we don't understand all the creative ways our APIs can impact the outside world, we can still formally reason about them on the implementation side, and we can seek to formally reason about how they combine with other APIs in specific ways. When I said I wanted API designers to be able to anticipate use cases, I meant it in the latter sense, where they're anticipating their own maintenance needs rather than the actual impact on the outside world.

Ah. Okay, that makes more sense to me.
 


Unless we can guarantee that time/space for processing code is a small polynomial of code size (at most cubic, by my estimates), we're subject to denial-of-service attacks


If you don't mind, how did you come up with "cubic"? Personally, I keep trying to keep it at "linear," but some of the things I want to do bump this up to "quadratic" or occasionally can't be bounded by any polynomial.

I can't actually recall. I came up with it many years ago. What I do remember is that cubic is when it starts to become really problematic with regards to estimated memory/CPU vs. bandwidth and mirroring patterns. Also, IIRC, I was accounting for the difficulty of heuristically discriminating legit code vs. stupidly massive code during the attack itself to support most requests. (Higher polynomials allow the DoS attack to look closer to legit service requests.)

Warm Regards,

Dave


Ross Angle

unread,
Aug 21, 2013, 2:41:35 AM8/21/13
to reactiv...@googlegroups.com
On Sat, Aug 17, 2013 at 4:43 PM, David Barbour <dmba...@gmail.com> wrote:




First, from the API developer's POV, understanding use cases is best performed with a very *rough* understanding of the types involved:

People will interact with each other based on rough understandings, but they can avoid lose-lose tragedies and court expenses by applying some deep expertise upfront. You've taught me that as language designers, we have a unique opportunity to lay solid groundwork so that people can develop *valid* intuitions and expertise.

Yeah. The important part is that this "rough" intuition-based understanding can also be made "valid" with some effort. Capabilities and linear types are great tools for this, but developers still need to consider many aspects of an API carefully (e.g. things like stealing keyboard focus or transparent clickable areas can be UI security concerns).
 
 
If API designers focus too much on precise type details, they'll be outflanked by bigger problems. Precise types are for machines to prove the properties that machines care about (like type safety). At higher levels of human thinking, reasoning has to be in terms of authority, responsibility, and similar high-level concepts.


I don't think those concepts are more "high-level" than precise types. They're just more fuzzy, more contradictory, and more honest. 

They can be realized in more ways than precise types, therefore they're higher level. QED. =)

Oh, when I think of one system being more "high-level" than another, it's about whether the first system builds on the second and (for most purposes) makes it unnecessary to use the second directly.

In this case, I think we develop these systems in a back-and-forth process, and yet either system could be considered independently of the other. Still, I can take a side: Imprecise scenarios dominate our "for most purposes." Living without precise reasoning is fine, but it would take a pretty exotic or boring lifeform to live without imprecise reasoning. So I agree that precise reasoning, including type systems, is low-level.




In the struggles between API designer and client, you would generally favor the client?

I believe that supporting creative and unexpected applications truly benefits both the API designers and the clients. 

An API designer should protect against abuse, but should find means of doing so that still support innovation.

Suppose we design a code block format which is deceptively expressive and dub it "BetaReductionMax." Sometimes, businesses will take BetaReductionMax blocks as parameters and thereby provide a little bit more power to their customers than they expected (but not more than what they're happy to support). However, most of the time, I think businesses will distribute BetaReductionMax blocks of their own, and customers will bear the risk of running them. Worse yet, when businesses partake in their own format wars on top of this platform, every person who wants to make a well-informed decision will have at least one extra layer of unintuitiveness to unravel: BetaReductionMax.

Well, we're not talking about a code block format that would cause privacy leaks or unauthorized actions; we're just talking about lost computational resources. For me, the bugs and inefficiencies in my code will probably be more than enough to make this moot. :-p I'll just say that I'm not motivated to find a code block format that's any more or less expressive than it takes to integrate smoothly with the other concepts involved.



 
I expect we have a whole world of people who could be both API designers and clients, but most people don't receive the opportunities or encouragement to deploy their own APIs.

Yeah, this was one of the problems RDP was designed to address. By representing overlay networks, an RDP application can deploy its own APIs atop remote services. Any client of a service can essentially extend its API, either through an intermediate cloud server or by injecting code.


Publicly accessible APIs are one thing, but I also mean the APIs we could be designing every day to manage our personal device settings, private conversations, game entities, and so on.

But yeah, I'm all for overlay networks. :)



If you don't mind, how did you come up with "cubic"? Personally, I keep aiming for "linear," but some of the things I want to do bump this up to "quadratic" or occasionally can't be bounded by any polynomial.

I can't actually recall; I came up with it many years ago. What I do remember is that cubic is where it starts to become really problematic with regard to estimated memory/CPU vs. bandwidth and mirroring patterns. Also, IIRC, I was accounting for the difficulty of heuristically discriminating legit code from stupidly massive code during the attack itself, so that most legitimate requests can still be served. (Higher polynomials let the DoS attack look closer to legit service requests.)

Thanks a lot for explaining that. ^_^

Warm Regards,
Ross

Ross Angle

unread,
Aug 27, 2013, 12:18:13 AM8/27/13
to reactiv...@googlegroups.com
Several emails ago, I wrote...


On Fri, Aug 16, 2013 at 11:19 PM, Ross Angle <rok...@gmail.com> wrote:

You can think of it as a family of fixpoint combinators, one for every type you want to do induction-recursion over. The combinator will look like an induction rule.
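For concreteness, a minimal Haskell sketch of one member of such a family: the combinator for natural numbers, shaped like the induction rule for Nat (names are illustrative only):

    -- structural recursion over Nat, shaped like its induction rule:
    -- a base case ("P Zero") and a step case ("P n -> P (Succ n)")
    data Nat = Zero | Succ Nat

    foldNat :: r -> (r -> r) -> Nat -> r
    foldNat base _    Zero     = base
    foldNat base step (Succ n) = step (foldNat base step n)

    -- example: addition written with the combinator, no general recursion
    add :: Nat -> Nat -> Nat
    add m = foldNat m Succ

Supply the Zero case and the Succ case and you cover every Nat, without needing a general fixpoint.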


Now that I look at Agda Vs Coq, I see "induction-recursion" isn't the right terminology for what I explained. The relevant quote from that wiki page is "Induction-recursion allows you to define mutually recursive data types and functions."

It almost sounds related to this other topic I brought up...


Personally, I still haven't learned what to do if the type contains other types (e.g. a list containing numbers), let alone what to do if multiple user-defined types contain each other.


...but actually, "mutually recursive data types and functions" refers to cases where a data type depends on a function and vice versa. I'm now looking at the Wikipedia article, which I find pretty clarifying.

Interestingly, the explanation of induction-recursion reduces it to a single data type definition and a single function definition. I get the sense the expansion to multiple-definition scenarios is just too trivial and tedious to mention, but it's still a mystery to me. XD

Well, I hope I haven't misinformed you about too many other things. :-p

-Ross

David Barbour

unread,
Aug 27, 2013, 3:46:08 AM8/27/13
to reactiv...@googlegroups.com
On Mon, Aug 26, 2013 at 9:18 PM, Ross Angle <rok...@gmail.com> wrote:

Interestingly, the explanation of induction-recursion reduces it to a single data type definition and a single function definition. I get the sense the expansion to multiple-definition scenarios is just too trivial and tedious to mention, but it's still a mystery to me. XD

Well, I hope I haven't misinformed you about too many other things. :-p

I think your explanations are still useful, even if they're misnamed. :)

