Philosophy of errors and state


Ross Angle
Nov 19, 2013, 3:07:17 AM
to reactiv...@googlegroups.com
Hi again David,

After two months, I'm finally responding to the "Staging for PL/UI synthesis" thread, but I'm putting my deepest response here in this new thread.

As you know, I consider it generally dangerous to build tools that use state, because stateful systems are more likely to be perceived as people, leading to controversy. When you propose for "all agents and behaviors" to be "modeled as editing a program history," that sounds like every time a user wants to execute an agent or behavior, they must provide that program with access to some state of its own, thus making the program a potential person. Likewise, when you talk about a program having access to "initially exclusive state," that almost suggests the user actually grants full ownership of this state to the program.

By the time you say "Humans, then, could model for themselves some user-agents that might grow some independent moral awareness and capacity. :) " you're just rustling my jimmies. ^_^

However, I do see your point when you say a person should be able to take rich real-time actions without ironing out a fully specified program first. I've been giving this a lot of thought, and finally I was able to write a blog post that outlined some very specific implications. In short, to handle errors in the best possible way, a program must be able to simulate its developer's mind. If the program and developer are in constant contact, the simulation won't have to diverge by much, but if they're apart, the two minds may become inconsistent with each other (exemplifying the CAP theorem), to the point where onlookers may say the two should be emancipated from each other. So if I want to hold on to the idea of publishing code for other people to find and execute, I must accept that programmers will strive toward systems that let them publish independent agents along with their program.

With this new expectation in mind, I actually want to use a principled approach to stateful APIs after all, so that at least the state is used in a morally responsible way. I'm thinking that a program which requires state will also come with an explicit place to plug in a mediator service. If a user finds this program code and wants to execute it, they're obligated to plug it into a fair mediator, and failure to do so means they (the user) and the unfair mediator are accepting joint responsibility for the program's behavior--even though the program actually carries out the intent of the developer, a fourth party!
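
To make that a bit more concrete, here's a rough sketch in Haskell of the shape I have in mind. The names (Mediator, arbitrate, audit, Verdict) are all hypothetical, and the whole thing is just an illustration of "state only through whatever mediator the user plugs in":

```haskell
{-# LANGUAGE RankNTypes #-}

-- Hypothetical names throughout; only the shape of the idea matters.
data Verdict = Allow | Deny | Escalate String
  deriving (Show)

-- The mediator service the user plugs in.
data Mediator m = Mediator
  { arbitrate :: String -> m Verdict  -- judge a proposed use of state
  , audit     :: String -> m ()       -- keep a record of what happened
  }

-- A program that requires state never touches it directly; it can only
-- act through whichever mediator it was given.
newtype StatefulProgram a = StatefulProgram
  { runWith :: forall m. Monad m => Mediator m -> m a }

-- Example: ask permission before performing a write.
bumpCounter :: StatefulProgram ()
bumpCounter = StatefulProgram $ \med -> do
  verdict <- arbitrate med "write: counter := counter + 1"
  case verdict of
    Allow -> audit med "write performed"
    _     -> audit med "write refused"
```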

I'm not sure what the mediator's exact API would be, but I would hope to aim for a decentralized court system rather than letting a supreme court be a single point of failure.

Anyway, I'd say this kind of state management is a secondary measure. If people are guided toward actions and programming techniques that don't cause remote errors, they'll rarely need to deploy intelligent ambassadors in the first place. So I still believe it's important to make elegant systems where programs are correct by construction, but I now have detailed opinions about what an error mechanism should look like, too.

Warm Regards,
Ross

David Barbour
Nov 19, 2013, 11:36:28 AM
to reactiv...@googlegroups.com
On Tue, Nov 19, 2013 at 2:07 AM, Ross Angle <rok...@gmail.com> wrote:

> As you know, I consider it generally dangerous to build tools that use state, because stateful systems are more likely to be perceived as people, leading to controversy. When you propose for "all agents and behaviors" to be "modeled as editing a program history," that sounds like every time a user wants to execute an agent or behavior, they must provide that program with access to some state of its own, thus making the program a potential person.

Well, RDP behaviors are still internally stateless. Perhaps I should have said: "all stateful resources". The idea is that we can pretend that a video camera (for example) is continuously editing a program to maintain an image type. An interesting RDP behavior will typically orchestrate communication among multiple stateful resources. Some resources are more explicitly 'state' resources, in that they maintain state in a predictable manner based on the demands.

As you know, I consider state problematic for a lot of reasons unrelated to the potential ethical concerns surrounding synthetic intelligence. State is an easy shelter for bugs. Composing stateful components requires combinatorial reasoning. Stateful systems tend to degrade over time, accumulating errors. So I'm still very interested in minimizing state.

 
> Likewise, when you talk about a program having access to "initially exclusive state," that almost suggests the user actually grants full ownership of this state to the program.

In many cases it's easier to construct or reason about single writer state than collaborative state models. Further, "initially exclusive" state is useful if we wish to specify and update state models. We can model exclusivity by leveraging linear or affine types and an initially unique value.
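
Roughly the discipline I have in mind, as a sketch of my own in Haskell. Here exclusivity is enforced only by a module boundary and by convention (every operation consumes the token it is handed); a real affine or linear type system would let the compiler check that a token is never duplicated:

```haskell
module ExclusiveState
  ( Exclusive      -- abstract: the constructor is not exported
  , newExclusive
  , readEx
  , writeEx
  ) where

-- A token of exclusive access to a piece of state.
newtype Exclusive s = Exclusive s

-- The "initially unique value": minting the one and only token.
newExclusive :: s -> Exclusive s
newExclusive = Exclusive

-- Reading hands back the state along with a fresh token.
readEx :: Exclusive s -> (s, Exclusive s)
readEx (Exclusive s) = (s, Exclusive s)

-- Writing consumes the old token; the previous handle is dead.
writeEx :: s -> Exclusive s -> Exclusive s
writeEx s _ = Exclusive s
```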

Though I'm not sure what "full ownership" means in context. In a sense, the user is still in full control of the state, having the ability to revoke exclusive access at any time. Perhaps one might call it a "lease" rather than "ownership". Also, the user would generally control the subprogram that manipulates the state, via live coding.
 

> By the time you say "Humans, then, could model for themselves some user-agents that might grow some independent moral awareness and capacity. :) " you're just rustling my jimmies. ^_^

Yep! :)
 

> I was able to write a blog post that outlined some very specific implications. In short, to handle errors in the best possible way, a program must be able to simulate its developer's mind.

I read that article last week, but hadn't formed an opinion on it at the time.  I like your notion of "design holes". We could consider dependency on steady state to form a design hole, and `K` an operator to make a useful class of design holes explicit.

But with regards to handling errors in the 'best possible way', it isn't clear to me that we would want to simulate a developer's mind. This assertion is based upon knowing "the program's intended purpose", but it seems you are attributing purpose to the programmer rather than to the user or use-case. 

As a user of a program, a software component, I don't want every subprogram to limp along trying to fulfill its intended purpose against all odds. I want most of them to fail in clean, simple, predictable, easily observable ways, so that I can either use fallbacks or propagate the failure. Is not the same true for users of the software component I've constructed?

The cases where I do desire a lot of robust, automatic flexibility might be modeled as constraint solvers or blackboard systems. In these cases, a certain degree of intelligence might be developed - e.g. learning which solutions tend to be stable and work well, multiple agents contributing to a solution. But I think it would not be a human-like intelligence, nor have any concern for emancipation.


> I actually want to use a principled approach to stateful APIs after all, so that at least the state is used in a morally responsible way. I'm thinking that a program which requires state will also come with an explicit place to plug in a mediator service.

I think providing state - even exclusive, single-writer state - through revocable capabilities would generally serve the role of enabling observation, auditing, history and rewind,  and probably whatever you mean by 'mediation'. :)
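
Concretely, I'm thinking of something like the object-capability caretaker pattern. A generic sketch (plain Haskell with IORef, nothing Awelon-specific):

```haskell
import Data.IORef

-- A capability to some state; both operations go dead once revoked.
data Cap s = Cap
  { readCap  :: IO (Maybe s)   -- Nothing after revocation
  , writeCap :: s -> IO Bool   -- False after revocation
  }

-- Grant access to a cell, along with an action that revokes it.
grantRevocable :: IORef s -> IO (Cap s, IO ())
grantRevocable cell = do
  alive <- newIORef True
  let guarded act fallback = do
        ok <- readIORef alive
        if ok then act else pure fallback
      cap = Cap
        { readCap  = guarded (Just <$> readIORef cell) Nothing
        , writeCap = \s -> guarded (True <$ writeIORef cell s) False
        }
  pure (cap, writeIORef alive False)
```

Everything the capability does can also be logged at this layer, which is where observation, auditing, and history would come from; the revoke action is what makes the arrangement a lease rather than a transfer.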

 
> If a user finds this program code and wants to execute it, they're obligated to plug it into a fair mediator, and failure to do so means they (the user) and the unfair mediator are accepting joint responsibility for the program's behavior--even though the program actually carries out the intent of the developer, a fourth party!

I haven't thought much about 'responsibility'. I agree that we can't reasonably hold a programmer responsible for safe program behavior if the user is mucking about with program state.

Warm Regards,

Dave

Matt McLelland
Nov 19, 2013, 11:56:35 AM
to reactiv...@googlegroups.com
> As you know, I consider it generally dangerous to build tools that use state, because stateful systems are more likely to be perceived as people, leading to controversy.

Was this not a joke?   Do you have further exposition of this viewpoint somewhere?  

Best,
Matt






Ross Angle
Nov 19, 2013, 2:40:25 PM
to reactiv...@googlegroups.com
On Tue, Nov 19, 2013 at 8:56 AM, Matt McLelland <mclella...@gmail.com> wrote:
> As you know, I consider it generally dangerous to build tools that use state, because stateful systems are more likely to be perceived as people, leading to controversy.

> Was this not a joke? Do you have further exposition of this viewpoint somewhere?

Hi Matt,

This blog post is the first thing I wrote about it, I think, but I'll explain again here.

I'm not joking, but it's something I don't expect people to strongly care about for a long time, and I expect my claims to be rebutted and revised in many ways before they actually become relevant (...or don't). So I've been using this line of reasoning to focus and motivate my own designs (and my own favorite designs by others), but besides that, it's just something I find thought-provoking to talk about, jokes included. :)

I start with the skeptical premise that we don't really know what people of the future will consider right and wrong. We don't even know what they'll consider to be "people." However, a person is a special kind of system. It's stubbornly hard for an onlooker to predict, and yet it has features (like memory and mobility) that make it valuable to interact with. As we invent more varieties of systems that are valuable but inscrutable, the community of "users" will want to externally motivate them to act in certain ways, sometimes by legally enforcing their rights and responsibilities. So concepts like morality and personhood emerge, even without setting them down as preconceived notions.

A stateless system can't change its ways, so we can't really hold it responsible for its own behavior. That's why I like the idea of stateless communication networks: we can discuss the rising complexity in the world without also generating new scapegoats and dissenters as we go along. I think when complexity outpaces communication, we get controversy and bitter conflict.

Now I'm also thinking that our desire to express our intentions at a distance will force us to deal with inconsistency, where the representative we deploy is not actually the same person as we are. Present-day programming is largely about building things that continue to work without the developer's hand-holding, so I'd like to explore ideas to tackle this inconsistency head-on.

I hope I'm not being too kooky here. :)

-Ross

Ross Angle
Nov 19, 2013, 7:54:00 PM
to reactiv...@googlegroups.com
On Tue, Nov 19, 2013 at 8:36 AM, David Barbour <dmba...@gmail.com> wrote:

> Well, RDP behaviors are still internally stateless.

Yes, I would like to guide developers toward building more RDP behaviors, rather than building more RDP state resources. I was just reluctant to even think about anything stateful because I couldn't think of techniques to keep things ethical. (Now I still can't, but I've got a place to start.)



> Though I'm not sure what "full ownership" means in context. In a sense, the user is still in full control of the state, having the ability to revoke exclusive access at any time. Perhaps one might call it a "lease" rather than "ownership". Also, the user would generally control the subprogram that manipulates the state, via live coding.

Just because one person can revoke another person's state doesn't make it right. Even if that revocation counts as part of the program's formal API, it's assisted suicide. There's a much subtler and more culturally sensitive condition that must be met here, I think.

If the developer can comprehensively mimic the program's behavior, or if they can demonstrate that the program cannot survive independently of their effort, those would make strong cases for revocation. If other people can also demonstrate these things, or if the program can demonstrate that it has the same advantages over the developer, these would weaken the case for revocation.


 

>> By the time you say "Humans, then, could model for themselves some user-agents that might grow some independent moral awareness and capacity. :) " you're just rustling my jimmies. ^_^

> Yep! :)

Cool. :-p


 

>> I was able to write a blog post that outlined some very specific implications. In short, to handle errors in the best possible way, a program must be able to simulate its developer's mind.

> I read that article last week, but hadn't formed an opinion on it at the time.

Then whoops, you saw it before I was really satisfied with it. Since then, with help from akkartik and evanrmurphy at Arc Forum, I restructured the intro and some of my reasoning to be a bit clearer.

This ended up changing my conclusion. Before, I concluded that I'd like to build error mechanisms that are predictable in hindsight, because that's a desirable feature an AI would lack. But no, it's just another way for the error mechanism to explain itself to the developer, so an AI would also be able to take advantage of this technique. This change in approach led me to think about mediation, so I expanded Observation 1 to talk about that.

Well, if you notice, I'm talking about it a lot right here. I don't consider that blog post to be necessary reading.



> I like your notion of "design holes". We could consider dependency on steady state to form a design hole, and `K` an operator to make a useful class of design holes explicit.

> But with regards to handling errors in the 'best possible way', it isn't clear to me that we would want to simulate a developer's mind. This assertion is based upon knowing "the program's intended purpose", but it seems you are attributing purpose to the programmer rather than to the user or use-case.

Yes, I'm attributing purpose to the developer. The developer deliberately wrote that program rather than some variation thereof. Even if the developer leaves a design hole, they expect that hole to behave according to the same expectations they hold for the rest of the program.

Suppose the developer wants the client to be able to set their own goals for the program. Well, I just said "the developer wants," so this is one of the developer's design goals!



> As a user of a program, a software component, I don't want every subprogram to limp along trying to fulfill its intended purpose against all odds. I want most of them to fail in clean, simple, predictable, easily observable ways, so that I can either use fallbacks or propagate the failure. Is not the same true for users of the software component I've constructed?

I would like that too. I think a lot of developers would design their programs to fail that way. A design hole happens when the developer doesn't have a goal in mind, not even the goal to fail in a certain way.

To detour a bit, if developer representatives are effective enough, who needs programs? In a sense, every single program is a developer representative. So when I talk about bundling a formally written program together with an intelligent representative, I'm talking about two programs conjoined by some kind of fallback mechanism. This kind of fallback from preplanned behavior to intelligent behavior is what I think faithfully reflects the notion of a design hole.
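
In code terms, the bundling might be as simple as this sketch (Answer, Planned, Hole, and conjoin are names I'm making up for illustration):

```haskell
-- The formally written program answers the cases its developer planned
-- for; anything it declines (a design hole) goes to the representative.
data Answer r
  = Planned r    -- the preplanned program had an answer
  | Hole String  -- a design hole, with a note about what was missing

conjoin
  :: (input -> Answer result)         -- the formally written program
  -> (String -> input -> IO result)   -- the developer's representative
  -> (input -> IO result)
conjoin planned representative inp =
  case planned inp of
    Planned r   -> pure r
    Hole reason -> representative reason inp
```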

Specifically for Awelon, what kind of failure do you have in mind? If a program fails in one partition, how long does it keep running in the others?



> The cases where I do desire a lot of robust, automatic flexibility might be modeled as constraint solvers or blackboard systems. In these cases, a certain degree of intelligence might be developed - e.g. learning which solutions tend to be stable and work well, multiple agents contributing to a solution. But I think it would not be a human-like intelligence, nor have any concern for emancipation.

No, I wouldn't expect any human-like intelligence from that. Also, I don't actually expect many of these representatives to seek their own emancipation; a developer would only occasionally intend for that kind of program behavior.

(This is unlike other scenarios I was worried about in a previous conversation: I expect people to create human-like computer agents specifically to liaise with other humans, or specifically to displace responsibility from themselves to the agent. In those cases, humans might fight to emancipate human-like agents for empathetic or manipulative reasons.)

-Ross

David Barbour
Nov 19, 2013, 11:33:20 PM
to reactiv...@googlegroups.com
On Tue, Nov 19, 2013 at 6:54 PM, Ross Angle <rok...@gmail.com> wrote:

> This ended up changing my conclusion. Before, I concluded that I'd like to build error mechanisms that are predictable in hindsight, because that's a desirable feature an AI would lack. But no, it's just another way for the error mechanism to explain itself to the developer, so an AI would also be able to take advantage of this technique.

I think you'd want both: (a) predictable failure modes and error mechanisms, and (b) an intelligent agent that developers or users can consult to help understand the behavior of a program (error or success). And these can be pretty much orthogonal. Certainly, a predictable error mechanism would be useful in developing a good explanation.


> Specifically for Awelon, what kind of failure do you have in mind? If a program fails in one partition, how long does it keep running in the others?

For a meta-layer failure (e.g. violated assumptions, failed assertions, broken contracts) that static analysis couldn't eliminate, I'm aiming for 2*latency plus a little.  In most cases, within a fraction of a second, or better if the failure is speculated. The behavior is halted at some logical instant. A failover program might take its place in the same logical instant.
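
As a toy picture only (this is not the actual Awelon mechanism, just the shape of "halt at a logical instant, failover takes its place in the same instant"):

```haskell
-- A behavior here is just a step function; the meta layer checks a
-- contract at each logical instant and swaps in the failover within
-- the same instant where the violation is observed.
type Behavior i o = i -> o

data Running i o = Running
  { active   :: Behavior i o
  , failover :: Behavior i o
  , contract :: o -> Bool    -- the meta-layer assertion
  }

step :: Running i o -> i -> (o, Running i o)
step r i
  | contract r out = (out, r)
  | otherwise      = (failover r i, r { active = failover r })
  where
    out = active r i
```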

Compared to in-band failure handling, meta-layer failures will have a larger scope, higher latency, and require out-of-band communication.

Matt McLelland
Nov 20, 2013, 12:18:49 PM
to reactiv...@googlegroups.com
Ross,

The usual ethical approach to dealing with persons is out of empathy; we assume that other people experience consciousness, pain, etc. in the same way we do and then arrive at laws protecting persons as part of a social contract.  There are well known ethical concerns around artificial intelligences, though AFAIK there's nothing like consensus regarding what properties should make such an AI eligible for protections akin to personhood (we can't even agree on how animals ought to be treated). 

But you're talking about something different, I think.  Currently, by convention, parents cannot in general be held liable for the actions of their children, but mad scientists can be held liable for actions of the killer robot they cooked up.  As robots of the future become more adaptable, it makes sense that we might want to shift some of that responsibility away from the robot's creator to someone else.  Shifting it to the robot would probably make sense in some circumstances.

...

I'm going to stop contemplating this stuff now and jump to the conclusion that avoiding all state in the systems you're building today, on the basis of these considerations, does seem pretty kooky :).

Best,
Matt






Ross Angle
Nov 21, 2013, 4:31:07 AM
to reactiv...@googlegroups.com
On Wed, Nov 20, 2013 at 9:18 AM, Matt McLelland <mclella...@gmail.com> wrote:
> Ross,

> The usual ethical approach to dealing with persons is out of empathy; we assume that other people experience consciousness, pain, etc. in the same way we do and then arrive at laws protecting persons as part of a social contract. There are well known ethical concerns around artificial intelligences, though AFAIK there's nothing like consensus regarding what properties should make such an AI eligible for protections akin to personhood (we can't even agree on how animals ought to be treated).

> But you're talking about something different, I think. Currently, by convention, parents cannot in general be held liable for the actions of their children, but mad scientists can be held liable for actions of the killer robot they cooked up. As robots of the future become more adaptable, it makes sense that we might want to shift some of that responsibility away from the robot's creator to someone else. Shifting it to the robot would probably make sense in some circumstances.

Yeah, this lack of consensus is the primary thing I find concerning. As our technology becomes more complicated, our consensus might fall far behind it unless we keep finding better ways to communicate. But the most efficient ways to communicate will let us express incomplete thoughts, and the most faithful representations of these thoughts could take on lives of their own. Perhaps this effect won't itself lead to a great deal of complexity; if so, what simple theories and programming styles could navigate it? Mediator services are my one and only idea at the moment.

Hmm...

Speaking of incomplete thoughts, here are some rough hypotheses I'd really like to refine. Maybe you (or anyone) can tell me if you've ever come across something like this, maybe related to information theory or hidden Markov models.

A understands and manipulates B, and B has no other clients ==> B is A's state resource

A intrigues and invites B, and B has no other clients ==> B is A's spontaneity resource

A intrigues and invites B, and B intrigues and invites A, and A and B communicate with low latency ==> A and B can be understood as a single agent

A can understand and manipulate C, and B can understand and manipulate C, and C has no other clients, and C is not very stateful ==> C can be understood as a communication channel between A and B

These are the kinds of generalizations I might hastily make about agents, but expressed in a way that might generalize past a boxes-and-arrows view of communication and toward a more continuous, resource-aware worldview.

I don't know how much further I could follow the topic from here, 'cause I'm far more interested in symbolic reasoning....

Wow, from Wikipedia's listing of approaches to theoretical sociology, it looks like social geometry/pure sociology takes an approach quite a lot like what I'm thinking of (modeling numeric resources, explaining conflict resolution without moral realism, etc.). That could be pretty promising.




> I'm going to stop contemplating this stuff now and jump to the conclusion that avoiding all state in the systems you're building today, on the basis of these considerations, does seem pretty kooky :).

Well, state is necessary for many things, and I expect to keep using mutable variables in my code for a long time because that's what I'm used to. :-p My opinions about ethics are long-term, and I don't have opinions about any one person in one scenario unless I expect them to make wildly successful mistakes.

-Ross