As you know, I consider it generally dangerous to build tools that use state, because stateful systems are more likely to be perceived as people, leading to controversy. When you propose that "all agents and behaviors" be "modeled as editing a program history," that sounds like every time a user wants to execute an agent or behavior, they must provide that program with access to some state of its own, thus making the program a potential person.
Likewise, when you talk about a program having access to "initially exclusive state," that almost suggests the user actually grants full ownership of this state to the program.
By the time you say "Humans, then, could model for themselves some user-agents that might grow some independent moral awareness and capacity. :) " you're just rustling my jimmies. ^_^
I was able to write a blog post that outlined some very specific implications. In short, to handle errors in the best possible way, a program must be able to simulate its developer's mind.
I actually want to use a principled approach to stateful APIs after all, so that at least the state is used in a morally responsible way. I'm thinking that a program which requires state will also come with an explicit place to plug in a mediator service.
If a user finds this program code and wants to execute it, they're obligated to plug it into a fair mediator, and failure to do so means they (the user) and the unfair mediator are accepting joint responsibility for the program's behavior--even though the program actually carries out the intent of the developer, a fourth party!
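Concretely, I imagine the plug-in point looking something like the following. (This is just a rough Haskell sketch with made-up names like Mediator and StatefulProgram, not anything that exists in RDP or Awelon.) The program can only reach its state through whatever mediator the user supplies, so the user and their chosen mediator visibly share in how the state gets used:

    import Data.IORef (newIORef, readIORef, writeIORef)

    -- The mediator service the user must supply. It decides whether each
    -- read or write of the program's state actually happens.
    data Mediator s = Mediator
      { readState  :: IO s
      , writeState :: s -> IO Bool   -- False: the mediator refused the update
      }

    -- A stateful program is written only against a mediator; it has no
    -- other way to reach its state.
    newtype StatefulProgram s a = StatefulProgram (Mediator s -> IO a)

    runWith :: Mediator s -> StatefulProgram s a -> IO a
    runWith m (StatefulProgram p) = p m

    -- Developer's code: increment a counter through whatever mediator it gets.
    incrementCounter :: StatefulProgram Int Bool
    incrementCounter = StatefulProgram $ \m -> do
      n <- readState m
      writeState m (n + 1)

    -- One possible mediator the user might plug in: plain in-memory state.
    -- A "fair" mediator could instead log, rate-limit, or refuse updates.
    inMemoryMediator :: IO (Mediator Int)
    inMemoryMediator = do
      ref <- newIORef (0 :: Int)
      pure (Mediator (readIORef ref) (\s -> writeIORef ref s >> pure True))

    main :: IO ()
    main = do
      m  <- inMemoryMediator
      ok <- runWith m incrementCounter
      print ok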
--
> As you know, I consider it generally dangerous to build tools that use state, because stateful systems are more likely to be perceived as people, leading to controversy.
Was this not a joke? Do you have further exposition of this viewpoint somewhere?
Well, RDP behaviors are still internally stateless.
Though I'm not sure what "full ownership" means in context. In a sense, the user is still in full control of the state, having the ability to revoke exclusive access at any time. Perhaps one might call it a "lease" rather than "ownership". Also, the user would generally control the subprogram that manipulates the state, via live coding.
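If it helps, here's roughly how I picture the "lease" (a throwaway Haskell sketch with invented names, not how RDP actually models exclusive state): the subprogram holds a handle it can only use while the user hasn't revoked it.

    import Data.IORef (IORef, newIORef, readIORef, writeIORef, atomicModifyIORef')

    -- A lease: the holder may use the state only until the owner revokes it.
    data Lease s = Lease
      { leaseState :: IORef s
      , revoked    :: IORef Bool
      }

    grantLease :: s -> IO (Lease s)
    grantLease s0 = Lease <$> newIORef s0 <*> newIORef False

    -- The owner (the user) can revoke exclusive access at any time.
    revoke :: Lease s -> IO ()
    revoke l = writeIORef (revoked l) True

    -- The holder (the subprogram) can manipulate the state only while the
    -- lease is live; Nothing means access has been withdrawn.
    withLease :: Lease s -> (s -> (s, a)) -> IO (Maybe a)
    withLease l f = do
      gone <- readIORef (revoked l)
      if gone
        then pure Nothing
        else Just <$> atomicModifyIORef' (leaseState l) f

    main :: IO ()
    main = do
      l <- grantLease (0 :: Int)
      _ <- withLease l (\n -> (n + 1, n))   -- succeeds while the lease is live
      revoke l
      r <- withLease l (\n -> (n + 1, n))   -- Nothing: access has been revoked
      print r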
> By the time you say "Humans, then, could model for themselves some user-agents that might grow some independent moral awareness and capacity. :)" you're just rustling my jimmies. ^_^

Yep! :)
> I was able to write a blog post that outlined some very specific implications. In short, to handle errors in the best possible way, a program must be able to simulate its developer's mind.

I read that article last week, but hadn't formed an opinion on it at the time.
I like your notion of "design holes". We could consider dependency on steady state to form a design hole, and `K` an operator to make a useful class of design holes explicit.

But with regards to handling errors in the 'best possible way', it isn't clear to me that we would want to simulate a developer's mind. This assertion is based upon knowing "the program's intended purpose", but it seems you are attributing purpose to the programmer rather than to the user or use-case.
As the user of a program (a software component), I don't want every subprogram to limp along trying to fulfill its intended purpose against all odds. I want most of them to fail in clean, simple, predictable, easily observable ways, so that I can either use fallbacks or propagate the failure. Isn't the same true for users of the software component I've constructed?
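To illustrate the kind of failure handling I have in mind (just a generic Haskell sketch, nothing specific to RDP), each component fails with a visible reason, and the fallback is something the caller composes explicitly:

    -- A subprogram either fails cleanly with a visible reason or succeeds;
    -- the caller decides whether to fall back or to propagate the failure.
    data Result a = Failed String | Ok a
      deriving Show

    -- Try a primary result, then an explicit fallback; if both fail,
    -- propagate a failure that names both reasons instead of limping along.
    withFallback :: Result a -> Result a -> Result a
    withFallback (Ok a)      _           = Ok a
    withFallback (Failed _)  (Ok a)      = Ok a
    withFallback (Failed e1) (Failed e2) =
      Failed ("primary: " ++ e1 ++ "; fallback: " ++ e2)

    -- Example: a lookup that fails predictably rather than guessing a value.
    lookupConfig :: [(String, String)] -> String -> Result String
    lookupConfig kvs k =
      case lookup k kvs of
        Just v  -> Ok v
        Nothing -> Failed ("missing key " ++ show k)

    main :: IO ()
    main = print (lookupConfig [] "port" `withFallback` Ok "8080")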
The cases where I do desire a lot of robust, automatic flexibility might be modeled as constraint solvers or blackboard systems. In those cases, a certain degree of intelligence might develop - e.g. learning which solutions tend to be stable and work well, or multiple agents contributing to a solution. But I think it would not be a human-like intelligence, nor have any concern for emancipation.
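A toy version of what I mean by a blackboard system, with multiple sources contributing partial results (again a made-up Haskell sketch, assuming a fixed set of knowledge sources and that the iteration actually reaches a fixed point):

    -- A toy blackboard: knowledge sources each look at the shared board and
    -- may contribute or refine partial results; iterate until nothing changes.
    type Board = [(String, Int)]            -- named partial results
    type KnowledgeSource = Board -> Board   -- may add or refine entries

    step :: [KnowledgeSource] -> Board -> Board
    step sources board = foldl (\b ks -> ks b) board sources

    -- Assumes the knowledge sources eventually stop changing the board.
    solve :: [KnowledgeSource] -> Board -> Board
    solve sources board =
      let board' = step sources board
      in if board' == board then board else solve sources board'

    main :: IO ()
    main = print (solve [srcA, srcB] [])
      where
        -- Two tiny knowledge sources: one proposes x, one derives y from x.
        srcA b = if lookup "x" b == Nothing then ("x", 2) : b else b
        srcB b = case (lookup "x" b, lookup "y" b) of
                   (Just x, Nothing) -> ("y", x * 10) : b
                   _                 -> b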
This ended up changing my conclusion. Before, I had concluded that I'd like to build error mechanisms that are predictable in hindsight, because that's a desirable feature an AI would lack. But no, predictability in hindsight is just another way for the error mechanism to explain itself to the developer, so an AI would also be able to take advantage of this technique.
Specifically for Awelon, what kind of failure do you have in mind? If a program fails in one partition, how long does it keep running in the others?
--
Ross,
The usual ethical approach to dealing with persons is out of empathy; we assume that other people experience consciousness, pain, etc. in the same way we do and then arrive at laws protecting persons as part of a social contract. There are well known ethical concerns around artificial intelligences, though AFAIK there's nothing like consensus regarding what properties should make such an AI eligible for protections akin to personhood (we can't even agree on how animals ought to be treated).
But you're talking about something different, I think. Currently, by convention, parents cannot in general be held liable for the actions of their children, but mad scientists can be held liable for the actions of the killer robot they cooked up. As robots of the future become more adaptable, it makes sense that we might want to shift some of that responsibility away from the robot's creator to someone else. Shifting it to the robot would probably make sense in some circumstances.
- A understands and manipulates B, and B has no other clients ==> B is A's state resource
- A intrigues and invites B, and B has no other clients ==> B is A's spontaneity resource
- A intrigues and invites B, and B intrigues and invites A, and A and B communicate with low latency ==> A and B can be understood as a single agent
- A can understand and manipulate C, and B can understand and manipulate C, and C has no other clients, and C is not very stateful ==> C can be understood as a communication channel between A and B
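To make that classification concrete, here is roughly how I'd encode the first three rules (purely hypothetical names, nothing from RDP or Awelon vocabulary; the fourth rule involves a third party C and is omitted for brevity):

    -- A hypothetical encoding of the relationships above, just to make the
    -- classification explicit. The rules don't state a priority order, so
    -- the ordering of the guards below is my own choice.
    data Relation = Relation
      { understandsAndManipulates :: Bool
      , intriguesAndInvites       :: Bool
      }

    data Link = Link
      { aToB             :: Relation
      , bToA             :: Relation
      , bHasOtherClients :: Bool
      , lowLatency       :: Bool
      }

    data Classification
      = StateResource        -- B is A's state resource
      | SpontaneityResource  -- B is A's spontaneity resource
      | SingleAgent          -- A and B are understood as a single agent
      | Unclassified
      deriving Show

    classify :: Link -> Classification
    classify l
      | understandsAndManipulates (aToB l) && not (bHasOtherClients l) = StateResource
      | intriguesAndInvites (aToB l) && intriguesAndInvites (bToA l)
          && lowLatency l                                              = SingleAgent
      | intriguesAndInvites (aToB l) && not (bHasOtherClients l)       = SpontaneityResource
      | otherwise                                                      = Unclassified

    main :: IO ()
    main = print (classify (Link (Relation True False) (Relation False False) False True))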
I'm going to stop contemplating this stuff now and jump to the conclusion that avoiding all state in the systems you're building today, on the basis of these considerations, does seem pretty kooky :).