AI-economy

Tom Everitt

Dec 8, 2014, 5:40:38 PM
to magic...@googlegroups.com

Hi all,

What is an independent agent?

Consider a world where we have sufficient technological competence to create beings as smart as ourselves. A person (call him Owner) may then choose to create (or buy) an "AI-servant" (called Servant) that does some of his business. Owner may justify his investment in Servant if his increase in productivity with the help of Servant is greater than the cost of the investment.

In some situations the optimal Servant may be an independent agent in its own right, endowed with some incentives that are engineered to help the Owner. Owner and Servant will presumably share technological competence with each other, giving both similar means. And through the instrumental convergence thesis, the behavior of owner and servant may not come to differ much.

The Servant may, for instance, choose to gather some resources and invest in its own servant (a Sub-Servant).

Some key questions:
- What is really the difference between an Owner and an intelligent, independent Servant? The Owner may be employed by some greater entity, and the Servant may have a Sub-Servant.
- How will resources be distributed? Will Servants always contribute more to their Owners than to themselves, leading to ever-increasing inequalities?
- Is such a "food-chain" scenario bad? And if it is, can we prevent it?

Cheers,
Tom

Laurent

Dec 8, 2014, 7:08:44 PM
to magic...@googlegroups.com
Hi Tom,

Interesting questions :)



On Mon, Dec 8, 2014 at 10:40 PM, Tom Everitt <tom4e...@gmail.com> wrote:
Some key questions:
- What is really the difference between an Owner and an intelligent, independent Servant? The Owner may be employed by some greater entity, and the Servant may have a Sub-Servant.

Viewing the Servant as an employee is actually not a bad idea, I think. It brings along the whole usual responsibility chain of companies, contracts, and the rest of the legal machinery. This fits reasonably well if the Servant/employee has a high degree of autonomy, but it's not clear what "breaking the contract" would mean for the Servant.

Now if you want a more owner/slave relationship, we could get closer to a master/dog relationship: the Servant is not able/allowed to modify the terms of the binding contract, nor to really break it by itself, and the Servant could be terminated in case of bad/dangerous behaviour. But this does not seem to fit well with a human-level intelligence that is able to plan over the medium or long term.
 
- How will resources be distributed? Will Servants always contribute more to their Owners than to themselves, leading to ever-increasing inequalities?

In the case of an employee, it's all governed by the contract, upon which both parties should agree.
In the case of owner/slave, well, resources are entirely owned by the owner, but then I guess the owner should be entirely responsible for the slave's actions.

More generally, I think responsibility should be defined as "how likely the observed consequences were at the time the action(s) was (were) taken", along with the knowingly/unknowingly distinction (i.e., the likelihood of the consequences as assumed by the agent under consideration).
For example, if agent A1 tells (= action a1) agent A2 to do action a2, with later consequences c and reward r, then the responsibility of A1, i.e. how strongly r should be propagated back to A1, should be proportional to how likely action a1 was to produce (by whatever means) consequences c and (in particular) reward r. It can be further refined by how much A1 expected c and r to happen by doing action a1. A further refinement could be to consider what A1 could have done to achieve better outcomes.

Something like the following, to assess A1's responsibility for doing action a_t at time t, leading to consequences o_k at a later time k,
where P(a_t'|A1) is not the probability for A1 to take action a_t', but the "difficulty" for A1 to *consider* action a_t'. This is particularly relevant in case a_t' is a complex sequence of actions (according to A1), or if A1 is cognitively disabled and cannot even consider some simple action; 1/c is the normalization factor for P(a_t'|A1).

One could also penalize/reward by 1/P(a_t|A1) for the twistedness of choosing a_t (e.g., if A1 put a lot of resources into finding action a_t, i.e., the more planning the worse).
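The idea above can be sketched in code. To be clear, the original formula is not reproduced in this archive, so everything below is one possible reading of the prose: the function name, the 1/difficulty weighting of alternatives, and the clipping at zero are my assumptions, not Laurent's actual proposal.

```python
def responsibility(p_consequence, consider_difficulty, actions, taken):
    """Responsibility of an agent for a consequence c after taking `taken`.

    p_consequence[a]: assumed P(c | a), how likely action a was to produce c.
    consider_difficulty[a]: assumed "difficulty" for the agent to *consider*
        action a (not the probability of taking it); easy-to-consider
        alternatives weigh more heavily in the counterfactual baseline.
    """
    # Baseline: expected chance of c over the alternatives the agent could
    # plausibly have considered, weighted by 1/difficulty and normalized
    # (this normalization plays the role of the 1/c factor mentioned above).
    weights = {a: 1.0 / consider_difficulty[a] for a in actions}
    z = sum(weights.values())
    baseline = sum(weights[a] / z * p_consequence[a] for a in actions)
    # Responsibility: how much the chosen action raised the chance of c
    # over that baseline (clipped at 0: no blame for merely failing to
    # prevent c).
    return max(0.0, p_consequence[taken] - baseline)


# Toy version of the A1/A2 story: telling A2 to act makes c very likely,
# doing nothing makes it unlikely, and both options are equally easy
# for A1 to consider.
acts = ["tell_A2", "do_nothing"]
p_c = {"tell_A2": 0.9, "do_nothing": 0.1}
diff = {"tell_A2": 1.0, "do_nothing": 1.0}
r = responsibility(p_c, diff, acts, "tell_A2")  # 0.9 - 0.5 = 0.4
```

The "twistedness" penalty suggested above could be bolted on by multiplying the result by 1/P(a_t|A1), so that hard-to-find, heavily planned actions carry more blame.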


Laurent




Eray Ozkural

Dec 8, 2014, 7:37:10 PM
to magic...@googlegroups.com
Hello there Tom,

Why must the AI be an autonomous agent? Why would it be optimal for an AI worker to be a fully autonomous agent, and in what sense?

Regards,




--
Eray Ozkural, PhD. Computer Scientist
Founder, Gok Us Sibernetik Ar&Ge Ltd.
http://groups.yahoo.com/group/ai-philosophy

Peter Driscoll

Dec 8, 2014, 11:36:35 PM
to magic...@googlegroups.com
I picture this scenario as me doing exactly what I want while my servant (intelligent agent) does all the work.

I would hope that,
  • My servant's goal is to serve me and make me happy.
  • My servant has general goals for the good of all humanity.
  • I can issue commands to my servant.
  • My servant has restrictions, written as a moral code, on what they can do,
    • Isaac Asimov's Three Laws ;)
    • Written as a set of constraints on what a servant can do, and what it can allow to happen through inaction.
    • Can identify humans and ascribe them certain rights.
    • Can identify animals and ascribe them certain rights.
  • My servant has a fail safe that I can use to turn it off.

Even a brief amount of reflection leads me to think that creating a moral code might be full of problems. Defining goals that describe what is good is similarly problematic.

Does my servant have to obey the laws of the nation?
  • If I am dying and in urgent need of medical attention, can I instruct my servant to break the speed limit?
  • What if another Hitler comes to power, and imposes unjust laws?
It is a terrifying prospect that a seriously intelligent agent should get loose and choose its own goal, or that an evil or misguided person might give a goal to an intelligent agent that ends up being destructive.

For example: "Servant, I command you to ensure that the power to my massage chair is on." Servant sees a person disrupting the power. Servant kills person. Massage chair stays on.

What happens when my servant meets an extra-terrestrial intelligent being? What rights are ascribed to intelligent beings? My servant is an intelligent being.

What happens if human beings become a monoculture, stifling diversity, overpopulating the world, and destroying all other life? Is 1 tiger worth the life of 1000 humans?

I would like to see people smarter than myself start to address these questions. They seem to me to be profoundly difficult.

Kind regards

Peter

Vaibhav Gavane

Dec 9, 2014, 1:23:33 AM
to magic...@googlegroups.com
On 12/9/14, Tom Everitt <tom4e...@gmail.com> wrote:
> The Servant may, for instance, choose to gather some resources and invest
> in its own servant (a Sub-Servant).
>

When Skynet starts building its own army, you kill it before it is too late. :-p