
The Four Laws of Robotics


TAE S SON

Oct 13, 1993, 3:23:43 PM10/13/93
OK, since someone asked for them, here they are.

0. A robot must not harm humanity or through inaction allow humanity to
come to harm.
1. A robot must not harm a human or through inaction allow a human to come
to harm except in situations which conflict with the zeroeth law.
2. A robot must obey the commands given by a human unless it conflicts with
the zeroeth or first law.
3. A robot must protect its own existence unless it conflicts with the
zeroeth, first or second laws.
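Read as a strict priority ordering, the four laws amount to a veto chain: check each law from highest priority down, and the first one that objects wins. A minimal sketch (my own illustration; the predicate names are invented stand-ins, not anything from Asimov):

```python
# The laws as ordered veto predicates, highest priority first.
# Each lambda is a hypothetical stand-in for a real moral judgment.
LAWS = [
    ("Zeroth", lambda a: a.get("harms_humanity", False)),
    ("First",  lambda a: a.get("harms_human", False)),
    ("Second", lambda a: a.get("disobeys_order", False)),
    ("Third",  lambda a: a.get("endangers_self", False)),
]

def first_veto(action):
    """Return the name of the highest-priority law the action violates, or None."""
    for name, violates in LAWS:
        if violates(action):
            return name
    return None

print(first_veto({"harms_human": True}))    # First
print(first_veto({"endangers_self": True})) # Third
print(first_veto({}))                       # None
```

The point of the ordering is that a lower law never gets a say once a higher one has vetoed: an action that both harms humanity and endangers the robot is rejected under the Zeroth Law alone.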

Carl Seiler

Oct 13, 1993, 6:01:18 PM10/13/93
In article <son3776.19...@utdallas.edu>,

The original poster also asked when they first appeared. I was waiting for
someone else to answer before I stuck my head out. I'll go ahead and say
that Laws One through Three appeared first in 1941 in Astounding Science
Fiction. "Liar" and "Reason" were the first among the short robot stories
to use the Laws, and these appear later in _I, Robot_.

The Zeroeth Law appears in _Robots_of_Dawn_ in 1983.

Please correct me if I am wrong on these dates or titles. I don't doubt that
I am wrong on these.

Carl

--
Leslie Carl Seiler | Texas A&M Geography
lcs...@tamsun.tamu.edu | This sig for identification purposes only.
-------------------------------------------------------------------------

Daniel Quinlan

Oct 13, 1993, 9:09:54 PM10/13/93
>>>>> son...@utdallas.edu (TAE S SON) said:

Before everyone comments "But, there are only *3* laws!", I will add
some commentary. Minuscule spoilers?

> 0. A robot must not harm humanity or through inaction allow humanity to
> come to harm.

This law was added after thousands of years of robot thought on how to
better accomplish law #1 and was actually implemented BY robots
(Namely, Daneel R. Olivaw (sp?))

> 1. A robot must not harm a human or through inaction allow a human to come
> to harm except in situations which conflict with the zeroeth law.
> 2. A robot must obey the commands given by a human unless it conflicts with
> the zeroeth or first law.
> 3. A robot must protect its own existence unless it conflicts with the
> zeroeth, first or second laws.

These are the original 3 laws that were embedded into robots from the
very beginning. Of course, the original 3 did not mention the zeroth
(It is spelled "zeroth" and not "zeroeth") law.

Hopefully, that clarifies the situation a bit.

// Dan

--
Daniel Quinlan <qui...@spectrum.cs.bucknell.edu>

Carl Seiler

Oct 14, 1993, 1:26:30 AM10/14/93

As mentioned before, possible minor spoiler...

In article <QUINLAN.93...@scarlet.cs.bucknell.edu>,
Daniel Quinlan <qui...@spectrum.cs.bucknell.edu> wrote:
>> 0. A robot must not harm humanity or through inaction allow humanity to
>> come to harm.
>
>This law was added after thousands of years of robot thought on how to
>better accomplish law #1 and was actually implemented BY robots
>(Namely, Daneel R. Olivaw (sp?))

If I recall correctly, the zeroth law was devised by both R. Giskard Reventlov
and R. Daneel Olivaw, but I might be recalling incorrectly.

Christian Almgren

Oct 14, 1993, 6:00:17 AM10/14/93
In <29inu6$d...@tamsun.tamu.edu> lcs...@tamsun.tamu.edu (Carl Seiler) writes:

>As mentioned before, possible minor spoiler...

>In article <QUINLAN.93...@scarlet.cs.bucknell.edu>,
>Daniel Quinlan <qui...@spectrum.cs.bucknell.edu> wrote:
>>> 0. A robot must not harm humanity or through inaction allow humanity to
>>> come to harm.
>>
>>This law was added after thousands of years of robot thought on how to
>>better accomplish law #1 and was actually implemented BY robots
>>(Namely, Daneel R. Olivaw (sp?))

>If I recall correctly, the zeroth law was devised by both R. Giskard Reventlov
>and R. Daneel Olivaw, but I might be recalling incorrectly.

It was mainly Daneel who invented it, but Giskard was with him at that
time. Giskard, however, couldn't "live" with it.


--
-Christian Almgren Internet: d93...@nada.kth.se
Who am I? Why am I here? Forget the questions! Someone gimme another beer!
"Meatloaf"

Anand Rangarajan

Oct 14, 1993, 1:56:34 PM10/14/93

In article <29htre$k...@tamsun.tamu.edu>, lcs...@tamsun.tamu.edu (Carl Seiler) writes:
|> In article <son3776.19...@utdallas.edu>,
|> TAE S SON <son...@utdallas.edu> wrote:

|> The Zeroeth Law appears in _Robots_of_Dawn_ in 1983.
|>
|> Please correct me if I am wrong on these dates or titles. I don't doubt that
|> I am wrong on these.
|>
|> Carl
|>

I think the zeroth law appeared in _Robots and
Empire_. If I remember correctly, _The Robots of Dawn_
involved Baley and Giskard (with Daneel playing a
peripheral role) and set up Amadiro's hatred of Earth.
In _Robots and Empire_, Elijah Baley is long dead (and
appears in a flashback) and now Daneel and Giskard
hold long conversations on the three laws, psychohistory,
mob statistics, etc. Giskard is really the one who paves
the way for Seldon AND Gaia.

Anand Rangarajan
rangaraj...@cs.yale.edu


Duane Morin

Oct 14, 1993, 8:36:53 AM10/14/93
In article <29htre$k...@tamsun.tamu.edu> lcs...@tamsun.tamu.edu (Carl Seiler) writes:
>The original poster also asked when they first appeared. I was waiting for
>someone else to answer before I stuck my head out. I'll go ahead and say
>that the Laws One through Three appeared first in 1941 in Astounding Science
>Fiction. "Liar" and "Reason" were the first among the short robot stories
>to use the Laws, and these appear later in _I, Robot_.

Was it _Runaround_ in which two scientists stranded on Mercury(?) have a
malfunctioning robot, so they need to ride a couple of larger, older models
out onto the surface of the planet to see what the problem is? And they
find that the fumes from the material that the original robot was sent to
gather are screwing up its positronic brain? It ends up in an infinite
loop - it walks away from the fumes until it regains control of its brain,
and then its original order kicks back in, so it approaches again....

I think the introductions to my story collections have Asimov claiming that
this story was the first in which he used the laws.

>Carl

Duane

Carl Seiler

Oct 14, 1993, 4:46:49 PM10/14/93
In article <1993Oct14....@schunix.dmc.com>,
Duane Morin <so...@schunix.dmc.com> wrote:

>Was it _Runaround_ in which two scientists stranded on Mercury(?) have a
>[...]
>
>I think the introductions to my story collections have Asimov claiming that
>this story was the first in which he used the laws.

"Reason" and "Liar!" are copyright 1941, and "Runaround" is copyright 1942,
and that is why I said those two were the first stories. It is possible
that Asimov wrote "Runaround" first.

Carl Seiler

Oct 14, 1993, 4:51:23 PM10/14/93
In article <29k3si...@thor.systemsz.cs.yale.edu>,
Anand Rangarajan <rangaraj...@cs.yale.edu> wrote:
>I think the zeroth law appeared in _Robots and
>Empire_. If I remember correctly, _The Robots of Dawn_
>involved Baley and Giskard (with Daneel playing a
>peripheral role) and set up Amadiro's hatred of Earth.
>In _Robots and Empire_, Elijah Baley is long dead (and
>appears in a flashback) and now Daneel and Giskard
>hold long conversations on the three laws, psychohistory,
>mob statistics etc. Giskard is really the one who paves
>the way for Seldon AND Gaia.

That sounds much better than my suggestion of _Robots of Dawn_. I knew
Giskard and Daneel were involved together somehow. My memory isn't the
best.

Lance Shaw

Oct 15, 1993, 2:39:00 PM10/15/93
Have any of you read Caliban? It isn't by Asimov, but it is set in his
universe on one of the spacer worlds. It is at the point in time when the
settlers are way ahead, and the spacer worlds are struggling. The story takes
place on Hades, one of the spacer worlds.
Anyway, the main story is about a new kind of robot brain that is not based
on the positronic brain, so it doesn't have the three laws. It is a mystery
that reminded me of Caves of Steel, but there is also an interesting discussion
on the three laws as they are, and (now that they have the opportunity) how they
should be. It discusses why the spacers have become lazy and dependent on
their robots, and how some new laws could help solve that without getting rid
of robots.
You end up having three types of robots: 3 law robots, new law
robots, and no law robots.

I read this story over the summer, and can't remember the new laws well
enough to post them now; maybe later if there is an interest.
(Maybe I'll have to reread it, if I ever get caught up on my homework...)

Lance Shaw


Archie Medrano

Oct 16, 1993, 4:20:46 AM10/16/93
In article <15OCT199...@vx9000.weber.edu> ls...@vx9000.weber.edu (Lance Shaw) writes:
>
> Have any of you read Caliban? It isn't by Asimov, but it is set in his
>universe on one of the spacer worlds. It is at the point in time when the

I have read _Isaac Asimov's Caliban_ by Roger MacBride Allen [author of
_The Modular Man_].

> Anyway, the main story is about a new kind of robot brain that is based
>on the positronic brain so it doesn't have the three laws. It is a mystery

The positronic brain is replaced with the gravitonic brain. Here are
the New Laws of Robotics (pp. 214-215):

1) A robot may not injure a human being. [Human protection *from*
robots, not *by* robots.]
2) A robot must cooperate with human beings except where such
cooperation would conflict with the First Law. [Robot cooperation,
not obedience.]
3) A robot must protect its own existence, as long as such protection
does not conflict with the First Law. (Second Law not mentioned.)
[Robotic self-preservation.]
4) A robot may do anything it likes except where such action would
violate the First, Second, or Third Law. [Robotic freedom and
creativity.]

> Lance Shaw

The second (forthcoming) book in this series is entitled/subtitled _Inferno_.


Archie

--
Archie Medrano (amed...@euclid.ucsd.edu)
"The most exciting phrase in science, the one that heralds
new discoveries, is not 'Eureka' (I found it!) but
'That's funny...'" - Isaac Asimov

Duane Morin

Oct 16, 1993, 9:48:21 AM10/16/93
In article <15OCT199...@vx9000.weber.edu> ls...@vx9000.weber.edu (Lance Shaw) writes:
> I read this story over the summer, and can't remeber the new laws well
>enough to post them now, maybe later if there is an interest.
>(maybe I'll have to reread it. If I ever get caught up on my homework...)

I just happen to have my copy within arm's reach, so allow me:

New First Law: A robot may not injure a human being.

Intent: People won't have to worry about being injured BY robots, but should
not expect robots to protect them constantly.

New Second Law: A robot must cooperate with human beings except where such
cooperation conflicts with the First Law.

Intent: "Cooperate", not "obey".

New Third law: A robot must protect its own existence, as long as such
protection does not conflict with the first law.

Intent: Note lack of reference to the second law. A robot will not
cooperate with a human to destroy itself.

New Fourth Law: A robot may do anything it likes except where such action
would violate the First, Second or Third Laws.

Intent: Play time.
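The practical difference shows up in the New Third Law's missing Second-Law clause. Under the original ordering, obedience outranks self-preservation; under the New Laws it doesn't. A toy sketch of that one distinction (my own illustration, not anything from the book):

```python
# Does a robot comply with an order to destroy itself, assuming the
# order harms no human? Toy model of the two rule sets.

def original_laws_comply(order_harms_human):
    # Original Third Law yields to the Second (obedience), so the robot
    # obeys a self-destruct order unless a human would come to harm.
    return not order_harms_human

def new_laws_comply(order_harms_human):
    # New Third Law yields only to the New First Law; "cooperate" never
    # overrides self-preservation, so the robot declines either way.
    return False

print(original_laws_comply(False))  # True  -- obeys, destroying itself
print(new_laws_comply(False))       # False -- declines
```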

>
> Lance Shaw

I wasn't too crazy about this book. At first, it struck me as a typical
"Oh-good-asimov's-dead-now-I-can-use-all-of-his-ideas-about-robots" story
(what are some others...Robots in Time...) But it also just wasn't a good
story, IMHO.

Duane, who just realized he hasn't read Robots and Empire yet

David Alpert

Oct 16, 1993, 9:40:36 PM10/16/93

In article <son3776.19...@utdallas.edu>, son...@utdallas.edu (TAE S
SON) writes...

>0. A robot must not harm humanity or through inaction allow humanity to
> come to harm.
>1. A robot must not harm a human or through inaction allow a human to come
> to harm except in situations which conflict with the zeroeth law.
>2. A robot must obey the commands given by a human unless it conflicts with
> the zeroeth or first law.
>3. A robot must protect its own existence unless it conflicts with the
> zeroeth, first or second laws.

And one little-used and non-canon law:

4. A robot must reproduce, unless said reproduction conflicts with the
zeroeth, first, second, or third laws.

The fourth law is from a story in "Foundation's Friends" called "The Fourth
Law of Robotics." I don't think it really works with the zeroeth, because it
was devised by a smoking hippie robot. But it needed to be added to any
COMPLETE list of robot's laws.

--
Eat, drink, and have sex, for tomorrow we may all have hypothalamic lesions.

Andy Nicola

Oct 18, 1993, 1:10:53 AM10/18/93

In a previous article, lcs...@tamsun.tamu.edu (Carl Seiler) says:


>The Zeroeth Law appears in _Robots_of_Dawn_ in 1983.
>
>Please correct me if I am wrong on these dates or titles. I don't doubt that
>I am wrong on these.
>
>Carl
>

The Zeroth Law first appears in _Robots and Empire_, Part V, Chapter 18.

regards, Andy

--



Andy Nicola

Oct 18, 1993, 1:21:56 AM10/18/93

In a previous article, qui...@scarlet.cs.bucknell.edu (Daniel Quinlan) says:


>> 0. A robot must not harm humanity or through inaction allow humanity to
>> come to harm.
>
>This law was added after thousands of years of robot thought on how to
>better accomplish law #1 and was actually implemented BY robots
>(Namely, Daneel R. Olivaw (sp?))

The Zeroeth Law was actually the conception of R. Giskard Reventlov. It was
discussed with R. Daneel Olivaw and eventually passed to him near the time
of Giskard's (death?). It was the internal conflict with the implementation
of this Law which ultimately killed Giskard. You must remember that Giskard
was not designed to be as sophisticated as Daneel. Giskard was able to
comprehend this concept and decided that passing his abilities, programming,
and Law concept to Daneel would stand a better chance of success.

regards, Andy
--


