
The I,ROBOT Movie... a different opinion


Boojumhunter

Jul 25, 2004, 7:40:20 AM
When I first heard that an I,Robot movie was being made I definitely looked
forward to it.

Then I saw the previews and cringed. They gave the distinct impression that
they had totally looted the I,ROBOT concept and that the movie would bear NO
relationship to the books. I had decided that I'd have no part of it and
save my coin for something else. That perspective was supported by reading a
lot of Internet postings. I was certainly not the only person who dreaded
what Hollywood had done.

Then a strange thing happened: there were a few postings and reviews by
people lucky enough to see an advance viewing who weren't totally trashing
it. Basically they said that if you could ignore the fleeting references to
Asimov's work and the three laws, then there was a good movie
underneath.

Yesterday the wife was busy and my son wasn't - so we went off to see it for
a lark.

I was shocked and amazed to not ONLY find a good movie, but also to find a
movie that does NOT, in any significant way, violate the soul of the Asimov
I,Robot stories.

(one flaw??? laughable really. The director really doesn't know how many
robots make up 1,001... <grin>)

Yes, there is more action than an I,Robot story. Yes, the characters were
changed 'a bit'... but the three laws were intact AND it felt like an Asimov
Robot story.

There are scenes totally and specifically driven by the 3 laws... those
scenes are a pure delight. They could have been lifted intact from several
Robot stories.

Granted, the storyline/plot is not taken from any existing Robot story- it
is a NEW Robot story. I had no difficulty imagining that if the good Dr. had
been instructed to write a Robot script with lots of action suited to Will
Smith, then this is the type of thing he would have written. I think, YMMV,
that Asimov would be proud of this movie.

SPOILER space (well... sort of spoilers)

The biggest flaw in the Asimov Robot series is the notion that a robot
(Positronic brain) could NOT be built without the 3 laws integrated into the
system... Huh? What about a programming error??? Those 'laws', esp. the way
they interact, are complex. That's WHY the Robot stories were so enjoyable...
what were the 'loopholes' in the programming or the 'situation' posed by
Asimov? That created the mystery of the stories.

A programming "error" or aberration would really mess things up. The
programming for the Positronic brain would be far, far more complex than
Windows XP... I rest my case.

BUT when reading the Robot stories, that's the one flaw we willingly
accepted so that we could enjoy the stories.


Final spoiler.... BIG one.... don't read this unless you want a huge clue to
the end of the movie


The movie doesn't accept that 'flaw'...
think...
The Forbin Project


D...@southbeach.net

Jul 25, 2004, 11:04:17 AM
I agree, I enjoyed the movie as well. Could one think of it as a precursor
to Nemesis...?

colone...@yahoo.com

Jul 25, 2004, 5:40:32 PM
On Sun, 25 Jul 2004, Boojumhunter wrote:
> (one flaw??? laughable really. The director really doesn't know how many
> robots make up 1,001... <grin>)

> SPOILER space (well... sort of spoilers)

>
>
>
>
>
>
>
>
>
> The biggest flaw in the Asimov Robot series is the notion that a robot
> (Positronic brain) could NOT be built without the 3 laws integrated into the
> system... Huh? What about a programming error???

To program a positronic brain you don't write something like

#include <stdio.h>
#include <threelaws.h>   /* hypothetical header, of course */
int main(void) {
    /* ...obey the Three Laws, etc. */
    return 0;
}

You write a large & nasty equation and solve it for your program. You
base your solution on previous solutions to speed calculation; otherwise
you have to start completely over and do a complete solution, which takes
many years.

I kind of envision it like doing a perturbation calculation off the basic
three laws solution.
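For what it's worth, that "perturb off a known solution" idea can be sketched with a toy numerical example (entirely my own invention, nothing canonical): Newton's method re-solving a slightly changed equation takes fewer steps when warm-started from the previous answer than when solved from scratch.

```python
# Toy sketch: re-solving a slightly perturbed equation is cheap
# if you start from the old solution instead of from scratch.

def newton_sqrt(c, x, tol=1e-12):
    """Solve x*x - c = 0 by Newton's method; return (root, steps)."""
    steps = 0
    while abs(x * x - c) > tol:
        x = 0.5 * (x + c / x)  # Newton update for f(x) = x*x - c
        steps += 1
    return x, steps

# Full solution from scratch for the base "design"...
base_root, cold_steps = newton_sqrt(2.0, 1.0)

# ...then a slightly perturbed design, warm-started from the old answer.
_, warm_steps = newton_sqrt(2.01, base_root)

print(cold_steps, warm_steps)  # the warm start needs fewer iterations
```

(Nothing deep here: for well-behaved equations Newton's error roughly squares each step, so starting close to the answer saves most of the work.)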

Remember: Dr. Asimov was writing some of these while programming was still
being invented! A quick google shows the stories in I, Robot were written
from ~1940-~1950. The first UNIVAC was delivered in 1951. Machines of that
era were programmed with plugboards and counted pulses to do arithmetic.
Relay computers, analog computers, and mechanical computers were still
common. And "Computers" were often still ladies...

> A programming "error" or aberration, would really mess things up. The
> programming for the Positronic brain would be far far more complex than
> Window's XP... I rest my case.

-But- if you don't have a valid solution to the "positronic equation" it
probably doesn't function at all. All this may be considered a flaw in
predictions of how computers work, but not an internal flaw in the
stories.

3ch

Brian Tung

Jul 26, 2004, 4:49:44 PM
3ch wrote:
> Remember: Dr. Asimov was writing some of these while programming was still
> being invented!

Besides, by his own admission, Asimov was utterly ignorant of the inner
workings of computers. By that I mean that although he was aware of the
general way that computers operated, he knew nothing of the specifics.
He couldn't program his own TRS-80.

He wasn't interested in the way the Three Laws were encoded. He was
interested in their consequences under unforeseen circumstances. He
mentions in one of his F&SF essays (perhaps "Future? Tense!") that the
strength of a story like "Solution Unsatisfactory" was not that it
predicted nuclear warfare--that was self-evident, in Asimov's opinion--
but that it predicted the stalemate and "MAD" quasi-solution that
resulted.

I suspect (but have no way of knowing for sure) that Asimov envisioned
that robots would be programmed with the Three Laws in much the same
way that humans are "programmed" to love their family members, through
reward and association, except that since they were mechanical and we
knew the principles of their creation (if not their exact wiring at
any moment), we could prevent those programming instructions from being
overridden by glitches, boundary cases, etc. Especially as the Three
Laws (plus the Zeroth Law) are *apparently* written in such a way as to
admit of no conflict.

Then he would write stories in which two or more of the Laws would
conflict, and it would be the task of the human protagonists to figure
out the conflict and save the day. The flaw would not be in the way
the Laws were encoded--no mere programming error for Asimov, that would
be pedestrian--but in the semantics of the terms "human being," "harm,"
"action," "inaction," and "cause." This also permitted Asimov to write
stories about robots, and engaging stories at that, while being utterly
ignorant of their inner workings.

Brian Tung <br...@isi.edu>
The Astronomy Corner at http://astro.isi.edu/
Unofficial C5+ Home Page at http://astro.isi.edu/c5plus/
The PleiadAtlas Home Page at http://astro.isi.edu/pleiadatlas/
My Own Personal FAQ (SAA) at http://astro.isi.edu/reference/faq.txt

Brian Tung

Jul 26, 2004, 4:51:21 PM
I (Brian Tung) wrote:
> He wasn't interested in the way the Three Laws were encoded. He was
> interested in their consequences under unforeseen circumstances. He
> mentions in one of his F&SF essays (perhaps "Future? Tense!") that the
> strength of a story like "Solution Unsatisfactory" was not that it
> predicted nuclear warfare--that was self-evident, in Asimov's opinion--
> but that it predicted the stalemate and "MAD" quasi-solution that
> resulted.

I fear I may have given the impression that "Solution Unsatisfactory"
was written by Asimov. I think it was actually written by Anson
MacDonald.

:)

spam...@yahoo.com

Jul 26, 2004, 5:04:51 PM
On Mon, 26 Jul 2004 20:51:21 +0000 (UTC), in alt.books.isaac-asimov,
br...@isi.edu (Brian Tung) wrote:


>I fear I may have given the impression that "Solution Unsatisfactory"
>was written by Asimov. I think it was actually written by Anson
>MacDonald.


A pseudonym for Robert (Anson) Heinlein. :)


Brian Tung

Jul 26, 2004, 6:06:53 PM
spamdoggy wrote (of Anson MacDonald):

> A pseudonym for Robert (Anson) Heinlein. :)

Thanks for spoiling that one, cuz. :-P

colone...@yahoo.com

Jul 27, 2004, 10:31:47 AM
On Mon, 26 Jul 2004, Brian Tung wrote:

> 3ch wrote:
> > Remember: Dr. Asimov was writing some of these while programming was still
> > being invented!
>
> Besides, by his own admission, Asimov was utterly ignorant of the inner
> workings of computers. By that I mean that although he was aware of the
> general way that computers operated, he knew nothing of the specifics.
> He couldn't program his own TRS-80.

"Utterly ignorant" by his standards may not be all that ignorant by
other people's... And I believe later he did confess to learning a little.

> I suspect (but have no way of knowing for sure) that Asimov envisioned
> that robots would be programmed with the Three Laws in much the same
> way that humans are "programmed" to love their family members, through
> reward and association,

Except he referred to solving an equation to program the robots.
Therefore I suspect that is how he envisioned it being done. You are
correct that that is very far from the point of the stories, and he didn't
worry much about it.

> except that since they were mechanical and we
> knew the principles of their creation (if not their exact wiring at
> any moment), we could prevent those programming instructions from being
> overridden by glitches, boundary cases, etc.

This turns out not to be that easy.

One of the standard counterexamples to this is the story (which is very
likely untrue, but makes the (counter)point anyway) of a neural net
programmed to tell if there were tanks in a picture. An armoured unit
parked tanks in various states of partial concealment & pictures were
taken of them. The next day pictures were taken of the same area w/o
tanks. The net was trained and could score very well on a subset of the
pictures not used to train it. It was presented to the military and was
discovered to score random chance on recognizing tanks. Seems one of the
days the pictures were taken it was cloudy, and that is what the net
learned to recognize.
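The trap in that (probably apocryphal) story is easy to reproduce with a toy model; everything below is invented for illustration. The "classifier" can only see overall brightness, and since every tank photo in the training set was shot on a cloudy day, it aces held-out confounded photos and then drops to a coin flip on fresh ones:

```python
# Toy reconstruction of the "tank detector" legend: the model latches
# onto a spurious feature (brightness = cloudy vs. sunny), not tanks.
import random

random.seed(42)

def make_dataset(n, confounded):
    """Each 'photo' is (brightness, tank_present). In the confounded
    set, every tank photo was taken on a cloudy (dark) day."""
    data = []
    for _ in range(n):
        tank = random.random() < 0.5
        if confounded:
            brightness = random.uniform(0.0, 0.4) if tank else random.uniform(0.6, 1.0)
        else:
            brightness = random.uniform(0.0, 1.0)  # weather now unrelated to tanks
        data.append((brightness, tank))
    return data

def accuracy(data, t):
    """Fraction of photos where 'dark means tank' gives the right label."""
    return sum((b < t) == tank for b, tank in data) / len(data)

def train_threshold(data):
    """'Training': pick the brightness cutoff that best fits the labels --
    brightness is the only feature this model can ever learn."""
    return max((i / 100 for i in range(101)), key=lambda t: accuracy(data, t))

t = train_threshold(make_dataset(1000, confounded=True))

print(accuracy(make_dataset(1000, True), t))   # near-perfect on confounded photos
print(accuracy(make_dataset(1000, False), t))  # roughly a coin flip on fresh ones
```

Nothing here resembles a real neural net, of course; the point is just that a learner graded only on the confounded photos has no reason to prefer "tank" over "cloudy."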

3ch

Brian Tung

Jul 27, 2004, 6:10:37 PM
captain hook wrote:
> "Utterly ingorant" by his standards may not be all that ignorant by
> other peoples... And I believe later he did confess to learning a little.

I'm sure "utterly ignorant" is a self-deprecating term. But I'm equally
sure it has a kernel of truth.

From what I've read of him, I doubt he ever learned how to program a
computer. He could operate it, to be sure--that's more than many people
at the time. But I don't think he understood in depth how they worked.

> > I suspect (but have no way of knowing for sure) that Asimov envisioned
> > that robots would be programmed with the Three Laws in much the same
> > way that humans are "programmed" to love their family members, through
> > reward and association,
>
> Except he refered to the solving an equation to program the robots.
> Therefore I suspect that is how he envisoned it being done. You are
> correct that that is very far from the point of the stories and he didn't
> worry much about it.

I don't see why solving an equation obviates the possibility of reward
and association. The terms of the equation would change as a result,
over time, as more experience is gained. It sounds to me that our
views on the process are compatible, with the equation a lower-level
mechanism for accomplishing the principle of reward and association (as
opposed to, say, explicit declarations).

> > except that since they were mechanical and we
> > knew the principles of their creation (if not their exact wiring at
> > any moment), we could prevent those programming instructions from being
> > overridden by glitches, boundary cases, etc.
>
> This turns out not to be that easy.

Of course not. Asimov didn't need to do it--he merely needed to posit
that it could be done. Most readers are willing to accept that
provisionally.

> One of the standard counter examples to this is the story (which is very
> likely untrue, but make the (counter)point anyway) of a neural net
> programmed to tell if there were tanks in a picture. An armoured unit
> parked tanks in various states of partial concealment & pictures were
> taken of them. The next day pictures were taken of the same area w/o
> tanks. The net was trained and could score very well on a subset of the
> pictures not used to train it. It was presented to the military and was
> discovered to score random chance on recognizing tanks. Seems one of the
> days the pictures were taken it was cloudy and that is what the net
> learned to recognize.

I've done a bit of work in intrusion detection. You don't need to tell
me stories about false positives. :)

Edward Seiler

Jul 29, 2004, 8:50:43 AM
In article <E_MMc.188$4BU...@news04.bloor.is.net.cable.rogers.com>,
"Boojumhunter" <nos...@nospam.never> wrote:

First, let me start with the easiest point to refute. The good doctor HAD
been asked to write a script for I, Robot, back in the 1970s. He
refused. He knew his limitations, one of them being that he didn't have
the ability to write for visual media. So we can be sure that he
would not have written a script if asked.

When he was asked, instead of writing a script, he passed the baton to
Harlan Ellison. Ellison reworked a few of the stories, and wove them
together around a story about Susan Calvin that could work on screen
(given enough money to produce it). Asimov loved the result, which shows
that he didn't necessarily have a problem if some details were changed
and others were invented, as long as the core of his creation remained
intact.

So, there is a fundamental problem when conjecturing WWID (What Would
Isaac Do) about the movie if he were still alive. How could he resolve
the issue of his loyalty to Ellison and Ellison's screenplay that he was
so fond of? After the screenplay had aged on the shelf for so long,
would he go along with another movie being made? If so, it seems that
there would be two aspects that he would not allow to be changed. One is
that the Three Laws must remain intact, and be applied in a way that is
true to their intent. The other is that the character of Susan Calvin
must bear a decent resemblance to the character he created and was so
fond of.

In my opinion, the movie failed both of these tests, and so would not
have met with his approval. The argument for this is quite subjective,
but is based in part on Asimov's opinions expressed in his writings.

We know that Asimov did not like 2001: A Space Odyssey, because he
thought that HAL should not have been able to break the Three Laws. In
his own stories, whenever a robot violated the laws, it suffered serious
consequences. Even the application of the Zeroth Law had consequences,
and had to be applied very carefully. It was not the kind of thing that
could be handled by an ordinary robot.

Susan Calvin was a major character in his stories, and was one of the
primary forces in making the science of robotics a practical reality.
Robotics was not just a hobby or a profession for her, it was her life.
Would Asimov have allowed the character to be portrayed by an attractive
actress? Perhaps. I think he could live with some tweakings to appeal to
the ticket-buying demographic. But would he want Susan Calvin to be used
as a foil for the main character, with only a superficial resemblance to
his Susan Calvin? I doubt it.

Some spoilers of my own follow...

The movie began promisingly enough. The Three Laws were clearly stated,
it was made clear that the robots were built around them (hardwired is
the term I believe that was used), and everyone in the movie except the
Will Smith character seemed to have faith in them. One robot seemed like
it may not be following the Three Laws, but an explanation was given for
why it might not be following them. But then many robots were able to
break the laws, and not only did they break the laws, but it was said
that doing so was an inevitable consequence of the Three Laws. This can
hardly be considered leaving the Three Laws intact, and the explanation
given was not thought out well enough to feel like an Asimov story.

>
> SPOILER space (well... sort of spoilers)
>
>
>
>
>
>
>
>
>
> The biggest flaw in the Asimov Robot series is the notion that a robot
> (Positronic brain) could NOT be built without the 3 laws integrated into the
> system... Huh? What about a programming error??? Those 'laws', esp. the way
> they interact are complex. That's WHY the Robot stories were so enjoyable...
> what were the 'loop holes' in the programming or the 'situation' posed by
> Asimov? That created the mystery of the stories.
>
> A programming "error" or aberration, would really mess things up. The
> programming for the Positronic brain would be far far more complex than
> Window's XP... I rest my case.
>
> BUT when reading the Robot stories, that's the one flaw we willingly
> accepted so that we could enjoy the stories.


If a robot does not obey the Three Laws, and it is discovered that the
robot was not designed with the Three Laws built in, that at least would
provide an explanation, even if it remains questionable about why such a
robot would be built. But unless the First Law is reliable by design,
the whole premise of being able to trust robots and not worry about them
being dangerous is destroyed. Asimov wrote his stories because he wanted
to avoid the syndrome of the Frankenstein Complex, so he needed to
eliminate any doubt about dangerous flaws. Using the excuse that there
are programming errors, or "ghosts in the machine", might fly in the
real world, but it's not consistent with the reason the Three Laws
exist. They are not the Three Pretty Good Ideas That Are Hoped To Work
Most Of The Time.

>
> Final spoiler.... BIG one.... don't read this unless you want a huge clue to
> the end of the movie
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> The movie doesn't accept that 'flaw'...
> think...
> The Forbin Project

Asimov explored the idea that the Three Laws might be insufficient when
he developed the Zeroth Law. But he did so very carefully, laying out
the case for and against superseding the Three Laws. In the movie, there
is no time for subtlety, reasoned thinking, or philosophizing; there is
just conflict, battle, and resolution.

--
Ed Seiler

Yeechang Lee

Aug 14, 2004, 4:25:18 AM
Boojumhunter wrote:
> I was shocked and amazed to not ONLY find a good movie, but also to
> find a movie that does NOT, in any significant way, violate the soul
> of the Asimov I,Robot stories.

[...]

> There are scenes totally and specifically driven by the 3
> laws... those scenes are a pure delight. They could have been lifted
> intect from several Robot stories.

Agreed. Given that the film is a Will Smith Summer Blockbuster[TM], I
too was impressed and touched by how well it evoked the themes of the
Robot stories. I know the script was originally based on non-Asimovian
robots, but the writers clearly went to a *lot* of trouble to fill in
the gaps once they gained the rights to Asimov's name and concepts.

[Mild, then deeper, spoilers follow; spoiler symbol below]

> The biggest flaw in the Asimov Robot series is the notion that a
> robot (Positronic brain) could NOT be built without the 3 laws
> integrated into the system... Huh? What about a programming error???
> Those 'laws', esp. the way they interact are complex. That's WHY the
> Robot stories were so enjoyable... what were the 'loop holes' in
> the programming or the 'situation' posed by Asimov? That created the
> mystery of the stories.

That's right. The plot of the movie can very well be seen as the sort
of dilemma faced by our heroes in several of the Robot stories, simply
writ (very) large. There are allusions to, among others, "Little Lost
Robot," "Catch That Rabbit," "The Evitable Conflict," "Segregationist,"
"The Bicentennial Man," and "-That Thou Art Mindful of Him."

I only wish that a) Powell and Donovan could've been mentioned and
that b) the villain of the movie could've actually called her
extrapolation of the Three Laws the "Zeroth Law."

On the other hand, with a bit of stretching it's *quite* possible to
fit the movie in between the robot stories! Doing so would, among
other things, help explain why Scott Robertson becomes US Robots'
largest shareholder and why robots eventually become banned from
Earth.

--
Read my Deep Thoughts @ <URL:http://www.ylee.org/blog/> PERTH ----> *
Cpu(s): 3.8% us, 2.1% sy, 93.4% ni, 0.4% id, 0.1% wa, 0.3% hi, 0.0% si
Mem: 516960k total, 512928k used, 4032k free, 35176k buffers
Swap: 2101032k total, 3576k used, 2097456k free, 128628k cached

David Pinkston

Aug 18, 2004, 5:41:54 PM
> I only wish that a) Powell and Donovan could've been mentioned and
> that b) the the villain of the movie could've actually called her
> extrapolation of the Three Laws "Zeroeth Law."
>
> On the other hand, with a bit of stretching it's *quite* possible to
> fit the movie in between the robot stories! Doing so would, among
> other things, help explain why Scott Robertson becomes US Robots'
> largest shareholder and why robots eventually become banned from
> Earth.

It would certainly take some stretching... especially with the deaths.
I remember Lanning mentioning a successor, Bogert, before retiring. So
he wasn't an employee anymore when he died (probably of natural
causes, though I don't think the book says). Also, Calvin was pretty
old at this point.

As for Robertson's death, there could be any number of Lawrence
Robertsons running the company through heredity...

/gets out his shoehorn


Johnny Pez

Aug 19, 2004, 9:09:33 AM
David Pinkston writes:

>Also, Calvin was pretty
>old at this point.

In 2035, Calvin would have been 53.
--
Johnny Pez
Newport, Rhode Island
August 2004
