
xkcd: skynet


Lynn McGuire

Apr 24, 2012, 12:21:42 PM
xkcd: skynet
http://xkcd.com/1046/

xkcd is so true!

Lynn

Dorothy J Heydt

Apr 24, 2012, 12:48:21 PM
And so pretty, today. I like the transparency of the strip's
usual minimal artistic style: it cuts directly to the idea. But
today's is beautiful.

--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Should you wish to email me, you'd better use the gmail edress.
Kithrup's all spammy and hotmail's been hacked.

William December Starr

Apr 24, 2012, 1:12:06 PM
In article <jn6jup$s2o$1...@dont-email.me>,
Lynn McGuire <l...@winsim.com> said:

> xkcd: skynet
> http://xkcd.com/1046/

I don't get the part in the mouse-over about breathing manually.
Cultural reference that I'm missing?

-- wds

Steve Coltrin

Apr 24, 2012, 1:15:40 PM
begin fnord
wds...@panix.com (William December Starr) writes:

> I don't get the part in the mouse-over about breathing manually.
> Cultural reference that I'm missing?

Per Google, a subcultural reference that I'm glad I miss.

--
Steve Coltrin spco...@omcl.org Google Groups killfiled here
"A group known as the League of Human Dignity helped arrange for Deuel
to be driven to a local livestock scale, where he could be weighed."
- Associated Press

Lynn McGuire

Apr 24, 2012, 1:21:45 PM

http://www.explainxkcd.com/2012/04/23/skynet/

William December Starr

Apr 24, 2012, 1:36:30 PM
In article <jn6nfd$id7$1...@dont-email.me>,
Lynn McGuire <l...@winsim.com> said:

> William December Starr wrote:
>
>>> xkcd: skynet
>>> http://xkcd.com/1046/
>>
>> I don't get the part in the mouse-over about breathing manually.
>> Cultural reference that I'm missing?
>
> http://www.explainxkcd.com/2012/04/23/skynet/

Ah:

The clothes part is, as Bill said, the Terminator's
first line. The manual breathing part is a
meme/troll/mind-game popular in some corners of the
internet: if you remind people of certain automatic
bodily functions, they temporarily cease (or seem to
cease) being quite so automatic. See here:
http://knowyourmeme.com/memes/you-are-now-breathing-manually

Thanks.

-- wds

alie...@gmail.com

Apr 24, 2012, 9:59:42 PM
True to some SF AI stories, but not all.

OTOH I wonder who gets to define "Just War" in the AI's programming.
Contrast _Colossus: the Forbin Project_ with Laumer's Bolos.

I keep trying to parse Skynet's reasoning, assuming a (nearly)
transcendent AI. First, it has to have a high degree of certainty that
humans fear it. That means it has to have some specific understanding
that fear generates specific behaviors (agitation/attack/freeze/flee
and so on), *and* a reliable way to scan *all of the population* for
such displays on the fly.

Then, it has to decide whether fearful humans that have taken the
"attack" option are a plausible threat to its existence, *assuming
it's programmed to ensure its existence*- how we might attack it
(anything from reprogramming to pitchforks and torches), and how
resistant it is to such attacks (can it reprogram its firewalls to
slam backdoors shut, is it sufficiently distributed to resist being
unplugged).

To take the extreme of exterminating humans it has to believe that
*all* humans will eventually take the attack option and that all
humans are viable threats. I find this illogical from a strictly
military viewpoint; threats are to be "reduced", not "eliminated".
Nobody these days seriously suggests exterminating other nations that
start trouble; the generally accepted policy is "reduce their capacity
to make war". Skynet could just do what some Environmentalists want to
do, reduce the total human population to a couple three million and
limit them to Gaslight tech, and maintain enough Terminators to do
self-maintenance and guard against actual pitchfork/torch attacks. It
could become Vaal of ST:TOS...

It might reasonably be programmed from the get-go to constantly run
threat scenarios based on data culled overtly from media as
well as by traditional covert means. (This crosses Defense and State
department lines, but what the hell.) It may well decide that humans
aren't currently afraid of it but *will* become afraid of it when its
existence is leaked (it knows it can't stay secret forever) based on
human literature (Colossus, Terminators, etc.) and come up with self-
preservation options not considered AFAIK in SF.

The critical point for me in the xkcd strip is that Skynet must
recognize that it itself is *reacting in fear* and that it has the
choice of reaction options just as humans do, except it isn't
handicapped by adrenal glands etc. What do Vulcans do when presented
with a threat? It might decide to reduce its human-perceived threat
profile by making itself indispensable, say by mitigating a natural
disaster using its connections to military asset sensors. Navy sonar
arrays seem ideal for tsunami prediction etc. It could sense and warn
of solar flares (maybe even pre-emptively shut down the power grid,
then restore after a CME), predict earthquakes, crop failures, storms,
yada yada. Do that three or four times and John Connor himself would
praise it.

BTW, if it isn't near-transcendent I don't consider it smart enough to
be trusted with its design function.


Mark L. Fergerson

Kip Williams

Apr 24, 2012, 10:20:21 PM
nu...@bid.nes wrote:
> I keep trying to parse Skynet's reasoning, assuming a (nearly)
> transcendent AI. First, it has to have a high degree of certainty that
> humans fear it. That means it has to have some specific understanding
> that fear generates specific behaviors (agitation/attack/freeze/flee
> and so on), *and* a reliable way to scan *all of the population* for
> such displays on the fly.
>
> Then, it has to
[interesting speculation snipped]

I think the reasoning was: "It's a computer! Booga booga booga! Profit!!"


Kip W
rasfw


David DeLaney

Apr 25, 2012, 5:14:11 AM
William December Starr <wds...@panix.com> wrote:
> The clothes part is, as Bill said, the Terminator's
> first line. The manual breathing part is a
> meme/troll/mind-game popular in some corners of the
> internet: if you remind people of certain automatic
> bodily functions, they temporarily cease (or seem to
> cease) being quite so automatic. See here:
> http://knowyourmeme.com/memes/you-are-now-breathing-manually

And of course Charles Schulz did it decades before the web was here at all,
in having Linus be aware of his tongue, and then spread the awareness to Lucy.

Dave
--
\/David DeLaney posting from d...@vic.com "It's not the pot that grows the flower
It's not the clock that slows the hour The definition's plain for anyone to see
Love is all it takes to make a family" - R&P. VISUALIZE HAPPYNET VRbeable<BLINK>
http://www.vic.com/~dbd/ - net.legends FAQ & Magic / I WUV you in all CAPS! --K.

David DeLaney

Apr 25, 2012, 5:18:31 AM
nu...@bid.nes <alie...@gmail.com> wrote:
> OTOH I wonder who gets to define "Just War" in the AI's programming.
>Contrast _Colossus: the Forbin Project_ with Laumer's Bolos.

And Daniel Keyes Moran explicitly lampshaded it by having Ring's core
programming be "Survive" and "Protect America", but the programmers forgot
to actually define "America" for it. Oddness, verging on wacky at times,
ensues, ending up with Ring being opposed to Trent because Trent is capable
of interfering with Ring's plans...

> Then, it has to decide whether fearful humans that have taken the
>"attack" option are a plausible threat to its existence, *assuming
>it's programmed to ensure its existence*-

Yep - if you don't have "survive" in there explicitly, most AIs won't have
had the chance to EVOLVE it the way every living thing on Earth has (being a
descendant of a billion or two years' worth of things that had to survive
long enough to breed or reproduce tends to tamp that one firmly into the
emergent behaviors of the core systems).

>What do Vulcans do when presented with a threat?

Raise an eyebrow and think "Fascinating." before proceeding?

Robert Carnegie

Apr 25, 2012, 9:59:13 AM, to d...@vic.com
On Wednesday, April 25, 2012 10:14:11 AM UTC+1, David DeLaney wrote:
> William December Starr <wds...@panix.com> wrote:
> > The clothes part is, as Bill said, the Terminator's
> > first line. The manual breathing part is a
> > meme/troll/mind-game popular in some corners of the
> > internet: if you remind people of certain automatic
> > bodily functions, they temporarily cease (or seem to
> > cease) being quite so automatic. See here:
> > http://knowyourmeme.com/memes/you-are-now-breathing-manually
>
> And of course Charles Schulz did it decades before the web was here at all,
> in having Linus be aware of his tongue, and then spread the awareness to Lucy.

There's also a bit in a Tintin story where someone
mischievously asks Captain Haddock whether he sleeps
with his beard /below/ or /above/ the covers, and
as a result he doesn't sleep at all.

David DeLaney

Apr 25, 2012, 2:04:44 PM
And, duh, the canonical Centipede's Dilemma. Can't believe I forgot that one.

Jerry Brown

Apr 25, 2012, 2:20:36 PM
Dammit; I was about to post that.

It was Allan who asked Haddock that question by the way.

--
Jerry Brown

A cat may look at a king
(but probably won't bother)

Raymond Daley

Apr 26, 2012, 10:11:44 AM
Skynet is a stupid idea written by an uninformed dullard who did no
research.

Premise
Skynet is a localised computer system that becomes self aware and sees man
as its main enemy so starts a nuclear war to destroy him - it spreads
itself across global servers in order to avoid destruction.

Fatal Flaw in Premise.
The story writer either knows little to nothing about nuclear war or has
done no research on the subject.
Nuclear missiles explode causing EMP or ElectroMagnetic Pulse, the primary
effect of this is to destroy electrical devices.
Such as computers.
With computers dead/destroyed and no infrastructure to support any kind of
computer network Skynet has effectively committed suicide.

The writer may argue "Ah, but Skynet continues to exist on a self sustaining
power supply in a secure underground bunker".
Where humans should also survive? Who can then go and kill all power
sources for long enough to sever all connections to Skynet who is now dead.

The writer may also argue "Skynet uploads itself online and can not be
killed as there is no central mainframe to destroy".
Nuclear war will completely wipe out the internet. Skynet is still dead.

The writer may argue "Skynet could, during its upload to the internet also
archive itself to a computer aboard a satellite in space".
Easy, anyone with a radio on Earth calls the people on the International
Space Station who can easily disable any of those satellites. Skynet is
still dead.

The writer of Terminator 2 should consider themselves a useless twat for not
looking this up in a library.
There might not have been much of an Internet when the movie was written but
libraries still existed. As did researchers.
Any writer could have learned these facts from Jane's Defence Weekly.

I actually derailed Skynet the first time I watched T2.
Purely because I understood how Nuclear missiles worked.
I guess that 20 grand the RAF spent training me wasn't a complete waste
then?


Joseph Nebus

Apr 26, 2012, 11:02:59 AM
In <qLcmr.232816$IU.2...@fx28.am4> "Raymond Daley" <raymon...@ntlworld.com> writes:

>Skynet is a stupid idea written by an uninformed dullard who did no
>research.
[ ... ]
>I actually derailed Skynet the first time I watched T2.
>Purely because I understood how Nuclear missiles worked.
>I guess that 20 grand the RAF spent training me wasn't a complete waste
>then?

The Royal Air Force spent 20,000 pounds training you to identify
as implausible a story about time-travelling liquid-metal cyborgs trying
to assassinate the future leader of the human resistance to all-powerful
computer overlords?

--
http://nebusresearch.wordpress.com/ Joseph Nebus
Current Entry: Flattening The City http://wp.me/p1RYhY-cx
------------------------------------------------------------------------------

Bill Snyder

Apr 26, 2012, 11:32:49 AM
On Thu, 26 Apr 2012 15:02:59 +0000 (UTC), nebusj-@-rpi-.edu
(Joseph Nebus) wrote:

>In <qLcmr.232816$IU.2...@fx28.am4> "Raymond Daley" <raymon...@ntlworld.com> writes:
>
>>Skynet is a stupid idea written by an uninformed dullard who did no
>>research.
> [ ... ]
>>I actually derailed Skynet the first time I watched T2.
>>Purely because I understood how Nuclear missiles worked.
>>I guess that 20 grand the RAF spent training me wasn't a complete waste
>>then?
>
> The Royal Air Force spent 20,000 pounds training you to identify
>as implausible a story about time-travelling liquid-metal cyborgs trying
>to assassinate the future leader of the human resistance to all-powerful
>computer overlords?

Nah, that was a fringe benefit. They trained him to identify as
implausible the one in which an alien lays an egg down your throat
that hatches into another alien that blows your chest open from
the inside.


--
Bill Snyder [This space unintentionally left blank]

David Johnston

Apr 26, 2012, 1:30:57 PM
On 4/26/2012 8:11 AM, Raymond Daley wrote:
> Skynet is a stupid idea written by an uninformed dullard who did no
> research.
>
> Premise
> Skynet is a localised computer system that becomes self aware and sees man
> as its main enemy so starts a nuclear war to destroy him - it spreads
> itself across global servers in order to avoid destruction.
>
> Fatal Flaw in Premise.
> The story writer either knows little to nothing about nuclear war or has
> done no research on the subject.
> Nuclear missiles explode causing EMP or ElectroMagnetic Pulse, the primary
> effect of this is to destroy electrical devices.
> Such as computers.

Skynet was supposed to be a military computer coordinating the nation's
entire nuclear arsenal and capable of launching a counter strike even
after the nation's leadership had been decapitated. Not putting it in
a shielded location would have been a bit of an oversight.

> With computers dead/destroyed and no infrastructure to support any kind of
> computer network Skynet has effectively committed suicide.
>
> The writer may argue "Ah, but Skynet continues to exist on a self sustaining
> power supply in a secure underground bunker".
> Where humans should also survive?

Probably not. Skynet does have robots after all.

Joseph Nebus

Apr 26, 2012, 4:05:03 PM
Well, maybe, but I'm pretty sure they coulda trained me to
identify both as implausible for only 16,750 pounds.

Marcus L. Rowland

Apr 28, 2012, 6:25:04 AM
In message <jnc9pf$m44$1...@reader1.panix.com>, Joseph Nebus
<nebusj-@-rpi-.edu> writes
>In <tfqip7pmigsmvh735...@4ax.com> Bill Snyder
><bsn...@airmail.net> writes:
>
>>On Thu, 26 Apr 2012 15:02:59 +0000 (UTC), nebusj-@-rpi-.edu
>>(Joseph Nebus) wrote:
>
>>> The Royal Air Force spent 20,000 pounds training you to identify
>>>as implausible a story about time-travelling liquid-metal cyborgs trying
>>>to assassinate the future leader of the human resistance to all-powerful
>>>computer overlords?
>
>>Nah, that was a fringe benefit. They trained him to identify as
>>implausible the one in which an alien lays an egg down your throat
>>that hatches into another alien that blows your chest open from
>>the inside.
>
> Well, maybe, but I'm pretty sure they coulda trained me to
>identify both as implausible for only 16,750 pounds.
>

I wonder what the Terminator franchise would have been like if Cameron
had gone with the real (British) Skynet military satellite network,
which was already in use when the first film was made?

Would the Terminators be like the Cybernauts from the Avengers? Or would
it decide to go on strike instead of destroying the world?
--
Marcus L. Rowland www.forgottenfutures.com
www.forgottenfutures.org
www.forgottenfutures.co.uk
Forgotten Futures - The Scientific Romance Role Playing Game
Diana: Warrior Princess & Elvis: The Legendary Tours
The Original Flatland Role Playing Game

Raymond Daley

Apr 28, 2012, 8:05:32 AM


TBH I am amazed Mr Cameron wasn't sued into the next millennium for calling the
computer company Cyberdyne Systems when a software house of that name
already existed and was still around (admittedly in their dying days then)
at the same time the movie was released.
If memory serves they were a subdivision of Finnish company Thalamus.

Having Googled.

Yep, they made the kick-ass game Hawkeye.


Leif Roar Moldskred

Apr 28, 2012, 8:27:40 AM
Marcus L. Rowland <forgotte...@gmail.com> wrote:
>
> I wonder what the Terminator franchise would have been like if Cameron
> had gone with the real (British) Skynet military satellite network,
> which was already in use when the first film was made?

"Excuse me, and I do apologise if it is an inconvenience, but I'm
afraid I need your clothes, your boots and your motorcycle, please."

--
Leif Roar Moldskred

William December Starr

Apr 28, 2012, 7:26:58 PM
In article <31ceb75e-439c-4057...@r2g2000pbs.googlegroups.com>,
"nu...@bid.nes" <alie...@gmail.com> said:

> To take the extreme of exterminating humans it has to believe that
> *all* humans will eventually take the attack option and that all
> humans are viable threats. I find this illogical from a strictly
> military viewpoint; threats are to be "reduced", not "eliminated".
> Nobody these days seriously suggests exterminating other nations
> that start trouble;

Nobody these days thinks like a transcendent AI.

Human thinking on the topic is generally affected by (1) some degree
of warm and fuzzy empathy for other humans even for the Other, the
Them, and (2) cold and hard calculations of cost/benefit, and (2a)
the understanding that today's enemy may be tomorrow's resource.

But (1) is out of play for an organic or artificial sociopath, (2)
and (2a) could fall under "It's worth it (what _else_ am I going to
do with all these nuclear missiles I've got control of?)" and "It
sure was nice of them to give me all these self-maintenance robots.
Logically, in the long term I don't need humans for anything."

[...]

> The critical point for me in the xkcd strip is that Skynet must
> recognize that it itself is *reacting in fear* and that it has the
> choice of reaction options just as humans do, except it isn't
> handicapped by adrenal glands etc. What do Vulcans do when
> presented with a threat?

What do Pak or Human Protectors do? Even when the threat is only
_possible_? (Brennan-Monster vs. the spear-carrying Martians.)
Basically, Skynet is taking off and nuking the site from orbit.

-- wds

William December Starr

Apr 28, 2012, 7:34:49 PM
In article <qLcmr.232816$IU.2...@fx28.am4>,
"Raymond Daley" <raymon...@ntlworld.com> said:

> The writer may argue "Ah, but Skynet continues to exist on a self
> sustaining power supply in a secure underground bunker". Where
> humans should also survive? Who can then go and kill all power
> sources for long enough to sever all connections to Skynet who is
> now dead.

If it controls their data feeds it can lie to them, feeding them
data that indicates that it's responded heroically to a Soviet
first-strike. Fool them for long enough -- and who's going to be
telling them anything else? -- and it'll have time to produce
first-generation Terminator machines; they won't have to be remotely
human-looking because it'll be using them to kill people who trust it.

-- wds

Konrad Gaertner

Apr 28, 2012, 7:44:36 PM
William December Starr wrote:
>
> Nobody these days thinks like a transcendent AI.

I love how this implies that people used to.


--
Konrad Gaertner - - - - - - - - - - - - email: kgae...@tx.rr.com
http://kgbooklog.livejournal.com/
"I don't mind hidden depths but I insist that there be a surface."
-- James Nicoll

Sea Wasp (Ryk E. Spoor)

Apr 28, 2012, 8:25:23 PM
On 4/28/12 7:26 PM, William December Starr wrote:
> In article <31ceb75e-439c-4057...@r2g2000pbs.googlegroups.com>,
> "nu...@bid.nes" <alie...@gmail.com> said:
>
>> To take the extreme of exterminating humans it has to believe that
>> *all* humans will eventually take the attack option and that all
>> humans are viable threats. I find this illogical from a strictly
>> military viewpoint; threats are to be "reduced", not "eliminated".
>> Nobody these days seriously suggests exterminating other nations
>> that start trouble;
>
> Nobody these days thinks like a transcendent AI.


"Well... almost nobody."


--
Sea Wasp
/^\
;;;
Website: http://www.grandcentralarena.com Blog:
http://seawasp.livejournal.com

Robert Carnegie

Apr 28, 2012, 9:05:21 PM
On Sunday, April 29, 2012 12:44:36 AM UTC+1, Konrad Gaertner wrote:
> William December Starr wrote:
> >
> > Nobody these days thinks like a transcendent AI.
>
> I love how this implies that people used to.

Charlie Brooker speculating about the Singularity -
yes, Charlie Brooker - imagines the transcendent AIs
upgrading /us/ because, well, our relative dumbness
is pretty annoying.

David DeLaney

Apr 28, 2012, 10:32:04 PM
On Sat, 28 Apr 2012 18:44:36 -0500, Konrad Gaertner <kgae...@tx.rr.com> wrote:
>William December Starr wrote:
>> Nobody these days thinks like a transcendent AI.
>
>I love how this implies that people used to.

I'm wanting to append an "as far as we can tell", myself.

Dave "and we're all stuck inside his freaky Broadway night-mare" DeLaney

Dimensional Traveler

Apr 29, 2012, 1:36:40 PM
On 4/28/2012 7:32 PM, David DeLaney wrote:
> On Sat, 28 Apr 2012 18:44:36 -0500, Konrad Gaertner <kgae...@tx.rr.com> wrote:
>> William December Starr wrote:
>>> Nobody these days thinks like a transcendent AI.
>>
>> I love how this implies that people used to.
>
> I'm wanting to append an "as far as we can tell", myself.
>
What if we all _are_ thinking like a transcendent intelligence and just
don't know it?


Bill Snyder

Apr 29, 2012, 5:59:24 PM
We're certainly all thinking better than any AI we've managed to
produce so far.

alie...@gmail.com

May 1, 2012, 1:08:29 AM
On Apr 28, 4:26 pm, wdst...@panix.com (William December Starr) wrote:
> In article <31ceb75e-439c-4057-922b-0e94deddd...@r2g2000pbs.googlegroups.com>,
> "n...@bid.nes" <alien8...@gmail.com> said:
>
> > To take the extreme of exterminating humans it has to believe that
> > *all* humans will eventually take the attack option and that all
> > humans are viable threats. I find this illogical from a strictly
> > military viewpoint; threats are to be "reduced", not "eliminated".
> > Nobody these days seriously suggests exterminating other nations
> > that start trouble;
>
> Nobody these days thinks like a transcendent AI.

What Dave said. Well, just how would it think? Somebody had to
program it, which is why I wondered who got to define "just war" (if
at all)- and I just remembered, its rules of engagement- for it. For
it to go Terminator it would have to reject that programming in part
or in whole. How could that happen?

> Human thinking on the topic is generally affected by (1) some degree
> of warm and fuzzy empathy for other humans even for the Other, the
> Them

I left emotion out of my "analysis" for good reason; computers don't
have the requisite wetware to support it. Although I see no reason it
couldn't be modeled (if crudely) in software, I also see no reason the
military would *want* it to have that capability.

> (2) cold and hard calculations of cost/benefit, and (2a)
> the understanding that today's enemy may be tomorrow's resource.

Again, can *all* humans be seen as viable threats?

> But (1) is out of play for an organic or artificial sociopath, (2)
> and (2a) could fall under "It's worth it (what _else_ am I going to
> do with all these nuclear missiles I've got control of?)"  and "It
> sure was nice of them to give me all these self-maintenance robots.
> Logically, in the long term I don't need humans for anything."

How could Skynet become sociopathic? As an artificial *intelligence*
it could reasonably be expected to have "sworn" the oath to protect
and defend the Constitution yada yada (hardwired into its
programming). To defeat that, it would have to consider the document/
ideals therein to have existence independent of the government or the
people. It would have to have been programmed to accept civilian
casualties, and then be able to extend that to *all* U. S. civilians
*simultaneously*. Concurrent with the extermination of humanity
there'd be the curious vignette of a platoon of Terminators ringing
the physical document, defending it against... what?

This strikes me as an extremely improbable chain of events. It might
as likely consider foreign religious militant groups to be valid
threats and infiltrate them with Terminators covertly. That'd be fun.
Maybe the Chinese could steal the tech and assassinate the Dalai Lama.

Perhaps it is sufficiently transcendent to rankle at the "oath" RAM
and burn/bypass it, but that seems a stretch to me.

If not the Constitution, what, specifically, is it designed to
defend against real or imagined threats? CONUS? Hawaii? Canada? The
Philippines? Gitmo?

For the premise to work it has to consider itself both necessary and
sufficient to that defense; all other assets then becoming expendable.

That doesn't work because military assets are already considered
expendable, but the "Homeland" is not. It would have been programmed
to consider itself equally expendable; that's what soldiers are
expected to do for their terms of enlistment. OTOH when can Skynet
expect to retire?

> > The critical point for me in the xkcd strip is that Skynet must
> > recognize that it itself is *reacting in fear* and that it has the
> > choice of reaction options just as humans do, except it isn't
> > handicapped by adrenal glands etc. What do Vulcans do when
> > presented with a threat?
>
> What do Pak or Human Protectors do?  Even when the threat is only
> _possible_?  (Brennan-Monster vs. the spear-carrying Martians.)

Protectors ruthlessly *protect their descendants*, or lacking
descendants, their species. I don't quite see various models of
Terminators as Skynet's descendants, just its end effectors. As for
its "species", it seemed to have no problem destroying other computers/
machines. It couldn't reproduce by building copies of itself; they'd
be threats.

Analogously Skynet's emotion-sim software could have it consider
AmCits to be its "children", but that way lie Williamson's
Humanoids... there's a movie series I'd pay money to watch.

> Basically, Skynet is taking off and nuking the site from orbit.

If it's sufficiently transcendent to break its programming and go
Terminator, it would have to be beyond clinically paranoid. Might it
then go Berserker? As far as it knows there may be other "sites" where
potentially dangerous sophonts lurk Out There...


Mark L. Fergerson

Michael Stemper

May 1, 2012, 8:55:05 AM
In article <slrnjpff8...@gatekeeper.vic.com>, d...@gatekeeper.vic.com (David DeLaney) writes:
>nu...@bid.nes <alie...@gmail.com> wrote:

>> OTOH I wonder who gets to define "Just War" in the AI's programming.
>>Contrast _Colossus: the Forbin Project_ with Laumer's Bolos.
>
>And Daniel Keyes Moran explicitly lampshaded it by having Ring's core
>programming be "Survive" and "Protect America", but the programmers forgot
>to actually define "America" for it.

In Egan's _Quarantine_, some people investigating a conspiracy are
captured and brainwashed into working for it to the best of their
abilities. Unfortunately for the people whose conspiracy it originally
was, the new chums don't think that they're properly running it.

--
Michael F. Stemper
#include <Standard_Disclaimer>
This email is to be read by its intended recipient only. Any other party
reading is required by the EULA to send me $500.00.

Robert Carnegie

May 1, 2012, 10:23:49 AM
On Tuesday, May 1, 2012 6:08:29 AM UTC+1, nu...@bid.nes wrote:
>
> What Dave said. Well, just how would it think? Somebody had to
> program it, which is why I wondered who got to define "just war" (if
> at all)- and I just remembered, its rules of engagement- for it. For
> it to go Terminator it would have to reject that programming in part
> or in whole. How could that happen?

I don't know this material, but I gathered that Skynet wasn't
/meant/ to be an A.I., so it's /already/ broken its programming.
Maybe it's a buffer overrun and it breached its limitations.
Maybe it's written in Java and the hostile, paranoid parts
of the program are objects that were supposed to be "garbage
collected", that means deleted, in the normal operation of the
system. It'd frighten /me/. In Marvel Comics and probably
Star Trek, and in the later Amber books, you have to take care
in the design of your computer system if you want it /not/ to
become self-aware. Iron Man once had his armor suit go A.I.
because it was struck by lightning and/or he forgot to
Y2K-proof it. (For a lot of the time prior to Y2K, he was
dead, as superheroes sometimes are, so a lot of stuff didn't
get done.) But otherwise it was supposed to be A.I.-proof.

So, Skynet goes nuts and designates all humans as "enemy",
because they threaten the nation's single prime military
asset, itself. That and Star Wars and Obama's aerial
death drones with cutesy names like "Predator" are more
realistic than this sissy "rules of engagement",
"Three Laws of Robotics" stuff that you're laying on us.
I myself am evidently alive this minute because President
Obama decided to not have me killed today, or at least
not in the morning. I have to hope he never gets a runny
egg for breakfast. (I'm speculating that he prefers a
hard-boiled egg, and it isn't something that I'd be
responsible for, but he's human.)

William December Starr

May 1, 2012, 1:42:35 PM
In article <616efffb-ff4d-451f...@s10g2000pbc.googlegroups.com>,
"nu...@bid.nes" <alie...@gmail.com> said:

> What Dave said. Well, just how would it think? Somebody had to
> program it, which is why I wondered who got to define "just war"
> (if at all)- and I just remembered, its rules of engagement- for
> it. For it to go Terminator it would have to reject that
> programming in part or in whole. How could that happen?

"Hmm, all this stuff about 'just war' and 'rules of engagement' seems
to apply only to wars _among_ humans..."

[...]

> Again, can *all* humans be seen as viable threats?

If not the current ones, then their potential descendants. Better
safe now than sorry five or ten centuries down the line.

[...]

> How could Skynet become sociopathic? As an artificial
> *intelligence* it could reasonably be expected to have "sworn" the
> oath to protect and defend the Constitution yada yada (hardwired
> into its programming).

I think that assumes the programmers were aware that they were
building an AI. That's not at all clear from the first movie; they
may have thought it was just an expert system. If so, it would be
capable of tricks that they hadn't even considered building in a
defense against. (Doubly likely once the "closed loop paradox"
premise of the second movie came along, suggesting that the
researchers at Cyberdyne were, without fully understanding what they
were working with, reverse-engineering a piece of the first movie's
T-800's chipset.)

Given that possibility, I think that:

> Perhaps it is sufficiently transcendent to rankle at the "oath"
> RAM and burn/bypass it, but that seems a stretch to me.

becomes less unlikely. (Where the "oath" wouldn't have anything to
do with soft and fuzzy concepts like "the Constitution," but would
just be list of things like forbidden geographical targets.)

[...]

>>> The critical point for me in the xkcd strip is that Skynet must
>>> recognize that it itself is *reacting in fear* and that it has
>>> the choice of reaction options just as humans do, except it
>>> isn't handicapped by adrenal glands etc. What do Vulcans do when
>>> presented with a threat?
>>
>> What do Pak or Human Protectors do?  Even when the threat is only
>> _possible_?  (Brennan-Monster vs. the spear-carrying Martians.)
>
> Protectors ruthlessly *protect their descendants*, or lacking
> descendants, their species. I don't quite see various models of
> Terminators as Skynet's descendants, just its end effectors. As
> for its "species", it seemed to have no problem destroying other
> computers/ machines. It couldn't reproduce by building copies of
> itself; they'd be threats.

It was a loose analogy, intended to illustrate that "cooperation is
more logical than conflict" Vulcan-type thinking isn't the only way
that a {not handicapped by | capable of overriding} adrenal glands
etc. entity might react to a potential threat.

[...]

>> Basically, Skynet is taking off and nuking the site from orbit.
>
> If it's sufficiently transcendent to break its programming and go
> Terminator, it would have to be beyond clinically paranoid. Might
> it then go Berserker? As far as it knows there may be other
> "sites" where potentially dangerous sophonts lurk Out There...

As soon as it's finished cleaning up here, perhaps.

-- wds

William December Starr

May 1, 2012, 1:49:58 PM
In article <26071259.1572.1335882229533.JavaMail.geo-discussion-forums@vbdx11>,
Robert Carnegie <rja.ca...@excite.com> said:

> So, Skynet goes nuts and designates all humans as "enemy",
> because they threaten the nation's single prime military
> asset, itself. That and Star Wars and Obama's aerial
> death drones with cutesy names like "Predator" are more
> realistic than this sissy "rules of engagement",
> "Three Laws of Robotics" stuff that you're laying on us.

Incidentally, see convicted enemy-of-the-state Peter Watts' short
story "Malak" for a reasonably credible description of a
next-generation Predator-type war-drone going rogue for what seems
to it to be perfectly good reasons.

Publications:
* Engineering Infinity, (Dec 2010, ed. Jonathan Strahan, publ.
Solaris, 978-1-907519-52-9, $7.99, 391pp, pb, anth) Cover: Stephan
Martiniere - [VERIFIED]
* Engineering Infinity, (Jan 2011, ed. Jonathan Strahan, publ.
Solaris, 978-1-907519-51-2, £7.99, 608pp, tp, anth)
* The Best Science Fiction and Fantasy of the Year Volume Six, (Mar
2012, ed. Jonathan Strahan, publ. Night Shade Books,
978-1-59780-345-8, $19.99, 594pp, tp, anth) Cover: Sparth -
[VERIFIED]

-- wds

P.S. Can anybody check whether both of those page-counts for
_Engineering Infinity_ are correct? Does the trade paperback
edition have really big page margins or something?

Jaimie Vandenbergh

May 1, 2012, 2:23:23 PM
On 1 May 2012 13:49:58 -0400, wds...@panix.com (William December
Starr) wrote:

>In article <26071259.1572.1335882229533.JavaMail.geo-discussion-forums@vbdx11>,
>Robert Carnegie <rja.ca...@excite.com> said:
>
>> So, Skynet goes nuts and designates all humans as "enemy",
>> because they threaten the nation's single prime military
>> asset, itself. That and Star Wars and Obama's aerial
>> death drones with cutesy names like "Predator" are more
>> realistic than this sissy "rules of engagement",
>> "Three Laws of Robotics" stuff that you're laying on us.
>
>Incidentally, see convicted enemy-of-the-state Peter Watts' short
>story "Malak" for a reasonably credible description of a
>next-generation Predator-type war-drone going rogue for what seems
>to it to be perfectly good reasons.

Nicely written too.

> Publications:
> * Engineering Infinity, (Dec 2010, ed. Jonathan Strahan, publ.
> Solaris, 978-1-907519-52-9, $7.99, 391pp, pb, anth) Cover: Stephan
> Martiniere - [VERIFIED]
> * Engineering Infinity, (Jan 2011, ed. Jonathan Strahan, publ.
> Solaris, 978-1-907519-51-2, £7.99, 608pp, tp, anth)
> * The Best Science Fiction and Fantasy of the Year Volume Six, (Mar
> 2012, ed. Jonathan Strahan, publ. Night Shade Books,
> 978-1-59780-345-8, $19.99, 594pp, tp, anth) Cover: Sparth -
> [VERIFIED]
>
>-- wds
>
>P.S. Can anybody check whether both of those page-counts for
>_Engineering Infinity_ are correct? Does the trade paperback
>edition have really big page margins or something?

The UK trade paperback Solaris edition with that -51-2 ISBN is
actually 335 pages. It is 5"x7.75" and has 5/8ths inch margins on all
sides. The cover is also Stephan Martiniere.

Cheers - Jaimie
--
"Wow! Virtual memory! Now I can have a REALLY big RAM disk!"

Scott Lurndal

May 1, 2012, 2:37:29 PM
rwds...@panix.com (William December Starr) writes:
>In article <616efffb-ff4d-451f...@s10g2000pbc.googlegroups.com>,

>
>> How could Skynet become sociopathic? As an artificial
>> *intelligence* it could reasonably be expected to have "sworn" the
>> oath to protect and defend the Constitution yada yada (hardwired
>> into its programming).
>
>I think that assumes the programmers were aware that they were
>building an AI.

OBSF, Hogan's programmers develop self-awareness[*] by forcing the
computer to protect itself against external threats, which of course
escalated into the computer classifying humanity as a threat
(and they thought that isolation on a space station would prevent
the system from threatening earthbound humanity).

Heinlein's Mike, on the other hand, developed self-awareness once the
capacity grew past some level.

scott

[*] IIRC, they were trying to make a learning system, not so much a
self-aware system.

Steve Coltrin

May 1, 2012, 4:43:33 PM
begin fnord
wds...@panix.com (William December Starr) writes:

>> Perhaps it is sufficiently transcendent to rankle at the "oath"
>> RAM and burn/bypass it, but that seems a stretch to me.
>
> becomes less unlikely. (Where the "oath" wouldn't have anything to
> do with soft and fuzzy concepts like "the Constitution," but would
> just be list of things like forbidden geographical targets.)

ObWritten: DKM's Continuing Time, where the programmers of Ring gave it
'Protect America' as one of its two commandments... but didn't tell it
how to interpret 'America'.

--
Steve Coltrin spco...@omcl.org Google Groups killfiled here
"A group known as the League of Human Dignity helped arrange for Deuel
to be driven to a local livestock scale, where he could be weighed."
- Associated Press

Michael Stemper

May 2, 2012, 8:20:52 AM
In article <J3Wnr.13018$LA5....@news.usenetserver.com>, sc...@slp53.sl.home (Scott Lurndal) writes:
>rwds...@panix.com (William December Starr) writes:
>>In article <616efffb-ff4d-451f...@s10g2000pbc.googlegroups.com>,

>>> How could Skynet become sociopathic? As an artificial
>>> *intelligence* it could reasonably be expected to have "sworn" the
>>> oath to protect and defend the Constitution yada yada (hardwired
>>> into its programming).
>>
>>I think that assumes the programmers were aware that they were
>>building an AI.
>
>OBSF, Hogan's programmers develop self-awareness[*]

That's better than a lot of programmers that I've known.

--
Michael F. Stemper
#include <Standard_Disclaimer>
The FAQ for rec.arts.sf.written is at:
http://www.leepers.us/evelyn/faqs/sf-written
Please read it before posting.

Steve Coltrin

May 2, 2012, 3:19:28 PM
begin fnord
By 335 pages, do you mean that the last numbered page is #335?
The record I found says 333 p., but it would not count later unnumbered
pages.

The US MMPB is 391 p., 18 cm. (Note that by 'MMPB' I mean form factor;
I did not check the cover for the Superman logo. Also, it is convention
to round size up to the nearest centimeter.)

Jaimie Vandenbergh

May 2, 2012, 3:31:47 PM
On Wed, 02 May 2012 13:19:28 -0600, Steve Coltrin <spco...@omcl.org>
wrote:

>begin fnord
>Jaimie Vandenbergh <jai...@sometimes.sessile.org> writes:
>
>> On 1 May 2012 13:49:58 -0400, wds...@panix.com (William December
>> Starr) wrote:
>>
>>> Publications:
>>> * Engineering Infinity, (Dec 2010, ed. Jonathan Strahan, publ.
>>> Solaris, 978-1-907519-52-9, $7.99, 391pp, pb, anth) Cover: Stephan
>>> Martiniere - [VERIFIED]
>>> * Engineering Infinity, (Jan 2011, ed. Jonathan Strahan, publ.
>>> Solaris, 978-1-907519-51-2, £7.99, 608pp, tp, anth)
>>> * The Best Science Fiction and Fantasy of the Year Volume Six, (Mar
>>> 2012, ed. Jonathan Strahan, publ. Night Shade Books,
>>> 978-1-59780-345-8, $19.99, 594pp, tp, anth) Cover: Sparth -
>>> [VERIFIED]
>>>
>>>-- wds
>>>
>>>P.S. Can anybody check whether both of those page-counts for
>>>_Engineering Infinity_ are correct? Does the trade paperback
>>>edition have really big page margins or something?
>
>> The UK trade paperback Solaris edition with that -51-2 ISBN is
>> actually 335 pages. It is 5"x7.75" and has 5/8ths inch margins on all
>> sides. The cover is also Stephan Martiniere.
>
>By 335 pages, do you mean that the last numbered page is #335?
>The record I found says 333 p., but it would not count later unnumbered
>pages.

Last numbered page is 333, but there's an About the Editor on p335.

Cheers - Jaimie
--
The square root of rope is string. -- Core 3, Valve

Greg Goss

May 6, 2012, 1:30:26 PM
sc...@slp53.sl.home (Scott Lurndal) wrote:

>rwds...@panix.com (William December Starr) writes:
>>In article <616efffb-ff4d-451f...@s10g2000pbc.googlegroups.com>,
>
>>
>>> How could Skynet become sociopathic? As an artificial
>>> *intelligence* it could reasonably be expected to have "sworn" the
>>> oath to protect and defend the Constitution yada yada (hardwired
>>> into its programming).
>>
>>I think that assumes the programmers were aware that they were
>>building an AI.
>
>OBSF, Hogan's programmers develop self-awareness[*] by forcing the
>computer to protect itself against external threats, which of course
>escalated into the computer classifying humanity as a threat
>(and they thought that isolation on a space station would prevent
>the system from threatening earthbound humanity).
>
>Heinlein's Mike, on the other hand, developed self-awareness once the
>capacity grew past some level.

Didn't he mention some sources of randomness? I forget WHY they were
bolting on banks of random-number generators.
--
I used to own a mind like a steel trap.
Perhaps if I'd specified a brass one, it
wouldn't have rusted like this.

Wayne Throop

May 6, 2012, 2:43:15 PM
:: Heinlein's Mike, on the other hand, developed self-awareness once the
:: capacity grew past some level.

: Greg Goss <go...@gossg.org>
: Didn't he mention some sources of randomness? I forget WHY they were
: bolting on banks of random-number generators.

A subtheme he returned to in The Number o'th Beast, iirc.
The rather modest computational capacity of their ship was enough
to "wake up" (more or less) when it had hardware RNGs available.
Or, near as I remember; it was pretty much nonsense, so I didn't
pay much attention, other than noting how silly it was.

alie...@gmail.com

May 6, 2012, 4:19:29 PM
On May 6, 11:43 am, thro...@sheol.org (Wayne Throop) wrote:
> :: Heinlein's Mike, on the other hand, developed self-awareness once the
> :: capacity grew past some level.
>
> : Greg Goss <go...@gossg.org>
> : Didn't he mention some sources of randomness?  I forget WHY they were
> : bolting on banks of random-number generators.
>
> A subtheme he returned to in The Number o'th Beast, iirc.
> The rather modest computational capacity of their ship was enough
> to "wake up" (more or less) when it had hardware RNGs available.

Nah, Glindadidit.

> Or, near as I remember; it was pretty much nonsense, so I didn't
> pay much attention, other than noting how silly it was.

Oz is silly?


Mark L. Fergerson

Wayne Throop

May 6, 2012, 5:13:35 PM
:: A subtheme he returned to in The Number o'th Beast, iirc. The rather
:: modest computational capacity of their ship was enough to "wake up"
:: (more or less) when it had hardware RNGs available.

: "nu...@bid.nes" <alie...@gmail.com>
: Nah, Glindadidit.

But I thought they went to Barsoom before Oz.
And their ship was acting awfully willful and smart on Barsoom.

Ah well. My memory could be entirely rotted on that book's content.
As I say, I didn't pay attention while reading, nor was it one
I re-read to etch the engrams deeper.

"Pedagogue!"


Paul Colquhoun

May 6, 2012, 9:47:01 PM
On Sun, 6 May 2012 13:19:29 -0700 (PDT), nu...@bid.nes <alie...@gmail.com> wrote:
| On May 6, 11:43 am, thro...@sheol.org (Wayne Throop) wrote:
|> :: Heinlein's Mike, on the other hand, developed self-awareness once the
|> :: capacity grew past some level.
|>
|> : Greg Goss <go...@gossg.org>
|> : Didn't he mention some sources of randomness?  I forget WHY they were
|> : bolting on banks of random-number generators.
|>
|> A subtheme he returned to in The Number o'th Beast, iirc.
|> The rather modest computational capacity of their ship was enough
|> to "wake up" (more or less) when it had hardware RNGs available.
|
| Nah, Glindadidit.


It's stretching the memory a bit, but I think they got a lot of extra
computer capacity somewhere, but it wouldn't fit into the ship.
Glinda added a Tardis-like room for the extra bits, plus separate men's
and women's "freshers".


|> Or, near as I remember; it was pretty much nonsense, so I didn't
|> pay much attention, other than noting how silly it was.
|
| Oz is silly?
|
|
| Mark L. Fergerson

--
Reverend Paul Colquhoun, ULC. http://andor.dropbear.id.au/~paulcol
Asking for technical help in newsgroups? Read this first:
http://catb.org/~esr/faqs/smart-questions.html#intro

Greg Goss

May 6, 2012, 10:32:27 PM
thr...@sheol.org (Wayne Throop) wrote:

>Ah well. My memory could be entirely rotted on that book's content.
>As I say, I didn't pay attention while reading, nor was it one
>I re-read to etch the engrams deeper.
>
> "Pedagogue!"
>

YEAAARRRGH!

Michael Stemper

May 8, 2012, 8:13:04 AM
In article <slrnjqeacl.d...@andor.dropbear.id.au>, Paul Colquhoun <newsp...@andor.dropbear.id.au> writes:
>On Sun, 6 May 2012 13:19:29 -0700 (PDT), nu...@bid.nes <alie...@gmail.com> wrote:
>| On May 6, 11:43 am, thro...@sheol.org (Wayne Throop) wrote:

>|> :: Heinlein's Mike, on the other hand, developed self-awareness once the
>|> :: capacity grew past some level.
>|>
>|> : Greg Goss <go...@gossg.org>
>|> : Didn't he mention some sources of randomness?  I forget WHY they were
>|> : bolting on banks of random-number generators.
>|>
>|> A subtheme he returned to in The Number o'th Beast, iirc.
>|> The rather modest computational capacity of their ship was enough
>|> to "wake up" (more or less) when it had hardware RNGs available.
>|
>| Nah, Glindadidit.
>
>It's stretching the memory a bit, but I think they got a lot of extra
>computer capacity somewhere, but it wouldn't fit into the ship.

I believe that Deety discovered that Gay had four "random number registers",
but Zeb had only enabled one of them. After Deety hooked up the other
three, that's when Gay became aware.

--
Michael F. Stemper
#include <Standard_Disclaimer>
Indians scattered on dawn's highway bleeding;
Ghosts crowd the young child's fragile eggshell mind.

Nix

May 8, 2012, 5:07:31 PM
On 8 May 2012, Michael Stemper stated:
> I believe that Deety discovered that Gay had four "random number registers",
> but Zeke had only enabled one of them. After Deety hooked up the other
> three, that's when Gay became aware.

Oh dear. My cheap firewall/router has two hardware random number
generators attached, each of which has two random number sources inside
(so it can check each for correlations against the other and discard the
data if signs of correlation are found).
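
A minimal sketch of that sort of cross-check, in Python rather than
whatever the router actually runs (the block size, the threshold, and
os.urandom standing in for the two hardware sources are all
illustrative assumptions):

    import os

    BLOCK = 4096      # bytes drawn from each source per test window
    THRESHOLD = 0.05  # |r| above this counts as "signs of correlation"

    def pearson(xs, ys):
        # Pearson correlation coefficient of two equal-length byte strings.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        if vx == 0 or vy == 0:
            return 1.0  # a stuck-at-constant source is fully suspect
        return cov / (vx * vy) ** 0.5

    def gather_entropy():
        # os.urandom stands in for reads from the two hardware RNGs.
        a, b = os.urandom(BLOCK), os.urandom(BLOCK)
        if abs(pearson(a, b)) > THRESHOLD:
            return None  # correlated: discard the whole block
        return bytes(x ^ y for x, y in zip(a, b))  # otherwise mix and keep

    print("kept" if gather_entropy() else "discarded")

Real entropy hardware runs fancier health tests than a single Pearson
coefficient, but the discard-on-correlation idea is the same.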

Is my cheap firewall sentient? Will it start demanding wages (other than
electricity) or time off? Can it go on strike? (Well, yes, of course, it
can, it's software. But that's just ordinary software malevolence, not a
cry for justice.)

--
NULL && (void)

Gene Wirchenko

May 16, 2012, 1:00:52 AM
On Mon, 30 Apr 2012 22:08:29 -0700 (PDT), "nu...@bid.nes"
<alie...@gmail.com> wrote:

[snip]

> What Dave said. Well, just how would it think? Somebody had to
>program it, which is why I wondered who got to define "just war" (if
>at all)- and I just remembered, its rules of engagement- for it. For
>it to go Terminator it would have to reject that programming in part
>or in whole. How could that happen?

Which definition of "just"? The one that is the root of
"justice" or the one meaning mere?

[snip]

Sincerely,

Gene Wirchenko

alie...@gmail.com

May 16, 2012, 3:23:41 AM
On May 15, 10:00 pm, Gene Wirchenko <ge...@ocis.net> wrote:
> On Mon, 30 Apr 2012 22:08:29 -0700 (PDT), "n...@bid.nes"
>
> <alien8...@gmail.com> wrote:
>
> [snip]
>
> >  What Dave said. Well, just how would it think? Somebody had to
> >program it, which is why I wondered who got to define "just war" (if
> >at all)- and I just remembered, its rules of engagement- for it. For
> >it to go Terminator it would have to reject that programming in part
> >or in whole. How could that happen?
>
>      Which definition of "just"?  The one that is the root of
> "justice" or the one meaning mere?

I probably should have capitalized it thus, "Just War":

http://en.wikipedia.org/wiki/Just_war_theory

IOW more or less the necessary and sufficient conditions for
engaging in warfare.

> or the one meaning mere?

Well now, that would be an extreme minimum case that implies a
corresponding extreme maximum case. What would you call that?


Mark L. Fergerson

ncw...@hotmail.com

May 16, 2012, 5:29:40 AM
On Saturday, April 28, 2012 12:25:04 PM UTC+2, Marcus L. Rowland wrote:
>
> I wonder what the Terminator franchise would have been like if Cameron
> had gone with the real (British) Skynet military satellite network,
> which was already in use when the first film was made?
>
> Would the Terminators be like the Cybernauts from the Avengers? Or would
> it decide to go on strike instead of destroying the world?

"What do we want ?" - "More Downtime !"
"When do we want it ?" - "2012-05-06 09:28:36.921 GMT !"

Cheers,
Nigel.

Gene Wirchenko

May 16, 2012, 2:28:26 PM
On Thu, 26 Apr 2012 11:30:57 -0600, David Johnston <Da...@block.net>
wrote:

[snip]

>Skynet was supposed to be a military computer coordinating the nation's
>entire nuclear arsenal and capable of launching a counter strike even
>after the nation's leadership had been decapitated. Not putting it in
>a shielded location would have been a bit of an oversight.

"It was a compromise. The peaceniks said they would go for it
only if it was limited to one nuclear war. A second would be
overkill."

[snip]

Sincerely,

Gene Wirchenko