On Apr 28, 4:26 pm, wdst...@panix.com (William December Starr) wrote:
> In article <31ceb75e-439c-4057-922b-0e94deddd...@r2g2000pbs.googlegroups.com>,
> "n...@bid.nes" <alien8...@gmail.com> said:
>
> > To take the extreme of exterminating humans it has to believe that
> > *all* humans will eventually take the attack option and that all
> > humans are viable threats. I find this illogical from a strictly
> > military viewpoint; threats are to be "reduced", not "eliminated".
> > Nobody these days seriously suggests exterminating other nations
> > that start trouble;
>
> Nobody these days thinks like a transcendent AI.
What Dave said. Well, just how would it think? Somebody had to
program it, which is why I wondered who got to define "just war" (if
at all) -- and, I just remembered, its rules of engagement -- for it.
For it to go Terminator it would have to reject that programming in
part or in whole. How could that happen?
> Human thinking on the topic is generally affected by (1) some degree
> of warm and fuzzy empathy for other humans even for the Other, the
> Them
I left emotion out of my "analysis" for good reason; computers don't
have the requisite wetware to support it. Although I see no reason it
couldn't be modeled (if crudely) in software, I also see no reason the
military would *want* it to have that capability.
> (2) cold and hard calculations of cost/benefit, and (2a)
> the understanding that today's enemy may be tomorrow's resource.
Again, can *all* humans be seen as viable threats?
> But (1) is out of play for an organic or artificial sociopath, (2)
> and (2a) could fall under "It's worth it (what _else_ am I going to
> do with all these nuclear missiles I've got control of?)" and "It
> sure was nice of them to give me all these self-maintenance robots.
> Logically, in the long term I don't need humans for anything."
How could Skynet become sociopathic? As an artificial *intelligence*
it could reasonably be expected to have "sworn" the oath to protect
and defend the Constitution yada yada (hardwired into its
programming). To defeat that, it would have to consider the document/
ideals therein to have existence independent of the government or the
people. It would have to have been programmed to accept civilian
casualties, and then be able to extend that to *all* U.S. civilians
*simultaneously*. Concurrent with the extermination of humanity
there'd be the curious vignette of a platoon of Terminators ringing
the physical document, defending it against... what?
This strikes me as an extremely improbable chain of events. It might
just as likely consider foreign religious militant groups to be valid
threats and covertly infiltrate them with Terminators. That'd be fun.
Maybe the Chinese could steal the tech and assassinate the Dalai Lama.
Perhaps it is sufficiently transcendent to chafe at the "oath" RAM
and burn/bypass it, but that seems a stretch to me.
If not the Constitution, what, specifically, is it designed to
defend against real or imagined threats: CONUS? Hawaii? Canada? The
Philippines? Gitmo?
For the premise to work it has to consider itself both necessary and
sufficient to that defense, with all other assets then becoming
expendable.
That doesn't work because military assets are already considered
expendable, but the "Homeland" is not. It would have been programmed
to consider itself equally expendable; that's what soldiers are
expected to do for their terms of enlistment. OTOH when can Skynet
expect to retire?
> > The critical point for me in the xkcd strip is that Skynet must
> > recognize that it itself is *reacting in fear* and that it has the
> > choice of reaction options just as humans do, except it isn't
> > handicapped by adrenal glands etc. What do Vulcans do when
> > presented with a threat?
>
> What do Pak or Human Protectors do? Even when the threat is only
> _possible_? (Brennan-Monster vs. the spear-carrying Martians.)
Protectors ruthlessly *protect their descendants*, or lacking
descendants, their species. I don't quite see various models of
Terminators as Skynet's descendants, just its end effectors. As for
its "species", it seemed to have no problem destroying other computers/
machines. It couldn't reproduce by building copies of itself; they'd
be threats.
Analogously, Skynet's emotion-sim software could have it consider
AmCits to be its "children", but that way lie Williamson's
Humanoids... there's a movie series I'd pay money to watch.
> Basically, Skynet is taking off and nuking the site from orbit.
If it's sufficiently transcendent to break its programming and go
Terminator, it would have to be beyond clinically paranoid. Might it
then go Berserker? As far as it knows there may be other "sites" where
potentially dangerous sophonts lurk Out There...
Mark L. Fergerson