
Defense, Deterrence, Military Stuff


Will Ware

Nov 4, 1996

This is a long post of some ideas I've been kicking around about the
military impact of nanotechnology. I'm not any kind of military expert, but
I notice that there's very little serious discussion about military
applications of nanotechnology, and I'm hoping this will prompt more. To
people who find long posts offensive, my apologies, and I don't intend to
post any more long posts.

In accord with today's date, this post is about scary stuff.


DEFENSE AND DETERRENCE IN AN AGE OF NANOTECHNOLOGY

It's a very good bet that nanotechnology will arrive in some form within a
few decades, and that when it arrives, in addition to its many benefits, it
will provide grave new dangers for terrorism, warfare, and totalitarianism.
It's important to start thinking about this stuff before the hardware
arrives.

Nanotechnology would provide an extremely novel technological shift in
weaponry. Such technological shifts invalidate previously acceptable
military policies, and demand the invention of new policies. Any society
that fails to respond to this demand places itself at a grave disadvantage
relative to its more adaptive and ambitious neighbors.

The last major technological shifts in weaponry were the missile and the
atom bomb. The missile allowed military action at a distance, and the bomb
gave us the capacity to level cities in a single blow. Combined, these
technologies called for the redesign of a lot of military thinking that had
worked since the dawn of recorded history.


HOW THE BOMB CHANGED WARFARE

Prior to missiles and atomic bombs, warfare could not be carried on at a
distance, and it was rarely possible (and never easy or inexpensive) to
completely destroy an opposing force. The advantage of surprising one's
enemy was short-lived. A surprised enemy force might respond ineffectually,
but it would always respond at some level.

The pre-nuclear strategist performed calculations spelled out in Sun Tzu's
"Art of War". He compared the sizes of armies, the competence of
commanders, the kinds of weapons used, terrain, and morale. From these
factors, he could make an informed estimate of who would win. Terrain,
equipment, and training determined the preferred tactics for a force. Sun
Tzu also wrote of the use of spies in sizing up the capacities of an
opposing force.

Within this paradigm of localized, limited combat, there developed a
technical literature of how to conduct warfare. Sun Tzu has already been
mentioned; in the west, another early work was Machiavelli's "The Prince".
I am almost entirely unfamiliar with this literature, but it nevertheless
exists, and is still used today, since nuclear weapons are worthless for
tactical warfare. Deficiencies in our knowledge of
conventional warfare became apparent in our unpreparedness for the combat
conditions we encountered in Korea and Vietnam.

With the advent of the bomb, the possibility arose of fighting a war with no
conventional tactical combat at all, a war consisting only of an exchange of
nuclear warheads carried by intercontinental missiles. It was also a war
in which, for the first time, it would be possible to kill everybody on
both sides, military personnel and civilians alike.

In the new form of warfare, the previously significant factors (size of
forces, competence of commanders, equipment, terrain, morale, training) were
either unimportant or of greatly diminished importance. This was a form of
war that nobody could claim to know how to win with any certainty.


THE DEVELOPMENT OF NUCLEAR POLICY

The first idea in the new age of military thinking was that it was possible
to simultaneously launch enough missiles to destroy virtually all of an
enemy's military capability, and that this would happen in a matter of
hours or even minutes. The enemy would have no time to organize a response.
This idea is called a first strike.

Initially it seemed that whoever launched a first strike would be the only
survivor, and by default the victor. Each side therefore quickly
established a second-strike capability, or at least gave the convincing
impression that such a capability was in place. The thinking on
second-strike capability was that in the event of an opponent's first
strike, the victim would have enough survivors and equipment remaining
intact to launch a substantial counter-attack.

The second strike was the beginning of deterrence: the opponent would
hesitate to launch a first strike, knowing that he would suffer the effects
of a second strike. As long as the opponent believed in your second strike
capability, and wanted badly enough to avoid being the victim of a second
strike, he would refrain from launching a first strike.

This policy of "mutually assured destruction" was sufficient to maintain the
safety of the world for the forty remaining years of Soviet cohesion. The
policy has had many critics, but its stability in the face of enormous
military and political tensions is remarkable.

Deterrence worked for several reasons:

1. Neither side was suicidal. Neither side valued victory over survival.

2. Each side could understand the other's thinking well enough to estimate
the other's interpretation of various scenarios, and the other's likely
response. Furthermore, neither side acted stupidly. It could be said that
the two sides were "mutually rational"; each recognized the fundamental
rationality of the other.

3. Neither side had an overwhelming advantage over the other. Either side
could launch a first strike or respond with a second strike, and neither
side could defend itself against a first or second strike. The size of one
side's nuclear arsenal might be greater or lesser than the other, but not by
enough to give either side a decisive advantage.


MORE ABOUT DETERRENCE

There are variations on the idea of a second strike. One was the idea of a
"doomsday machine", a machine that would automatically launch a second
strike even if its builders did not survive their opponents' first strike.
A doomsday machine need not launch missiles; it could do anything that the
opponent would find sufficiently threatening to prevent the launch of a
first strike. It could release biological weapons or huge amounts of
radioactive material that would render the planet uninhabitable. The
important characteristic of a doomsday machine is that the opponent cannot
disable it, that once its builders are dead, its actions are unstoppable.

Deterrence is a funny business of conflicting motivations. As long as the
stalemate is the safest, stablest policy, you want to actively promote it.
For example, the Rosenbergs gave American military secrets to the Soviets,
believing that they were working to stabilize the Cold War stalemate. But
if it appears that you can pursue your goals without committing suicide,
you may be tempted to do so at the risk of destabilizing the stalemate; an
example of that was the Cuban missile crisis.

There is probably some useful and interesting mathematical way to represent
the "stability" of a stalemate, almost like a potential energy surface.
I'll think about that more later. I bet von Neumann already figured it out.
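
As a toy version of that, the two-sided stalemate can be written out as a
2x2 game and checked for stable points. The payoffs below are numbers I
invented purely for illustration: with a credible second strike, mutual
restraint is the only equilibrium; take the second strike away and striking
first becomes the dominant move.

    import itertools

    # Toy deterrence game. All payoffs are invented, illustrative figures.
    STRATEGIES = ("refrain", "strike")
    PAYOFF = {  # (row move, col move) -> (row payoff, col payoff)
        ("refrain", "refrain"): (0, 0),
        ("refrain", "strike"): (-100, -90),  # col strikes first, absorbs the second strike
        ("strike", "refrain"): (-90, -100),
        ("strike", "strike"): (-110, -110),  # both first and second strikes land
    }

    def nash_equilibria(payoff):
        """Strategy pairs from which neither side gains by deviating alone."""
        eqs = []
        for r, c in itertools.product(STRATEGIES, repeat=2):
            row_ok = all(payoff[(r, c)][0] >= payoff[(alt, c)][0] for alt in STRATEGIES)
            col_ok = all(payoff[(r, c)][1] >= payoff[(r, alt)][1] for alt in STRATEGIES)
            if row_ok and col_ok:
                eqs.append((r, c))
        return eqs

    print(nash_equilibria(PAYOFF))  # [('refrain', 'refrain')]

    # Remove the second strike: a first strike now disarms the victim and
    # pays off, and even a simultaneous exchange beats absorbing a strike
    # unanswered. The stalemate collapses.
    PAYOFF[("refrain", "strike")] = (-100, 10)
    PAYOFF[("strike", "refrain")] = (10, -100)
    PAYOFF[("strike", "strike")] = (-50, -50)
    print(nash_equilibria(PAYOFF))  # [('strike', 'strike')]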


WILL DETERRENCE WORK FOR NANOTECHNOLOGY?

At first blush, deterrence looks like a good strategy for handling nanotech
threats. Like nuclear threats, they offer the possibility of a devastating
first strike, in which all or nearly all the victims are killed or
incapacitated. And, as with nuclear weapons, it may be possible to build
a second strike capability, a doomsday machine.

But deterrence was developed in the context of a bipolar world: two
principal superpowers, with the lesser powers of the world taking sides.
Nanotechnology would greatly reduce the "entrance requirement" for
superpower equivalency. There might be thousands of "superpowers", and the
situation would be much more complex than the Cold War.

There are a number of points on which the strategy of deterrence could break
down:

1. Some parties may value victory over survival, or for that matter, over
the survival of all humanity or all of life on earth. In short, some
parties might be suicidal.

2. Some parties may not understand the destructive potential of
nanotechnological weapons, might not believe that they might destroy
themselves and others, or might simply have incompetent people designing,
building, and deploying their weapons. In short, some parties might be
incompetent.

3. With so many possible powers, coming from different cultural
backgrounds, it would be easy to misunderstand one another's motives and
reasoning. If a party cannot recognize the fundamental rationality of
another party, it may be impossible to find a stable stalemate. In short,
some parties may not be mutually rational.

4. If there are thousands of different "superpowers", each with some form
of automatic second strike capacity or doomsday machine, it's possible that
at least one doomsday machine will go off accidentally. This is another
case of incompetence.
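
Point 4 yields to quick arithmetic. If each of N doomsday machines has a
small independent chance p of firing accidentally in a given year, the
chance that at least one fires is 1 - (1-p)^N, which grows alarmingly with
N. The p below is a made-up figure, used only to show the shape of the
curve:

    # Chance that at least one of n independent doomsday machines fires
    # accidentally in a year; p is an invented, illustrative figure.
    p = 1e-4
    for n in (2, 100, 1000, 10000):
        print(f"{n:6d} machines: {1 - (1 - p) ** n:.2%}")
    # ->  2: 0.02%   100: ~1%   1000: ~9.5%   10000: ~63%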

Assuming that all parties are competent, non-suicidal, and mutually
rational, deterrence can work as long as it is possible to launch a second
strike, and impossible to defend against a first or second strike. If
defense becomes feasible, the stalemate is substantially compromised. An
example of this was the Soviets' agitation over Reagan's proposal of the
Star Wars defense system.


CONTAINING DEVELOPMENT

In the short term, it may be possible to limit nanotechnological hardware
development to groups and countries we consider competent, rational and
non-suicidal, and among those, the ones we consider friendly. But
containment cannot be sustained for long. The minimal seed required for
nanotechnological hardware development will eventually be invisibly small.

The closest model we now have to the requisite level of containment security
is the kind of laboratory used to house major biohazards such as the HIV and
Ebola viruses. It should be borne in mind that, as frightening as these
viruses are, an outbreak is unlikely to destroy all of humanity or all of
life, but this is a distinct possibility if an irrational, suicidal, or
incompetent party gets hold of the means to develop nanotechnological
hardware.

Ultimately, development cannot be reliably contained, and even early in the
game it may prove impossible to contain. Containment should not be depended
upon, unless there is some compelling reason to believe that it will be
effective. In the early development of nanotech, this might be the case if
it depended on some uniquely complex and expensive piece of equipment, of
which only a few copies existed in the world. But again, this form of
containment will cease to be effective soon after the development of a
general self-replicating assembler.


WHAT ABOUT DEFENSE?

Deterrence and containment are both provisional and ultimately unsafe
policies. The only remaining policy (that I can think of, anyway) is
defense.

It's tough to reason about defense in the kinds of general terms I've been
using so far. Like tactical warfare, it depends on the specifics of the
situation: weapons, terrain, numbers, and all that. The complexities of
defense on the molecular level are hinted at by the complexity of the
human immune system.

But one question is of crucial importance: is an effective defense possible
at all? If it is, then it must be investigated, and when the time comes, it
must be employed. If it is not, then we need to start looking at deterrence
some more.


WAR GAMES

It might be worthwhile to invent a game wherein people explore as many
different avenues for attack, defense, and deterrence as possible, and there
should be some way to encourage people to share the ideas they come up with.
A game like this might be distributed either over the internet or in some
fashion like Dungeons and Dragons, or the Magic card game. Magazines or
newsgroups or FTP sites might become repositories for the best ideas players
put forth.

If such a game were popular, it would be a handy way to generate a lot more
ideas for these things than any single person would be likely to come up
with on their own. There would of course be the effort of sorting out the
realistic from the bogus, but the design of the game might include some
bogosity-filtering, and additional filtering could be added to the game
culture (assuming one actually arises).

Ideas from the gaming community could inform discussion about defense and
deterrence. Likewise, the game could provide a channel for distributing
minimal-bogosity ideas about nanotechnology through the popular culture.

Also, I haven't talked much here about terrorism, crime, or political
oppression, but the game could explore those areas as well.
--
-------------------------------------------------------------
Will Ware <ww...@world.std.com> web <http://world.std.com/~wware/>
PGP fingerprint 45A8 722C D149 10CC F0CF 48FB 93BF 7289

David Blenkinsop

Nov 6, 1996

On Nov 4, Will Ware wrote:

> This is a long post of some ideas I've been kicking around about the
> military impact of nanotechnology. I'm not any kind of military expert, but
> I notice that there's very little serious discussion about military
> applications of nanotechnology, and I'm hoping this will prompt more.

> Nanotechnology would provide an extremely novel technological shift in
> weaponry. Such technological shifts invalidate previously acceptable
> military policies, and demand the invention of new policies.

I was quite interested in Will Ware's posting on nano-security; to me, his
remarks about deterring aggression seem pretty straightforward.
Unfortunately, I am not sure that he is on quite the right track yet in terms
of contemplating truly novel approaches to a novel (and probably crucial)
problem. What he seems to be saying is that nano-deterrence might not be all
that different from nuclear deterrence, which is plausible up to a point,
given the "scariness" of nano-warfare. He then goes on to say:

> But deterrence was developed in the context of a bipolar world: two
> principal superpowers, with the lesser powers of the world taking sides.
> Nanotechnology would greatly reduce the "entrance requirement" for
> superpower equivalency. There might be thousands of "superpowers", and the
> situation would be much more complex than the Cold War.
>
> There are a number of points on which the strategy of deterrence could break
> down:
>
> 1. Some parties may value victory over survival, or for that matter, over
> the survival of all humanity or all of life on earth. In short, some
> parties might be suicidal.
>
> 2. Some parties may not understand the destructive potential of
> nanotechnological weapons, might not believe that they might destroy
> themselves and others, or might simply have incompetent people designing,
> building, and deploying their weapons. In short, some parties might be
> incompetent.
>
> 3. With so many possible powers, coming from different cultural
> backgrounds, it would be easy to misunderstand one another's motives and
> reasoning. If a party cannot recognize the fundamental rationality of
> another party, it may be impossible to find a stable stalemate. In short,
> some parties may not be mutually rational.
>
> 4. If there are thousands of different "superpowers", each with some form
> of automatic second strike capacity or doomsday machine, it's possible that
> at least one doomsday machine will go off accidentally. This is another
> case of incompetence.

In other words, deterrence, at least as we've known it up to now, might not
work at all! This leads Ware into contemplating the same ideas, about
controlling nanotechnology and about active shields, that Eric Drexler
discussed in his 1986 book, _Engines of Creation_. The problem that I have is
that I can't quite see how to put most of these various ideas together into a
coherent picture or make it all seem workable, at least as it stands so far.
For instance, Drexler's scheme was to control the technology by keeping the
precise engineering details locked up, secret, until the day should arrive
that we could build a fully developed or "perfected" shield system. Now if we
are really keen on keeping the engineering secret, we are currently going
about it in a very strange way, since practically everyone wants nanotech
work to be openly published and unclassified! Actually, allowing companies to
trade freely in the bits and pieces of this technology is probably crucial to
developing this stuff in the democracies in the first place, so just when do
we start classifying so many of the details that other folks can't follow
along and duplicate the results in short order?


SunCat

Nov 6, 1996

Will Ware wrote:

> This is a long post of some ideas I've been kicking around about the
> military impact of nanotechnology. I'm not any kind of military expert, but
> I notice that there's very little serious discussion about military
> applications of nanotechnology, and I'm hoping this will prompt more.

This being USENET you can guess what I am going to say.

Molecular Nanotechnology and the World System by Thomas McCarthy is at
Version 0.4 now.

http://bcf.usc.edu/%7Etmccarth/main.htm

He also maintains a "MNT and the World System Bulletin Board."

The paper is nearly complete. As I write this I find:


Introduction
The Loss of Conflict
Power
Power: Relative and Absolute
Power: Hard and Soft
War
War in the Age of Invisible Machines
Of Swords and Plowshares
The State of Nature
Power: Military and State
The State of Nature
Peace
The Conditions of Peace
E Pluribus Unum
Autarky
Engines of Autarky
Economies of (Small) Scale
The Ties that Bind
Designer Communities
Good Fences...
Atoms as Bits
From One, Many
Defense and Deterrence (not finished)
The Dilemma of Perfect Shields


Likewise the "Social implications of Nanotechnology" page needs input.

http://www.carney.com/erik/nano/nanosoc.html

The option of making someone as rich as they want to be only works on the
rational, but I think it's worth a try. The second-by-second race of
measures and countermeasures threatens to place warfighting decisions under
the control of AIs rather than constitutional governments. The battlefield
of the future may resemble a trillion-armed wrestling match. Space migration
may make world-killing, khaki-goo attacks more likely, as spacers may
believe themselves immune to such attacks. Then again, dispersion of
humankind may be the best bet for the race as a whole, as space is a backup
(a la Heinlein).

SunCat>>>>>>>>>>>>>>> kam...@ibm.cl.msu.edu
"You may not be interested in war but war is interested in you."
--Leon Trotsky


Will Ware

Nov 7, 1996

David Blenkinsop (bl...@sk.sympatico.ca) wrote:
: ... deterrence, at least as we've known it up to now, might not
: work at all! This leads Ware into contemplating the same ideas, about
: controlling nanotechnology and about active shields, that Eric Drexler
: discussed in his 1986 book, _Engines of Creation_. The problem that I
: have is that I can't quite see how to put most of these various ideas
: together into a coherent picture or make it all seem workable, at least
: as it stands so far.

I didn't claim to offer a complete solution, by any means. I am still
thinking this stuff through for myself, and have reached only a few
conclusions,
none of them sufficient to provide a safe, secure world for future
generations.

BTW, approximately the same text is now a web page, with some slight updates
as I've continued thinking about the issues, at
http://world.std.com/~wware/nt-deter.html

My main point in posting was to prompt thought and discussion about what I
perceive as a very important area that seems to get short shrift in nanotech
discussions on this newsgroup and elsewhere. I hope others will put forth
better ideas than mine, because my best ideas are none too good.

: For instance, Drexler's scheme was to control the technology by keeping the
: precise engineering details locked up, secret, until the day should arrive
: that we could build a fully developed or "perfected" shield system. Now if
: we are really keen on keeping the engineering secret, we are currently going
: about it in a very strange way, since practically everyone wants nanotech
: work to be openly published and unclassified!

It *is* a fascinating dilemma. I could make compelling arguments to myself to
go either way. I suspect that the safest course is to openly publish up to
some particular level of sophistication (at least sufficient to give people a
sense of the magnitude of the issues involved) and conceal/classify everything
beyond that, with the possible exception of publishing designs for defensive
measures from which offensive designs cannot be inferred.

It's true that what we've learned from nuclear weapons can't teach us
everything we need to know, but it's a very useful place to start. E.g. it's
supposed to be fairly easy to become a nuclear terrorist if you have certain
pieces of information. We seem to do an adequate job of keeping that info
under wraps, while allowing random citizens to know quite a bit about more
general nuclear matters (how fission and fusion work, how reactors work).
Somebody must have once upon a time sat down and figured out which info could
be safely published and which should be controlled, and they came up with a
way to control that information so that we don't have nuclear terrorist
incidents happening all the time.

: ...[David] Brin makes several points to the effect that "nothing
: will protect or save privacy. It's over." In other words, pretty soon the
: world is going to be loaded with miniaturized spy-bots...
: ...if you build a
: general nano-factory while working up an illicit weapons contract on the
: side, chances are excellent that some snooper would be on to you in Brin's
: "clear as glass" world!

This is an interesting approach. I generally regard the arrival of nanotech
as more or less inevitable for reasons of economic incentive, and perhaps the
same is true of the end of privacy. (People often react to the "end of
privacy" idea as the beginning of a Big Brother era, but that would be the
case only if some powerful people kept their privacy. A uniform loss of
privacy could result in greater government accountability rather than a
techno-totalitarian state.)

This is an interesting and perhaps viable approach to limiting the development
of offensive weapons, but only if your surveillance is good enough to detect
every possible development effort. Maybe the spybots can't get everywhere.
Maybe they can't distinguish weapons manufacturing from some legitimate
activities. Maybe you've invented a weapon they don't recognize. Maybe you're
living in a space colony moving away from Earth at nearly the speed of light,
planning to send back your weapon on a slightly-faster ship, so the only
warning Earth gets is your ship's arrival at near-light speed, which might
in itself be a major disaster.

Bryan W. Reed

Nov 11, 1996

In article <55st2o$s...@foglet.rutgers.edu>,
Will Ware <ww...@world.std.com> wrote:
>Somebody must have once upon a time sat down and figured out which info could
>be safely published and which should be controlled, and they came up with a
>way to control that information so that we don't have nuclear terrorist
>incidents happening all the time.

Don't kid yourself--it's never been so simple and well-organized as that.

The technology export laws for the US are an example. The set of things
you can't say and to whom you can't say them changes on almost a daily basis,
almost arbitrarily.

Another example: I worked for somebody in Livermore who actually does know
a fair number of the secrets, and works directly with lots of politicians.
He told us a few interesting things about the legal system in this country.
For one thing, it's easy to spill secrets and not get in serious trouble for
it if you can claim it was in the public interest. He went so far as to say
there are no secrets in this country.
Also, policy tends to be defined retroactively. After something is
accomplished, the people on top decide to take credit for it and say that
that was the policy all along. Forethought is right out.

The image of some person at the beginning of it all deciding what could be
published and what couldn't is grossly oversimplified. And it gives a
dangerous illusion of "someone being in control." Dangerous in the sense
that it would be unwise to blindly trust it.

Oddly enough, though, we still don't have nuclear terrorists all over the place.
I think part of the reason is that only a tiny, tiny fraction of the people
actually WANT to be nuclear terrorists. I seriously wonder how hard the
information is to get if you really want to get it. Every physicist knows
a fair amount about how a nuclear bomb works--it's hard not to. How much
more info do you have to assemble to actually be able to put one together?
Once you have the knowledge, you just have to get the material. We go nuts
trying to keep our nuclear material accounted for and protected; does every
nuclear nation do the same? How much black market U-235 is there in the
world?

I don't think it's going to be feasible to keep nanotech secret for very long.
It's already being developed in so many independent places that it would be
hard to establish any kind of overarching control. Even if you did manage to
legally clamp down on the information, how long would it be before someone
ignores the law and puts it on the web? (How big will the web be by that time?
What legal controls will exist for it?) What would really happen to such a
person?

Have fun,

breed


David Blenkinsop

Nov 11, 1996

On Nov 6, 1996, I wrote:

"pretty soon the world is going to be loaded with miniaturized spy-bots, with
nothing that anyone can do about it except maybe to doggedly try to enforce
some politeness regulations...?"

JoSH responded:

>...if spybots are unstoppable, so are death-bots, so there'll be plenty of
>incentive to protect your area.

It gradually dawns on me that this loss-of-privacy/nanosecurity
issue may be more than just a radical suggestion, or a matter of
keeping peepers out with special "peeper alarm" grids or whatever. It's
relatively easy, for me, to present something by way of exaggeration or
by oversimplifying certain matters; however, I think that there is
something here that may be relevant despite complications. Besides
privacy, the only thing at stake here is *world survival*, so perhaps I
should take another go at this, but with a little more realism about the
real-world complications, this time around!

To begin with, let's consider the need for security in an era of
fully capable nano-manufacturing. Scenarios of people building
dangerous replicators, or building nuclear bombs in the basement, have
been brought up often enough, but you don't often see any definite
ideas as to how this could be prevented over the long haul. People talk
of "active shields", which might act as an anti-replicator "immune
system", but how would this apply to the nukes in the basement? Are we
saying that we are going to saturate the environment with this
omnipresent "immune system", one that can tell a nuke or other
dangerous device from anything else that a person might manufacture?
Would such an "immune system" be equipped with techno-sabotage
capability to automatically defuse any bombs (hats off to Chris
Phoenix's really far-out "ultimate sabotage" posting on this one)? The
usual notion, that active shields could ultimately take care of almost
anything, just doesn't quite scan, unless you're willing to make them
quite intelligent and trust them to confidentially spy almost
everywhere! Even if a "shield" mechanism really could do all that, we'd
never really be sure that the thing wouldn't blab our business, while
refusing to tell us what we want to know about the neighbor down the
street!

One possible response to this is that general-purpose nano
should really be a very closely held monopoly, such that the vast
majority of individual persons and companies can only get their hands
on quite limited, weapons-incapable factories. Now this is something
that sounds more reasonable, but only up to a point. If the majority of
people are going to be content with factories that are limited in some
way, you just know that some folks are going to want to "hack" those
general purpose factories. If current notions about MNT are at all
correct, then surely the use and handling of these factories is going
to be extremely computer-like? Lots of companies, and individuals too,
are going to have a legitimate interest in hacking the real thing,
while even major corporations will want to trade freely, and hire
whomever they like, so as to get the bugs out and keep coming up with
proper upgrades as they go along. To keep the detailed knowledge,
software, nano-tools, or whatever, such a deep, dark secret as to
confine general nano to a monopoly, would seem almost impossible to do
in a democracy. While consumer versions of nanofactories may well be
limited, I am almost certain that this isn't the total answer to
security, unless the intent is to endlessly delay or stall the
emergence of capable MNT.

Getting back to defusing that atom bomb in the basement, we
really have to give up the privacy that it takes to *hide* a general
purpose factory for long enough to design and build such a thing.
However it may come about, there must be some way, some way backed by
law and custom, to make sure that everyone gets spied on enough to
prevent such things! Surely, we can't trust machines to do this, we can
only trust each other, so that the neighbor down the street or across
the world can blab to the authorities if there is actually something
materially suspicious in one's basement. In saying this, I realize that
there are probably some countries that are "computer phobic"; these
might put up the peeper alarms and refuse to let us "in" for a long
time. The thing is, building international active shields against
computer-phobic countries might just work--perhaps we can let such
countries solve their own internal security problems their own way, if
that is what they are determined to do.

Besides there being a good prospect of spybots on every street
corner, there is another trend that could lead to less privacy, and that
is "ubiquitous computing". The idea here is that computers will get so
small, and so cheap, that we'll carry them with us, stick them on the
wall, or throw them away at will. Proponents depict how handy it'll be
when we carry our own little computer "tab" so that people in the
neighborhood will know where we are and can easily check that it isn't
a burglar coming up the front walk! If you've got a house full of
ubiquitous computers and are encouraged to relay various kinds of
information to your boss or to the government, how long before we have
regular scanning of one's own personal home--including the basement? If
there is ever a law to require "home auditing" we might insist on
*overkill*, i.e. that it apply to *everyone* everywhere, including any
personal space that one might have protected with peeper alarms.

Compared to the simpler "clear as glass world" notion, this one
is a lot more decorous--for example, there would be no need to let
snoopers in to listen to the family singing around the piano! At the
same time, regular auditing might very well make sure that anyone on
the Net can find out the exact location of your piano, the location of
every other stick of furniture in your house, plus the location of
every brick in the walls, and what is in them. Companies trading in
nanotech would practically always have all of their doings known to the
world at large, with refinements, upgrades, and so on, unhindered by
any effort at secrecy in the basics of nano-construction. Efforts to
build active shields against more *secretive* societies should benefit
greatly, and perhaps bring us a future that we can actually survive.
Call it the "clear as audited" future, a future where privacy is
deliberately, legally *minimized*.

David Blenkinsop
Saskatchewan, Canada
bl...@sk.sympatico.ca

[I think you have the wrong idea of what an "active shield" is supposed
to be. Your notion seems to be like trying to prevent crime by having
a team of experts monitor every mother's rearing of every child all the
time to be sure no one grows up to be a criminal. The original notion
was more like a police force, that reacts when a crime occurs and
chases the criminals who have revealed themselves by their actions.
The reason the second notion is much more workable is that (a) you
don't have to control everything everywhere, but only what you want
to protect; and (b) if you do try to control everything everywhere,
plenty of people are going to react as if you're simply trying to
conquer the world for your own selfish purposes, and attack you in
response.
--JoSH]


Will Ware

Nov 11, 1996

Chris J. Phoenix (cpho...@Xenon.Stanford.EDU) wrote:
: It's very simple: the first person to achieve a mature nanotechnology
: wipes out all other technical capability in the world...

This idea just hadn't occurred to me at all. But as JoSH mentions, somebody
somewhere is likely to try it, with very much less benign motives. It
would be great if there were some saintly person who could be trusted to
carry out this plan, arriving somehow at a safe, secure world; though I think
it would need to be a world where nanotechnology was eventually
released into everybody's hands, since limiting it is either unstable, or
puts too much demand on the already-strained saintliness of Humanity's
Alleged Benefactor.

Interesting thought experiment, but JoSH's concerns are very well founded
(and bear repetition). Where are you going to find this saint, and how will
you test their imperviousness to corruption?

: [I think it's important to point out why this is a very nutty idea.
: First of all, most of those who are so completely bereft of any
: moral sense as to be able to contemplate doing it, would simply
: kill all the people instead of disabling the machines. It's much
: easier to do...
: ...each power will assume it's whoever their current worst enemy is and
: start a [nuclear] free-for-all (and there'll be more mushrooms tonight).
: ...there are lots of people in the world who would like to do something
: like this just to rule the world and make everyone their slaves,
: or to convert them to his religion, or out of blind rage...
: ...When nanotech gets to the stage where such things are
: possible, someone *will* try it, and almost certainly with evil
: intent. In fact, I doubt they will wait till it's possible, and
: there will be all sorts of abortive early attempts that will serve
: as warnings and practice for defense, cleanup, and so forth.
: --JoSH]

Thank goodness for those early mistakes. In addition to giving us practice
in countering this stuff, they'll also help to convince decision-makers
of the reality of this kind of threat.

SunCat

Nov 11, 1996

>Will Ware wrote:

[...]

>> It's true that what we've learned from nuclear weapons can't teach us
>> everything we need to know, but it's a very useful place to start. E.g. it's
>> supposed to be fairly easy to become a nuclear terrorist if you have certain
>> pieces of information....
>>
>> Somebody must have once upon a time sat down and figured out which info could
>> be safely published and which should be controlled, ...


Allow me to pick a nit. The reason we have not seen this is that nukes
require a unique material -- bomb-grade fissionables. The information can
be had or discovered independently. In a nanotech era, all is information
(and time) after general assemblers are available.

Scientific ignorance does not really serve national security unless the
nation is a feudalism. Glenn T. Seaborg said that if our educational system
were imposed on us (the US) by a foreign power we would regard it as an act
of war.

As for spybots, one can always capture one and lie to it. If the "stories"
do not corroborate one another, you have at least confused your opponents.

Nanotech era defense seems to rely much on intelligence, including
artificial intelligence.


SunCat>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>mailto:kam...@ibm.cl.msu.edu
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>icbmto:42d44m13sN, 84d29m2sW
"World War III will be a guerilla information war with no distinction
between civilian and military participation." --Marshall McLuhan

Jeffrey Soreff

Nov 11, 1996

Will Ware writes:
>It's true that what we've learned from nuclear weapons can't teach us
>everything we need to know, but it's a very useful place to start. E.g. it's
>supposed to be fairly easy to become a nuclear terrorist if you have certain
>pieces of information. We seem to do an adequate job of keeping that info
>under wraps, while allowing random citizens to know quite a bit about more

It really isn't the restriction on *information* that prevents nuclear
terrorist incidents, it's the restriction on *materials*. If you can't get
a critical mass of fissionable isotopes together, it doesn't matter how
much you know. You still can't build a fission or fission-fusion bomb.
(Pure fusion bombs are potentially another story, but, as far as I know,
no one has succeeded in building one yet). We have been very lucky in that
physical laws put *two* barriers in the path of someone who wishes to build
a nuclear bomb. In the first place, the fissionable materials are inherently
rare, costly materials. Pu-239 is a synthetic isotope, and U-235 is very
hard to separate from natural uranium. In the second place, a would-be bomb
builder can't economize on the use of these isotopes. You must have a critical
mass (with some minor adjustments, depending on neutron reflectors, extra
compression, and so forth), requiring *kilograms* of fissionables, or you can't
build a bomb. Small terrorist groups can't start from natural uranium, with
current technology, and build bombs, regardless of how well informed they are.

-Jeffrey Soreff
sor...@vnet.ibm.com
standard disclaimer: I do not speak for my employer

David Stoner

Nov 11, 1996

Here's an acid test for any defensive system. Suppose some madman drops
a capsule overboard in the center of the Pacific Ocean, loaded with
destructive nanobots, capable of being free-roaming, evolving, and
replicating themselves. The capsule sinks to the ocean floor and breaks
open, disgorging the nanobots, which go forth, are fruitful and
multiply. Masked by the ocean depths, they blanket 71% of the surface
of the planet before anyone notices. Given such a head start, they can
overwhelm just about anything by sheer weight of numbers.
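
Some rough numbers show how little warning time that scenario allows. Every
figure below is an assumption picked only to illustrate the shape of
exponential replication, not an engineering estimate:

    import math

    # Back-of-envelope timing for the ocean scenario (all inputs assumed).
    seed_mass_kg = 1e-3       # payload of the capsule
    ocean_area_m2 = 3.6e14    # ~71% of Earth's surface
    blanket_kg_per_m2 = 0.1   # density of the finished "blanket"
    doubling_time_s = 1e5     # ~28 hours per doubling (energy-starved depths)

    target_mass_kg = ocean_area_m2 * blanket_kg_per_m2    # 3.6e13 kg
    doublings = math.log2(target_mass_kg / seed_mass_kg)  # ~55
    print(f"{doublings:.0f} doublings, "
          f"{doublings * doubling_time_s / 86400:.0f} days")  # ~55, ~64 days

Even at a sluggish doubling rate, the jump from an invisible seed to
planetary scale takes on the order of weeks, and half of the final mass
appears in the last doubling.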

A defensive system that would effectively counter that possibility
should be considered pretty good.

Now suppose there are two madmen...

-David

[There's not that much energy available at the depths, but one should
certainly think about variations involving deserts or the ocean's
surface (where there's sunlight) or forests where there's fuel and
oxygen.
--JoSH]


nils...@aol.com

Nov 11, 1996

In article <55q83b$9...@foglet.rutgers.edu>, kam...@ibm.cl.msu.edu
(SunCat) writes:

>The option of making someone as rich as they want to be only works on the
>rational, but I think it's worth a try. ...

It all depends on one's objective. It is impossible to use any kind of
"reason" to tell somebody what they should want. You might use reason to
explain to somebody how they might achieve their goals, or to dissuade
them from certain actions that are likely to have adverse consequences to
them. In the end, what somebody actually wants is not something that can
be rationally determined, it is not "irrational" but "arational" or
"extrarational". This is true for anybody, including thee and me.

To the subject closer at hand: It may be possible to make everybody rich
in a material sense (this is NOT a zero-sum game, as so many would have
it). To a very large extent, in the industrialized world this has already
happened. The poorest homeless person in the U.S. probably lives better in
a material sense than paleolithic man. (He may well be crazy, but that is
not part of material wealth.)

There is, however, another facet of "richness" where the game is not money
(except incidentally, and sometimes to keep score, as Maverick would say),
but POWER. Obviously, if your desire is to have 10,000 slaves or concubines
or 10,000,000 rejoicing and adoring subjects, this is at best a zero-sum
(and very likely a negative-sum) game.

Nanotech (or any other wealth-creating technology) can assuage richness
seekers of the first kind. As with any other military-potential
technology, it is incumbent on all of us to try to make it harder to use
for richness seekers of the second kind. (The Dear Leader in North Korea
pops into my mind, but you can substitute your own favorite potentate.)

Regards

Nils Andersson


David Stoner

Nov 11, 1996

Will Ware (ww...@world.std.com) wrote:

: ...
: WHAT ABOUT DEFENSE?

: Deterrence and containment are both provisional and ultimately unsafe
: policies. The only remaining policy is defense. ...

I agree completely.

: But one question is of crucial importance: is an effective defense possible
: at all? If it is, then it must be investigated, and when the time comes, it
: must be employed. ...

We have to hope it's possible. As you say, it's very hard to reason
about it, because of the enormous range and complexity of things that may
have to be defended against.

I'm glad you mentioned the human immune system. That's what I'd take as
a model of how a defensive system might work.

I'm also glad you brought up the analogy with nuclear weapons. Not to
compare oneself with Einstein, but we face, or will face, the same moral
dilemma Einstein faced in recommending that the United States develop a
nuclear bomb. We will have to develop capabilities that perhaps we
would rather not have, for fear that a Hitler will develop them first.
Only this time there may be thousands of Hitlers, working in their
garages and basements.

A couple of generalities may be ventured:

(1) Whatever defensive system is invented will probably have to be
deployed in advance, and we will have to hope it will be adequate at
least to slow down whatever adversary it meets, or to spread an alarm
while the threat is still small. Because much of the power of nanotech
lies in exponential replication, it is extremely important to detect an
enemy early, and if possible to have countermeasures already in place,
as ubiquitous as the very dust in the air.

(2) The defensive system will probably have to have some of the same
capabilities most feared in nanotech gone bad: free-roaming, evolving
replicators - because we cannot assume the adversary will not have those
capabilities. This means that the defensive system may itself become
the worst danger. A comparison with nuclear weapons is apt here.

: It might be worthwhile to invent a game wherein people explore as many
: different avenues for attack, defense, and deterrence as possible,...

What first pops into my mind is Core Wars. Isn't there a newsgroup about that?

-David


David Blenkinsop

Nov 11, 1996

On Nov 7, 1996, Will Ware wrote:

>David Blenkinsop wrote:
> : ...[David] Brin makes several points to the effect that "nothing
> : will protect or save privacy. It's over." In other words, pretty soon the
> : world is going to be loaded with miniaturized spy-bots...

>Will Ware responded:

> This is an interesting approach. I generally regard the arrival of nanotech
> as more or less inevitable for reasons of economic incentive, and perhaps the
> same is true of the end of privacy.

> This is an interesting and perhaps viable approach to limiting the development
> of offensive weapons, but only if your surveillance is good enough to detect
> every possible development effort. Maybe the spybots can't get everywhere.

In addition to Will Ware's comments, JoSH's editorial on my/David Brin's
"clear as glass" idea is interesting:

>[Unwelcome spybots would have a hard time if you used Utility Fog, or
> had your own asteroid. ... Remember that if spybots are unstoppable, so are
> death-bots, so there'll be plenty of incentive to protect your area.

Clearly, to say that spybots would be guaranteed to see everywhere (Will
Ware) or that they would be unstoppable (JoSH) would be an exaggeration.
Apparently both David Brin and I are guilty of this--"clear as glass world"
is my own phrase for Brin's idea of "no privacy" and both of these comments
are an exaggeration of what I think is really intended here. What perhaps
isn't an exaggeration is the basic idea that we might have to give up
considerable privacy if we are to survive into an era where anyone can make
practically anything, in principle.

Tim Freeman

Nov 11, 1996

Will Ware (ww...@world.std.com) wrote:
: It's true that what we've learned from nuclear weapons can't teach us
: everything we need to know, but it's a very useful place to start. E.g. it's
: supposed to be fairly easy to become a nuclear terrorist if you have certain
: pieces of information. We seem to do an adequate job of keeping that info
: under wraps...

I thought nuclear terrorism was limited by limiting access to U-235
and plutonium, and by the technical problem of separating the needed
U-235 from the relatively common U-238.

I think there was an article a few years ago in some science fiction
magazine about how to build a basement nuclear bomb. It was close
enough to right to cause a bit of a stir. But still no basement
nuclear bombs, or even failed attempts at them. Can anyone fill in
the reference? An Alta Vista search turned up
http://www.ratical.com/radiation/inetSeries/NthrtsNnwo.txt, which
mentions:

9. "Bombs in the Basement," "Newsweek," July 11, 1988, pp. 42-45.

which might reference the article I have in mind.

Nanotech terrorism cannot be limited by such a scheme, since there's
no reason to expect novel elements to be necessary.

Maybe the tools to assemble things are what you would want to classify
as a munition? This is essentially the same as the scenario described
in Engines of Creation.

Tim Freeman

