
10/21/94 Foresight Gathering

Steven C. Vetter

Nov 20, 1994, 11:40:13 PM

On October 21, 1994, the Foresight Institute sponsored its Senior
Associate Gathering, a Nanotechnology (NT) workshop, with over 70
Senior Associates attending, and talks by Eric Drexler, Ralph Merkle,
and other leaders in NT research. Preceding and following this
meeting were several days of less formal discussions between various
Senior Associates of the Foresight family of organizations, and other
researchers. Following is a summary of the formal day of
presentations. (I am greatly indebted to Edward Neihaus for providing
copious notes taken throughout the meeting, without which this account
would be less detailed, to say the least. However, I take full
responsibility, and apologize in advance, for any misrepresentations.)


BRIEF SUMMARY

The conference began with a series of general session presentations.
Eric Drexler talked about his current thoughts on when, why, and how
we will get NT. He showed several new illustrations that are destined
for a new book he is preparing. Steve Vetter spoke about Molecular
Manufacturing Enterprises, Inc. (MMEI), a seed capital firm
specializing in NT. Ralph Merkle talked about recent progress in
getting the word out, in particular about Internet and the World Wide
Web. Ted Kaehler talked about protein folding, which is an important
path to early (self-assembling) NT precursor systems. Jim Bennett
reviewed a wide variety of export control systems and issues and made
recommendations for the sorts of controls that should be applied to
NT.

The latter part of the meeting then split into two parallel tracks:
computer security and personal action. This summary covers only the
computer security track. Computer security applies to NT in several
ways: as a means of placing export controls while minimizing their
negative impact on progress, as a dynamic issue with the expected
explosive growth in computational resources, and as a model for NT
security itself.


SYNOPSIS

The meeting got off on a humorous note. To make the overhead
projections less distorted, we placed the projector on a booster chair
(from the restaurant hosting the meeting). This caused someone to
comment that NT was still in its infancy, thus requiring the booster
chair.


ERIC DREXLER

Eric Drexler started off the day covering a wide variety of issues and
stimulating some excellent discussions. He announced that there would
be another Feynman Prize of at least $5,000 awarded at the next
nanotechnology conference in November of 1995. The intent is for this
to be a regular prize awarded at each biennial conference to the
researcher making the greatest contribution in advancing us towards
manufacturing with atomic precision.

Eric showed several new and impressive illustrations. One did a good
job of showing displacement probabilities as a function of stiffness.
The conclusion is that a stiffness of 10 newtons/m (quite achievable)
gives a vanishingly small chance of connecting with the wrong atom (at
room temperature). Eric pointed out that all of his past
illustrations showed bearings, etc., floating in a vacuum or attached
to smooth objects that do not show the atomic detail. He has
rectified that with an elaborate "system view" diagram, showing
essentially the equivalent of a transmission all at the atomic level.
The diagram had hundreds, if not thousands of atoms, including many of
the familiar bearings, planetary gears, etc., that we have seen in
other contexts.
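
To put a rough number on the stiffness point above, here is a minimal
back-of-the-envelope sketch (my own, not from Eric's slides). It
assumes a simple harmonic restoring force, room temperature, and an
illustrative misplacement threshold of 0.15 nm; with the quoted
10 N/m stiffness, the Boltzmann factor alone puts the error
probability around 10^-12:

    import math

    def misplacement_probability(stiffness, threshold, temperature=300.0):
        # Boltzmann estimate of the chance that thermal motion displaces a
        # harmonically constrained tip beyond `threshold` meters:
        #   P ~ exp(-E / kT), with E = 1/2 * stiffness * threshold**2.
        # Prefactors are ignored, so this is only an order-of-magnitude sketch.
        k_boltzmann = 1.380649e-23  # J/K
        energy = 0.5 * stiffness * threshold ** 2
        return math.exp(-energy / (k_boltzmann * temperature))

    # 10 N/m stiffness (from the talk), assumed 0.15 nm error threshold:
    print(misplacement_probability(10.0, 0.15e-9))  # roughly 1e-12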

Eric talked about the convergent assembly process for assembling large
objects with atomic precision. Basically, the idea is that if you
picture some sort of robot arm putting together things the size of a
bread box, it takes on the order of one second to do that. If you
assume that each breadbox-sized thing is built from two objects of
one-half the size, they can be assembled at least as fast. You follow
this reasoning backwards until you have, say, two to the hundredth
power pieces (small molecules) being assembled in parallel. In this
way, you can assemble virtually anything in roughly 100 seconds. This
oversimplifies things a great deal, but it has at least two
conservative aspects: you could have a branching factor of greater
than two, reducing the number of steps needed, and the earlier stages
would almost certainly go faster than one operation per second. Eric
imagines this process physically as something like a cube with a
bunch of niches in its sides and a robot arm inside that takes things
from the niches and assembles them into something larger. Each of
these niches is, of course, another cube with niches in its sides and
a smaller robot arm grabbing smaller objects from these niches. This
continues until you get to small enough objects that you would use
some of the other processes he discusses in his book Nanosystems.
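
For readers who like to see the arithmetic, here is a tiny sketch of
the doubling argument (the defaults are my own, following the
description above): the part count grows exponentially with the
number of stages, while the total time grows only linearly.

    def convergent_assembly(stages, seconds_per_join=1.0, branching=2):
        # Each stage joins `branching` sub-parts into one larger part, and no
        # stage is assumed to be slower than `seconds_per_join`.  The final
        # object is built from branching**stages primitive pieces, yet the
        # total time is bounded by stages * seconds_per_join.
        pieces = branching ** stages
        total_time = stages * seconds_per_join
        return pieces, total_time

    pieces, seconds = convergent_assembly(100)
    print(f"{pieces:.2e} pieces assembled in about {seconds:.0f} seconds")
    # ~1.27e+30 pieces in about 100 seconds, matching the estimate above.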

Eric graphically showed an interesting analysis of the advantages of
scaling. He started with one robot arm inside a cubic meter and
listed the friction, power consumption, productivity, etc. (choosing
one arbitrary unit for each). He then scaled it down by a factor of
two, resulting in eight arms inside the same space, and showed how these
various measures scale. His next illustration was "at a slightly
smaller scale", he said as he showed a gray cube (of roughly
micron-sized assemblers). Because of the power dissipation problems
at this point, he slowed down the operations per second, ending up
with a cubic meter of assemblers that is vastly superior to the
original cubic meter in every way. For example, he calculates that a
0.05 cubic meter manufacturing system of about 1 kg mass could in each
hour convert 1.6 kg of feedstock solution and 0.9 kg of atmospheric
oxygen into 1.5 kg of high-purity water and 1 kg of other products.
In addition, this system would continuously produce 3.6 kW surplus
power, and 1.1 kW of waste heat.
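
The scaling argument can be summarized with standard scaling laws;
the sketch below is my reconstruction, not Eric's actual slide. It
assumes constant tip speed (so cycle frequency rises as the arm
shrinks) and simple geometric packing of arms into a fixed volume.

    def scale_measures(s):
        # `s` is the linear scale factor (s = 0.5 means the arm is half as big).
        # Assumes constant tip speed and geometric packing; these are textbook
        # scaling relations, not the exact figures from the talk.
        return {
            "arms per unit volume": s ** -3,           # smaller arms, more of them
            "operations per second per arm": s ** -1,  # shorter strokes, same speed
            "operations per second per unit volume": s ** -4,
        }

    for name, factor in scale_measures(0.5).items():  # halving the arm size
        print(f"{name}: x{factor:g}")
    # arms per unit volume: x8  (the eight arms in the same space, as above)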

Eric discussed the Foresight Institute's goals and several points of
general policy and philosophy. Foresight is concentrating on
"requirements for assured, useful, safe emergence of NT." This means
developing a broad understanding of the essentials of sound policy and
of the technologies that lead to NT. He named two things that will
make NT come into being faster: education and the Internet. Foresight
is working in both of these areas.

On the issue of how long it will take, he talked about two
"conservative" estimates. If you want to rely on it happening, it is
conservative to plan on twenty years. If you are concerned the
competition is going to get it first, it is conservative to plan on
ten years.

In answer to the often-asked question "will it happen in my lifetime?"
Eric described the singularity concept, already discussed at length on
the Internet. We had bacteria billions of years before we even got to
"genuine sex." This was followed by mammals for hundreds of millions
of years. He continued this list of powers of ten, up to "organized
technology development" during the last 100 years or so. A possible
conclusion is that we are on a super-exponential acceleration that
will lead to a super-abrupt "singularity" of sorts. We aren't quite
there yet, but Eric's answer to anyone who thinks we are still 100
years away is, "What do you think researchers will be doing between
2035 and 2045 that will be so difficult that it will leave more
decades of work to reach NT?" He then asked how many people in the
audience had children under the age of five. He pointed out that they
were likely to be going through puberty and the singularity at the
same time (scary thought).

In answer to the skeptics who recalled the similar sounding claims of
artificial intelligence and fusion efforts, Eric pointed out that
those two fields had fundamental scientific questions to be answered,
such as "what is intelligence anyway?" NT has no such fundamental
questions outstanding. It still needs a lot of engineering work, but
there aren't any gaping holes in our knowledge of what the target may
look like.

OK, so how do we reach the target of full-blown NT? Eric suggests
that a good intermediate goal would be a replicating assembler with on
the order of 100 moving parts. He pointed out that we are already
making some clearly leading-to-NT devices without a coordinated
effort. If we coordinated our efforts, progress toward molecular
manufacturing would be faster yet. He pointed out that intermediate
goals would result in some useful products (e.g., new catalysts or
drugs) even before the advent of full-fledged NT. These intermediate
products would provide enough economic incentive to encourage progress
towards molecular manufacturing even among those people and
institutions that do not look far enough ahead to see the benefits of
the overall goal. He also touched on economic impacts, noting that
during the '40s and '50s people believed technology transitions
occurred slowly, but that those arguments "turn out not to apply to
Nanotechnology", so there are reasons to think that the transition
will be relatively fast.

His view is that if general NT is developed in the U.S. first, it will
start in academia, be watched closely by lots of people concerned with
national security, and probably happen under reasonable controls,
which is desirable. If it happens in a competitive way among several
countries, the dynamics could be very risky. Any multinational
approach would be better if done cooperatively. In many ways, Japan
is leading right now and has huge strengths. In general,
Eric seems to be leaning towards open cooperation with other countries
to both hasten the benefits, and to make sure we are on the "winning"
side.

On the issue of complexity, Eric stated that diamond window panes look
really simple. Even single-stage-to-orbit vehicles look easy -- we
already know the area well. On the other hand, cell repair machines
look very much harder. We know it is possible because evolution
proves it, but we don't know the details of how we might do it with
NT. One possible solution might be to harness "artificial evolution"
to figure it out. This, of course, would have to be approached very
carefully.


STEVE VETTER

Next, Steve Vetter spoke about Molecular Manufacturing Enterprises,
Inc. (MMEI), a new seed capital firm devoted to advancing the state of
the art of molecular manufacturing. He pointed out that like other
senior associates (SA's), he donates money to the Foresight family of
organizations. This "family" currently includes the Foresight
Institute (FI), the Institute for Molecular Manufacturing (IMM), and
the Center for Constitutional Issues in Technology (CCIT). This money
is important, but most senior associates want to do more yet. One
possibility is for SA's to seek out researchers that have the
capabilities to advance NT, but require education about NT,
entrepreneurial skills, contacts, investments, or other help to steer
their work more directly towards our goals. In this way, we can start
a lot of small-scale enterprises that would advance the overall field
of NT, while pursuing intermediate goals. While it is clearly too
early to "make a quick buck" from NT, it is not too early to stimulate
developments directly leading to significant advances -- MMEI's
primary objective. As Eric remarked, there are good possibilities for
many useful products along the way to full NT development.

MMEI has many of the resources (business and technical expertise,
money, contacts, etc.) to encourage advancement of NT as quickly and
efficiently as possible. MMEI seeks proposals that meet the following
requirements:

1. Modest capital requirements (so MMEI's resources can cover a broad
range of areas).
2. Must directly advance the art of molecular manufacturing.
3. Submitted by a for-profit, non-profit, corporate, private, or educational
party.
4. Ideally, include some prospect of making a return on investment.

MMEI was incorporated and is currently privately owned by Steve Vetter
(software engineer and entrepreneur), Scott MacLaren (post-doc
material science researcher and consultant at the University of
Illinois), and Tanya Sienko (researcher with the technical policy arm
of the Japanese government). In addition, MMEI is advised by a broad
spectrum of business and technical advisors, including Ralph Merkle
(head of the Computational Nanotechnology Project at Xerox PARC), and
Roald Hoffmann (Professor at Cornell and 1981 Nobel laureate in
chemistry).

This presentation stimulated a lot of subsequent discussions and
interesting possibilities. MMEI provided a detailed brochure to those
interested in participating in its programs.


RALPH MERKLE

After a break, Ralph Merkle spoke about Internet and how it is helping
advance NT. (He also cited the September 19th issue of Business Week,
which discusses computational modeling, using NT as one of the
examples.) Preceding his discussion of Internet, however, he talked a
bit about recent advances in molecular modeling and construction
tools.

Ralph briefly reviewed the hydrogen abstraction tool designed and
modeled by Charles Musgrave (last year's Feynman Prize winner),
mentioned a useful force field model recently developed at the Naval
Research Lab (NRL). Using these as starting points, we now think that
we can build almost any rigid hydrocarbon structure with just four
tools: hydrogen abstraction, hydrogen deposition, carbene insertion,
and dimer deposition (two carbon atoms inserted at a time). These
tools are being designed and modeled now. One possible negative
discovery from this work is that it appears we may need an assembler
with a greater range of motion than that provided by a simple Stewart
platform. In any case, this is getting to be an extremely exciting
area of investigation.

Ralph couldn't say enough good things about the Internet. He talked about how
fast it was growing (hundreds of millions of hosts expected by 2000),
and how international it was becoming. He talked about sci.nanotech,
and the sudden flood of World Wide Web (WWW) pages. He talked about
the hypertext nature of WWW, showed us several examples of NT-related
WWW pages, and encouraged us all to develop pages of our own and
interlink them with the others. He provided the following list of
NT-related URL's for people with WWW access to check out:
ftp://parcftp.xerox.com/pub/merkle/merklesHomePage.html
ftp://parcftp.xerox.com/pub/nano/nano.html
ftp://parcftp.xerox.com/pub/nano/reversible.html
ftp://parcftp.xerox.com/pub/nano/feynmanPrize.html
ftp://parcftp.xerox.com/pub/nano/nano4.html
http://www.portal.com/~carols/mass.html
http://planchet.rutgers.edu
ftp://ftp.cs.rmit.oz.au/pub/rmit/nano
http://www.watson.ibm.com/journal/rd.html
http://planchet.rutgers.edu/updates.html
http://www.ioppublishing.com
http://www.gpl.net/mmsg/mmsg.html
http://nanothinc.com/
http://www.wired.com/Etext/1.6/features/nanotech.html


TED KAEHLER

Next, Ted Kaehler spoke about protein folding. His talk was based on
work by Dr. Ken Dill of the University of California at San Francisco.
Ted pointed out that we know how to make proteins today, but are poor
at designing them. This is in contrast to Ralph's bearings, which we
can design but not make. He talked about self-assembly and
illustrated it with some experimental results on the T4 virus and
E. coli's flagellar motor.

It now appears that the main force driving protein folding in nature
is the differing water affinities of the various amino acids. The
non-polar acids are typically hydrophobic (avoid water), and the polar
acids are typically hydrophilic (attracted to water). In general, the
protein is looking for a low-energy state. It wants to fold into a
ball with the hydrophilic parts on the outside and the hydrophobic
parts on the inside. There are other factors governing the folding
process, such as molecular shape and charge distributions, but the
overwhelming force appears to be the affinity for water.

This folding process occurs in hierarchical steps. Local parts of the
amino-acid chain start to fold into a few common shapes: alpha helix,
beta barrel, beta sheet, etc. These pieces then fold further,
typically ending in roughly a ball. The relative positions of the
interior acids typically don't matter much, but the positions of the
outer acids must be just right for the protein to do its work. A
random string of amino acids has an astronomical number of ways to fold. The
challenge for evolution (or for engineering) is to make a protein that
has a good chance of folding in a particular way.

It appears that the way nature has done this is to use just the right
amount of hydrophobic groups. In general, hydrophobic groups stick
together like glue. Therefore, the more hydrophobic acids in the
chain, the more glue, and the more stability. The problem is, too
much glue can make the protein too stable in too many possible
configurations. The overall rule seems to be to use just enough glue
to hold the protein together in one configuration, but not enough to
hold it together in any other configuration.
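
Ken Dill's work on folding is closely associated with the simple "HP"
lattice model, and a minimal sketch of it (my own illustration, not
something presented in the talk) makes the "glue" argument concrete:
H monomers are hydrophobic, P monomers are polar, and the energy
simply counts H-H contacts that are not chain neighbors.
Brute-forcing every fold of a short chain shows how the choice of
sequence changes the number of folds that reach the lowest energy.

    from itertools import product

    # Minimal 2-D HP lattice sketch (after Ken Dill's model): H = hydrophobic,
    # P = polar.  Energy is -1 for every pair of H monomers sitting on adjacent
    # lattice sites without being neighbors in the chain -- the "glue" above.
    # The square lattice and the brute-force search are illustrative choices.

    MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

    def fold_energy(sequence, directions):
        # Return the contact energy of one fold, or None if the walk collides
        # with itself (folds must be self-avoiding).
        coords = [(0, 0)]
        for d in directions:
            dx, dy = MOVES[d]
            x, y = coords[-1]
            step = (x + dx, y + dy)
            if step in coords:
                return None
            coords.append(step)
        energy = 0
        for i in range(len(coords)):
            for j in range(i + 2, len(coords)):  # skip bonded chain neighbors
                xi, yi = coords[i]
                xj, yj = coords[j]
                if sequence[i] == sequence[j] == "H" and abs(xi - xj) + abs(yi - yj) == 1:
                    energy -= 1
        return energy

    def ground_states(sequence):
        # Brute-force every fold of a short chain; return the lowest energy
        # and how many folds reach it (symmetry-related folds counted separately).
        best, count = 0, 0
        for dirs in product("UDLR", repeat=len(sequence) - 1):
            e = fold_energy(sequence, dirs)
            if e is None:
                continue
            if e < best:
                best, count = e, 1
            elif e == best:
                count += 1
        return best, count

    # Compare ground-state energy and degeneracy for different H/P patterns:
    for seq in ("HPPHPPHH", "HHHHHHHH", "PPPPPPPP"):
        print(seq, ground_states(seq))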

Ted pointed out that we may be able to make self-assembling
structures from building blocks other than amino acids, as long as
there are at least two kinds with differing affinities for a given
solvent and we can control the order in which they are placed in a
string.

He went on to talk about what we know about proteins in people.
Humans have roughly 100,000 types of proteins. Each cell requires
about 4,000 to keep it running. You also need to know each protein's
three-dimensional structure; only a few of these are known today. He talked
briefly about the human genome project and about DNA tags, which
provide a set of landmarks for determining placement of each gene. He
mentioned a freezer in Colorado that contains about half of all human
DNA tags (patents being applied for). He emphasized that the gene
itself is in many ways more important than where it is in the genome,
so these tags are important because they allow you to get at the gene
without even having to know where it is.

Ted went on to discuss some of his own work in NT, providing more
examples of what other people can do to directly contribute. In 1990,
he took six weeks off (from Apple Computer, Inc.) to work at the
Foresight Institute. He started two NT discussion groups: "The
Assembler Multitude" and a "Nanosystems study group." (At this point
someone mentioned similar groups in the Los Angeles area.) He also
promotes NT to other groups, and reads the literature.

JIM BENNETT

Jim Bennett, president of the newest member of the Foresight family,
the Center for Constitutional Issues in Technology (CCIT), talked
about export controls. (The goal of CCIT is to shape the debate over
NT controls in such a way as to maximize the long-term safety and
benefits of such a powerful technology.) Jim discussed the history,
types, advantages, and disadvantages of export controls. He then went
on to make general recommendations of the sorts of controls that seem
to work best.

Export controls go back at least as far as the Iron Age, when certain
rulers would lame their ironsmiths to keep them from exporting the new
iron technology to potential enemies. Throughout history, a common
pattern emerges: controls slow down development within the country
trying to protect the technology. The more open a country is, the
faster it develops. World War II and
the development of the atomic bomb made security regimes stronger, and
export controls remain fairly strong today.

Export control regimes can be separated on two dimensions: unilateral
versus multinational, and specific items versus generic capabilities.
Typically, control of specific items is unilateral and control of
generic capabilities is multinational. An example of a specific,
unilateral control is the United States and the Stinger hand-held
anti-aircraft missile. Examples of multinational, generic controls
are plutonium separation and long-range missiles.

Typically, specific regimes are not very effective because they
require an impermeable shield against transfer of technology and even
information related to the technology. In addition, political winds
within a single country can shift rapidly. In the Stinger case, we
relaxed our controls by giving the missiles to the Mujahideen in
Afghanistan to ward off the Soviet incursion.

Multilateral attempts to prevent acquisition of generic capabilities
can work for a while if the number of countries with the capability is
small, and they all agree to prevent its spread. Plutonium separation
is known by only three countries, so we may be able to seal it off for
the time being. However, missile technology is known by about 30
countries, so sealing it off is impractical. In these cases, we
usually rely on making the cost of acquisition prohibitive. In either
case, outlaw countries can cooperate to cause problems, such as the
entente between South Africa and Israel of a few years ago.

The three main problems with all export controls are enforcement,
enforcement, and enforcement. If the cost is low enough or the
technology is useful enough, it will get transferred no matter what.
The best that any export control regime can do is buy time. The big
question is how do we put that time to good use? History shows that
controls that are too tight will slow down development, so the time
bought is less productive. The weapons potential of NT is certain to
start a movement towards export controls as we get closer. The bottom
line is to keep the controls as uninhibiting as possible, and to put
the time they buy to good use.

TWO PARALLEL TRACKS

For the remainder of the agenda, we split into two parallel tracks.
One group concentrated on issues of computer security, with emphasis
on advanced computing capabilities afforded by NT. The other group
covered a wide variety of topics, ranging from what you can do
personally to advance NT to whether a vegetarian will eat "meat"
produced by NT. Unfortunately, I was not in that group and I do
not have any detailed notes of what transpired there. (Perhaps
someone who attended that session could provide a more detailed
summary?)


COMPUTER SECURITY

Eric Drexler started the computer security session with a short
presentation about "trusty computing." The idea is to make computers
more responsive and more reliable so that they may become a better
means of communication and more useful in general. He drew several
parallels between software and molecular machinery: they are both
small (bits and molecules), they both make copies of themselves
sometimes, and they are both limited more by problems of design than
by problems of a physical nature.

Xerox Palo Alto Research Center (PARC) and others are talking about
ubiquitous computing, towards which we are heading at breakneck speed.
Many people and organizations have fears, some of them justified:
viruses, for example, and even well-intentioned software with
damaging bugs. A common response on the Internet is for a corporation
to erect firewalls (gateways) in an attempt to ward off problems.

If we can move to truly trustworthy computing in the well-defined
mathematical sense, it will make possible ubiquitous computing that
actually works. It also makes a good model for NT, which will
likewise incorporate computation and extend beyond the direct control
of people.

Next, Mark Miller of Agorics, Inc. (an operating system developer and
consultant company) and Ralph Merkle of Xerox PARC led a discussion
of computer security issues. Mark began by providing some orienting
concepts. First, despite an existence proof (KeyKOS, designed by
Norm Hardy) that a secure operating system can be successful in a
computer science sense, the world market is so dominated by non-secure
systems that most people aren't even aware of just how secure a system
can be. Secondly, cryptography by itself is not enough to secure a
system. At some point, your key is sitting somewhere on the system,
and it can be compromised at that point. A system can contain
cryptography components, but the components do not make a complete
system.

The most common approach to security is that you assume the programs
run by a person are to be trusted as much as you trust the person.
You assume that everything the software does is intended by the person
running it. In fact, you have to worry more about the software than
about the people. You can have a system with a series of security levels
and a given agent (program) can access only certain levels. It is
possible to secure each level by having it in a separate address space
or even in a separate machine. But if a person has access rights and
he or she runs a program that does things not directly intended by
that user, you have compromised security. That is a fundamental
problem with access control lists; they assume away the agency
problem.

A capabilities-based security system is more secure. Each agent has a
list of those things that it has the right to access. Capabilities
cannot be forged. Capabilities can be transferred, but only to agents
that have permission. Use the principle of least authority: give an
agent only the authority it needs to get its assigned task done. This
by itself does not give you a secure system, but it bounds the amount
of damage to the point where it is something you can think about. You
have to be most paranoid about the basic theme from which you derive
security. (This led to one of my favorite quotes of the day: "Forbid
only that which you can make impossible; facilitate that which is
possible; and have the wisdom to explain the difference.")
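
As a toy illustration of the principle of least authority (my own
sketch, not something presented in the session), the capability style
can be pictured as handing an agent an object that embodies exactly
one permission, instead of checking the agent's identity against an
access control list. Python cannot make such objects truly
unforgeable, so this only illustrates the interface discipline.

    class ReadCapability:
        # Token granting read access to exactly one file.  Holding the object
        # *is* the authority; there is no ambient user identity for a
        # misbehaving program to exploit.  (A real capability system, unlike
        # Python, would make this token unforgeable.)
        def __init__(self, path):
            self._path = path

        def read(self):
            with open(self._path) as f:
                return f.read()

    def summarize(doc_cap):
        # An untrusted agent: it can read the one document it was handed, and
        # under the least-authority discipline it is given no way to name or
        # open any other file.
        return doc_cap.read()[:80]

    # The caller decides exactly which authority to delegate, e.g.:
    #   print(summarize(ReadCapability("/tmp/report.txt")))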

Mark talked about one-way gate systems, where the system connects only
to authorized phone numbers and only when the user is around. The
hole here is that the line can be tapped. Encrypting the message
still isn't good enough. (There is a devilish scheme to get around
even this.)

The bottom line of all this is that simplicity is required for two
main reasons: so people can and will use the system, and so there is a
better chance of making the system truly secure. If people can't
understand your security system, they cannot use it and your security
is compromised. Norm Hardy argues for minimalism with an example of
building multiple walls around something. It is easy to fall into the
trap of assuming, for each wall, that another wall has a given problem
covered. Instead, Norm suggests you build your system with one wall
and thoroughly design it in a digital/mathematical sense so that it
covers all possibilities.

What we really want in a security system is to prevent information
from leaking out, prevent bad guys from getting at our information,
and allow the good guys to develop and deploy something decent.

One thing that will drive us towards more secure, capabilities-based
operating systems is electronic cash. Agorics, Inc. and Sun
Microsystems are building a system to do electronic commerce within
companies, including micro-cent and nano-cent transactions, without
any person in the loop. This allows you to charge a person per line
or per word instead of per book for information that they access.
Even the information of who is purchasing what is of value to a
marketing person. This may lead to the equivalent of a discount for
paying with a credit card (i.e., with the buyer's information).
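
A trivial sketch of the per-word billing idea (the price is invented
for illustration; the actual Agorics/Sun design was not described in
that detail): the reader is charged in micro-cents for exactly what
was read, with no person in the loop.

    def metered_charge(words_read, cents_per_word=0.001):
        # Per-word micro-billing: charge for what was actually read rather
        # than for the whole book.  The price is a made-up illustration of
        # the "micro-cent" transactions mentioned above.
        return words_read * cents_per_word

    print(f"{metered_charge(1200):.3f} cents for 1,200 words")  # 1.200 cents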

The issue of electronic cash being anonymous versus traceable was
discussed. Sometimes there is value in buying anonymously (like when
you don't want a marketing person to have your information). But
there is a problem with anonymous cash. Ralph gave an example of
someone going around threatening to blow up computer scientists unless
they provide him with a certain amount of anonymous cash. A possible
solution is to mark bills, but that requires the collusion of a
bank-like agent. However, if there is no bank, it may not be possible
to trace.

Cryptographic keys and how to protect them was the next subject
covered. With classical computers, key-cracking can be defeated by
simply using larger keys. However, the possibility of quantum
computers having the power to factor truly huge numbers may cause a
problem with this. Ralph was an innovator in a generic approach to
protecting cryptographic systems. The basic idea is to produce a
cryptographic system with a "dial on the side" that you can set to
increasing levels of difficulty. You then offer a series of prizes to
the first person to break each level of the system. As each prize is
claimed, you suggest to end users that they move to a higher level of
the system. When someone claims a prize, you know they are getting
closer to breaking the level in use, so you raise the level in use
before it gets compromised. (By spacing the levels sufficiently far
apart, you can virtually ensure that someone will
claim the prize before anyone can break the next level.) For example,
Ralph developed a cryptographic system and got Xerox to sponsor a
prize for breaking each successive level of the system. Eventually
someone claimed the prize for the "two-pass" version, but the
"four-pass" version prize has yet to be claimed. By having thousands
of minds attempting to break the system, you are much more likely to
find any weaknesses in the system. You now have a fairly reliable
pressure gauge for your security system. You have a good chance of
finding out that someone is getting close to breaking it before it is
broken.
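
One way to picture the "dial on the side" and the prize levels as a
pressure gauge (my framing, not Ralph's): keep a list of levels,
record the highest level whose prize has been claimed, and always
deploy with a safety margin above it. The level values and the margin
below are illustrative; the talk's example was the two-pass and
four-pass versions of Ralph's system.

    def recommended_level(levels, highest_broken_index, safety_margin=2):
        # Pressure-gauge policy for a parameterized cryptosystem: deploy at
        # least `safety_margin` levels above the highest level whose
        # break-it prize has been claimed.
        target = highest_broken_index + safety_margin
        if target >= len(levels):
            raise ValueError("no margin left -- time for a new system")
        return levels[target]

    # Levels might be the number of passes of the cipher (illustrative values):
    passes = [2, 4, 8, 16, 32]
    print(recommended_level(passes, highest_broken_index=0))
    # the two-pass version has been broken, so deploy the 8-pass version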

So, how does all this relate to NT? The hypothesis is that NT will be
developed by a well-behaved group. We want to freely share
information among the members of that group while keeping those who
are not well-behaved out for as long as possible. We can use systems
that are outside of NT, such as reputational sources, soft deterrents,
ethics, Underwriters' labs, etc. We also can use a market-based
approach and post an award for NT breakthroughs, or for compromising
lower levels of security.

Another way security applies to NT is to make sure that the technology
itself does not get out of control. Ways of ensuring this include
putting the device (or a model of it) inside a test tube or a
computer simulation of a real environment. If it breaks out of or damages the
simulated environment, we don't place it in the real environment.
Similarly, if artificial intelligence progresses to the point of
having intelligences on the order of humans, we could simulate a bunch
of agents running around trying to break things or trying to find
holes in the system. The computational costs of doing this are
enormous by present-day standards, but with NT it may be well within
reach.

We could even use genetic algorithms to develop more secure and yet
more capable NT systems. It may even be possible to turn evolution
loose on NT and make it do some of the work. This is scary, but perhaps
inevitable.


CONCLUSION

The summary above barely begins to capture the content of the
meetings, and it does not do justice to the excitement of
participating in such a workshop. I hope, however, that it is of use
to those of you who could not attend this time, and perhaps even to
those of you who did. Mark your calendars now for the next
nanotechnology technical conference, coming in November of 1995.


Steven C. Vetter <sve...@maroon.tc.umn.edu>
Molecular Manufacturing Enterprises, Inc.
9653 Wellington Lane
Saint Paul, MN 55125 USA
