
Slashdot: molecular electronics story


Will Ware

Jan 25, 2002, 3:28:55 AM

There is a very worthwhile Slashdot story about the ongoing HP/UCLA
collaboration on molecular electronics:
http://slashdot.org/article.pl?sid=02/01/24/1527200&mode=nested
My impression is that the precipitating event was this press release:
http://www.hp.com/hpinfo/newsroom/press/23jan02b.htm

This addresses what I had seen as a fundamental limitation in the work
being done in molecular electronics. I'd seen a few different groups
claim to be able to fabricate mole-quantities of molecular switching
elements (transistor or logic gate equivalents) in a test tube, or in
some kind of regular crystalline array, but useful circuitry requires
that switches be wired up in specific and irregular ways.

The present work claims to resolve this limitation. An initially
uniform sea of wires-and-gates is programmed by blowing fuses, which
makes it possible to specify detailed connectivity. The result is a
little like the write-once FPGAs that Actel used to make. There is
also some discussion of using clever routing to work around
manufacturing defects.

As far as I can see, this was the last fundamental obstacle to the
large-scale deployment of molecular electronics. It will be very
interesting to see how things proceed from here.

[AFAIK, in programmable ICs, you have a horizontal and vertical grid of wires.
Fuses make connections (usually diodes) between the wire grids. You blow a
fuse when you don't want the horizontal and vertical wires to cross-connect at
a certain point, and the grid then implements "and-or" logic.

In this case, they're using molecules for the "fuses". *AND* they are able to
cut the wires at controllable points. *AND* it looks like they have found a
molecule, catenane, that can be reversibly switched (unlike many fuses). So it
looks like they can take this sea of fuses and cut wires to turn it into a
detailed array of non-volatile but field-programmable PALs. The nanowires are
2 nm wide. They have a technology to connect the nanowires to larger-scale
(200 nm) lithographic wires. Put it all together and you have *very* complex,
*very* compact logic. The only thing I haven't heard is whether they have an
amplifier, or whether this is essentially diode logic (which would limit the
amount of logic they can do before they have to re-amplify the signal).

See http://www.patentinsight.com/article1064.html and
http://www.trnmag.com/Stories/071801/HP_maps_molecular_memory_071801.html for
descriptions of some of their technology (not sure if this is what they're
using in the latest breakthrough.) --CJP]
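
A rough sketch of the fuse-programming idea in Python (purely illustrative;
the grid size and the fuse map below are invented, and this is not a model of
HP/UCLA's actual process):

    # Start with a uniform crossbar: every vertical (input) wire is fused to
    # every horizontal (output) wire through a diode. Programming consists of
    # blowing the fuses you do NOT want, leaving specific connectivity behind.
    inputs, outputs = 4, 3
    fuses = [[True] * inputs for _ in range(outputs)]   # everything connected

    # Hypothetical programming step: keep only the connections we want.
    # Output 0 = in0 OR in2, output 1 = in1, output 2 = in1 OR in3.
    wanted = {0: {0, 2}, 1: {1}, 2: {1, 3}}
    for row in range(outputs):
        for col in range(inputs):
            fuses[row][col] = col in wanted[row]

    def drive(input_bits):
        # Each output wire is pulled high by any input it is still fused to
        # (the wired-OR behavior a diode grid gives you).
        return [any(fuses[row][col] and input_bits[col] for col in range(inputs))
                for row in range(outputs)]

    print(drive([1, 0, 0, 0]))   # -> [True, False, False]
    print(drive([0, 1, 0, 1]))   # -> [False, True, True]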

Will Ware

Jan 26, 2002, 4:06:59 AM

Responding to one's own post is supposed to be a breach of netiquette,
I think, but the recent HP/UCLA work is an extraordinarily important
development. The nanotech community now has the chance to think about
it and discuss it a bit before it is massively deployed.

Chris wrote:
> The only thing I haven't heard is whether they have an amplifier, or
> whether this is essentially diode logic (which would limit the
> amount of logic they can do before they have to re-amplify the
> signal).

The kind of diode logic Chris is talking about is illustrated here:
http://www.redbrick.dcu.ie/help/reference/CLD/AppendixB/appendixB.doc2.html
Note the AND and OR gates in figure B6. Diode logic uses entirely
passive switching elements. The good thing is that they don't need power
supplies; the bad thing is that you can only do a few stages of this
before you need an active amplifying element to boost the signals back
up. The gain element will require a power supply.

With diode logic one can build a large rectangular grid with vertical
wires as inputs and horizontal wires as outputs, pull-down resistors
on each output, and each intersection either having or not having a
diode. This gives a large array of OR gates with many shared inputs.
It's also easy to make a large array of AND gates with many shared
inputs. Connecting the outputs of the AND array to the inputs of the
OR array forms Boolean sums of products. This is called a programmable
logic array (PLA) and is diagrammed in Nanosystems on page 360. Also:
http://www.zyvex.com/nanotech/mechano.html
http://www.cs.rug.nl/~ben/Courses/CompSys/lesson2/sld026.htm
http://6004.lcs.mit.edu/Fall99/handouts/L08.pdf (page 13)
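
To make the AND-plane/OR-plane structure concrete, here is a small Python
sketch of a PLA evaluating Boolean sums of products. The product terms and
the example function (the sum and carry of a one-bit full adder) are just an
illustration, not anything taken from the HP/UCLA work:

    # A PLA: an AND plane forms product terms from the inputs, and an OR
    # plane sums selected product terms into outputs.
    def pla(inputs, and_plane, or_plane):
        # and_plane: list of product terms; each term maps an input index to
        # the value (0 or 1) it must have. Unlisted inputs are don't-cares.
        products = [all(inputs[i] == v for i, v in term.items())
                    for term in and_plane]
        # or_plane: for each output, the set of product-term indices to OR.
        return [any(products[p] for p in terms) for terms in or_plane]

    # Example: sum and carry-out of a one-bit full adder (inputs a, b, cin).
    and_plane = [
        {0: 0, 1: 0, 2: 1}, {0: 0, 1: 1, 2: 0}, {0: 1, 1: 0, 2: 0},
        {0: 1, 1: 1, 2: 1},                        # minterms where sum = 1
        {0: 1, 1: 1}, {0: 1, 2: 1}, {1: 1, 2: 1},  # terms where carry = 1
    ]
    or_plane = [{0, 1, 2, 3}, {4, 5, 6}]            # [sum, carry]

    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, cout = pla([a, b, cin], and_plane, or_plane)
                assert s == (a ^ b ^ cin) and cout == ((a + b + cin) >= 2)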

The HP/UCLA work essentially gives us huge tracts of real estate and
nearly-free diodes (discounting manufacturing defects, which can be
worked around to some extent). That means the PLA approach can
implement huge, very economical blocks of combinatorial logic. This
will dictate a change in how logic designs are synthesized, but most
of that change will be hidden inside design tools.

I haven't been able to determine whether the HP/UCLA folks have an
active gain element. If they don't have one now, they are certainly
aware of the need for it, and will probably develop one shortly.
Then they'll have the problem of supplying power to it.

So computers continue to get smaller, faster, cheaper, but do so
faster than we expected. What does all this mean for the bigger
picture of getting to mature nanotechnology?

John Koza (http://www.genetic-programming.com/) is using a big Linux
cluster as an "invention machine", very similar to the "automated
engineering" discussed in Engines, also called by some "fully
automated design". See:
http://www.foresight.org/EOC/EOC_Chapter_5.html#section03of06
Koza's use of genetic algorithms echoes Drexler's prose:
> The engineer will sit at a screen to type in goals for the design
> process and draw sketches of proposed designs. The system will
> respond by refining the designs, testing them, and displaying
> proposed alternatives, with explanations, graphs, and diagrams.

The transition to mature nanotechnology involves a technological
chicken-and-egg problem. We can't build assemblers with the tools we
have today, and we don't have the assemblers to make the tools that
could be used to build assemblers. An essential question is whether
molecular electronics gives us any traction in attacking this problem.

With enough computing horsepower, we can backtrack from a simulated
working nanotechnology to today's technology. Following the
development of the simulated assembler backwards in time, one gets a
tree of prerequisites whose nodes are various tools, substances, and
processes; far enough back along the tree, we start to find things
that can be accomplished today. The initial attempt at a virtual
nanotech probably won't be exactly where we ultimately want to go, but
the whole tree can periodically be updated as we improve our
simulations and discover new science. See Merkle's paper
http://www.zyvex.com/nanotech/compNano.html, especially the last
diagram, of the two trees facing each other.

I wouldn't mind a wristwatch computer that outperforms my current
desktop, with a speech interface. It should also tell time.

----------------------------------
References:
http://www.hp.com/hpinfo/newsroom/press/23jan02b.htm
http://www.patentinsight.com/article1064.html
http://www.trnmag.com/Stories/071801/HP_maps_molecular_memory_071801.html
http://dailynews.yahoo.com/htx/nm/20020123/tc/tech_hewlett_nanochips_dc_1.html
http://www.foresight.org/Updates/Update44/Update44.3.html#UCLAmolectronics
http://www.foresight.org/Updates/Update42/Update42.2.html#MolElUCLA
http://slashdot.org/article.pl?sid=02/01/24/1527200&mode=nested
http://nanodot.org/article.pl?sid=02/01/24/2026235&mode=nested

erincss

Jan 26, 2002, 4:52:41 AM

I have a direct question related to this new molecular electronics information.
I was in a personal discussion/debate with someone who was putting down the
practicality of MNT, and I told him that molecular computers were one near-term
application. He said, "Bah, they would be too fragile, one loose particle or
electron striking the molecular circuits could destroy the whole thing!" Is
there any validity to what he claims, or none at all?

Basically he is saying this: if the circuits are molecular in size, then
anything that is capable of breaking chemical bonds, such as a loose molecule,
a cosmic ray, or what have you, will shut down or harm the function of the
molecular computer.

TERRAV

Jan 26, 2002, 12:39:18 PM

In a post that had the subject of 'Re: Slashdot: molecular electronic story',
<ww...@alum.mit.edu> (Will Ware) wrote:

>We can't build assemblers with the tools we
>have today, and we don't have the assemblers to make the tools that
>could be used to build assemblers.

It seems to me that this statement is not strictly true. If we had a complete
design for an assembler, could we not build it atom by atom with an atomic
force microscope? It obviously would take a number of years to do so due to
the large number of atoms involved, but I seem to remember that an afm is
capable of pushing one atom against another atom hard enough to force a bond to
form. Like the Chinese saying, "A journey of a thousand miles begins with the
first step."

If my memory is correct one basic assembler design is a Stewart Platform and a
good percentage of the atoms in it are structural elements needed only for
stiffness. These structural elements could be whittled down from bulk materials
and then attached to other portions of this basic assembler with the afm. Then
attach the remaining atoms that cannot be whittled down from bulk materials.

Probably multiple afms would be used to advantage to speed up the process for
any subassemblies of the assembler which are not unique.

If this approach had been taken back when Nanosystems had been first published,
I think we would be well on our way to having that first crude assembler today.
And it avoids the chicken-or-the-egg problem that Will Ware alludes to above.

I expect that I'll hear from the newsgroup if this path to MNT is not feasible.
If it is possible, I'd like to hear estimates of how long this approach might
actually take.

Regards,

Bob Johannessen

Bob Johannessen
Ter...@aol.com

Robert Shimmin

Jan 26, 2002, 1:12:41 PM

On 26 Jan 2002, erincss wrote:

> Basically he is saying this: if the circuits are molecular in size, then
> anything that is capable of breaking chemical bonds, such as a loose molecule,
> a cosmic ray, or what have you, will shut down or harm the function of the
> molecular computer.

This is a problem only insofar as the destruction of a single computing
element would render the whole machine inoperable. While this is the
case with most of today's computers, it need not be the case -- militaries
have put a lot of money into the design of computers that could take a
bullet and continue to run at reduced capacity, and distributed computing
networks are designed under the assumption that substantial portions of
their computing abilities may suddenly cease to exist without warning;
yea, indeed without notice.

--RS

[Note that this is a somewhat different problem to solve. If a whole computer
goes off-line, it's easy to detect. But how do you detect if a single gate
goes bad inside a CPU? Remember Intel's Pentium math bug? How would you
detect it if every chip developed a different "math bug" at a random time?
You'd need some sort of error-detecting logic, some sort of Boolean-logic
checksum--or at least redundant voting circuits. --CJP]

H. Phil Duby

Jan 26, 2002, 6:27:50 PM

"TERRAV" <ter...@aol.com> wrote in message
news:a2upk...@enews4.newsguy.com...

>
> In a post that had the subject of 'Re: Slashdot: molecular electronic story',
> <ww...@alum.mit.edu> (Will Ware) wrote:
>
> >We can't build assemblers with the tools we
> >have today, and we don't have the assemblers to make the tools that
> >could be used to build assemblers.
>
> It seems to me that this statement is not strictly true. If we had a complete
> design for an assembler, could we not build it atom by atom with an atomic
> force microscope? It obviously would take a number of years to do so due to
> the large number of atoms involved, but I seem to remember that an afm is
> capable of pushing one atom against another atom hard enough to force a bond to
> form. Like the Chinese saying, "A journey of a thousand miles begins with the
> first step."

I think this is vastly oversimplified. Several problems occur with this path.

1) The 'pushed' atom might bond to the tool tip instead of to the workpiece.

2) With the current (lack of) degree of control, sometimes the atom would
bond in the wrong location.

3) Removing an incorrect bond is another issue altogether (can you say
"start over" with almost every error).

4) The intermediate work product could be unstable, and spontaneously
break / decompose. This is especially true when building so slowly. Lots
of time for unwanted things to happen.

5) Getting the atom to 'push' could also be a problem. Most atoms of
interest are already part of another molecule.

All of this is not to say that it absolutely will not work, or that it should
not be attempted. I just expect this will be limited (if it can be done at
all) to building the tools to build the tools to ...

[ .. snip .. snip .. ]


> If this approach had been taken back when Nanosystems had been first published,
> I think we would be well on our way to having that first crude assembler today.
> And it avoids the chicken-or-the-egg problem that Will Ware alludes to above.

Even if everything else mentioned / implied works, you have to have a design
before you can start building. An assembler is much more than a moveable
arm.

> I expect that I'll hear from the newsgroup if this path to MNT is not feasible.
> If it is possible, I'd like to hear estimates of how long this approach might
> actually take.
>
> Regards,
>
> Bob Johannessen
>
> Bob Johannnessen
> Ter...@aol.com
>

--
Phil


Will Ware

Jan 27, 2002, 1:34:04 PM

> [A critic of molecular electronics said:] "Bah, they would be too
> fragile, one loose particle or electron striking the molecular
> circuits could destroy the whole thing!" ... If the circuits are
> molecular in size, then anything that is capable of breaking chemical
> bonds, such as a loose molecule, a cosmic ray, or what have you, will
> shut down or harm the function of the molecular computer.

My knee-jerk response would be to invoke life as an existence proof
that reliable molecular machinery is possible. That's a valid proof
that at least one possible nanotechnology will work, but not a good
proof that this particular approach (rotaxanes, catenanes, PLA
architecture) will work. The possibility of an errant cosmic ray,
alpha particle, or reactive group is not unrealistic. What to do?

There is a company called Stratus that makes outlandishly reliable
computers. I interviewed there about 15 years ago. I don't know what
they're doing now, but this is what they did then. They built dual-CPU
boards that sat on a shared bus. All the CPUs were synchronized. Each
board would compare what its two CPUs were doing, and if there were
ever a mismatch, the board would take itself off the system bus.

Stratus shipped systems with two boards in them. When a CPU had a
problem, its board would shut down while the other board continued to
run. The system would notice the fault, and dial up Stratus on the
modem. The customer would get a visit from a Stratus technician who
would replace the board, which would be the first time the customer
would know that anything had happened.

That worked great, and Stratus is still in business so they may be
doing the same thing still. But it's costly: you need four times
the hardware you'd need if it were all totally reliable.
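
A cartoon of that pair-and-spare arrangement in Python (the fault injection
and the "boards" are invented for illustration; the real machines did this in
lockstep hardware, not software):

    import random

    class Board:
        # Two CPUs run the same computation in lockstep; if they ever
        # disagree, the board takes itself off the bus.
        def __init__(self, error_rate=0.01):
            self.online = True
            self.error_rate = error_rate

        def compute(self, x):
            a = x * x
            b = x * x
            if random.random() < self.error_rate:
                b ^= 1                    # one CPU develops a fault
            if a != b:
                self.online = False       # self-check failed: leave the bus
                return None
            return a

    system = [Board(), Board()]           # ship two boards per system
    for x in range(1000):
        results = []
        for board in system:
            if board.online:
                r = board.compute(x)
                if r is not None:
                    results.append(r)
        if not results:
            print("both boards offline -- time for a service call")
            break
        answer = results[0]               # any surviving board is trusted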

Another approach is to make three sets of hardware instead of four,
and assume that only one will fail. When you get results, they will
all be identical if there's been no failure. If there has been a
failure, two will match and one will be different, so you assume the
two that match are correct. If you get three different answers, you're
out of luck. So three-fold redundancy with voting is cheaper. The
question of handling failures of the voting circuitry is left as an
exercise for the reader.
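
A minimal sketch of that three-fold redundancy in Python (the "modules" here
are stand-in functions; the question of what the voter itself looks like is
untouched):

    import random

    def vote(a, b, c):
        # Majority vote: if at least two results agree, return that value.
        if a == b or a == c:
            return a
        if b == c:
            return b
        raise RuntimeError("all three copies disagree -- out of luck")

    def flaky(f, error_rate=0.1):
        # Wrap a computation so it occasionally returns garbage, standing in
        # for a module damaged by a cosmic ray or a bad gate.
        def wrapped(x):
            result = f(x)
            return result ^ 1 if random.random() < error_rate else result
        return wrapped

    good = lambda x: x % 2                      # the "correct" computation
    copies = [flaky(good), flaky(good), flaky(good)]
    print(vote(*(c(12345) for c in copies)))    # almost always the right answer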

CDROMs are fundamentally unreliable media which rely on redundant
encoding and error-correcting codes to offer accurate data retrieval.
Each eight bits stored on a CDROM is expanded to fourteen channel bits
on the disc, and error-correcting codes are layered on top of that, so
that if a few bits get flipped on read-back there is still enough
information to recover the original data.

I have spent a few hours now attempting to design logic circuits using
the error-correcting code principle, hoping to come up with some form
of redundancy that doesn't triple or quadruple the gate count, but so
far without success. Maybe I'll putter with it more in the future.
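
For what the error-correcting-code principle looks like in its simplest form,
here is a Hamming(7,4) encoder/decoder sketch in Python. It corrects any
single flipped bit in a seven-bit word at the cost of nearly doubling the bit
count; it is only the storage-style code, not the self-checking logic
described above:

    def hamming74_encode(d):
        # d: four data bits [d1, d2, d3, d4]
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        # codeword positions 1..7 are: p1 p2 d1 p3 d2 d3 d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        # Recompute the parity checks; the syndrome is the 1-based position
        # of a single flipped bit, or 0 if no error was detected.
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3
        if syndrome:
            c[syndrome - 1] ^= 1          # correct the flipped bit
        return [c[2], c[4], c[5], c[6]]   # the four data bits

    word = [1, 0, 1, 1]
    code = hamming74_encode(word)
    code[5] ^= 1                          # flip one bit "in transit"
    assert hamming74_decode(code) == word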

A more informed approach would be to study the HP/UCLA technology
carefully enough to get a sense of what kinds of failures will crop
up, and with what frequency. I can infer some failure modes given my
understanding of the general PLA circuit, but I'm not sure all of
those will transfer to the molecular version.

Fraser Orr

Jan 28, 2002, 12:15:15 AM

CJP wrote

> The only thing I haven't heard is whether they have an
> amplifier, or whether this is essentially diode logic (which would limit the
> amount of logic they can do before they have to re-amplify the signal).

I believe that these systems are also missing the capability for
a NOT gate, meaning that they are incomplete from the point of
view of Boolean logic. (For example, one could not build the
ALU of a processor out of such a restricted logic family.)
I believe this fact is mentioned in the paper Will originally
cited.

TERRAV

Jan 28, 2002, 2:00:03 PM

H. Phil Duby wrote:

>"TERRAV" <ter...@aol.com> wrote in message
>news:a2upk...@enews4.newsguy.com...
>>
>> In a post that had the subject of 'Re: Slashdot: molecular electronic
>story',
>> <ww...@alum.mit.edu> (Will Ware) wrote:
>>
>> >We can't build assemblers with the tools we
>> >have today, and we don't have the assemblers to make the tools that
>> >could be used to build assemblers.
>>
>> It seems to me that this statement is not strictly true. If we had a
>complete
>> design for an assembler, could we not build it atom by atom with an atomic
>> force microscope? It obviously would take a number of years to do so due
>to
>> the large number of atoms involved, but I seem to remember that an afm is
>> capable of pushing one atom against another atom hard enough to force a
>bond to
>> form. Like the Chinese saying, "A journey of a thousand miles begins with
>the
>> first step."
>
>I think this is vastly over simplified. Several problems occur with this
>path.
>
>1) The 'pushed' atom might bond to the tool tip instead of to the work
>piece.

To bond the atom to the workpiece rather than the tool tip you need to make
sure the bond between the atom and workpiece is stronger (at a lower energy
level) than the bond between the atom and the tool tip. Then when the tool tip
is retracted the bond between it and the atom will break, leaving the atom
bonded to the workpiece.

>2) With the current (lack of) degree of control, sometimes the atom would
>bond in the wrong location.

AFAIK the current generation of atomic force microscopes has the required
precision.

>3) Removing an incorrect bond is another issue altogether (can you say
>start over with almost every error).

With the correct precision, you won't get incorrect bonds.

>4) The intermediate work product could be unstable, and spontaneously
>break / decompose. This is especially true when building so slowly. Lots
>of time for unwanted things to happen.

This is a possibility I suppose, but if this really can happen, then MNT is
probably never going to happen in the Drexlerian sense of assemblers performing
mechanosynthesis because you won't be able to build that first assembler. The
assembler design should be such that this problem is minimized. The order of
assembly should also be such that this is avoided.

>5) Getting the atom to 'push' could also be a problem. Most atoms of
>interest are already part of another molecule.

Any molecule you use would be of only one element. For example, you would not
use CO2 if you wanted to bond an oxygen atom to the workpiece. You would use
O2 instead. So the design might be made such that anytime you wanted one
oxygen atom bonded there would be a place in the design for the second oxygen
atom, or else it would unbond from the first one due to the process itself.

>All of this is not to say that it absolutely will not work, or that it should
>not be attempted. I just expect this will be limited (if it can be done at
>all) to building the tools to build the tools to ...

I appreciate Mr. Duby's comments above, but I still do not see any
insurmountable problems to creating that first assembler using an afm. But as
I stated in the original post, we need to come up with the design of that first
assembler before we will get anywhere.

Rob Virkus

Jan 29, 2002, 3:11:26 AM

Fraser Orr wrote:

I think IBM already demonstrated a working nanotube based molecular
NOT circuit. Consider also that if there were no hope of a practical
molecular computer then a lot of bright people are wasting a lot of time
working the problem. There seems to be an explosion of device concepts
and ideas to make moltronics practical. There are two major (in quality
but not size) corporations already formed (and probably more). Here
is the link:

http://researchweb.watson.ibm.com/resources/news/20010827_logiccircuit.shtml

Regarding a single-particle failure, some incidents may be recoverable,
and some may be anticipated by building redundant systems as mentioned
in other responses here; but consider also that the chances of a collision
that could destroy a system go way down with size. There may be a much
higher probability that an alpha particle would upset a semiconductor
memory than a molecular memory. My suspicion is that moletronics
will be more robust than silicon rather than less, as well as cheaper by
orders of magnitude. In time there will probably be the equivalent of
many supercomputers built into the most trivial of products. In fact,
I think greatly reduced cost, rather than greatly expanded capability,
will drive the move to moletronics first. But take that as a lay opinion,
not an expert opinion.


[If you flip a bit, the computer might crash, but it'll be OK when you reboot.
But if you damage a gate, the computer has a subtle error from then on. Do
alpha particles damage semiconductor gates, or just flip bits? Buckytubes may
be resistant to damage, but I suspect the single-carbon-backbone chemicals are
not. Anyone know for sure? --CJP]

Unomor

Jan 29, 2002, 12:35:48 PM

Chris,

"Single carbon backbone" molecules are indeed not highly resistant to
conversion to radicals or ions. Such "damage" tends to cause them to
attack any available neighbors and/or "unzip." That being the case, I
cannot agree with the analysis that molecular electronics should be
more resistant to single particle damage than current semiconductor
electronics. A smaller target may be "harder" to hit, but when it is
hit, the damage is to a larger fraction of the "working volume" of the
device. At best, it's a wash. Redundancy is likely to be the best
bet to deal with this situation.

Unomor (chemist who lurks here, mostly)
--
Clean it up to reply by e-mail.

"Rob Virkus" <a017...@msp.sc.ti.com> wrote in message
news:a35lf...@enews1.newsguy.com...
>
<snip>


> I think IBM already demonstrated a working nanotube based molecular
> NOT circuit. Consider also that if there were no hope of a
practical
> molecular computer then a lot of bright people are wasting a lot of
time
> working the problem. There seems to be an explosion of device
concepts
> and ideas to make moltronics practical. There are two major (in
quality
> but not size) corporations already formed (and probably more).
Here
> is the link:
>
>
http://researchweb.watson.ibm.com/resources/news/20010827_logiccircuit

...shtml


>
> Regarding a single particle failure, some incidents may be
recoverable,
> some may be anticipated by building redundant systems as mentioned
> in other responses here but consider also that the chances of a
collision
> that could destroy a system go way down with size. There may be a
much
> higher probability that an Alpha particle would upset a
semiconductor
> memory that a molecular memory. My suspicion is that moletronics
> will be more robust than silicon rather than less as well as cheaper
by
> orders of magnitude. In time there will probably be the
equivalent of
> many supercomputers built into the most trivial of products. In
fact,
> I think greatly reduced cost will drive the move to moletronics
first
> before greatly expanded capabilities. But take that as a lay
opinion,
> not an expert opinion.

And Chris Phoenix replied:

DrFirelip

Jan 30, 2002, 2:17:02 AM

There is little question that a direct hit by an alpha particle of any
reasonable energy will break a carbon chain. In the case of fullerenes, one
bond breaking is unlikely to cause permanent damage since the atoms are still
held in a rigid framework. Most likely, the bond would just reform unless
there was some other molecule nearby that is more reactive, such as a stray O.

Regarding molecular electronics, why are we talking about carbon linkages
exclusively? People tend to think that carbon compounds are the answer for
everything just because we understand that chemistry best thus far. While
fullerenes hold great promise in this area, the best molecular circuit designs
will probably contain various compound semiconductors. There are several
reasons why such materials are preferable from a chemist's perspective. For
example, compare the bond strength of GaN bonds vs. that of C-C. I do not wish
to put down all of the valuable research done with DNA and other more familiar
starting points for a computational system, but there is a world of other,
potentially more promising alternatives to carbon based molecular computing. I
would also suggest that operating in the realm of digital computing and Boolean
operators is only half the picture. It is theoretically possible to create a
self-assembling, computing device which starts from traditional programming,
but quickly progresses beyond that to discover its own capabilities and
processes. One other thing to keep in mind is that most buckytubes are several
molecular layers thick and quite large compared to other molecular entities.
Nanocrystalline semiconductors and fullerene constructions will almost
certainly lead to nano-computing, but not quite to what we generally think of
as molecular computing. This is the realm of very large molecules.

While Drexler's mechanical computer is an interesting possibility, we are far
closer to creating devices that work entirely with light based on these
nano-scale systems. The mechanical nanocomputer will require advances in a
host of technological areas that we are not even sure are doable. Systems such
as I describe are much more easily investigated. The technology is almost in
place; all we have to do is run the experiments and see if we can make them
work. Anybody got a spare few million???

Jim Johansen


Will Ware

Jan 30, 2002, 9:21:25 PM

DrFirelip wrote:
> why are we talking about carbon linkages
> exclusively?

Before Rob Virkus mentioned nanotubes, this thread was discussing
the recent work at HP and UCLA on rotaxanes and catenanes. The
reason for discussing these things is that, because of their work
to date, this wasn't a discussion of something that might work in
principle, but a discussion of something that is nearly working
right now. If one could afford to perform negations and
amplifications in the silicon realm, there is already a viable
technology here for building very large blocks of combinatorial
logic.

Chris Phoenix had pointed out that the HP/UCLA technology does
not yet include any kind of amplification, and therefore presently
permits only diode-logic-like PLA architectures, two or three
levels deep. Fraser Orr then pointed out the absence of anything
like an inverter. Rob mentioned nanotubes because a nanotube-based
inverter has been demonstrated; it turns out that nanotubes can
be doped to behave like either P-channel or N-channel FETs.

A nanotube inverter would provide both the inversion and the gain
stage, allowing us to build really large, complex circuits. Unlike
the rest of the HP/UCLA work, however, there is no clear path to
get there. Those guys have actually built the things they've talked
about; their work is not mere theoretical conjecture. The earlier
posts in this thread, and the following press release, make
interesting reading.
http://www.hp.com/hpinfo/newsroom/press/23jan02b.htm
You can get to the earlier posts via Google Groups:
http://groups.google.com/groups?hl=en&threadm=a386le01gue%40enews2.newsguy.com

> compare the bond strength of GaN bonds vs. that of C-C... there is
> a world of other, potentially more promising alternatives to carbon
> based molecular computing.

That may be so, but the current molecular electronics horserace will
be won either by the rotaxane/catenane guys or the nanotube-FET guys.
When you start buying molecular-electronics-based components for
your desktop, it will be using one of these technologies. In the
remote future, we'll have a wide assortment of things to play with,
and the means to build circuits with any of them. But it's interesting
to look at what will come sooner.

> One other thing to keep in mind is that most buckytubes are
> several molecular layers thick and quite large compared to
> other molecular entities.

I believe the FETs were built with single-wall nanotubes, with a
typical thickness of about 1 nm. That doesn't sound so huge. If
they were used in conjunction with the rotaxane AND/OR planes, most
of the nanotubes would be oriented in the vertical direction,
perpendicular to the plane.

> While Drexler's mechanical computer is an interesting
> possibility, we are far closer to creating devices that
> work entirely with light based on these nano-scale systems.

To do this you'd need purely optical logic gates. There is work
being done in this area, or at least there was as of a couple
years ago:
http://science.nasa.gov/headlines/y2000/ast28apr_1m.htm
but I don't think these guys hope to have products on retail
shelves by 2006, as the HP guys seem likely to do. Optical logic
is likely to eventually be a beneficiary of all the mindshare and
capital now going into fiber optic networking.

DrFirelip

Jan 31, 2002, 10:54:47 PM

Will Ware wrote:

>this wasn't a discussion of something that might work in
>principle, but a discussion of something that is nearly working
>right now.

I was also discussing work that has recently been proven in part; however, we
have not published any of our work. We know how to make some rather simple
device constructions, and current experiments suggest that we are fairly close
to realizing a very powerful method for creating nanoscale devices based on
13-15 semiconductor nanocrystals. We do not expect to run any experiments
directly focused on optical computation networks for a couple of years, but the
experiment is already well designed. Our current focus is on
telecommunications, biosensors, and light harvesting. Unfortunately, I cannot
describe the details.
Obviously, I dropped in on an ongoing discussion that is over my head to
some degree. I do not pretend to understand much about logic gates, etc.; that
is my business partner's job. I am a chemist. I do understand that anything
that can be done in this area with organic compounds, we can almost certainly
do better with our materials set.

>the current molecular electronics horserace will
>be won either by the rotaxane/catenane guys or the nanotube-FET guys.

>When you start buying molecular-electronics-based components for
>your desktop, it will be using one of these technologies.

Are you sure about that? He who publishes first is not always the one to win
the race. Also, the initial "winner" may lose in the end to better technology
that may take a little longer. None but our wildest concepts are more than
four properly funded years away from market. My point was that there are
better materials for the job and that there is a world of work in this area
that is not being released. This forum discusses potential future
developments. I had hoped to broaden this discussion and learn more about
current and proposed research areas. Such firm statements as those quoted
above imply that you are familiar with all of the current research in this
area. If this is so, then perhaps I need to hire you as my consultant.

>I believe the FETs were built with single-wall nanotubes, with a
>typical thickness of about 1 nm

The wall thickness of a single-wall fullerene nanotube is not 1 nm. It is the
same as the thickness of one layer of graphite, or roughly equivalent to the
Bohr radius of a graphitic carbon atom. However, single-wall tubes are rather
difficult to grow as of yet. Certainly, this area of research is progressing
rapidly. Several groups have reported methods of growing arrays of vertically
oriented tubes. The point which I was trying to make is that there is a
difference between nanocomputing and molecular computing, and that, thus far,
fullerene constructions have been discussed primarily for nanocomputing. When
I began my graduate work, we were working on the synthesis of "molecular
wires". We were seeking single strands of GaAs with selectively reactive ends.
Buckytubes are a step larger even at their simplest.

Will also quotes me as saying:

>> While Drexler's mechanical computer is an interesting
>> possibility, we are far closer to creating devices that
>> work entirely with light based on these nano-scale systems.
>

To which he says:

>
>To do this you'd need purely optical logic gates. There is work
>being done in this area, or at least there was as of a couple
>years ago:
>http://science.nasa.gov/headlines/y2000/ast28apr_1m.htm
>but I don't think these guys hope to have products on retail
>shelves by 2006, as the HP guys seem likely to do. Optical logic
>is likely to eventually be a beneficiary of all the mindshare and
>capital now going into fiber optic networking.

We synthesized "purely optical logic gates" by the trillions, in many different
forms and compositions. The real problem is to address them. There has been a
great deal of work done on optical computing. I recall from graduate school
that someone had built a very simple working prototype in the early 90s. In
the referenced article, they also focus on organic systems and compare their
materials only with bulk semiconductors. Nanocrystalline semiconductors are
not mentioned, as far as I can see from a quick perusal.

Regarding, "the mindshare and
capital now going into fiber optic networking", the fiber-optics industry is in
trouble now because of a lack of optical signal processing devices. Until the
terminus of the fibers is cheap enough for the home and small business, the
fiber manufactures can not sell the last five miles of cable. Those last five
miles amounts to more cable than is already in place for the major transmission
lines. so, the big guys are laying off production workers and putting money
into R&D. Right now that terminus costs at least $10000 and is composed of
several large pieces of equipment. I agree that this technological bottleneck
will help pave the way for practical, all optical computing. In fact, our most
recent proposal was to create that terminus on a single chip for under $100.
These are the intermediate steps.

Jim Johansen


Malcolm McMahon

Feb 2, 2002, 2:10:30 AM

On 30 Jan 2002 07:17:02 GMT, drfi...@aol.com (DrFirelip) wrote:

>While Drexler's mechanical computer is an interesting possibility, we are far
>closer to creating devices that work entirely with light based on these
>nano-scale systems.

The trouble is that when you get down to the nano-scale, photons, like
electrons, are just too fuzzy.

[In fact, even fuzzier--photons have a longer wavelength than electrons.
Electrons do OK at a scale of a few nanometers. --CJP]

DrFirelip

Feb 6, 2002, 9:52:24 AM

>drfi...@aol.com (DrFirelip) wrote:
>
>>While Drexler's mechanical computer is an interesting possibility, we are far
>>closer to creating devices that work entirely with light based on these
>>nano-scale systems.

To which Malcolm McMahon replied:

>The trouble is that when you get down to the nano-scale, photons, like
>electrons, are just too fuzzy.
>
>[In fact, even fuzzier--photons have a longer wavelength than electrons.
>Electrons do OK at a scale of a few nanometers. --CJP]
>

The "fuzziness" you describe is a real problem if you are trying to use those
photons for lithography. When it comes to computational devices, optical
processes are expected to be about two orders of magnitude faster and more
efficient than the theoretical limit to equivalent electronics.


Jim Johansen
(DrFirelip)


Steve Lenhert

Feb 6, 2002, 8:46:30 PM
TERRAV wrote:

> H. Phil Duby wrote:
....


> >> If we had a complete
> >> design for an assembler, could we not build it atom by atom with an atomic
> >> force microscope?

....


>
> >2) With the current (lack of) degree of control, sometimes the atom would
> >bond in the wrong location.
>
> AFAIK the current generation of atomic force microscopes has the required
> precision.

The most advanced scanning probe microscopes (SPM - including AFM & other
varieties) may have the precision to position some (not all) atoms one at a time
non-covalently in a very cold ultrahigh vacuum on some (not all) two-dimensional
surfaces, but creating covalent bonds is a completely different story. A more
realistic approach might be to design your assembler out of molecules and
components that you can get from chemists and then assemble those non-covalently
with the SPM. Still - with the current generation it may take hours of
positioning for each molecule - and one small mistake might bring down the whole
house of cards. So, I'm still pushing for a "self-assembler"
http://www.nanoword.net/PDF/SelfAssem.pdf

--
Steve Lenhert
http://www.nanoword.net

Steve Lenhert

Feb 7, 2002, 3:51:36 AM

DrFirelip wrote:

I agree that photons, electrons and other homogeneously fuzzy (or even
coherent) materials are promising components for nanotechnology - for
example: http://www.nanoword.net/PDF/ElectMicro.pdf

This point is often overlooked by sometimes overly defensive proponents
of MNT, who are used to defending classical machine-like nanotech by
saying quantum effects won't prevent the nanomachines from working.
In fact, these quantum effects are what makes MNT possible!

Regarding the potential of optical computing - here is one example:
Qiao B, Ruda HE, Quantum computing using entanglement states in
a photonic band gap, J APPL PHYS 86: (9) 5237-5244 NOV 1 1999.

Malcolm McMahon

Feb 7, 2002, 3:55:02 AM

On 6 Feb 2002 14:52:24 GMT, drfi...@aol.com (DrFirelip) wrote:

>
>The "fuzziness" you describe is a real problem if you are trying to use those
>photons for lithography. When it comes to computational devices, optical
>processes are expected to be about two orders of magnitude faster and more
>efficient than the theoretical limit to equivalent electronics.
>

Can you explain why? Surely the ultimate problem at small scales is the
indeterminacy of position. Whether in lithography or actual use as
signals, the electron is, in this sense, smaller than the optical photon.

[I'm also curious about the theoretical limit to electronics. What is it and
what are the assumptions? If it's kT per transistor event, there's definitely
a way around it. --CJP]

Larry Burford

Feb 7, 2002, 7:32:31 PM

Malcolm McMahon <mal...@pigsty.demon.co.uk> wrote in message
news:<a3tfd...@enews2.newsguy.com>...

I share the moderator's curiosity, since it (and in fact many
"theoretical limits") seems to be a moving target. There has rarely
been a shortage of experts who were willing to step up to the
microphone and say " ... but this, THIS is CLEARLY not possible. I
have the equations right here in my hand to prove it." History is
littered with their egg-covered faces.

There must be things that are actually impossible, so the
experts can't ALWAYS be wrong. How are we to know if the latest limit
is real, or just the result of another expert on the verge of Petering
Out?

I suspect it depends partly on how we define concepts like "theory"
and "limit". Given the fleeting nature of many of these limits, it
seems that the most authoritative thing we can say about the prospects
for MNT and related technology is "it's very likely to be do-able".

Looks like the line at the microphone labeled "Why MNT can't work" is
getting pretty short, though.

Regards,
LB

10of100

Feb 8, 2002, 9:13:05 AM

> I share the moderator's curiosity, since it (and in fact many
> "theoritical limits") seems to be a moving target. There has rarely
> been a shortage of experts who were willing to step up to the
> microphone and say " ... but this, THIS is CLEARLY not possible. I
> have the equations right here in my hand to prove it." History is
> littered with their egg-covered faces.
>
> There must be things that are actually impossible, so the
> experts can't ALWAYS be wrong. How are we to know if the latest limit
> is real, or just the result of another expert on the verge of Petering
> Out?
>
> I suspect it depends partly on how we define concepts like "theory"
> and "limit". Given the fleeting nature of many of these limits, it
> seems that the most authoritative thing we can say about the prospects
> for MNT and related technology is "it's very likely to be do-able".
>
> Looks like the line at the microphone labeled "Why MNT can't work" is
> getting pretty short, though.
>
LB,

Most experts, if they are good, usually put in the qualifier, "with
what we know now, no."

erincss

Feb 8, 2002, 3:29:18 PM

It is interesting to note that in Engines of Creation Drexler said the
strongest possible material made of common matter (not in those words) was
carbyne, the straight chain of carbon atoms. But in 1985 (a year before),
fullerene was discovered, and then in 1991 carbon nanotubes were discovered.
Which is stronger? Carbon nanotubes?

e...@ekj.vestdata.no

Feb 9, 2002, 10:26:37 AM

On 8 Feb 2002, Larry Burford wrote:

> I share the moderator's curiosity, since it (and in fact many
> "theoritical limits") seems to be a moving target.
>

> There must be things that are actually impossible, so the
> experts can't ALWAYS be wrong. How are we to know if the latest limit
> is real, or just the result of another expert on the verge of Petering
> Out?

There are *some* limits that are fairly *hard* limits, in the sense that
they will not change unless our most fundamental understanding of
physics changes.

These therefore apply not only to any electronic circuit, but to any
computing machinery whatsoever, regardless of its implementation.

Schneier has a good explanation of these in his Applied Cryptography;
the text below is based on his.

The second law of thermodynamics has as a consequence that to record a
single bit of information requires no less than kT, where T is the
absolute temperature of the machinery and k is the Boltzmann constant.

k = 1.38*10^-16 erg/Kelvin

So, if you run your computer at 3.2 K, the background temperature of the
universe, you need 4.4*10^-16 ergs every time you set or clear a single
bit.

If we used the *total* output of the sun for one year to power such an
ideal computer, and again assume that the radiation from the sun is
captured perfectly and losslessly by a Dyson shell or something like it,
then this would give enough power to flip 2.7*10^56 bits.

Suppose we let our computer work with 128-bit numbers (small, considering
that even many current computers manipulate data in sets of 64 bits),
and that an average computer instruction requires flipping all the bits
in such a number 10 times.

Then we get an upper bound on the performance of a computer powered by
our sun of something like 2.1*10^53 operations per year.

By the same computation, the upper bound on the performance of a 128-bit
computer powered by 100 watts of electricity and operated at
room temperature (300 K) is something like:

* Energy for a bit-change: 300*1.38*10^-16 = 4.14*10^-14 erg

* Energy for a single "operation": 4.14*10^-14*128*10 = 5.3*10^-11 erg

* Operations per second: 100*10^7 erg/s / 5.3*10^-11 erg = 1.9*10^19

These estimates are high; in practice it probably takes considerably
more than 1280 bit-flips to do an "operation" for a 128-bit computer.

We're also unlikely to develop perfect computers anytime soon. :)

Anyway, compare these numbers to the current clock speeds of around 10^9
Hz, and current power consumption of around 50 watts for a CPU, and you
see that there is still plenty of room at the bottom.
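
Eivind's arithmetic, redone as a small Python sketch for anyone who wants to
vary the assumptions (the temperature, word width, and bit-flips per
operation are the same guesses used above):

    k = 1.38e-16             # Boltzmann constant, erg/kelvin
    flips_per_op = 128 * 10  # 128-bit words, ~10 full-width flips per operation

    def max_ops_per_second(watts, temperature_kelvin):
        # kT per bit-flip is the thermodynamic floor assumed above.
        erg_per_second = watts * 1e7      # 1 W = 10^7 erg/s
        erg_per_flip = k * temperature_kelvin
        return erg_per_second / (erg_per_flip * flips_per_op)

    # 100 W at room temperature (300 K): about 1.9*10^19 operations/second.
    print("%.1e" % max_ops_per_second(100, 300))

    # The whole sun (~3.8*10^26 W) at 3.2 K: about 2.1*10^53 ops per *year*.
    seconds_per_year = 3.156e7
    print("%.1e" % (max_ops_per_second(3.8e26, 3.2) * seconds_per_year))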

The current CPUs are at least a factor of a million, possibly a factor of
a billion, away from their theoretical limits. (I'm not sure how many
bits a current CPU can flip each clock tick, but it would not surprise me
if the number is several thousand.)

But then again, if we keep getting a doubling of performance every 18
months, this means they will hit "ultimate" power in roughly 30 to 45
years. An interesting idea.

And this is true regardless of if we use electronics, or mechanical
nano-computers or something entirely different.


Regards,
Eivind Kjærstad


John Devereux

Feb 9, 2002, 11:35:32 PM

On 9 Feb 2002 15:26:37 GMT, <e...@ekj.vestdata.no> wrote:
>On 8 Feb 2002, Larry Burford wrote:
>>
>> There must be things things that are actually impossible, so the
>> experts can't ALWAYS be wrong. How are we to know if the latest limit
>> is real, or just the result of another expert on the verge of Petering
>> Out?
>
>There are *some* limits that are fairly *hard* limits, in the sense that
>they will not change unless our most fundamental understanding of
>physics changes.
>
>These therefore apply not only to any electronic circuit, but to any
>computing-machinery whatsoever, regardless of it's implementation.
<SNIP>

>The second law of thermodynamics has as a consequence that to record a
>single bit of information required no less than kT where T is the
>absolute temperature of the machinery, and k is the Boltzman constant.
>
>k = 1.38*10^-16 erg/Kelvin
>
<SNIP>

>Then we get an upper bound on the performance of a computer powered by
>our sun of something like 2.1*10^53 operations per year.
>
>By the same computation, the upper bound on the performance of a 128 bit
>computer powered by 100 watts of electricity, and operated at
>room-temperature is something like:
>
>* Energy for a bit-change: 300*1.38*10^-16 = 4.14*10^-14 erg
>
>* Energy for a single "operation": 4.14*10^-14*128*10 = 5.3*10^-11 erg
>
>* Operations: 100*10^7/5.3*10^-11 = 1.9*10^19
>
>These estimates are high, in practice it probably takes considerably
>more than 1280 bit-flips to do an "operation" for a 128-bit computer.

<SNIP>


>And this is true regardless of if we use electronics, or mechanical
>nano-computers or something entirely different.

Does not "reversible computing" or "Reversible logic"
provide a way to sidestep the above? Ralph Merkle did some
work on this a while ago:
<http://www.zyvex.com/nanotech/reversible.html>

So maybe even your example of a truly "hard limit" isn't so
hard after all!

--

John Devereux

jo...@devereux.demon.co.uk

G. Waleed Kavalec

Feb 11, 2002, 2:04:15 AM

"John Devereux" <jo...@devereux.demon.co.uk> wrote in message
news:a44ta...@enews3.newsguy.com...
<SNIP details on limits>

> >And this is true regardless of if we use electronics, or mechanical
> >nano-computers or something entirely different.
>
> Does not "reversible computing" or "Reversible logic"
> provide a way to sidestep the above? Ralph Merkle did some
> work on this a while ago:
> <http://www.zyvex.com/nanotech/reversible.html>
>
> So maybe even your example of a truly "hard limit" isn't so
> hard after all!
>


Add in the progress being made in quantum computing and those limits back
off ever farther.

G. Waleed Kavalec
------------------- I send my heartfelt thanks to the authors of...
http://members.rogers.com/malikelshabazz/swf/thisisislam.swf
----- Original Message -----

DrFirelip

Feb 11, 2002, 7:58:17 PM

I said:

>>optical
>>processes are expected to be about two orders of magnitude faster and more
>>efficient than the theoretical limit to equivalent electronics.

Malcolm says:

>Can you explain why? Surely the ultimate problem at small scales is the
>indeterminacy of position. Whether in lithography or actual use as
>signals, the electron is, in this sense, smaller than the optical photon.


Unfortunately, I am afraid that I overstepped my knowledge base regarding the
relative merits of photonic vs. electronic computing. I was repeating what had
been told to me a thousand times during my graduate research. The magnitude of
the advantage of photonic computing is not something that I am qualified to
argue. However, there is no question that integrated circuits have some real
problems operating at microwave frequencies. Photons have the advantage of not
having capacitive effects and problems caused by the skin conduction effect.
Today's microelectronics are rapidly reaching the limits of lithography and
wire bonding. Another clear advantage is switching speed. Off-resonance
nonlinear-optic effects can result in a change of refractive index in
picoseconds to perhaps femtoseconds. It is unlikely that similar switching
speeds can be realized without radically different approaches. It is very
likely that there is some proposed mechanism by which the limits to
microelectronics can be overcome. I do not claim to know every theory of
electronic computing, but my statement is reasonably applicable to CMOS and
related approaches.

The "size" and fuzziness of photons is not an issue. Photons interact on a
single atom scale. Our first optical transistors were 30X40 angstroms. they
interact with a photon as if it were essentially dimensionless.

sorry to have overstated my position. Again, i depend on my physicist and
engineer associates to explain this stuff to me. I am the chemist, i just make
the stuff: they tell me what it is good for.

Jim Johansen


Will Ware

Feb 12, 2002, 8:39:36 AM

e...@ekj.vestdata.no wrote:
> The second law of thermodynamics has as a consequence that to record a
> single bit of information required no less than kT where T is the
> absolute temperature of the machinery, and k is the Boltzman constant...

> And this is true regardless of if we use electronics, or mechanical
> nano-computers or something entirely different.

There is actually a work-around for this limitation, called reversible
computing. The kT cost is incurred when a computer destroys a bit of
information, performing an irreversible operation. Reversible computing
has been studied by a few different folks. The ones I am most aware of
are Ralph Merkle, and a group at MIT:
http://www.ai.mit.edu/~cvieri/reversible.html
http://www.zyvex.com/nanotech/reversible.html

I went to a thesis defense once by C. Vieri where he described a
reversible processor architecture. I took some notes and have posted
them here:
http://willware.net:8080/cvieri.html
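
For anyone unfamiliar with the idea, here is a toy Python sketch of a
reversible (Toffoli) gate: no information is destroyed, so in principle no
kT-per-bit cost has to be paid. This shows only the logical idea, not how
Vieri's or Merkle's architectures actually implement it:

    def toffoli(a, b, c):
        # Controlled-controlled-NOT: flips c only when both a and b are 1.
        # It is universal for Boolean logic and is its own inverse, so no
        # input information is ever thrown away.
        return a, b, c ^ (a & b)

    # AND can be embedded reversibly by carrying the inputs along:
    # toffoli(a, b, 0) -> (a, b, a AND b).
    for a in (0, 1):
        for b in (0, 1):
            out = toffoli(a, b, 0)
            assert out == (a, b, a & b)
            # Applying the gate twice undoes it -- the operation is reversible.
            assert toffoli(*out) == (a, b, 0)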

Fraser Orr

Feb 15, 2002, 10:16:45 AM

<e...@ekj.vestdata.no> wrote in message
news:a43f3...@enews2.newsguy.com...
> [snipped bit flip takes kT]

> And this is true regardless of if we use electronics, or mechanical
> nano-computers or something entirely different.

As others have pointed out, a paradigm shift to reversible computing or
quantum computing readily overcomes these limits. (And that is
just technology that we know of today.) I read a statement somewhere
which seems appropriate here: "intelligence always trumps Physics."
That is to say, whenever we encounter some physical law of the
universe we can overcome it, not by beating the law as such, but
by a paradigm shift.

What is the fastest a man can travel? A thousand years ago it was
considered that the fastest speed would be perhaps 50 mph. Why?
Because nobody ever considered that a faster mode of transport than
a horse would be available. Today we say the limit is 3*10^8 m/s,
and that is a hard physical limit. However, perhaps we can't
travel any faster than that, but we might be able to get somewhere
faster than that by using the non-Euclidean structure of space to
our advantage. In the past the question "how fast can I get from
here to there" was automatically reframed as "how fast can a
horse run from here to there"; today the question is how long it will
take to get to Alpha Centauri, which is automatically rephrased
as "at the fastest velocity possible, how long will it take
to traverse 4 light years?" But perhaps a paradigm shift might
allow us to get there without traveling the space in between.

"Intelligence always trumps Physics" means that new technology
can move the goal posts so that the dumb laws of physics don't
get in our way anymore.

